| anchor | positive | source |
|---|---|---|
problem : point cloud2 in rviz | Question:
Hello,
I have an iRobot Create & a Kinect, and I am trying to follow the tutorial Visualizing TurtleBot 3D camera Data.
I have done everything correctly, but when I try to add a PointCloud2 view, nothing appears.
Can anyone tell me what the problem is?
Thanks in advance
Originally posted by marwa on ROS Answers with karma: 153 on 2013-02-02
Post score: 2
Answer:
Try rostopic echo /camera/depth/points. If you see data being transferred, then you might have to adjust some settings in rviz related to tf.
Originally posted by ayush_dewan with karma: 1610 on 2013-02-02
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by marwa on 2013-02-15:
@shade
it seems to be problem with tf , u know how can i adjust it ?
Comment by ayush_dewan on 2013-02-15:
Can u elaborate problem with tf. Use tf_echo to check transformation between frame_id of camera_depth_points and constant frame like map/world or maybe odom. Are u seeing points when u echo /camera/depth/points?? | {
"domain": "robotics.stackexchange",
"id": 12690,
"tags": "ros"
} |
What is the significance of having two formulas for area moment of inertia? | Question: What is the significance of calculating area moment of inertia twice?
I mean calculating the area moment of inertia w.r.t. an axis and
calculating the same area moment of inertia w.r.t. the centroidal axis?
Why not the area moment of inertia w.r.t. one axis only?
For a rectangle, there are two area moment of inertia formulas, one with
the axis and the other one with respect to an axis collinear with the base.
Answer: I find the quoted statement poorly worded.
This is a guess on what they mean, but for a rectangle in the xy plane, I suspect the statement "one with the axis" means the moment of inertia of the rectangle with respect to the centroidal axis (x and y), that is, when the centroid of the rectangle is at the origin of the xy coordinate system so that the centroidal axes are the same as the x-y axes. See the figure at the left below where C indicates the location of the centroid.
Then when they say the moment of inertia "with respect to an axis collinear with the base" they probably mean the moment of inertia with respect to an axis that is collinear with the base (collinear with the x-axis). See the figure to the right below.
To say there are two moments of inertia for a rectangle is misleading. It only applies to the two figures below. But the rectangle can be located anywhere in the xy plane. The moment of inertia of the area is with respect to the xy axes. Since there are an infinite number of possible locations of the rectangle, there are an infinite number of moments of inertia, each unique to the specific location of the area with respect to the axes.
Generally though you start with the centroidal moments of inertia (figure at the left). Those are the values shown in the figure to the left. They are published for many basic shapes. Then if you move the area so that the centroid is no longer at the origin of the coordinate system you need to add an additional moment of inertia to the centroidal moment of inertia based on the parallel axes theorem. That is often also published for the case where the side(s) of the area contact the x and/or y axis (figure to the right). For other locations of the area with respect to the axes you need to do the calculations. In any case, the calculations are based on the parallel axis theorem. That is how the moments of inertia for the figure at the right were calculated based on the figure at the left. The theorem is given by
$$I_{x}=I_{xc}+d_{x}^{2}A$$
$$I_{y}=I_{yc}+d_{y}^{2}A$$
Where for the first equation
$I_{x}$ is the moment of inertia about the x-axis.
$I_{xc}$ is the centroidal moment of inertia about the centroidal x axis (left figure below).
$d_{x}$ is the distance from the x-axis to the centroidal x-axis (right figure below).
$A$ is the total area (in this case $A=bh$)
The same applies to the second equation, just substitute $y$ for $x$
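As a quick sanity check of the theorem, here is a short Python calculation for a rectangle (the values of $b$ and $h$ are arbitrary):

```python
# Numerical check of the parallel axis theorem for a b-by-h rectangle:
# about the centroidal x-axis, I_xc = b*h^3/12; about the base, I_x = b*h^3/3.
b, h = 3.0, 2.0
A = b * h                    # total area

I_xc = b * h**3 / 12         # centroidal moment of inertia
d = h / 2                    # distance from the base to the centroidal x-axis
I_x = I_xc + d**2 * A        # parallel axis theorem

print(I_x, b * h**3 / 3)     # 8.0 8.0
```

The result agrees with the tabulated formula for an axis along the base, as expected.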
Hope this helps. | {
"domain": "physics.stackexchange",
"id": 59172,
"tags": "angular-momentum, moment-of-inertia"
} |
Is it appropriate to use a softmax activation with a categorical crossentropy loss? | Question: I have a binary classification problem where I have 2 classes. A sample is either class 1 or class 2 - for simplicity, let's say they are exclusive from one another, so it is definitely one or the other.
For this reason, in my neural network, I have specified a softmax activation in the last layer with 2 outputs and a categorical crossentropy for the loss. Using tensorflow:
model=tf.keras.models.Sequential()
model.add(tf.keras.layers.Dense(units=64, input_shape=(100,), activation='relu'))
model.add(tf.keras.layers.Dropout(0.4))
model.add(tf.keras.layers.Dense(units=32, activation='relu'))
model.add(tf.keras.layers.Dropout(0.4))
model.add(tf.keras.layers.Dense(units=2, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
Here are my questions.
If the sigmoid is equivalent to the softmax, firstly is it valid to specify 2 units with a softmax and categorical_crossentropy?
Is it the same as using binary_crossentropy (in this particular use case) with 2 classes and a sigmoid activation, and if so why?
I know that for non-exclusive multi-label problems with more than 2 classes, a binary_crossentropy with a sigmoid activation is used. Why is the non-exclusive multi-label case fundamentally different from a binary classification with 2 classes only, with 1 output (class 0 or class 1) and a sigmoid with binary_crossentropy loss?
Answer: Let's first recap the definition of the binary cross-entropy (BCE) and the categorical cross-entropy (CCE).
Here's the BCE (equation 4.90 from this book)
$$-\sum_{n=1}^{N}\left( t_{n} \ln y_{n}+\left(1-t_{n}\right) \ln \left(1-y_{n}\right)\right) \label{1}\tag{1},$$
where
$t_{n} \in\{0,1\}$ is the target
$y_n \in [0, 1]$ is the prediction (as produced by the sigmoid), so $1 - y_n$ is the probability that $n$ belongs to the other class
Here's the CCE (equation 4.108)
$$
-\sum_{n=1}^{N} \sum_{k=1}^{K} t_{n k} \ln y_{n k}\label{2}\tag{2},
$$
where
$t_{n k} \in \{0, 1\}$ is the target of input $n$ for class $k$, i.e. it's $1$ when $n$ is labelled as $k$ and $0$ otherwise (so it's $0$ for all $k$ except for one of them)
$y_{n k}$ is the probability that $n$ belongs to the class $k$, as produced by the softmax function
Let $K=2$. Then equation \ref{2} becomes
$$
-\sum_{n=1}^{N} \sum_{k=1}^{2} t_{n k} \ln y_{n k} =
-\sum_{n=1}^{N} \left( t_{n 1} \ln y_{n 1} + t_{n 2} \ln y_{n 2} \right)
\label{3}\tag{3}
$$
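To make this concrete, here is a small NumPy check of the $K=2$ reduction (the specific logits are arbitrary; it uses the identity $\mathrm{softmax}([a,b])_1 = \sigma(a-b)$):

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax along the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cce(t, y):   # equation (2), summed over n and k
    return -np.sum(t * np.log(y))

def bce(t, y):   # equation (1)
    return -np.sum(t * np.log(y) + (1 - t) * np.log(1 - y))

logits = np.array([[2.0, -1.0], [0.3, 0.8]])
t_onehot = np.array([[1.0, 0.0], [0.0, 1.0]])   # one-hot targets
y2 = softmax(logits)                            # softmax over K = 2 classes

t_bin = t_onehot[:, 0]                                   # "is it class 1?"
y1 = 1 / (1 + np.exp(-(logits[:, 0] - logits[:, 1])))    # sigmoid(a - b)

print(np.isclose(cce(t_onehot, y2), bce(t_bin, y1)))     # True
```

The two losses agree to floating-point precision, as the algebra above predicts.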
So, if $[y_{n 1}, y_{n 2}]$ is a probability vector (which is the case if you use the softmax as the activation function of the last layer), then, in theory, the BCE and CCE are equivalent in the case of binary classification. In practice, if you are using TensorFlow, to choose the most suitable loss function for your problem, you could take a look at this answer. | {
"domain": "ai.stackexchange",
"id": 2701,
"tags": "binary-classification, sigmoid, softmax, categorical-crossentropy, binary-crossentropy"
} |
What do these files / annotations mean? | Question: I have no experience in Bioinformatics and I need to understand what the annotations given here mean (I am including the first few lines, please see the link for more):
contig_id feature_id type location start stop strand
gi|1045318032|gb|MAZB01000333.1| fig|252393.25.peg.1 peg gi|1045318032|gb|MAZB01000333.1|_511_2 511 2 -
gi|1045318033|gb|MAZB01000332.1| fig|252393.25.repeat.1 repeat gi|1045318033|gb|MAZB01000332.1|_1_128 1 128 +
gi|1045318033|gb|MAZB01000332.1| fig|252393.25.repeat.2 repeat gi|1045318033|gb|MAZB01000332.1|_318_532 318 532 +
gi|1045318033|gb|MAZB01000332.1| fig|252393.25.peg.2 peg gi|1045318033|gb|MAZB01000332.1|_530_321 530 321 -
gi|1045318034|gb|MAZB01000331.1| fig|252393.25.repeat.3 repeat gi|1045318034|gb|MAZB01000331.1|_1_534 1 534 +
gi|1045318034|gb|MAZB01000331.1| fig|252393.25.peg.3 peg gi|1045318034|gb|MAZB01000331.1|_3_410 3 410 +
gi|1045318035|gb|MAZB01000330.1| fig|252393.25.repeat.4 repeat gi|1045318035|gb|MAZB01000330.1|_1_128 1 128 +
gi|1045318035|gb|MAZB01000330.1| fig|252393.25.peg.4 peg gi|1045318035|gb|MAZB01000330.1|_3_539 3 539 +
gi|1045318035|gb|MAZB01000330.1| fig|252393.25.repeat.5 repeat gi|1045318035|gb|MAZB01000330.1|_413_539 413 539 +
Could someone help me to understand what these files and all annotations mean?
Answer: You are looking at the supplementary data of a paper. That seems to have given you a list of features, and some information about those features. Specifically, you seem to have a list of two types of element:
"peg". Based on the information there, I assume "peg" stands for "protein encoding gene". Note that all peg lines have a peptide sequence (not seen in your question but present in the original link) and their "function" column is always some sort of protein function (e.g. "hypothetical protein" or "Transposase").
"repeat". These are presumably repetitive elements.
For each line, you are given an identifier for the sequence in which this element can be found. For example, your first line is on gi|1045318032|gb|MAZB01000333.1|. That is a GI (gi|1045318032) and a GenBank identifier (gb|MAZB01000333.1). If you go to the NCBI website, and search for 1045318032, you will find that it is the Candidatus Erwinia dacicola isolate Erw_SC contig_333, whole genome shotgun sequence.
So, your first line is telling you that the authors have identified a protein encoding gene on sequence gi|1045318032 and the gene starts at position 511 of that sequence and ends at position 2. This might sound odd, but that's because the gene is on the negative (-) strand and positions are calculated with respect to the positive strand. So, if you were to count on the negative, the gene would start at 2 and end at 511.
The same goes for all the other lines you have there. Just look up the GI number and you can know what sequences they are referring to.
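If you want to work with the table programmatically, a sketch like the following could help. It assumes whitespace-delimited columns in the order shown in the header, and the field names in the returned dict are my own:

```python
# Hypothetical parser for one row of the feature table above.
def parse_feature(line):
    contig, feat_id, ftype, loc, start, stop, strand = line.split()
    start, stop = int(start), int(stop)
    # Positions are given w.r.t. the positive strand, so for a "-" strand
    # feature start > stop; normalize to lo <= hi for convenience.
    lo, hi = min(start, stop), max(start, stop)
    return {"contig_id": contig, "feature_id": feat_id, "type": ftype,
            "lo": lo, "hi": hi, "strand": strand}

row = ("gi|1045318032|gb|MAZB01000333.1| fig|252393.25.peg.1 peg "
       "gi|1045318032|gb|MAZB01000333.1|_511_2 511 2 -")
f = parse_feature(row)
print(f["lo"], f["hi"], f["strand"])  # 2 511 -
```

This works here because none of the fields contain spaces; a real file might need a stricter (e.g. tab-based) split.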
Finally, the second column, the "feature_id", looks like something specific to the figshare service that is hosting the data. I don't know what it is, but I would guess it's an identifier for this particular feature. I don't think it has any relevance outside http://figshare.com. | {
"domain": "bioinformatics.stackexchange",
"id": 1150,
"tags": "file-formats, edger, identifiers"
} |
Why is it impossible to formulate unitary QFT in a dynamical background? | Question: I cannot recall the exact argument but I remember my professor saying something like unitary time evolution in a dynamical background "kicks" a state out of the Hilbert space constructed on curved Cauchy hypersurfaces, such as in the interior of Schwarzschild black holes. (It is best to model matter as open quantum systems in such backgrounds). On the other hand, unitary quantum field theory is pretty well-defined in static curved spacetimes.
Can somebody provide some mathematical structure to the above argument, if valid at all? Any reference to literature which studies the difficulty of formulating unitary quantum field theory in dynamical backgrounds is highly appreciated as well. Thank you.
Answer: I have a few comments
QFT is well-defined also on dynamical backgrounds, and there are various traditional and more modern approaches to constructing free and perturbative QFTs on fixed gravitational backgrounds.
Just from the title I would interpret "unitary QFT" meaning a QFT with unitary S-matrix. This is also true if the dynamical background is ''sufficiently nice'', as has been shown by Wald (see https://www.sciencedirect.com/science/article/pii/0003491679901350).
To your main concern, which is the unitarity of the time-evolution:
In the non-static case there is no time-translation symmetry of the system, i.e. the construction of an ordinary Schrödinger picture must fail physically because the system looks different at different times. It seems that a good discussion of these problems including technical aspects and many references can be found in Sec. 6 of https://sites.google.com/a/umich.edu/ruetsche-laura/ourweyl.pdf?attredirects=0.
A similar problem already appears when you have couplings to external fields on Minkowski space-time. In the case of external electromagnetic fields there seem to be some more recent attempts, e.g. involving 'time-varying Fock-spaces' https://arxiv.org/abs/1510.03890. I must admit that to me they appear quite mathematically involved, but perhaps one can do something similar for gravitational fields.
Such questions were an important motivation for the development of the 'algebraic approach' to QFT. There one can talk about 'time evolution' as automorphisms of the algebras of observables at different times. In this language the question of unitary time evolution becomes the mathematical question of the 'unitary implementability' of the automorphism group of time translations. | {
"domain": "physics.stackexchange",
"id": 51290,
"tags": "quantum-field-theory, qft-in-curved-spacetime, open-quantum-systems"
} |
Can a carbocation ever be more stable than a neutral molecule? | Question:
The question is about finding the most stable molecule or species. Now my thought process was that cyclopropenylidene(1) was a carbene so would be unstable even though it is aromatic. Cyclopenta-1,3-diene(4) is anti-aromatic so is very unstable. Between cyclopropenium(2) and cyclohexa-1,3-diene(3) though, I am confused, especially since my answer key has the former. I think (3) should be the most stable out of them all. Cyclopropenium is an especially stable carbocation, but it's still electron deficient. How could it possibly be more stable than (3) which is a neutral conjugated diene?
On another note, is it even possible to measure the stability of a cation with respect to a neutral molecule?
Answer: Yes this question is poorly stated, probably through no fault of the OP. (It looks like something from a textbook, in which case the authors of the book are to blame.) You can really compare only things that match up in mass, atomic composition and charge. They probably want the student to choose the cyclopropenyl cation, choice 2, because it is aromatic and the full symmetry of the ion (as opposed to choice 1 which is not fully symmetric) maximizes the effect of this aromaticity.
The cyclopropenyl cation offers a case where a salt containing the ion is more stable than a compositionally equivalent set of neutral molecules (which actually is a proper comparison); to wit, $\ce{[C3H3]+[SbCl6]^-}$ is more stable than the combination of neutral molecules $\ce{C3H3Cl + SbCl5}$ ($\ce{C3H3Cl}$ = 3-chlorocyclopropene). When the latter two are combined at -20°C in carbon tetrachloride solvent, the salt spontaneously forms and precipitates[1]. This is considered evidence of the strong stabilization of the cyclopropenyl cation that would theoretically be predicted by the usual aromaticity rules.
Reference
Breslow, R.; Groves, J. T. (1970). "Cyclopropenyl Cation. Synthesis and Characterization". J. Am. Chem. Soc. 92 (4): 984–987. doi:10.1021/ja00707a040. | {
"domain": "chemistry.stackexchange",
"id": 17985,
"tags": "organic-chemistry, thermodynamics, aromatic-compounds, stability, carbocation"
} |
Average Access time in caches | Question: I know that the average access time for systems with level 1 caches is:
Average Access Time = Hit time + (Miss Rate x Miss Penalty)
How can this be generalized for n level caches?
Answer: If I remember correctly, this is the formula:
Hit time +
(miss rate for cache 1 * (1 - miss rate for cache 2) * miss penalty for cache 1) +
(miss rate for cache 1 * miss rate for cache 2 * miss penalty for cache 2)
Essentially, we just split the case that cache 1 missed into the two cases: cache 2 hit, cache 2 missed.
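A sketch of the usual generalization in Python (this uses the common recursive convention $\text{AMAT}_i = \text{hit time}_i + \text{miss rate}_i \cdot \text{AMAT}_{i+1}$, which may book-keep the miss penalties slightly differently from the formula above):

```python
def amat(hit_times, miss_rates):
    """Average access time for n cache levels plus main memory.

    hit_times[i]  : access time of level i; the last entry is main memory
    miss_rates[i] : local miss rate of level i (one per cache level)
    """
    t = hit_times[-1]                  # main memory always "hits"
    for ht, mr in zip(reversed(hit_times[:-1]), reversed(miss_rates)):
        t = ht + mr * t                # AMAT_i = hit_i + miss_i * AMAT_{i+1}
    return t

print(amat([1, 100], [0.05]))          # single level: 1 + 0.05*100 = 6.0
print(amat([1, 10, 100], [0.1, 0.2]))  # two levels: 1 + 0.1*(10 + 0.2*100) = 4.0
```

The single-level case reduces to the familiar hit time + miss rate × miss penalty.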
I believe that if you understand this, it won't be hard to generalize to $n$ cache levels. | {
"domain": "cs.stackexchange",
"id": 18530,
"tags": "computer-architecture, cache"
} |
Is energy always proportional to frequency? | Question: Google has no results found for "energy not proportional to frequency" and many results for E=hf. Is there an example of an energy that is not proportional to frequency?
Answer: Yes. For photons in vacuum, the energy per photon is proportional to the photon's classical, electromagnetic frequency, as $E = \hbar \omega = h f$. Here, we see a connection between two classical properties of light: the energy and frequency.
What is surprising is that the relation holds for matter, where there is no classical equivalent of the frequency. Nevertheless, in an interferometry experiment, a relative energy shift $\Delta E$ can lead to an observable frequency difference $\Delta f$, so that the phase of an interferometer operated for a time $T$ is $\phi = \Delta E\,T/\hbar$. This was originally observed in neutrons and has more recently been seen in electrons and atoms. Even the rest mass energy $mc^2$ has an equivalent frequency, which is known as the Compton frequency $\omega_C = mc^2/\hbar$. While we cannot (currently) measure it directly, it can be inferred from atom interferometry experiments.
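For a rough sense of scale, the electron's Compton frequency can be computed directly (the constants below are approximate CODATA-style values):

```python
# Compton (angular) frequency of the electron, omega_C = m c^2 / hbar.
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s
m_e = 9.1093837015e-31   # electron mass, kg

omega_C = m_e * c**2 / hbar
print(f"{omega_C:.3e} rad/s")   # ~7.76e20 rad/s
```

This enormous frequency is far beyond direct counting, which is why it must be inferred interferometrically.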
The general idea of a matter-wave frequency occurs where it is possible to make and readout a superposition state, which does not occur classically. | {
"domain": "physics.stackexchange",
"id": 6717,
"tags": "forces, energy, standard-model, frequency, models"
} |
Building non-classical logics in Agda & Coq | Question: Is it possible to construct different systems of logic in Coq or Agda?
I ask because I'm interested in using a proof assistant to construct (and verify) theorems in things like many-valued logics, relevant logics, and conditional logics in Graham Priest's 2nd edition of An Introduction to Non-Classical Logic (I keep going back to over the years in my free time).
It seems like intuitionistic logic is definitely possible in Coq given it was developed alongside the calculus of constructions, but I'm curious if Coq (or Agda) could be used for other logics.
A part of my gut says, "It wouldn't be possible to do logics that use fewer rules of inference or swap out intuitionistic rules for incompatible rules (eg, removing the principle of explosion)." However, another part of my gut says, "Machines using binary gates can still represent & simulate quantum processes, so it may be possible for Coq to represent logics incompatible with it's own implementation."
Any references to good papers/books/lectures/etc. are happily welcome.
Answer: You can define many non-classical logics in Coq (and I assume Agda too), even if they are incompatible with the logic of your proof assistant, but you need to define the concept of inference yourself. That is, you can't rely on Coq formulas, you need to define your own language. And you can't rely on Coq logic, you need to define what counts as a proof. You still use Coq as a meta-language though.
Here is a silly example in Coq with a very restricted language.
(* well-formed formulas *)
Inductive formula : Set :=
| T : formula
| AND : formula -> formula -> formula.
(* axioms and rules of inference *)
Inductive provable : formula -> Prop :=
| TisProvable : provable T
| ANDI : forall f1 f2, provable f1 -> provable f2 -> provable (AND f1 f2)
| ANDEl : forall f1 f2, provable (AND f1 f2) -> provable f1
| ANDEr : forall f1 f2, provable (AND f1 f2) -> provable f2.
You can now show, using the meta-theory of Coq, that provable f for some specific formulas f, or in other words, you can prove theorems of this logic I just made up.
Lemma ANDTTT : provable (AND (AND T T) T).
Proof.
apply ANDI.
- apply ANDI; exact TisProvable.
- exact TisProvable.
Qed.
You can even make Coq automatically prove such simple lemmas:
#[export] Hint Resolve TisProvable ANDI ANDEl ANDEr : provableHints.
Lemma ANDTTT_auto : provable (AND (AND T T) T).
Proof. eauto with provableHints. Qed.
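For readers more comfortable outside Coq, the same "deep embedding" idea can be sketched in ordinary Python: formulas become data, and provability becomes a predicate we define ourselves. (The sketch *decides* provability, which is only possible because this toy logic is so small.)

```python
# Python analogue of the Coq deep embedding above (illustrative only).
from dataclasses import dataclass

class Formula: pass

@dataclass(frozen=True)
class T(Formula): pass               # the formula T

@dataclass(frozen=True)
class AND(Formula):                  # conjunction of two formulas
    left: Formula
    right: Formula

def provable(f: Formula) -> bool:
    # Mirrors TisProvable and ANDI; the elimination rules ANDEl/ANDEr
    # are not needed to decide provability in this tiny logic.
    if isinstance(f, T):
        return True
    if isinstance(f, AND):
        return provable(f.left) and provable(f.right)
    return False

print(provable(AND(AND(T(), T()), T())))  # True
```

Unlike the Coq version, this gives no machine-checked proofs, only a checker; the proof assistant's value is exactly that the meta-theory is verified.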
Some examples of non-classical logics implemented in Coq:
Hybrid Logic
PDL - polymodal logic
RC0 and WC - strictly positive polymodal logics
QRC1 - quantified strictly positive modal logic | {
"domain": "cs.stackexchange",
"id": 19739,
"tags": "logic, proof-assistants, coq, automated-theorem-proving, agda"
} |
ROS Navigation Goal Trigger | Question:
Hello all
I have a question: is there a way to use a real robot reaching a navigation goal as a trigger to operate a motor, like a stepper motor for example?
Originally posted by Os7 on ROS Answers with karma: 50 on 2022-06-21
Post score: 1
Answer:
Yes, if you use move_base you can use its published topics:
https://wiki.ros.org/move_base#Action_API
Action Published Topics
move_base/feedback
(move_base_msgs/MoveBaseActionFeedback)
Feedback contains the current position of the base in the world.
move_base/status
(actionlib_msgs/GoalStatusArray)
Provides status information on the goals that are sent to the move_base
action.
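A minimal sketch of how the status topic could be used as a trigger. The decision logic is plain Python; the commented-out rospy wiring and the `drive_stepper` routine are assumptions about how you might hook it up, not tested code:

```python
# actionlib_msgs/GoalStatus.SUCCEEDED is status code 3.
SUCCEEDED = 3

def goal_reached(status_codes):
    """True if any goal in a GoalStatusArray has SUCCEEDED."""
    return any(s == SUCCEEDED for s in status_codes)

# In a ROS node you would wire it up roughly like this (assumed names):
#
#   import rospy
#   from actionlib_msgs.msg import GoalStatusArray
#
#   def callback(msg):
#       if goal_reached([s.status for s in msg.status_list]):
#           drive_stepper()   # your motor routine (hypothetical)
#
#   rospy.Subscriber('/move_base/status', GoalStatusArray, callback)

print(goal_reached([1, 3]))  # True: one goal is ACTIVE, one SUCCEEDED
```

In practice you would also debounce this (e.g. trigger only on the transition to SUCCEEDED), since the status is republished repeatedly.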
Originally posted by ljaniec with karma: 3064 on 2022-06-21
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Os7 on 2022-06-21:
Thanks alot
Comment by ljaniec on 2022-06-21:
If it answers your question, you can mark it as correct, so the question will be marked as solved in the queue | {
"domain": "robotics.stackexchange",
"id": 37784,
"tags": "navigation"
} |
xcorr in MATLAB for periodic function | Question: I have a periodic signal and I want to find its autocorrelation function.
I can calculate it exactly:
$$R_{uu}(h) = \frac 1M \sum_{k=0}^{M-1} u(k)\cdot u(k-h)$$
But will xcorr() use this function for periodic signals?
BTW, I've created my own function for periodic signals:
function [R h] = intcor(u,y)
%Calculates the correlation between two vectors u, y
M=length(u);
h=0:1:M-1;
R=zeros(M,1);
for i=1:M
for k=1:M
if k-h(i) <= 0
R(i)=R(i)+u(k)*y(k-h(i)+M); %since periodic, add M -> gets same result
else
R(i)=R(i)+u(k)*y(k-h(i));
end
end
R(i)=R(i)/M;
end
end
Answer: The xcorr function assumes a linear cross-correlation, i.e. it assumes that the signals are zero outside the intervals. If you want to have an efficient implementation of the periodic cross-correlation, you can refer to the properties of the Fourier transform with
$$
\mathcal{F}(u \star y)=\mathcal{F}(u) \cdot \mathcal{F}(y)^*,
$$
while exploiting the fact, that the DFT considers signals to be periodic anyway. Here's the corresponding code for that:
N = 20;
u = randn(N,1);
y = randn(N,1);
R = intcor(u, y);
R2 = real(ifft(fft(u) .* conj(fft(y))))/(N); % Calculate the periodic CC
hold off;
plot([u y R])
hold on;
plot(R2, 'ko');
legend({'u', 'y', 'intcor', 'fft-based'});
To answer your questions in the comments:
It is still not clear to me, if I can use xcorr to calculate the autocorrelation of a periodic signal correctly? -
No, you cannot use the xcorr function, as it assumes a linear cross-correlation. What you want is a calculation, where you assume the signal is periodic, and you just input one period of the signal.
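For completeness, here is an equivalent check in Python/NumPy, comparing the FFT-based periodic correlation with a brute-force periodic (mod-$N$) implementation like the `intcor` above:

```python
import numpy as np

# Periodic (circular) correlation via the FFT, as in the MATLAB one-liner:
# R = ifft(fft(u) .* conj(fft(y))) / N.
def circcorr(u, y):
    N = len(u)
    return np.real(np.fft.ifft(np.fft.fft(u) * np.conj(np.fft.fft(y)))) / N

# Brute force with periodic indexing: R(h) = (1/N) * sum_k u(k) * y(k-h mod N).
def intcor(u, y):
    return np.array([np.mean(u * np.roll(y, h)) for h in range(len(u))])

rng = np.random.default_rng(0)
u = rng.standard_normal(16)
print(np.allclose(circcorr(u, u), intcor(u, u)))  # True
```

Both assume you pass exactly one period of the signal, which is the point of the distinction above.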
What is the difference between linear cross-correlation and periodic cross-corellation? - Consider the general equation for cross-correlation:
$$
R_{uy}(h)=\sum_{k=-\infty}^{\infty}u(k)^*y(k-h)
$$
Normally, the summation is done over all times (i.e from $-\infty$ to $\infty$). However, the signals you pass to the xcorr-function are time-limited (i.e. they consist only of $N$ non-zero samples). Now, in the linear case, the cross-correlation assumes that the input-sequences vanish for $k<0$ or $k>N-1$. In the circular-correlation (i.e. assuming periodic signals), the signals are not assumed to be zero for the index above, but instead we assume $u(k+iN)=u(k), \forall i\in\mathbb{Z}$. I.e., the sequences are assumed to be periodic with period N. | {
"domain": "dsp.stackexchange",
"id": 10481,
"tags": "autocorrelation, estimation, cross-correlation"
} |
Complex conjugate of the Schrödinger equation? | Question: This might be a very simple question but I don't understand how to compute the complex conjugate of the Schrödinger equation:
$$
i\partial_t \psi = H\psi
$$
where $H$ is an hermitian operator. How to proceed now? All answers I found via the search function were not satisfactory for me.
First idea: Complex conjugate both sides:
$$
(i\partial_t \psi)^* = (H\psi)^*
$$
The problem is that I am not sure how to proceed here. What is the conjugate of an operator $A$ acting on a function $f$? So what is $(Af)^*$? Does the following hold: $(Af)^* = A^\dagger f^*$ ? If so, I get
$$
-i\partial_t^\dagger \psi^* = H\psi^*
$$
The problem is now, that I don't know what $\partial_t^\dagger$ is? If $\partial_t^\dagger = \partial_t$ holds then I have the common result!
Second idea: Say $C$ is the operator that conjugates a function: $Cf=f^*$. Then I can write
$$
H\psi^* = HC\psi = (HC + CH - CH)\psi = [H,C]\psi + CH\psi = [H,C]\psi + Ci\partial_t\psi
$$
where $[\cdot,\cdot]$ is the commutator. Moreover I find
$$
Ci\partial_t\psi = Ci\partial_t\psi - i\partial_tC\psi + i\partial_tC\psi = \{i\partial_t,C\}\psi - i\partial_t\psi^*
$$
where $\{\cdot,\cdot\}$ is the anticommutator. Thus I get
$$
H\psi^* = - i\partial_t\psi^* + [H,C]\psi + \{i\partial_t,C\}\psi
$$
If I could prove that $[H,C]= \{i\partial_t,C\} = 0$ then I would also get the common result.
But unfortunately I get stuck here... can someone resolve my problems?
Answer: Generally speaking the complex conjugate of an operator is not a standard notion of operator theory, though it can be defined after having introduced some general notions.
Definition. A conjugation $C$ in a Hilbert space $\cal H$ is an antilinear map $C : \cal H \to \cal H$ such that is isometric ($||Cx||=||x||$ if $x\in \cal H$) and involutive ($CC=I$).
There are infinitely many such maps, at least one for every Hilbert basis in $\cal H$ (the map which conjugates the components of any vector with respect to that basis).
On $L^2$ spaces there is a standard conjugation $$C : L^2(\mathbb R^n, dx) \ni \psi \mapsto \overline{\psi}\in L^2(\mathbb R^n, dx)\:, $$ where $\overline{\psi}(x) := \overline{\psi(x)}$ for every $x \in \mathbb R^n$ and where $\overline{a+ib}:= a-ib$ for $a,b \in \mathbb R$.
Definition. An operator $H : D(H) \to \cal H$ (where henceforth $D(H)\subset \cal H$) is said to be real with respect to a conjugation $C$ if $$CHx=HCx \quad \forall x \in D(H)$$ (which implies $C(D(H)) \subset D(H)$ and thus $C(D(H)) = D(H)$ in view of $CC=I$, so that the written condition can equivalently be re-phrased $CH=HC$).
The complex conjugate $H_C$ of an operator $H$ with respect to a conjugation $C$ can be defined as $$H_C :=CHC\:.$$ This operator, with domain $C(D(H))$, is symmetric, essentially self-adjoint, or self-adjoint if $H$ is respectively symmetric, essentially self-adjoint, or self-adjoint. Obviously it coincides with $H$ if and only if $H$ is real with respect to $C$.
Let us come to your issue. Let us start from the Schroedinger equation
$$-i \frac{d}{dt} \psi_t = H\psi_t\:.$$
Here we have a vector valued map $$\mathbb R \ni t \mapsto \psi_t \in \cal H\:,$$
such that $\psi_t \in D(H)$ for every $t \in \mathbb R$ and the derivative is computed with respect to the topology of the Hilbert space whose norm is $||\cdot|| = \sqrt{\langle \cdot| \cdot \rangle}$:
$$\frac{d}{dt} \psi_t = \dot{\psi}_t \in \cal H$$
means
$$ \lim_{h\to 0} \left|\left| \frac{1}{h} (\psi_{t+h} -\psi_t) - \dot{\psi}_t \right|\right|=0\:.$$
(See the final remark)
If $C : \cal H \to \cal H$ is a conjugation, as it is isometric and involutive, in view of the definition above of derivative, we have
$$C\frac{d}{dt} \psi_t = \frac{d}{dt} C\psi_t\tag{1}$$
where both sides exist or do not simultaneously.
Summing up, given a conjugation $C$, and the (self-adjoint) Hamiltonian operator $H$, both in the Hilbert space $\cal H$, the complex conjugate of the Schroedinger equation
$$-i \frac{d}{dt} \psi_t = H\psi_t\:.\tag{2}$$
is a related equation satisfied by $C\psi_t$ and just obtained by applying $C$ to both sides of (2) and taking (1) and $CC=I$ into account, obtaining
$$i \frac{d}{dt} C\psi_t = H_C\: C\psi_t\:.$$
If $H$ is real with respect to $C$ (this is the case for a particle without spin described in $L^2(\mathbb R^3)$, assuming the Hamiltonian of the form $P^2/2m + V$ and $C$ is the standard complex conjugation of wavefunctions), the equation reduces to
$$i \frac{d}{dt} C\psi_t = H C\psi_t\:.$$
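As a finite-dimensional illustration (a toy real symmetric matrix standing in for $H$, which is a simplifying assumption, not an actual Schroedinger operator), one can verify numerically that conjugating a solution of $-i\frac{d}{dt}\psi_t = H\psi_t$ yields a solution of the conjugated equation:

```python
import numpy as np

# With the convention -i dpsi/dt = H psi, the solution is psi_t = exp(iHt) psi_0.
# If H is real symmetric (real w.r.t. the standard conjugation C), then
# C psi_t = exp(-iHt) C psi_0, i.e. the conjugate solves i d(C psi)/dt = H C psi.
rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
H = A + A.T                              # real symmetric, so CH = HC

w, V = np.linalg.eigh(H)                 # H = V diag(w) V^T with real V
def evolve(sign, t, psi):                # exp(sign * i H t) psi, spectrally
    return V @ (np.exp(sign * 1j * w * t) * (V.T @ psi))

psi0 = rng.standard_normal(n) + 1j * rng.standard_normal(n)
t = 0.7
lhs = np.conj(evolve(+1, t, psi0))       # C psi_t
rhs = evolve(-1, t, np.conj(psi0))       # evolving C psi_0 under the conjugate flow
print(np.allclose(lhs, rhs))             # True
```

The agreement reflects exactly the identity $C e^{iHt} = e^{-iHt} C$ for $H$ real with respect to $C$.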
REMARK. It is worth stressing that $-i \frac{d}{dt}$ is not an operator in the Hilbert space $\cal H$ as, for instance, $H$ is. To compute $H\psi$, it is enough to know the vector $\psi \in D(H)$. To compute
$\frac{d}{dt}\psi_t$ we must know a curve of vectors $$\gamma :\mathbb R \ni t \mapsto \psi_t \in \cal H\:.$$
$\frac{d}{dt}$ computes the derivative of such curve defining another curve
$$\dot{\gamma} :\mathbb R \ni t \mapsto \frac{d}{dt}\psi_t \in \cal H\:.$$
More weakly, one may view $\frac{d}{dt}|_{t_0}$ as a map associating vectors $\frac{d}{dt}|_{t_0}\psi_t$ to vector-valued curves defined in a neighborhood of $t_0$. In both cases it does not make sense to apply the derivative to a single vector $\psi$, whereas $H\psi$ is well defined. | {
"domain": "physics.stackexchange",
"id": 29719,
"tags": "quantum-mechanics, operators, hilbert-space, schroedinger-equation, complex-numbers"
} |
Problem with package structure? Hokuyo not working | Question:
Hey guys,
Sry I can't find a good question name for this :(
Over the last few days I made my first attempt at my very first own package. So I looked around a little bit in the forum and found something I could use. I wanted to publish some laser data from my Hokuyo laser. Well, not really publish it, I just want to print it on my console.
Yesterday I got it to work, but today I started my PC and nothing works any more. You can review the "package" here link to github.
As I said, it is my very first test, but somehow if I let it run via Eclipse, nothing happens. The Hokuyo itself works; I can connect to it via rosrun urg_node urg_node _ip_address:="192.168.0.10". I just can't figure out what happened; maybe there are some problems in my structure or in my CMakeLists.txt or something like this? Maybe you can help me.
Edit:
mkdir -p ~/new_hokuyo_test/src
cd ~/new_hokuyo_test/src/
catkin_init_workspace
cd ..
catkin_make
cd src/
catkin_create_pkg hokuyo_test_pkg std_msgs roscpp rospy
cd ~/new_hokuyo_test/
catkin_make
Uncommented some stuff (like add executable etc) in CMakeLists.txt (see github)
catkin_make (just to test)
added hokuyo_test_pkg_node.cpp in src of hokuyo_test_pkg
catkin_make in ws directory
source devel/setup.bash
rosrun hokuyo_test_pkg hokuyo_test_pkg_node
Thats it basically
Edit2: The launch file
<launch>
<node pkg="urg_node" type="urg_node" name="urg_node">
<param name="ip_address" value="192.168.0.10" />
</node>
<node pkg="hokuyo_test_pkg" type="hokuyo_test_pkg_node" name="hokuyo_test_pkg_node" output="screen">
</node>
</launch>
Thanks for helping me. I know these are really beginner questions, but anyway I am struggling with it and really try to solve things by myself and don't (though maybe it looks like it :) ) post them here instantly. So thank you for the great help here.
Originally posted by schultza on ROS Answers with karma: 232 on 2015-02-25
Post score: 0
Original comments
Comment by gvdhoorn on 2015-02-26:
Please update your question description with the exact steps that you are using to build your workspace and run your node.
Comment by gvdhoorn on 2015-02-26:
Also: we normally only commit the package directory itself to a github repository, not the entire catkin workspace (and especially not the devel directory). We can recreate the workspace and build it ourselves quite easily.
Answer:
mkdir -p ~/new_hokuyo_test/src
cd ~/new_hokuyo_test/src/
ok.
catkin_init_workspace
you don't need this, catkin_make will do all that is needed for you on first invocation.
catkin_make
I think you forgot a cd .. before catkin_make?
cd src/
catkin_create_pkg hokuyo_test_pkg std_msgs roscpp rospy
cd ~/new_hokuyo_test/
catkin_make
# Uncommented some stuff (like add executable etc) in CMakeLists.txt (see github)
catkin_make (just to test)
added `hokuyo_test_pkg_node.cpp` in `src` of `hokuyo_test_pkg`
catkin_make in ws directory
source devel/setup.bash
rosrun hokuyo_test_pkg hokuyo_test_pkg_node
And at this point you have the urg_node already running in another terminal? And a roscore?
Originally posted by gvdhoorn with karma: 86574 on 2015-02-26
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by schultza on 2015-02-26:
No, I just forgot to write the cd ..; edited. I do have a roscore running, but not the urg_node; is it needed? I tried it with the urg_node running (rosrun urg_node urg_node _ip_address:="192.168.0.10"), but it didn't make any difference. The node which I wrote starts, but there are no prints.
Comment by gvdhoorn on 2015-02-26:
How do you expect your test node to print anything, if there is no node that publishes those LaserScan messages? So yes, you need to start all three: roscore, urg_node and your own node. You might want to use a launch file for that.
Comment by gvdhoorn on 2015-02-26:
After you've started the urg_node, use rostopic list in another terminal (where you've also sourced devel/setup.bash) to see if it is actually publishing on the topic you expect. If it is, use rostopic echo /TOPIC_NAME to make sure there are actually msgs being published.
Comment by schultza on 2015-02-26:
You are right :) I started the urg_node, which is publishing /scan (which I want to subscribe to) and also others like /echoes. If I rostopic echo /scan I don't see anything; however, if I rostopic echo /echoes I can see some data.
Comment by schultza on 2015-02-26:
So ok, somehow it works; I didn't change anything. :( Can't explain why; maybe I missed starting the urg_node sometimes, or started the other node too early. Anyway, I am happy that it's working now. Thank you for your help!!! There is one thing left; I don't know if I should open a new question for this.
Comment by schultza on 2015-02-26:
I tried writing the launch file you mentioned, but it doesn't find my node. Error message: ERROR: cannot launch node of type [hokuyo_test_pkg/hokuyo_test_package_node]: can't locate node [hokuyo_test_package_node] in package [hokuyo_test_pkg]. I added the launch file to the question.
Comment by schultza on 2015-02-27:
Okay, found it already: I had a big typo in the launch file.
Comment by gvdhoorn on 2015-02-27:
Ok, good to hear that you found it. Question answered? :)
Comment by schultza on 2015-02-27:
yes :) Thank you very much for help an patience :) | {
"domain": "robotics.stackexchange",
"id": 20992,
"tags": "ros, urg-node, hokuyo, hokuyo-laser, cmake"
} |
Need help understanding which values can be inserted into a specific node in a binary tree | Question: I am studying binary trees and I am failing to see what numbers can be inserted into this specific node position. The values to pick from are : 16, 24, 36, 45, 49, 51, 58.
I have tried the values 24, 36, 45, and also 16, 24, 36. I tried these values because, going from left to right, they are in increasing order. This is not working, though, and I'm not quite sure why my selections are wrong. Any help resolving my confusion is appreciated.
Answer: Binary trees can serve many different purposes. They don't necessarily need to be in an ordering like such.
Note that below, I hid key parts of the solution. You can view them by hovering over with your mouse, but I encourage you to try to solve it using clues from my response without seeing the actual answer.
For your particular problem, it appears that the ordering of the tree is such that all nodes to the 'right' have a value greater than our root (global or local root, doesn't matter) and all nodes to the 'left' have a value less than our root.
With this in mind, $u$ must be greater than our root node, so
$16$
is ruled out. Then, by taking $u$ to be our next (local) root node, $u$ must be greater than the maximum value of its left subtree and $u$ must be less than the minimum value of its right subtree. From this, we can see that
$28 < u < 50$.
This rules out
$24, 51, 58$.
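(As an aside, this "local root" reasoning can be automated: walking from the root down to the empty slot, every left turn tightens the upper bound and every right turn tightens the lower bound. A small Python sketch of that rule; the tree literal below is a hypothetical stand-in consistent with the bounds above, since the original figure is not reproduced here.)

```python
def insertion_range(root, path):
    """Walk `path` ('L'/'R' turns) from root; return the open interval
    (lo, hi) that a value placed at the end of the path must satisfy
    to keep the binary search tree ordering valid."""
    lo, hi = float("-inf"), float("inf")
    node = root
    for turn in path:
        key, left, right = node        # node = (key, left_child, right_child)
        if turn == "L":
            hi = min(hi, key)          # left subtree holds values < key
            node = left
        else:
            lo = max(lo, key)          # right subtree holds values > key
            node = right
    return lo, hi

# hypothetical tree consistent with the constraints derived above:
# root 28, whose right child 50 is the parent of the empty slot
tree = (28, None, (50, None, None))
lo, hi = insertion_range(tree, "RL")   # right of 28, then left of 50
candidates = [16, 24, 36, 45, 49, 51, 58]
print([v for v in candidates if lo < v < hi])   # -> [36, 45, 49]
```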
Therefore, the answer to your problem is
$36, 45, 49$ | {
"domain": "cs.stackexchange",
"id": 11854,
"tags": "binary-trees"
} |
Why does my TV screen stay dust-free while other screens do not? | Question: Recently I bought a pretty cheap flat-screen TV. The screen is IPS, semi-matte. Seven months in, I haven't cleaned it once, and there is no dust at all. The TV works a few hours per day.
I was wondering why it does not collect dust, while my other computer & laptop screens, which are also IPS and matte, collect dust like crazy.
Answer: It is unlikely that the electronics of the device is responsible for that. The times of high static voltage in TVs are long gone. The only electronic thing I think could be responsible for that is improper grounding. But I think even a manufacturer of cheap TVs cares about proper grounding, unless he wants to risk expensive lawsuits filed by the relatives of killed customers.
More likely the phenomenon is related to the plastic of the casing or the screen. It could have been treated with antistatic agents (which I have read is not that uncommon), or the plastic itself could be minimally conductive, so that static charge does not accumulate on it. Plastic can be made conductive by embedding carbon dust in it (which is probably not advisable for a screen...).
One thing I have observed myself is that cleaning often makes things worse, because by doing it you introduce charges to the surface, which stay there long enough to attract dust, thanks to the insulating properties of ordinary plastics. It's like rubbing a balloon or a vinyl record and attracting your hair with it. So, paradoxically, maybe simply the fact that you just bought the TV and have not cleaned it yet might be responsible for it having stayed clean.
"domain": "physics.stackexchange",
"id": 77845,
"tags": "electrostatics, electrons, charge, electronics, electrical-engineering"
} |
How to use the iircomb filter in matlab | Question: I have a sound file that needs filtering which looks like this:
Hence I think that I have noise components equally spaced at 832 Hz. I tried to use the iircomb filter taken from the MathWorks website, and I managed to create this:
clear
fs = 44100;
fo = fs/53;
q = 35;
bw = (fo/(fs/2))/q;
[b,a] = iircomb(fs/fo,bw,'notch'); % Note type flag 'notch'
fvtool(b,a);
[f,fs] = audioread('sample3b_va18535.wav');
fnew=filter(b,a,f)
N = size(f,1);
df = fs / N;
w = (-(N/2):(N/2)-1)*df;
y = fft(fnew(:,1), N) / N; % For normalizing, but not needed for our analysis
y2 = fftshift(y);
figure(2);
plot(w,abs(y2));
p = audioplayer(fnew,fs);
p.play;
Now, the filter works pretty well, but there is still a little bit of audible noise left after filtering. Should I just live with it, or could my filter be improved? Could someone with more experience work on it for a minute or two? The sound file is only 3 seconds long.
Here is the audio file if anyone needs it: SOUND FILE
Thank you for help.
Answer: Precision delay is, in my opinion, best understood in terms of the Nyquist-Shannon sampling and reconstruction theorem.
If a continuous-time (a.k.a. "analog", but that is somewhat imprecise) signal $x(t)$ is bandlimited to below the Nyquist frequency $\frac{1}{2T}$ (where $T$ is the sampling period and $f_\mathrm{s} \triangleq \frac1T$ is the sample rate), then the original continuous-time signal $x(t)$ can be reconstructed from the discrete samples $x[n]$ as:
$$\begin{align}
x(t) &= \sum_{n=-\infty}^{\infty} x(nT) \, \operatorname{sinc}\left(\tfrac{t - nT}{T}\right) \\
&= \sum_{n=-\infty}^{\infty} x[n] \, \operatorname{sinc}\left(\tfrac{t - nT}{T}\right) \\
\end{align}$$
where
$$ \operatorname{sinc}(u) \triangleq \begin{cases}
\frac{\sin(\pi u)}{\pi u}, & \text{if } u \ne 0 \\
1, & \text{if } u = 0 \\
\end{cases} $$
All terms of the summation are bandlimited to a maximum frequency of $\frac{1}{2T}$, so the summation is bandlimited to the same bandlimit as the original $x(t)$. The samples are
$$ x[n] \triangleq x(t) \Bigg|_{t = nT} \triangleq x(nT)$$
Because $x(t)$ can be evaluated at any $t$, not only at multiples of $T$, this is the basis on which we interpolate to get a precision delay.
Suppose we wanna delay by an amount of time $\tau$, then the output of the delay is
$$\begin{align}
y(t) &= x(t-\tau) \\
&= \sum_{n=-\infty}^{\infty} x[n] \, \operatorname{sinc}\left(\tfrac{t-\tau - nT}{T}\right) \\
\end{align}$$
So that is explicitly where you get the delayed signal of $y(t) = x(t-\tau)$ from the samples $x[n]$. The first problem we have is that we cannot add up an infinite number of terms. So our first practical approximation is truncating terms. But that is the same as applying a rectangular window (usually considered to be the worst one), so instead, we'll apply a better window.
$$\begin{align}
y(t) &= x(t-\tau) \\
&= \sum_{n=-\infty}^{\infty} x[n] \, \operatorname{sinc}\left(\tfrac{t-\tau - nT}{T}\right) \\
&= \sum_{n=-\infty}^{\infty} x[n-n_\tau] \, \operatorname{sinc}\left(\tfrac{t-\tau - (n-n_\tau)T}{T}\right) \\
&= \sum_{n=-\infty}^{\infty} x[n-n_\tau] \, \operatorname{sinc}\left(\tfrac{t-(\tau-n_\tau T)}{T}-n\right) \\
\end{align}$$
... for any integer $n_\tau$. Let's choose it as the integer part of the dimensionless $\frac{\tau}{T}$, which is the delay in sample units.
$$ n_\tau \triangleq \left\lfloor \frac{\tau}{T} \right\rfloor $$
The fractional part of the delay (in sample units) is
$$ \frac{\tau}{T} - \left\lfloor \frac{\tau}{T} \right\rfloor $$
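As a numerical aside, a truncated, windowed version of the interpolation sum is easy to check. The sketch below is a plain-NumPy illustration (Hann-style taper on the sinc; the tone frequency, tap count, and delay are arbitrary choices of mine, not anything prescribed by the derivation): it delays a sampled sinusoid by 3.37 samples and compares against the analytically shifted tone.

```python
import numpy as np

fs = 100.0                       # sample rate (Hz)
T = 1.0 / fs
f0 = 5.0                         # test tone, well below Nyquist (fs/2 = 50 Hz)
N = 400
n = np.arange(N)
x = np.cos(2 * np.pi * f0 * n * T)

tau = 3.37 * T                   # delay: 3 whole samples plus 0.37 of a sample
half = 40                        # kernel half-length -> 81 taps

y = np.zeros(N)
for m in range(half + 4, N - half - 4):      # interior samples only
    c = int(round(m - tau / T))              # nearest-sample kernel center
    k = np.arange(c - half, c + half + 1)
    u = (m - tau / T) - k                    # offsets in sample units
    w = 0.5 * (1.0 + np.cos(np.pi * u / (half + 1)))   # cosine taper on the sinc
    y[m] = np.dot(x[k], np.sinc(u) * w)      # np.sinc(u) = sin(pi u)/(pi u)

ref = np.cos(2 * np.pi * f0 * (n * T - tau))  # exact delayed tone
sl = slice(half + 4, N - half - 4)
err = np.max(np.abs(y[sl] - ref[sl]))
print(err)                                    # small for a low-frequency tone
```

The residual error grows for tones near Nyquist and shrinks with longer kernels, which is the usual trade-off for windowed-sinc fractional delay.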
(i gotta return to this a little bit later.) | {
"domain": "dsp.stackexchange",
"id": 7119,
"tags": "matlab, filters, infinite-impulse-response, sound, comb"
} |
Erosion composition rule $\left ( f \ominus g \right ) \ominus h = f \ominus \left (g \oplus h \right ) $ | Question: I want to prove that erosion of a signal by a function $g$ followed by erosion with another function $h$ is equivalent to erosion of the signal by the dilation of the two functions (erosion property):
$$\left ( f \ominus g \right ) \ominus h = f \ominus \left (g \oplus h \right ) $$
My approach:
By definition of erosion we get that: $$\left ( f \ominus g\right )(x) = \bigwedge_{y} \left[ f(x+y)-g(y) \right]$$
So, we get that $$\left ( \left ( f \ominus g \right ) \ominus h \right )(x) = \bigwedge_{y}\left ( \bigwedge_{z} \left[ f(x+y+z)-g(y)-h(z) \right]\right )$$
Is my approach correct so far? And if yes, how do I go on?
Answer: Morphological erosion is not associative the way dilation is, much like addition is associative but subtraction is not. Consider the following basic definitions first:$$(A \ominus B)^C = A^C \oplus \widetilde{B} \tag{1}$$
$$(A \oplus B)^C = A^C \ominus \widetilde{B} \tag{2}$$
where $\widetilde{B}$ is the reflection of kernel $B$ and $A^C$ is the complement of image $A$, i.e. the inverse of image $A$. Now let us evaluate the expression in the question:
$$(A \ominus B) \ominus C = ((A\ominus B)^C \oplus \widetilde{C})^C \tag{using eq. 1}$$
$$(A \ominus B) \ominus C = ((A^C\oplus \widetilde{B}) \oplus \widetilde{C})^C \tag{using eq. 1} $$
$$(A \ominus B) \ominus C = (A^C \oplus (\widetilde{B} \oplus \widetilde{C}))^C \tag{since dilation is associative}$$
$$(A \ominus B) \ominus C = A \ominus \widetilde{(\widetilde{B} \oplus \widetilde{C})} \tag{using eq. 2}$$
$$(A \ominus B) \ominus C = A \ominus (B \oplus C) \tag{distributing the reflection}$$
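The identity can also be spot-checked numerically. A minimal Python sketch (my own illustration, not from the question) that treats 1-D binary images as sets of integer coordinates and implements both operations straight from the Minkowski definitions:

```python
def dilate(A, B):
    """Minkowski sum A ⊕ B."""
    return {a + b for a in A for b in B}

def erode(A, B):
    """A ⊖ B = {x : x + b ∈ A for every b ∈ B}."""
    lo, hi = min(A) - max(B), max(A) - min(B)   # only candidates that can fit
    return {x for x in range(lo, hi + 1) if all(x + b in A for b in B)}

A = {0, 1, 2, 3, 5, 6, 7, 8, 9, 10, 12, 13, 14}   # a 1-D binary "image"
B = {0, 1}                                        # structuring elements
C = {0, 2}

lhs = erode(erode(A, B), C)    # (A ⊖ B) ⊖ C
rhs = erode(A, dilate(B, C))   # A ⊖ (B ⊕ C)
print(lhs == rhs)              # -> True
```

The same check passes for any sets for which the intermediate erosion is nonempty, which is exactly the chained identity derived above.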
Hope that answers the question | {
"domain": "dsp.stackexchange",
"id": 8696,
"tags": "filters, computer-vision, non-linear, morphological-operations"
} |
Effect of bracing on a music instrument top - is FEM modal analysis enough to predict this? | Question: I'm currently building a music instrument (string instrument), and I am at the step where one applies certain braces to the soundboard. Since I actually don't know how the bracing will affect the sound (or basically anything) I thought it would be a good starting point to model the soundboard and the applied braces with CAD, and then apply an FEM simulation that will show to me the effect of the braces.
As to my background: I'm a physicist, but besides a course in technical mechanics 1 (statics) and some time self-studying continuum mechanics and the Cauchy stress tensor, I don't have a background in engineering.
The program I'm using can perform a modal analysis on the soundboard, and modal analysis is also the predominant way to model e.g. violin bodies with FEM technology (for example in this publication, or in this YouTube video).
But of what worth are those simulations actually? In the end, I roughly want to know how well the soundboard is able to transmit to the air the frequencies that were present in the strings; that is, how well an amplitude at some frequency in the strings will translate to an amplitude in the air.
A modal analysis will show me in what ways (and with what frequencies) the body will vibrate on its own (without external force applied). But it won't show me the amplitude of the soundboard (because it's arbitrary): following this Stack Exchange answer on modal analysis, the amplitudes of the eigenmodes can have any value, similar to the length of an eigenvector, which can also have any value.
Granted, a periodic force will excite a vibrational mode's amplitude more strongly the closer it matches that mode's frequency, but doesn't it also matter where this force is applied? When I apply periodic forces at a point on the soundboard where a certain mode of that soundboard has a node, I would expect the mode not to be excited at all, for example.
Additionally, I know from a simple harmonic oscillator that its amplitude, subject to an external driving force, will reach the maximum at its resonance frequency. Is the same true here? When I apply a driving force to the soundboard, will it be sufficient to check how it affects the eigenmodes of the soundboard, because this will be the strongest excitations anyway?
As an addendum, I am aware that I don't model the transfer from soundboard to air at all. For now, I only want to model the transfer from amplitudes in the strings to amplitudes in the soundboards vibrations.
So to make the question short and concise: What can (and what can't) modal analysis tell me about the transfer of periodic motion from the bridge (where the strings are attached) to the soundboard? And since this information alone is probably not enough: when modelling the effect of external forces on the soundboard, will it be enough to only look at the frequencies of the previously determined eigenmodes?
Answer:
What can (and what can't) modal analysis tell me about the transfer of periodic motion from the bridge (where the strings are attached) to the soundboard?
Very often in engineering, we find ourselves in a situation where getting the exact complete answer is difficult and time consuming. But often there is a simplified analysis method that gets us part of the answer quickly and easily. Often, part of the answer is actually enough to solve the problem, so we don't even need to bother with the complicated analysis.
For example, in your problem, you really want to know transmissibility (and/or resonant amplification). A forced response analysis will tell you that, but it can be long and complicated (and often requires inputs that you might not even know, like the damping). A modal analysis, only tells you natural frequency and mode shape. But we know that resonant amplification is related to the frequency ratio between the excitation and the natural frequency. So we can use this to get part of the answer.
E.g. let's say you are concerned with an excitation frequency of 440 Hz. If the natural frequency is 40,000 Hz, then you don't even need to run a forced response analysis to know that the resonant amplification is basically nothing. Maybe it's 1.0001 and maybe it is 1.0002, but who cares. Further, if brace A gives you a natural frequency of 40,000 Hz and brace B gives 41,000 Hz, then both are "who cares" conditions, and there is no reason to prefer one over the other. You can save yourself the hassle of running a forced response analysis.
Now on the other hand, if the excitation frequency is 440 Hz and the natural frequency is 430 Hz, then you know that the resonant amplification could be significant. You don't know the exact answer until you run the full forced response analysis, but you know it is definitely more than nothing. And if brace A gives you 430 Hz and brace B gives you 40,000 Hz, that alone might be enough to tell you to prefer one over the other, even without running forced response.
In my work (not musical instruments, but very much vibration), I use modal analysis as a first screening criterion. When I come up with a design, I first look at the natural frequencies. If they are not where I want them, then I start tweaking the design to move them around. When they start to get close to where I want them, then I start looking at forced response analysis.
When I apply a periodic force at one point, is there a simple way to determine how much an eigenmode will be driven from that point? I would guess that the more the eigenmode vibrates at that point, the better the force couples to the eigenmode.
Yes, that is it exactly. In mathematical terms, the response will be the dot product of an excitation force vector and the mode shape vector.
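That dot-product rule is easy to see on a toy model: for a 3-mass fixed-fixed spring chain, the second mode shape has a node at the middle mass, so a force applied there has zero projection onto that mode and cannot excite it. A NumPy sketch (the unit masses and stiffnesses are arbitrary illustrative values, not anything from a real soundboard):

```python
import numpy as np

# 3 equal masses in a fixed-fixed spring chain (unit mass and stiffness)
K = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])   # stiffness matrix; mass matrix M = I

eigvals, modes = np.linalg.eigh(K)   # columns of `modes` are mode shapes
freqs = np.sqrt(eigvals)             # natural frequencies (rad/s), ascending

F = np.array([0.0, 1.0, 0.0])        # force pattern: middle mass only

participation = modes.T @ F          # one dot product per mode
print(freqs)
print(participation)                 # middle entry ~ 0: mode 2 is not excited
```

The second mode shape here is proportional to [1, 0, -1], with its node exactly at the forced mass, so its participation factor vanishes; forces anywhere else would excite it.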
Overall I think you might benefit from an undergraduate textbook on vibration theory. Rao could be good. The most recent edition is expensive, as textbooks typically are, but a used copy of the previous edition is reasonable:
https://www.amazon.com/Mechanical-Vibrations-5th-Singiresu-Rao/dp/0132128195 | {
"domain": "engineering.stackexchange",
"id": 5043,
"tags": "finite-element-method, modeling, vibration, acoustics, modal-analysis"
} |
Determining initial and final states from FA diagram | Question: I am new to automata and am learning to find the concatenation of two FAs, but this one has confused me.
I know how to do concatenation, but I am confused about FA 1: what does x1 mean? Does it mean a final state, or does it mean both an initial and a final state because it is at the start? Please just clarify what the initial state and the final state are in FA 1; then I'll take it from here and will be able to do the concatenation.
P.S. I know that +- in a state means both initial and final state, but I don't know about the double-circle form. Kindly help me with this.
Answer: In your diagrams, a triangle points at the initial state, and double circles indicate final states. Inside the circles you can find the state names, such as x1. | {
"domain": "cs.stackexchange",
"id": 12696,
"tags": "automata, finite-automata, regular-expressions"
} |
Generic error handler function for POSIX shell scripts | Question: Intention
I came up with the idea of a generic, portable, highly reliable, and further customizable function for shell scripts, written in POSIX, for error handling.
Purpose
The function shall find out whether the terminal has color support, and act accordingly. If it has, then I highlight the error origin and the exit code with different colors. The main message shall be tabbed from the left, all for the best readability.
Example function call and error handler output (textual) + Explanation
print_usage_and_exit is a self-explanatory function which watches over the number of given arguments; it accepts exactly one, so let's give it some more:
Example function:
print_usage_and_exit()
{
# check if exactly one argument has been passed
[ "${#}" -eq 1 ] || print_error_and_exit 1 "print_usage_and_exit" "Exactly one argument has not been passed!\\n\\tPassed: ${*}"
# check if the argument is a number
is_number "${1}" || print_error_and_exit 1 "print_usage_and_exit" "The argument is not a number! Exit code expected."
echo "Usage: ${0} [-1]"
echo " -1: One-time coin collect."
echo "Default: Repeat coin collecting until interrupted."
exit "${1}"
}
Example function call - erroneous:
print_usage_and_exit a b c 1 2 3
Example output:
print_usage_and_exit()
Exactly one argument has not been passed!
Passed: a b c 1 2 3
exit code = 1
Example error handler output (visual)
The actual error handler function code
print_error_and_exit()
# expected arguments:
# $1 = exit code
# $2 = error origin (usually function name)
# $3 = error message
{
# check if exactly 3 arguments have been passed
# if not, print out an internal error without colors
if [ "${#}" -ne 3 ]
then
printf "print_error_and_exit() internal error\\n\\n\\tWrong number of arguments has been passed: %b!\\n\\tExpected the following 3:\\n\\t\\t\$1 - exit code\\n\\t\\t\$2 - error origin\\n\\t\\t\$3 - error message\\n\\nexit code = 1\\n" "${#}" 1>&2
exit 1
fi
# check if the first argument is a number
# if not, print out an internal error without colors
if ! [ "${1}" -eq "${1}" ] 2> /dev/null
then
printf "print_error_and_exit() internal error\\n\\n\\tThe first argument is not a number: %b!\\n\\tExpected an exit code from the script.\\n\\nexit code = 1\\n" "${1}" 1>&2
exit 1
fi
# check if we have color support
if [ -x /usr/bin/tput ] && tput setaf 1 > /dev/null 2>&1
then
# colors definitions
readonly bold=$(tput bold)
readonly red=$(tput setaf 1)
readonly yellow=$(tput setaf 3)
readonly nocolor=$(tput sgr0)
# combinations to reduce the number of printf references
readonly bold_red="${bold}${red}"
readonly bold_yellow="${bold}${yellow}"
# here we do have color support, so we highlight the error origin and the exit code
printf "%b%b()\\n\\n\\t%b%b%b\\n\\nexit code = %b%b\\n" "${bold_yellow}" "${2}" "${nocolor}" "$3" "${bold_red}" "${1}" "${nocolor}" 1>&2
exit "$1"
else
# here we do not have color support
printf "%b()\\n\\n\\t%b\\n\\nexit code = %b\\n" "${2}" "${3}" "${1}" 1>&2
exit "$1"
fi
}
EDIT
I realized that tput could very well be somewhere other than /usr/bin/.
So, while keeping the original code untouched, I have changed the line:
if [ -x /usr/bin/tput ] && tput setaf 1 > /dev/null 2>&1
to a more plausible check:
if command -v tput > /dev/null 2>&1 && tput setaf 1 > /dev/null 2>&1
Answer: I'm a big fan of tput, which many script authors seem to overlook, for generating appropriate terminal escapes. Perhaps my enthusiasm started back in the early '90s, when using idiosyncratic (non-ANSI) terminals, but it still glows bright whenever I run a command in an Emacs buffer or output to a file.
A style point, on which you may disagree: I prefer not to enclose parameter names in braces unless it's required (for transforming the expansion, or to separate from an immediately subsequent word). So [ "$#" -ne 3 ] rather than [ "${#}" -ne 3 ], for example. The braces aren't wrong, but do feel unidiomatic there.
A simple improvement: you can save a lot of doubled-backslashes in the printf format strings by using single quotes rather than double quotes (this also protects you against accidentally expanding variables into the format string). I also find it easier to match arguments to format if I make judicious use of line continuation. Example:
printf '%s%b()\n\n\t%s%b%s\n\nexit code = %b%s\n' \
"$bold_yellow" "$2" \
"$nocolor" "$3" "$bold_red" \
"$1" "$nocolor" >&2
The definition of is_number isn't shown in your example function, but something similar is written in full here:
# check if the first argument is a number
# if not, print out an internal error without colors
if ! [ "${1}" -eq "${1}" ] 2> /dev/null
then
I think it makes sense to have is_number for that test; it would certainly reduce the need for comments:
if ! is_number "$1"
then # print out an internal error without colors
It probably makes sense to redirect output to the error stream once at the beginning of print_error_and_exit, since we won't be generating any ordinary output from this point onwards:
print_error_and_exit()
{
exec >&2
That saves the tedium of adding a redirect to every output command, and neatly avoids the easy mistake of missing one.
When we test that tput succeeds, perhaps we should tput sgr0 without redirecting to null, so that the output device is in a known state (thus killing two birds with one stone)?
If tput works at all, then when its argument can't be converted, its output is empty, which is fine. If we want to be really robust when tput doesn't even exist, then we could test like this:
tput sgr0 2>/dev/null || alias tput=true
Then we don't need a separate branch for the non-coloured output (we'll just output empty strings in the formatting positions). That won't quite work as I've written it, unless we specifically export the alias to sub-shells, but we can more conveniently just use a variable:
tput=tput
$tput sgr0 2>/dev/null || tput=true
We might also choose to test that the output is a tty (test -t 1, if we've done the exec I suggested, else test -t 2).
$bold_yellow saves no typing compared to $bold$yellow, and it's only used once anyway, so it can easily be eliminated. Same for $bold$red.
Modified code
Applying my suggestions, we get:
is_number()
{
test "$1" -eq "$1" 2>/dev/null
}
print_error_and_exit()
# expected arguments:
# $1 = exit code
# $2 = error origin (usually function name)
# $3 = error message
{
# all output to error stream
exec >&2
if [ "$#" -ne 3 ]
then # wrong argument count - internal error
printf 'print_error_and_exit() internal error\n\n\tWrong number of arguments has been passed: %b!\n\tExpected the following 3:\n\t\t\$1 - exit code\n\t\t\$2 - error origin\n\t\t\$3 - error message\n\nexit code = 1\n' \
"$#"
exit 1
fi
if ! is_number "$1"
then # wrong argument type - internal error
printf 'print_error_and_exit() internal error\n\n\tThe first argument is not a number: %b!\n\tExpected an exit code from the script.\n\nexit code = 1\n' \
"$1"
exit 1
fi
# if tput doesn't work, then ignore
tput=tput
test -t 1 || tput=true
$tput sgr0 2>/dev/null || tput=true
# colors definitions
readonly bold=$($tput bold)
readonly red=$($tput setaf 1)
readonly yellow=$($tput setaf 3)
readonly nocolor=$($tput sgr0)
# highlight the error origin and the exit code
printf '%s%b()\n\n\t%s%b%s\n\nexit code = %b%s\n' \
"${bold}$yellow" "$2" \
"$nocolor" "$3" "${bold}$red" \
"$1" "$nocolor"
exit "$1"
} | {
"domain": "codereview.stackexchange",
"id": 36130,
"tags": "error-handling, linux, sh, posix"
} |
BinaryGap challenge | Question:
A binary gap within a positive integer N is any maximal sequence of consecutive zeros that is surrounded by ones at both ends in the binary representation of N.
For example, number 9 has binary representation 1001 and contains a binary gap of length 2. The number 529 has binary representation 1000010001 and contains two binary gaps: one of length 4 and one of length 3. The number 20 has binary representation 10100 and contains one binary gap of length 1. The number 15 has binary representation 1111 and has no binary gaps.
Can I get feedback on my code? This is for the Binary Gap problem. I got 100% in correctness, but I would like to know how I can make it better performance-wise. Also, I am not even sure about its complexity and how to improve it.
class Solution {
public int solution(int N) {
// write your code in Java SE 8
int max = 0;
boolean flag = false;
int temp = 0;
//converting number into binary and at the same time checking for max binary gap
while (N != 0) {
if (N%2 == 1) {
flag = true;
if (temp > max) {
max = temp;
}
temp = 0;
}
else {
if (flag) {
temp++;
}
}
N = N/2;
}
return max;
}
}
Answer: Counting bits
When the main task involves bits,
it's good to look for opportunities for bit shifting operations.
For example, you can check if the last bit is 1 with:
if ((num & 1) == 1) {
And you can shift the bits to the right by 1 with:
num >>= 1;
These are more natural in this context than num % 2 == 1 and num /= 2.
And often might perform better too.
Avoid flag variables
When possible, it's good to avoid flag variables.
You are using flag to indicate if you've ever seen a 1-bit.
For each 0-bit, you check if the flag is set.
This is inefficient.
There is a way to avoid this flag.
You can first shift until the first 1-bit.
That is, skip all the trailing zeros.
int work = N;
while (work > 0 && (work & 1) == 0) {
work >>= 1;
}
At this point we have reached a 1, or the end. It's safe to shift one more time, to drop that 1 (if work is already 0, the shift changes nothing).
work >>= 1;
After this, we can start counting zeros,
and reset the count every time we see a 1.
There's no more need for a flag.
Naming
temp is not a great name for a variable that counts zeros.
How about zeros instead?
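Before rewriting, it can also help to pin down the expected outputs. The same trailing-zero-skip algorithm ports directly to Python, which makes for a quick cross-check against a string-based reference on the examples from the problem statement (both functions below are my own illustration, not part of the original task):

```python
def binary_gap(n):
    # skip trailing zeros, then drop the first 1-bit
    while n > 0 and n & 1 == 0:
        n >>= 1
    n >>= 1
    best = zeros = 0
    while n > 0:
        if n & 1 == 0:
            zeros += 1
        else:
            best = max(best, zeros)   # a 1-bit closes the current gap
            zeros = 0
        n >>= 1
    return best

def binary_gap_ref(n):
    # gaps are the runs of zeros strictly between ones
    runs = bin(n)[2:].strip("0").split("1")
    return max(len(r) for r in runs)

for n, expected in [(9, 2), (529, 4), (20, 1), (15, 0)]:
    assert binary_gap(n) == binary_gap_ref(n) == expected
print("all examples match")
```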
Alternative implementation
Putting the above tips together (and a bit more),
this is a bit simpler and shorter:
public int solution(int N) {
int work = N;
while (work > 0 && (work & 1) == 0) {
work >>= 1;
}
work >>= 1;
int max = 0;
int zeros = 0;
while (work > 0) {
if ((work & 1) == 0) {
zeros++;
} else {
max = Math.max(max, zeros);
zeros = 0;
}
work >>= 1;
}
return max;
} | {
"domain": "codereview.stackexchange",
"id": 26203,
"tags": "java, programming-challenge"
} |
What should I consider as an observer to measure the speed of cosmic objects? | Question: I mean, for example, if Earth is the observer, then there might be entire galaxies travelling faster than the speed of light relative to Earth. According to Einstein's relativity this shouldn't be possible, so I want to know what I should consider as an observer when measuring cosmic objects' speeds.
Answer: First about reference frames of value for cosmological observations.
So, the preferred reference frame for cosmological measurements is the comoving one, meaning it is moving along with the average flow of galaxies due to the expansion. In that reference frame one is said to have zero peculiar velocity. We have to vectorially subtract our peculiar velocity, which is mostly due to the galaxy and local cluster moving with respect to the cosmic flow; the total is about 360 km/s. When we do that adjustment vectorially right in the observations, we see the universe as on average homogeneous and isotropic, and it is the reference frame where the CMB is also isotropic.
The comments and answer already explained that in fact galaxies can be, and are, moving away from us faster than light. Galaxies at distances of about 14 billion light years, at a redshift of about z = 1.5, are super-luminal (faster than light), but we still see them. We can see now out to the so-called particle horizon, about 45 billion light years.
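As a side note, that "about 14 billion light years" figure is just the Hubble distance c/H0, the distance at which the Hubble-law recession speed v = H0 d reaches c. A back-of-envelope check in Python (H0 = 70 km/s/Mpc is an assumed round value, not a precise measurement):

```python
c = 299792.458            # speed of light, km/s
H0 = 70.0                 # Hubble constant, km/s per Mpc (assumed round value)

d_hubble_mpc = c / H0     # distance where v = H0 * d reaches c, in Mpc
ly_per_mpc = 3.2616e6     # light years per megaparsec

d_hubble_gly = d_hubble_mpc * ly_per_mpc / 1e9
print(d_hubble_gly)       # roughly 14 (billion light years)
```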
The fact that they are faster than light does not break relativity. The galaxies are not moving that fast, just the space between us is expanding, GROWING, and at distances greater than about 14 billion light years the space is expanding faster than light. There is no problem with that, general relativity explains it and has a pretty good model of it, with many cosmological values measured. As @Cort Ammon said in his comment, this whole thing should conceptually bother you. Nevertheless it is true, and it is fun to read and understand it and try to get convinced it's real. See the wiki articles on the expansion of the universe, and maybe some of the math. Try https://en.m.wikipedia.org/wiki/Metric_expansion_of_space
For a good intro w/o the math. | {
"domain": "physics.stackexchange",
"id": 39048,
"tags": "cosmology, reference-frames, relativity, observers"
} |
Average prime number from array- Javascript | Question: My mission is to solve this:
Write a function that will take an array of numbers, and will return the average of all numbers in the array that are prime numbers.
So far I wrote this:
function average(a){
var arr=[];
var newarr=[];
var i,j,sum,sum1,average,counter,counter1;
counter=counter1=sum=sum1=0;
for(i=0;i<a.length;i++){
for(j=2;j<(Math.floor(a[i]/2));j++){
if((a[i]%j===0) && (a[i]>2)){
arr.push(a[i]);
break;
}
}
}
for(i=0;i<a.length;i++){
sum=sum+a[i];
counter++;
}
for(i=0;i<arr.length;i++){
sum1=sum1+arr[i];
counter1++;
}
average=(sum-sum1)/(counter-counter1);
return average;
}
var a=[88,44,32,30,31,19,74,169,143,109,144,191];
console.log(average(a));
I am allowed to use only: conditions (if), loops, --, ++, %, /, *, -, +, ==, !=, >=, >, <=, <, ||, &&, %=, /=, *=, -=, +=, array.length, array.pop() and concat.
Any suggestions? Feedback on what I wrote? Thank you!
Answer: Single responsibility
Try to organize your programs in a way that every function has a single responsibility. For example checking if a number is a prime or not stands out here as a clear example that should be in its own function:
function isPrime(num) {
var factor;
for (factor = 2; factor < Math.floor(num / 2); factor++) {
if (num % factor == 0) {
return false;
}
}
return true;
}
Simplify the logic
The code implements the following algorithm:
Build an array of non-primes
Compute the sum of all values
Compute the sum of non-primes
Subtract the sum of non-primes from the sum of all numbers, and divide this by the count of non-primes
Consider the simpler alternative:
Compute the sum of primes, and also count them
Return the sum of primes divided by their count
This alternative matches the problem description well too,
so it's easy to understand. And so is the code.
function average(nums) {
var i;
var sum = 0;
var count = 0;
for (i = 0; i < nums.length; i++) {
if (isPrime(nums[i])) {
sum += nums[i];
count++;
}
}
return sum / count;
}
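Putting the two pieces together on the question's sample array (note: this sketch tightens the loop bound to `factor * factor <= num` and adds a `num < 2` guard, since the bound used above lets small cases like 4 slip through as "prime"):

```javascript
function isPrime(num) {
    if (num < 2) {
        return false;  // 0 and 1 are not prime
    }
    // checking divisors up to sqrt(num) is enough, and avoids the
    // boundary problem that misclassifies 4 as prime
    for (var factor = 2; factor * factor <= num; factor++) {
        if (num % factor === 0) {
            return false;
        }
    }
    return true;
}

function average(nums) {
    var sum = 0;
    var count = 0;
    for (var i = 0; i < nums.length; i++) {
        if (isPrime(nums[i])) {
            sum += nums[i];
            count++;
        }
    }
    return sum / count;
}

// primes in the sample are 31, 19, 109 and 191: (31+19+109+191)/4 = 87.5
console.log(average([88, 44, 32, 30, 31, 19, 74, 169, 143, 109, 144, 191]));
```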
Use better names
The code is very hard to read because most of the variable names don't describe well their purpose, for example:
arr stores non-prime numbers -> nonPrimes would describe this
a is an array of numbers -> nums would describe this
newarr... is not even used -> should not be there
And so on! (counter1 and sum1 are poor names too.)
Avoid unnecessary array creation
The code used arr to collect non-primes,
to compute their sum in a next step.
You could compute the sum directly, without building an array for it.
Creating arrays consumes memory, and therefore can be expensive, and can make your program inefficient.
When there is an easy way to avoid it, then avoid it.
Use whitespace more generously
This is hard to read:
for(i=0;i<a.length;i++){
This is much easier to read, and a widely used writing style:
for (i = 0; i < a.length; i++) { | {
"domain": "codereview.stackexchange",
"id": 42377,
"tags": "javascript, array, primes"
} |
$N(\frac{1}{2},2)=3$ for vectors in a Hilbert Space | Question: Came across This question regarding the maximum number of almost orthogonal vectors one can embed in a Hilbert space. They state that $N(\frac{1}{2},2)=3$, and that explicit construction of the vectors using the Bloch sphere shows this. However, I cannot seem to grasp what they mean by this. Their further example of $N(\frac{1}{\sqrt{2}},2)=6$ does make sense to me, as these are simply the eigenvectors of the pauli operators. But how does one show that the number of vectors which meet the following criteria is only 3?
$$\langle V_i|V_i\rangle = 1$$
$$|\langle V_i|V_j\rangle| \leq \epsilon, i \neq j$$
Answer: Here's a very visual way to think about this (I make no claim about it being a rigorous proof). Let
$$
|V_1\rangle=|0\rangle,|V_2\rangle=\frac12|0\rangle+\frac{\sqrt{3}}{2}|1\rangle,|V_3\rangle=\frac12|0\rangle-\frac{\sqrt{3}}{2}|1\rangle.
$$
These each have overlaps of 1/2. Now draw these on the Bloch sphere. They are three equally spaced vectors around a great circle. You cannot push one closer to another because that would increase their overlap.
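Since the amplitudes above are all real, each overlap is just an absolute dot product, and a quick numeric check confirms every pair sits at exactly 1/2:

```python
import math

s = math.sqrt(3) / 2
V = [(1.0, 0.0), (0.5, s), (0.5, -s)]  # |V1>, |V2>, |V3> in the |0>, |1> basis

def overlap(a, b):
    # |<a|b>| for real amplitudes is the absolute dot product
    return abs(a[0] * b[0] + a[1] * b[1])

for i in range(3):
    for j in range(i + 1, 3):
        print(i + 1, j + 1, round(overlap(V[i], V[j]), 12))  # each pair: 0.5
```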
Now, can I add a fourth vector? Whatever vector I add into the sphere, it must make an angle of $\pi/2$ or less with one of the existing vectors, and hence would have overlap $1/\sqrt{2}$ or greater. So, at least for this choice of three vectors, I cannot add a fourth and maintain the value of $\epsilon$.
With this picture in mind, you can probably also convince yourself that these vectors have to be selected this way. $|V_1\rangle$ is arbitrary, I can just orient the view so that it's at the top of the sphere. For $|V_2\rangle$ I've got an arbitrary freedom of rotation about the $|V_1\rangle$ axis, so I just picked the orthogonal component to be real and positive. At that point, my choice of $|V_3\rangle$ was fixed - there was only one possible choice that could have the correct overlap.
If the visual version doesn't do it for you, I'm sure someone will formalise this mathematically... | {
"domain": "quantumcomputing.stackexchange",
"id": 1840,
"tags": "quantum-state, bloch-sphere"
} |
C++ Simple Shared Pointer Implementation
It seems to work, but running it through Valgrind shows that it leaks memory somewhere in my tests, but I don't know where.
Here is my .h file:
#include <stdexcept>
//Try to create a smart pointer? Should be fun.
template <class T>
class rCPtr {
T* ptr;
T** ptrToPtr;
long* refCount;
public:
rCPtr()
{
try
{
ptr = new T;
ptrToPtr = &ptr;
refCount = new long;
*refCount = 1;
}
catch (...)
{
if (ptr)
delete ptr;
if (refCount)
delete refCount;
}
}
explicit rCPtr(T* pT)
{
try
{
ptr = pT;
ptrToPtr = &ptr;
refCount = new long;
*refCount = 1;
}
catch(...)
{
if (ptr)
delete ptr;
if (refCount)
delete refCount;
}
}
~rCPtr()
{
(*refCount)--;
if (*refCount <= 0)
{
delete ptr;
ptr = nullptr;
delete refCount;
refCount = nullptr;
}
}
// I wonder if this is even necessary?
bool exists() const {
return *ptrToPtr; // Will be null if already freed (SCRATCH THAT IT CRASHES)
}
//Copy
rCPtr(const rCPtr& other) : ptr(other.ptr), refCount(other.refCount)
{
(*refCount)++;
}
// If you pass another of same object.
rCPtr& operator=(const rCPtr &right)
{
if (this == &right) {return *this;}
T* leftPtr = ptr;
long* leftCount = refCount;
ptr = right.ptr;
refCount = right.refCount;
(*refCount)++;
(*leftCount)--;
// delete if no more references to the left pointer.
if (*leftCount <= 0)
{
delete leftPtr;
leftPtr = nullptr;
delete leftCount;
leftCount = nullptr;
}
return *this;
}
// This will create a 'new' object (call as name = new var) Make sure the type matches the one used for this object
rCPtr& operator=(const T* right)
{
if (right == ptr) {return *this;}
T* leftPtr = ptr;
long* leftCount = refCount;
ptr = right;
*refCount = 1; // New refCount will always be 1
(*leftCount)--;
if (*leftCount <= 0)
{
delete leftPtr;
leftPtr = nullptr;
delete leftCount;
leftCount = nullptr;
}
}
T* operator->() const
{
if (exists())
return *ptrToPtr;
else return nullptr;
}
T& operator*() const
{
if (exists())
return **ptrToPtr;
// I dont know what else to do here
throw std::out_of_range("Pointer is already deleted");
}
// Gives ref to ptr
// Pls don't try to delete this reference, the class should take care of that
const T& get() const
{
return *ptr; // This reference does not count to refCount
}
// if used in bool expressions if(rCPtr) {I think}
explicit operator bool() const
{
return *ptrToPtr;
}
// returns the number of references
long getRefCount() const
{
return *refCount;
}
// Will attempt to cast the stored pointer into a new type & return it
// You probably should not delete this one either
template <class X>
X* dCast()
{
try
{
// X must be poly morphic or else, the cast will fail and cause an compiler error.
auto casted = dynamic_cast<X*>(*ptrToPtr);
return casted;
}catch(const std::bad_cast& me)
{
return nullptr;
}
}
// Resets current instance, if other instances exist will not delete object
void reset()
{
(*refCount)--;
ptrToPtr = nullptr;
if (*refCount <= 0) // If the amount of references are 0 (negative count may explode ??)
{
delete ptr;
ptr = nullptr;
delete refCount;
refCount = nullptr;
}
}
};
//Notes just finished typing it up. It compiled so that is good
//Valgrind DOES NOT like it though
I will take the chance to say thank you to anyone who answers this
Edit: Thanks to everyone who helped with answering my questions!
(And for putting up with all of them as well)
Answer: Data members
I don't see the point of the T** ptrToPtr member variable. Every instance of *ptrToPtr can be replaced by ptr. Remove that member and you have one less thing to delete.
Why is refCount a pointer to a long? If you make it just a long then you don't need to delete it. I see what's happening. Every instance that has the same pointer needs to have access to the same refCount. Got it.
Now I can see your thinking with ptrToPtr. Remember that pointers only contain addresses. If you copy a pointer, both pointers refer to the exact same data. You don't need to share references to pointers. That's why the member ptrToPtr is not needed. A pointer and a copy of a pointer do just as well pointing to the same data. From now on, consider it removed from the class.
Methods
Default constructor: rCPtr()
The default constructor allocates for a new default-constructed T and increments the refCount to one. Why? There's no data to be stored. There's no reason to call new when the user hasn't asked for anything to be stored. A better version would be
rCPtr() : ptr(nullptr), refCount(nullptr)
{
}
This will also make your exists() method work properly.
Constructor with pointer argument: explicit rCPtr(T* pT)
First, good job using explicit. That's good default for single-argument constructors.
Here, the only line that can possible throw is refCount = new long;. So, just as in the default constructor, there's no need for a try block.
rCPtr(T* pT) : ptr(pT), refCount(pT ? new long(1) : nullptr)
{
}
First, notice that you can combine a new statement with an initialization (new long(1)). As much as possible, create variables with the final values in the same statement that creates them.
There's no need for a try block here. If new long throws an exception (that is, your program tries and fails to allocate space for a single long) then your program (or your whole computer) is about to crash, so there's no reason to try to recover from this.
Notice the different initialization value of the refCount variable. If this instance contains a nullptr, then refCount should be a nullptr like in the default constructor.
NEW: Many commenters disagree with me about not checking for new long failing, so here's a version that handles that exception:
rCPtr(T* pT) try : ptr(pT), refCount(pT ? new long(1) : nullptr)
{
}
catch (...)
{
delete pT;
throw;
}
This class is expected to handle the memory that the pointer points to, so the catch block deletes the pointer. Otherwise, the user would have to surround every rCPtr constructor with a try-catch block, which defeats the purpose of the class. You could also consider throwing a different exception like a std::runtime_error to inform other parts of the program what specifically failed.
Destructor: ~rCPtr()
This method is exactly the same as reset(). So, let's reuse the method.
~rCPtr()
{
reset();
}
You'll want to reuse reset() as much as possible because it contains the most important code: that which releases resources when the last rCPtr is destructed. It's so important that you should only write it once and get that one time right.
Check for non-null pointer: bool exists() const
This function can just return ptr. A nullptr will convert to false, anything else to true. This works correctly with the fixed constructors.
Copy constructor: rCPtr(const rCPtr& other)
Your copy constructor is fine. Although, the member ptrToPtr was not copied or restored with ptrToPtr = &ptr. So, copied rCPtrs have uninitialized pointers that crash the program (or, worse, return garbage data) upon dereference. But, the ptrToPtr is no more, so that's not a problem any more.
Assignment operator with rCPtr argument: rCPtr& operator=(const rCPtr &right)
Consider this alternative to the assignment operator:
rCPtr& operator=(const rCPtr &right)
{
if(ptr == right.ptr) { return *this; }
reset();
ptr = right.ptr;
refCount = right.refCount;
(*refCount)++;
return *this;
}
You don't care the previous state of the rCPtr instance before the assignment, so just call reset() to release the previous pointer. If the pointer is the same as the one in the argument, the pointer will not be deleted since the argument also contains a reference to it, so the reference count cannot reach zero.
Assignment operator with a pointer: rCPtr& operator=(const T* right)
Convince yourself that this alternative to the assignment operator has the same behavior as your code:
rCPtr& operator=(const T* right)
{
if(right == ptr) { return *this; }
reset();
ptr = right;
refCount = new long(1);
return *this;
}
As in the previous assignment operator, you don't care the previous state of the rCPtr instance before the assignment, so just call reset() to release the previous pointer.
One other problem: I don't think this should compile. The method assigns a const T* to a T*. This should not be allowed because the data the argument points to is const. Are you compiling with -fpermissive?
Also, I added the missing return *this;.
Getting the underlying data: const T& get() const
This method's signature doesn't match the get() method in standard C++ smart pointers. The get() method in std::shared_ptr and std::unique_ptr return a raw pointer: T*. So, your method should be
T* get() const
{
return ptr;
}
NEW: Returned pointer should be non-const.
Dereferencing: T* operator->() const
In your code now, if the pointer is not nullptr return the pointer, otherwise, return a nullptr. This is equivalent to just returning the pointer return ptr;. Or, return get();. It's good to reuse methods so that work happens in very few places that are easier to keep track of.
Dereferencing: T& operator*() const
~~First, this should not compile. The method returns a non-const reference from a const method. This would allow the data pointed to by the pointer to be modified from a const rCPtr. This does not seem right. There should be two methods for dereferencing the pointer: const T& operator*() const and T& operator*().~~ What you had is correct. This method being const is the same as dereferencing a const pointer. The pointer cannot change, but the data it points to certainly can unless that data is also const.
So, after replacing **ptrToPtr with *ptr, we need to consider what happens when exists() returns false. Right now, your code throws an exception. This is a valid choice. The other choice that is taken by std::unique_ptr and std::shared_ptr is to just call *ptr regardless and crash the program if it is a nullptr. This is also valid and puts the user on notice that they are responsible for not doing that. Your way is safer, the other way can be faster. Either choice is fine.
Checking for non-null pointer: explicit operator bool() const
~~Two~~ One thing. ~~First, explicit does not belong here as it is only used for constructors with one argument. Second,~~ You can call return exists(); here to reuse that method.
NEW: I just learned that the explicit keyword can be used with conversion operators to prevent implicit conversions.
Get reference count: long getRefCount() const
This method won't work after calling reset() since refCount is deleted. We need to check for exists() before dereferencing a nullptr.
long getRefCount() const
{
return exists() ? *refCount : 0;
}
Casting: template <class X> X* dCast()
Interesting method. But, casting to a pointer returns nullptr if the cast is unsuccessful. std::bad_cast is only thrown if the cast is to a reference type. So this method can be reduced to
template <class X>
X* dCast()
{
return dynamic_cast<X*>(ptr);
}
Deleting data: void reset()
The all-important method. After removing the ptrToPtr statement, the method needs to get rid of any references to the pointed-to data. Even if there are other references, this instance needs to stop referring to them. So, the pointers are now set to nullptr unconditionally.
void reset()
{
if( ! exists()) { return; }
(*refCount)--;
if (*refCount <= 0) // If the amount of references are 0 (negative count may explode ??)
{
delete ptr;
delete refCount;
}
ptr = nullptr;
refCount = nullptr;
}
With some of the changes above, we have to make sure that refCount is not nullptr before checking its dereferenced value. One consistency check for this class is to make sure that either both ptr and refCount are nullptr or neither is.
Your comment tells me that you are afraid that a negative refCount means that something has gone wrong. You are right about that. Studying and testing your code will be necessary to avoid this situation, as it probably means that delete will soon be called on already deleted data.
Next things to think about
This code works for single-threaded programs, but what happens if two instances of rCPtr pointing to the same data (*refCount == 2) are in different threads of a program and reset() is called on both at the same time? That is, what happens when each line of reset() is called simultaneously in both instances in parallel? Shared pointers require a std::mutex at the beginning of the reset() function to make sure that multithreaded operation does not cause problems (there are other ways as well). | {
"domain": "codereview.stackexchange",
"id": 41117,
"tags": "c++, beginner, c++14, pointers"
} |
Can we say that Zookeeper updates are ACD (ACID without the I)? | Question: Zookeeper is based on the Zab (which is slightly different to Paxos) system.
We can do Atomic locks on top of a Zookeeper cluster.
Zookeeper provides eventual consistency.
Zookeeper provides durability.
(I'm aware that Zookeeper sacrifices Availability for Consistency and Partition Tolerance - making it CP - but I'm not asking that question here. )
My question is: Can we say that Zookeeper updates are ACD (ACID without the I)?
Answer: Atomicity is not about being able to implement "atomic locks" or not. Consistency as used in the CAP theorem (i.e. the C in CAP) is not the same thing as Consistency in ACID, and Consistency in CAP is way stronger than eventual consistency. ZooKeeper is an ACID system and a CP system, albeit the ACID "C" is not doing much work here.
Atomicity means a "transaction" either completes entirely or not at all. It's mostly orthogonal to concurrency. If you had a system that was Atomic but not Isolated, transactions could see the intermediate results of other in-progress transactions regardless of whether those other transactions got rolled back or not. Meanwhile, a non-Atomic but Isolated system would not see any intermediate results of in-progress transactions, but failed transactions need not rollback their work, so the detritus of failed transactions would be visible to all. Atomicity alone would not allow you to implement atomic locks; Isolation is really the more important property for that though technically Atomicity is also required.
Consistency in ACID means maintaining integrity constraints. ZooKeeper provides few such constraints, so this is accomplished more or less trivially. Consistency in CAP refers to linearizability which is different than but related to Isolation. Isolation refers to serializability. Traditionally, (one-copy) serializability was effectively strict one-copy serializability which is the combination of serializability and linearizability, but serializability and linearizability are actually distinct (though similar looking) notions, neither implying the other. ZooKeeper provides strict one-copy serializability (if you sync before every read), and thus it is both Isolated and Consistent in the CAP sense. | {
"domain": "cs.stackexchange",
"id": 6573,
"tags": "distributed-systems"
} |
Using Real Robot Data in Warehouse Scene Planning | Question:
Hello all,
First off, I am having a successful time working with all the ROS tutorials for my robot. It's great to have all this support from an online community :)
So far for my robot, I have created a URDF file, ran the planning wizard and created a launch file to be used in Warehouse Viewer for my robot. I have done most of the tutorials in Warehouse Viewer and now I would like to use my real-time robot data as the inputs for Warehouse Viewer.
However, I do not know where to start from here. The tutorial informs me of this warning:
Warning: Do not check "Use Robot Data" while the Execute Left Trajectory and Execute Right Trajectory services are not being advertised. Make sure they are running by invoking rosservice. Otherwise, the viewer will hang on startup while waiting for these (or any other) services.
As for my hardware, I am publishing motor encoder data (position and velocity) under topics /The_H20_Player/drrobot_motor_left and /The_H20_Player/drrobot_motor_right at 10Hz.
I would like to somehow use this data as the inputs for Warehouse Viewer. Does anyone know how I can do this? Moreover, how should I structure the encoder data? Right now I just have it as an array of ints.
Thank you to everyone at answers.ros.org!
Kind Regards,
Martin
Originally posted by MartinW on ROS Answers with karma: 464 on 2013-01-22
Post score: 1
Answer:
Okay first you have to fix how you publish that encoder data...
The ROS standard is that your robot driver code should publish the encoder data to the /joint_states topic.
At a minimum you need to have the position data in radians. Look at some other robot (simulation) models to see how they work.
This joint states topic is a vector of joints, with names matching the joints in your URDF file.
Publishing this topic correctly is the first thing you should test with a real robot.
It sounds like your data is currently in a different format/topic. If you can't, or don't want to modify the driver code to publish the joint states correctly, you could create a node that reads the drrobot_motor_* topics and re-publishes that data onto /joint_states.
You can look in the code for other robots, or google "ros joint state publisher" to find the code you need.
Next, you need to get Rviz loading and display your encoder data. If you are publishing a joint states topic with joint names matching your URDF, then Rviz will display the current state of your real robot!
Then, you can look at the Warehouse...
Download the motoman stack from ROS industrial. The SIA20D_Mesh_arm_navigation directories contain 2 launch files which show you the parameters that need to be changed for simulation VS. real robot.
Or, Google this forum for your same question, because it's been posted on here too!
Lastly, there is a useful node called the Joint State Publisher, which is a GUI for publishing fake joint angles. You can use this to read in your robot's URDF, and publish a fake /joint_states topic so you can test the Warehouse with "real robot data" before you fix your driver.
Use the top 4 lines from this launch file:
http://kaist-ros-pkg.googlecode.com/svn/trunk/arm_kinematics_tools/launch/fake_jointstate_pub_kinematics_solver.launch
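In case that link goes stale, the four parts being referred to look roughly like this. All package paths and file names below are placeholders, and node type names have varied between ROS releases, so treat it as a sketch rather than a drop-in file:

```xml
<launch>
  <!-- 1. load the URDF onto the parameter server (path is a placeholder) -->
  <param name="robot_description" textfile="$(find my_robot)/urdf/my_robot.urdf" />

  <!-- 2. fake joint angles from a slider GUI, published on /joint_states -->
  <node name="joint_state_publisher" pkg="joint_state_publisher" type="joint_state_publisher">
    <param name="use_gui" value="true" />
  </node>

  <!-- 3. turn /joint_states into tf transforms
       (older releases name the node type "state_publisher") -->
  <node name="robot_state_publisher" pkg="robot_state_publisher" type="robot_state_publisher" />

  <!-- 4. rviz with a saved configuration (config path is a placeholder) -->
  <node name="rviz" pkg="rviz" type="rviz" args="-d $(find my_robot)/config/my_robot.rviz" />
</launch>
```

For the real robot, part 2 is replaced by your hardware driver publishing the actual encoder values on /joint_states.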
Originally posted by dbworth with karma: 1103 on 2013-01-23
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by MartinW on 2013-01-23:
Thanks for the answer, dbworth! What are some other robot (simulation) models I can look at to see how they work? I can definitely augment the published data however I like so I will try to make it a proper /joint_states topic
Comment by MartinW on 2013-01-23:
Excellent, so I was able to easily transform the joint angles into joint_states msgs, so now I just have to ensure these are in radians and I'm all set to import this into warehouse :)
Comment by MartinW on 2013-01-23:
Hey dbworth, so I am publishing the data under joint_states, but how do I run rviz and have my urdf loaded. From the tutorials I made my launch file look like this: http://imgur.com/ch65ogl&JP0BfdG#1 But I get an error in rviz here: http://imgur.com/ch65ogl&JP0BfdG#0 Any ideas? Thanks for the help!
Comment by dbworth on 2013-01-24:
Other robots? Use Google and look here: http://www.ros.org/wiki/Robots . Find a robot (mobile or arm) this simple and similar to your own. For a manipulator, some examples are Clam or Katana. For a mobile robot, there's TurtleBot.
Comment by dbworth on 2013-01-24:
Testing your robot model in Rviz: write a launch file with the top 4 parts from the launch file I linked. It loads the URDF, loads the Robot State Publisher, loads a fake Joint State Publisher with GUI, and loads Rviz with a saved configuration. You will need need to save your own config from Rviz
Comment by dbworth on 2013-01-24:
Viewing your real robot in Rviz: copy the above new launch file, and remove the fake joint state publisher. You will then need to include the node (your hardware driver) that publishes the real joint states (encoders).
Comment by MartinW on 2013-01-24:
Ahh, okay! So I've got the fake joint state publisher to open with my robot, but when I take that part out and try to use the robot drivers I don't get the transformation. Here is my launch file http://i.imgur.com/poxM6eS.png, "H20_player" publishes the encoder data. How do I connect this with rviz
Comment by MartinW on 2013-01-24:
The "H20_player" publishes the /joint_states_left_arm and /joint_states_right_arm (sensor_msgs/JointState) but I am unsure how to make rviz subscribe to this in the launch file. If I'm thinking about this correctly haha thanks for all the help :)
Comment by MartinW on 2013-01-24:
Ahh I figured it out, I made it so my program just published /joint_states that had all the joint states in that one msg. Works in rviz now! It just needs some tweak in my urdf to match the simu movements a little better with the real world movements. And then get this into the warehouse planner! | {
"domain": "robotics.stackexchange",
"id": 12538,
"tags": "urdf, ros-fuerte"
} |
Why aren't (domestic) kettles insulated? | Question: In my experience of buying and using kettles, I have come across none which are insulated.
The obvious reasons why it would be beneficial are that heating time would be reduced and, similarly, that less power, hence money, would be required to heat an arbitrary volume of water. Some kettles become very hot on the outside, so safety is also a factor!
Is there a reason why this is so, apart from the costs involved? I.e. cost of manufacture vs. operating cost over the product lifetime.
Answer: Most kettles are silver to minimize heat loss through radiation. (They also have small exit holes at the top to minimize heat loss of steam because conversion of liquid water to steam requires latent heat)
I expect the reason that there is usually no thermal insulation is that kettles heat water very quickly, and because the air outside the kettle is a poor conductor of heat, the amount of heat lost by conduction/convection is probably minimal compared to the amount of energy that goes into heating the liquid to make it boil.
By contrast, a hot water tank in a central heating system stores hot water for long periods of time, so it makes sense to carefully insulate hot water tanks.
In summary, I think one way to think about this is that the air outside the kettle is good enough thermal insulation for the short time that the kettle boils the water. | {
"domain": "physics.stackexchange",
"id": 78730,
"tags": "thermodynamics, everyday-life, power, thermal-conductivity"
} |
How does randomization avoid entering infinite loops in the vacuum cleaner problem? | Question: Suppose we have a vacuum cleaner operating in a $1 \times 2$ rectangle consisting of locations $A$ and $B$. The cleaner's actions are Suck, Left, and Right and it can't go out of the rectangle and the squares are either empty or dirty. I know this is an amateur question but how does randomization (for instance flipping a fair coin) avoid entering the infinite loop? Aren't we entering such a loop If the result of the toss is heads in odd tosses and tails in even tosses?
This is the text from the book "Artificial Intelligence: A Modern Approach" by Russell and Norvig
We can see a similar problem arising in the vacuum world. Suppose that a simple reflex vacuum agent is deprived of its location sensor and has only a dirt sensor. Such an agent has just two possible percepts: [Dirty] and [Clean]. It can Suck in response to [Dirty]; what should it do in response to [Clean]? Moving Left fails (forever) if it happens to start in square
A, and moving Right fails (forever) if it happens to start in square B. Infinite loops are often unavoidable for simple reflex agents operating in partially observable environments. Escape from infinite loops is possible if the agent can randomize its actions. For example, if the vacuum agent perceives [Clean], it might flip a coin to choose between Right and Left. It is easy to show that the agent will reach the other square in an average of two steps. Then, if that square is dirty, the agent will clean it and the task will be complete. Hence, a randomized simple reflex agent might outperform a deterministic simple reflex agent.
And this is the agent program from the same source:
function REFLEX-VACUUM-AGENT([location,status]) returns an action
if status = Dirty then return Suck
else if location = A then return Right
else if location = B then return Left
Answer:
how does randomization (for instance flipping a fair coin) avoid entering the infinite loop?
The coin is flipped on each occasion that a decision is required (as opposed to once in order to define the actions that will be taken from that point onwards). Which means the agent can make a different decision each time it encounters the same (faulty detected) state.
This prevents tight loops in a single location that the quote is discussing. The "infinite loop" that is being broken is repeatedly and deterministically making a movement decision that results in no effective movement, thus leaving the agent in exactly the same state as before for all remaining time steps.
Aren't we entering such a loop If the result of the toss is heads in odd tosses and tails in even tosses?
It is not important that the agent might repeat ineffective movements multiple times. Eventually the agent will make a correct decision - this is very likely after only a few failed attempts at most. It is guaranteed in the limit of infinite time steps, unlike the situation where no randomness is added. | {
"domain": "ai.stackexchange",
"id": 2984,
"tags": "intelligent-agent, norvig-russell, randomness, simple-reflex-agents"
} |
Error using decision tree regressor | Question: I'm new to data science , while i'm implementing decision tree. I'm facing the following error. Where i went wrong;
Sample data in csv is:
x=dataset.iloc[:,:-1].values
y=dataset.iloc[:,:2].values
from sklearn.preprocessing import LabelEncoder,OneHotEncoder
from sklearn.compose import ColumnTransformer
labelencoder_x=LabelEncoder()
x[:,0]=labelencoder_x.fit_transform(x[:,0])
onehotencoder=OneHotEncoder()
columntransformer=ColumnTransformer([('dummy cols',onehotencoder,[0])],remainder='passthrough')
x=columntransformer.fit_transform(x)
from sklearn.model_selection import train_test_split
x_train,x_test,y_train,y_test=train_test_split(x,y,test_size=0.2,random_state=1)
from sklearn.tree import DecisionTreeRegressor
regressor=DecisionTreeRegressor(random_state=0)
regressor.fit(x,y) #Error occurance
Error:
'''
ValueError: could not convert string to float: 'Business Analyst'
'''
Answer: The problem is with the y value.
x=dataset.iloc[:,:-1].values -> selecting all columns but last #this one is fine
y=dataset.iloc[:,:2].values -> selecting all columns till 2nd #this one is wrong
Change this to :
y=dataset.iloc[:,2].values #selecting only 2nd column. | {
"domain": "datascience.stackexchange",
"id": 5929,
"tags": "decision-trees, machine-learning-model, data-science-model"
} |
Why "light cones" have different shapes near black holes? | Question: There is theory that light cone shape does not depend on the reference frame in which it is viewed. So why we draw light cones near black hole differently?
I thought that if I am observing (from the Earth) how light travels from a black hole's neighbourhood, I will always measure the same speed of light.
Answer:
There is theory that light cone shape does not depend on the reference frame in which it is viewed. So why we draw light cones near black hole differently?
In general relativity, frames of reference are local, not global. Each of the light cones in your diagram corresponds to a certain local frame of reference. An observer using that frame of reference would draw his/her own light cone as the undistorted one, and would draw the other ones as distorted.
In GR, an observer in one local region of space has no unambiguous way of defining the speed of a distant object. Therefore there is no unambiguous way to say whether the speed of light is wrong, if the ray of light we're talking about is far away from us. | {
"domain": "physics.stackexchange",
"id": 15105,
"tags": "general-relativity, black-holes, metric-tensor, causality, event-horizon"
} |
How would you calculate final velocity for the simple electric train experiment? | Question: Alright, so basically I have an experiment to make, which is similar to the simple train video on youtube. I want to test different voltages & gauges of copper wire to see the changes in velocity. I want to be able to hypothesize the final velocities of each of the different experiments, so looking at the video, how would you be able to calculate the final velocity for the battery? I am a grade 12 student from a high school in Ontario.
https://www.youtube.com/watch?v=J9b0J29OzAU&t=24s
Here is the video, thank you.
Answer: btw it's not technically a train unless there is more than one towed car.
I think you will find that the final velocity of the car is proportional to the strength of the solenoid B field. If you look up solenoids you will find this is given by
$$v_f\propto B=\mu_0 \frac{NV}{lR}$$
$v_f$ final velocity
$B$ solenoidal B field
$\mu_0$ is the permeability of space, which equals $4\pi\times10^{-7}$ Tm/A
$N$ is the number of turns of the solenoid from one end of the car to the other
$l$ is the length of the part of wire from one end of the car to the other
$V$ is the battery voltage
$R$ is the wire resistance
Any changes to friction or strength of the car magnets, or of course gravitational gradient, will also alter the resulting velocity.
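As a rough numeric sketch of that proportionality, the formula above can be evaluated for two hypothetical wire gauges. All numbers below are made-up assumptions for illustration, not measurements of the video's setup:

```python
import math

mu0 = 4 * math.pi * 1e-7  # permeability of free space, T*m/A


def solenoid_B(N, V, l, R):
    """Evaluate B = mu0 * N * V / (l * R) from the answer's formula."""
    return mu0 * N * V / (l * R)


# Hypothetical values: N turns spanning the car, battery voltage V (volts),
# span length l (metres), and wire resistance R (ohms) for two gauges.
B_thin = solenoid_B(N=100, V=1.5, l=0.05, R=0.8)   # thinner wire: higher R
B_thick = solenoid_B(N=100, V=1.5, l=0.05, R=0.3)  # thicker wire: lower R

# Since v_f is proportional to B, the thicker (lower-resistance) wire
# predicts a faster car, all else being equal.
print(B_thin, B_thick)
```

Comparing the two outputs gives the predicted ordering of final velocities; pinning down the actual proportionality constant would still require at least one real measurement.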
That should get you started. | {
"domain": "physics.stackexchange",
"id": 52704,
"tags": "electromagnetism, propulsion"
} |
Why can't we use the line element to distinguish coordinate from gravitational singularities? | Question: I am a bit confused as to why we can't use the line element to identify coordinate from gravitational singularities. My question stems from learning about the Schwarzschild Metric and the singularity present at the Schwarzschild Radius $R_s$ where we have $ds^2 = (1-\frac{R_s}{r})dt^2 -(1-\frac{R_s}{r})^{-1}dr^2 +r^2d\Omega^2$ so then $ds^2$ goes to negative infinity as $r$ approaches $R_s$.
Because $ds^2$ is a scalar I would assume it is invariant under coordinate transformations so we should find that even if we switch out of Schwarzschild coordinates we should still get that $ds^2$ goes to negative infinity at $R_s$ although I have of course seen this is not the case, though I'm not sure exactly why. More fundamentally, I assume that $(1-\frac{R_s}{r})^{-\frac{1}{2}}dr$ can be used to find the radial distance between two events and in this case if one event is outside of the Schwarzschild Radius and the other is at the Schwarzschild Radius this tells us that the radial distance between them is infinite. Because this is a physical measurement, presumably it should be unaffected by what our choice of coordinates are, otherwise the metric tensor and therefore the line element would not really serve as the metric of our space and would not measure distances on our manifold which also seems like the wrong conclusion.
Note: I am specifically talking about a case where the mass producing this curvature is within the Schwarzschild Radius so the Schwarzschild solution still holds at the radius. Also, I know a bit about manifolds so I can somewhat follow along with the formalism of differential geometry but I am by no means an expert so I would greatly appreciate it if you can explain a bit of the differential geometry you use if that is the best way to answer the question. Thanks!
Answer: There is a delicate interplay of factors going on here, but first: $ds^2$ is not a scalar, it is a tensor. Specifically, $ds^2$ is just bad notation for the coordinate-invariant metric $g$ which may be written in coordinates/components as $g=g_{\mu\nu}dx^\mu\otimes dx^\nu$.
The singularity at the Schwarzschild radius you have pointed out is, in fact, a coordinate singularity. This should perhaps not be surprising when using the notation I have written above for the metric tensor -- what you have identified is a singularity in one of the components of $g_{\mu\nu}$, which is itself a coordinate-dependent quantity.
To identify intrinsic singularities, we need to use scalars, as you point out. The most common tool for doing this is the Riemann curvature and its derived invariants. For example, the Ricci scalar is, as its name would imply, a scalar. Hence if we were to compute the Ricci scalar and found it to have a singularity, there would be a true intrinsic singularity in the spacetime.
But the Ricci scalar is only one quantity: while finding a singularity in the Ricci scalar is good enough to establish the existence of a singularity, the converse does not hold; the lack of a singularity in the Ricci scalar does not mean there isn't a singularity in the spacetime. Generally, we need all the curvature invariants. For example, we could build
$$
R_{\alpha\beta}R^{\alpha\beta}
$$
just to make something up. This is clearly a scalar quantity, and so we would need to check this also for more singularities to see if any were missed when we checked the Ricci scalar. There are infinitely many such invariants, but for the most common examples usually just checking the Ricci scalar is good enough.
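To make the coordinate-versus-intrinsic distinction concrete, here is a small numeric sketch. It assumes the standard textbook result that the Kretschmann scalar $R_{\mu\nu\alpha\beta}R^{\mu\nu\alpha\beta}$ of the Schwarzschild metric equals $48M^2/r^6 = 12R_s^2/r^6$ in geometric units: the coordinate-dependent component $g_{rr}$ blows up as $r \to R_s$, while this curvature invariant stays finite there and diverges only at $r = 0$.

```python
# Coordinate vs. intrinsic singularity in Schwarzschild coordinates
# (geometric units, with the Schwarzschild radius set to Rs = 1).
Rs = 1.0


def g_rr(r):
    """The radial metric component (1 - Rs/r)^(-1)."""
    return 1.0 / (1.0 - Rs / r)


def kretschmann(r):
    """Standard result K = 12 * Rs**2 / r**6 (assumed, not derived here)."""
    return 12.0 * Rs**2 / r**6


for r in (2.0, 1.001, 1.000001):
    print(f"r = {r}: g_rr = {g_rr(r):.4g}, K = {kretschmann(r):.4g}")
# g_rr grows without bound as r -> Rs, but K smoothly approaches 12/Rs**4:
# the r = Rs singularity is an artifact of the coordinates, not the geometry.
```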
I recommend looking in Carroll's GR book (or the nearly identical online notes). It is a generally good book on GR and a discussion of coordinate vs. intrinsic singularities can be found within it. | {
"domain": "physics.stackexchange",
"id": 73722,
"tags": "general-relativity, black-holes, metric-tensor, coordinate-systems, singularities"
} |
MVC4 Routes, using Default | Question: Should I leave the Default Route in if it's not adding any benefit?
I have a RouteConfig.cs file that looks like this:
public static void RegisterRoutes(RouteCollection routes)
{
routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
routes.MapRoute(
name: "Dashboard",
url: "{controller}/{action}",
defaults: new { controller = "JobDashboard", action = "Index" }
);
routes.MapRoute(
name: "Default",
url: "{controller}/{action}/{id}",
defaults: new { controller = "JobDashboard", action = "Index", enumDigit = UrlParameter.Optional }
);
routes.MapRoute(
name: "Login",
url: "{controller}/{action}",
defaults: new { controller = "Account", action = "Login" }
);
}
If I only have the Default and Login Routes and the user goes to the root of the site (eg www.sitename.com), then the user always goes to the Login page. I don't want them going to the Login page after they've logged in, and after a little bit of digging I discovered that MVC4 always sorts Route Order Importance by Custom Route first, and then the Default Route.
I created the Dashboard route, and everything is working fine. I then took out the Default route because it didn't seem needed. Nothing seems to be affected by removing the Default Route, and that leads me to the question, should I leave the Default Route in there?
I seem to recall that having a Default Route is good practice, but if it's not adding anything to the solution, is there a good reason to keep it around?
Answer: I am thinking that you would want to keep that default route in there so that, if something happens to your custom routes, there is still something to route to.
Defaults are there for a reason, don't get rid of them | {
"domain": "codereview.stackexchange",
"id": 5167,
"tags": "c#, asp.net-mvc-4, url-routing"
} |
Probability more than 1 when integrating the Electron density in Density functional theory | Question: The electron density used in density functional theory for a system of $N$ electrons with wavefunction $\psi$ is defined as
$$\rho(r)=N\int \Psi^*(r,r_2,\dots r_N)\Psi(r,r_2,\dots r_N) d^3r_2\dots d^3r_N$$
The interpretation of this is given as the probability of finding one of the $N$ electrons in the volume element $d^3r$.
The following property also holds: $$\int \rho(r)d^3r=N$$
I do not understand this, if $\rho(r)$ is the probability density with aforementioned interpretation, its integral over all space should simply mean: "The probability of finding an electron in all space" and that should be just $1$, not $N$. How can the probability of finding an electron at any point in all space be greater than 1?
Source: A Chemist’s Guide to Density Functional Theory. Second Edition
Wolfram Koch, Max C. Holthausen
Answer: The sum of the probabilities of all mutually exclusive events must equal one. However, just because you've found an electron somewhere, it does not mean that you can't find another somewhere else (except for $N=1$).
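A discretized sketch of this bookkeeping, assuming the simplest case of two non-interacting electrons occupying the lowest two orbitals of a 1D box (so the density is just the sum of the two orbital densities):

```python
import numpy as np

# Two non-interacting electrons in a 1D box of length L.
L = 1.0
x = np.linspace(0.0, L, 2001)
dx = x[1] - x[0]

# Normalized particle-in-a-box orbitals: each |phi_n|^2 integrates to 1.
phi1 = np.sqrt(2.0 / L) * np.sin(1 * np.pi * x / L)
phi2 = np.sqrt(2.0 / L) * np.sin(2 * np.pi * x / L)

# The density is the sum of the occupied orbital densities.
rho = phi1**2 + phi2**2

N = np.sum(rho) * dx  # crude quadrature of rho over the whole box
print(N)              # ~2.0: rho integrates to the particle number, not to 1
```

Each normalized orbital contributes 1 to the integral, so the total is the electron count N; dividing ρ by N would recover a probability density that integrates to 1.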
Reference: A Primer in Density Functional Theory by Fiolhais, C., Nogueira, F., & Marques, M. A. (Eds.), Springer, chapter 1.2. Especially the discussion below equation (1.21). | {
"domain": "physics.stackexchange",
"id": 86452,
"tags": "quantum-mechanics, wavefunction, probability, identical-particles, density-functional-theory"
} |
Superposition theorem | Question: Is superposition theorem applicable for circuits having semiconductor components like diodes, transistors, etc.?
Answer: Superposition theorem is applicable only to linear and bilateral circuits. Diodes and transistors are often neither bilateral nor linear. | {
"domain": "physics.stackexchange",
"id": 11835,
"tags": "electricity, electric-circuits, superposition, linear-systems"
} |
Why is Artemisinin Compound Therapy preferrable over Artemisinin therapy with regards to drug resistance? | Question: How does adding other anti-malarial drugs to artemisinin, to obtain a "combined" therapy result in reducing the frequency of emergence of resistance?
And it is given in the form of artemisinin combination therapy,
combined with another anti-malarial drug, with the goal of reducing
the frequency of emergence of resistance. (Cf source).
Answer: The idea is rather simple. When an antibiotic is used, it becomes a driver to select for a mutant that has resistance towards the antibiotic. Let's say the probability of that happening is P1 (say 1 in 10, i.e. 0.1).
When you use two antibiotics with different modes of attack, the probability of acquiring resistance to both is P1 x P2. (Let's say P1 = 0.1 and P2 = 0.15.) So P1 x P2 = 0.1 x 0.15 = 0.015
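The multiplication rule above, sketched with the answer's illustrative numbers and the assumption that resistance to each drug arises independently:

```python
import random

p_drug_a = 0.10  # illustrative chance of resistance emerging against drug A
p_drug_b = 0.15  # illustrative chance of resistance emerging against drug B

# Independent events: probability of simultaneous resistance to both drugs.
p_both = p_drug_a * p_drug_b
print(p_both)  # 0.015 -- much smaller than either single-drug probability

# The same result from a quick Monte Carlo check.
random.seed(0)
trials = 100_000
both = sum(
    (random.random() < p_drug_a) and (random.random() < p_drug_b)
    for _ in range(trials)
)
print(both / trials)  # ~0.015
```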
Hence the probability of becoming resistant to both antibiotics at the same time is very much lower. And as the antibiotics are always used together, the time a bug needs to develop resistance to the therapy becomes longer. The duration which you can use the drug is increased. | {
"domain": "biology.stackexchange",
"id": 8744,
"tags": "malaria"
} |
rtabmap process crashing, [rtabmap_ros with realsense ZR300] | Question:
Hi,
I am trying to use the rtabmap_ros package with a Realsense ZR300 camera. After roslaunch, one of the rtabmap processes crashes and the GUI visualization is empty. I have built both realsense_camera and rtabmap_ros from source.
Procedure I followed:
roslaunch realsense_camera zr300_nodelet_default.launch
roslaunch rtabmap_ros rgbd_mapping.launch
I have added the screenshot of the error. Looking for suggestions to solve the error.
Thanks.
Errors:
Debug info :
rqt_graph:
roswtf:
Loaded plugin tf.tfwtf
No package or stack in context
================================================================================
Static checks summary:
No errors or warnings
================================================================================
Beginning tests of your ROS graph. These may take awhile...
analyzing graph...
... done analyzing graph
running graph rules...
ERROR: connection refused to [http://i-ssrc-ws-13:40681/]
... done running graph rules
running tf checks, this will take a second...
... tf checks complete
Online checks summary:
Found 4 warning(s).
Warnings are things that may be just fine, but are sometimes at fault
WARNING The following node subscriptions are unconnected:
* /rtabmap/rtabmapviz:
* /rtabmap/info
* /rtabmap/global_path
* /rtabmap/mapData
* /rtabmap/goal_reached
* /rtabmap/goal_node
* /rqt_gui_py_node_5161:
* /statistics
WARNING The following nodes are unexpectedly connected:
* /camera/nodelet_manager->/roswtf_9415_1505292273588 (/tf)
WARNING These nodes have died:
* rtabmap/rtabmap-2
WARNING No tf messages
Found 2 error(s).
ERROR Could not contact the following nodes:
* /rtabmap/rtabmap
ERROR Errors connecting to the following services:
* service [/rtabmap/rtabmap/list] appears to be malfunctioning: Unable to communicate with service [/rtabmap/rtabmap/list], address [rosrpc://i-ssrc-ws-13:54771]
* service [/rtabmap/rtabmap/load_nodelet] appears to be malfunctioning: Unable to communicate with service [/rtabmap/rtabmap/load_nodelet], address [rosrpc://i-ssrc-ws-13:54771]
* service [/rtabmap/rtabmap/get_loggers] appears to be malfunctioning: Unable to communicate with service [/rtabmap/rtabmap/get_loggers], address [rosrpc://i-ssrc-ws-13:54771]
* service [/rtabmap/rtabmap/set_logger_level] appears to be malfunctioning: Unable to communicate with service [/rtabmap/rtabmap/set_logger_level], address [rosrpc://i-ssrc-ws-13:54771]
* service [/rtabmap/rtabmap/unload_nodelet] appears to be malfunctioning: Unable to communicate with service [/rtabmap/rtabmap/unload_nodelet], address [rosrpc://i-ssrc-ws-13:54771]
Originally posted by bvbdort on ROS Answers with karma: 3034 on 2017-09-13
Post score: 0
Answer:
rtabmap node is crashing on start without showing any log. This may be caused by different versions of the same library loaded from rtabmap's dependencies (like two shared libraries built with different Eigen version or configuration) or a problem when loading shared libraries. Can you output the cmake log of rtabmap library? You may try to limit the dependencies built with the library, like setting -DWITH_REALSENSE=OFF when building rtabmap library (rtabmap library doesn't need to be built with the sensor support if you are using the ros package for the camera).
$ cd rtabmap/build
$ cmake -DWITH_REALSENSE=OFF ..
$ make -j4
$ make install
cheers,
Mathieu
Originally posted by matlabbe with karma: 6409 on 2017-09-13
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by bvbdort on 2017-09-14:
Thanks for the answer. It's a problem with my workspace, but I am not sure exactly what caused it. Now it's working well after reinstalling librealsense. This time I built librealsense without sourcing the ROS environment. | {
"domain": "robotics.stackexchange",
"id": 28842,
"tags": "slam, navigation, realsense-camera, rtabmap, rtabmap-ros"
} |
A Discord bot that connects to a local database (OOP) | Question: I am currently working on a Discord bot as a way to learn and practice Python. I have been trying to learn object-oriented programming, and apply the "don't repeat yourself" principle.
The code below connects/disconnects from a local database (using XAMPP) with the "mysql.connector" package, and registers a user in the database using their Discord ID. In my example below, I define a class called MySQL, and define three methods (connect(), disconnect() and query_users()) that I found myself using in almost every command/method.
In my register command/method, I call all three methods (connect(), disconnect() and query_users()). Is this the correct way to access those methods and their variables in my register command? My code works great, but the more I read on OOP, the less confident I become, and the more I question myself. Any tips or confirmation would be greatly appreciated.
Thank you all for your help.
"""The following packages/modules are required:"""
import json
import mysql.connector
from discord.ext import commands as viking
with open('config/database.json') as config:
database = json.load(config)
class MySQL:
def __init__(self, viking):
self.viking = viking
def connect(self):
self.connection = mysql.connector.connect(**database)
self.cursor = self.connection.cursor(buffered=True)
def disconnect(self):
self.cursor.close()
self.connection.close()
def query_users(self):
self.cursor.execute("SELECT discord_id FROM users")
self.existing_users = ', '.join([str(users[0]) for users in self.cursor])
@viking.command(pass_context=True)
async def register(self, ctx):
"""*register
Viking will register your Discord ID in the Viking database."""
self.connect()
self.query_users()
discord_id = str(ctx.message.author.id)
if discord_id in self.existing_users:
await self.viking.say('You are already a registered member in the Viking database.')
else:
self.cursor.execute("INSERT INTO users (discord_id) VALUES (%s)", (discord_id,))
self.connection.commit()
await self.viking.say('You are now registered in the Viking database.')
self.disconnect()
Answer:
Is this the correct way to access those methods and their variables in my register command?
Yes. Looks good to me.
I can't say I'm fond of your docstrings. But your code is perfectly clear.
You are focused on MySQL / MariaDB. I tend to access MySQL through SQLAlchemy, just in case in the future I want to use SQLite, PostgreSQL, or another backend.
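As a sketch of that idea, the registration check can sit behind one small function so that only the connection setup changes between backends. The stdlib sqlite3 module stands in for MySQL here purely to keep the snippet self-contained; the names are illustrative, not from the original bot. As a side benefit, a WHERE clause avoids the substring pitfall of testing discord_id in a comma-joined string (where "123" would match an existing "1234").

```python
import sqlite3

# In-memory stand-in for the MySQL database from the question.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (discord_id TEXT UNIQUE)")


def register(connection, discord_id):
    """Register a Discord ID if it is new; return a human-readable status."""
    row = connection.execute(
        "SELECT 1 FROM users WHERE discord_id = ?", (discord_id,)
    ).fetchone()
    if row:
        return "already registered"
    connection.execute(
        "INSERT INTO users (discord_id) VALUES (?)", (discord_id,)
    )
    connection.commit()
    return "registered"


print(register(conn, "1234"))  # registered
print(register(conn, "1234"))  # already registered
```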
The code starting if discord_id in ... is a bit odd, perhaps it could go into a function? It is a usual expectation that calling code could import your module (define your class) more than once without odd side effects. | {
"domain": "codereview.stackexchange",
"id": 27179,
"tags": "python, object-oriented, python-3.x, mysql"
} |
Speciation by polyploidy | Question: Speciation can occur by polyploidy. My understanding of the process is as follows:
'polyploidy is when the number of chromosomes in an organism's cell doubles. This means that the organism has more chromosomes than other individuals of the same species, meaning it cannot mate with other individuals. The polyploidy organism then evolves, eventually leading to it becoming a separate species'.
I realise this may not be exactly correct. Is someone able to provide a better description of speciation by polyploidy?
Answer: By definition, polyploidy just means that a cell or organism contains more than 2 pairs of homologous chromosomes (or is more than 2n). This is more common in plants than it is in animals. The plant undergoes failed meiosis, which means that the diploid (2n) cells never become haploid (n). As a result, a plant ends up with more than 2n when it self-pollinates. One possible result is tetraploidy (4n), but there are others (3n, 5n, etc.).
Multiple plants within a population can end up with the same polyploidy number. They can then reproduce with each other but not with the original plants or any other plants. As a result, they become biologically isolated from the original group of plants and are considered a different species. It is a type of sympatric speciation, which means that it occurs without geographic isolation. | {
"domain": "biology.stackexchange",
"id": 4408,
"tags": "polyploidy, speciation"
} |
In the context of the equivalence principle, is gravity a fictitious force? | Question: I am currently self studying general relativity and I have a question regarding the equivalence principle:
If an observer is in an elevator in deep space (or Minkowski space) where there is no gravitational force, and the elevator is accelerating upward with an acceleration of $9.8 \frac{\text{m}}{\text{s}^2}$, ignoring tidal forces, they can't distinguish between this scenario and being in an elevator resting on the surface of the earth.
How can this be possible? I mean, if I were in a spaceship that is accelerating forever at $9.8 \frac{\text{m}}{\text{s}^2}$ I'd always feel the "fictitious" force pulling me back, just like in a non-inertial frame. But if I were in the elevator at rest, I wouldn't feel any fictitious force pulling me back. Or would I?
Answer: Einstein's Equivalence principle states that freefalling reference frames are equivalent to an inertial reference frame; or to put it another way, inertial accelerative effects are equivalent to gravitational effects. This thought experiment might help:
I step into an elevator in an Earth-based building; the door closes but the elevator stays still. A variety of senses tells me which way is "up". If I jump, I quickly land back on my feet. If I try to pick up my 30kg suitcase, I can feel its weight. Now, suppose I, along with the contents of the elevator, am instantly teleported to an identical elevator in a spaceship in deep space. If the spaceship is accelerating at 1 g in a direction parallel with my conception of "up", I will have no way of knowing whether the teleportation worked and I'm on the spaceship, or it failed and I rematerialised back in the Earth-based elevator. My sense of "up" and my attempts to jump or to pick up the suitcase are all indistinguishable between the two cases.
So long as it's at rest in relation to the building, the Earth-based elevator's acceleration towards the centre of the Earth at 1 g is resisted by mechanical forces, which I experience as the floor exerting an upwards force on me through my feet. In the spaceship, I experience an equivalent force through my feet due to the rocket engine propelling the floor "upwards".
Now, suppose I suddenly feel weightless. My stomach lurches and I feel like vomiting. A little push of my toes sends me gently towards the ceiling. I panic and grab the suitcase handle, which slows my overall upwards trajectory but sends the suitcase towards the ceiling too, and my torso starts swinging in an arc. Did the rocket engine just cut out and I'm now in "free fall", or did the elevator cable snap and it's now falling at 9.8 $ms^{-2}$ (ignoring air resistance etc). For the moment at least, it's very hard to tell; perhaps I pick up that there's no faint vibration (the rocket thrusters), or perhaps I hear some scraping sounds (the elevator). I'll discover the truth in only a few seconds...
When I am in the elevator at rest, I don't feel any fictitious force
pulling me back. Or do I?
In general relativity, gravity appears as a fictitious force; this is because GR attributes the apparent acceleration of gravity to the curvature of spacetime. You feel this fictitious force when you are in a non-inertial reference frame, e.g. "pulling you back" towards the floor in the elevator in an Earth-based building. You don't "feel" gravity in an inertial reference frame (e.g. when the elevator cable snapped) because in that frame there's no force acting on you. | {
"domain": "physics.stackexchange",
"id": 48741,
"tags": "general-relativity, gravity, reference-frames, equivalence-principle"
} |
Has anyone successfully compiled pcl_ros in Raspberry Pi boards? | Question: Could someone confirm if the package pcl_ros is compatible with ARM boards? I have attempted to compile it on a Raspberry Pi board using the options --parallel-workers 1 or --executor sequential, but the board freezes when it reaches 83%.
If anyone has successfully compiled this package on an ARM board, please let me know. Your success would motivate me to continue searching for solutions.
Answer: It definitely works on ARM boards, but PCL is a highly templated library and compiling against it has a moderately large memory requirement. It's most likely that you're running out of memory.
I would strongly recommend against allowing parallel workers on a memory-limited platform like the Raspberry Pi. In addition, adding swap space on disk (preferably as high-speed as possible; SD cards can be very slow compared to memory) will keep you going. And you're likely to need to be quite patient.
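One way to act on the memory-monitoring advice is to watch MemAvailable while the build runs. This is a sketch assuming a Linux-style /proc/meminfo layout; the parser is exercised on a literal sample so the snippet runs anywhere, and the usual swap-file commands are quoted in a comment rather than executed.

```python
def meminfo_kb(text):
    """Parse /proc/meminfo-style 'Key: value kB' lines into a dict of ints (kB)."""
    info = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        fields = rest.split()
        if fields:
            info[key.strip()] = int(fields[0])
    return info


sample = "MemTotal: 948304 kB\nMemAvailable: 211520 kB\nSwapFree: 0 kB"
print(meminfo_kb(sample)["MemAvailable"])  # 211520

# On the Pi itself, poll the real file while the build runs:
#     with open("/proc/meminfo") as f:
#         print(meminfo_kb(f.read())["MemAvailable"], "kB available")
#
# Typical swap-file setup the answer alludes to (run as root, size to taste):
#     fallocate -l 2G /swapfile && chmod 600 /swapfile
#     mkswap /swapfile && swapon /swapfile
```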
You should be able to monitor your memory usage as it builds to make sure it doesn't run out. If it does crash check for OOM messages in your syslog. | {
"domain": "robotics.stackexchange",
"id": 2703,
"tags": "ros-galactic, pcl"
} |
Parsing .PLY file using F# | Question: I'm fairly new to F# and have written a program to parse PLY files - this is however done in an imperative way with mutable values and as far as I know that should be avoided in functional languages.
The program works, but is the performance weakened by this imperative way of doing things? How is the coding style?
module PLYparsing
open System.IO;;
open System.Collections.Generic;;
open System.Text.RegularExpressions;;
// The types in a PLY file (at least in the ones we use)
type Vertice = V of float * float * float;;
type Face = F of int * int * int;;
/// <summary>Read all lines in a file into a sequence of strings.</summary>
/// <param name="fileName">Name of file to be parsed - must be situated in Resources folder.</param>
/// <returns>A sequence of the lines in the file.</returns>
let readLines (fileName:string) =
let baseName = Directory.GetParent(__SOURCE_DIRECTORY__).FullName
let fullPath = Path.Combine(baseName, ("Resources\\" + fileName))
seq { use sr = new StreamReader (fullPath)
while not sr.EndOfStream do
yield sr.ReadLine() };;
// Mutable values to be assigned during parsing.
let mutable vertexCount = 0;;
let mutable faceCount = 0;;
let mutable faceProperties = ("", "");;
let mutable (vertexProperties: (string * string) list) = [];;
let mutable (objectInfo: (string * string) list) = [];;
let mutable (vertices: seq<Vertice>) = Seq.empty;;
let mutable (faces: seq<Face>) = Seq.empty;;
// Malformed lines in the PLY file? Raise this exception.
exception ParseError of string;;
/// <summary>Checks whether a string matches a certain regex.</summary>
/// <param name="s">The string to check.</param>
/// <param name="r">The regex to match.</param>
/// <returns>Whether or not the string matches the regex.</returns>
let matchesRegex s r =
Regex.Match(s, r).Success
/// <summary>Parse the header of a PLY file into predefined, mutable values.</summary>
/// <param name="header">A sequence of the header lines in a PLY file, not including "end_header".</param>
/// <exception cref="ParseError">Raised when the input is not recognized as anything usefull.</exception>
let parseHeader (header: seq<string>) =
for line in header do
let splitted = line.Split[|' '|]
match line with
| x when matchesRegex x @"obj_info .*" ->
let a = Array.item 1 splitted
let b = Array.item 2 splitted
objectInfo <- objectInfo@[(a, b)]
| x when matchesRegex x @"element vertex \d*" ->
vertexCount <- int (Array.item 2 splitted)
| x when matchesRegex x @"property list .*" ->
let a = Array.item 2 splitted
let b = Array.item 3 splitted
faceProperties <- (a, b)
| x when matchesRegex x @"property .*" ->
let a = Array.item 1 splitted
let b = Array.item 2 splitted
vertexProperties <- vertexProperties@[(a, b)]
| x when matchesRegex x @"element face \d*" ->
faceCount <- int (Array.item 2 splitted)
| x when ((x = "ply") || matchesRegex x @"format .*") -> ()
| _ ->
System.Console.WriteLine(line)
raise (ParseError("Malformed header."));;
/// <summary>Convert a string to a vertice.</summary>
/// <param name="s">String containing a vertice.</param>
/// <returns>The converted vertice.</returns>
/// <exception cref="ParseError">Raised when the length of the input string is less that 3.</exception>
let stringToVertice (s: string) =
match s with
| s when s.Length < 3 -> System.Console.WriteLine(s)
raise (ParseError("Malformed vertice."))
| _ -> let splitted = s.Split[|' '|]
let x = Array.item 0 splitted
let y = Array.item 1 splitted
let z = Array.item 2 splitted
V(float x, float y, float z);;
/// <summary>Convert a sequence of strings to a sequence of vertices.</summary>
/// <param name="vertices">Sequence of strings to convert.</param>
/// <returns>A sequence of the converted sequences.</returns>
let parseVertices (vertices: seq<string>) =
Seq.map(fun a -> stringToVertice(a)) vertices;;
/// <summary>Convert a string to a face.</summary>
/// <param name="s">String containing a face.</param>
/// <returns>The converted face.</returns>
/// <exception cref="ParseError">Raised when the length of the input string is less that 3.</exception>
let stringToFace (s: string) =
match s with
| s when s.Length < 3 -> System.Console.WriteLine(s)
raise (ParseError("Malformed face."))
| _ -> let splitted = s.Split[|' '|]
let x = Array.item 0 splitted
let y = Array.item 1 splitted
let z = Array.item 2 splitted
F(int x, int y, int z);;
/// <summary>Convert a sequence of strings to a sequence of faces.</summary>
/// <param name="faces">Sequence of strings to convert.</param>
/// <returns>A sequence of the converted faces.</returns>
let parseFaces (faces: seq<string>) =
Seq.map(fun a -> stringToFace(a)) faces;;
/// <summary>Main function in PLY parsing. Calls all helper functions and assigns the required mutable values.</summary>
/// <param name="fileName">File to be parsed - name of file in Resources folder.</param>
let parsePLYFile fileName =
let lines = readLines fileName
// At which index is the header located? The vertices? The faces?
let bodyPos = lines |> Seq.findIndex(fun a -> a = "end_header")
let header = lines |> Seq.take bodyPos
parseHeader header
let vertexPart = lines |> Seq.skip (bodyPos + 1) |> Seq.take vertexCount
let facePart = (lines |> Seq.skip (bodyPos + vertexCount + 1) |> Seq.take faceCount)
// Parse the header, the vertices & the faces.
vertices <- parseVertices vertexPart
faces <- parseFaces facePart;;
Answer: This is how I would write it without exceptions and mutable state. I'm still learning, so it might be possible to do this shorter or more efficiently.
Have a look at https://fsharpforfunandprofit.com/rop/.
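The railway-oriented idea behind that link can be sketched in a few lines of Python, as a language-neutral illustration of the same Success/Failure-plus-bind pattern that the F# rewrite below uses:

```python
from dataclasses import dataclass


@dataclass
class Success:
    value: object


@dataclass
class Failure:
    message: str


def bind(f, result):
    """Apply f only on the success track; a Failure passes through untouched."""
    return f(result.value) if isinstance(result, Success) else result


def parse_int(s):
    try:
        return Success(int(s))
    except ValueError:
        return Failure(f"not an int: {s!r}")


def positive(n):
    return Success(n) if n > 0 else Failure(f"not positive: {n}")


print(bind(positive, parse_int("42")))    # Success(value=42)
print(bind(positive, parse_int("nope")))  # Failure(message="not an int: 'nope'")
```

Chaining more bind calls extends the pipeline without any mutable state; the first Failure simply rides the failure track to the end, which is what the ParserSuccess type and its bind do in the F# version.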
Edit:
Exceptions are treated as bad style in functional programming because they are not represented by the signature of the function. For example:
(Apple -> Banana -> Cherry) is a function that takes an Apple and a Banana and gives back a Cherry. If this function raises an exception, that is not obvious from the signature. In a pure language like Haskell, I think it is not possible.
Mutable state can cause side effects in parallel programming, and with parallel programming you might improve the performance of parseVertices and parseFaces by adding async{...}.
To avoid mutable state, I imagine wrapping my data in brown paper and throwing away or forgetting the old data. I hope the compiler can handle the performance, at least if I use tail recursion. The easy use of async is the reward for avoiding mutable state.
In the projects I work on, correct code and development time are more important than performance, because they are for a small number of users. For me, F# fits there perfectly.
Edit:
The code should now work without a stack overflow.
module PLYparsing
open System.IO;;
open System.Text.RegularExpressions;;
// The types in a PLY file (at least in the ones we use)
type Vertice = V of float * float * float;;
type Face = F of int * int * int;;
/// <summary>Read all lines in a file into a sequence of strings.</summary>
/// <param name="fileName">Name of file to be parsed - must be situated in Resources folder.</param>
/// <returns>A sequence of the lines in the file.</returns>
let readLines (fileName:string) =
let baseName = Directory.GetParent(__SOURCE_DIRECTORY__).FullName
let fullPath = Path.Combine(baseName, ("Resources\\" + fileName))
seq { use sr = new StreamReader (fullPath)
while not sr.EndOfStream do
yield sr.ReadLine() };;
// not-mutable values to be assigned during parsing.
type ParserResult =
{
VertexCount : int
FaceCount : int
FaceProperties : string * string
VertexProperties : (string * string) list
ObjectInfo: (string * string) list
Vertices: seq<Vertice>
Faces: seq<Face>
}
static member Init()=
{
VertexCount = 0
FaceCount = 0
FaceProperties = ("","")
VertexProperties =[]
ObjectInfo = []
Vertices = Seq.empty
Faces = Seq.empty
}
// Malformed lines in the PLY file? Raise this exception.
type ParserSuccess<'a> =
| Success of 'a
| Failure of string
let map f aPS=
match aPS with
| Success( a )-> f a |> Success
| Failure s -> Failure s
let combine xPS yPS =
match (xPS,yPS) with
| Success(x),Success(y) -> Success(x,y)
| _ -> Failure <| sprintf "Can not combine %A %A" xPS yPS
let bind f aPS =
match aPS with
| Success x -> f x
| Failure s -> Failure s
let outerSuccess<'a> (seqIn: ParserSuccess<'a> seq) =
let containsFailure =
seqIn
|>Seq.exists (fun (elPS) ->
match elPS with
| Failure _ -> true
| _ -> false)
match containsFailure with
| true ->
Failure ("Could be a litte bit more precise: Failure in " + (typeof<'a>).ToString())
| false ->
Success( Seq.map (fun s -> match s with | Success(v) -> v ) seqIn)
//exception ParseError of string;;
/// <summary>Checks whether a string matches a certain regex.</summary>
/// <param name="s">The string to check.</param>
/// <param name="r">The regex to match.</param>
/// <returns>Whether or not the string matches the regex.</returns>
let matchesRegex s r =
Regex.Match(s, r).Success
/// <summary>Parse the header of a PLY file into a ParserResult record.</summary>
/// <param name="header">A sequence of the header lines in a PLY file, not including "end_header".</param>
/// <returns>Success with the populated ParserResult, or Failure when a line is not recognized.</returns>
let parseHeader (header: seq<string>) =
let parseHeaderRaw accPS (line:string) =
match accPS with
| Failure (_) -> accPS
| Success (parserResult) ->
let splitted = line.Split[|' '|]
match line with
| x when matchesRegex x @"obj_info .*" ->
let a = Array.item 1 splitted
let b = Array.item 2 splitted
{ parserResult with ObjectInfo = parserResult.ObjectInfo@[(a, b)]} |> Success
| x when matchesRegex x @"element vertex \d*" ->
{ parserResult with VertexCount = int (Array.item 2 splitted)} |> Success
| x when matchesRegex x @"property list .*" ->
let a = Array.item 2 splitted
let b = Array.item 3 splitted
{ parserResult with FaceProperties = (a, b)}
|> Success
| x when matchesRegex x @"property .*" ->
let a = Array.item 1 splitted
let b = Array.item 2 splitted
{ parserResult with VertexProperties = parserResult.VertexProperties@[(a, b)]}
|> Success
| x when matchesRegex x @"element face \d*" ->
{ parserResult with FaceCount = int (Array.item 2 splitted)}
|> Success
| x when ((x = "ply") || matchesRegex x @"format .*") -> Success parserResult
| _ ->
Failure "Malformed header."
header
|> Seq.fold parseHeaderRaw (ParserResult.Init() |> Success)
/// <summary>Convert a string to a vertice.</summary>
/// <param name="s">String containing a vertice.</param>
/// <returns>The converted vertice.</returns>
/// <exception cref="ParseError">Raised when the length of the input string is less than 3.</exception>
let stringToVertice (s: string) =
match s with
| s when s.Length < 3 -> System.Console.WriteLine(s)
sprintf "Malformed vertices: %s" s |> Failure
| _ -> let splitted = s.Split[|' '|]
let pick i = Array.item i splitted
let x = pick 0
let y = pick 1
let z = pick 2
V(float x, float y, float z) |> Success
/// <summary>Convert a sequence of strings to a sequence of vertices.</summary>
/// <param name="vertices">Sequence of strings to convert.</param>
/// <returns>A sequence of the converted sequences.</returns>
let parseVertices (vertices: seq<string>) =
Seq.map stringToVertice vertices
|> outerSuccess
/// <summary>Convert a string to a face.</summary>
/// <param name="s">String containing a face.</param>
/// <returns>The converted face.</returns>
/// <exception cref="ParseError">Raised when the length of the input string is less than 3.</exception>
let stringToFace (s: string) =
match s with
| s when s.Length < 3 -> System.Console.WriteLine(s)
sprintf "Malformed vertices: %s" s |> Failure
| _ -> let splitted = s.Split[|' '|]
let x = Array.item 0 splitted
let y = Array.item 1 splitted
let z = Array.item 2 splitted
F(int x, int y, int z) |> Success
/// <summary>Convert a sequence of strings to a sequence of faces.</summary>
/// <param name="faces">Sequence of strings to convert.</param>
/// <returns>A sequence of the converted faces.</returns>
let parseFaces (faces: seq<string>) =
faces
|> Seq.map stringToFace
|> outerSuccess
/// <summary>Main function in PLY parsing. Calls all helper functions and combines their results into a ParserSuccess.</summary>
/// <param name="fileName">File to be parsed - name of file in Resources folder.</param>
let parsePLYFile fileName =
let lines = readLines fileName
// At which index is the header located? The vertices? The faces?
let bodyPos = lines |> Seq.findIndex(fun a -> a = "end_header")
let header = lines |> Seq.take bodyPos
// Parse the header, the vertices & the faces.
parseHeader header
|> bind (fun resultHeaderPS ->
let faces = lines |> Seq.skip (bodyPos + resultHeaderPS.VertexCount + 1) |> Seq.take resultHeaderPS.FaceCount |> parseFaces
let vertices = lines |> Seq.skip (bodyPos + 1) |> Seq.take resultHeaderPS.VertexCount |> parseVertices
combine vertices faces
|> map(fun (vertices, faces) ->
{ resultHeaderPS with Vertices = vertices; Faces = faces } ) ) | {
"domain": "codereview.stackexchange",
"id": 19544,
"tags": "performance, parsing, f#"
} |
source based install of orocos_kinematics_dynamics failed with "vcs not setup correctly" when installing ros electric from source | Question:
I ran the following:
rosinstall ~/ros-electric/ros "http://packages.ros.org/cgi-bin/gen_rosinstall.py?rosdistro=electric&variant=desktop-full&overlay=no"
And got this at the very end of the output:
Installing https://kforge.ros.org/geometry/kdlclone orocos_kinematics_dynamics-0.2.1 to /home/ajc/ros-electric/ros/orocos_kinematics_dynamics
Cloning into /home/ajc/ros-electric/ros/orocos_kinematics_dynamics...
remote: Counting objects: 2292, done.
remote: Compressing objects: 100% (893/893), done.
remote: Total 2292 (delta 1502), reused 2092 (delta 1363)
Receiving objects: 100% (2292/2292), 668.20 KiB, done.
Resolving deltas: 100% (1502/1502), done.
fatal: Cannot setup tracking information; starting point is not a branch.
ERROR: Failed to install tree '/home/ajc/ros-electric/ros/orocos_kinematics_dynamics'
vcs not setup correctly
I realize I'm installing to ~/ros-electric/ros. I tried installing to ~/ros and the same problem emerges.
I have seen http://answers.ros.org/question/1645/vcs-not-setup-correctly-while-i-install-ros-using, and have attempted the installation about 10 times, always receiving this output.
I can and have installed diamondback with no issues. (I have removed its previous installation at ~/ros; it's the only directory I removed, and the problem still persists.) I also tried removing the ~/.ros directory.
Some information:
git version 1.7.6
rosinstall 0.5.15
Linux whales 2.6.39-ARCH #1 SMP PREEMPT Sat Jul 9 15:31:04 CEST 2011 i686 AMD Athlon(tm) 64 X2 Dual-Core Processor TK-55 AuthenticAMD GNU/Linux
Any help would be greatly appreciated.
Originally posted by ajc on ROS Answers with karma: 56 on 2011-08-04
Post score: 0
Answer:
I suspect that this was an issue with the repository from which orocos_kinematics_dynamics was checked out. That's what the error means, and I've just tried the command myself now and it works.
Originally posted by tfoote with karma: 58457 on 2011-08-08
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 6350,
"tags": "ros, archlinux, rosinstall, ros-electric"
} |
why "map update loop missed" warning occours and what are is effects? | Question:
Hello, I am setting up navigation for a spawned P3AT as explained in the Navigation Stack Setup tutorial. After making all the costmap files, when I run the move_base.launch file the following warning appears:
[ WARN] [1413658951.699133967]: Map update loop missed its desired rate of 5.0000Hz... the loop actually took 2.1020 seconds
I just want to know why this warning occurs and what its effects on automated navigation are.
Originally posted by Aarif on ROS Answers with karma: 351 on 2014-10-18
Post score: 1
Answer:
It means that your pc is attempting to update the map every 0.2 seconds (5 Hz), but it's actually taking 10x as long. This means you're likely saturating your system's processing capability, and need to reduce the load somewhere. The load could be coming from the costmap itself, from other costly operations such as SLAM, or just a limitation in your hardware.
From a practical point of view, it means that the costmap will have a 2 second delay before it contains data representing the latest sensor information.
To prevent the warning, you can go into the costmap's parameters, and reduce the update rate to something smaller.
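For example (a hypothetical excerpt; the parameter names are the standard costmap_2d ones, but the exact values and file layout depend on your setup), lowering the rate in costmap_common_params.yaml might look like:

```yaml
# Hypothetical excerpt from costmap_common_params.yaml:
update_frequency: 2.0   # Hz; lowered from 5.0 so updates can keep up
publish_frequency: 1.0  # Hz; how often the costmap is published for display
```

A later comment on this question reports exactly this change (5 Hz down to 2 Hz) resolving the warning.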
Originally posted by paulbovbel with karma: 4518 on 2014-10-18
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by Aarif on 2014-10-19:
Thank you so much @paulbovbel, error has been removed. :)
Comment by vanquang on 2020-07-02:
Hello @Aarif. How did you removed this error? I got it already
Comment by Aarif on 2020-07-12:
hi @vanquang, After banging my head for long, I wrote my own navigation program.
Comment by Kaveh_ROS on 2022-10-20:
Thanks.
It was my problem too, solved it this way:
changed the "update_frequency" from 5 to 2 in "costmap_common_params.yaml" file. | {
"domain": "robotics.stackexchange",
"id": 19769,
"tags": "ros"
} |
Denoising / thresholding via wavelets | Question: When I apply different thresholding and wavelet denoising functions to a non-stationary time series that has been detrended via Loess regression and demeaned, I expect that the processed series submitted to denoising / thresholding will result in a clean series with values smaller than the submitted signal, not values that at any point lie above those of the signal. Is my thinking correct?
On the other hand, could the values of a processed signal which lie above the original signal be interpreted as meaning the signal ought to be above those levels rather than below them? Of course, the function that describes the signal is an approximation, so fitting errors are to be expected.
Answer:
I expect that when this processed series are submited to denoising / thresholding will result in a clean series with smaller values than the submited signal
No, you can get values that are greater. For example, consider the Fourier series of a signal. The Fourier series basis functions are wavelets. If we approximate the signal with only a few of the Fourier series coefficients you will get values that are above the original signal.
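A quick numerical check of this claim (a minimal sketch with NumPy; the square-wave signal and the number of retained harmonics are arbitrary illustration choices):

```python
import numpy as np

def fourier_partial_sum(n_terms, x):
    """Partial Fourier sum of a unit square wave on [0, 2*pi)."""
    s = np.zeros_like(x)
    for k in range(1, n_terms + 1, 2):  # a square wave has odd harmonics only
        s += (4 / np.pi) * np.sin(k * x) / k
    return s

x = np.linspace(0, 2 * np.pi, 10_000, endpoint=False)
square = np.sign(np.sin(x))           # the original signal, bounded by 1
approx = fourier_partial_sum(21, x)   # keep only a few coefficients

# The truncated series overshoots the original signal near the jumps
# (the Gibbs phenomenon), so the smoothed series exceeds the input:
print(approx.max() > square.max())  # True
```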
E.g. see the illustration of a truncated Fourier series overshooting the original signal (image from Wikipedia, omitted here). | {
"domain": "dsp.stackexchange",
"id": 3041,
"tags": "signal-analysis, continuous-signals"
} |
In what way is the Riemann curvature tensor related to 'radius of curvature'? | Question: In Misner, Thorne & Wheeler, they say, in their delightful 'word equations' that
$$\left(\frac{\mathrm{radius\,\, of \,\,curvature}}{\mathrm{of\,\, spacetime}}\right) = \left(\frac{\mathrm{typical\,\, component\,\, of\,\, Riemann\,\, tensor}}{\mathrm{as\,\, measured\,\, in \,\,a \,\,local \,\,Lorentz\,\, frame}}\right)^{-\frac{1}{2}}.$$
My question is: does this definition of radius of curvature (and others like it - where tensor are described in words) depend on the valence of the Riemann curvature tensor?
Answer: You link to a definition of curvature radius that is appropriate for "extrinsic" curvature – the curvature of a line/submanifold embedded into a higher-dimensional space. The Riemann tensor measures all the components of the intrinsic curvature so they're not exactly the same. However, they're of the same order and the MTW equation you mention is meant to be only an order-of-magnitude estimate, too. Of course, by the typical component, they mean roughly speaking the largest components. If some of them are zero, they're not typical.
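For instance (a standard worked check of this scaling), take a 2-sphere of radius $a$:
$$ds^2 = a^2\left(d\theta^2 + \sin^2\theta\, d\phi^2\right), \qquad R^{\theta}{}_{\phi\theta\phi} = \sin^2\theta, \qquad R_{\theta\theta} = 1, \quad R_{\phi\phi} = \sin^2\theta, \qquad R = g^{\theta\theta}R_{\theta\theta} + g^{\phi\phi}R_{\phi\phi} = \frac{2}{a^2}$$
so the typical component is of order $1/a^2$, and the inverse square root recovers the curvature radius $a$.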
To see that the estimate is right, just calculate the Riemann tensor for a sphere of radius $a$. You will get the Ricci scalar equal to something like $2/a^2$. The only thing you need to fix in your equation is the power. The typical component of the Riemann tensor goes like $a^{-2}$ where $a$ is the curvature radius. The power may be calculated by dimensional analysis. I guess that you – or they? – just forgot the power. | {
"domain": "physics.stackexchange",
"id": 6465,
"tags": "general-relativity, gravity, curvature"
} |
Use of Electronic Phenotype in EHR | Question: May I know what's the use of Electronic Phenotyping using EHR data?
I did refer to this link but have a few questions.
I understand that Phenotype is a set of criteria that you apply on your EHR data to select patients of interest from EHR database.
However what I am trying to understand is the difference between ordinary SQL query and electronic Phenotype
If both are the same,
1) What is the use of electronic phenotyping? How is it used?
2) Why do we have the PheKB repository and other repositories? What's the use of validating phenotypes at multiple sites?
3) Is it like identifying patients in real time? I am not able to get the usefulness of this. I understand that we select patients of our interest (a cohort), but I can just write an ordinary SQL query and send it to my EHR database to identify patients. What's special about an Electronic Phenotype?
4) I also see that they use multiple conditions (Lab tests, Medications, Diagnosis etc) to identify patients. What am I missing here?
Can someone help me by giving a simple example? I am new to healthcare and trying to learn. It would really be helpful.
Answer:
I understand that Phenotype is a set of criteria that you apply on your EHR data to select patients of interest from EHR database.
No, a phenotype is a way of behavior or other observed characteristic of a person, resulting from a combination of its underlying biology and the environment it's in. Think things like height, running speed, strength, etc.
In the context in which you're interested, this would be more properly, "the clinical presentation of a disease or disorder".
Step one of diagnosing a patient is defining exactly how they present (i.e., exactly what their maladies, aberrant electrolyte levels, genetic mutations, etc. are).
An algorithm that only works on the test population is useless. Google "model overfitting".
The point of an algorithm is to take the EHR data and progress to a diagnosis. This isn't real time, you'd just run a batch of algorithms as test results come in.
You need patients in your test/validation sets to have the required inputs for your algorithm.
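To make the "multiple conditions" point concrete, here is a toy rule-based phenotype in Python (entirely hypothetical field names and thresholds, chosen for illustration; real algorithms, e.g. those shared on PheKB, are precisely specified and validated):

```python
# Hypothetical rule-based phenotype for type 2 diabetes (illustrative only).
def has_t2dm_phenotype(patient):
    """Combine diagnoses, labs, and medications rather than one table."""
    dx = any(code.startswith("E11") for code in patient["icd10_codes"])
    lab = any(hba1c >= 6.5 for hba1c in patient["hba1c_results"])
    med = any(drug in {"metformin", "glipizide"} for drug in patient["medications"])
    # Require at least two independent lines of evidence:
    return sum([dx, lab, med]) >= 2

patient = {
    "icd10_codes": ["E11.9", "I10"],
    "hba1c_results": [7.1, 6.8],
    "medications": ["metformin"],
}
print(has_t2dm_phenotype(patient))  # True
```

Combining sources is about robustness: any single source (e.g. a diagnosis code entered for billing) can be wrong, and coding and ordering practices differ between institutions, which is also why the same definition is validated at multiple sites.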
FYI, I don't know of any regulars on this site that deal with EHR data. If there's a community of such people on the web they must be elsewhere. | {
"domain": "bioinformatics.stackexchange",
"id": 1209,
"tags": "sequence-alignment, database, genome, statistics, machine-learning"
} |
Are there any ICP based localization methods implemented in ROS? | Question:
Robotics beginner (CSE undergraduate) here. I have been working on an octomap-based navigation system that explores unknown terrain for some time. I completed a prototype navigation system and am now having some issues with localization.
I used dead reckoning so far, since I cannot use a prebuilt occupancy grid (the system is intended for a mapless or deformed environment), but now I need more accuracy. I came across BundleFusion and ElasticFusion, which focus on dense 3D SLAM, but they have specifically said that they are not compatible with ROS. Because of this, I started looking for already-implemented ICP-based mapping and localization and came across these two: mrpt_icp_slam_2d, which only supports 2D, and ethzasl_icp_mapping, which doesn't support Kinetic.
Are there any other ros packages that i can use out of the box?
Originally posted by KalanaR on ROS Answers with karma: 66 on 2020-04-04
Post score: 1
Original comments
Comment by Dragonslayer on 2020-04-04:
rtabmap might be of interest: it does SLAM from 3D data and also visual odometry. There is some ICP functionality, for example it is talked about here (link). It can also output an octomap if built that way; I am not sure it can take in an octomap for localization though.
"dead reckoning"? Odometry only that means?
Comment by KalanaR on 2020-04-04:
yeah. dead-reckoning means using just wheel encoders. i will look into rtabmap
Comment by KalanaR on 2020-04-04:
I went through it and here under 3.Localization mode, it says that it needs a previously built map (possibly using rtabmap itself,
In localization mode, a map large enough (>30 locations) must be already created (using rgbd_mapping.launch above). In rtabmapviz (GUI), click on "Localization" in the "Detection" menu. By looking over locations in the map, RTAB-Map would localize itself in it by finding a loop closure. Once a loop closure is found, the odometry is corrected and the current cloud will be aligned to the map.
so does this means it'll,
build the map -> find a loop closure -> correct odometry -> adjust the current cloud
would this happen on the fly (incremental) or in the end (first run to map without localization, and then from 2nd run onward it'll have the map and use it for localization) ?
Comment by Dragonslayer on 2020-04-05:
rtabmap does slam (simultanious localization AND mapping). As far as I know yes rtabmap can also build up on a existing map. I only use it for localizatioin with a prebuild database(map) and dont want it to ad to it, but i red that its done. Maybe try without the "delete databse on startup" argument and use slam mode. However usualy (most usecases/tutorials etc.) navigation uses a prebuild map and for this the map is only loaded at the node/package startup, to update the map for navigation you would need a navigation node to make service calls to map_server repeatadly. Iam sure the rtabmap people have already a working solution for this. rtabmap also does visual odometry, I dont know how exactly that works, depending on your usecase that might be of interest for sensorfusion or plain better then wheelodometry. How accurate do you wanna be? However rtabmap is cpu and ram heavy, specially with visual odometry turned on.
Comment by Dragonslayer on 2020-04-05:
Direct answer to your question: "...adjust current cloud" with visual odometry enabled this should happen (although Iam not sure how it works). I had problems with not correct alignment/building of the map when using wheelodometry, it seems rtabmap in this case just trusts the odometry is 100% acurate. But I may be wrong and there was just something wrong with the settings, in the end I got odometry good enough. Anyway people using visual odometry doesnt seem to have this issue. I have even seen a handheld 3d scanner using only rgb-d camera and rtabmap, that seemed to work out to cm accuracy, at close range at least.
Comment by KalanaR on 2020-04-05:
That database use is what worries me. I'll try to run it and see. my use case is creating a map for a map less environment (eg: disaster zone). So it will be like a blind walk for the robot. localization needs to be able to adept to it. will see what i can do. Thanks anyways
Comment by Dragonslayer on 2020-04-06:
Sound like rtabmap with visual odometry does exactly that (minus navigation). I wouldnt be bothered with this challenge if computing power isnt an issue. The only thing that might be tricky is if you have to use your own lighting, as a moving light will change the apearance (shadow angles and such) and rtabmap works with somehow recognizing features, for odometry at least. Soft light would be important in this case. In daylight this should be "easy". Wish you the best. Would appreciate it if you post an update after you tested it. You are welcome.
Answer:
This might be a bit late, but as @Dragonslayer mentioned, I dug deep and selected RtabMap for my work. Posting this as an answer so that anyone who comes looking for it can get a final answer.
Originally posted by KalanaR with karma: 66 on 2021-02-23
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 34692,
"tags": "ros, navigation, mapping, icp, ros-kinetic"
} |
Is the amount of energy that can be stored in a single atom infinite? | Question: In classical thermodynamics, the general explanation of how thermal energy is stored in a mass of atom or molecules (such as a lump of copper) is that the atoms in the mass vibrate with almost constant mean position and velocity of increasing magnitude, but zero mean.
If we consider a single isolated atom in space being injected with energy in a way that does not deflect its mean position or zero-mean velocity, is it therefore theoretically possible to store an infinite amount of energy in the form of kinetic energy of a single atom? Is the theoretical maximum temperature of a mass of atoms (or molecules) infinite?
Answer: Sort of. There isn't a hard limit to how hot something can get. But beyond a certain point, it won't be a lump of copper any more.
If you heat copper, it will eventually melt. If you continue further, it will boil. Then the electrons would separate from the atoms, creating a plasma.
If you continue to extremes, the nuclei would come apart into protons and neutrons. Then the quarks inside the protons and neutrons would separate into a quark-gluon plasma.
Conditions like this only existed for a tiny fraction of a second after the big bang. But they did exist, and they left an imprint on the universe. | {
"domain": "physics.stackexchange",
"id": 76918,
"tags": "thermodynamics, energy, atoms, kinetic-theory"
} |
How does an electron beam condenser work? | Question: For example, a scanning electron microscope has multiple condensers that "focus" the beam into a smaller spot size. How does a condenser actually change the direction of electron flow in a non-uniform way (off-center electrons get shifted more than electrons that are closer to the center of the beam)?
Answer: I'm not actually familiar with the construction of these microscopes, but most devices using focused charged-particle beam rely on quadrupole magnets1 to maintain the control of the beam size and shape.
It is worth noting that a quadrupole that focuses the beam in one plane tends to de-focus it in an orthogonal plane, but by using sets of multiple quadrupoles it is possible to control the beam spread in all directions.
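In formulas (an idealized quadrupole with the beam along $z$ and field gradient $g$; a textbook approximation, not specific to any particular microscope):
$$\mathbf{B}(x, y) = g\left(y\,\hat{x} + x\,\hat{y}\right), \qquad \left|\mathbf{B}\right| = g\sqrt{x^2 + y^2}$$
The field vanishes on the axis and grows linearly with distance from it, which is exactly why off-center electrons are deflected more than those near the center.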
1 Four magnets arranged at right angles in a plane normal to the beam, with two opposed magnets showing their north poles and the other two showing their south poles. This arrangement has zero field at the center and stronger field further from the center: just what you want to allow the core of the beam to continue while adjusting the halo. | {
"domain": "physics.stackexchange",
"id": 49572,
"tags": "electrons, particle-physics, accelerator-physics, microscopy"
} |
Understanding magnetic dipole | Question:
There is a sphere of radius $R$ which is charged uniformly by charge density $\rho$. It is rotating by $\omega \hat{z}$ about $z$ axis. Find the magnetic dipole.
I solve it in this way. Think of it in spherical coordinates and take an infinitesimal ring on the sphere. The current density is $J = \rho v = \rho \omega r \sin \theta$.
Then infinitesimal magnetic dipole is
$$(\pi r^2 \sin ^2 \theta)(\rho \omega r \sin \theta)(2 \pi r^2 \sin\theta d\theta dr)$$
where the first term is from the area enclosed by the ring, the second is from current density, and the third is from infinitesimal volume of the ring.
Is it right? If it is wrong, where does the logic fail?
Answer: You may know that current is defined as the total charge passing a certain area during a unit time. Therefore,
\begin{align} dI &=\rho\times rdrd\theta\times2\pi r\sin\theta\times\frac{\omega}{2\pi} \\ &=\rho \omega r^2\sin\theta drd\theta \end{align}
where the second and third terms constitute the volume of the infinitesimal torus and the last term is the frequency at which the total charge in the torus rotates.
Also, magnetic dipole moment is well-known for the following: $$\mu=IA $$ where $A$ is the area enclosed by the path of current $I$.
Finally we obtain the total magnetic dipole moment:
\begin{align}\mu&=\int A dI \\
&=\int \pi(r\sin\theta)^2 \rho \omega r^2\sin\theta drd\theta \\
&=\rho\omega\pi\int_0^\pi\int_0^R r^4\sin^3\theta drd\theta \\
&=\rho\omega\pi \frac{R^5}{5} \frac{4}{3} =\frac{4\rho\omega\pi R^5}{15}\end{align} | {
"domain": "physics.stackexchange",
"id": 28729,
"tags": "homework-and-exercises, electromagnetism, magnetic-moment"
} |
Physics behind Wheel Slipping | Question: Let's say that I'm in a car and I apply full acceleration suddenly. Now the wheels would slip and hence the car doesn't displace much.
But if I start with some constant acceleration, slipping doesn't appear and the car moves normally. I think it's related to some friction mechanism.
But I don't understand why the wheel slips at high speeds and not at low speeds. It's like the rules change when the speed is high.
Also, in each step F(s) (friction) should be equal to F (the force in the other direction), shouldn't it? Any physical explanations?
Answer: It's hard to make the wheels spin at high speeds because you're in a higher gear, so the torque at the wheels is less. So I assume you are only asking about wheel spin in first gear i.e. it's quite easy to spin the wheels when pulling away in first gear but much harder if e.g. you're travelling at 10 mph in first gear.
The reason is that if you're stationary and drop the clutch the angular momentum of the engine contributes to the torque. That is, the torque at the wheels is the torque from the engine plus the torque from angular momentum stored in the flywheel, crankshaft etc. This happens because the engine is spinning faster that it would if the clutch were engaged, so engaging the clutch slows the engine speed. The extra torque is given by:
$$ \tau = I\frac{d\omega}{dt} $$
where $I$ is the moment of inertia of the spinning bits of the engine and $\omega$ is the engine speed, so $d\omega/dt$ is the rate of change of engine speed. If you drop the clutch the engine speed changes rapidly so $d\omega/dt$ is large and the extra torque is large. If you ease the clutch out $d\omega/dt$ is small so the extra torque is small and the wheels won't spin.
When you're driving at (e.g.) a steady 10 mph the engine speed matches the wheel speed, so if you now suddenly stamp on the accelerator it's only the torque from the engine that's available to spin the wheels. You don't get the contribution from $d\omega/dt$.
To see this try driving at 5 mph, then disengage the clutch, rev the engine and drop the clutch. As the clutch bites the wheels will spin just as they do when the car is stationary.
It's worth noting that a powerful car can spin the wheels in first gear even without playing with the clutch. In fact an old sports car I had many years ago would spin the wheels in second gear in the dry and in third gear if the road was wet! | {
"domain": "physics.stackexchange",
"id": 12909,
"tags": "newtonian-mechanics, classical-mechanics, acceleration, friction, velocity"
} |
Parity conservation in electromagnetic decays | Question: When a $\pi^0$ decays into photons, the only possible number of photons to which it can decay is $2\times n$, with $n$ a natural number. This is because in electromagnetic decays charge conjugation is conserved, let's assume that it decays into two photons
$$\pi^0 \to \gamma+\gamma$$
and:
$$C|\pi^0\rangle = 1$$
$$C|\gamma\rangle = -1$$
in this decay, parity also needs to be conserved, and
$$P|\pi^0\rangle = -1$$
$$P|\gamma\rangle = -1$$
the parity of the system of two photons is given by $(-1)^lP|\gamma\rangle P|\gamma\rangle$ where $l$ is the orbital angular momentum of the system. Does this mean that if parity is conserved, the orbital angular momentum of the resulting system must be $l=1$ (or 3, 5...) or am I deducing something wrong?
Answer: You are deducing it right. Both C and P should be conserved in the decay, so to obtain negative parity with two identical particles in the final state, you need the proper value of the angular momentum l. | {
"domain": "physics.stackexchange",
"id": 52882,
"tags": "particle-physics, angular-momentum, standard-model, parity, pions"
} |
'quantum state eraser' | Question: Imagine we have a general quantum state
$$ |\Phi \rangle = a_1|1\rangle+a_2|2\rangle+ a_3|3\rangle+a_4|4\rangle+a_5|5\rangle+a_6|6\rangle. \tag{1}\label{eq:1} $$
Then could we define a linear operator action over the general state \eqref{eq:1} so
$$ T_{6}|\Phi\rangle = b_1|1\rangle+b_2|2\rangle+ b_3|3\rangle+b_4|4\rangle+b_5|5\rangle, $$
so the state $ |6\rangle $ is no longer there, it has been 'erased'.
Or if we use the iterated 'erase operator'
$$ T_{6} T_{5} T_{4} T_{3} T_{2}|\Phi \rangle = k |1\rangle $$ after repetition we will have a pure state $ |1\rangle $.
How could these 'erase' operators be defined?
Every $ T_{m} $ would be a linear operator having a matrix representation.
Answer: Let's start with a simpler example where you have only three eigenstates.
Representing the state as a vector, you have:
$$|\Phi> \dot{=} \left(\matrix{a_1 \\ a_2 \\ a_3}\right)$$
And you want some operator $T_3$ such that:
$$
T_3 |\Phi> \dot{=} \left(\matrix{b_1 \\ b_2 \\ 0}\right)
$$
where
$$T_3 \dot{=} \mathbf{T_3} =
\left(\matrix{x_{11}\ x_{12}\ x_{13} \\
x_{21}\ x_{22}\ x_{23} \\
x_{31}\ x_{32}\ x_{33}}\right)
$$
i.e. you want $x_{ij}$ such that
$$
\left(\matrix{x_{11}\ x_{12}\ x_{13} \\
x_{21}\ x_{22}\ x_{23} \\
x_{31}\ x_{32}\ x_{33}}\right)
\left(\matrix{a_1 \\ a_2 \\ a_3}\right)
= \left(\matrix{b_1 \\ b_2 \\ 0}\right)
$$
One solution to that equation is
$$
\mathbf{T_3}=\left(\matrix{\frac{b_1}{a_1}\ 0\ 0 \\
0\ \frac{b_2}{a_2}\ 0 \\
0\ 0\ 0}\right)
$$
which gives you
$$T_3 = \frac{b_1}{a_1}|1><1| + \frac{b_2}{a_2}|2><2|$$
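A small numerical illustration of this construction (a sketch with NumPy; the amplitudes are arbitrary). Note that $T_3$ is built from the state's own amplitudes via $b_i/a_i$, so it erases $|3\rangle$ for this particular state rather than being one fixed operator for all states:

```python
import numpy as np

# Amplitudes of the input state |Phi> = a_1|1> + a_2|2> + a_3|3> (arbitrary):
a = np.array([0.6, 0.3 + 0.4j, 0.5], dtype=complex)
# Target amplitudes for the surviving components (also arbitrary):
b = np.array([0.8, 0.2j], dtype=complex)

# T_3 = (b_1/a_1)|1><1| + (b_2/a_2)|2><2| as a matrix:
T3 = np.diag([b[0] / a[0], b[1] / a[1], 0.0])
erased = T3 @ a

print(erased)  # the |3> component is exactly 0
```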
Other $T_i$ would be defined similarly. | {
"domain": "physics.stackexchange",
"id": 64633,
"tags": "quantum-information"
} |
Example of reduction in communication complexity | Question: Let us assume the standard situation in communication complexity with two players $P_1,P_2.$
We have a function $f:[n] \times [n] \mapsto \{0,1\}$ that both players known in advance. They wish to compute $f(x,y)$ given that the first player only knows $y$ and the second player only knows $x.$ The communication complexity of $f$ is the smallest number $k$ such that $P_1,P_2$ can always compute $f(x,y)$ communicating at most $k$ bits of information to each other.
If $f$ is the equality function (that is $f(x,y) = 1$ if and only if $x = y$) then I know that $P_1,P_2$ need to exchange at least $\log_2{n}$ bits in order to be able to always compute $f.$
Consider now the function $h_n:[n]\times[n] \mapsto \{0,1\}$ such that $h_n(x,y) = 1$ if and only if $x+y = n.$ I would like to show that $P_1,P_2$ cannot compute $h$ using fewer than $\Omega(\log{n})$ bits and the idea is to use the fact about the communication complexity of computing $f.$
Hence I am wondering how to make a reduction in this case?
Suppose we can compute $h_n(x,y)$ with fewer than $\Omega( \log{n})$ bits. Can we somehow simulate such a protocol in order to be able compute $f$ using less than $\Omega(\log{n})$ communication bits?
Player $P_1$ receives the string $y$ and $P_2$ the string $x.$ Now $x = y$ if and only if $x+y = 2x = 2y.$ Hence they could simulate the algorithm for computing $h_{2x}(x,y).$ The problem that I see here is that in this reduction $h_n$ is not fixed in advance and may not even define the same function for $P_1,P_2$ (if $x,y$ actually differ).
Hence I am wondering
How can one show that $h_n(x,y)$ cannot be computed communicating fewer than $\log_2{n}$ bits ?
Answer: Assume that each player gets a number in $[n]$, and that they can compute $h_n(x,y)$. Each player knows $n$, so if $h_n$ is computable with fewer than $\log(n)$ bits of communication, they can also compute $h_n(n-x,y)$ with fewer than $\log(n)$ bits. Since $h_n(n-x,y)=1 \iff x=y$, you have computed $\delta_{x,y}$ with sublogarithmic communication, a contradiction. | {
"domain": "cs.stackexchange",
"id": 5132,
"tags": "complexity-theory, communication-protocols, communication-complexity"
} |
Trouble building fovis_ros on Hydro | Question:
Hello, I'm running Ubuntu 12.04 with ROS Hydro and wanted to test fovis_ros with a Kinect, but I'm having trouble building the package.
Following the advice given in this question I executed the following commands :
git clone https://github.com/srv/fovis.git
git clone https://github.com/srv/libfovis.git
So I had the fovis and libfovis folders inside catkin_ws/src.
Then I tried building with catkin_make and got the following messages at the end:
... [ 98%] Built target fovis_ros_generate_messages
/home/ire/catkin_ws/src/fovis/fovis_ros/src/visualization.cpp:3:40: fatal error: libfovis/visual_odometry.hpp: No such file or directory
compilation terminated.
make[2]: *** [fovis/fovis_ros/CMakeFiles/visualization.dir/src/visualization.cpp.o] Error 1
make[1]: *** [fovis/fovis_ros/CMakeFiles/visualization.dir/all] Error 2
make: *** [all] Error 2
Invoking "make" failed
I verified and the "visual_odometry.hpp" file is inside the libfovis/libfovis folder, like the rest of the files used by fovis_ros that don't seem to present an issue.
I tried building the libfovis library first, which gave no trouble and then adding fovis_ros but I got the exact same error. I also followed the advice from issue #11 but the problem remained.
Here is the complete log in case it's more useful.
Thank you in advance.
Originally posted by Athria on ROS Answers with karma: 78 on 2015-03-21
Post score: 0
Original comments
Comment by Miquel Massot on 2015-03-24:
Hi Athria, could you try to install ros-hydro-libfovis, remove libfovis from your workspace, delete build and devel directories and compile again?
Comment by Athria on 2015-03-24:
Hello, thank you! I tried that before installing libfovis from the git and I got errors, but I did it again (removing the libfovis, build and devel folders, then installing ros-hydro-libfovis and then catkin_make) and this is what I got.
Comment by Athria on 2015-03-26:
Hello, I'm sorry, that was indeed the problem. I was selecting Hydro on the Branch scroll bar and then copying the clone URL on the right, but I guess it was not the way to do it; this time I downloaded the zip instead and it compiled without errors.
Thank you very much!
Answer:
Can you check if you are using hydro branch in fovis? Now our default is set on indigo.
You can simply switch the branch with "git checkout hydro".
Originally posted by Miquel Massot with karma: 1471 on 2015-03-25
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Athria on 2015-03-27:
Thank you! I was doing it wrong, now it compiles perfectly.
Comment by Sai Anirudh Kondaveeti on 2015-05-26:
I am still facing the same problem. I installed libfovis via apt-get then I 'git clone' the repository and then checked out to hydro branch. but still I get this error
Comment by Athria on 2015-05-27:
Hello Sai, I don't know if it will help you but this is exactly what worked for me in the end: I deleted what I had from fovis, then I downloaded (manually, not by git clone) the two folders (libfovis and fovis) from the hydro branch in github to the catkin/src/ folder and then it compiled well.
Comment by Athria on 2015-05-27:
I think when I was doing git clone it was downloading the indigo versions (I am really new with github) and that's why it wasn't working. I hope it helps
Comment by Sai Anirudh Kondaveeti on 2015-05-27:
Thanks, but it didn't work for me. | {
"domain": "robotics.stackexchange",
"id": 21193,
"tags": "ros, fovis, fovis-ros"
} |
Does Enamel need to be Removed From Magnet Wire for Yagi Antenna | Question: I am using 16 AWG magnet wire to create a 15-element Yagi antenna for the 2.4 GHz range, which should theoretically give me a little over 15 dB of gain. The magnet wire I am using has an enamel coating on it.
Do I need to remove the enamel coating in order for the antenna to work? Of course I am going to remove it at the place where I solder the coax cable on but do I need to do it for all the directors and the reflector?
How much will this affect the quality of the signal gained?
Answer: Insulation around the wire does not prevent it from working as an antenna.
It can slightly affect the resonant length, but that's not the same thing.
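For a rough sense of scale (an illustrative aside, not part of the original answer): the free-space half-wavelength, λ/2 = c/(2f), is the usual ballpark for a driven element at 2.4 GHz, and the common rule of thumb of cutting roughly 10% long before trimming gives a concrete starting length.

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def halfwave_m(freq_hz):
    """Free-space half-wavelength, the ballpark length of a dipole element."""
    return C / (2.0 * freq_hz)

f = 2.45e9  # middle of the 2.4 GHz ISM band
element = halfwave_m(f)
cut = element * 1.10  # start about 10% long, then trim while measuring
print(f"half-wave ~ {element * 1000:.1f} mm, initial cut ~ {cut * 1000:.1f} mm")
```

The enamel coating (and the wire diameter) will shorten the effective resonant length somewhat, which is exactly why trimming against a measurement beats trusting the free-space number.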
Ideally you would calculate the length taking into account the coating, but the reality of it is that you should cut your wires about 10% too long, and then carefully trim them to length while measuring the resonant frequency. | {
"domain": "physics.stackexchange",
"id": 38303,
"tags": "electromagnetic-radiation, antennas"
} |
How does stroking metal with a magnet magnetize the metal? | Question: How does stroking metal with a magnet magnetize the metal? I have thought about it for a while and have absolutely no clue as to how the magnetic domains in the steel rod get shifted to the same direction (regions of metallic ions pointing in the same direction; in an unmagnetised metal these domains cancel each other out).
Related link showing the stroking of metal with a magnet. It's particularly the manner in which we are stroking that could be causing the magnetisation. The video shows it being repeatedly stroked in the same direction.
Answer: If a ferromagnetic material is placed in, and then removed from, a magnetic field, the “remnant” magnetization within the material will depend on the strength of the field. Stroking a bar will bring the strong field from the pole of the magnet into close proximity with each part of the bar. Then the magnetic “domains” within the bar tend to drop back into alignment with the nearest crystal axis.
"domain": "physics.stackexchange",
"id": 80411,
"tags": "electromagnetism"
} |
Google Foobar Challenge: Ion Flux Relabeling | Question: I am currently working on a Foobar challenge, the Ion Flux Relabeling. I came up with a Java solution but Google returns an OutOfMemoryError.
The prompt is, quoting the original question:
Oh no! Commander Lambda's latest experiment to improve the efficiency of her LAMBCHOP doomsday device has backfired spectacularly. She had been improving the structure of the ion flux converter tree, but something went terribly wrong and the flux chains exploded. Some of the ion flux converters survived the explosion intact, but others had their position labels blasted off. She's having her henchmen rebuild the ion flux converter tree by hand, but you think you can do it much more quickly - quickly enough, perhaps, to earn a promotion!
Flux chains require perfect binary trees, so Lambda's design arranged the ion flux converters to form one. To label them, she performed a post-order traversal of the tree of converters and labeled each converter with the order of that converter in the traversal, starting at 1. For example, a tree of 7 converters would look like the following:
7
/ \
3 6
/ \ / \
1 2 4 5
Write a function answer(h, q) - where h is the height of the perfect tree of converters and q is a list of positive integers representing different flux converters - which returns a list of integers p where each element in p is the label of the converter that sits on top of the respective converter in q, or -1 if there is no such converter. For example, answer(3, [1, 4, 7]) would return the converters above the converters at indexes 1, 4, and 7 in a perfect binary tree of height 3, which is [3, 6, -1].
The domain of the integer h is 1 <= h <= 30, where h = 1 represents a perfect binary tree containing only the root, h = 2 represents a perfect binary tree with the root and two leaf nodes, h = 3 represents a perfect binary tree with the root, two internal nodes and four leaf nodes (like the example above), and so forth. The lists q and p contain at least one but no more than 10000 distinct integers, all of which will be between 1 and 2^h-1, inclusive.
The test cases provided are:
Inputs:
(int) h = 3
(int list) q = [7, 3, 5, 1]
Output:
(int list) [-1, 7, 6, 3]
Inputs:
(int) h = 5
(int list) q = [19, 14, 28]
Output:
(int list) [21, 15, 29]
I used a rather complex and dumb way to solve the problem. I created two lists, one recording every pair of children in the binary tree, the other one recording all the corresponding parents to these children in corresponding indexes. So my children list for the height 3 binary tree provided above would look like [[3, 6], [1, 2], [4, 5]], and the relevant part of my parent list would look like [7, 3, 6].
To find the parent of the elements given in array q, I just ran a loop to look for the element in the children list and, after finding it, record the corresponding parent since they have the same index.
import java.util.*;
public class Answer {
public static int[] answer(int h, int[] q) {
int root = ((int) Math.pow(2, h) - 1);
int[] corr = new int[q.length];
List<List<Integer>> childs = new ArrayList<List<Integer>>();
List<Integer> parent = new ArrayList<Integer>();
parent.add(root);
//Loop through each "height"
for(int i = 0; i < h - 1; i++){
int index = 0;
index = parent.size();
//Loop through each element in each height
for(int l = (int) Math.pow(2, i); l > 0; l--){
int x = parent.get(index - l) - (int) Math.pow(2, h-(i+1));
int y = parent.get(index - l) - 1;
childs.add(Arrays.asList(x, y));
parent.add(x);
parent.add(y);
}
}
for(int i = 0; i < q.length; i++){
if(q[i] == root){
corr[i] = -1;
}
for(int l = 0; l < childs.size(); l++){
if(childs.get(l).get(0) == q[i] || childs.get(l).get(1) == q[i]){
corr[i] = parent.get(l);
}
}
}
return corr;
}
}
Like I said, Google returns an OutOfMemoryError for my code during execution.
I totally understand that my method is not the most efficient solution. So really, I am trying to find the most effective way to solve this problem in Java. There must be something really simple that I am just not seeing.
Answer: In your own words: "I totally understand that my method is not the most efficient solution." In a sense, that's the summary of a code review for this problem.
The basic issue in your code is that you have ignored two of the clues in the question:
"Flux chains require perfect binary trees"
"she performed a post-order traversal"
These two clues are begging you to implement a recursive post-order traversal of a fixed depth tree.
If you know a node is missing from the flux chain, then you know that your traversal "up" from that node in a correct tree, will take you to the parent.
A complication in this problem is that you don't know the order of the disconnected flux chain nodes in the input, so you have to "search" for them in the output, but.... with a small trick, you can turn the whole problem in to a single (post-order) traversal of the ideal (correct) flux chain, and a small lookup table for the answer indices.
The above solution would be an \$O(n)\$ time complexity solution, and would use a small amount of memory proportional to the count of values in q.
Now, that's the solution I think they would expect in a good case.... but I suspect that there's a mathematical solution that is much faster.... I am still figuring it out... but, in the mean time, consider this code that does a post-order traversal, and identifies key nodes in the flux chain. It then uses a trick in the return value of the recursive function (negative values for missing links) and a HashMap to identify the keys, and their respective indices in the answer array. This is not my most pretty code, but it serves to show you the post-order traversal with the index lookup:
public static final int[] answer(int height, int[] nodes) {
int[] ans = new int[nodes.length];
Map<Integer, Integer> indices = new HashMap<>();
IntStream.range(0, nodes.length).forEachOrdered(i -> indices.put(nodes[i], i));
int next = postOrder(height, 0, 0, indices, ans);
if (next < 0) {
int i = indices.get(-next);
ans[i] = -1;
}
return ans;
}
private static int postOrder(int limit, int depth, int next, Map<Integer, Integer> indices, int[] ans) {
if (depth == limit) {
return next;
}
// left
int left = postOrder(limit, depth + 1, next, indices, ans);
next = left < 0 ? -left : left;
int right = postOrder(limit, depth + 1, next, indices, ans);
next = right < 0 ? -right : right;
int me = next + 1;
if (left < 0) {
int i = indices.get(-left);
ans[i] = me;
}
if (right < 0) {
int i = indices.get(-right);
ans[i] = me;
}
return indices.containsKey(me) ? -me : me;
}
You can see it running the test-cases in ideone: https://ideone.com/J0lMrf
Update:
I worked out a better solution using a binary search mechanism for locating the parent of a referenced link in the Flux chain.
It is a bit hard to describe in words, but if you inspect the post-ordered tree, you can predict which branch (left or right) to descend to find a node. You can also compute the size of the sub-trees to any node, and thus compute that node's label.
Expressed in code, you can compute the parent of any link in a tree of a given height, with the code:
private static final int parent(int height, int node) {
int size = (int)Math.pow(2, height) - 1;
if (node == size) {
return -1;
}
int before = 0;
do {
if (size == 0) {
throw new IllegalStateException();
}
// size is always odd, and halving it integer-division is also odd.
size >>>= 1;
int left = before + size;
int right = left + size;
int me = right + 1;
if (left == node || right == node) {
return me;
}
if (node > left) {
// nodes to the right have the left as offset.
before = left;
}
} while (true);
}
This makes the computation for any 1 node an \$O(h)\$ operation (the loop halves size once per tree level, i.e. it is logarithmic in the tree size \$2^h - 1\$)... and, if there are q nodes to locate, the overall result would be an \$O(qh)\$ one.
The "final" solution would be:
public static final int[] answerToo(int height, int[] nodes) {
return IntStream.of(nodes).map(n -> parent(height, n)).toArray();
}
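For readers who would rather check the binary-search idea without a JVM, here is a direct Python port of the parent() function above (a sketch, not part of the original answer), verified against the test cases given in the problem statement:

```python
def parent(height, node):
    """Label of the node above `node` in a post-order-labelled perfect
    binary tree of the given height, or -1 for the root (port of the
    Java parent() above)."""
    size = 2 ** height - 1
    if node == size:
        return -1
    before = 0
    while True:
        size >>= 1                 # size of each subtree one level down
        left = before + size       # label of the left child's subtree root
        right = left + size        # label of the right child's subtree root
        me = right + 1             # label of the current subtree's root
        if node in (left, right):
            return me
        if node > left:            # descend right: left subtree is an offset
            before = left

def answer(h, q):
    return [parent(h, n) for n in q]

assert answer(3, [7, 3, 5, 1]) == [-1, 7, 6, 3]
assert answer(5, [19, 14, 28]) == [21, 15, 29]
assert answer(3, [1, 4, 7]) == [3, 6, -1]
```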
I have combined this with the earlier solution, and the result is in ideone too: https://ideone.com/qiwPtR | {
"domain": "codereview.stackexchange",
"id": 23505,
"tags": "java, programming-challenge, tree"
} |
Refer to installed PCL (ARM) | Question:
Hi,
I was able to install PCL 1.7.0 on my ARM-based PandaBoard.
If I now try to install different ROS stacks that depend on PCL it is always telling me that ROS is not able to find PCL. (I tried for example to install RGBD, Navigation, viso2_ros...)
Could anyone explain how do I refer to the PCL in the right way so that it is found by ROS?
Originally posted by RodBelaFarin on ROS Answers with karma: 235 on 2013-10-30
Post score: 0
Original comments
Comment by tfoote on 2013-11-14:
What version of ROS are you trying to use?
Comment by RodBelaFarin on 2013-11-15:
I used ROS Groovy, but I have now upgraded to Hydro and there it works fine!
Answer:
Yeah, In Groovy ROS uses a custom packaged version of PCL. In Hydro it uses the system installed version. So upgrading to Hydro is the easiest solution.
Originally posted by tfoote with karma: 58457 on 2013-11-15
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by RodBelaFarin on 2013-11-16:
Upgrading to Hydro was really the best thing that I could do on my ARM PandaBoard.
Most things work fine, but it's still not perfect.
Today I tried to install ccny and I again get the error that PCL is not found, because I have to build that on my own. Any ideas on how to point THIS one at the right reference?
"domain": "robotics.stackexchange",
"id": 16008,
"tags": "slam, navigation, ccny-rgbd, viso2-ros, pcl-1.7"
} |
What are the Conflicting Predictions of General Relativity & Quantum Mechanics? | Question: I see a lot of questions on various sites about why the two theories are or aren't incompatible, and I'm satisfied as to why that's the case.
However, it has been mentioned that both theories make predictions about phenomena that contradict or are incompatible, and I've been unable to find any examples.
What are the conflicting/contrary/incompatible predictions made by General Relativity versus Quantum Mechanics? Or are the claims false?
Answer: As Mitchell Porter said, we need more than just the absence of self-evident contradictions. We need a theory that encompasses both quantum mechanics and general relativity. The most straightforward, naive union of these frameworks produces a theory that is "nonrenormalizable" – predicts all quantities to be equal to a finite number plus infinity (many types of infinities arise).
String theory is the only known reconciliation of general relativity and quantum mechanics and chances are high that this status won't ever change.
There's of course no contradiction between "appropriately reduced in reach" general relativity and "appropriately tamed" quantum mechanics – after all, to a certain extent, both of these frameworks have been established so there has to exist a more accurate theory that agrees with all the established insights.
However, there are contradictions between quantum mechanics (which seems perfectly exact and valid) and classical general relativity believed literally. For example, classical general relativity paints a black hole as a perfectly determined, unique state of the spacetime. It carries no entropy because it has "no hair" according to classical general relativity. According to quantum mechanics, this can't be the case. A black hole has to carry and does carry a huge entropy – in fact, a greater entropy than any other localized object of the same mass – which is needed for the second law of thermodynamics to hold (entropy has to increase in time) and which is needed to "preserve the information" about the initial state, something that is required by "unitarity" in quantum mechanics. So some "tunneling" of the information from the interior has to be possible. | {
"domain": "physics.stackexchange",
"id": 8701,
"tags": "quantum-mechanics, general-relativity"
} |
What is a synonym of the word design that can be used in context of evolution? | Question: For example, let's take two sentences: "engineer made a design for camera", "evolution made an X for eye". What is the best X that could be used?
I need it for an essay about evolution.
Answer: Although not ideal, "adaptation" is more appropriate than "design" as a noun describing something that has come about in an evolutionary context, even though not all evolution is adaptive.
In writing, though, I would not phrase it as you have, rather I would change the agency from something evolution does as some sort of "entity" which makes it sound like a goal-driven "designer" with whatever words you use, and instead say that "Eyes evolved from simpler photosensitive groups of cells." Eyes deserve the agency, because having eyes is what would improve the survival of an organism versus eyeless conspecifics and increase the proportion of eyed individuals in the next generation (I hope my simplification of the eye-evolving process as if eyes are a simple Mendelian trait isn't too distracting here, I mean it mostly as metaphor), rather than eyes being a "goal" of something called evolution.
Similarly, you would not say "Chemistry makes water from hydrogen and oxygen," rather you would say something like "Hydrogen gas reacts exothermically in the presence of oxygen gas to produce water." There is no agent "chemistry," that is just the word we use to describe the set of processes in the universe that govern how molecules/atoms/subatomic particles interact. Similarly, "evolution" is not an agent who does things.
One place I would differ in this guidance is in talking very big picture, such as saying that "evolution produced the wide variety of species present today"; in that example, there isn't anything more local to act as an agent, and it's appropriate to refer to the evolutionary process as a whole. | {
"domain": "biology.stackexchange",
"id": 10534,
"tags": "evolution, terminology"
} |
gazebo::common::Exception when running Gazebo as a library | Question:
I am trying to run Gazebo as a library using the command
gazebo::setupClient(_argc, _argv);
and loading a world using
gazebo::physics::WorldPtr world = gazebo::loadWorld("worlds/empty.world");
I.e. I want to do what the example custom_main does.
However, when I compile and run this example (after running gazebo in a separate terminal), I get the following run-time error:
terminate called after throwing an instance of 'gazebo::common::Exception'
Aborted (core dumped)
gazebo::setupClient(_argc, _argv); returns true, but gazebo::physics::WorldPtr world == NULL. This is the case no matter the content of the string that I give to gazebo::loadWorld(). Looking at the source, I would expect to see some error messages in the console (or is that not where gzerr prints to?), but I don't, even if the string does not name a world file that exists.
What am I doing wrong?
Thanks
Originally posted by NickDP on Gazebo Answers with karma: 186 on 2014-11-19
Post score: 1
Answer:
You need to use gazebo::setupServer, not gazebo::setupClient. A server loads worlds, runs physics, generates sensor data, etc. A client connects to a server. The gazebo GUI is an example of a client.
Originally posted by nkoenig with karma: 7676 on 2014-11-20
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by NickDP on 2014-11-20:
I see. Now it works, thanks a lot. So if I want to write a main() that manipulates the robot models without using messages, the correct way of doing it is to call gazebo::setupServer() in my main() and then load my world/models thereafter. And then I do gzclient in a terminal? | {
"domain": "robotics.stackexchange",
"id": 3678,
"tags": "gazebo"
} |
Leetcode: Maximum score from performing multiplication operations | Question: I am looking for feedback on how to make my code more performant (take less time) - specifically with a top-down approach.
My specific asks are:
How can I make this code faster? It currently takes around 170+ ms, but I see some solutions that are in the range of 10 ms.
I want to use a more functional approach to this - I want to write memo.entry().or_insert_with_key(), but I can't do that because memo needs to be recursively passed down to the subproblems. What can I do to use such a code structure?
Leetcode problem description:
You are given two 0-indexed integer arrays nums and multipliers of size n and m respectively, where n >= m.
You begin with a score of 0. You want to perform exactly m operations. On the ith operation (0-indexed) you will:
Choose one integer x from either the start or the end of the array nums.
Add multipliers[i] * x to your score.
Note that multipliers[0] corresponds to the first operation, multipliers[1] to the second operation, and so on.
Remove x from nums.
Return the maximum score after performing m operations.
use std::collections::HashMap;
use std::cmp::max;
impl Solution {
pub fn maximum_score(nums: Vec<i32>, multipliers: Vec<i32>) -> i32 {
// State: s (start index for nums), e (end index for nums), k (number of operations remaining)
// Subproblems: dp(s, e, k) = max(dp(s+1, e, k-1)+contrib(s), dp(s, e-1, k-1)+contrib(e))
// Base case: dp(x, x, any >= 1) = contrib(x)
// contrib(x) = multipliers[x] * nums[s/e]
let mut memo: HashMap<(usize, usize, usize), i32> = HashMap::new();
let m = multipliers.len()-1;
Self::dp(0, nums.len()-1, m, &mut memo, &nums, &multipliers, m)
}
fn dp(start_idx: usize, end_idx: usize, num_ops_left: usize, memo: &mut HashMap<(usize, usize, usize), i32>, nums: &Vec<i32>, multipliers: &Vec<i32>, max_ops: usize) -> i32 {
if !memo.contains_key(&(start_idx, end_idx, num_ops_left)) {
let (from_start, from_end) = match num_ops_left {
0 => (0, 0),
_ => (Self::dp(start_idx+1, end_idx, num_ops_left-1, memo, nums, multipliers, max_ops), Self::dp(start_idx, end_idx-1, num_ops_left-1, memo, nums, multipliers, max_ops)),
};
memo.insert((start_idx, end_idx, num_ops_left), max(from_start + multipliers[max_ops - num_ops_left] * nums[start_idx], from_end + multipliers[max_ops - num_ops_left] * nums[end_idx]));
}
*memo.get(&(start_idx, end_idx, num_ops_left)).unwrap()
}
}
Answer: I will not be commenting on the algorithm, as I've got no idea what's optimal. I will, however, be commenting on the code.
Leetcode...
First of all, it's just unidiomatic to:
Create an empty struct just to associate functions to it, free-functions are perfectly acceptable in Rust.
Take Vec<T> as parameter when not intending to modify the vector, &[T] should be preferred instead.
I do understand, though, that those are quite likely to be Leetcode artifacts... but dp is your own function, so &[i32] please.
Naming
The cost of calling a function is independent from the length of its name. Prefer meaningful names, then.
On the other hand, the _idx suffix is not very useful.
Oh... and an alias for that complex HashMap would be nice:
type MemoMap = HashMap<(usize, usize, usize), i32>;
Formatting.
The signature of dp is so long it runs off the side of the page. Use rustfmt (which you can invoke via cargo fmt) to format your code.
Simpler is better
Your use of match num_ops_left to check for 0 is unexpected, you can just use if really:
let (from_start, from_end) = if num_ops_left > 0 {
(Self::dp(...), Self::dp(...))
} else {
(0, 0)
};
In general, I advise using the simplest control-flow that works, so that when a more complex / less constrained control-flow form is used, it calls attention to itself "something special's going on here".
Redundancy is not helpful
You are tracking way too much state:
In your hashmap.
In the dp call.
Let's start with the hashmap: what's num_ops_left for? At any point in time, the number of operations left to do is equal to the number of multipliers to pick from, which itself can be deduced from:
multipliers.len() - (nums.len() - (end - start))
Hence, you can save space in your hashmap, and save time by not hashing redundant information.
Similarly, with regard to the dp call:
As mentioned, num_ops_left is redundant, and can be eliminated.
Similarly, max_ops is redundant.
Instead, you should pass multipliers: &[i32] and trim multipliers at each step so that you only pass the slice of multipliers left to use.
Avoid double-lookup, when possible.
You are correct that unfortunately you are not going to be able to use .entry and pass &mut MemoMap to compute what to insert... but that's the rare case.
The more frequent case should be that the entry is in the cache, and therefore you should vie to optimize this path.
This will also eliminate the very unfortunate unwrap, at the same time.
if let Some(cached) = memo.get(&(start, end)) {
    return *cached;
}
let from_start = Self::dp(...);
let from_end = Self::dp(...);
let score = max(...);
memo.insert((start, end), score);
score
Now, there's only a double-lookup on unsuccessful cache hits.
Hash Flooding
Hash Flooding is a Denial of Service (DoS) attack which appeared in the 2000s. It was mostly used to take down websites, by crafting specific lists of HTTP parameters where all keys would hash to the same hashmap bucket, turning each insertion into an O(N) operation, instead of an O(1) one, and thus turning inserting all the parameters into an O(N^2) operation, rather than an O(N) one. As a result, popular languages such as Python & Ruby changed their default hashing algorithms to be more collision resistant.
When Rust was created, it followed in its predecessors' footsteps, and therefore also picked a more collision resistant hash algorithm by default (SipHash-2-4). Unfortunately for you, it's not exactly a fast hash algorithm. And with hash lookups being a significant part of your workload, it's really something you should investigate.
My typical goto is the fxhash crate, which simply uses the FxHash algorithm with the standard HashMap implementation by defining a type alias. If you can't import it, you should be able to copy/paste part of the code, it's fairly lightweight.
Allocation & re-allocation
Rust collections are typically created empty, with no allocation. In the case of "contiguous" collections, they then follow an exponential growth pattern, where (roughly) the allocation size doubles each time an allocation is required.
In total, it means each element is roughly inserted once and moved once or twice. Not too bad. But not optimal, either.
Instead, you can use their with_capacity factory function to create a suitably sized collection from the get-go, and avoid intermediate reallocations.
You should be able to estimate the number of items to be inserted based on the length of nums and multipliers at the beginning, so just do it.
Revised code
Note: the revised code uses ahash because it's available on the playground, I'd still recommend fxhash for simplicity though.
use std::cmp::max;
use std::collections::HashMap;
use ahash::RandomState;
struct Solution;
impl Solution {
pub fn maximum_score(nums: Vec<i32>, multipliers: Vec<i32>) -> i32 {
// State: s (start index for nums), e (end index for nums), k (number of operations remaining)
// Subproblems: dp(s, e, k) = max(dp(s+1, e, k-1)+contrib(s), dp(s, e-1, k-1)+contrib(e))
// Base case: dp(x, x, any >= 1) = contrib(x)
// contrib(x) = multipliers[x] * nums[s/e]
let mut memo = MemoMap::with_capacity_and_hasher(
multipliers.len() * multipliers.len(),
RandomState::new(),
);
Self::dp(0, nums.len() - 1, &mut memo, &nums, &multipliers)
}
fn dp(start: usize, end: usize, memo: &mut MemoMap, nums: &[i32], multipliers: &[i32]) -> i32 {
let Some((first, tail)) = multipliers.split_first() else {
return 0;
};
if let Some(cached) = memo.get(&(start, end)) {
return *cached;
}
let from_start = Self::dp(start + 1, end, memo, nums, tail);
let from_end = Self::dp(start, end - 1, memo, nums, tail);
let score = max(
from_start + first * nums[start],
from_end + first * nums[end],
);
memo.insert((start, end), score);
score
}
}
type MemoMap = HashMap<(usize, usize), i32, RandomState>;
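As an aside (not part of the review, and independent of the Rust above), the recurrence itself is easy to cross-check in a few lines of Python; the sample case nums = [1, 2, 3], multipliers = [3, 2, 1] with expected answer 14 is the published LeetCode example:

```python
from functools import lru_cache

def maximum_score(nums, multipliers):
    n, m = len(nums), len(multipliers)

    @lru_cache(maxsize=None)
    def dp(i, op):
        # `i` picks were taken from the front so far and `op` ops are done,
        # so the current back index follows from those two numbers alone.
        if op == m:
            return 0
        j = n - 1 - (op - i)
        return max(multipliers[op] * nums[i] + dp(i + 1, op + 1),
                   multipliers[op] * nums[j] + dp(i, op + 1))

    return dp(0, 0)

assert maximum_score([1, 2, 3], [3, 2, 1]) == 14  # take 3, 2, 1 from the end
```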
See it on the playground. | {
"domain": "codereview.stackexchange",
"id": 45121,
"tags": "performance, rust"
} |
Reason why only massless particles can travel at speed of light? | Question: I have read this question:
Do all massless particles (e.g. photon, graviton, gluon) necessarily have the same speed $c$?
And the answer by WetSavannaAnimal aka Rod Vance:
"Incidentally, if we confine massless particles, e.g. put light into a perfectly reflecting box, the box's inertia increases by E/c2E/c2, where EE is the energy content. This is the mechanism for most of your body's mass: massless gluons are confined and are accelerating backwards and forwards all the time, so they have inertia just as the confined light in a box did. Likewise, an electron can be thought of as comprising two massless particles, tethered together by a coupling term that is the mass of the electron. The Dirac and Maxwell equations can be written in the same form: the left and right hand circularly polarized components of light are uncoupled and therefore travel at cc, but the massless left and right hand circular components of the electron are tethered together. This begets the phenomenon of the Zitterbewegung - whereby an electron can be construed as observable at any instant in time as traveling at cc, but it swiftly oscillates back and forth between left and right hand states and is thus confined in one place. Therefore it takes on mass, just as the "tethered" light in the box does."
So is this the reason why anything with mass, like our body, that has "massless gluons are confined and are accelerating backwards and forwards all the time, so they have inertia just as the confined light in a box did", so the gluons are confined, so they must travel back and forth between some kind of confining container, a "wall" or something?
Question:
As this body speeds up and reaches speed c, the gluons inside it will not be able to keep "accelerating backwards and forwards all the time", because they are already at speed c and the "wall/container" they are confined in is already moving at speed c in one direction, so the gluons would never reach the confining wall anymore? So they cannot have inertia anymore, and so the body cannot have mass anymore? (just like the photon clock in SR, where the photons can't reach the mirrors anymore, so time ceases to exist too)
So anything that reaches speed c will lose its rest mass (or must have had no rest mass in the first place)?
Answer: Because in special relativity and in terms of conserved momentum and energy velocity is given by:$$\vec{v} = \frac{\vec{p}c^2}{E},$$ and energy and momentum are related by:$$E^2 = \left(mc^2\right)^2 + (pc)^2,$$ giving:
$$\vec{v} = \frac{\vec{p}}{\sqrt{\left(mc\right)^2 + \left(\vec{p}\right)^2}}c.$$ There is no real momentum, $\vec{p}$ that can give a velocity $\vec{v}$ higher than $c$, and only infinite momentum gives $v=c$ when $m\neq0$.
For the specific question you have, the individual gluons cannot accelerate, but the frequency of gluons traveling in one direction can be different than the frequency of the gluons traveling in the other. This gives the box of gluons, on average, a momentum and kinetic energy. The inertia of the box is a property it has, again averaged over a time long enough for the gluons to bounce back and forth, that is caused by the way gluons reflecting from a moving wall will change frequency in a way that exerts a force on the wall and changes the frequency of the gluons.
It's a classic undergraduate physics problem to calculate the change in frequency caused by the Doppler effect of light reflecting off of a moving object. The momentum and energy needed to cause that change in frequency have to come from somewhere, and it comes from the force exerted on the object the light reflects from. | {
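As a rough illustration of that classic problem (a sketch I'm adding, not part of the original answer): for normal incidence, reflection off a mirror moving toward the source applies the relativistic Doppler factor twice, so the reflected frequency is $f' = f\,(1+\beta)/(1-\beta)$ with $\beta = v/c$.

```python
# Hedged sketch: double Doppler shift for light reflected at normal
# incidence from a mirror approaching the source at speed v = beta * c.
# Each reflection applies the relativistic Doppler factor twice:
#   f_reflected / f_incident = (1 + beta) / (1 - beta)

def reflected_frequency_ratio(beta):
    """Ratio f'/f for reflection off a mirror approaching at v = beta*c."""
    return (1 + beta) / (1 - beta)

# A mirror approaching at 10% of c blueshifts the reflected light by ~22%,
# which is the momentum/energy transfer the answer refers to:
print(reflected_frequency_ratio(0.1))
```

The extra energy carried away by the blueshifted light is exactly what shows up as a force on the moving mirror.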
"domain": "physics.stackexchange",
"id": 90419,
"tags": "special-relativity, mass, speed-of-light"
} |
Are all identical fermions in orthogonal states as opposed to different general states? | Question: A professor told me that most physicists assume that all identical fermions are in completely orthogonal states. If that is true, then does that mean that that the total wave function is highly localized for a closed box of a huge number of electrons? I get confused between when your supposed multiply wavefunctions and when your supposed to add wavefunctions. I know that for one electron, adding all orthogonal states of that one electron in a closed box would yield a highly localized spatial wavefunction.
Answer: What you refer to is probably the ground state of an infinite potential well.
If this is the case, then there, in the ground state, the particles are not localized. You can find the solution of the orthogonal ground-states here: https://en.wikipedia.org/wiki/Infinite_potential_well
We can reduce the problem to a one-dimensional case; that doesn't change the problem or its solution too much.
By orthogonal states it is meant that the integral over the product of the two wave functions vanishes.
So in the case of a one-dimensional box this is fulfilled for the given sine functions. That doesn't imply that the two particles may not be found at the same position in space. If that were the case, the electrons would have to be localized. But this is not the requirement for orthogonal states.
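A quick numerical check (my own sketch, not part of the original answer) makes this concrete for the one-dimensional box of width $L$: the eigenfunctions $\psi_n(x)=\sqrt{2/L}\,\sin(n\pi x/L)$ integrate against each other to zero for $n\neq m$, yet each of them is spread over the whole box.

```python
import math

# Sketch: trapezoidal integration of psi_n * psi_m over [0, L] for the
# infinite square well. Orthogonality (integral = 0 for n != m) holds even
# though both wave functions are nonzero across the whole box, i.e. it does
# not force the particles to be localized in different places.

def overlap(n, m, L=1.0, steps=10000):
    dx = L / steps
    total = 0.0
    for i in range(steps + 1):
        x = i * dx
        weight = 0.5 if i in (0, steps) else 1.0
        psi_n = math.sqrt(2 / L) * math.sin(n * math.pi * x / L)
        psi_m = math.sqrt(2 / L) * math.sin(m * math.pi * x / L)
        total += weight * psi_n * psi_m * dx
    return total

print(abs(overlap(1, 2)) < 1e-9)      # True: different states are orthogonal
print(abs(overlap(1, 1) - 1) < 1e-6)  # True: each state is normalized
```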
I hope this helps to clarify this. | {
"domain": "physics.stackexchange",
"id": 15040,
"tags": "wavefunction, electrons, fermions"
} |
Does a Meson hit earth or not hit earth? | Question: I am told a $\mu$ meson with an average lifespan of $2 \times 10^{-6}$ is created in the upper atmosphere at an altitude of $6000m$. When it is created, it has a velocity of $0.998c$ in a direction towards the Earth.
What is the average distance that it will travel before decaying, as determined by an observer on earth
Consider an observer at rest with respect to the $\mu$ meson. What is the distance he/she measures from the point of creation of the $\mu$ meson to the earth?
The question I have is, is the particle lifespan given measured in the (a) laboratory frame or in the (b) eigentime of the particle? How are these values usually quoted in physics?
If I interpret it as case (a), the answer for $1$ is $9472\ m$. If I interpret it as case (b) the answer for $1$ is $598\ m$. Are my values and understanding correct? If not, where did I go wrong? My answer for $2$ is $379.29\ m$
Answer:
The question I have is, is the particle lifespan given measured in the (a) laboratory frame or in the (b) eigentime of the particle? How are these values usually quoted in physics?
Particle lifetimes are always given in the rest frame of the particle.
As you've seen in your course, this lifetime will look different in other frames of reference, and the specific value of the observed lifetime will depend on the relative velocity between the particle and the frame. As such, it makes no sense to quote values in other frames of reference, as there is an infinity of such values. | {
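To make that concrete, here is a short sketch (mine, not the answerer's) checking the numbers in the question, with the $2\times10^{-6}\ \mathrm{s}$ taken as the rest-frame lifetime and $c$ rounded to $3\times10^{8}\ \mathrm{m/s}$:

```python
import math

# Sketch: muon decay distances, taking the quoted lifetime as proper
# (rest-frame) time, as particle lifetimes are conventionally given.
c = 3e8          # m/s (rounded)
beta = 0.998
tau = 2e-6       # s, proper lifetime
L0 = 6000.0      # m, altitude in the Earth frame

gamma = 1 / math.sqrt(1 - beta**2)

d_earth = gamma * beta * c * tau  # average travel distance, Earth frame
d_proper = beta * c * tau         # distance if no time dilation is applied
L_muon = L0 / gamma               # altitude as measured in the muon frame

print(f"gamma               ~ {gamma:.2f}")      # ~15.82
print(f"Earth-frame dist.   ~ {d_earth:.0f} m")  # ~9.47 km
print(f"undilated dist.     ~ {d_proper:.0f} m") # ~599 m
print(f"muon-frame altitude ~ {L_muon:.1f} m")   # ~379.3 m
```

These reproduce the question's figures of roughly 9472 m and 598 m (small differences come from rounding of $c$), and the ~379.3 m answer to part 2.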
"domain": "physics.stackexchange",
"id": 69003,
"tags": "homework-and-exercises, special-relativity, elementary-particles"
} |
Working Principle of a Transformer | Question: From my physics textbook (Written by Halliday & Resnick),I came to prove that
$$R_{\rm Primary}=\left(\frac{N_{p}}{N_s}\right)^2\times R_{\rm Secondary}$$
This formula is derived from the Conservation of Energy. But if there is a resistance in the primary coil it will dissipate heat following $I^2\times R_{\rm primary}$. Then how can I write:
$$V_{\rm Primary}\times I_{\rm Primary} =V_{\rm secondary}\times I_{\rm Secondary}$$
And the formula "$R_{\rm Primary}=\left(\dfrac{N_p}{N_s}\right)^2\times R_{\rm Secondary}$" comes from that.
Answer:
But if there is a resistance in the primary coil it will dissipate
heat following $I^{2}×R$ (primary).
For an ideal transformer energy is conserved. Power in primary = power in secondary. Therefore there can be no resistance in the primary coil or there will be energy dissipated (lost) as heat in the primary coil. So for an ideal transformer the primary and secondary coils are ideal inductors.
The equation you have written is intended to give the impedance seen at the input to the transformer primary, $Z_p$, not the resistance of the primary coil. The impedance seen at the input of the primary coil is a function of the load impedance, $Z_L$, connected to the secondary, or
$$Z_{p}=\biggl(\frac{N_p}{N_S}\biggr )^{2}Z_L$$
Note that for an ideal transformer, where the primary and secondary coils are considered ideal inductors, the impedance of the primary coil and secondary coils themselves is considered to be purely inductive reactance, i.e., having no resistance.
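As a small numeric illustration of that reflected-impedance formula (my sketch; the component values are made up, not from the question):

```python
# Sketch of the reflected-impedance formula Z_p = (Np/Ns)^2 * Z_L for an
# ideal transformer. The numbers below are illustrative only.

def reflected_impedance(n_primary, n_secondary, z_load):
    return (n_primary / n_secondary) ** 2 * z_load

# A 10:1 step-down transformer driving an 8-ohm load looks like 800 ohms
# from the primary side:
print(reflected_impedance(10, 1, 8.0))   # 800.0

# A 1:2 step-up transformer makes a 100-ohm load look like 25 ohms:
print(reflected_impedance(1, 2, 100.0))  # 25.0
```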
Hope this helps. | {
"domain": "physics.stackexchange",
"id": 72379,
"tags": "electricity"
} |
Finding the frequency response of a filter defined as a Z-domain transfer function | Question: I am software engineer studying DSP on my own. I came across this question in a graduate textbook and am not sure how to proceed.
Determine the frequency response for the filter:
H(z) = 1.5 / (1 + 3z^-1)
Clearly, this is a z-domain transfer function. How do I get the frequency response? I am thinking of converting it into a time-domain difference equation, calculating about 128 points, running it through MATLAB's FFT, plot it, and voila, the final curve would be the answer. Is that a right approach? Is there a more analytic/math-y answer?
Answer: The analytic way is to substitute the variable $z$ by $e^{j\omega}$ to get the frequency response $H(\omega)$ (with $\omega = \frac{2 \pi f}{F_s}$) - that is to say, the frequency response is the $z$ transform evaluated on the unit circle.
Note that matlab has a built-in function for plotting the frequency response straight from filter coefficients (freqz), whose source code can be seen. Have a look! The implementation does not even involve solving the difference equation or filtering an impulse - you can simply divide the FFT of the numerator sequence by the FFT of the denominator sequence - padded up to the desired number of frequency points. | {
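For instance, here is a sketch of the analytic route (my addition, in plain Python rather than MATLAB): evaluating $H(e^{j\omega}) = 1.5/(1+3e^{-j\omega})$ directly gives the frequency response at any $\omega$ without filtering anything.

```python
import cmath
import math

# Sketch: evaluate the transfer function H(z) = 1.5 / (1 + 3 z^-1)
# on the unit circle, z = e^{j omega}, to get the frequency response.

def H(omega):
    z = cmath.exp(1j * omega)
    return 1.5 / (1 + 3 * z ** -1)

# Magnitude at DC (omega = 0) and at the Nyquist frequency (omega = pi):
print(abs(H(0.0)))       # 1.5 / |1 + 3| = 0.375
print(abs(H(math.pi)))   # 1.5 / |1 - 3| = 0.75
```

Sweeping `omega` over a grid from $0$ to $\pi$ and plotting `abs(H(omega))` gives the same curve you would get from `freqz`.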
"domain": "dsp.stackexchange",
"id": 930,
"tags": "dsp-core, frequency-response, transfer-function"
} |
Interpretation of a singular metric | Question: I'm interested to find out if we can say anything useful about spacetime at the singularity in the FLRW metric that occurs at $t = 0$.
If I understand correctly, the FLRW spacetime is a combination of the manifold $\mathbb R^{3,1}$ and the FLRW metric $g$. The metric $g$ has a singularity at $t = 0$ because at that point the proper distance between every pair of spacetime points is zero. Presumably though, however the metric behaves, the manifold remains $\mathbb R^{3,1}$ so we still have a collection (a set?) of points in the manifold. It's just that we can no longer calculate distance between them. Is this a fair comment, or am I being overly optimistic in thinking we can say anything about the spacetime?
I vaguely recall reading that the singularity is considered to be not part of the manifold, so the points with $t = 0$ simply don't exist, though I think this was said about the singularity in the Schwarzschild metric and whether it applies to all singular metrics I don't know.
To try to focus my rather vague question a bit, I'm thinking about a comment made to my question Did the Big Bang happen at a point?. The comment was roughly: yes the Big Bang did happen at a point because at $t = 0$ all points were just one point. If my musings above are correct the comment is untrue because even when the metric is singular we still have the manifold $\mathbb R^{3,1}$ with an infinite number of distinct points. If my memory is correct that the points with $t = 0$ are not part of the manifold then we cannot say the Big Bang happened at a point, or the opposite, because we cannot say anything about the Big Bang at all.
From the purely mathematical perspective I'd be interested to know what if anything can be said about the spacetime at $t = 0$ even if it has no physical relevance.
Answer: The nature of singularities in GR is a delicate issue. A good review of the difficulties involved in defining a singularity is in Geroch's paper What is a singularity in GR?
The problem of attaching a boundary in general to a spacetime is that there is no natural way to do it. For example, in the FRW metric the manifold at $t=0$ can be described in two different coordinate systems as: $$\{t,r\cos\theta,r\sin\theta \cos\phi,r\sin\theta \sin\phi\}$$
or $$\{t,a(t)r\cos\theta,a(t)r\sin\theta \cos\phi,a(t)r\sin\theta \sin\phi\}$$
In the first case we have a three dimensional surface, in the latter a point.
It might be tempting to define a singularity, following other physical theories, as the points where the metric tensor is undefined or below $C^{2}$. However, this is troublesome because in the gravitational case the field also defines the spacetime background. This represents a problem because the size, location and shape of singularities can't be straightforwardly characterized by any physical measurement.
The theorems of Hawking and Penrose, commonly used to show that singularities in GR are generic under certain circumstances, have the conclusion that spacetime must be geodesically incomplete (some light-paths or particle-paths cannot be extended beyond a certain proper time or affine parameter).
As mentioned above, the peculiar characteristic of GR of identifying the field and the background makes the task of assigning a location, shape or size to the singularities very delicate. If one thinks of a singularity of the gravitational potential in classical terms, the statement that the field diverges at a certain location is unambiguous. As an example, take the gravitational potential of a spherical mass $$V(t,r,\theta,\phi)=\frac{GM}{r}$$ with a singularity at the point $r=0$ for any time $t$ in $\mathbb{R}$. The location of the singularity is well defined because the coordinates have an intrinsic character which is independent of $V$ and are defined with respect to the static spacetime background.
However, this prescription doesn't work in GR. Consider the spacetime with metric $$ds^{2}=-\frac{1}{t^{2}}dt^{2}+dx^{2}+dy^{2}+dz^{2}$$ defined on $\{(t,x,y,z)\in \mathbb{R}\backslash \{0\}\times \mathbb{R}^{3}\}$. If we say that there is a singularity at the point $t=0$ we might be speaking too soon, for two reasons. The first is that $t=0$ is not covered by our coordinate chart. It makes no sense to talk about $t=0$ as a point in our manifold using these coordinates. The second is that the lack of an intrinsic meaning of the coordinates in GR must be taken seriously. By making the coordinate transformation $\tau=\log(t)$ we obtain the metric $$ds^{2}=-d\tau^{2}+dx^{2}+dy^{2}+dz^{2},$$ on $\mathbb{R}^{4}$, which is isometric to the previous spacetime defined on $\{(t,x,y,z)\in \mathbb{R}\backslash \{0\}\times \mathbb{R}^{3}\}$. What we have done is find an extension of the metric to $\mathbb{R}^{4}$. The singularity was just a coordinate singularity, similar to the event horizon singularity in Schwarzschild coordinates. The extended spacetime is of course Minkowski spacetime, which is non-singular.
Another approach is to define a singularity in terms of invariant quantities such as scalar polynomials of the curvature.
These are scalars formed from the Riemann tensor. If these quantities diverge it matches our physical idea that an object approaching regions of higher and higher values must suffer stronger and stronger deformations. Also, in many relevant cosmological models like FRW and black hole metrics one can show that this indeed happens. But, as mentioned, the domain of the gravitational field defines the location of events, so a point where the curvature blows up might not even be in the domain. Therefore, we must formalise the following statement: "The scalar diverges as we approach a point that has been cut out of the manifold." If we were in a Riemannian manifold then the metric defines a distance function $$d(x,y):(x,y)\in\cal{M}\times\cal{M}\rightarrow \inf\left\{\int\lVert\dot{\gamma}\rVert \right\}\in\mathbb{R}$$
where the infimum is taken over all piecewise $C^{1}$ curves $\gamma$ from $x$ to $y$. Moreover, the distance function allows us to define a topology. A basis of that topology is given by the balls $B(x,r)=\{y\in{\cal{M}}\,|\, d(x,y)< r\}$ for $x\in \cal{M}$ and $r>0$. The topology naturally induces a notion of convergence. We say the sequence $\{x_{n}\}$ converges to $y$ if for every $\epsilon> 0$ there is an $N\in \mathbb{N}$ such that for any $n\ge N$, $d(x_{n},y)\le \epsilon$. A sequence whose terms eventually become arbitrarily close to one another ($d(x_{n},x_{m})\le \epsilon$ for all $n,m\ge N$) is called a Cauchy sequence. If every Cauchy sequence converges we say that $\cal{M}$ is metrically complete.
Notice that now we can describe points that are not in the manifold as limits of sequences of points that are. The formal statement can then be given as: "The sequence $\{R(x_{n})\}$ diverges as the sequence $\{x_{n}\}$ converges to $y$", where $R(x_{n})$ is some scalar evaluated at $x_{n}$ in $\cal{M}$ and $y$ is some point not necessarily in $\cal{M}$.
In the Riemannian case, if every Cauchy sequence converges in $\cal{M}$ then every geodesic can be extended indefinitely. That means we can take the domain of every geodesic to be $\mathbb{R}$. In this case we say that $\cal{M}$ is geodesically complete. In fact the converse is also true, that is, if $\cal{M}$ is geodesically complete then $\cal{M}$ is metrically complete.
So far, all the discussion has been for Riemannian metrics, but as soon as we move to Lorentzian metrics the previous discussion can't be used as stated. The reason is that Lorentzian metrics don't define a distance function: they do not satisfy the triangle inequality. So we are left only with the notion of geodesic completeness.
The three kinds of vectors available in any Lorentzian metric define three notions of geodesic completeness, depending on the character of the tangent vector of the curve: spacelike completeness, null completeness and timelike completeness. Unfortunately, they are not equivalent; it is possible to construct spacetimes with the following characteristics:
timelike complete, spacelike and null incomplete
spacelike complete, timelike and null incomplete
null complete, timelike and spacelike incomplete
timelike and null complete, spacelike incomplete
spacelike and null complete, timelike incomplete
timelike and spacelike complete, null incomplete
Moreover, in the Riemannian case geodesic completeness implies that every curve is complete, that is, every curve can be arbitrarily extended. Again, in the Lorentzian case that is not so: Geroch constructed an example of a null, timelike and spacelike geodesically complete spacetime with an inextendible timelike curve of finite length. A free-falling particle following this trajectory will accelerate, but in a finite amount of time its spacetime location would stop being represented as a point in the manifold.
Schmidt provided an elegant way to generalise the idea of affine length to all curves, geodesic and non-geodesic. Moreover, in the case of incomplete curves the construction allows one to attach a topological boundary $\partial\cal{M}$, called the b-boundary, to the spacetime $\cal{M}$.
The procedure consists in building a Riemannian metric on the frame bundle $\cal{LM}$. We will use the solder form $\theta$ and the connection form $\omega$ associated to the Levi-Civita connection $\nabla$ on $\cal{M}$. Explicitly,
\begin{equation}
G_{ab}(X_{a},Y_{a})= \theta(X_{a}) \cdot \theta(Y_{a})+\omega(X_{a})\bullet \omega(Y_{a})
\end{equation}
where $X_{a},Y_{a}\in T_{p}\cal{LM}$ and $\cdot,\bullet$ are the standard inner product in $\mathbb{R}^{n}$ and $\mathfrak{g}\cong\mathbb{R}^{n^{2}}$.
Let $\gamma$ be a $C^{1}$ curve through $p$ in $\cal{M}$ and choose a basis $\{E_{a}\}$ of $T_{p}$. Now choose a point $P$ in $\cal{LM}$ such that $\pi(P)=p$ and the basis of $T_{p}$ is given by $\{E_{a}\}$. Using the covariant derivative induced by the metric we can parallel propagate $\{E_{a}\}$ in the direction of $\dot{\gamma}$. This procedure defines a curve $\Upsilon$ in $\cal{LM}$, called the lift of the curve $\gamma$. The length of $\Upsilon$ with respect to the Schmidt metric, $$l=\int_{\tau}\|\dot{\Upsilon} \|_{G}\, dt,$$ is called a generalised affine parameter. If $\gamma$ is a geodesic, $l$ is an affine parameter. If every curve in a spacetime $\cal{M}$ that has finite generalised affine length has endpoints we call the spacetime b-complete. If it is not b-complete we call the spacetime b-incomplete.
A classification of singularities in terms of the b-boundary (see Chapter 8, The Large Scale Structure of Space-Time) was done by Ellis and Schmidt here.
In the case of the FRW metric the b-boundary $\partial\cal{M}$ was computed in this paper. The result is that the boundary is a point. However, the resulting topology on $\partial\cal{M}\cup\cal{M}$ is non-Hausdorff. This means the singularity is in some sense arbitrarily close to any event in spacetime. This was regarded as unphysical, and attempts were made to improve the b-boundary construction, without any of them gaining particular acceptance. Also, the high dimensionality of the bundles involved makes the b-boundary a difficult working tool.
Other types of boundaries can be attached. For example:
Conformal boundaries, used in Penrose diagrams and in the AdS/CFT correspondence. In this case the conformal boundary at $t=0$, as seen here, is a three-dimensional manifold.
Causal boundaries. This construction depends only on the causal structure, so it doesn't distinguish between boundary points at a finite distance and at infinity. (See Chapter 6, The Large Scale Structure of Space-Time.)
Abstract boundary.
I am unaware whether, in the last two cases, explicit calculations have been done for the FRW metric.
"domain": "physics.stackexchange",
"id": 58164,
"tags": "general-relativity, cosmology, metric-tensor, big-bang, singularities"
} |
Optimization FIR filter | Question: I have a doubt about this :
Iterative methods for optimization in FIR filter (on the one hand traditional like window,frequency sampling ,WLS,Remez Exchange...,on the other hand evolutionary methods like SA,PSO,GA....) ,it is used for :
Verify that the filter has linear phase (symmetric).
or the symmetric is satisfied ,but have another problems in this type of
filters should be reduced or eliminated to be used in wider applications.
Please correct me if I am wrong about that.
Answer: I'm not sure if I understand your question correctly, but I'll try to make a few things clear. First of all, note that windowing and frequency sampling are not iterative. On the other hand, the Remez exchange algorithm (used in the Parks-McClellan program) is iterative.
All methods for the design of linear phase FIR filters impose the linear phase constraint in the formulation of the design problem, i.e., the filters are always guaranteed to have a linear phase. The difference between the different methods is either the way the optimal solution is computed, or the design criterion, i.e., the way the error to be minimized is defined. Some common design criteria are the least squared error criterion (used in the Matlab function firls.m), or the maximum (Chebyshev) error criterion, where the maximum error is minimized (used in the Matlab function firpm.m). | {
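To illustrate the point that linear phase is imposed by construction rather than achieved by iteration, here is a sketch (my addition, in plain Python) of the simplest non-iterative method mentioned in the question, the window method; the resulting impulse response is symmetric, which is exactly the linear-phase property:

```python
import math

# Sketch: lowpass FIR design by the window method (Hamming-windowed sinc).
# The taps come out symmetric by construction, i.e. linear phase is
# guaranteed without any iteration.

def windowed_sinc_lowpass(num_taps, cutoff):
    """cutoff is the normalized cutoff frequency in (0, 0.5), fraction of Fs."""
    M = num_taps - 1
    taps = []
    for n in range(num_taps):
        x = n - M / 2.0
        ideal = 2 * cutoff if x == 0 else math.sin(2 * math.pi * cutoff * x) / (math.pi * x)
        hamming = 0.54 - 0.46 * math.cos(2 * math.pi * n / M)
        taps.append(ideal * hamming)
    return taps

h = windowed_sinc_lowpass(21, 0.2)
symmetry_error = max(abs(a - b) for a, b in zip(h, h[::-1]))
print(symmetry_error < 1e-12)   # True: symmetric taps => linear phase
```

An iterative method such as Remez exchange keeps the same symmetric parameterization; the iteration only redistributes the approximation error, never the phase property.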
"domain": "dsp.stackexchange",
"id": 5918,
"tags": "filters, filter-design, finite-impulse-response, optimization, symmetry"
} |
the speed of the exhaust | Question: This is from Stephen Hawking's latest book; Brief Answers to the Big Questions.
The speed at which we can send a rocket is governed by two things, the speed of the exhaust and the fraction of its mass that the rocket loses as it accelerates.
Answer: He refers to Tsiolkovsky's Rocket Equation:
$$ \Delta v=v_e \ln {\frac {m_0}{m_f}} $$
where:
$v_e$ is the exhaust velocity;
${\frac {m_0}{m_f}}$ is the fraction of mass; $m_f$ - the "dry mass"/"final mass" (rocket without fuel) and $m_0$ - "wet mass"/"launch mass" (rocket fully fueled up.)
This equation is one of the most important in rocket science - describing the change of velocity a rocket can achieve. The implications are that the larger the difference between the mass of fuel and the mass of the craft, the larger the velocity achievable (but the $\ln$ results in diminishing returns as mass of fuel is increased) and that engines that impart the exhaust with most velocity provide most performance - linearly, without that pesky $\ln$ - but then... with square root of energy needed; $E={1\over 2} mv^2; v = \sqrt{2E \over m}$. So, increasing power of the engine - chemical energy of fuel, amount of electrical energy imparted by ion drive - results in diminishing returns again.
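A quick numerical sketch of those diminishing returns (my addition; the exhaust velocity and mass figures are illustrative, not from the answer):

```python
import math

# Sketch of the Tsiolkovsky rocket equation, delta_v = v_e * ln(m0 / mf).
# The numbers below are made up for illustration.

def delta_v(v_exhaust, m_wet, m_dry):
    return v_exhaust * math.log(m_wet / m_dry)

# A chemical stage with ~4.4 km/s exhaust and 90% propellant fraction:
print(round(delta_v(4400.0, 10.0, 1.0)))   # ~10131 m/s

# Doubling the mass ratio from 10 to 20 adds only ~3 km/s -- the ln at work:
print(round(delta_v(4400.0, 20.0, 1.0)))   # ~13181 m/s
```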
One of Hawking's last ideas - "Breakthrough Starshot" nicely sidesteps both problems. The propellant is photons, moving at speed of light, and the craft doesn't carry any fuel - the propellant is beamed from a ground-based station through a powerful laser. | {
"domain": "engineering.stackexchange",
"id": 2489,
"tags": "aerospace-engineering, rocketry"
} |
How can we choose any level for gravitational potential energy to be zero? | Question: In my book, I read that we can choose any level as Zero Gravitational P.E. and measure height of objects above it and call its energy 'mgh'. But by saying that all the points on that level is of zero potential, it can be inferred that it is an Equipotential surface but we know that all the points have different potentials (I know , a concentric sphere around earth is an equipotential surface but we often choose zero level that is not earth's surface )
Please explain as it is really confusing me :)
Answer:
In my book, I read that we can choose any level as Zero Gravitational
P.E. and measure height of objects above it and call its energy 'mgh'.
This works because we're almost always only interested in changes in potential energy, rather than absolute values.
Let me show below that the reference point is of no importance in that case.
Let $h_r$ be a reference point where $U=U_r$. You can assume $U_r$ to be unknown. Now we look at an object of mass $m$ that is moved from $h_1$ to $h_2$ and want to know its change in potential energy.
We know that:
$$U_1=U_r+(h_1-h_r)mg$$
$$U_2=U_r+(h_2-h_r)mg$$
The change $\Delta U$ is:
$$\Delta U=U_2-U_1=U_r+(h_2-h_r)mg-[U_r+(h_1-h_r)mg]$$
$$\Delta U=mg(h_2-h_1)$$
So choosing an arbitrary, non-zero reference point gives the same, correct change in potential energy.
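The reference-independence can also be checked numerically; here's a small sketch (my addition, with made-up numbers):

```python
# Sketch: Delta U is independent of the reference height h_r and of the
# (possibly unknown) offset U_r. The mass and heights are illustrative.

def potential(h, m, g, h_r, U_r):
    return U_r + m * g * (h - h_r)

m, g = 2.0, 9.8
h1, h2 = 1.0, 5.0

dU_a = potential(h2, m, g, h_r=0.0, U_r=0.0) - potential(h1, m, g, h_r=0.0, U_r=0.0)
dU_b = potential(h2, m, g, h_r=-3.0, U_r=42.0) - potential(h1, m, g, h_r=-3.0, U_r=42.0)

# Both differences equal m*g*(h2 - h1) ~ 78.4 J, whatever reference we pick:
print(dU_a, dU_b)
```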
Another, more 'absolute' way of looking at it is by looking at the potential energy in a gravitational field$^\dagger$, far away from Earth's surface:
$$U=-\frac{GMm}{r}$$
Here obviously for $r\to +\infty$ then $U=0$. So here $r= +\infty$ is a good zero reference point.
$^\dagger$ http://hyperphysics.phy-astr.gsu.edu/hbase/gpot.html | {
"domain": "physics.stackexchange",
"id": 71059,
"tags": "newtonian-gravity, work, potential-energy, conventions, conservative-field"
} |
Calculations of compound gears? | Question: Recently I've been scratching my head over an assignment, and I have trouble understanding a couple of the parts.
The issue I have is with the overall calculations, since we've just started on mechanical issues and I have no experience except with simple transmissions. Therefore, I am wondering how I calculate the ratio of the compound gear between A and B, and how many turns the worm gear has to make for the C spur gear to travel a specified distance of 100mm.
The question goes as follows;
How many revolutions does the worm gear have to complete, for the
spring to compress entirely?
The C gear is connected to a rack that then compresses a spring, that'll have to travel 100mm.
Thanks!
Answer: In order for a tooth on C to travel 100mm, the teeth on B also need to travel 100mm. Let's not worry about how many revolutions that is.
A has 2.6 times as many teeth as B, and travels the same number of rotations, so that means a tooth on A must travel 260mm.
The pitch of the worm is 4mm, so it needs to rotate 260/4 = 65 times in order to cause the teeth of A to move 260mm, and consequently the teeth of the rack meshed with C to move 100mm.
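The same chain of reasoning as arithmetic (a sketch; the 2.6 tooth ratio, 4mm worm pitch and 100mm travel are the figures assumed in this answer):

```python
# Sketch: worm revolutions needed to move the rack 100 mm, using the
# numbers assumed above (tooth ratio A:B = 2.6, worm pitch 4 mm).

rack_travel = 100.0     # mm that teeth on C (and hence on B) must travel
ratio_A_over_B = 2.6    # A has 2.6x the teeth of B, on the same shaft
worm_pitch = 4.0        # mm of tooth travel on A per worm revolution

tooth_travel_A = rack_travel * ratio_A_over_B   # 260 mm
worm_revolutions = tooth_travel_A / worm_pitch  # 65 revolutions
print(worm_revolutions)
```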
"domain": "engineering.stackexchange",
"id": 2482,
"tags": "mechanical-engineering, transmission"
} |
Generate subsets of k elements from a set of strings | Question: This is the task:
Write a recursive program, which prints all subsets of a given set of N words.
Example input:
words = {'test', 'rock', 'fun'}
Example output:
(), (test), (rock), (fun), (test rock), (test fun),
(rock fun), (test rock fun)
In fact I need to generate all subsets from 0 to words.Length. In Pascal (if anybody knows) there is a function (not sure that it's "function") that looks like that:
var a:set of example
I need the same in C#. This is what I tried (the program works, but it's a lot of code):
class Program
{
static int abc;
static string[] extractedwords;
static int k;
static int margin;
static string[] words = { "coffee", "ice-cream", "chocolate", "red" };
static void Main(string[] args)
{
abc = 0;
k = 1;
Console.WriteLine("The margin of words: ");
margin = int.Parse(Console.ReadLine());
extractedwords = new string[margin];
GenerateWords(0);
}
static void GenerateWords(int n)
{
if (n == k)
{
if (n != 0)
{
for (int s = n - 1; s >= 0; s--)
{
for (int a = 0; a < s; a++)
{
if (extractedwords[s] == extractedwords[a])
{
return;
}
}
}
}
PrintWords(extractedwords);
return;
}
for (int a = 0; a < words.Length; a++)
{
extractedwords[n] = words[a];
GenerateWords(n + 1);
}
if (k >= margin)
{
return;
}
if (n == 0)
{
k++;
GenerateWords(n);
}
}
static void PrintWords(string[] words)
{
for (int n = 0; n < words.Length; n++)
{
Console.Write("{0} ", words[n]);
}
Console.WriteLine();
}
}
Answer: Unused variables
At least one of your variables (abc) is never used. This might be because you've changed the way the code works. If you refactor your code, try to remember to cleanup at the same time / straight after. It's a lot easier to do while the code is fresh in your mind and it prevents large amounts of technical debt building up over time.
margin of words
I found this prompt confusing (maybe it's just me). I think a better prompt would be something like "Please enter the maximum number of words per group: ". Try to be expressive and imagine that you're not going to be the one using your application.
Variable Names
Similarly, think about your variable names. They should express what it is the variables actually represent. Names like k, s, n and a tell me nothing about what the variable represents. This makes the code harder to read because you have to keep referring back to determine the previous context. Single letter variable names can be ok for short iteration variables where the context is obvious, but can you honestly say that this is meaningful in its current state:
for (int s = n - 1; s >= 0; s--)
Magic Numbers
You've got some magic numbers in your code. I'm not totally against using numbers when the context makes it obvious what the numbers mean. However, this isn't always the case with yours. For example:
GenerateWords(0);
When I look at this line, it looks like the method is going to generate 0 words which doesn't make sense. Looking at the function definition, the parameter is labelled as n, which again adds no context.
Console Output
At the moment your outputting straight to the console as you generate the combinations. Generally, you want to try to separate user interaction from your algorithms. This allows you to reuse the algorithms in different contexts, put different front ends on your code etc.
Static State
At the moment, you're storing your state in static variables that are accessed from your recursive method. This is OK in your example code, however if you were to put the method into a library, it would mean that you couldn't call it from two different threads to process different word lists. If the state is passed in, instead, the methods become more flexible.
Minimise what your client needs to know
When you write recursive functions, you'll often need to pass extra parameters into the method in order to support the recursion and the termination conditions. Clients shouldn't need to know this information (this give you more flexibility if you want to change the way the method works in the future).
Putting it together
Putting some of the above together, you might end up with code something like this:
static void Main()
{
var combinations = GenerateWordCombinations(new string[] { "coffee", "ice-cream", "chocolate", "red" }, 2);
combinations.ForEach(x => Console.WriteLine($"({x})"));
}
static public List<string> GenerateWordCombinations(string[] inputWords, int? maxWordsPerCombination = null)
{
var combinations = new List<string> { "" };
GenerateWordCombinations(inputWords, "", maxWordsPerCombination??inputWords.Length, combinations);
return combinations;
}
static private void GenerateWordCombinations(IEnumerable<string> words, string prefix, int maxWordsPerCombination, IList<string> combinations)
{
if(words.Count() == 0 || maxWordsPerCombination == 0)
{
return;
}
foreach(var word in words)
{
GenerateWordCombinations(words.Where(x => x != word), prefix + word + " ", maxWordsPerCombination - 1, combinations);
combinations.Add(prefix + word);
}
}
Things to note about the alternative code above:
I've ignored user input
There's two versions of GenerateWordCombinations, the private one which is recursive and a public one which can be called from clients. The public one accepts a list of words to search for combinations and the maximum number of words to find in each combination. The number of words is optional and defaults to null, so that if it's not supplied the number of words in the supplied list can be used instead.
Rather than printing directly to the console, the GenerateWordCombinations methods add the combinations to a list and then returns the list to the caller.
The code uses some of the collection extension methods to do some of the heavy lifting.
Combinations are output in a different order from your original application. | {
"domain": "codereview.stackexchange",
"id": 30858,
"tags": "c#, recursion, combinatorics"
} |
What's the energy of all the light/electromagnetic radiation in our galaxy? | Question: I came upon this question while watching a pop-sci video on youtube about Dark Matter and thinking about all the things that could be contributing gravitational influence to a galaxy.
From relativity we know that mass and energy are more or less the same and both bend spacetime (i.e. cause gravity). And given how much energy stars give off and how big galaxies are, there is a lot of light, a lot of photons whizzing around, and altogether that adds up to a sizeable chunk of energy. Relatively speaking it is maybe negligible next to ordinary or dark matter, but it should be a big number. Some bounded volume would have to be defined, but I have no idea if physicists have a definition for the boundary of a galaxy or what it is.
Answer: We can easily see without a calculation that this mass-energy is negligible compared to the mass-energy of the stars. The galaxy is somewhere on the order of $10^4$ light years in size. That means that a star's light spends $\sim10^4$ years inside the galaxy before it's gone. So the ratio of the mass-energy of the light in our galaxy to the mass-energy of its stars is on the order of the fraction of the sun's mass that it loses by radiation over $\sim10^4$ years. This is a negligible fraction. (A calculation shows that it's $\sim10^{-9}$.)
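The quoted $\sim10^{-9}$ figure is easy to reproduce with rough stand-in numbers (solar luminosity and mass used for an average star). This back-of-the-envelope check is mine, not part of the original answer:

```python
L_sun = 3.8e26   # W, solar luminosity
M_sun = 2.0e30   # kg, solar mass
c = 3.0e8        # m/s, speed of light
year = 3.15e7    # s

# A photon spends ~10^4 years inside the galaxy before escaping.
t = 1e4 * year
radiated_mass = L_sun * t / c**2   # mass-equivalent of 10^4 years of sunlight
ratio = radiated_mass / M_sun
print(f"{ratio:.1e}")              # on the order of 10^-9, as claimed
```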
There was an era when the universe's gravity was radiation-dominated, but that was in the very early universe. | {
"domain": "physics.stackexchange",
"id": 63246,
"tags": "energy, electromagnetic-radiation, astrophysics, estimation, milky-way"
} |
Is this a good code based node editor "framework"? | Question: Hi, I'm trying to create a C++ code-based node editor (GUI versions to be implemented later). Here is how it works: each node has ports, which can be of type INPUT or OUTPUT (ports only connect to the opposite type, so INPUT->OUTPUT or OUTPUT->INPUT).
Here is the code so far:
main.cpp
#include <iostream>
#include "addition_node.h"
#include "value_node.h"
int main() {
value_node<double>* number1 = new value_node<double>(new double(8.0));
value_node<double>* number2 = new value_node<double>(new double(9.0));
value_node<double>* number3 = new value_node<double>(new double(10.0));
addition_node<double>* add1 = new addition_node<double>();
addition_node<double>* add2 = new addition_node<double>();
addition_node<double>* add3 = new addition_node<double>();
number1->out->connect_to(add1->in1);
number2->out->connect_to(add1->in2);
//std::cout << number1->out->disconnect_from(add1->inputs[0]) << std::endl;
add1->out->connect_to(add2->in1);
number2->out->connect_to(add2->in2);
add2->out->connect_to(add3->in1);
number3->out->connect_to(add3->in2);
//number1->out->disconnect_from(add1->in1);
std::cout << *add3->out->value << std::endl;
}
node_editor.h:
#pragma once
#include <iostream>
#include <vector>
#include <memory.h>
#include <typeinfo>
#include <algorithm> // for std::remove, used by disconnect_from below
enum iotype {
INPUT = true,
OUTPUT = false
};
struct node;
struct port_abstract {
node* parent = nullptr;
std::vector<port_abstract*> connections = {};
iotype io = INPUT;
virtual bool connect_to(port_abstract* p);
virtual bool disconnect_from(port_abstract* p);
virtual void make_empty();
virtual bool is_empty();
virtual void give_value_to(port_abstract* p);
};
template <class T>
struct port : port_abstract {
T* value;
virtual bool connect_to(port_abstract* p);
virtual bool disconnect_from(port_abstract* p);
virtual void make_empty();
virtual bool is_empty();
virtual void give_value_to(port_abstract* p);
};
struct node {
std::vector<port_abstract*> inputs, outputs;
virtual void compute() {};
};
bool is_valid_port(port_abstract* p) {
return p != nullptr && p->parent != nullptr;
}
void nested_compute(port_abstract* p) {
if (is_valid_port(p)) {
p->parent->compute();
for (port_abstract* output : p->parent->outputs) {
for (port_abstract* input : output->connections) {
if (p->is_empty()) {
input->make_empty();
} else {
output->give_value_to(input);
}
nested_compute(input);
}
}
}
}
bool connection_check(port_abstract* p1, port_abstract* p2) {
if (is_valid_port(p1) && is_valid_port(p2)) {
if (p1->io == (!p2->io)) {
std::vector<port_abstract*> ports = {};
(p1->io ? ports = p2->parent->inputs : ports = p2->parent->outputs);
for (port_abstract* p : ports) {
for (port_abstract* c : p->connections) {
if (c->parent == p1->parent) { return false; }
if (!connection_check(p1, c)) { return false; }
}
}
return true;
}
return false;
}
return false;
}
bool port_abstract::connect_to(port_abstract* p) {
if (is_valid_port(p)) {
if (
p->io == (!io) &&
(p->connections.size() != (p->io ? 1 : -1)) &&
(connections.size() != (io ? 1 : -1)) &&
connection_check(this, p)
) {
connections.push_back(p);
p->connections.push_back(this);
(p->io ? nested_compute(p) : nested_compute(this));
return true;
}
return false;
}
return false;
}
bool port_abstract::disconnect_from(port_abstract* p) {
if (is_valid_port(p)) {
size_t original_length = connections.size();
connections.erase(std::remove(connections.begin(), connections.end(), p), connections.end());
size_t new_length = connections.size();
if (new_length != original_length) {
p->connections.erase(std::remove(p->connections.begin(), p->connections.end(), this), p->connections.end());
(p->io ? nested_compute(p) : nested_compute(this));
return true;
}
return false;
}
return false;
}
void port_abstract::make_empty() { return; }
bool port_abstract::is_empty() { return false; }
void port_abstract::give_value_to(port_abstract* p) {}
template <class T>
bool port<T>::connect_to(port_abstract* p) {
port<T>* p1 = dynamic_cast<port<T>*>(p);
if (is_valid_port(p)) {
if (
p1 != 0 &&
p->io == (!io) &&
(p->connections.size() != (p->io ? 1 : -1)) &&
(connections.size() != (io ? 1 : -1)) &&
connection_check(this, p)
) {
connections.push_back(p);
p->connections.push_back(this);
(p->io ? ((port<T>*)p)->value = value : value = ((port<T>*)p)->value);
(p->io ? nested_compute(p) : nested_compute(this));
return true;
}
return false;
}
return false;
}
template <class T>
bool port<T>::disconnect_from(port_abstract* p) {
if (is_valid_port(p)) {
size_t original_length = connections.size();
connections.erase(std::remove(connections.begin(), connections.end(), p), connections.end());
size_t new_length = connections.size();
if (new_length != original_length) {
p->connections.erase(std::remove(p->connections.begin(), p->connections.end(), this), p->connections.end());
(p->io ? ((port<T>*)p)->value = nullptr : value = nullptr);
(p->io ? nested_compute(p) : nested_compute(this));
return true;
}
return false;
}
return false;
}
template <class T>
void port<T>::make_empty() {
if (value) {
value = nullptr;
return;
}
return;
}
template <class T>
bool port<T>::is_empty() {
return value == nullptr;
}
template <class T>
void port<T>::give_value_to(port_abstract* p) {
if (is_valid_port(p)) {
port<T>* p1 = dynamic_cast<port<T>*>(p);
if (p1 != 0) {
((port<T>*)p)->value = value;
}
}
}
value_node.h:
#pragma once
#include "node_editor.h"
template <class T>
struct value_node : node {
port<T>* out = nullptr;
value_node();
value_node(T* value);
virtual void compute();
};
template<class T>
value_node<T>::value_node() {
out = new port<T>();
out->io = OUTPUT;
out->parent = this;
*out->value = 0;
this->outputs.push_back(out);
}
template<class T>
value_node<T>::value_node(T* value) {
out = new port<T>();
out->io = OUTPUT;
out->parent = this;
out->value = value;
this->outputs.push_back(out);
}
template<class T>
void value_node<T>::compute() {}
addition_node.h:
#pragma once
#include "node_editor.h"
template<class T>
struct addition_node : node {
port<T>* in1;
port<T>* in2;
port<T>* out;
addition_node();
virtual void compute();
};
template<class T>
addition_node<T>::addition_node() {
in1 = new port<T>();
in2 = new port<T>();
out = new port<T>();
out->io = OUTPUT;
in1->parent = this;
in2->parent = this;
out->parent = this;
this->inputs.push_back(in1);
this->inputs.push_back(in2);
this->outputs.push_back(out);
}
template<class T>
void addition_node<T>::compute() {
std::cout << "compute" << std::endl;
if (in1 && in2) {
if (in1->value && in2->value) {
out->value = new T;
*out->value = *in1->value + *in2->value;
return;
}
}
//std::cout << '0' << std::endl;
out->value = nullptr;
}
Any advice is greatly appreciated!
Answer: I just recently wrote a similar bit of code, and I put it through half a dozen iterations before I was finally happy with it. Along the way, I ran into several issues you're probably encountering with this system. I'll talk through some of the pitfalls I ran into and how I dealt with them. Before that, here are some more general modifications I'd suggest...
Physical State vs. Logical State
Don't expose implementation -- C++ allows for access modifiers for exactly this purpose. For logical abstractions like this, favor classes and keep internal implementation (like pointers to parent and child nodes/ports) private. PHYSICAL state shouldn't matter to the user -- the values of your class's internal members. LOGICAL state should be accessible and (where applicable) mutable to the user. A connection is a logical abstraction from a set of child-to-parent pointers -- let the user think about the connection, you mess with the pointers.
In the same vein, the current design has too much latitude -- writing a check to ensure that two output ports aren't connected should hint to you that you are exposing too much functionality to the client code. If output ports shouldn't be connected together, your code should never allow it to happen.
As for the node system and its implementation, there are several design choices to be made. First is your choice to tackle this problem with polymorphism -- a natural place to go given that every type of node is-a node. Each node contains a list of inputs and outputs. A drawback to this design choice is that the evaluation algorithm is tightly coupled to the hierarchy itself. A potential alternative would be to have a free function to traverse the hierarchy and evaluate it:
template <typename T> T evaluate(port<T> port) {
// ...
}
This allows slightly more latitude, as the internal node implementation no longer matters as long as the interface the function used to traverse the node graph doesn't change. The decreased coupling means that you could do something like this if you ever needed to:
template <typename T> std::string asExpression(port<T> port) {
// Returns the string "(2 * 3) + 2"
}
template <typename T> T evaluate(port<T> port) {
// Returns the number 8
}
Memory Management
Another drawback of using polymorphism is copy semantics. Consider the factory method:
node<float> getSomeUsefulFunction() {
// ...
}
How is this function implemented? If we create several node instances on the stack, point them at each other and return the root, its parent nodes go out of scope and we get an access violation trying to evaluate the result. Do we allocate heap memory for those nodes? Who deletes them? How do you decide which ones to delete? If a node has the same parent for two inputs, a naive algorithm could accidentally delete an object twice. Even if you decide who owns what and who deletes what, you'd still have problems if you ever wanted to copy-construct the whole hierarchy. You don't know the type of each node, so you can't simply copy-construct them. The typical solution to this dilemma is often known as the prototype pattern -- each node must implement a virtual clone method returning a node instance of the same type and with identical fields. Here again you run into memory management issues -- how do you allocate and manage memory you create within clone()?
You could try to use shared memory by only using shared pointers to these structures, or specifically enforcing it by making node structures work like shared pointers to an internal implementation. However, with this method, we no longer have a zero-cost abstraction -- we're using garbage collection. More importantly, though, does this choice properly reflect the intent of your program -- are the nodes supposed to have shared ownership? It might be a better idea to create a class to represent a node graph and force nodes to be created as part of a node graph. This way ownership is clear and undisputed. This means that the factory from earlier could be modified like this:
node_graph<float> getSomeUsefulFunction() {
node_graph<float> graph;
node<float>& n = // note the return by reference -- the memory is acquired
// and initialized by the node_graph and you get a handle
graph.add_node();
// ...
return graph; // Copy constructor implementation here is critical
}
At this point, you could even reference input/output nodes simply by their index within the node_graph's internal node container (although this quickly creates a problem for insertion or removal of nodes -- but it makes copying the structure much easier).
This brings me to the non-polymorphic node class. Instead of making a subclass for every different function, you could add a separate field, possibly an enum, to track the node's function. In other words, replace addition_node, multiplication_node, etc. with node(func::add), node(func::multiply), etc. It might look like this:
enum class node_func { add, subtract, value };
template <typename T> class node {
public:
node_func func;
T value;
}
template <typename T> T evaluate(node<T> n) {
switch (n.func) {
case node_func::value:
return n.value;
case node_func::add:
return evaluate(*n.input[0]) + evaluate(*n.input[1]);
// etc.
}
}
As long as your node_graph's copy and move constructors are properly implemented, you won't have problems with access violations because all the nodes used in the graph come with it in the copy/move. You won't have problems with deleting because the graph can simply delete all of its nodes when it goes out of scope with the assurance that nothing else should be referencing the nodes within that system.
The biggest drawback for this design lies within the copy/move functionality. Because the nodes would all contain pointers to nodes memory-managed by their parent node_graph, the node_graph would need to take special care to make sure all of these pointers are valid. A possible algorithm might look something like this:
node_graph::node_graph(const node_graph& other) {
m_nodes = other.m_nodes;
for (node& n : m_nodes) {
// for each pointer, get the index in 'other' of the node it points to
// set that pointer to point at our own node at that index
}
}
Recursion
One final consideration is recursion -- what happens when you connect a node's output to a node that eventually drives it? Your evaluation code will go on forever. Note that with the free-function design I proposed earlier, your code wouldn't necessarily ever encounter a problem with recursion -- for example, if you used the system to represent a set of logic gates and you wanted to simulate signals propagating through, recursion would be a common occurrence. Your evaluation function could then simply use only the current input values to advance the simulation instead of recursing all the way down the hierarchy into an infinite loop. Thus, recursion within a node graph system is not inherently evil. That said, you're still best off adding a function to traverse the hierarchy and detect loops. That way, when a recursive setup would result in an infinite evaluation, you can handle the case before you attempt to evaluate. | {
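The loop-detection traversal suggested in the last paragraph is a standard three-color depth-first search. A minimal sketch follows; it assumes a plain adjacency-dict representation rather than the post's C++ node types, so the shape is mine, not the answerer's:

```python
def has_cycle(graph):
    """graph: dict mapping node -> list of downstream nodes it drives."""
    WHITE, GRAY, BLACK = 0, 1, 2      # unvisited / on current DFS path / finished
    color = {n: WHITE for n in graph}

    def visit(n):
        color[n] = GRAY
        for m in graph.get(n, ()):
            if color.get(m, WHITE) == GRAY:   # back edge: a node eventually drives itself
                return True
            if color.get(m, WHITE) == WHITE and visit(m):
                return True
        color[n] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in graph)
```

Running this before connecting a port lets you reject a connection that would make evaluation recurse forever.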
"domain": "codereview.stackexchange",
"id": 39072,
"tags": "c++"
} |
Relationship between toxicity of drugs and negative effects on brain | Question: Are psychoactive drugs with lower lethal doses more neurotoxic (more damaging to the brain)? For example, tetrahydrocannabinol (one of the active components of cannabis) has a much higher lethal dose than benzoylmethylecgonine (cocaine), so could we deduce that cocaine is more damaging to the brain?
Moreover, is a mathematical relationship between neurotoxicity and lethal doses known?
Answer: Short answer
The causes of death after heroin, cocaine, or cannabis overdose are mainly due to cardiac and respiratory arrest, and not to neurotoxic effects.
Background
The cause of death after a lethal overdose of your mentioned drugs are the following :
Cocaine (lethal dose: 30 mg - 5 g via mucous membrane (EMCDDA)): Cocaine-related deaths are often a result of cardiac arrest followed by an arrest of breathing (NIH). Cardiac effects are mediated via increased sympathetic output and a local anesthetic effect (Schwartz et al., 2010);
THC (lethal dose: ~1000 mg/kg i.v. in primates (Drug library)): No known cases of human fatalities. Toxicity appears as tachypnea (rapid breathing), tachycardia (fast heart rate), ataxia, hyperexcitability, and seizures (Fitzgerald et al., 2013), but death seems mostly associated with respiratory arrest and cardiac failure (Drug library);
Heroin: (lethal dose 200 - 2000 mg i.v.(EMCDDA)): overdose often due to respiratory arrest (Anoro et al. 2004) caused by mu-opiate receptor activation in the brainstem (Karch, 2006).
Hence, heroin, THC, and cocaine are lethal mainly due to peripheral causes (i.e., cardiac and respiratory arrest). Although these effects are mediated, at least partly, through central mechanisms (i.e., occurring in the central nervous system and specifically the brain), the drugs themselves do not cause death because of neurotoxicity.
And a closing comment, with credits to @MarchHo - In general, neurotoxins do not cause death due to neural toxicity per se. Most notably even botulotoxin, being one of the most potent neurotoxins known, causes death due to respiratory failure (CDC).
References
- Anorro et al., Rev Esp Salud Publica (2004); 78(5): 601-8
- Fitzgerald et al., Top Companion Anim Med (2013); 28(1):8-12
- Karch, Drug Abuse Handbook (2006)
- Schwartz et al., Circulation (2010); 122:2558-69 | {
"domain": "biology.stackexchange",
"id": 3973,
"tags": "neuroscience, pharmacology, toxicology"
} |
(CNN+)RNN-HMM hybrid for learning phonemes from a spectrogram | Question: I am currently working on a speech recognition task, applying deep learning to the standard acoustic model (GMM-HMM).
I've currently generated a spectrogram of my utterances, and using simple pattern recognition managed to reach a 40% WER on a yes/no dataset. Not great, but a start. The CNN is being fed a context window of 40 frames, in which the center frame is the one being classified. My question is whether an RNN could benefit here, so that the RNN handles the context and the CNN does "image analysis" on a single-frame spectrogram.
If so, would it cause problems implementation-wise? With the CNN alone, context dependency was handled by doing pattern recognition on a larger part of the spectrogram, sized by the context window. But with an RNN introduced, the CNN only has to analyze one frame at a time (which I suspect is too little to get proper results from). Can the single-frame information be piped into the RNN until a certain context size has been reached, and if so, how?
Answer: Have you seen this...
http://ieeexplore.ieee.org/document/7953168/
... We propose to use a recently developed deep learning model, recurrent convolutional neural network (RCNN), for speech processing, which inherits some merits of recurrent neural network (RNN) and convolutional neural network (CNN). The core module can be viewed as a convolutional layer embedded with an RNN, which enables the model to capture both temporal and frequency dependance in the spectrogram of the speech in an efficient way.
Sounds pretty close to what you are searching for, or trying to implement.
"domain": "datascience.stackexchange",
"id": 1839,
"tags": "keras, rnn, convolutional-neural-network, audio-recognition"
} |
Friction between two rough surfaces | Question: When two bodies move relative to each other, a force of friction arises between them. I wanted to ask: if there were two rough bodies, each with a different coefficient of friction, how would we compute the friction between them? Please give an example if possible.
Thanks
Answer: Friction is not a property of a single surface or material, but of a pair of them.
There's no such thing as "the friction of steel", but rather, e.g., "the friction of steel on ice (0.03) or steel on steel (0.8)". And that is still not accounting for surface properties, temperature, etc. That's why coefficient of friction tables (check this one, from Wikipedia) always lists pairs of materials. | {
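Concretely, once the pair-specific coefficient has been looked up, the kinetic friction magnitude is just $F = \mu N$. A tiny sketch, using the illustrative coefficients quoted above and an assumed 10 kg block:

```python
def friction_force(mu, normal_force):
    """Kinetic friction magnitude F = mu * N for a given material pair."""
    return mu * normal_force

# A 10 kg steel block (N = m*g = 10 * 9.8 = 98 N) sliding on two surfaces:
N = 10 * 9.8
print(friction_force(0.03, N))  # steel on ice   -> about 2.9 N
print(friction_force(0.8, N))   # steel on steel -> about 78 N
```

The same block experiences very different friction depending on what it slides against, which is exactly why the coefficient belongs to the pair, not the material.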
"domain": "physics.stackexchange",
"id": 47580,
"tags": "friction"
} |
Checking which pypi packages are Py3k-only | Question: I looked at the PyPI classifiers and a guide to help me write the following script, and I'm wondering if it can be improved:
import xmlrpc.client
# pypi language version classifiers
PY3 = ["Programming Language :: Python :: 3"]
PY2 = ["Programming Language :: Python :: 2"]
PY27 = ["Programming Language :: Python :: 2.7"]
PY26 = ["Programming Language :: Python :: 2.6"]
PY25 = ["Programming Language :: Python :: 2.5"]
PY24 = ["Programming Language :: Python :: 2.4"]
PY23 = ["Programming Language :: Python :: 2.3"]
def main():
client = xmlrpc.client.ServerProxy('http://pypi.python.org/pypi')
# get module metadata
py3names = [name[0] for name in client.browse(PY3)]
py2names = [name[0] for name in client.browse(PY2)]
py27names = [name[0] for name in client.browse(PY27)]
py26names = [name[0] for name in client.browse(PY26)]
py25names = [name[0] for name in client.browse(PY25)]
py24names = [name[0] for name in client.browse(PY24)]
py23names = [name[0] for name in client.browse(PY23)]
cnt = 0
for py3name in py3names:
if py3name not in py27names \
and py3name not in py26names \
and py3name not in py25names \
and py3name not in py24names \
and py3name not in py23names \
and py3name not in py2names:
cnt += 1
print("Python3-only packages: {}".format(cnt))
main()
Sidenote:
$ time python3 py3k-only.py
Python3-only packages: 259
real 0m17.312s
user 0m0.324s
sys 0m0.012s
In addition, can you spot any functionality bugs in there? Will it give accurate results, assuming that pypi has correct info?
Answer: import xmlrpc.client
# pypi language version classifiers
PY3 = ["Programming Language :: Python :: 3"]
Why do you have a single string in a list?
PY2 = ["Programming Language :: Python :: 2"]
PY27 = ["Programming Language :: Python :: 2.7"]
PY26 = ["Programming Language :: Python :: 2.6"]
PY25 = ["Programming Language :: Python :: 2.5"]
PY24 = ["Programming Language :: Python :: 2.4"]
PY23 = ["Programming Language :: Python :: 2.3"]
You should put all these strings in one list.
def main():
client = xmlrpc.client.ServerProxy('http://pypi.python.org/pypi')
# get module metadata
py3names = [name[0] for name in client.browse(PY3)]
py2names = [name[0] for name in client.browse(PY2)]
py27names = [name[0] for name in client.browse(PY27)]
py26names = [name[0] for name in client.browse(PY26)]
py25names = [name[0] for name in client.browse(PY25)]
py24names = [name[0] for name in client.browse(PY24)]
py23names = [name[0] for name in client.browse(PY23)]
If you put the Python 2.x versions in one list, you should be able to fetch all of this data into one big list rather than maintaining all of these separate lists.
cnt = 0
Don't abbreviate needlessly; spell out counter.
for py3name in py3names:
if py3name not in py27names \
and py3name not in py26names \
and py3name not in py25names \
and py3name not in py24names \
and py3name not in py23names \
and py3name not in py2names:
cnt += 1
I'd do something like: python3_only = [name for name in py3names if name not in py2names]. Then I'd get the number of packages as the len of that.
print("Python3-only packages: {}".format(cnt))
main()
Usual practice is to put the call to main inside if __name__ == '__main__': so that it only gets run when this script is executed as the main script.
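Putting the review's suggestions together (one classifier list, one combined Python 2 name set, a set difference) might look like the sketch below. This is a hedged rewrite, not tested against the live PyPI index; names_for and py3_only_count are names I've made up, and the XML-RPC client is passed in so the logic can be exercised with a stub:

```python
PY3_CLASSIFIER = "Programming Language :: Python :: 3"
PY2_CLASSIFIERS = [
    "Programming Language :: Python :: 2",
    "Programming Language :: Python :: 2.7",
    "Programming Language :: Python :: 2.6",
    "Programming Language :: Python :: 2.5",
    "Programming Language :: Python :: 2.4",
    "Programming Language :: Python :: 2.3",
]

def names_for(client, classifier):
    """First field of every package the index lists under one classifier."""
    return {row[0] for row in client.browse([classifier])}

def py3_only_count(client):
    py3_names = names_for(client, PY3_CLASSIFIER)
    py2_names = set()
    for classifier in PY2_CLASSIFIERS:
        py2_names |= names_for(client, classifier)
    return len(py3_names - py2_names)
```

Using sets also turns the original's repeated `not in` list scans into O(1) membership checks.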
"domain": "codereview.stackexchange",
"id": 1199,
"tags": "python"
} |
How to save and test a CNN model on the test set after training | Question: My CNN model is trained on the training set and validated on the validation set; now I want to test it on the test set. Here is my code:
x_img = tf.placeholder(tf.float32, name='x_img')
y_label = tf.placeholder(tf.float32, name='y_label')
reshape = tf.reshape(x_img, shape=[-1, img_x, img_y, img_z, 1], name='reshape')
def CNN_Model(input):
conv1 = conv_layer(reshape, num_channels, n_f_conv1, name="conv1")
max_pool1 = maxpool_layer(conv1, name="max_pool1")
conv2 = conv_layer(max_pool1, n_f_conv1, n_f_conv2, name="conv2")
max_pool2 = maxpool_layer(conv2, name="max_pool2")
shape = 4*4*4*64
flattened = tf.reshape(max_pool2,shape=[-1, shape], name='flattened')
fc = fc_layer(flattened, shape, n_node_fc, name="fc")
dropout1 = dropout(fc, keep_rate, name="dropout1")
output_layer = output(dropout1, n_node_fc, num_classes, name="output_layer")
return output_layer
def train_CNN(input):
train_predict = CNN_Model(x_img)
with tf.variable_scope("cross_entropy", reuse=tf.AUTO_REUSE):
lose = tf.nn.softmax_cross_entropy_with_logits_v2(logits=train_predict, labels=y_label, name='cross_entropy')
cost = tf.reduce_mean(lose, name='reduce_mean_cost')
tf.summary.scalar("cost", cost)
with tf.variable_scope("optimization", reuse=tf.AUTO_REUSE):
optimizer = tf.train.AdamOptimizer(learning_rate, name='AdamOptimizer').minimize(cost)
init = tf.global_variables_initializer()
print("Starting session...")
with tf.Session() as sess:
sess.run(init)
all_time = 0
batch_size = 120
batch = 0
print("Starting training...")
for epoch in range(num_epochs):
train_batch = train_data[batch:batch_size]
batch += batch_size
batch_size += batch_size
start_time = time.time()
ep_loss = 0
for data in train_batch:
X = data[0]
Y = data[1]
_, c = sess.run([optimizer, cost], feed_dict={x_img: X, y_label: Y})
ep_loss += c
end_time = time.time()
all_time += int(end_time-start_time)
print('Epoch', epoch+1, 'completed out of',num_epochs,'loss:',ep_loss,
'time usage: '+str(int(end_time-start_time))+' seconds')
correct_predict = tf.equal(tf.argmax(train_predict, 1), tf.argmax(y_label, 1))
accuracy = tf.reduce_mean(tf.cast(correct_predict, tf.float32), name='reduce_mean_acc')
print("Validation accuracy:", accuracy.eval({x_img:[i[0] for i in validate_data],
y_label:[i[1] for i in validate_data]}))
print("Test accuracy:", accuracy.eval({x_img:[i[0] for i in test_data],
y_label:[i[1] for i in test_data]}))
I have a test dataset stored as test_data, like train_data in the code above. I've tried to do it in more than one way but did not succeed. Can anyone share testing code with me, of course based on my code?
Answer: I don't know exactly where your problem is, but based on the comments, take a look at the following line.
_, c = sess.run([optimizer, cost], feed_dict={x_img: X, y_label: Y})
feed_dict is used for passing data to your network. As you can see, X is the training data. You can replace it with the test data. You should also change the y_labels to the labels of the test data. | {
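For large test sets it is also common to evaluate in batches rather than feeding the whole split at once. A framework-free sketch of that loop follows; predict_fn is a hypothetical stand-in for a sess.run call that returns predicted labels:

```python
def batched_accuracy(predict_fn, images, labels, batch_size=64):
    """Average accuracy over a dataset, evaluated batch by batch."""
    correct = 0
    for start in range(0, len(images), batch_size):
        xs = images[start:start + batch_size]
        ys = labels[start:start + batch_size]
        preds = predict_fn(xs)   # e.g. sess.run(predictions, feed_dict={x_img: xs})
        correct += sum(p == y for p, y in zip(preds, ys))
    return correct / len(images)
```

Swapping in the test split is then just a matter of passing test_data's images and labels to the same loop used for validation.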
"domain": "datascience.stackexchange",
"id": 3732,
"tags": "machine-learning, neural-network, deep-learning, tensorflow, cnn"
} |
Indices of true values | Question: My task is to find the indices of a vector of booleans that are true. I'm coming from a JavaScript background and would like to learn how to write this in idiomatic Rust.
fn main() {
let fold_func = |acc: Vec<usize>, p: (usize, &bool)| -> Vec<usize> {
if *p.1 {
let mut clone = acc.clone();
clone.push(p.0);
clone
} else {
acc
}
};
let v = vec![true,false,true];
let result = v.iter().enumerate().fold(vec![], fold_func);
print!("{:?}", result) //[0, 2]
}
Answer:
Spaces come after commas
- let v = vec![true,false,true];
+ let v = vec![true, false, true];
Define the closure inline in the fold call. This allows removing the explicit types
Cloning the accumulator is more expensive than it needs to be. You have been given ownership of the vector, just declare it mutable and push to it.
The code now looks like:
let result = v.iter().enumerate().fold(vec![], |mut acc, p| {
if *p.1 {
acc.push(p.0);
acc
} else {
acc
}
});
Extract out the common part of the closure that returns the acc
Destructure p to give the components names.
The code now looks like:
let result = v.iter().enumerate().fold(vec![], |mut acc, (index, value)| {
if *value {
acc.push(index);
}
acc
});
I advocate basically memorizing all the methods on Iterator. In this case, filter, map, and collect allow writing the entire thing more succinctly, equally efficiently, as well as more flexibly.
There's no need to allocate the Vec, an array / slice works the same.
The code now looks like:
fn main() {
let v = [true, false, true];
let result: Vec<_> = v.iter()
.enumerate()
.filter(|&(_, &value)| value)
.map(|(index, _)| index)
.collect();
println!("{:?}", result);
}
I would expect that filter() filters out non true values, so indices would shift, but it seems like it's not happening
Yes, filter does remove the non-true values. The important thing to recognize is where the indices are added.
With iter().enumerate().filter(), enumeration happens before the filtering. Each index is added based on an unfiltered iterator.
If the code had instead been iter().filter().enumerate(), the enumeration happening after the filtering, then the index would have been added based on the filtered iterator. This would result in the indices 0, 1, 2, ..., which isn't very useful, as you point out. | {
"domain": "codereview.stackexchange",
"id": 25047,
"tags": "rust"
} |
Python for data analytics | Question: What are some data analytics packages and features in Python that help with data analysis?
Answer: You're looking for this answer: https://www.quora.com/Why-is-Python-a-language-of-choice-for-data-scientists | {
"domain": "datascience.stackexchange",
"id": 215,
"tags": "data-mining, python"
} |
Simple password generator | Question: Here is a simple password generator in Java:
public class PasswordGenerator {
public static final String LOWER_CASE = "abcdefghijklmnopqrstuvwxyz";
public static final String UPPER_CASE = LOWER_CASE.toUpperCase();
public static final String DIGITS = "0123456789";
public static final String PUNCTUATION_MARKS = "!,.:;@#$%^*()_-+={}\"<>?/\\№";
public static final String SIMILAR_CHARACTERS = "Ll1ioO0";
static class CharacterSet {
private String include;
private String exclude;
public CharacterSet() {
this.include = "";
this.exclude = "";
}
public void include(String str) {
include += str;
}
public void exclude(String str) {
exclude += str;
}
public char randomCharacter(Random random) {
int randomIndex;
char randomCharacter;
do {
randomIndex = random.nextInt(include.length());
randomCharacter = include.charAt(randomIndex);
} while (exclude.contains(String.valueOf(randomCharacter)));
return randomCharacter;
}
}
public String generatePassword(PasswordSettings passwordSettings) {
CharacterSet characterSet = characterSetFromSettings(passwordSettings);
StringBuilder passwordBuilder = new StringBuilder();
Random random = new Random(System.currentTimeMillis());
for (int i = 0; i < passwordSettings.getLength(); ++i) {
passwordBuilder.append(characterSet.randomCharacter(random));
}
return passwordBuilder.toString();
}
private CharacterSet characterSetFromSettings(PasswordSettings passwordSettings) {
CharacterSet characterSet = new CharacterSet();
if (passwordSettings.isLowerCase()) {
characterSet.include(LOWER_CASE);
}
if (passwordSettings.isUpperCase()) {
characterSet.include(UPPER_CASE);
}
if (passwordSettings.isDigits()) {
characterSet.include(DIGITS);
}
if (passwordSettings.isPunctuationMarks()) {
characterSet.include(PUNCTUATION_MARKS);
}
if (passwordSettings.isExcludeSimilarCharacters()) {
characterSet.exclude(SIMILAR_CHARACTERS);
}
return characterSet;
}
}
PasswordSettings is just a POJO class.
public class PasswordSettings implements Parcelable {
public static final Parcelable.Creator<PasswordSettings> CREATOR = new Parcelable.Creator<PasswordSettings>() {
@Override
public PasswordSettings createFromParcel(Parcel source) {
return new PasswordSettings(source);
}
@Override
public PasswordSettings[] newArray(int size) {
return new PasswordSettings[size];
}
};
private int length;
private boolean lowerCase;
private boolean upperCase;
private boolean digits;
private boolean punctuationMarks;
private boolean excludeSimilarCharacters;
public int getLength() {
return length;
}
public void setLength(int length) {
this.length = length;
}
public boolean isLowerCase() {
return lowerCase;
}
public void setLowerCase(boolean lowerCase) {
this.lowerCase = lowerCase;
}
public boolean isUpperCase() {
return upperCase;
}
public void setUpperCase(boolean upperCase) {
this.upperCase = upperCase;
}
public boolean isDigits() {
return digits;
}
public void setDigits(boolean digits) {
this.digits = digits;
}
public boolean isPunctuationMarks() {
return punctuationMarks;
}
public void setPunctuationMarks(boolean punctuationMarks) {
this.punctuationMarks = punctuationMarks;
}
public boolean isExcludeSimilarCharacters() {
return excludeSimilarCharacters;
}
public void setExcludeSimilarCharacters(boolean excludeSimilarCharacters) {
this.excludeSimilarCharacters = excludeSimilarCharacters;
}
public PasswordSettings() {
// empty constructor
}
public PasswordSettings(Parcel in) {
length = in.readInt();
lowerCase = ParcelHelper.readBoolean(in);
upperCase = ParcelHelper.readBoolean(in);
digits = ParcelHelper.readBoolean(in);
punctuationMarks = ParcelHelper.readBoolean(in);
excludeSimilarCharacters = ParcelHelper.readBoolean(in);
}
@Override
public int describeContents() {
return 0;
}
@Override
public void writeToParcel(Parcel dest, int flags) {
dest.writeInt(length);
ParcelHelper.writeBoolean(dest, lowerCase);
ParcelHelper.writeBoolean(dest, upperCase);
ParcelHelper.writeBoolean(dest, digits);
ParcelHelper.writeBoolean(dest, punctuationMarks);
ParcelHelper.writeBoolean(dest, excludeSimilarCharacters);
}
}
Answer: Keeping included and excluded characters, and picking a random character from the included set until you find one that's not in the excluded set is kinda ugly.
If you think about it, in theory, it's possible that you will keep picking excluded elements forever... Sure, the chances of that may be infinitesimally slim, but this is still ugly.
Alternatively, you could generate a character set from which the characters to exclude have already been excluded. Consider something like this:
public static class CharacterSet {
private final String characters;
private final Random random;
private CharacterSet(String characters) {
this.characters = characters;
this.random = new Random();
}
public char getRandom() {
return characters.charAt(random.nextInt(characters.length()));
}
public static class Builder { ... }
}
This CharacterSet class is a more clear and tight abstract data type:
It contains a string of characters : naturally, hence the class name!
It contains a getRandom() method to pick a random character
Notice that I dropped "character" from the method name: it was redundant, thanks to the class name
I also dropped the Random parameter, and moved it to a private field instead. This simplifies usage, with little to no drawbacks
It contains a Builder, which can encapsulate the details of building up the character set from elements to include and exclude
Then the Builder can be implemented this way:
public static class Builder {
private List<Character> included = new ArrayList<>();
private Set<Character> excluded = new HashSet<>();
public void include(String str) {
for (char c : str.toCharArray()) {
included.add(c);
}
}
public void exclude(String str) {
for (char c : str.toCharArray()) {
excluded.add(c);
}
}
public CharacterSet build() {
StringBuilder builder = new StringBuilder(included.size());
for (char c : included) {
if (!excluded.contains(c)) {
builder.append(c);
}
}
return new CharacterSet(builder.toString());
}
}
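Putting the two strategies side by side, here is a quick illustrative sketch (Python for brevity; the SIMILAR set and function names are made up for this example, not part of the code under review):

```python
import random

SIMILAR = set("Il1O0")   # characters that look alike (illustrative set)

def rejection_pick(included, excluded, rng):
    # Original approach: resample until the draw is not excluded.
    # Terminates with probability 1, but has no worst-case bound.
    while True:
        c = rng.choice(included)
        if c not in excluded:
            return c

def prefiltered_pick(included, excluded, rng):
    # Builder approach: drop excluded characters once, up front,
    # so every subsequent draw succeeds in one step.
    allowed = [c for c in included if c not in excluded]
    return rng.choice(allowed)

rng = random.Random(42)
pw = ''.join(prefiltered_pick("abcdefI1l", SIMILAR, rng) for _ in range(8))
```

The prefiltered form pays the exclusion cost once per character set instead of (potentially) once per draw, which is exactly what the Builder above does.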
And the method that creates a CharacterSet from PasswordSettings can be rewritten in terms of the Builder like this:
private CharacterSet characterSetFromSettings(PasswordSettings passwordSettings) {
CharacterSet.Builder builder = new CharacterSet.Builder();
if (passwordSettings.isLowerCase()) {
builder.include(LOWER_CASE);
}
if (passwordSettings.isUpperCase()) {
builder.include(UPPER_CASE);
}
if (passwordSettings.isDigits()) {
builder.include(DIGITS);
}
if (passwordSettings.isPunctuationMarks()) {
builder.include(PUNCTUATION_MARKS);
}
if (passwordSettings.isExcludeSimilarCharacters()) {
builder.exclude(SIMILAR_CHARACTERS);
}
return builder.build();
}
PasswordSettings
I'm wondering if you really need the setters.
It would be great if you could remove them,
and the parameterless constructor,
and make the class immutable. | {
"domain": "codereview.stackexchange",
"id": 17007,
"tags": "java, security, random"
} |
Is there a way to test if two NFAs accept the same language? | Question: Or at least generate a set of strings that one NFA accepts, so I can feed it into the other NFA. If I do a search through every path of the NFA, will that work? Although that will take a long time.
Answer: The decision problem is PSPACE-complete as Shaull noted.
However, it turns out that in practice it is often possible to decide NFA equivalence reasonably quickly. Mayr and Clemente (based on experimental evidence) claim that the average-case complexity scales quadratically. Their techniques rely on pruning the underlying labelled transition system via local approximations of trace inclusions.
Just like SAT is NP-complete in a worst-case analysis, yet often turns out surprisingly tractable for real-world instances, it therefore seems likely that NFA equivalence can be decided efficiently for many real-world instances.
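For small instances, the naive check (run the subset construction on both NFAs in lockstep and search for a reachable pair of state sets where exactly one side accepts) is easy to sketch. The following Python is purely illustrative; the tuple encoding of an NFA and the example machines are assumptions made for this sketch, and the worst case is still exponential:

```python
from collections import deque

def nfa_equivalent(nfa1, nfa2, alphabet):
    """BFS over pairs of determinized state sets.

    Each NFA is (start_states, accept_states, delta), where
    delta maps (state, symbol) -> set of successor states.
    Returns False as soon as a reachable pair disagrees on acceptance.
    """
    def step(states, sym, delta):
        nxt = set()
        for s in states:
            nxt |= delta.get((s, sym), set())
        return frozenset(nxt)

    start = (frozenset(nfa1[0]), frozenset(nfa2[0]))
    seen = {start}
    queue = deque([start])
    while queue:
        a, b = queue.popleft()
        # A reachable pair where exactly one machine accepts is a witness
        if bool(a & nfa1[1]) != bool(b & nfa2[1]):
            return False
        for sym in alphabet:
            pair = (step(a, sym, nfa1[2]), step(b, sym, nfa2[2]))
            if pair not in seen:
                seen.add(pair)
                queue.append(pair)
    return True

# Two machines for "strings over {a, b} ending in a":
nfa = ({0}, {1}, {(0, 'a'): {0, 1}, (0, 'b'): {0}})
dfa = ({'p'}, {'q'}, {('p', 'a'): {'q'}, ('p', 'b'): {'p'},
                      ('q', 'a'): {'q'}, ('q', 'b'): {'p'}})
same = nfa_equivalent(nfa, dfa, 'ab')
```

This is the brute-force baseline that the pruning techniques of Mayr and Clemente improve upon.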
Richard Mayr and Lorenzo Clemente, Advanced automata minimization, POPL 2013, doi:10.1145/2429069.2429079 (preprint) | {
"domain": "cs.stackexchange",
"id": 1489,
"tags": "algorithms, regular-languages, finite-automata"
} |
Use DOM and XPath to make some changes in HTML document | Question: I intend to make some changes in an HTML document, like removing, replacing, or appending some nodes.
I have several arrays with same structure like the following example
$patterns[] = array(
'xpath' => '/html/body/div/p',
'insert' => $new_a,
'task' => 'replace'
);
...
$patterns[] = array(
'xpath' => '/html/body/header',
'insert' => $new_b,
'task' => 'remove'
);
and I use the following snippet
$dom = new DOMDocument();
if ((empty($html) !== true) && ($dom->loadHTML($html) === true))
{
$dom->formatOutput = true;
// Use DomXPath
$xpath = new DomXPath($dom);
// Process Dom changes
foreach ($patterns as $key => $value)
{
$xpath_results = $xpath->query($patterns[$key]['xpath']);
if ($xpath_results->length)
{
// Assign some values
$xpath_results = $xpath_results->item(0);
$task = $patterns[$key]['task'];
$replacement = $patterns[$key]['insert'];
if ($task != 'remove')
{
// Create the replacement
$newNode = $dom->createDocumentFragment();
$newNode->appendXML($replacement);
}
switch ($task)
{
case 'remove':
$xpath_results->parentNode->removeChild($xpath_results);
break;
case 'replace':
$xpath_results->parentNode->replaceChild($newNode, $xpath_results);
break;
case 'append':
$xpath_results->appendChild($newNode);
break;
case 'prepend':
$xpath_results->parentNode->insertBefore($newNode, $xpath_results);
break;
}
}
}
// Save Dom
$html = $dom->saveXML($dom->documentElement);
}
I would like your expert comments about whether this is done the right way: the foreach to loop through the array elements, the switch to choose what will be done, and whether the use of createDocumentFragment is appropriate in this case.
Answer: Your code seems to do the right job overall, and in the right way; I can't see how you could really improve performance.
I only noticed a few bad points:
Unless I missed some subtlety, there is a typo in the case 'append': branch: it should be using $xpath_results->parentNode, like the others.
You didn't take into account a possible error condition at $newNode->appendXML($replacement);: it may return FALSE, so you should cancel the process in that case.
The same applies to a possible wrong value in the task index of a pattern: it should be handled through the default: branch of the switch().
In the other hand I would suggest some improvements at the coding level itself, essentially for readability.
You're using foreach ($patterns as $key => $value), but then always write $patterns[$key][...], so you force PHP to do a bit of extra work while $value is never used.
I suggest using foreach ($patterns as $value) instead, then $value[...].
(or, even more readable, foreach ($patterns as $pattern), then $pattern[...])
Still for readability I would add a separate variable to clearly distinguish the successive "avatars" of $xpath_results, which is first a DOMNodeList then a DOMNode.
So you could change your DOMNode extraction to $node = $xpath_results->item(0);, and all following code becomes much more obvious.
Your // Assign some values part could be reduced to the $node assignment above: the other two variables are only used once, so you could merely write $newNode->appendXML($pattern['insert']); and switch ($pattern['task']).
So here is my suggestion for the whole snippet after the remarks above:
if ($html) {
$dom = new DOMDocument();
if (!$dom->loadHTML($html)) {
$error = 'Error while loading source HTML!';
} else {
$dom->formatOutput = true;
// Use DomXPath
$xpath = new DomXPath($dom);
// Process Dom changes
foreach ($patterns as $pattern) {
$xpath_results = $xpath->query($pattern['xpath']);
if ($xpath_results->length) {
$node = $xpath_results->item(0);
if ($pattern['task'] != 'remove') {
// Create the replacement
$newNode = $dom->createDocumentFragment();
if (!$newNode->appendXML($pattern['insert'])) {
$error = "Wrong insert pattern: {$pattern['insert']}!";
break;
}
}
if (!@$error) {
switch ($pattern['task']) {
case 'remove':
$node->parentNode->removeChild($node);
break;
case 'replace':
$node->parentNode->replaceChild($newNode, $node);
break;
case 'append':
$node->parentNode->appendChild($newNode);
break;
case 'prepend':
$node->parentNode->insertBefore($newNode, $node);
break;
default:
$error = "Wrong task pattern: $pattern['task']!";
}
}
}
}
}
if (@$error) {
// print or return $error...
} else {
// Save Dom
$html = $dom->saveXML($dom->documentElement);
}
}
For the error processing, I arbitrarily chose one simple option among numerous possibilities. You can notice:
The @$error form: it is a simple and clear way to allow using variables set only in certain cases without having defined them previously.
Still for readability, I changed the initial tests into simpler (and sufficient) forms: if ($html) and if (!$dom->loadHTML($html)).
"domain": "codereview.stackexchange",
"id": 14227,
"tags": "php, performance, dom, xpath"
} |
2D board game: good Model part? | Question: First time writing a big project in OOP. I am quite used to scientific programming but not to OOP, and even less to building GUIs. I am writing a 2D board game: the player can move on a map from tile to tile, meet Helpers and Enemies, and win a Trophy in the end.
I used the MVC pattern and the Model parts consist of the following classes: Board, Tile, Player, Opponent (abstract), Enemy, Helper, Trophy, Position, and HighScoreManager.
Putting them all here would be too much I guess, so here are the "core" files for the Model part:
Board.java
package model;
import java.util.Observable;
public class Board extends Observable {
// VARIABLES
private Player player;
private Tile[][] grid = new Tile[WIDTH][HEIGHT];
private Trophy trophy;
private Position activePosition = initialPosition;
private boolean gameFinished = false;
private HighScoreManager highScoreManager = new HighScoreManager();
// constants
static final int WIDTH = 10; // in number of tiles
static final int HEIGHT = 10; // in number of tiles
static final int TOTAL_NUMBER_OF_ENEMIES = 4;
static final int TOTAL_NUMBER_OF_HELPERS = 4;
// user input
int DIFFICULTY_LEVEL = 1;
// initial values
static final int initialXPosition = 0;
static final int initialYPosition = 0;
static final Position initialPosition = new Position(initialXPosition, initialYPosition);
static final int xTrophy = (int) (WIDTH*0.75);
static final int yTrophy = (int) (HEIGHT*0.75);
// CONSTRUCTOR
// METHODS
public void initBoard() {
player = new Player(initialPosition, DIFFICULTY_LEVEL);
trophy = new Trophy();
activePosition = initialPosition;
gameFinished = false;
highScoreManager.setHighScoreValue();
int numberOfHelpers = 0;
int numberOfEnemies = 0;
// create all the tiles
for (int i=0; i<WIDTH; i++) {
for (int j=0; j<HEIGHT; j++) {
this.grid[i][j] = new Tile("grass");
}
}
// add player
grid[initialXPosition][initialYPosition].setPlayer(player);
// add trophy
grid[xTrophy][yTrophy].setOpponent(trophy);
// randomly add enemies (only on even coordinates to spread them)
while (numberOfEnemies < TOTAL_NUMBER_OF_ENEMIES) {
int x = 2*(1 + (int)(Math.random()*(WIDTH/2-1)));
int y = 2*(1 + (int)(Math.random()*(HEIGHT/2-1)));
if (grid[x][y].getOpponent() == null) {
grid[x][y].setOpponent(new Enemy());
numberOfEnemies ++;
}
}
// randomly add helpers (only on even coordinates to spread them)
while (numberOfHelpers < TOTAL_NUMBER_OF_HELPERS) {
int x = 2*(1 + (int)(Math.random()*(WIDTH/2-1)));
int y = 2*(1 + (int)(Math.random()*(HEIGHT/2-1)));
if (grid[x][y].getOpponent() == null) {
grid[x][y].setOpponent(new Helper());
numberOfHelpers ++;
}
}
this.setChanged();
this.notifyObservers();
}
public boolean isGameOver() {
return (player.getStepsLeft() == 0 && trophy.getWon());
}
public Position computeDestination(String direction) {
Position position = player.getPosition();
Position destination = new Position(-10,-10); // TODO CHANGE THAT
switch(direction) {
case "left":
destination = position.plus(0,-1);
break;
case "right":
destination = position.plus(0,+1);
break;
case "down":
destination = position.plus(+1,0);
break;
case "up":
destination = position.plus(-1,0);
break;
default:
break;
}
return destination;
}
public boolean isWithinBounds(Position destination) {
int x = destination.getX();
int y = destination.getY();
return ( 0 <= x
&& x < WIDTH
&& 0 <= y
&& y < HEIGHT);
}
public boolean isAdjacent(Position destination) {
Position position = player.getPosition();
return( position.plus(0, +1).equals(destination)
|| position.plus(0, -1).equals(destination)
|| position.plus(+1, 0).equals(destination)
|| position.plus(-1, 0).equals(destination));
}
public boolean isValidDestination(Position destination) {
return (isAdjacent(destination) && isWithinBounds(destination)) ;
}
public void makeMove(Position destination) {
if (isValidDestination(destination)) {
int oldX = player.getPosition().getX();
int oldY = player.getPosition().getY();
int newX = destination.getX();
int newY = destination.getY();
grid[oldX][oldY].setPlayer(null); // remove player from old tile
player.move(destination); // move player
grid[newX][newY].setPlayer(player); // set player on new tile
this.setChanged();
this.notifyObservers(); // notify view of position change
activePosition = player.getPosition();
}
}
public boolean isInteractionPossible() {
int x = player.getPosition().getX();
int y = player.getPosition().getY();
return (grid[x][y].isInteractionPossible());
}
public void handleInteraction(Player player) {
int x = player.getPosition().getX();
int y = player.getPosition().getY();
if (grid[x][y].isInteractionPossible()) {
grid[x][y].handleInteraction(player);
}
this.setChanged();
this.notifyObservers(player);
if (this.getPlayer().getStepsLeft()==0 || trophy.getWon()) { // check if game is finished
gameFinished = true;
highScoreManager.setScore(this.getPlayer().getScore());
}
}
// GETTERS
public Player getPlayer() {
return player;
}
public Tile[][] getGrid() {
return grid;
}
public Tile getTile(int i, int j) {
return grid[i][j];
}
public Trophy getTrophy() {
return trophy;
}
public Position getActivePosition() {
return activePosition;
}
public boolean getGameFinished() {
return gameFinished;
}
public HighScoreManager getHighScoreManager() {
return highScoreManager;
}
public Tile getActiveTile() {
int x = activePosition.getX();
int y = activePosition.getY();
return grid[x][y];
}
}
Player.java
package model;
public class Player {
// VARIABLES
private Position position;
private int score = 0;
private int stepsLeft;
private int fightingSkill;
private int jokingSkill;
private int visionScope = 2;
private String skillChoice;
// CONSTRUCTOR
public Player(Position position, int difficultyLevel) {
this.position = position;
switch(difficultyLevel) {
case 1:
this.stepsLeft = 150;
this.fightingSkill = 5;
this.jokingSkill = 5;
case 2:
this.stepsLeft = 150;
this.fightingSkill = 2;
this.jokingSkill = 2;
case 3:
this.stepsLeft = 100;
this.fightingSkill = 2;
this.jokingSkill = 2;
case 4:
this.stepsLeft = 10;
this.fightingSkill = 1;
this.jokingSkill = 1;
}
}
// METHODS ------------------------------
public void move(Position destination) {
setPosition(destination);
stepsLeft -= 1;
}
public void increaseScore(int amount) {
score += amount;
}
public void increaseStepsLeft(int amount) {
stepsLeft+= amount;
}
public void increaseFightingSkill(int amount) {
fightingSkill += amount;
}
public void increaseJokingSkill(int amount) {
jokingSkill += amount;
}
// GETTERS
public Position getPosition() {
return position;
}
public int getScore() {
return score;
}
public int getStepsLeft() {
return stepsLeft;
}
public int getFightingSkill() {
return fightingSkill;
}
public int getJokingSkill() {
return jokingSkill;
}
public String getSkillChoice() {
return skillChoice;
}
public int getVisionScope() {
return visionScope;
}
// SETTERS
public void setPosition(Position position) {
this.position = position;
}
public void setSkillChoice(int choice) {
switch(choice) {
case 0:
this.skillChoice = "joke";
break;
case 1:
this.skillChoice = "fight";
break;
case 2:
this.skillChoice = "steps";
break;
default:
;
}
}
@Override
public String toString() {
return "play \t" + position + "\t"+ stepsLeft
+ "\t"+ jokingSkill + "\t"+ score + "\n";
}
}
Opponent.java
package model;
public abstract class Opponent {
// VARIABLES
private int bonus; // amount of skill points the player earns from that opponent
private static final int MAX_BONUS = 5;
// CONSTRUCTOR
public Opponent() {
this.bonus = 1 + (int)(Math.random()*(MAX_BONUS-1)); // generate bonus between 1 and 5
}
// METHODS
public abstract void interactWith(Player player);
public void increaseFightingSkill(Player player) {
player.increaseFightingSkill(bonus);
}
public void increaseJokingSkill(Player player) {
player.increaseJokingSkill(bonus);
}
// GETTERS
public int getBonus() {
return bonus;
}
// SETTERS
public void setBonus(int bonus) {
this.bonus = bonus;
}
}
Enemy.java
package model;
public class Enemy extends Opponent{
// VARIABLES
private static final int MAX_JOKING_THRESHOLD = 10;
private static final int MAX_FIGHTING_THRESHOLD = 10;
private int jokeThreshold;
private int fightThreshold;
private String[] options = {"joke", "fight"};
// CONSTRUCTOR
public Enemy() {
super();
this.jokeThreshold = 1 + (int) (Math.random()*(MAX_JOKING_THRESHOLD - 1));
this.fightThreshold = 1 + (int) (Math.random()*(MAX_FIGHTING_THRESHOLD - 1));
}
// METHODS
public void interactWith(Player player) {
String choice = player.getSkillChoice();
if (choice=="fight" && player.getFightingSkill() > fightThreshold) {
increaseFightingSkill(player);
player.increaseScore(this.getBonus());
} else if (choice=="joke" && player.getJokingSkill() > jokeThreshold) {
this.increaseJokingSkill(player);
player.increaseScore(this.getBonus());
}
}
// GETTERS
public String[] getOptions() {
return options;
}
// SETTERS
public String toString() {
return "enem";
}
}
Helper.java
package model;
public class Helper extends Opponent {
/*
* When the player encounters a Helper, the player is asked what skill he wants to
* improve (or get extra steps), and gets extra points in the skill of his choice
*/
// VARIABLES
private static final int MIN_STEPS_BONUS = 5;
private int stepsBonus;
private String[] options = {"joke", "fight", "steps"};
// CONSTRUCTOR
public Helper() {
super();
this.stepsBonus = 1 + (int)(3*(Math.random()*(MIN_STEPS_BONUS -1))); // generate magic between 5 and 15
}
// METHODS
public void interactWith(Player player) {
String choice = player.getSkillChoice();
if (choice == "fight") {
this.increaseFightingSkill(player);
} else if (choice == "joke") {
this.increaseJokingSkill(player);
} else if (choice == "steps") {
this.increaseStepsLeft(player);
}
}
public void increaseStepsLeft(Player player) {
player.increaseStepsLeft(stepsBonus);
}
// GETTERS
public int getStepsBonus() {
return stepsBonus;
}
public String[] getOptions() {
return options;
}
// SETTERS
public String toString() {
return "help";
}
}
Tile.java
package model;
public class Tile {
// VARIABLES
private String terrain;
private Opponent opponent;
private Player player;
// CONSTRUCTORS
public Tile(String terrain) {
this.terrain = terrain;
}
// METHODS
public boolean isInteractionPossible() {
return (opponent != null);
}
public void handleInteraction(Player player) {
this.opponent.interactWith(player);
}
// GETTERS
public String getTerrain() {
return terrain;
}
public Opponent getOpponent() {
return opponent;
}
public Player getPlayer() {
return player;
}
// SETTERS
public void setTerrain(String terrain) {
this.terrain = terrain;
}
public void setOpponent(Opponent opponent) {
this.opponent = opponent;
}
public void setPlayer(Player player) {
this.player = player;
}
@Override
public String toString() {
return "\n"+player + "\t"+opponent;
}
}
My questions
Any comment about structure or good practice is welcome, there is probably plenty to say.
I tried using the Observer-Observable pattern making Board the only observable to make it "philosophically simple". But then I feel like it gets messy when I want to notify different events to the GUI. Is there a better structure or a better way to send the notifications?
I've tried as much as possible to keep each class in the model independent and self-contained, to be able to reuse e.g. the HighScoreManager for another game. I don't know whether it is always the best choice.
In case you want to see it, the whole project is on GitHub.
Answer: Things that I liked
Separation of Code and UI.
Clear naming of methods and variables. I didn't need any documentation with it, the names were enough.
Things that I didn't like
Why? Why would you use the this keyword everywhere? It is not required and reduces readability.
No need to label things like // VARIABLES, // CONSTRUCTOR. If you are using an IDE like Eclipse, use Ctrl + O and you can see all the methods, constructors and what not.
Things that you should include in your project
More description about the game in README.md.
LICENSE, the code over here is CC-3.0 licensed, but you haven't mentioned any in your GitHub project.
Some Suggestions
Use of Negation
Use negation to reduce the indentation and increase readability. (Though it might be a little confusing at first)
Old onKeyPressed()
if (!board.getGameFinished()) { // keep moving only is game not finished
...
if (direction!="") {
destination = board.computeDestination(direction);
...
}
}
Changed onKeyPressed()
if (board.getGameFinished())
return;
...
if (direction.isEmpty()) // compare Strings with equals()/isEmpty(), not ==
return;
...
destination = board.computeDestination(direction);
board.makeMove(destination);
Use of Switch-Case statement
Avoid using MAGIC NUMBERS.
Usage of numbers like -1, 0, 1. What do these numbers mean? Maybe you can write a comment and understand them today, but after 2-3 months you wouldn't be able to figure out their meaning.
switch (box.getChoice()) { // take action depending on user's answer to
case -1: //close
break;
case 0: //new game
board.initBoard();
break;
case 1: //high scores
String highScore = board.getHighScoreManager().getHighScore();
gui.showHighScore(highScore);
break;
case 2: //close
gui.askExitConfirmation();
break;
default:
;
}
Instead, make them private static final variables and use those. Or better yet, make an enum out of them.
public enum EndOfGame {
CLOSE_1(-1), NEW_GAME(0), HIGH_SCORES(1), CLOSE_2(2);
private int value;
private EndOfGame(int value) {
this.value = value;
}
public int getValue() {
return value;
}
public static EndOfGame fromValue(int value) {
for (EndOfGame endOfGame : EndOfGame.values()) {
if (endOfGame.getValue() == value)
return endOfGame;
}
return null;
}
}
and use it for switch-case as -
switch (EndOfGame.fromValue(box.getChoice())) {
case CLOSE_1:
break;
case NEW_GAME:
board.initBoard();
break;
case HIGH_SCORES:
String highScore = board.getHighScoreManager().getHighScore();
gui.showHighScore(highScore);
break;
case CLOSE_2:
gui.askExitConfirmation();
break;
default:
break;
}
Comments and Indentation
Ctrl + Shift + F is your friend. Use it often; it will format your code for you.
Don't write code like this with comments describing every variable
public EndOfGameDialogBox(JPanel parentPane) {
choice = JOptionPane.showOptionDialog( parentPane, //parent pane
message,
title,
JOptionPane.YES_NO_CANCEL_OPTION, //type of options
JOptionPane.QUESTION_MESSAGE, //type of message
null, //icon
options, //list of buttons
options[0]); //default focus on first button
}
Instead, attach javadoc to your project, and you will be able to see the description of all the method arguments.
UI code on UI Thread
Go through the description of running UI code on the EDT, as described here.
"domain": "codereview.stackexchange",
"id": 11477,
"tags": "java, game, mvc, observer-pattern"
} |
Maximum velocity of interactions | Question: In chapter 1, Section 1, para 7, of Landau & Lifshitz, Classical Theory of Fields, they argue that if a body moves faster than maximum velocity $V_m$ of interactions, that implies we can have an interaction with velocity greater than $V_m$, which in turn proves $V_m$ is not the maximum. Hence bodies have an upper bound on velocity. Can someone elaborate this argument?
Answer:
It is clear that the existence of a maximum velocity of propagation of interactions implies, at the same time, that motions of bodies with greater velocity than this are in general impossible in nature. For if such a motion
could occur, then by means of it one could realize an interaction with a velocity exceeding the maximum possible velocity of propagation of interactions.
L.D. Landau, E.M. Lifshitz "The Classical Theory Of Fields" (p. 1-2)
If you could move from $A$ to $B$ faster than $V_m$, then you could take something (e.g. energy and momentum) from $A$ and give it to $B$. The fact that $B$ received this from $A$ would constitute an interaction of $A$ with $B$ with the velocity larger than $V_m$. This would be impossible, because we already know that $V_m$ is the largest possible velocity of interactions. For this reason you cannot possibly move faster than $V_m$. | {
"domain": "physics.stackexchange",
"id": 51644,
"tags": "field-theory, causality, interactions"
} |
Confusion with potential in simple pendulum | Question: I'm a maths student taking a course in classical mechanics and I'm having some confusion with the definition of a potential.
If we consider a simple pendulum then the forces acting on the end are $mg$ and $T$. Now I know that the potential is defined such that $F = -\nabla V$. Now I also know that the total energy of this system is $$\frac{1}{2}m \dot{\vec{x}}^2 + mgz.$$ Now if we take the gradient of the potential we have $(0,0,mg)$. My question is, why doesn't the potential involve the tension in the pendulum?
Answer: From the perspective of Lagrangian mechanics, the tension $T$ is a constraint force that does no virtual work. Can you see why? Hence it can then be ignored in the Lagrangian formulation, cf. D'Alembert's principle. See also e.g. this Phys.SE post. The only remaining force in the Lagrangian formulation is gravity, which we encode via its corresponding potential $V=mgz$.
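To see the "no virtual work" statement explicitly (a short sketch): the tension points along the rod, $\vec{T} = -\frac{T}{\ell}\vec{x}$, while any virtual displacement compatible with the constraint $|\vec{x}| = \ell$ satisfies $\vec{x}\cdot\delta\vec{x} = \tfrac{1}{2}\,\delta(|\vec{x}|^2) = \tfrac{1}{2}\,\delta(\ell^2) = 0$. Hence
$$
\vec{T}\cdot\delta\vec{x} = -\frac{T}{\ell}\,\vec{x}\cdot\delta\vec{x} = 0,
$$
so the tension contributes nothing to the generalized forces, and only gravity needs a potential.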
"domain": "physics.stackexchange",
"id": 19364,
"tags": "newtonian-mechanics, lagrangian-formalism, constrained-dynamics"
} |
Beta Andromedae (Mirach) and distances mentioned in original Cosmos series from 1980 | Question: In one of the episodes from Carl Sagan's show Cosmos he explains that Beta Andromedae is the second brightest star in the constellation Andromeda, and is 75 light years away. The link to the video is here.
I've looked up this star on Wikipedia and it says it's the brightest star in the constellation, not the second brightest, and instead of 75 light years away it says it's about 197 light years away.
I know that this show Cosmos is quite old. It aired in 1980. I understand our measurements are more accurate nowadays, but I was just wondering if anyone knew why the discrepancy with the brightest and second brightest star claims, and also that the Wikipedia article says it's 197 light years away, a difference of about 260% from the 75 light years mentioned by Sagan. That's quite a significant difference.
His information of the distance from us to the center of our galaxy is quite close: he says 30 thousand light years; Wikipedia says 27 thousand light years. As to our distance to Andromeda, he says: 2 million light years; Wikipedia says 2.5 million light years.
Have I completely got the wrong star that I'm looking up?
Answer: Looking at the SIMBAD data page for Beta Andromeda shows the source of both the parallax (distance) and the magnitudes. In this case, as it will be for many bright stars, the source of the parallax is the reprocessed data from the Hipparcos satellite, described in this paper. Prior to the launch of the Hipparcos satellite by ESA in 1989, parallaxes were very difficult to obtain and only low precision values were available for the closest stars.
The parallax given in SIMBAD for Beta Andromeda is $16.52\pm0.56$ milliarcseconds, which translates to a distance of $\frac{1}{0.01652''} = 60.5\pm2.1$ parsecs or $60.5\times3.26=197$ light years. This small parallax would have been extremely challenging or impossible to measure accurately with the pre-CCD technology before 1980. Errors of several hundred percent were not uncommon. The prior parallax measurement, which probably was an improvement on what was available in 1980 at the time of Cosmos, is from van Altena et al. 1995. This lists a parallax of $47.7\pm7.9$ milliarcsec (a nearly 5x larger relative error), which gives a distance of 68 light years.
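As a quick sanity check on the arithmetic above (a small illustrative computation; the only constant assumed is the light-years-per-parsec conversion factor):

```python
LY_PER_PC = 3.26156   # light years per parsec

def parallax_to_ly(parallax_mas):
    # d[pc] = 1 / p[arcsec]; convert milliarcseconds to arcseconds first
    parsecs = 1.0 / (parallax_mas / 1000.0)
    return parsecs, parsecs * LY_PER_PC

hipparcos_pc, hipparcos_ly = parallax_to_ly(16.52)    # ~60.5 pc, ~197 ly
van_altena_pc, van_altena_ly = parallax_to_ly(47.7)   # ~21 pc, ~68 ly
```

The two parallaxes reproduce the 197 and 68 light-year figures quoted above, which shows how directly the distance estimate hinges on the measured parallax.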
Similarly we can see that the $V$-band magnitude comes from this collection, and the difference between Beta Andromeda ($V=2.05$) and Alpha Andromeda ($V=2.06$), the in-theory brightest star in the constellation, is only 0.01 magnitude. Measuring a star that bright to that precision was, and in fact still is, quite difficult as most detectors will saturate. So it's not particularly surprising, given how close Alpha and Beta Andromeda are in brightness, that the early measurements got them reversed when Beta is in fact (very slightly) brighter.
This high brightness probably means that we will not see a more accurate parallax from the Gaia satellite, the successor to Hipparcos. The Gaia DR2 data release lists a brightness limit of $G\sim3$ which may be improved slightly in later data releases with more sophisticated data processing and treatment of saturated stars.
"domain": "astronomy.stackexchange",
"id": 3491,
"tags": "star, distances"
} |
Energy (Hamiltonian) of Trial Wavefunction | Question: Here I give a part of derivation of Hartree-Fock equations in case where basis functions (wavefunctions) are orthonormal and real: $$ \langle \psi_i | \psi_j \rangle = \langle \psi_j | \psi_i \rangle = \delta_{ij} $$
Trial wavefunction is defined as: $$ |\Phi \rangle = \sum_{i=1}^n c_i |\psi_i \rangle $$
where $|\psi_i\rangle$ is basis function $i$.
Expectation value of energy is given by: $$ \langle \Phi | H |\Phi \rangle = \sum_{ij} c_i c_j \langle \psi_i |H|\psi_j \rangle $$
I don't quite understand, why is expectation value of energy for trial wavefunction equal sum of expectation values for every combination of two basis functions multiplied by their respective coefficients ($c_i$ and $c_j$)? What justifies this summation?
Answer: It's really linear algebra: if
$$
\left| \Phi \right \rangle = \sum_{i} c_i \left| \Psi_i \right \rangle
$$
taking Hermitian operator ("transpose conjugate"):
$$
\left| \Phi \right \rangle^\dagger = \left\langle \Phi \right| = \sum_{i} c^*_i \left\langle \Psi_i \right|.
$$
Now "sandwiching" $H$ you get:
$$
\left\langle \Phi \right| H \left| \Phi \right \rangle = \left(\sum_{i} c^*_i \left\langle \Psi_i \right| \right) H \left(\sum_{i} c_i \left| \Psi_i \right \rangle \right)
$$
now, before we expand, we should change one of the indices to $j$ to account for products of different terms, just like, say:
$$
(a_1 + a_2) \times (b_1 + b_2) = a_1 b_1 + a_1 b_2 + a_2 b_1 + a_2 b_2 = \sum_{i,j = 1}^{2} a_i b_j
$$ and not $\sum_i a_i b_i$. So, really:
$$
\begin{align*}
\left\langle \Phi \right| H \left| \Phi \right \rangle &= \left(\sum_{i} c^*_i \left\langle \Psi_i \right| \right) H \left(\sum_{i} c_i \left| \Psi_i \right \rangle \right) \\ & = \left(\sum_{j} c^*_j \left\langle \Psi_j \right| \right)\left(\sum_{i} c_i H\left| \Psi_i \right \rangle \right)
\end{align*}
$$
and apply the distribution rule of algebra:
$$
\left(\sum_{j} c^*_j \left\langle \Psi_j \right| \right) \left(\sum_{i} c_i H \left| \Psi_i \right \rangle \right) = \sum_{i, j} c^*_j c_i (\left\langle \Psi_j \right|) (H\left| \Psi_i \right \rangle) = \sum_{i, j} c_i c^*_j \left\langle \Psi_i \right| H\left| \Psi_j \right \rangle
$$ | {
"domain": "physics.stackexchange",
"id": 88064,
"tags": "quantum-mechanics, homework-and-exercises, energy, wavefunction, linear-algebra"
} |
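The double-sum identity in the Hartree-Fock answer above can be verified numerically. This is a sketch with a made-up real, symmetric 2×2 matrix of elements $H_{ij} = \langle\psi_i|H|\psi_j\rangle$ and made-up real coefficients — none of these numbers come from the original post:

```python
# Check <Phi|H|Phi> = sum_{i,j} c_i c_j <psi_i|H|psi_j> for a real,
# orthonormal basis.  H[i][j] collects the matrix elements; all
# values are arbitrary illustrative numbers.
H = [[1.0, 0.3],
     [0.3, 2.0]]      # symmetric, since the basis functions are real
c = [0.6, 0.8]        # expansion coefficients of |Phi>
n = len(c)

# Left-hand side: apply H to |Phi>, then close with <Phi|
H_phi = [sum(H[i][j] * c[j] for j in range(n)) for i in range(n)]
lhs = sum(c[i] * H_phi[i] for i in range(n))

# Right-hand side: the double sum derived in the answer
rhs = sum(c[i] * c[j] * H[i][j] for i in range(n) for j in range(n))

print(abs(lhs - rhs) < 1e-12)   # True: the two expressions agree
```

The point of the exercise is exactly the index relabeling in the answer: the cross terms ($i \neq j$) are what force the double sum.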
Wilsonian RG and Effective Field Theory | Question: I'm having trouble reconciling the discussions of the Wilsonian RG that appear in the texts of Peskin and Schroeder and Zee on the one hand, and those of Schwartz, Srednicki, and Weinberg on the other.
In the former, they seem to say that as one scales down to lower momentum, the couplings with negative mass dimension ("irrelevant couplings") scale to smaller and smaller values as one integrates out more high-momentum modes. Hence, at energy scales much smaller than the initial cutoff, the theory will look like a renormalizable QFT since the irrelevant couplings become small under the RG flow.
In contrast, the books of Schwartz, Srednicki, and Weinberg state that the Wilsonian RG analysis does NOT imply the irrelevant couplings scale to small values as one integrates out high-momentum modes, but merely that they become calculable functions of the relevant and marginal couplings. I.e., they become insensitive to the values of the irrelevant couplings of the initial large-cutoff Lagrangian.
My question is, how do I reconcile these two views?
My first exposure to the subject was Peskin and Schroeder, and I thought it all made perfect sense at the time. Now that I've read the more recent books of Schwartz, et al., I'm wondering if either
I've misinterpreted what P&S and Zee are saying when they discuss Wilsonian RG and effective field theories, or
they've made some simplifying assumptions that the treatments of Schwartz et. al. don't make.
Regarding the 2nd point, when discussing how the couplings scale under the RG, P&S largely ignore the "dynamical part" that comes from evaluating loop diagrams, in which case the scaling of the couplings boils down to simple dimensional analysis. In this case there's no accounting for operator mixing (i.e., that relevant and marginal couplings can feed into the flow of irrelevant couplings). This seems to be different from Schwartz's treatment, where he keeps information from the beta functions that encode information from the loop diagrams and allow for operator mixing. Could this be the reason why they seem to say different things about the size of irrelevant couplings as you lower the cutoff?
Answer: One of the main (but usually not explicitly given) assumptions of the perturbative RG is that even in the presence of irrelevant couplings, the RG flow starts close to the Gaussian Fixed Point (FP). That way, the operators with negative mass dimension flow toward zero, making the Gaussian FP a better and better approximation, until the relevant couplings kick in.
In that case, one ends up with a "renormalizable" theory, and one can just take care of the one or two relevant couplings, thus going back to the old school QFT RG.
However, Wilson does not assume that the irrelevant (with respect to the Gaussian FP) couplings have to be small. In fact, in most stat-phys applications, all couplings are of the same order! (For instance, in the Ising model, there is only one parameter $K=J/T$, so the corresponding field theory has all couplings of the same order.) But that does not prevent one from doing some RG calculation in principle. In fact, in these models, the flow never goes close to the Gaussian FP, and the flow is non-perturbative right from the beginning.
One should however keep in mind that if one is only interested in the critical behavior of the system, thanks to universality close to a (Wilson-Fisher like) FP, one can also study a simpler theory (say a $\phi^4$ QFT) which is enough to describe the fixed point structure (usually). This is what saves the perturbative RG from oblivion. | {
"domain": "physics.stackexchange",
"id": 87172,
"tags": "quantum-field-theory, lagrangian-formalism, renormalization, action, effective-field-theory"
} |
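The behavior described in the answer above — a coupling with negative mass dimension shrinking as modes are integrated out near the Gaussian fixed point — can be illustrated with a toy one-coupling flow. This is purely the dimensional-analysis part, $dg/dl = -2g$ for a hypothetical dimension-$(-2)$ coupling (e.g. a $\phi^6$ vertex in $d=4$ at tree level); loop corrections and operator mixing are deliberately omitted:

```python
# Toy sketch: an irrelevant coupling g with tree-level mass dimension
# -2 flows as dg/dl = -2*g under coarse-graining, where
# l = log(Lambda_0 / Lambda) is the RG "time".  Even if g starts at
# order one, it decays exponentially along the flow.
g = 1.0     # irrelevant coupling at the cutoff -- order one
dl = 0.01   # RG-time step

for _ in range(100):        # flow down to l = 1
    g += dl * (-2.0 * g)    # Euler step of dg/dl = -2*g

print(g)    # ~ exp(-2) ~ 0.13: already small after one unit of RG time
```

This is the sense in which Peskin & Schroeder's dimensional-analysis picture holds; the Schwartz/Srednicki/Weinberg refinement is precisely that the omitted loop terms feed relevant and marginal couplings back into this flow.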
Ashcroft Mermin Solid State Physics Eq. 2.60ff | Question: I'm trying to follow the steps in Eq. 2.60 of said book.
What I can't seem to figure out is how to change the integration variables from 'k' to 'E', as they state.
The equation is
$$\int \frac{d\textbf{k}}{4\pi^3} F(\epsilon(\textbf{k})) = \int_0^\infty \frac{k^2 dk}{\pi^2} F(\epsilon(k)) = \int_{-\infty}^\infty d\epsilon \, g(\epsilon) F(\epsilon)$$
I can follow the first transformation (why is $\textbf{k}$ suddenly $k$?),
$$\int\frac{1}{4\pi^3} k^2 F(\epsilon(k)) \, dk \int_0^\pi \sin \theta \, d\theta \int_0^{2\pi} d\phi = \int_0^\infty \frac{k^2 dk}{\pi^2} F(\epsilon(k))$$
But what's happening in the second step is unclear to me.
In the book it says, "one often exploits the fact that the integrand depends on $\textbf{k}$ only through the electronic energy $\epsilon = \hbar^2k^2/2m$,...", but I'm unsure how this is used.
Could anybody point this out to me?
Answer: Let's start with your first expression sans some numerical constants:
$$ I \equiv \iiint\mathrm{d}\vec{k}\ F(\epsilon(\vec{k})) $$
where all we care about is that the function $F(\epsilon(\vec{k}))$ is rotationally invariant:
$$ F(\epsilon(\vec{k})) = F(\epsilon(k)) $$
We can separate the angular integrals, which give a factor of $4\pi$, and we find
$$ I = 4\pi \int_0^\infty \mathrm{d}k\ k^2 F(\epsilon(k)) $$
Now we suppose that $\epsilon(k)$ is an invertible function. In fact, it is a simple quadratic, but we only need invertibility and that it be sufficiently smooth. This means we can write
$$ k = k(\epsilon) $$
We also assume that $\epsilon(0)=0$ and $\epsilon(\infty)=\infty$, which is the only reasonable thing and also makes sure that the limits of integration stay trivial. Make a change of variables
$$ \mathrm{d}k = \mathrm{d}\epsilon \frac{\mathrm{d}k}{\mathrm{d}\epsilon} $$
to get
$$ I = 4\pi \int_0^\infty \mathrm{d}\epsilon \frac{\mathrm{d}k}{\mathrm{d}\epsilon} k^2(\epsilon) F(\epsilon) $$
Matching to your next expression
$$ I \propto \int_{-\infty}^\infty \mathrm{d}\epsilon\ g(\epsilon) F(\epsilon) $$
(which we require to hold for all $F$), we obtain
$$ g\left(\epsilon\right)\propto\begin{cases}
\frac{\mathrm{d}k}{\mathrm{d}\epsilon}k^{2}(\epsilon) & \epsilon\ge0\\
0 & \epsilon<0
\end{cases} $$
I leave you to work out the constant of proportionality (the 4s and $\pi$s), but you should get a functional dependence $g(\epsilon)\propto \sqrt{\epsilon}$. The only unusual thing is that they extend the range of integration to $\epsilon<0$ for some reason, but you can do this if you set $g(\epsilon)=0$ for negative $\epsilon$, since there are no states there. | {
"domain": "physics.stackexchange",
"id": 6427,
"tags": "solid-state-physics, integration"
} |
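The change of variables in the answer above can be checked numerically. This is a sketch in units $\hbar = m = 1$, so $\epsilon(k) = k^2/2$, $k(\epsilon) = \sqrt{2\epsilon}$, and $dk/d\epsilon = 1/\sqrt{2\epsilon}$; the test function $F(\epsilon) = e^{-\epsilon}$ and the midpoint-rule helper are mine, not from the book:

```python
import math

def F(eps):
    """Arbitrary smooth, rotationally invariant test function."""
    return math.exp(-eps)

def midpoint(f, a, b, n):
    """Composite midpoint rule for integral_a^b f."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

K = 8.0        # k-space cutoff; exp(-k^2/2) is negligible beyond it
N = 200_000

# integral_0^K k^2 F(eps(k)) dk, with eps(k) = k^2 / 2
I1 = midpoint(lambda k: k * k * F(k * k / 2), 0.0, K, N)

# same integral after the change of variables:
# (dk/deps) * k(eps)^2 = (1/sqrt(2*eps)) * 2*eps = sqrt(2*eps)
I2 = midpoint(lambda e: math.sqrt(2 * e) * F(e), 0.0, K * K / 2, N)

print(abs(I1 - I2) < 1e-4)   # True: same integral in the new variable
```

The factor $\sqrt{2\epsilon}$ appearing in the second integrand is exactly the $g(\epsilon) \propto \sqrt{\epsilon}$ dependence the answer asks the reader to derive.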
rosmake sbpl_interface - problem with missing dependency | Question:
Hello,
I would be happy for any commentary or answer.
If I expressed myself unclear, please tell me.
Background: the package sbpl_interface requires the collision_distance_field package as a dependency.
Approach: The package collision_distance_field is located inside of the moveit_core package.
Since there is a package.xml file in it, I put the whole MoveIt! source folder in my catkin workspace under /src.
After catkin_make was successful, the package collision_distance_field is still missing:
faps@faps-Aspire-5740D:~/catkin_ws/src$ rosmake sbpl_interface
.................
[rosbuild] Building package sbpl_interface
Failed to invoke /opt/ros/groovy/bin/rospack deps-manifests sbpl_interface
[rospack] Error: package/stack 'sbpl_interface' depends on non-existent package 'collision_distance_field' and rosdep claims that it is not a system dependency. Check the ROS_PACKAGE_PATH or try calling 'rosdep update'
CMake Error at /opt/ros/groovy/share/ros/core/rosbuild/public.cmake:129 (message):
Failed to invoke rospack to get compile flags for package 'sbpl_interface'.
Look above for errors from rospack itself. Aborting. Please fix the
broken dependency!
Call Stack (most recent call first):
/opt/ros/groovy/share/ros/core/rosbuild/public.cmake:203 (rosbuild_invoke_rospack)
CMakeLists.txt:12 (rosbuild_init)
-- Configuring incomplete, errors occurred!
-------------------------------------------------------------------------------}
[ rosmake ] Output from build of package sbpl_interface written to:
[ rosmake ] /home/faps/.ros/rosmake/rosmake_output-20130430-014403/sbpl_interface/build_output.log
[rosmake-2] Finished <<< sbpl_interface [FAIL] [ 0.88 seconds ]
[ rosmake ] Halting due to failure in package sbpl_interface.
[ rosmake ] Waiting for other threads to complete.
[ rosmake ] Results:
[ rosmake ] Built 39 packages with 1 failures.
[ rosmake ] Summary output to directory
[ rosmake ] /home/faps/.ros/rosmake/rosmake_output-20130430-014403
Info: rosdep update also fails.
Questions:
How can I fix the missing dependency?
I've already installed
MoveIt!-Full via sudo apt-get install ros-groovy-moveit-full, shouldn't moveit_core with all contained packages already be installed?
Many Thanks in advance
rosrookie
Originally posted by rosrookie on ROS Answers with karma: 55 on 2013-04-26
Post score: 0
Answer:
Hi rosrookie,
Thanks for giving MoveIt! a try. Unfortunately, sbpl_interface has not been fully ported to the latest version of MoveIt! yet which is why its not working. The authors are working on updating it right now and we'll let you know as soon as its fully up and running.
Originally posted by Sachin Chitta with karma: 1304 on 2013-05-15
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 13960,
"tags": "moveit, ros-groovy"
} |
Why is this language over {a,b,c} regular? | Question: The language of all words over the alphabet {a,b,c} such that
the number of as in the word
minus the number of cs in the word
is divisible by three.
How is this language regular? Lecturer notes says that it is, and then provides no explanation at all.
I tried drawing a DFA for it, but it just seems that the longer the word, the more states there will be and there could be an infinite amount, which is not allowed?
Please help, thanks.
Answer: All you need to keep track of is the value "number of $a$s seen minus number of $c$s seen", modulo 3 – call that $X$. There are only three different values that $X$ can take. When you see the next character of the input, it's either $a$, $b$ or $c$. Think about how each one affects $X$ and you should be very close to a 3-state automaton that accepts the language you're interested in. | {
"domain": "cs.stackexchange",
"id": 2922,
"tags": "formal-languages, regular-languages"
} |
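The three-state automaton described in the answer above can be sketched in code. `accepts` is an illustrative helper, not something from the original post — the state it tracks is exactly the value $X$ from the answer:

```python
# Sketch of the 3-state DFA: track X = (#a - #c) mod 3, where 'b'
# leaves X unchanged.  Accept exactly when X == 0, i.e. when the
# difference is divisible by three.
def accepts(word):
    state = 0
    for ch in word:
        if ch == 'a':
            state = (state + 1) % 3
        elif ch == 'c':
            state = (state - 1) % 3
        elif ch != 'b':
            raise ValueError("alphabet is {a, b, c}")
    return state == 0

print(accepts(""))      # True:  0 - 0 = 0 is divisible by 3
print(accepts("abc"))   # True:  1 - 1 = 0
print(accepts("aab"))   # False: 2 - 0 = 2
```

Because the word's length never enters the state — only the running difference modulo 3 does — three states suffice, which is why no infinite DFA is needed.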
ros::Timer won't start if initialized before dynamic reconfigure server | Question:
Hey, I am not sure where to post the issue, so I am showing it here first. Maybe I am doing something wrong.
My ros::Timer won't start if I initialize it before the initialization of a dynamic reconfigure server. Here is the summary:
Environment
Ubuntu 20.04 + ROS Noetic
empty workspace with no specific flags
tested on amd64 or arm64
Observations
when ros::Timer is initialized before a dynamic reconfigure server (DRS), the timer won't start
it affects fast timers (100 Hz and more) rather than slow timers
it is non-deterministic, sometimes it starts correctly
the higher the number of DRS parameters, the higher the chance of replicating the issue (10+ parameters = near 100% chance)
seems not to be influenced by the computational resources, happens on i9-9900K as well as on Rpi4
does not happen at all on 18.04 + ROS Melodic
Minimal non-working example
https://github.com/klaxalk/ros_timer_drs_bug
has two timers, 1 Hz and 100 Hz
has a pre-compiler #define which can switch the order of initialization
the 100 Hz timer won't run
Solution
I don't know, the DRS seems to be devoid of ros::Timers.
Can't say for sure, but swapping the order is not a viable workaround for ros::Pluginlib plugins. So far it looks like some plugins' timers are blocked by other plugins' DRSs.
the only workaround I found is to use a slow timer (or a thread) to check the activity and restart any broken fast timers... but... duh...
Sources
Full sources here: https://github.com/klaxalk/ros_timer_drs_bug
The main cpp file:
#include <ros/ros.h>
#include <nodelet/nodelet.h>
#include <dynamic_reconfigure/server.h>
#include <timer_tester/timer_testerConfig.h>
#define TIMERS_BEFORE 1
namespace timer_tester
{
class TimerTester : public nodelet::Nodelet {
public:
virtual void onInit();
private:
ros::NodeHandle nh_;
// | --------------- dynamic reconfigure server --------------- |
boost::recursive_mutex mutex_drs_;
typedef timer_tester::timer_testerConfig DrsConfig_t;
typedef dynamic_reconfigure::Server<DrsConfig_t> Drs_t;
boost::shared_ptr<Drs_t> drs_;
void callbackDrs(timer_tester::timer_testerConfig& config, uint32_t level);
// | ------------------------- timers ------------------------- |
ros::Timer timer_fast_;
void timerFast(const ros::TimerEvent& te);
ros::Timer timer_slow_;
void timerSlow(const ros::TimerEvent& te);
};
void TimerTester::onInit() {
ros::NodeHandle nh_ = nodelet::Nodelet::getMTPrivateNodeHandle();
ros::Time::waitForValid();
ROS_INFO("[TimerTester]: initializing");
#if TIMERS_BEFORE == 1
ROS_INFO("[TimerTester]: creating timers before DRS");
timer_fast_ = nh_.createTimer(ros::Rate(100.0), &TimerTester::timerFast, this);
timer_slow_ = nh_.createTimer(ros::Rate(1.0), &TimerTester::timerSlow, this);
#endif
// | --------------- dynamic reconfigure server --------------- |
drs_.reset(new Drs_t(mutex_drs_, nh_));
Drs_t::CallbackType f = boost::bind(&TimerTester::callbackDrs, this, _1, _2);
drs_->setCallback(f);
#if TIMERS_BEFORE == 0
ROS_INFO("[TimerTester]: creating timers after DRS");
timer_fast_ = nh_.createTimer(ros::Rate(100.0), &TimerTester::timerFast, this);
timer_slow_ = nh_.createTimer(ros::Rate(1.0), &TimerTester::timerSlow, this);
#endif
ROS_INFO_ONCE("[TimerTester]: initialized");
}
// | --------------------- timer callbacks -------------------- |
void TimerTester::timerFast([[maybe_unused]] const ros::TimerEvent& te) {
ROS_INFO_THROTTLE(0.1, "[TimerTester]: 100 Hz timer spinning");
}
void TimerTester::timerSlow([[maybe_unused]] const ros::TimerEvent& te) {
ROS_INFO_THROTTLE(0.1, "[TimerTester]: 1 Hz timer spinning");
}
void TimerTester::callbackDrs([[maybe_unused]] timer_tester::timer_testerConfig& config, [[maybe_unused]] uint32_t level) {
ROS_INFO("[TimerTester]: callbackDrs() called");
}
} // namespace timer_tester
#include <pluginlib/class_list_macros.h>
PLUGINLIB_EXPORT_CLASS(timer_tester::TimerTester, nodelet::Nodelet);
The dynamic reconfigure server config:
#!/usr/bin/env python
PACKAGE = "timer_tester"
import roslib;
roslib.load_manifest(PACKAGE)
from dynamic_reconfigure.parameter_generator_catkin import *
gen = ParameterGenerator()
main = gen.add_group("Main");
main.add("a", double_t, 1, "A", 1.0, 0.0, 10.0);
main.add("b", double_t, 2, "B", 1.0, 0.0, 10.0);
main.add("c", double_t, 4, "C", 1.0, 0.0, 10.0);
main.add("d", double_t, 8, "D", 1.0, 0.0, 10.0);
main.add("e", double_t, 16, "E", 1.0, 0.0, 10.0);
main.add("f", double_t, 32, "F", 1.0, 0.0, 10.0);
main.add("g", double_t, 64, "G", 1.0, 0.0, 10.0);
main.add("h", double_t, 128, "H", 1.0, 0.0, 10.0);
main.add("i", double_t, 256, "I", 1.0, 0.0, 10.0);
main.add("j", double_t, 512, "J", 1.0, 0.0, 10.0);
main.add("k", double_t, 1024, "K", 1.0, 0.0, 10.0);
main.add("l", double_t, 2048, "L", 1.0, 0.0, 10.0);
exit(gen.generate(PACKAGE, "TimerTester", "timer_tester"))
Originally posted by klaxalk on ROS Answers with karma: 91 on 2020-10-25
Post score: 9
Original comments
Comment by klaxalk on 2020-10-28:
Submitted it as an issue: https://github.com/ros/ros_comm/issues/2085
Answer:
I finally got it. The problem is not with DRS in particular but with anything that takes a non-zero amount of time to execute. It can be broken with sleep() or just with some traditional operations. The problem is in fact caused by the nodelet, specifically by the first commit that got to Noetic: https://github.com/ros/nodelet_core/5272c34. The commit was intended to fix this particular issue, but it, in fact, causes it. They disable callback queues before calling onInit() and re-enable them after. But it seems that this is not a good solution. I have submitted an issue there: https://github.com/ros/nodelet_core/issues/106. In the meantime, a temporary workaround is to compile the nodelet_core without that commit.
Originally posted by klaxalk with karma: 91 on 2020-11-30
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 35674,
"tags": "c++"
} |
Relation of liquid pressure with breadth | Question:
Firstly, I would like to point out that my question is not very clear. I myself am not sure what problems I have in understanding this. So, I'll try my best to explain where I have confusions.
My confusion lies with this formula,
$$ P= h \rho g$$
More specifically, I am confused about pressure when very thin pipes lead into very large tanks. For example, in a tank full of water there is a small ball; this ball experiences a pressure, P, due to the water above it. As the walls of the tank just above the ball become narrower, P shouldn't change, and just before the walls conjoin, there is just an array of molecules in the "tube", and the pressure is still P. Once the walls conjoin, pressure P disappears. For some reason this seems absurd to me. In an instant, all the pressure is gone. I feel like something is missing but I am not sure what.
I hope you have been able to understand my confusion, because I haven't. Please help.
Answer:
Once the walls conjoin, pressure P disappears.
Not true.
The important point is that your formula ($P = h \rho g$) shows the increase in pressure from the top of a fluid to the bottom of a fluid. Also, a lot of the time we don't necessarily mean absolute pressure, but the pressure excess over atmospheric.
When the tube is open, your formula gives you the increase in pressure over the height of the tube. And since it's open, you know the pressure at the top of the tube is 1 atm.
When you close the tube, the top of your (shortened) water column is now the top of the box. But since the box is closed, you don't directly know what the pressure at the top is. The absolute pressure at the bottom of the box can still be the same because the box is continuing to squeeze the fluid at a pressure above 1 atm.
if there was no atmospheric pressure to begin with, how would there be atmospheric pressure after the box closes?
While the tube is there, we can determine the pressure at the level of the bottom of the tube. If the tube has height $h$, then the pressure at the top of the box/bottom of the tube is $h \rho g$.
When you seal the box (or close a valve), this pressure is still present. It's just that instead of it coming from the water in the tube, it's coming from the sides of the vessel due to the quantity of water inside.
As neither before nor after closing the vessel is the pressure due to the atmosphere, it would be wrong to call this "atmospheric pressure". It's a pressure due to the vessel, and the amount of pressure was set by the height of water in the tube before closing the valve.
Think of a bike tire. You push some air in, you close the valve. The fluid inside the tire remains pressurized.
In your example you push some water in (because the height of the tube is pushing down on water inside), then you close the valve and the fluid inside remains pressurized. | {
"domain": "physics.stackexchange",
"id": 64895,
"tags": "fluid-dynamics, pressure"
} |
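As a worked instance of the formula $P = h\rho g$ discussed above (illustrative values for water; not part of the original exchange):

```python
# Gauge pressure 1 m below the surface of water, from P = h * rho * g.
rho = 1000.0   # density of water, kg/m^3
g = 9.81       # gravitational acceleration, m/s^2
h = 1.0        # depth below the surface, m

P = h * rho * g          # gauge pressure in Pa (excess over atmospheric)
print(f"{P:.0f} Pa")     # 9810 Pa, roughly 0.097 atm above atmospheric
```

Note this is the pressure *excess* over whatever sits at the top of the column — which is exactly the distinction the answer draws between the open and sealed vessel.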
Why N2O is a "laughing gas"? | Question: It is (mostly) used as an "oxidizer" in motors and engines, I understand that much. The oxygen "oxidizes" with other chemicals, while the nitrogen returns to the air.
But $\ce{N2O}$ also produces "euphoria" in human beings, works as a mild anesthetic, and causes people to laugh. Why does this happen?
Answer: In contrast to $\ce{O2}$ (or $\ce{CO}$), $\ce{N2O}$ does not bind to hemoglobin and it is not metabolized in the human body. It is taken up through the lungs, gets into the blood, and leaves the body the same way by being exhaled.
According to a 2004 study by P. Nagele in PNAS (DOI), $\ce{N2O}$ acts as an antagonist for the $N$-methyl-$D$-aspartate (NMDA) receptor. Other infamous antagonists are drugs like ketamine or phencyclidine.
Further information on NMDA receptor antagonists can be found here. | {
"domain": "chemistry.stackexchange",
"id": 1859,
"tags": "inorganic-chemistry, everyday-chemistry"
} |