| anchor | positive | source |
|---|---|---|
Chemical potential of an ideal quantum gas | Question: As far as I understand from textbooks, absolute values of the chemical potential are meaningless, only changes have physical meaning. That would mean that, from statistical mechanics, one should only be able to determine $\mu$ up to a constant. However, it appears that Kittel, eq. 12(a) page 121 obtains an absolute value for the chemical potential of an ideal gas:
$$ \mu = k_B T \log(n/n_Q)$$
where $n$ is the gas concentration and $n_Q=(m k_B T/2\pi\hbar^2)^{3/2}$. It seems that there are no undetermined constants in this expression. How is that possible then that only differences matter?
I think I understand that, in terms of work, only chemical potential differences are relevant; in that case the question is why a constant doesn't appear in the above expression.
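For concreteness, the expression can be evaluated numerically. The sketch below assumes helium at room temperature and atmospheric pressure, with approximate constants; it yields a definite (negative) number with no free constant:

```python
import math

# Approximate physical constants (SI units)
K_B = 1.380649e-23    # Boltzmann constant, J/K
HBAR = 1.054572e-34   # reduced Planck constant, J*s

def chemical_potential(n, m, T):
    """mu = k_B T log(n / n_Q), with n_Q = (m k_B T / (2 pi hbar^2))^(3/2)."""
    n_q = (m * K_B * T / (2 * math.pi * HBAR**2)) ** 1.5
    return K_B * T * math.log(n / n_q)

# Helium gas at T = 300 K, p = 1 atm; ideal-gas concentration n = p / (k_B T)
T = 300.0
n = 101325.0 / (K_B * T)   # ~2.4e25 m^-3
m_he = 6.6465e-27          # helium atom mass, kg
mu = chemical_potential(n, m_he, T)
print(mu / (K_B * T))      # ~ -12.7: a definite negative value, no undetermined constant
```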
Answer: In macroscopic thermodynamics, the value of internal energy $U$, or entropy $S$ or other derived quantities (the Gibbs energy $G$, the free energy $F$) is often thought of as depending on the arbitrary choice of the reference state where that particular quantity has some prescribed value, such as zero. Which state is chosen to be such reference state does not matter. This arbitrariness poses no real problem as all experimentally relevant results of the macroscopic theory can be formulated in terms of changes $\Delta U,\Delta S, \Delta F$, etc.
Chemical potential $\mu$ for a simple system obeys the relation
$$
dU = TdS -pdV+\mu dN,
$$
thus we can use as its definition
$$
\mu =\frac{\partial U(S,V,N)}{\partial N}.
$$
Since
$$
dF = -SdT -pdV+\mu dN,
$$
it also obeys the relation
$$
\mu =\frac{\partial F(T,V,N)}{\partial N}.
$$
It is an intensive quantity like absolute temperature $T$ and pressure $p$, and just as with those, the reference state where its value is zero is arbitrary, because the values of $U$ and $F$ are arbitrary; this arbitrariness extends to their dependence on $N$, and so allows for different values of $\mu$.
However, by convention $T$ is absolute temperature and $p$ is absolute pressure, so neither is arbitrary any longer. With $\mu$ it is less clear, since it depends on how we define energy $U$ and entropy $S$ (and these are additional conventions). If we change the definition of $U$ to $U+U_0$ (say, we want to take into account the rest energy $mc^2$), $\mu$ will have a different value than if we do not; similarly with $S$.
In the quantum statistical physics that Kittel teaches in his book, we analyze some model of the thermodynamic system, and it is usual to adopt a specific definition of energy based on eigenvalues of a specific Hamiltonian operator, and of entropy in terms of some definite number of quantum states, or in terms of the partition sum over all quantum states. This leads to definite energy and entropy values for any microstate. In the case of energy, the macrostate which implies quantum states with the smallest possible eigenvalues of the standard Hamiltonian $\sum_k \frac{p_k^2}{2m} +V$ (where $V$ is the standard pair-wise potential energy function, with no rest-energy terms) is used as the one with zero or the smallest possible value of internal energy; and in the case of entropy, the macrostate which has zero or the smallest possible number of allowed microstates (in simple cases, the ground state) is used as the reference state with zero or the lowest entropy. Thus the quantum models and calculations adopt specific definitions of energy and entropy, which remove this arbitrariness from the resulting thermodynamic quantities. But this is because we've chosen a definite definition of energy out of the infinitely many possible ones (the number of equivalent Hamiltonians is infinite, all differing by a constant shift), and we've chosen a specific definition of entropy, such as (microcanonical definition) entropy being the log of the number of states with energy lower than $U$, or (canonical definition) entropy being a specific derivative of the partition function, even though there is no real need to adopt such a definition, other than history and convenience.
This is mostly done for convenience, to simplify the formulae, verbal descriptions and the whole exposition. There is no real necessity to prefer one Hamiltonian to base definition of energy on and there is no real necessity to select one specific definition of entropy. It is possible to formulate the quantum statistics allowing for the arbitrariness in definition of energy and entropy and then it becomes clear the values of these thermodynamic quantities are arbitrary, depending on our conventions. | {
"domain": "physics.stackexchange",
"id": 98882,
"tags": "statistical-mechanics, chemical-potential"
} |
Forgotten password logic | Question: I'm just trying to see if anyone disagrees with the way I'm handling my logic for this. Something doesn't feel right with it but I don't quite know what it is.
Just wanted to add that the new_password_key is NOT a password for the user to log in with. As of right now I was going to have them directed to a page from a link in an email where they can enter a new password.
function forgot_password_submit()
{
$this->form_validation->set_rules('username', 'Username', 'trim|required|xss_clean');
if (!$this->form_validation->run())
{
echo json_encode(array('error' => 'yes', 'message' => 'There was a problem submitting the form! Please refresh the window and try again!'));
}
else
{
if (!is_null($user_data = $this->users->get_user_by_username($this->input->post('username'))))
{
if (!isset($user_data->new_password_key) && (!isset($user_data->new_password_requested)))
{
if(!strtotime($user_data->new_password_requested) <= (time() - 172800))
{
echo json_encode(array('error' => 'yes', 'message' => 'You have to wait 2 days before a new temp password can be emailed!'));
}
else
{
if ($this->kow_auth->forgot_password($this->input->post('username')))
{
$this->kow_auth->send_email('forgot_password', 'KOW Manager Forgot Password Email', $user_data);
echo json_encode(array('success' => 'yes', 'message' => 'A temporary password has been emailed to you!'));
}
else
{
echo json_encode(array('error' => 'yes', 'message' => 'A temporary password could not be created for you!'));
}
}
}
else
{
echo json_encode(array('success' => 'yes', 'message' => 'Check your email for your temporary password!'));
}
}
else
{
echo json_encode(array('error' => 'yes', 'message' => 'User does not exist in the database!'));
}
}
}
EDIT:
What I'm really wondering about is how to use the functionality of setting this temporary new password and what to do when it expires.
Library Function:
/**
* Generate reset code (to change password) and send it to user
*
* @return string
* @return object
*/
function forgot_password($username)
{
if (strlen($username) > 0) {
if (!is_null($user_data = $this->ci->users->get_user_by_username($username)))
{
$data = new stdClass;
$data->user_id = $user_data->user_id;
$data->username = $user_data->username;
$data->email = $user_data->email;
$data->new_password_key = md5(rand().microtime());
$data->first_name = $user_data->first_name;
$data->last_name = $user_data->last_name;
$this->ci->users->set_password_key($user_data->user_id, $data->new_password_key);
return $data;
}
}
return NULL;
}
Model:
/**
* Set new password key for user.
* This key can be used for authentication when resetting user's password.
*
* @param int
* @param string
* @return bool
*/
function set_password_key($user_id, $new_password_key)
{
$this->db->set('new_password_key', $new_password_key);
$this->db->set('new_password_requested', date('Y-m-d H:i:s'));
$this->db->where('user_id', $user_id);
$this->db->update('users');
return $this->db->affected_rows() > 0;
}
EDIT 2:
This is what I'm going to use for the controller. There just seem to be some logic issues I have with it, because what if it gets down to the if statement if ($already_sent_password) and for some reason they didn't get the email? Then what? Or what if it gets down to if (!strtotime($user_data->new_password_requested) <= (time() - 172800)), which is starting to sound stupid to me, because why make them wait two days to get a new password key?
function forgot_password_submit()
{
$this->form_validation->set_rules('username', 'Username', 'trim|required|xss_clean');
if (!$this->form_validation->run())
{
$this->kow_auth->output('There was a problem submitting the form! Please refresh the window and try again!', FALSE);
return;
}
$user_data = $this->users->get_user_by_username($this->input->post('username'));
if ($user_data === NULL)
{
$this->kow_auth->output('User does not exist in the database!', FALSE);
return;
}
$already_sent_password = (isset($user_data->new_password_key) && isset($user_data->new_password_requested));
if ($already_sent_password)
{
$this->kow_auth->output('Check your email for your temporary password!');
return;
}
if (!strtotime($user_data->new_password_requested) <= (time() - 172800))
{
$this->kow_auth->output('You have to wait 2 days before a new temp password can be emailed!', FALSE);
}
else
{
if ($this->kow_auth->forgot_password($this->input->post('username')))
{
$this->kow_auth->send_email('forgot_password', 'KOW Manager Forgot Password Email', $user_data);
$this->kow_auth->output('A temporary password has been emailed to you!');
}
else
{
$this->kow_auth->output('A temporary password could not be created for you!', FALSE);
}
}
}
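As a side note on the expiry check itself: in PHP, `!` binds tighter than `<=`, so `!strtotime($user_data->new_password_requested) <= (time() - 172800)` negates the timestamp before the comparison, and the branch doesn't test the intended window. The time-window check I actually mean could be sketched like this (Python for clarity; function and variable names are illustrative):

```python
import time

TWO_DAYS = 2 * 24 * 60 * 60  # 172800 seconds

def key_still_fresh(requested_at, now=None):
    """True while the previously issued reset key is younger than two days."""
    if now is None:
        now = time.time()
    return requested_at > now - TWO_DAYS

# A key requested one hour ago is still fresh; one requested three days ago is not.
now = 1_000_000_000
print(key_still_fresh(now - 3600, now))        # True
print(key_still_fresh(now - 3 * 86400, now))   # False
```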
Anybody have any more ideas?
Answer: I see nothing that's wrong with your code. My first reaction though is that you repeat the echo quite a bit. I would turn all the
echo json_encode(array());
into something like this:
function output($message, $success = TRUE) {
    $status = $success ? array('success' => 'yes') : array('error' => 'yes');
    $status['message'] = $message;
    echo json_encode($status);
}
The two other things I would have preferred to do differently are avoiding the deep indentation and moving some logic out of the if()s. This is perhaps just a matter of taste, but I find this much easier to read and follow.
function forgot_password_submit() {
$this->form_validation->set_rules('username', 'Username', 'trim|required|xss_clean');
if (!$this->form_validation->run()) {
output('There was a problem submitting the form! Please refresh the window and try again!', FALSE);
return;
}
$user_data = $this->users->get_user_by_username($this->input->post('username'));
if ($user_data === NULL) {
output('User does not exist in the database!', FALSE);
return;
}
$already_sent_password = (isset($user_data->new_password_key) && isset($user_data->new_password_requested));
if ($already_sent_password) {
output('Check your email for your temporary password!');
return;
}
if (!strtotime($user_data->new_password_requested) <= (time() - 172800)) {
output('You have to wait 2 days before a new temp password can be emailed!', FALSE);
} else {
if ($this->kow_auth->forgot_password($this->input->post('username'))) {
$this->kow_auth->send_email('forgot_password', 'KOW Manager Forgot Password Email', $user_data);
output('A temporary password has been emailed to you!');
} else {
output('A temporary password could not be created for you!', FALSE);
}
}
}
function output($message, $success = TRUE) {
    $status = $success ? array('success' => 'yes') : array('error' => 'yes');
    $status['message'] = $message;
    echo json_encode($status);
} | {
"domain": "codereview.stackexchange",
"id": 1430,
"tags": "php, codeigniter"
} |
Getting some concept of cancer genetics | Question: I have two groups of patients : Responders to chemotherapy and non-responders to chemotherapy. I treat this as a dichotomous outcome we can call response to chemotherapy.
For a fixed set of cancer driver genes, I have calculated cancer cell fraction (ccf) for each group.
Does anyone know of any interaction or relation of response to chemotherapy and ccf?
Answer: Assuming that my comment is trivial, I think that there is some literature on this question. I'm assuming that the ccf in question is for the relevant tissue type for the cancer.
You might also consider cross-posting to SE Medical Sciences, where people are more familiar with these subjects.
Of course, naively one would expect that a larger ccf would yield a larger possible population of cancer cells to provide "escape" mutations or resistance to chemo, so high ccf should then be negatively predictive. As an extreme example, a highly metastatic cancer will have a high ccf over the whole body, and will be more resistant to chemo just because that's a highly advanced disease.
However, as usual things are a little more complex. The ccf itself has subpopulations. The following I got by just googling around a little bit. For some of the fundamental thinking on this, see this old paper. In this paper, the author puts emphasis for example on the fraction of cancer cells in each stage of the cell cycle; this results in what the author calls "clonogenic" and "nonclonogenic" populations that respectively yield new clones of cancer cells or do not yield new clones. Probably that terminology is somewhat outdated, but it gives a sense of the problem; related are "tumor stem cells". For a more modern treatment, you can look at this.
For a med school lecture treatment, see here to learn a little bit more. For an example of a recent prediction study which uses omic data to predict chemo response, which might be helpful in reformulating your study, see here.
In other words, not all tumor cells are created equal, some are more likely to resist chemo than others. However, overall I would expect that there is probably some kind of relationship between ccf and response to chemo, because they are both related to the mediator variable "number of clonogenic cancer cells"; it's just a somewhat oversimplified model compared to the actual biology, so it looks like the publications in question don't tend to focus on it. | {
"domain": "biology.stackexchange",
"id": 10455,
"tags": "molecular-genetics, cancer"
} |
Can someone help explain the difference between strength and stiffness? | Question: I just can't seem to get it through my head and I don't understand the difference. As far as I can understand, strength is a material's resistance to permanent fracture while stiffness would be resistance to temporary fracture. If possible, could I also get a brief explanation of ductility and resistance to fracture, and the differences between all of these? I'm familiar with chemistry but new to engineering.
Answer:
Stiffness is the reverse of flexibility, i.e., the degree of deformation under stress; therefore a stiff material requires more stress or force to deform. E.g., a cantilever beam under a normal load applied to its end will bend less if it is stiffer.
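To make the cantilever example quantitative, here is a small sketch using the standard end-loaded cantilever deflection formula $\delta = FL^3/(3EI)$; the material values are typical textbook numbers, and the names are illustrative:

```python
def cantilever_tip_deflection(force, length, e_modulus, inertia):
    """Tip deflection of an end-loaded cantilever beam: delta = F L^3 / (3 E I)."""
    return force * length**3 / (3 * e_modulus * inertia)

# Same geometry and load, two materials: only the stiffness (E) differs here.
F = 100.0    # N
L = 1.0      # m
I = 1e-8     # m^4 (second moment of area of the cross-section)
d_steel = cantilever_tip_deflection(F, L, 200e9, I)  # steel, E ~ 200 GPa
d_alu = cantilever_tip_deflection(F, L, 70e9, I)     # aluminium, E ~ 70 GPa
print(d_alu / d_steel)  # ~2.9: aluminium bends about 3x more, i.e. it is less stiff
```

Note that this says nothing about which material yields first; that is where strength (yield stress), not stiffness, enters.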
Strength is the amount of load per unit area that will deform the material irreversibly and permanently. | {
"domain": "engineering.stackexchange",
"id": 5402,
"tags": "structural-engineering, materials, material-science"
} |
Converting truth table to algebraic normal form | Question:
Is there any efficient algorithm to convert a given truth table of a Boolean function to its equivalent algebraic normal form (ANF)?
I have seen that Sage has one implementation (official documentation):
sage: from sage.crypto.boolean_function import BooleanFunction
sage: B = BooleanFunction("12fe342a")
sage: B.algebraic_normal_form()
x0*x1*x2*x3*x4 + x0*x1*x2 + x0*x1*x3*x4 + x0*x1*x3 + x0*x1*x4 + x0*x2*x3*x4 + x0*x2*x4 + x0*x3*x4 + x0*x3 + x0 + x1*x2*x4 + x1*x3 + x1*x4 + x2*x3*x4 + x2*x3 + x2*x4
But, nowhere I could find the algorithm.
It would be helpful if someone kindly explains with a suitable example.
UPDATE The answer can be found here. It is copied below.
Answer: Source (Credit goes to user pico of crypto.stackexchange)
From TRUTH TABLE to ANF
First write [6, 4, 7, 8, 0, 5, 2, 10, 14, 3, 13, 1, 12, 15, 9, 11] in the following way: the columns of the matrix are those numbers written in $\mathbb{F}_2^4$.
$$
\begin{bmatrix}
0&0&1&0&0&1&0&0&0&1&1&1&0&1&1&1\\
1&0&1&0&0&0&1&1&1&1&0&0&0&1&0&1\\
1&1&1&0&0&1&0&0&1&0&1&0&1&1&0&0\\
0&0&0&1&0&0&0&1&1&0&1&0&1&1&1&1
\end{bmatrix}
$$
Then multiply it by the Möbius transformation matrix:
$$
M_1 = \begin{bmatrix}
1
\end{bmatrix},
M_2 = \begin{bmatrix}
1&1\\
0&1
\end{bmatrix}, \cdots,
M_{2^k} = M_2 \otimes M_{2^{k-1}} = \begin{bmatrix}
M_{2^{k-1}}&M_{2^{k-1}}\\
0&M_{2^{k-1}}
\end{bmatrix}.
$$
So for $k=4$,
the matrix is:
$$ \begin{bmatrix}
1 &1 &1 &1 &1 &1 &1 &1 &1 &1 &1 &1 &1 &1 &1 &1 \\
0 &1 &0 &1 &0 &1 &0 &1 &0 &1 &0 &1 &0 &1 &0 &1 \\
0 &0 &1 &1 &0 &0 &1 &1 &0 &0 &1 &1 &0 &0 &1 &1\\
0 &0 &0 &1 &0 &0 &0 &1 &0 &0 &0 &1 &0 &0 &0 &1\\
0 &0 &0 &0 &1 &1 &1 &1 &1 &1 &1 &1 &1 &1 &1 &1\\
0 &0 &0 &0 &0 &1 &0 &1 &0 &1 &0 &1 &0 &1 &0 &1\\
0 &0 &0 &0 &0 &0 &1 &1 &0 &0 &1 &1 &0 &0 &1 &1\\
0 &0 &0 &0 &0 &0 &0 &1 &0 &0 &0 &1 &0 &0 &0 &1\\
0 &0 &0 &0 &0 &0 &0 &0 &1 &1 &1 &1 &1 &1 &1 &1\\
0 &0 &0 &0 &0 &0 &0 &0 &0 &1 &0 &1 &0 &1 &0 &1\\
0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &1 &1 &0 &0 &1 &1\\
0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &1 &0 &0 &0 &1\\
0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &1 &1 &1 &1\\
0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &1 &0 &1\\
0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &1 &1\\
0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &1
\end{bmatrix}
$$
Then you have this matrix:
$$
\begin{bmatrix}
0&0&1&1&0&1&1&0&0&1&0&0&0&1&1&0\\
1&1&0&0&1&1&1&0&0&1&1&0&0&0&0&0\\
1&0&0&1&1&1&0&0&0&1&0&1&1&0&1&0\\
0&0&0&1&0&0&0&0&1&1&0&1&0&1&0&0
\end{bmatrix}
$$
Each row gives the coordinate functions $S_1, S_2, S_3$, and $S_4$, respectively.
The entries of each row are the coefficients of $1, x_0, x_1, x_0x_1, x_2, x_0x_2, x_1x_2, x_0x_1x_2, x_3, x_0x_3, x_1x_3, x_0x_1x_3, x_2x_3, x_0x_2x_3, x_1x_2x_3, x_0x_1x_2x_3$.
From ANF to TRUTH TABLE (TT)
Exactly the inverse of operations. Note that $M_{2^k}^{-1}=M_{2^k}$ for any $k$.
i.e. [TT] * $[M]$ = [ANF] and [TT] = [ANF] * $[M]$.
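In practice the multiplication by $M_{2^k}$ is implemented as a butterfly (the "fast Möbius transform") rather than an explicit matrix product. A sketch in Python for a single Boolean function given as a $2^n$-bit truth table; the convention assumed here is that truth-table index $i$ encodes the input with $x_0$ in the least significant bit:

```python
def moebius_transform(tt):
    """Map a truth table to its ANF coefficient vector (the map is an involution).

    tt: list of 0/1 of length 2**n. Entry a[i] of the result is the coefficient of
    the monomial whose variables are the set bits of i (e.g. a[3] multiplies x0*x1).
    """
    a = list(tt)
    step = 1
    while step < len(a):                 # XOR-butterfly: O(n * 2**n) operations mod 2
        for block in range(0, len(a), 2 * step):
            for j in range(block, block + step):
                a[j + step] ^= a[j]
        step *= 2
    return a

print(moebius_transform([0, 1, 1, 0]))  # XOR: ANF is x0 + x1        -> [0, 1, 1, 0]
print(moebius_transform([0, 1, 1, 1]))  # OR:  ANF is x0 + x1 + x0*x1 -> [0, 1, 1, 1]
```

Applying the same function twice returns the original truth table, which is the statement $M_{2^k}^{-1}=M_{2^k}$ above.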
Note: The arithmetic is taken modulo 2. | {
"domain": "cs.stackexchange",
"id": 13875,
"tags": "algorithms, boolean-algebra"
} |
Electric field in asymmetric conductor configuration | Question: I was trying to figure out an electrostatics exercise. I am comfortable solving these types of problems when there is an easy application of symmetry properties and Gauss's Law. But this one took me by surprise. As the picture shows, there is a spherical surface with charge +q, as well as the ellipsoidal surface. Both are conductors and hollow, without any type of contact between them. The figure shows a cross section of the configuration, showing qualitatively that they are not centered at the same point, but still the ellipsoid's axis passes right through the center of the spherical shell.
I don't have to calculate the field in all space. All I must do is order the points A, B, C and D in increasing order of electric field. I am stuck because it seems to me that I don't have the tools to contemplate the charge redistribution problem.
If we were dealing with static charges it would be quite easy to realize that the superposition principle applies immediately. Maybe it is easier if I try to imagine different steps to achieve this distribution. First I imagined an empty spherical shell, with a homogeneous charge distribution throughout its surface. Using Gauss we would get that the field inside is zero. Later we would add the ellipsoid. And here is where I ask myself: the charge distribution of the spherical shell at a first instant doesn't affect the ellipsoidal one. The latter creates an electric field that would reaccommodate the spherical charges. After this I am not sure what could be said about the field inside the sphere.
It is not zero between the ellipsoid and the sphere. But it is indeed zero inside the ellipsoid. Immediately outside the ellipsoid the field is normal to the surface.
I must say also that the exercise asks to calculate the field external to the sphere, which I suspect is the same as if there were a sphere charged with $+2q$. This hunch is a product of the way the question is being asked. If this is true it would be very nice to know why, since it is unclear what happens after the "redistribution time".
Hope the doubt is clear enough. Thanks for reading.
Answer: I would start with an ellipsoid as if it was alone.
The field inside the ellipsoid would be obviously zero. That takes care of point C.
The field at points A and D would be stronger than the field at point B, because, given the same potential, the field is stronger where the radius is smaller.
Next, I would introduce an uncharged sphere. When we add the sphere, the charges on the ellipsoid will induce a negative charge, -q, on the inner surface of the sphere and a positive charge, +q, on the outer surface of the sphere.
Since point A is closer to the sphere, the concentration of the induced charges in that area of the sphere will be greater and, as a result, charges on the surface of the ellipsoid would shift upwards, which will make the field near point A stronger than the field near point D.
So, the answer to the first question would be: C, B, D, A.
The induced charge +q on the outer surface of the sphere will be evenly distributed due to the symmetry of the sphere. Adding +q charge to it, will just double the charge density, as you've suggested. So, the field strength around the sphere will be uniform and could be calculated using the standard formula for a charged sphere.
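A small numerical sketch of that last point: outside the shell, the field is simply that of a point charge $2q$ at the sphere's centre. The charge and distance values below are illustrative, not from the exercise:

```python
from math import pi

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def field_outside_sphere(q_total, r):
    """Field magnitude at distance r from the centre of a uniformly charged sphere."""
    return q_total / (4 * pi * EPS0 * r**2)

q = 1e-9   # charge on the ellipsoid, and also the extra charge placed on the shell, C
r = 1.0    # distance from the centre, m
print(field_outside_sphere(2 * q, r))  # ~18 V/m, same as a 2-nC point charge
```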
Addressing additional questions in the comments here.
why there must be a inner charge surface
If we put an uncharged spherical shell around a charged (+q) ellipsoid, the flux over the surface of the sphere will correspond to +q (according to Gauss) and the field will be the same as if the outer surface of the sphere was evenly charged by +q.
For that to hold, the outer surface of the sphere has to actually contain charge +q, since the field outside the sphere is not affected by anything else: the field inside the skin of the spherical shell, which is a conductor, is zero.
For the spherical shell to remain neutral, there has to be a charge -q on the inner surface.
And also you answered the ordering question with a total charge of q
on the configuration, and this is not quite so since the conservation
of charge implies that there is +2q. I mean, the resultant order has
to contemplate the initial configuration.
Putting additional charge +q on the spherical shell does not affect the field inside the spherical shell and therefore does not affect the ABCD order. Think superposition.
"The induced charge +q on the outer surface of the sphere will be
evenly distributed due to the symmetry of the sphere." why this?
Since, as mentioned earlier, there is no field inside the skin of the spherical shell (a conductor), the distribution of charges over the outer surface of the spherical shell is affected only by the charges on that outer surface, i.e., these charges "see" each other but not any charges inside the shell. Because of that, the charges on the outer surface will be distributed the same way as if it was just a charged sphere, which is a known case: the charges should be distributed uniformly due to the symmetry of the sphere.
I am unaware of a condition that establishes that since a surface is
equipotential, the charge distribution must be homogeneous. If we took
the ellipsoidal surface, the charge distribution as you said will not
be homogeneous but still there is an equipotential surface.
You are right: an equipotential surface does not imply a homogeneous distribution of charge. My statement was and is: "The induced charge +q on the outer surface of the sphere will be evenly distributed due to the symmetry of the sphere". A sphere has a 3D symmetry, ellipsoid - does not. We can also say that a sphere has the same radius everywhere and therefore the same charge density. | {
"domain": "physics.stackexchange",
"id": 100135,
"tags": "electrostatics, conductors"
} |
Merge multiple maps in kinetic | Question:
How do you merge multiple maps (generated by map_server map_saver) to form a single map?
I want to generate a single map of a building with high accuracy. Then I will use the map in AMCL for localization
Place the LIDAR in a known position
Generate the scan map using gmapping with the known position
Move to a new known position by moving the LIDAR manually
Repeat 2 until the entire building is mapped
Merge the maps to a single map
Originally posted by Archhaskeller on ROS Answers with karma: 7 on 2018-05-23
Post score: 0
Answer:
I'm searching for live map merging at the moment. I currently use multirobot_map_merger (it has not worked well for me so far).
But https://github.com/ahhda/Map-Merge-Tool may help you. It's made to merge two map pictures.
Originally posted by MrCheesecake with karma: 272 on 2019-08-14
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 30890,
"tags": "navigation, ros-kinetic, gmapping, amcl"
} |
Proving ALLTM complement not recognizable | Question: A few definitions..
$$
\begin{align*}
\mathrm{ALL}_{\mathrm{TM}} &= \Bigl\{\langle M \rangle \,\Big|\, \text{$M$ a Turing Machine over $\{0,1\}^{*}$},\;\; L(M) = \{0,1\}^{*} \Bigr\}
\\[2ex]
\overline{\mathrm{ALL}}_{\mathrm{TM}} &= \Bigl\{\langle M \rangle \,\Big|\, \text{$M$ a Turing Machine over $\{0,1\}^{*}$},\;\; L(M) \ne \{0,1\}^{*} \Bigr\}
\\[2ex]
B_{\mathrm{TM}} &= \Bigl\{\langle M \rangle \,\Big|\, \text{$M$ is a Turing Machine over $\{0,1\}^{*}$},\;\; \varepsilon \in L(M) \Bigr\}
\end{align*}
$$
We are showing a reduction from $B_{\mathrm{TM}}$ to $\overline{\mathrm{ALL}}_{\mathrm{TM}}$. In my notes I have the following solution to this problem which I'm trying to understand.
Let $\alpha \in \{0,1\}^*$. Check that $\alpha$ is of form $\langle M \rangle$, where $M$ is a TM over $\{0,1\}$. Else, let $f(\alpha)$ be anything not in $\overline{\mathrm{ALL}}_{\mathrm{TM}}$.
Let $f(\alpha)$ be $\langle M' \rangle$, where $M'$ on $x$ runs $M$ on $\varepsilon$ (blank string) for up to $|x|$ steps. If $M$ accepts (in that time), then $M'$ rejects. Otherwise, $M'$ accepts.
What I'm trying to understand is why must we run the TM $M'$ for $|x|$ steps for this to work? If we change the part #2 of the transformation to the following, why wouldn't this work?
Let $f(\alpha)$ be $\langle M' \rangle$, where $M'$ on $x$ runs $M$ on $\varepsilon$ (blank string). If $M$ accepts, reject. Otherwise accept.
It then follows that $\varepsilon \in L(M) \iff L(M') = \varnothing$, that is, $L(M') \neq \{0,1\}^*$.
Answer: First, there is a small typo in (1) - if $\alpha$ is not a legal encoding, then you should return something that is in $ALL_{TM}$ (since you are reducing to the complement).
For your question: the key point here is that a TM does not always halt on its input. Thus, the phrase "otherwise accept" in your suggestion for (2) is not computable. If you run $M$ on $\epsilon$ and $M$ does not halt, then the language of $M'$ will be $\emptyset$, which makes the reduction fail.
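A toy illustration of why the step bound matters (this is not a real Turing-machine simulator; a "machine" here is modelled as a Python generator that yields once per simulated step and returns True iff it accepts):

```python
def m_prime(machine, x):
    """M' on input x: run `machine` on the empty string for up to len(x) steps.

    Reject iff the machine accepts within the budget; accept otherwise.
    Crucially, this always halts, even if `machine` itself never would.
    """
    sim = machine()
    for _ in range(len(x)):
        try:
            next(sim)                    # one simulated step
        except StopIteration as halt:
            if halt.value:               # machine accepted epsilon in time
                return False             # ... so M' rejects x
            return True
    return True                          # budget exhausted without acceptance

def accepts_eps_in_3_steps():
    for _ in range(3):
        yield
    return True

def loops_forever():
    while True:
        yield

print(m_prime(accepts_eps_in_3_steps, "00000"))  # False: long enough inputs get rejected
print(m_prime(accepts_eps_in_3_steps, "0"))      # True: budget too small to see acceptance
print(m_prime(loops_forever, "0" * 100))         # True: M' halts even on a looping machine
```

So if the machine accepts the empty string, $L(M')$ misses all sufficiently long strings; if it never accepts, $L(M') = \{0,1\}^*$ — and in either case the decision is made in finite time.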
In general, when you perform reductions that output a machine that simulates another machine, you must always consider the possibility that the latter will not halt. If this poses a problem (as it does here), a good trick to circumvent it is to limit the number of steps; and since you often want this limit to exist but be "unbounded", using the input of the machine as the limit is a good idea. | {
"domain": "cs.stackexchange",
"id": 1305,
"tags": "computability, turing-machines, undecidability"
} |
Unable to locate package ros-melodic-desktop-full | Question:
Hey, I am trying to install the BETA version of ROS Melodic on Ubuntu 18.04 by following the standard installation instructions, but end up with:
Unable to locate package ros-melodic-desktop-full
It looks like everything is online and there's just no desktop-full on the repo. Is there any other way to currently install and test ros-melodic without building from source?
EDIT:
Building from source also did not work for me:
$ rosinstall_generator desktop_full --rosdistro melodic --deps --tar > melodic-desktop-full.rosinstall
The following not released packages/stacks will be ignored: desktop_full
No packages/stacks left after ignoring not released
$ wstool init -j8 src melodic-desktop-full.rosinstall
Using initial elements from: melodic-desktop-full.rosinstall
WARNING: Not using any element from melodic-desktop-full.rosinstall
Writing /home/arek/ros_catkin_ws/src/.rosinstall
update complete.
$ ./src/catkin/bin/catkin_make_isolated --install -DCMAKE_BUILD_TYPE=Release
bash: ./src/catkin/bin/catkin_make_isolated: No such file or directory
Anyone know what's wrong?
Originally posted by ArkadiuszN on ROS Answers with karma: 19 on 2018-04-28
Post score: 1
Answer:
Anyone know what's wrong?
Nothing is really "wrong": there just isn't a version of desktop_full released for Melodic. The fact that the installation instructions won't work yet is also clearly stated at the top of the ROS Melodic installation instructions page:
CAUTION: Release of this distribution is pending.
ROS Melodic Morenia has not been fully released yet, so these instructions will not entirely work.
Trying to build this from source is not going to change it: rosinstall_generator won't be able to find a desktop_full package (as it can only build packages that have been released).
To install Melodic packages, you can check status_page/ros_melodic_default.html to see which packages have been released (this page is also linked from the Melodic install page, under section Build farm status). See status_page/blocked_releases_melodic.html to see why something hasn't/can't be released into Melodic yet.
Is there any other way to currently install and test ros-melodic without building from source?
Yes: install individual packages for now.
Originally posted by gvdhoorn with karma: 86574 on 2018-04-28
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by ArkadiuszN on 2018-04-28:
Oh, I get it. According to Melodic Planning there's no beta yet, my mistake. Thank you for your answer.
Comment by Yev_D on 2020-03-13:
So it's 2020 and this still doesn't work, even though it has been released.
Comment by gvdhoorn on 2020-03-13:
Do you expect any assistance?
Because if you do, then:
you should post a new question
provide much more information
Comment by James Newton on 2020-07-31:
That message is no longer on that page. Melodic full IS now released. | {
"domain": "robotics.stackexchange",
"id": 30752,
"tags": "ros-melodic, ubuntu, ubuntu-bionic"
} |
How to estimate the Kolmogorov length scale | Question: My understanding of Kolmogorov scales doesn't really go beyond this poem:
Big whirls have little whirls that feed on their velocity, and little whirls have lesser whirls and so on to viscosity. - Lewis Fry Richardson
The smallest whirl, according to Wikipedia, would be this big:
$\eta = (\frac{\nu^3}{\varepsilon})^\frac{1}{4}$
... with $\nu$ being the kinematic viscosity and $\varepsilon$ the rate of energy dissipation.
Since I find no straightforward way to calculate $\varepsilon$, I'm completely at a loss as to what orders of magnitude to expect. Since I imagine this to be an important factor in some technical or biological processes, I assume that someone has measured or calculated these microscales for real-life flow regimes. Can anyone point me to these numbers?
I'm mostly interested in non-compressible fluids, but will take anything I get.
Processes where I believe the microscales to be relevant are communities of syntrophic bacteria (different species needing each other's metabolism and thus close neighborhood) or dispersing something in a mixture.
Answer: The size of the Kolmogorov scale is not universal, it is dependent on the flow phenomena you are looking at. I don't know the details for compressible flows, so I will give you some hints on incompressible flows.
From the quoted poem, you can anticipate that everything that is dissipated at the smallest scales has to be present at larger scales first. Therefore, as a very crude estimate, for a system of length $L$ and velocity $U$, one could argue on dimensional grounds (at this scale viscosity does not play a role!) that
$$\varepsilon=\frac{U^3}{L}$$
For a crude estimate, one could use this $\varepsilon$ to estimate the Kolmogorov length scale.
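This two-step estimate is easy to script; a minimal sketch (plain Python, function name my own):

```python
import math

def kolmogorov_length(U, L, nu):
    """Estimate the Kolmogorov length scale from large-scale velocity U,
    length L and kinematic viscosity nu, using the crude epsilon ~ U**3 / L."""
    eps = U**3 / L                 # crude dissipation-rate estimate
    return (nu**3 / eps) ** 0.25   # eta = (nu^3 / eps)^(1/4)

# running person in air: L = 1 m, U = 3 m/s, nu = 1.5e-5 m^2/s
eta = kolmogorov_length(U=3.0, L=1.0, nu=1.5e-5)
print(f"eta = {eta*1e6:.0f} micrometres")  # ~100 µm
```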
To put in numbers: suppose you ($L=1$ m) are running ($U=3$ m/s) in air ($\nu=1.5\times10^{-5}$ m$^2/$s); then $\eta \approx 100$ µm, which sounds at least reasonable. | {
"domain": "physics.stackexchange",
"id": 96920,
"tags": "fluid-dynamics, turbulence"
} |
Extract single values from Joy | Question:
I created a class, in which I want to subscribe both to image_raw and Joy topics, then I want to combine the data to draw/write something on the image with openCV. My code works and is able to
correctly subscribe image_raw
correctly subscribe Joy
publish some extracted values from Joy (PS3 Joystick)
write things on the image with openCV.
The code:
class image_converter:
def __init__(self):
self.image_pub = rospy.Publisher("image_topic_2",Image,queue_size=10)
self.bridge = CvBridge()
self.image_sub = rospy.Subscriber("usb_cam/image_raw",Image,self.callback)
def callback(self,data):
try:
cv_image = self.bridge.imgmsg_to_cv2(data, "bgr8")
gray = cv2.cvtColor(cv_image,cv2.COLOR_BGR2GRAY)
start = time.time() #simulate FPS count
global myjoy_p
global myjoy
myjoy3 = rospy.Subscriber("joy", Joy, self.joyread)
myjoy_p = rospy.Publisher("copy_topic", Int16MultiArray,queue_size=1)
#AA = myjoy_p.data[1] THIS IS MY QUESTION, SEE BELOW
BB = 7.5
cv2.putText(cv_image, 'JOY: {0}'.format(int(BB)),(40,300),cv2.FONT_HERSHEY_SIMPLEX,0.5,(0,0,255))
elapsed = time.time() - start
cv2.putText(cv_image, 'FPS: {0}'.format(int(1 / elapsed)),(10,40),cv2.FONT_HERSHEY_SIMPLEX,0.5,(0,0,255))
cv2.imshow("Modified Image", cv_image)
cv2.waitKey(1)
except CvBridgeError as e:
print(e)
try:
self.image_pub.publish(self.bridge.cv2_to_imgmsg(cv_image, "bgr8"))
except CvBridgeError as e:
print(e)
def joyread(self,data):
global myjoy
#Declaration type
myjoy = Int16MultiArray()
myjoy1 = Int16MultiArray()
myjoy1.data.append(int(255*data.axes[0])) #LEFT LEFTWARD
myjoy1.data.append(int(255*data.axes[1])) #LEFT UPWARD
myjoy1.data.append(int(255*data.axes[2])) #RIGHT LEFTWARD
myjoy1.data.append(int(255*data.axes[3])) #RIGHT UPWARD
myjoy = myjoy1
#publish
myjoy_p.publish(myjoy1)
This code works, even though there are 4 different myjoy variables (myjoy, myjoy_p, myjoy1, myjoy3), because I'm not a ROS expert and I made a lot of attempts.
The question: I would like to extract one of the values from myjoy and store the value in AA, then use AA instead of BB.
This is what I tried to do in the commented line
AA = myjoy_p.data[1]
The problem: I cannot access the myjoy values outside of the joyread callback. The error message is
AttributeError: 'Publisher' object has no attribute 'data'
I tried all of the myjoy variables. They are different object types (the error message changes a bit) but none of them seems to work. What do I have to do?
Thank you in advance.
(ROS Indigo and Python script)
Originally posted by marcoresk on ROS Answers with karma: 76 on 2017-04-15
Post score: 0
Answer:
I finally found the solution. myjoy maintains the Joy structure in the array. It was
AA = int(255*myjoy.axes[0])
and all the other myjoy variables were quite useless.
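Stripped of the ROS plumbing, the working pattern is simply: one callback stores the latest joystick values on the instance, and the other callback reads them. A minimal ROS-free sketch (class and attribute names are mine, not from the original code):

```python
class JoyImageNode:
    def __init__(self):
        self.last_axes = None  # written by the joy callback, read elsewhere

    def joy_callback(self, axes):
        # store the scaled axes on the instance instead of trying to read
        # them back from a Publisher object
        self.last_axes = [int(255 * a) for a in axes]

    def image_callback(self):
        # safe default until the first joy message arrives
        if self.last_axes is None:
            return 0
        return self.last_axes[1]

node = JoyImageNode()
node.joy_callback([0.0, 0.5, -1.0, 1.0])
print(node.image_callback())  # 127
```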
Originally posted by marcoresk with karma: 76 on 2017-04-15
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 27614,
"tags": "ros, callback, joy-node"
} |
rviz window closes itself | Question:
Hello,
I am trying to visualize simulated particles in rviz using points. But in the middle of the simulation process, rviz crashes. It gives the following description of the problem.
rviz: /tmp/buildd/ros-diamondback-visualization-common-1.4.0/debian/ros-diamondback-visualization-common/opt/ros/diamondback/stacks/visualization_common/ogre/build/ogre_src_v1-7-1/OgreMain/include/OgreAxisAlignedBox.h:252: void Ogre::AxisAlignedBox::setExtents(const Ogre::Vector3&, const Ogre::Vector3&): Assertion `(min.x <= max.x && min.y <= max.y && min.z <= max.z) && "The minimum corner of the box must be less than or equal to maximum corner"' failed.
Aborted
Can any one help? thanks.
Originally posted by Ali Abdul Khaliq on ROS Answers with karma: 63 on 2011-05-15
Post score: 2
Original comments
Comment by JonW on 2011-05-16:
I've seen a similar problem in gazebo when fiddling with the stepsize parameter. "The minimum corner of the box must be less than or equal to maximum corner"' failed.
Comment by Ali Abdul Khaliq on 2011-05-16:
The resolution and rviz window size is fine. I just noticed that with other simulation display, the rviz is working fine but with the points simulation it gives the error. Can it be something wrong with the point simulation?
Comment by Eric Perko on 2011-05-15:
I've seen this before but I can't quite recall in what cases... are you running the window under a weird resolution or making the rviz window very small/oddly sized?
Comment by Dannis Lee on 2017-07-20:
I also came across the same problem these days, can you tell me how to fix it? Thanks.
Comment by Simon Harst on 2017-08-09:
I just had this when publishing an incomplete odometry-orientation and opened a bug report on github.
Comment by alexe on 2020-04-11:
Happened to me too when I was accidentally publishing messages with illegal values, in my case NaNs in a PoseStamped. Once I made sure there were no NaNs in the message, the error no longer occurred.
Answer:
If you can provide minimal code to reproduce this, you should file a bug.
Originally posted by Mac with karma: 4119 on 2011-06-17
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 5575,
"tags": "ros, simulation, rviz, points"
} |
Shared entanglement to copy orthogonal states | Question: Assume that Alice and Bob are allowed to share entanglement and are spatially separated. Alice is given an unknown state and asked to measure this in the computational basis to obtain $\vert 0\rangle$ or $\vert 1\rangle$. Is there some way for Bob to also have a copy of same state as Alice instantaneously?
Note that it does not violate no-signalling since the outcome of the measurement for Alice is random - so she cannot use it to communicate. Another perspective is that this is sort of like cloning but since the only outcomes that Alice gets are $\vert 0\rangle$ or $\vert 1\rangle$ and they are orthogonal, it isn't forbidden by the no-cloning.
If this can be done, how should she and Bob design a quantum circuit that achieves this? Otherwise, what forbids this possibility?
Answer: Assume this works. Then, nothing prevents Alice from applying the same protocol to a quantum state that is known to her, such as $|0\rangle$ or $|1\rangle$. This way, she could send information to Bob instantaneously. Thus it would allow faster-than-light communication, and so it is impossible.
"domain": "quantumcomputing.stackexchange",
"id": 482,
"tags": "entanglement, non-locality, cloning"
} |
Is there a rotational analogue to reduced mass, i.e. reduced moment of inertia? | Question: Is there a rotational analogue to reduced mass? Is there anything called reduced moment of inertia?
Can we apply the concept of reduced moment of inertia to calculate the change in rotational kinetic energy in a collision where kinetic energy is purely rotational namely
$$\Delta KE =1/2\frac{I_1I_2}{I_1+I_2}(1-e^2){\omega_{rel}}^2$$
(I just came up with this formula by drawing parallels to formula for loss in translational K.E which is
$$\Delta K={\frac {1}{2}}\mu v_{\rm {rel}}^{2}(e^{2}-1)$$
)
For instance a rod ($M$,$L$) is hinged at its end and lies on a horizontal table, a point mass $m$ strikes it at its end perpendicular to the rod at a velocity $v$ and sticks to it .
Could we use the above formula to find the heat evolved?
P.S this is just an example I came up with to demonstrate my point and give some context you need not answer it if not necessary but please address my first query.
Answer: To see that your thesis is true, let us calculate the collision case of this example.
The two pendulums start with initial angular velocities and collide.
I start by writing the equations of motion shortly after the collision; you have to deal just with the constraint force.
$$I_1\,\ddot \varphi_1=L\,F_c$$
$$I_2\,\ddot \varphi_2=-L\,F_c$$
where $I_1\,,I_2$ are the moments of inertia about the suspension points,
$F_c$ is the constraint force shortly after the collision.
with :
$$\ddot \varphi=\frac{d}{dt}\,\dot \varphi$$
you obtain
$$I_1\,\frac{d}{dt}\,\dot \varphi_1=L\,F_c$$
$$I_2\, \frac{d}{dt}\,\dot \varphi_2=-L\,F_c$$
$\Rightarrow$
$$I_1\,\int_{\dot\varphi_{1i}}^{\dot \varphi_{1f}}\,d\dot \varphi_1=L\,\int_{t_i}^ {t_f}\,F_c\,dt=L\,p\tag 1$$
$$I_2\,\int_{\dot\varphi_{2i}}^{\dot \varphi_{2f}}\,d\dot \varphi_2=-L\,\int_{t_i}^ {t_f}\,F_c\,dt=-L\,p\tag 2$$
where $i$ stands for the "initial state", $f$ for the "final state", and $p$ is the linear momentum.
from Eq. (1) and (2) you get:
$$I_{{1}} \left( \dot\varphi _{{1f}}-\dot\varphi _{{1i}} \right) =L\,p\tag 3$$
$$I_{{2}} \left( \dot\varphi _{{2f}}-\dot\varphi _{{2i}} \right) =-L\,p\tag 4$$
You now have two equations for three unknowns; the third equation is that after the collision the relative velocity is zero, thus
$$\dot\varphi _{{1f}}=\dot\varphi _{{2f}}\tag 5$$
Solving Eqs. (3), (4) and (5) you obtain the final velocities and the linear momentum.
$$\dot{\varphi}_{1f}=\frac{I_1\,\dot{\varphi}_{1i}+
I_2\,\dot{\varphi}_{2i}}{I_1+I_2}$$
$$\dot{\varphi}_{2f}=\dot{\varphi}_{1f}$$
$$p=\frac{I_1\,I_2}{L(I_1+I_2)}\,(\dot{\varphi}_{2i}-\dot{\varphi}_{1i})$$
With those results, you can obtain the kinetic energy difference
$$\Delta T=T_i-T_f=\frac 12\,I_1\,\dot{\varphi}_{1i}^2
+\frac 12\,I_2\,\dot{\varphi}_{2i}^2-\left(\frac 12\,I_1\,\dot{\varphi}_{1f}^2
+\frac 12\,I_2\,\dot{\varphi}_{2f}^2\right)$$
$$\boxed{\Delta T=\frac 12 \frac{I_1\,I_2}{I_1+I_2}\,\left(\dot{\varphi}_{2i}-\dot{\varphi}_{1i}\right)^2}$$
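A quick numeric check of the boxed result against a direct energy balance (plain Python, arbitrary numbers):

```python
I1, I2 = 2.0, 3.0        # moments of inertia about the pivots
w1i, w2i = 5.0, -1.0     # initial angular velocities

# final common angular velocity of the perfectly inelastic collision, Eq. (5)
wf = (I1 * w1i + I2 * w2i) / (I1 + I2)

# energy difference computed directly from initial and final kinetic energies...
dT_direct = 0.5 * I1 * w1i**2 + 0.5 * I2 * w2i**2 - 0.5 * (I1 + I2) * wf**2
# ...and from the boxed reduced-inertia formula
dT_formula = 0.5 * I1 * I2 / (I1 + I2) * (w2i - w1i)**2

print(dT_direct, dT_formula)  # both ≈ 21.6
```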
Thus, the result is analogous to a head-on collision of balls. | {
"domain": "physics.stackexchange",
"id": 74558,
"tags": "newtonian-mechanics, rotational-dynamics, inertial-frames, moment-of-inertia"
} |
Is a stack machine with a forward read iterator Turing complete? | Question: It is well known that a machine with a single stack as only unlimited storage is not Turing complete, if it can only read from the top of the stack. I want a machine which is (slightly) more powerful than a stack machine, but still not Turing complete. (I wonder whether there exists a non-Turing complete machine, which can deterministically simulate any non-deterministic pushdown automata with an only polynomial slow-down.) The most benign (straightforward) extension that came to my mind was a (single) forward read iterator.
Let me elaborate the implementation details, to make it clear what I mean by a forward read iterator. A singly linked list can be used for implementing a stack. Let the list be implemented by a pointer pTop, which is either zero, or points to an SList node. An SList node consists of a payload field value and a pointer field pNext, where pNext is either zero, or points to an SList node. Let the forward read iterator be implemented by a pointer pRead, which is either zero, or points to an SListnode. The pointers pTop and pRead cannot be accessed directly, but can only be used via the following methods:
Push(val) creates a new SList node n with n.value = val and n.pNext = pTop, and sets pTop = &n.
Pop() aborts if pTop == 0 or pRead == pTop. Otherwise it reads val = pTop->value and pTopNext = pTop->pNext, frees the SList node pointed to by pTop, sets pTop = pTopNext and returns val.
ReadBegin() sets pRead = pTop.
ReadNext() aborts if pRead == 0. Otherwise it reads val = pRead->value, sets pRead = pRead->pNext and returns val.
ReadFinished() returns true if pRead == 0, and false otherwise.
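For concreteness, the interface above can be rendered directly in code; a minimal sketch (Python, with object references in place of raw pointers, and assert in place of abort):

```python
class SList:
    """Singly linked list node: a payload and a link to the next node."""
    __slots__ = ("value", "pnext")
    def __init__(self, value, pnext):
        self.value, self.pnext = value, pnext

class IterStack:
    def __init__(self):
        self.ptop = None    # top of the stack (None plays the role of 0)
        self.pread = None   # forward read iterator

    def push(self, val):
        self.ptop = SList(val, self.ptop)

    def pop(self):
        # aborts if the stack is empty or the iterator sits on the top node
        assert self.ptop is not None and self.pread is not self.ptop
        val, self.ptop = self.ptop.value, self.ptop.pnext
        return val

    def read_begin(self):
        self.pread = self.ptop

    def read_next(self):
        assert self.pread is not None
        val, self.pread = self.pread.value, self.pread.pnext
        return val

    def read_finished(self):
        return self.pread is None

s = IterStack()
for v in (1, 2, 3):
    s.push(v)
s.read_begin()
print([s.read_next() for _ in range(3)])  # [3, 2, 1], top to bottom
```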
Answer: Your model is Turing-complete, unfortunately.
You can simulate a queue in your data structure using the following algorithm. It introduces 3 new stack symbols: $d, x, y$.
Enqueue(val) is just Push(val).
For Dequeue():
ReadBegin().
Count the number of non-$d$ symbols minus the number of $d$'s in the whole stack (which should always be non-negative). Push $y$ or pop $x$ for every $d$, and push $x$ or pop $y$ for anything else, always preferring a pop to a push. At the end there won't be any $y$ in the stack, and the result will be the number of $x$'s on the top of the stack.
ReadBegin().
While pTop is a $x$:
Repeat ReadNext() until it returns something other than $x$ and $d$.
Pop().
Push a $d$.
The last result of ReadNext() is returned as the result of Dequeue.
The proof is straightforward. Check the revision history for a more complicated version that first reduces it to a two-way version.
"domain": "cs.stackexchange",
"id": 4437,
"tags": "computation-models, turing-completeness, linked-lists, stacks"
} |
Can I transform LaserScan messages? | Question:
Is there a way to transform LaserScan messages from one frame to another? I'd like to keep the message type (i.e., I don't want a PointCloud message, I want to keep LaserScan). I have a tf tree set up and I simply want to work with my LaserScan message in a different frame.
Originally posted by kamek on ROS Answers with karma: 79 on 2013-11-22
Post score: 0
Answer:
That's not easily possible because of the way a scan is represented in the sensor_msgs/LaserScan message. The distance measurements are given in a vector and the position of scan endpoints can be computed using the other fields in the message (angle_min etc.). This representation obviously is only valid when using the actual frame_id of the LIDAR.
The easiest way to transform it to another frame is converting it to a pointcloud using the tools provided in laser_pipeline.
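For illustration, the endpoint computation mentioned above takes only a few lines; a ROS-free sketch using the LaserScan fields (in practice the laser_geometry package does this for you, handling intensities and scan time as well):

```python
import math

def scan_to_points(ranges, angle_min, angle_increment):
    """Convert LaserScan-style ranges to (x, y) points in the scanner frame."""
    points = []
    for i, r in enumerate(ranges):
        a = angle_min + i * angle_increment
        points.append((r * math.cos(a), r * math.sin(a)))
    return points

# three beams at -90, 0 and +90 degrees, all 2 m away
pts = scan_to_points([2.0, 2.0, 2.0], -math.pi / 2, math.pi / 2)
# pts are now ordinary Cartesian points, ready for a tf transform
```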
Originally posted by Stefan Kohlbrecher with karma: 24361 on 2013-11-22
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by kamek on 2013-11-22:
That's what I figured, that you'd have to first convert to a pointcloud. Is there a simple way to convert back to a LaserScan message after the transformation? I thought maybe someone had written a LaserScan->PointCloud->Transform->LaserScan tool.
Comment by Stefan Kohlbrecher on 2013-11-22:
Look at the contents of the message, the representation only is viable when the frame used is the one where the LIDAR's spinning mirror is. (Nearly) every attempt to put a transformed scan into a LaserScan message will be an approximation and erroneous (apart from some corner cases).
Comment by kamek on 2013-11-22:
Thanks. I suppose it would work using a different LaserScan message that had ranges and bearings rather than ranges, start angle, and angle increments. | {
"domain": "robotics.stackexchange",
"id": 16243,
"tags": "laserscan, transform"
} |
I'm trying to understand DFAs: is my DFA correct in this question, if not why? | Question: I have the following alphabet {0,1}.
I'm told to draw a DFA which fulfills the following: {w|w starts with 1 and ends in 0}
This is the DFA I came up with.
Answer: This dfa is incorrect.
To start, a DFA MUST define a transition out of every state for every input symbol, including accepting states.
Anyhow, the current dfa would have accepted the language L = { w | w is a series of 1's and a single 0 at the end }
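For reference, the target language {w | w starts with 1 and ends in 0} is easy to sanity-check with a complete DFA simulation (the state names S, A, B, D here are illustrative, not taken from the drawing):

```python
def accepts(w):
    """DFA for {w | w starts with 1 and ends in 0} over {0, 1}."""
    # states: 'S' start, 'A' started with 1 and last symbol was 1,
    # 'B' started with 1 and last symbol was 0 (accepting),
    # 'D' dead state for strings not starting with 1
    delta = {
        ('S', '1'): 'A', ('S', '0'): 'D',
        ('A', '1'): 'A', ('A', '0'): 'B',
        ('B', '1'): 'A', ('B', '0'): 'B',
        ('D', '1'): 'D', ('D', '0'): 'D',
    }
    state = 'S'
    for c in w:
        state = delta[(state, c)]
    return state == 'B'

print(accepts("1010"))  # True: starts with 1, has a 0 in the middle, ends in 0
print(accepts("0110"))  # False: does not start with 1
```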
To correct this, you need to add another transition from B to A when it receives 1 and a transition from B to itself when receiving 0. This is required to allow strings with 0 in the middle to also be accepted. | {
"domain": "cs.stackexchange",
"id": 16270,
"tags": "regular-languages, finite-automata"
} |
Two messages with the same sequence number | Question:
I have a bag file with a lot of messages on different topics. I've just found out that I have two messages with the same sequence number, but they were published to different topics. I thought that every message sent over ROS had a unique sequence number in the system, but, apparently, this isn't true.
What exactly is the sequence number used for? In which circumstances can we have two (or more) messages with the same sequence number?
Originally posted by nbro on ROS Answers with karma: 372 on 2017-05-05
Post score: 0
Answer:
What exactly is the sequence number used for?
It's somewhat of a 'relic' from the earlier versions of ROS. The idea was that it could be useful to be able to correlate messages to each other, or deal with out-of-order delivery over transports that do not guarantee in-order delivery.
That never really got used (afaik), and on multiple occasions proposals to remove seq from Header have been put forth -- ROS 2 RFC: Remove seq from Header.msg on the ros-sig-ng-ros list being one of the more recent ones. That posting by @William also explains some of the shortcomings of seq, and might also answer some of your other questions about it.
In which circumstances can we have two (or more) messages with the same sequence number?
There is no guarantee whatsoever that seq is unique, in any way. From the posting linked earlier:
[..] in ROS 1, the seq field is sometimes set by the middleware, but inconsistently not set in other cases. For example, it is set when publishing, but not when sending a Service Request. Also, some client libraries don’t set the seq field on publish [..]
and:
[..] seq is specific to a single publishing source, or topic depending on the middleware's behavior [..]
Uniqueness guarantees would also be really hard to implement robustly. Imagine a highly distributed system with 100+ hosts with tens of nodes of each. seq is a simple uint32, so setting up some oracle that can reliably generate unique nrs would be almost impossible due to the limited width of the integer and because of the necessity to somehow keep track of nrs already used across all involved graph participants.
So there can be many situations in which multiple msgs have identical sequence nrs. Most commonly it can occur if publishers (or the underlying client library) don't initialise seq at all (it stays 0). In other cases seq could be set by the caller, who could happen to have started counting at the same nr (say 0) as some other publishing entity in your node graph. This would also lead to the possibility of receiving different msgs with identical seq values.
In conclusion: no, seq is not required nor guaranteed to be unique - not even within a single process - and you should not rely on it being unique. It's only still there because of its history and for reasons of backwards compatibility. It probably won't be maintained as part of a Header (or equivalent structure) in ROS2 and if we could, it would've been removed from ROS1 a long time ago already.
Edit: found some related discussion: Is the sequence number in Header message deprecated on ROS Discourse. The linked ros2/common_interfaces#1 on Github is also interesting, especially this comment.
Originally posted by gvdhoorn with karma: 86574 on 2017-05-06
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by nbro on 2017-05-06:
Why isn't this fact about the deprecation of seq explicitly and very clearly written in the documentation: http://docs.ros.org/kinetic/api/std_msgs/html/msg/Header.html ?
Comment by gvdhoorn on 2017-05-06:
Honestly? I guess things change, are repurposed / updated over time and it only takes one person not doing their job in such a federated development model. This is going to sound a bit like a cliche, but a PR against ros/std_msgs would be certainly be appreciated.
Comment by gvdhoorn on 2017-05-06:
Also: I'm not sure the field itself is deprecated. It's more the fact that users should not assume that it's unique or consistently used everywhere. But I guess that's about the same.
I'm not a core developer, so all of this is assumptions and extrapolation on my part. | {
"domain": "robotics.stackexchange",
"id": 27824,
"tags": "ros, rosbag, messages"
} |
What is called a salt? | Question: While I was doing experiments in my school days, they gave me different substances with the combination of base and acid radicals, for example ammonium nitrate and zinc chloride and they seldom called it a salt.
But actually a salt is a NaCl as far as I know.
So my question is: what, actually, is called a salt?
Why do they call the acid and basic radical a salt?
Answer: I think what confuses you is that 'salt' has two meanings: in the everyday sense it means table salt (which has the formula NaCl) but in the chemical sense it means a certain way for particles to hold together and form a tangible substance.
I'd define the latter as any periodic arrangement/lattice (see e.g. here ) of particles held together by ionic bonds.
The positive ion is almost always a metal, the negative one can be anything (so usually either a halogen (those lack only one valence electron to have a full shell, see e.g. Cl) or a (smallish) molecule-ion like SO4^2−)
base and acid radicals
I really don't see much connection between salts and either bases/acids and radicals. Do consider that not all ions are bases/acids and all radicals are highly reactive. It would rather be an edge case to have a salt with a radical in it; most salts have ions with energetically favourable number of electrons (as a rough rule this means all "electron shells" are either full or empty.)
NaCl is a good example.
Because different sized salt crystals still have extremely similar behaviour, the salt formulas express only the ratio of anions and cations. Realize that on the whole salts are neutral.
Why do they call the acid and basic radical a salt?
A single kind of particle is never called a salt, salt means an arrangement of two ions of different polarity. I guess some lab 'recipes' have sloppy language if something is provided as a salt, but only one component is important for the reaction.
Perhaps I can give a better answer once I understand what you mean by radical in this context.
"domain": "chemistry.stackexchange",
"id": 5425,
"tags": "acid-base"
} |
Importing .ipnyb file from Kaggle into local Jupyter | Question: Total beginner question here, please let me know if it would be more appropriate somewhere else.
I just created my first iPython notebook in Kaggle and I downloaded the ipnyb file. Now I have installed Jupyter locally and want to try working on the same notebook that way.
It seems like I got Jupyter working because I am able to create a new local notebook and it looks like what I would expect:
But when I open the ipnyb file that I downloaded from Kaggle, I just see what looks like raw JSON instead of a live notebook:
I also noticed that the icons of these two notebook files look different:
Any suggestions about what I might be doing wrong and how I might properly import my Kaggle notebook into Jupyter locally?
Answer: That's because "ipnyb" is not a proper format. Try downloading/renaming it properly to the .ipynb extension. | {
"domain": "datascience.stackexchange",
"id": 6613,
"tags": "jupyter, kaggle"
} |
Rydberg and electrical energy | Question: I am a high school student. This year I learned about the Rydberg constant and its use for finding the energy of an electron in chemistry; besides, in physics I learned to calculate the energy of two charged objects. So, I used $$k\times \frac{ q1\times q2}{d}$$
where
$k$ is $9 \times 10^9$, $q1$ is charge of an electron = -q2, $R$ is $-2.18 \times 10^{-18}$, and $d$ is distance of first orbital (radius of hydrogen)
and carried it over to the first one, which is equal to the energy of hydrogen's electron. The thing is, my physics equation is equal to 2 Rydbergs. I would be most grateful if someone explained to me what is wrong.
Answer: Firstly you should note that the calculation of the energy in the hydrogen atom is based on the Bohr model and this gives a highly misleading idea of what the atom is like. The electron in a hydrogen atom doesn't orbit the proton like the Earth orbits the Sun. In fact the ground state of hydrogen has zero orbital angular momentum, so the electron doesn't orbit at all. The Bohr model was superseded by quantum mechanics, and QM gives a very different description of the hydrogen atom.
But as long as you bear in mind that your calculation is just for fun and isn't a valid description of the physics, we can answer your question very easily. Your equation:
$$ V = \frac{kq_1q_2}{r} $$
calculates the potential energy. Note that this energy is negative because the charges of the electron and proton have different signs, so the energy is $V = -2R$ as you say. To get the total energy you have to add the kinetic energy of the orbiting electron, and the kinetic energy is $T = +R$. The total energy is:
$$ E = T + V = R + (-2R) = -R $$
And that's why the total energy is $-R$.
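Plugging in SI values confirms the factor of two (a quick numeric check with rounded constants):

```python
k  = 8.9875e9      # Coulomb constant, N m^2 / C^2
e  = 1.602e-19     # elementary charge, C
a0 = 5.292e-11     # Bohr radius, m
R  = 2.18e-18      # Rydberg unit of energy, J

V = -k * e**2 / a0   # potential energy of the Bohr ground state
print(V / R)         # ~ -2: the potential energy is -2 Rydberg
print((V + R) / R)   # total energy E = T + V with T = +R, ~ -1
```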
Incidentally, you might have spotted that the kinetic energy is half the (negative of the) potential energy:
$$ 2T = -V $$
This isn't a coincidence! This relationship is due to the virial theorem. It applies to all inverse square law forces, so it applies to gravity as well. For example it tells us that the Earth's kinetic energy is half its gravitational potential energy. | {
"domain": "physics.stackexchange",
"id": 29426,
"tags": "energy, hydrogen, electrochemistry"
} |
Archimedes Principle Question | Question:
A spherical buoy has a diameter of 2 m and weighs 3 kN. It is designed to be used in a variety of circumstances by being floated in water and anchored to the floor with a chain.
Draw the free body diagram and determine the following, given that the volume of a sphere is
$$Volume =\frac{\pi D^3}{6}$$
The upthrust or buoyant force (Fb) in sea water.
The tension in the chain (T)
Assume that at 4 °C the specific gravity of sea water is 1.042, and g = 9.81 m/s^2.
1.
Equation
$$F_b = \rho_f V_d g$$
$$F_b = \rho_f V_d g = (1.042\,\mathrm{kg/m^3})(4.188\,\mathrm{m^3})(9.81\,\mathrm{m/s^2})$$
$$ = 42.8\,\mathrm{N}$$
Is this correct? Trying to learn this. I think it is correct, but wanted to check so I understand how to do it correctly.
Then I am going to work out question 2.
Answer: A buoy by definition has to float, to be visible to ships and alert them to submerged obstacles or the depth datum.
Calling the combined weight of the sphere and the tie $W$, the sphere sinks such that the buoyancy of the part under the water, called a spherical cap, is equal to the weight of the sphere plus approximately the weight of the tie.
Assuming the water $ \rho=1 $
In a calm sea the buoy will settle at a submersion height $H$ such that:
$ V=W= \frac{\pi H^2}{3}(3r-H) $
r the radius of the sphere
H is the height of the submersion (Cap)
They usually anchor the buoy to the floor of the sea with an anchor which is many times heavier than the sphere's buoyancy even when fully submerged, so the maximum tension on the tie is the full buoyancy minus the weight of the sphere.
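Putting numbers to the fully submerged case in the original question (a sketch; assumes the specific gravity converts to about 1042 kg/m³ and the chain holds the buoy fully under water):

```python
import math

D = 2.0             # buoy diameter, m
W = 3e3             # buoy weight, N
rho = 1.042 * 1000  # sea-water density from specific gravity, kg/m^3
g = 9.81            # m/s^2

V = math.pi * D**3 / 6   # sphere volume, ~4.19 m^3
Fb = rho * V * g         # buoyant force on the fully submerged buoy, N
T = Fb - W               # chain tension = buoyancy minus weight

print(f"Fb = {Fb/1e3:.1f} kN, T = {T/1e3:.1f} kN")  # Fb = 42.8 kN, T = 39.8 kN
```

Note that the specific gravity must be scaled to kg/m³; substituting 1.042 kg/m³ directly gives an answer a factor of 1000 too small.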
| {
"domain": "engineering.stackexchange",
"id": 3292,
"tags": "mechanical-engineering, fluid-mechanics, hydrostatics"
} |
Fission Weapon Mechanism and Critical Mass | Question:
In the image we see the interior design of the Little Boy atomic bomb dropped on Hiroshima. There are two cylinders composed of uranium, a hollow cylinder at the top which will slide down to merge with the cylinder at the bottom. Both of these cylinders are below the critical mass but their combined mass surpasses the critical mass causing it to explode.
So my question is: How do we define the merging of these cylinders? The fact that they are touching each other does not mean that they've actually become one material: maybe there's a single-atom layer of air trapped between them, and even if there's no air between them, there are no metallic bonds either. If this is the case, then the merged situation is the same as the non-merged situation, so they're still two separate objects which can't sustain fission.
Do the cylinders need to make metallic bonds with each other or not? Or do they bond with each other because of the heat generated by the conventional explosives?
Or is it just enough for them to make electrical contact for us to say that the combined cylinders have passed the critical mass? If this is the case will they explode if we just connect them together with an electrically conductive wire?
Answer:
How do we define the merging of these cylinders
Merging (like touching) isn't really the important bit. The fuel is rearranged to form a supercritical geometry. The Chernobyl reactor went critical with all the fuel separated into rods that didn't touch each other.
Each unit of uranium in the weapon has some amount of decay that produces neutrons. These neutrons leave the fuel and have some chance of finding more fuel and initiating a fission event based on where the fissile material is and what is in the way.
In the weapon, the two cylinders are so far apart initially that most of the neutrons released fly away and do not reach the other to start additional reactions.
When the pieces are brought together, it is not the fact that they touch or anything, just that they are close enough for the neutrons to have an increased chance of encountering more material. If the space between the two were filled with a neutron absorber, that would limit the reaction. But a few mm of air would not be significant.
Although the common term is "critical mass", mass is only one portion of the equation. Fuel composition, geometry, and interposing materials such as moderators and absorbers all determine criticality together. | {
"domain": "physics.stackexchange",
"id": 82135,
"tags": "nuclear-physics, explosions, nuclear-engineering"
} |
Why doesn't a compression pulse on a spring move backwards? | Question: I have a spring and I give it a single impulse towards the left. A compression zone is produced, as shown in the picture; then this compression moves ahead to the left of the spring and we see a compression zone in the middle of the spring.
My question is: why doesn't this compression in the middle of the spring move both to the left and right as it relaxes (as I've drawn in the picture)? I have seen that the correct thing is that the compression pulse will move towards the left only; however, we can argue that the compression in the middle of the spring will push on both sides of the spring and hence produce pulses going both towards the left and the right.
Could anyone please help me in understanding as to why the compression pulse will go only to the left and not to the right.
Also are there going to be any rarefactions produced in the spring if I just push it and leave it there?
Answer: The third example in your image is a snapshot of the pulse. Unfortunately the snapshot can only show us the deformation (or potential energy) in the spring. It cannot show us the velocity of the elements in the spring.
If the compression is created, held statically in place, and then released, the waves will move in both directions as you show in the final example.
But when created from a compression from the right, the elements of the spring will also have a velocity. This velocity when combined with the potential energy of the compression serve to cancel out the pulse to the right and only allow the pulse to the left to propagate. | {
"domain": "physics.stackexchange",
"id": 74804,
"tags": "newtonian-mechanics, classical-mechanics, waves, acoustics, spring"
} |
Can $\emptyset$ be reducible to any other language? | Question: While solving some question that involved the empty set $\emptyset$, I was really wondering: is $\emptyset$ reducible to any other language, i.e., $\emptyset \leq A$ such that $A$ is a language over a given alphabet $\Sigma$, i.e., $A \subseteq \Sigma^*$?
I mean, one can never take $x \in \emptyset$, right? or am I missing anything?
Maybe $\emptyset \leq \emptyset$? because if I take a reduction $f$ such that $x \in \emptyset \Leftrightarrow f(x) \in \emptyset$, this is always true, because $x \in \emptyset$ is never true and $f(x) \in \emptyset$ is also never true, so that function is a reduction function in the empty-concept, no?
Answer: Recall that: a (many-one) reduction from $A \subseteq \Sigma^{*}$ to $B \subseteq \Sigma^{*}$ is a map $f : \Sigma^{*} \to \Sigma^{*}$ such that $x \in A \iff f(x) \in B$ for all $x \in \Sigma^{*}$. We usually put extra conditions on $f$, such as polytime computable, but let us not dwell on that here. We write $A \leq B$ when there is such an $f$ and say that $A$ is reducible to $B$.
The statement $A \leq B$ may be written in logical notation as
$$\exists f : \Sigma^{*} \to \Sigma^{*} . \forall x \in \Sigma^{*} . (x \in A \Leftrightarrow f(x) \in B).$$
It is a basic exercise in logic to figure out that:
$\emptyset \leq B$ is equivalent to
$$\exists f : \Sigma^{*} \to \Sigma^{*} . \forall x \in \Sigma^{*} . f(x) \not\in B,$$
which is equivalent to $B \neq \Sigma^{*}$.
$A \leq \emptyset$ is equivalent to
$$\exists f : \Sigma^{*} \to \Sigma^{*} . \forall x \in \Sigma^{*} . x \not\in A,$$
which is equivalent to $A = \emptyset$.
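To make the first equivalence concrete, here is a tiny Python sketch (all names are illustrative): when $B \neq \Sigma^{*}$ there is some fixed string $b_0 \notin B$, and the constant map $f(x) = b_0$ is a reduction from $\emptyset$ to $B$, because both sides of the biconditional are always false.

```python
# Membership predicates stand in for languages over {a, b}*.
EMPTY = lambda s: False             # the empty language: nothing is a member
B = lambda s: len(s) % 2 == 0       # B = even-length strings, so B != Sigma*

def constant_reduction(b0):
    """Reduction from the empty language to B: map every input to a
    fixed witness b0 chosen NOT to be in B."""
    return lambda x: b0

f = constant_reduction("a")         # "a" has odd length, hence "a" is not in B

# Reduction condition: x in EMPTY  <=>  f(x) in B, for all x.
for x in ["", "a", "ab", "aba", "abab"]:
    assert EMPTY(x) == B(f(x))      # both sides are False for every x
```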
Thus, only the empty set is reducible to the empty set, while the empty set is reducible to every set, except $\Sigma^{*}$. | {
"domain": "cs.stackexchange",
"id": 2243,
"tags": "computability, reductions"
} |
python tkinter Monty Hall GUI visualization | Question: I created a visualization to Monty Hall with python. this is my first program with python tkinter. I'm new to python (and OOP in python). I used three pictures for the doors. Adding them here.
download the doors gifs - https://ufile.io/2h7fb
Or, download separately:
door1.gif - https://ibb.co/jSNntw
door2.gif - https://ibb.co/bTCntw
door3.gif - https://ibb.co/bs0U6G
Would love to get some review. I'm sure my object oriented skills are not good at the moment.
from tkinter import Tk, Canvas, Button, PhotoImage
import random
DOOR_1 = 1
DOOR_2 = 2
DOOR_3 = 3
DOORS = (DOOR_1, DOOR_2, DOOR_3)
class MontyHall(Tk):
def __init__(self):
Tk.__init__(self)
self.total_counter = 0
self.winning_switchers = 0
self.winning_non_switchers = 0
self.geometry("600x600")
self.title("Monty Hall")
self.rowconfigure(2, weight=1)
self.canvas = Canvas(self, width=600, height=500)
self.text = self.canvas.create_text(300, 120)
self.answer = self.canvas.create_text(300, 140)
self.text_switching = self.canvas.create_text(300, 30)
self.text_not_switching = self.canvas.create_text(300, 60)
self.door1_pic = PhotoImage(file='door1.gif')
self.door2_pic = PhotoImage(file='door2.gif')
self.door3_pic = PhotoImage(file='door3.gif')
self.door1_show = self.canvas.create_image(100, 350, image=self.door1_pic)
self.door2_show = self.canvas.create_image(300, 350, image=self.door2_pic)
self.door3_show = self.canvas.create_image(500, 350, image=self.door3_pic)
self.canvas.grid(row=2, sticky='s')
self.canvas.tag_bind(self.door1_show, "<Button-1>", self.door1)
self.canvas.tag_bind(self.door2_show, "<Button-1>", self.door2)
self.canvas.tag_bind(self.door3_show, "<Button-1>", self.door3)
def door1(self, *args):
self.canvas.itemconfig(self.text, text="you chose door number 1")
self.choose_door(DOOR_1)
def door2(self, *args):
self.canvas.itemconfig(self.text, text="you chose door number 2")
self.choose_door(DOOR_2)
def door3(self, *args):
self.canvas.itemconfig(self.text, text="you chose door number 3")
self.choose_door(DOOR_3)
def choose_door(self, door_chosen):
self.block_doors(4)
self.canvas.itemconfig(self.answer, text="")
correct_door = random.choice(DOORS)
if correct_door != door_chosen:
close = random.choice(list(set(DOORS) - {correct_door, door_chosen}))
else:
close = random.choice(list(set(DOORS) - {door_chosen}))
self.block_doors(close)
self.options(door_chosen, correct_door, close)
def block_doors(self, block_door_number):
if block_door_number == 4:
self.canvas.tag_unbind(self.door1_show, "<Button-1>")
self.canvas.tag_unbind(self.door2_show, "<Button-1>")
self.canvas.tag_unbind(self.door3_show, "<Button-1>")
if block_door_number == 3:
self.canvas.delete(self.door3_show)
elif block_door_number == 2:
self.canvas.delete(self.door2_show)
elif block_door_number == 1:
self.canvas.delete(self.door1_show)
def unhide_door(self, open_door):
if open_door == 1:
self.canvas.tag_bind(self.door2_show, "<Button-1>", self.door2)
self.canvas.tag_bind(self.door3_show, "<Button-1>", self.door3)
self.door1_show = self.canvas.create_image(100, 350, image=self.door1_pic)
self.canvas.tag_bind(self.door1_show, "<Button-1>", self.door1)
if open_door == 2:
self.canvas.tag_bind(self.door1_show, "<Button-1>", self.door1)
self.canvas.tag_bind(self.door3_show, "<Button-1>", self.door3)
self.door2_show = self.canvas.create_image(300, 350, image=self.door2_pic)
self.canvas.tag_bind(self.door2_show, "<Button-1>", self.door2)
if open_door == 3:
self.canvas.tag_bind(self.door1_show, "<Button-1>", self.door1)
self.canvas.tag_bind(self.door2_show, "<Button-1>", self.door2)
self.door3_show = self.canvas.create_image(500, 350, image=self.door3_pic)
self.canvas.tag_bind(self.door3_show, "<Button-1>", self.door3)
def options(self, door_number, correct_door, closed_door):
self.total_counter += 1
def show_results():
self.canvas.itemconfig(self.text_switching,
text='Switching won {0:5} times out of {1} ({2:.2f}% of the time)'
.format(self.winning_switchers, self.total_counter,
(self.winning_switchers / self.total_counter * 100)))
self.canvas.itemconfig(self.text_not_switching,
text='Not switching won {0:5} times out of {1} ({2:.2f}% of the time)'.
format(self.winning_non_switchers, self.total_counter,
(self.winning_non_switchers / self.total_counter * 100)))
def change(*args):
button1.grid_forget()
button2.grid_forget()
chosen_door = next(iter(set(DOORS) - {door_number, closed_door}))
if chosen_door == correct_door:
self.canvas.itemconfig(self.answer, text="you were right")
self.winning_switchers += 1
else:
self.canvas.itemconfig(self.answer, text="you were wrong")
self.unhide_door(closed_door)
def keep(*args):
button1.grid_forget()
button2.grid_forget()
if door_number == correct_door:
self.canvas.itemconfig(self.answer, text="you were right")
self.winning_non_switchers += 1
else:
self.canvas.itemconfig(self.answer, text="you were wrong")
self.unhide_door(closed_door)
button1 = Button(self, text='Click to change door')
button1.bind("<Button-1>", change)
button1.grid(row=0, sticky='w')
button2 = Button(self, text='Click to keep door')
button2.bind("<Button-1>", keep)
button2.grid(row=1, sticky='w')
show_results()
if __name__ == "__main__":
application = MontyHall()
application.mainloop()
Answer: Overview
You've done an excellent job:
Overall code layout is good
You did a good job partitioning code into functions
You leveraged code written by others with the imports
Used meaningful names for classes, functions and variables
Here are some adjustments for you to consider, mainly for coding style.
Documentation
Add comments at the top of the file to state the purpose of the code. For example:
'''
Monty Hall problem.
Here's how it works... add summary here
'''
Instructions
When I run the code, all I see is three doors. I don't know what I am expected to do.
You should display some simple instructions in the GUI to prompt the user
for expected actions, such as:
Click on one of the doors to blah, blah, blah...
Efficiency
In the unhide_door function, it is wasteful to check the value of the open_door
variable in three separate if statements. They can be combined into a single if/elif:
if open_door == 1:
    # ...
elif open_door == 2:
    # ...
elif open_door == 3:
You partially did this in function block_doors. However, you should merge
the two if statements into one if/elif.
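Going further in the same direction: door1, door2 and door3 are near-duplicates, and they can be generated from one parameterized callback. A hedged sketch follows (the attribute names mirror the original class; the Tk parts are stubbed out so the idea stands alone):

```python
from functools import partial

class DoorHandlers:
    """Sketch: one parameterized callback instead of door1/door2/door3."""
    def __init__(self):
        self.last_message = None
        # In the real class you would tag_bind each door image to
        # self.handlers[n] instead of to a dedicated doorN method.
        self.handlers = {n: partial(self.on_door_click, n) for n in (1, 2, 3)}

    def on_door_click(self, number, *event_args):
        # Stand-in for canvas.itemconfig(self.text, ...) followed by
        # self.choose_door(number) in the real class.
        self.last_message = "you chose door number {}".format(number)
        return number

game = DoorHandlers()
game.handlers[2]()          # behaves like the old door2() callback
print(game.last_message)    # you chose door number 2
```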
Simpler
Consider removing these constants; it might be simpler just to use the numbers themselves:
DOOR_1 = 1
DOOR_2 = 2
DOOR_3 = 3
DRY
There are a lot of instances where the code is repeated 3 times, once
for each door. Although I don't have a specific recommendation in mind,
it would be worth spending some time looking for an opportunity to combine
common code. | {
"domain": "codereview.stackexchange",
"id": 45570,
"tags": "python, beginner, object-oriented, tkinter"
} |
How does thermal wavelength work exactly? | Question: In many sources it is stated that the thermal wavelength indicates the rough size of the atom. It is then stated that this wavelength is the de-Broglie wavelength of a particle whose momentum corresponds to the average kinetic energy at that temperature.
I don't really understand what this means in the context of quantum theory. If a particle has a very well defined momentum value, its position is very well undefined. In the limiting case if the particle actually has a de-Broglie wavelength of any value, $\psi$ will be a complex sinusoid with no size at all.
Is the thermal wavelength just an observational fact? Have people just noticed that gas atoms interact with the environment in such a way that the atoms localize to an area which happens to be the same as the wavelength of a momentum eigenstate with the eigenvalue of expected value of the atoms momentum? Or is there some proof for this somewhere?
Answer:
In many sources it is stated that the thermal wavelength indicates the rough size of the atom. It is then stated that this wavelength is the de-Broglie wavelength of a particle whose momentum corresponds to the average kinetic energy at that temperature.
Average kinetic energy of an atom is
$$\frac{3}{2}k_B T$$
de-Broglie wave is not an arbitrary wave function $\psi$, but a plane wave with energy
$$ E=\hbar\omega=\frac{\hbar^2 k^2}{2m},$$
where $k=\frac{2\pi}{\lambda}$ is the wave vector.
Thus, the definition requires us to equate these quantities:
$$
\frac{3}{2}k_B T = \frac{\hbar^2 k^2}{2m}
= \frac{h^2}{2m\lambda^2} \Rightarrow \lambda=...$$
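Solving gives $\lambda = h/\sqrt{3mk_BT}$. A quick numerical sketch (constants rounded; note that this simple kinetic-energy definition differs by an $O(1)$ factor from the conventional thermal de-Broglie wavelength $h/\sqrt{2\pi m k_B T}$):

```python
import math

h = 6.626e-34    # Planck constant, J s
kB = 1.381e-23   # Boltzmann constant, J/K

def thermal_wavelength(mass_kg, T_kelvin):
    # From (3/2) kB T = h^2 / (2 m lambda^2)  =>  lambda = h / sqrt(3 m kB T)
    return h / math.sqrt(3.0 * mass_kg * kB * T_kelvin)

m_He = 4.0 * 1.6605e-27          # helium-4 atom, kg
lam = thermal_wavelength(m_He, 300.0)
print(lam)   # roughly 7e-11 m, i.e. tens of picometres
```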
Rough size of atom
Note that here we treat an atom as a point-like particle, described by a De-Broglie wave. That is, the wave function here is for the overall motion of the atomic center-of-mass, rather than for an inner state of an atom, which is extensively discussed in QM books. This might be a possible source of confusion. Thus, what is meant here by the size of atom is not the size of the electronic cloud around the nucleus, but rather the extension of the wave packet in space. | {
"domain": "physics.stackexchange",
"id": 87146,
"tags": "quantum-mechanics, statistical-mechanics, thermal-radiation, wavelength, quantum-statistics"
} |
R packages: How to access csv files in data subfolder? | Question: I have successfully written an R package and want to ship it with a specific csv file. I placed the file in the data and data-raw subfolders.
read.csv("data/foobar.csv")
The above command fails. How can I read the csv file?
Answer: data-raw is for storing data alongside a short R script that will do the conversion to R data for the user, and the user will just use the data() function.[source] Alternatively, if you want the raw CSV to be user-accessible, I think you need to use the extdata folder, as documented here. Then the user can get the actual path to the file on their system, after package installation, with system.file("extdata", ..., package = "mypackage"). Then, finally, they can feed that path to read.csv() with whatever options they like. | {
"domain": "datascience.stackexchange",
"id": 6255,
"tags": "r"
} |
Do mel-spectrograms of two audios have linear property? | Question: Suppose I have an audio and some noise, and want to do data augmentation for analysis.
Since each audio and noise can have a corresponding mel-spectrogram, instead of computing the mel-spectrogram of the wav form of audio + noise, is it sufficient to just add up the mel-spectrograms of the audio and the noise?
Any mathematical justification of proving/disproving linearity is super appreciated!
Answer: No.
Mel-spectrogram is the projection of spectrogram, $|\text{STFT}|$ or $|\text{STFT}|^2$, onto mel basis. Linearity is lost at modulus: $|\text{STFT}(x_0)| + |\text{STFT}(x_1)| \neq |\text{STFT}(x_0 + x_1)|$.
However, one can first combine the $\text{STFT}$'s (the STFT is linear, so the sum of the STFTs equals the STFT of the summed signals) and then project: this is the same as the mel-spectrogram of the combined input.
Brief math: STFT is convolution with windowed complex sinusoids, and convolution is linear: $h \star x_0 + h \star x_1 = h \star (x_0 + x_1)$. The mel projection step is also linear.
Demo below.
import numpy as np
import librosa
#%% Direct mel #################################
kw = dict(sr=22050, n_fft=2048) # defaults
x0 = np.random.randn(4096)
x1 = np.random.randn(4096)
M0 = librosa.feature.melspectrogram(y=x0, **kw)
M1 = librosa.feature.melspectrogram(y=x1, **kw)
M = librosa.feature.melspectrogram(y=x0 + x1, **kw)
assert not np.allclose(M0 + M1, M)
#%% Direct STFT ################################
S0 = librosa.stft(x0, n_fft=kw['n_fft'])
S1 = librosa.stft(x1, n_fft=kw['n_fft'])
S = librosa.stft(x0 + x1, n_fft=kw['n_fft'])
assert np.allclose(S0 + S1, S)
#%% Mel, first combine STFT ####################
mel_basis = librosa.filters.mel(**kw)
MS01 = np.dot(mel_basis, np.abs(S0 + S1)**2)
assert np.allclose(MS01, M) | {
"domain": "dsp.stackexchange",
"id": 10428,
"tags": "spectrogram"
} |
Deriving expression of electric field at a point above centre of hemisphere | Question: I have been trying to derive the expression for electric field due to a solid uniformly charged hemisphere at a point which is a certain distance above the centre. I have identified that the differential element will be a disc. I have no idea how to proceed further. I know the expression for electric field due to a disc. Please guide me further.
Answer: Let us divide the hemisphere into solid discs of infinitesimal width $dx$ and radius $r$. Let us also denote the radius of the hemisphere as $R$ and the distance of each disc from the centre of the sphere as $x$ (by sphere I mean the sphere from which the hemisphere was obtained). Consider now a point lying on the axis, at a distance $x'$ from the centre of the sphere. Now you know the E field due to an individual disc is $E_{disc}=\frac{Q}{2\pi r^2\epsilon_0} (1-\frac{x'-x}{\sqrt{(x'-x)^2+r^2}})=\frac{Q}{2\pi (R^2-x^2)\epsilon_0} (1-\frac{x'-x}{\sqrt{(x'-x)^2+R^2-x^2}})$ from Pythagoras, and that Q is $\rho \pi r^2 dx$, where $\rho$ is the charge density. Now the integral for a single variable follows, since $R$ and $x'$ are constant. | {
"domain": "physics.stackexchange",
"id": 43393,
"tags": "homework-and-exercises, electrostatics"
} |
Is there a Feature selection process for ARIMA model? | Question: I have a dataset representing sales per day for certain products. It contains 30000 observations and 6 features (target included). Since my task is to make a prediction about the number of pieces sold, I decided to use an ARIMA model (following this tutorial here).
I realized that the only preprocessing step for auto ARIMA is removing all the columns except the target (since auto ARIMA trains on the previous values).
So, my question is: is it common to not perform any feature selection and only keep the Target variable in ARIMA?
Thank you.
Answer: In some sense, it is common to do feature selection before you fit the ARIMA model, or at the very least, it is natural (in my opinion).
The problem is that there seems to be little development in automatic feature selection techniques for statistical time series models that can use exogenous variables (like ARIMA). Thus, it is not clear how we can do feature selection. To make things worse, auto.arima doesn't do any feature selection on exogenous variables; it just uses AICc to find the optimal order of your model (in a stepwise fashion in its default setting). If you include exogenous variables in your model, they will always be included in all models in the selection process.
Basically, one way to do variable selection would be to try all possible combinations of exogenous variables, use auto.arima to find the "best" orders based on AICc, record this model's AICc (recall that AICc penalizes models that have large amounts of fitted parameters that do not increase the model's likelihood by a justifiable amount), and then pick the absolute best model out of all combinations of exogenous variables. Kind of a pain, and possibly very time consuming.
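The brute-force search described above is easy to set up: enumerate every subset of candidate exogenous variables, fit a model per subset, and keep the lowest AICc. A stdlib-only sketch of the enumeration (the `fit_aicc` scorer here is a toy placeholder; in practice you would replace it with a real auto-ARIMA fit, e.g. via `pmdarima.auto_arima`):

```python
from itertools import combinations

def all_subsets(features):
    """Yield every subset of the candidate exogenous variables,
    including the empty tuple (a pure ARIMA model, no regressors)."""
    for k in range(len(features) + 1):
        for combo in combinations(features, k):
            yield combo

def best_subset(features, fit_aicc):
    """fit_aicc(subset) -> AICc of the best model order for that subset.
    Returns the (subset, AICc) pair with the lowest AICc."""
    return min(((s, fit_aicc(s)) for s in all_subsets(features)),
               key=lambda pair: pair[1])

# Toy scorer: pretend 'price' and 'promo' carry signal and every extra
# regressor pays a small complexity penalty (mimicking AICc's behaviour).
def toy_aicc(subset):
    return 100 - 10 * len(set(subset) & {"price", "promo"}) + 2 * len(subset)

features = ["price", "promo", "weekday", "holiday"]
subset, aicc = best_subset(features, toy_aicc)
print(subset, aicc)   # ('price', 'promo') 84
```

Note that the search is exponential in the number of candidate variables (2^n fits), which is exactly the "possibly very time consuming" part.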
I hope this helps. | {
"domain": "datascience.stackexchange",
"id": 5317,
"tags": "time-series, predictive-modeling, feature-selection"
} |
A Question about Path Integral Measure | Question: I want to do the following path integral.
$$\mathcal{Z}=\int\mathcal{D}x e^{iS[\dot{x}]}$$
The action only depends on $\dot{x}$. For some reason, I want to replace the integral measure $\mathcal{D}x$ by $\mathcal{D}\dot{x}$.
So I have
$$\mathcal{Z}=\int\mathcal{D}\dot{x}\mathrm{Det}\left(\frac{\delta x}{\delta\dot{x}}\right)e^{iS[\dot{x}]}.$$
The variable $x$ is related with $\dot{x}$ via the linear transformation
$$x(t)=\int_{0}^{t}\dot{x}(s)ds,$$
which implies
$$\mathrm{Det}\left(\frac{\delta x}{\delta\dot{x}}\right)\equiv 1.$$
Am I correct in the above derivation?
Answer:
For the corresponding problem with discretized time, the Jacobian determinant of
the coordinate transformation
$$(x^0,x^1,\ldots, x^N)\qquad\longrightarrow \qquad (x^0,v^{1/2},\ldots, v^{N-1/2}), $$
where $$v^{j+1/2}~:=~\frac{x^{j+1}-x^j}{\Delta t} ,$$
would be $\det=(\Delta t)^{-N}$, not unity.
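A quick finite-$N$ check of this determinant (an illustrative sketch, not part of the original answer): build the matrix of the map $(x^0,\ldots,x^N)\mapsto(x^0,v^{1/2},\ldots,v^{N-1/2})$ and compare its determinant with $(\Delta t)^{-N}$.

```python
import numpy as np

def velocity_map_matrix(N, dt):
    """Matrix J of the linear map (x^0, ..., x^N) ->
    (x^0, v^{1/2}, ..., v^{N-1/2}) with v^{j+1/2} = (x^{j+1} - x^j)/dt."""
    J = np.zeros((N + 1, N + 1))
    J[0, 0] = 1.0                     # first coordinate is kept as x^0
    for j in range(N):
        J[j + 1, j] = -1.0 / dt       # lower-triangular finite difference
        J[j + 1, j + 1] = 1.0 / dt
    return J

N, dt = 6, 0.1
det = np.linalg.det(velocity_map_matrix(N, dt))
print(det, dt ** (-N))                # both are 1e6 (up to float rounding)
```

Since the matrix is lower triangular, the determinant is just the product of the diagonal, $1 \cdot (1/\Delta t)^N$, in agreement with the claim above.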
For continuum time, the velocity is $$v(t)~=~\frac{dx(t)}{dt}~=~\int \!dt^{\prime} x(t^{\prime}) \frac{d}{dt}\delta(t\!-\!t^{\prime}). $$ Whether the functional determinant $${\rm Det}\frac{\delta v(t)}{\delta x(t^{\prime})} $$ is unity (or not) depends on regularization scheme and boundary conditions. However, see also this related Phys.SE post. | {
"domain": "physics.stackexchange",
"id": 56431,
"tags": "path-integral, functional-derivatives, functional-determinants"
} |
Multiple robots in Gazebo | Question:
I am working with a simulated robot in Gazebo that comes from a urdf.xacro
I expect to have multiple copies of it.
The thing is that I want to change the origin of my robots not manually (when I launch them). I know that there is a tag inside the robot.urdf.xacro file that specifies the origin, but I want to know if there is any parameter which I can modify without having to rewrite the urdf file.
Can you help me?
Thanks
Originally posted by arenillas on ROS Answers with karma: 223 on 2014-06-20
Post score: 0
Answer:
Hi arenillas, does this post answer your question: http://answers.gazebosim.org/question/4190/how-do-you-run-multiple-hector_quadrotors-without/ ?
Originally posted by al-dev with karma: 883 on 2014-06-21
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by arenillas on 2014-06-23:
It was just what I was looking for.
Thanks | {
"domain": "robotics.stackexchange",
"id": 18332,
"tags": "ros, gazebo, multiple"
} |
How much water is needed or released in the process of burning 1000 kcalories from fat reserves? | Question: Recently, I have started working on my weight and, having a curious mind, I looked deeper into this whole exercise/metabolism/energy production thing. And I became curious how much water is needed to burn 1000 calories while doing aerobic exercise.
At first I looked into glucose oxidation and it looked pretty straightforward - fully oxidizing 1 glucose molecule generates so much energy and releases 6 water molecules. So unless a lot of water is involved in breaking down glycogen to individual glucose molecules, it's a no-brainer.
Then however I realized that glycogen is used in moderate-to-intensive exercise, while I am curious about slow energy burning exercise like walking. And apparently in this case, especially after some time walking, the body starts burning fat. This is where I am completely lost, as it is way above my high school level of biochemistry, most of which I have forgotten.
So my question is, is water consumed or released while burning fat? And how much would it be in order to generate 1000 kcalories?
Answer: There are 9 (dietary) calories in a gram of fat, as a rule of thumb. So for 1000 kcal (or 1000 dietary calories), you would metabolize about 110 g of fat. To estimate how much water is made, we assume the fat is made exclusively from oleic acid. The net reaction for complete oxidation is:
$$\ce{C18H34O2 + 51/2 O2 -> 17 H2O + 18 CO2}$$
To get the mass of water, we take the mass of fat and multiply it by the ratio of stoichiometric coefficients and the ratio of molar masses:
$$m_\mathrm{water} = m_\mathrm{fat} \cdot \frac{\pu{18 g mol-1}}{\pu{282.47 g mol-1}} \cdot \frac{17}{1} = \pu{119 g}$$
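Plugging in the numbers as a quick sketch (the 119 g above comes from rounding the fat mass to 110 g before converting; keeping the extra digit gives about 120 g):

```python
# Quick numerical check of the water yield from "burning" 1000 kcal of
# fat, modelled as pure oleic acid.  Figures as in the text above:
# 9 kcal per gram of fat, 17 mol H2O per mol of oleic acid.
KCAL = 1000.0
KCAL_PER_G_FAT = 9.0
M_OLEIC = 282.47          # g/mol
M_WATER = 18.0            # g/mol (rounded)

m_fat = KCAL / KCAL_PER_G_FAT            # about 111 g of fat
m_water = (m_fat / M_OLEIC) * 17 * M_WATER
print(round(m_fat), round(m_water))      # 111 120
```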
This is just the core reaction. There are lots of other things that happen, including the hydrolysis of the triglyceride (which uses up one water for every fatty acid released). As the fatty acid is oxidized, you might be making ATP, which has an effect on the water balance as well. You would have to specify exactly what you mean by "burning" fat to give an idea of what you want to consider as the net effect. The net reaction I used looks like a combustion reaction, but the set of reactions going on in the body is more elaborate and more controlled. | {
"domain": "chemistry.stackexchange",
"id": 17525,
"tags": "biochemistry"
} |
What is Hubble's WFC3 "UVIS 47 G200" filter? What is it used for? How to find an example? | Question: Extensive reading for Are the dispersion directions of the prism and the grating in Hubble WFC3 UVIS G280 perpendicular? Can we call this a "grism"? With cross-dispersion? led me to Instrument Science Report WFC3 2003-02; WFC3 UVIS Filters: Measured Throughput and Comparison to Specifications.
Table 1: "UVIS Filter Performance Specifications from Version D JPL Spec or CEI" lists near the bottom
Fnumber Fname Lam0 FWHM lam -50 ... lam +50 ... Description
------- ----- ------ ---- ------- ------- -----------
UVIS 47 G200 2775.0 1850 1800 ... 3650 ... UV prism
Question: What is Hubble's WFC3 "UVIS 47 G200" filter? Is it just a prism or is it a combination of a prism and a filter?
What is it used for?
How can I find an example of how it has been used in practice?
Answer: I believe that is a typo. In Tables 4 and 5 of the same document, "UVIS 47" is listed as "g280", which is the grism you are curious about.
A perusal of several other HST documents (e.g., various Wide Field Camera 3 Instrument Handbooks) turns up no mention of a "G200" element, and searches of the HST archive turn up nothing if "G200" is entered in the "Filter/Grating" field. (Entering "G280" turns up observations with that grism.) | {
"domain": "astronomy.stackexchange",
"id": 5962,
"tags": "observational-astronomy, spectroscopy, hubble-telescope, instruments"
} |
Python code to get Accrued interest | Question: I've wrote this code and it does what is expected.
However, I'm wondering whether there are better ways to write this, especially whether it is useful to wrap it in a class, or other ways to make it less loose.
I'll be glad and grateful for any corrections or suggestions.
interest_list = [0.5, 0.4, 0.3, 0.5, 0.7, 0.4, -0.2, -0.5, 0.3,
0.7, 0.9, 1.0]
def get_unitary(interest_list):
unit_list = []
for i in range(len(interest_list)):
unit_list.append(1+ interest_list[i]/100)
return unit_list
# I've tested on some lists and found this faster than using reduce()
def prod(uni_list):
p = 1
for i in uni_list:
p *= i
return p
def get_accrued(prod):
sub_1 = prod -1
return sub_1 * 100
accrued = get_accrued(prod(get_unitary(interest_list)))
Answer: It'd be easier to help if you explained the purpose of the algorithm.
However the obvious simplification is in the get_unitary function. You don't want that list, you only want to work with the values that come back from that operation. So you can omit the creation and population of the list and make a generator function that just pops out the values sequentially using yield
def get_unitary(interest_list):
for value in interest_list:
yield (1 + value / 100)
Since prod just iterates the result, this produces the same output.
In fact, the function itself is so simple that you can reduce it even further by turning the get_unitary function into a generator expression. This is a different way of writing the version above.
def get_unitary(interest_list):
return (1 + value / 100 for value in interest_list)
But since that function is not actually consumed by any other code, you can just include it in the prod() function:
def product(interest_list):
unitary_generator = ( (1 + value / 100) for value in interest_list)
p = 1
for i in unitary_generator:
p *= i
return p
accrued = get_accrued(product(interest_list))
(I changed prod to product for clarity).
Without knowing the context, it's hard to know if it makes sense to keep the get_accrued function on its own. Since it only consumes the result of product() you can fold them together, maybe including a comment about the algorithm. So that will get you to this as a final form:
interest_list = [0.5, 0.4, 0.3, 0.5, 0.7, 0.4, -0.2, -0.5, 0.3,
0.7, 0.9, 1.0]
def accrued_interest(interest_list):
unitary_generator = ( (1 + value / 100) for value in interest_list)
p = 1
for i in unitary_generator:
p *= i
# if the logic behind this is not clear,
# a comment here would be useful!
return (p - 1) * 100
accrued = accrued_interest(interest_list)
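As a further aside (assuming Python 3.8+, where math.prod was added): the running product and the final conversion collapse into a couple of lines:

```python
import math

def accrued_interest(interest_list):
    # Compound the monthly percentage rates, then convert the
    # accumulated growth factor back to a percentage.
    return (math.prod(1 + value / 100 for value in interest_list) - 1) * 100

interest_list = [0.5, 0.4, 0.3, 0.5, 0.7, 0.4, -0.2, -0.5, 0.3,
                 0.7, 0.9, 1.0]
print(accrued_interest(interest_list))   # a little over 5 percent
```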
And of course you could also omit the generator altogether if your input data were just preformatted into numbers into the right range, such as [150, 140, 130... ] and so on | {
"domain": "codereview.stackexchange",
"id": 36981,
"tags": "python, python-3.x"
} |
Testing a Pivotal API request client using lots of mocking | Question: I have a class that is all about doing HTTP requests, and logging (in file system & database). It's only using 3 dependencies to do these things, so I'm fine with the code so far.
Here it is for convenience :
<?php
namespace App\Services\Pivotal\Request;
use App\Core\Request\HttpRequest;
use App\Core\Request\Log\RequestsLog;
use App\Services\Pivotal\Token\Factory\TokenFactory;
use Config;
class PivotalRequest
{
private $httpRequest;
private $tokenFactory;
private $requestLog;
private $serviceId;
private $baseUri = 'https://api.pivotal.io/vnz';
private $version = 'v1';
public function __construct(HttpRequest $httpRequest, TokenFactory $tokenFactory, RequestsLog $requestLog)
{
$this->serviceId = 287;
$this->httpRequest = $httpRequest;
$this->tokenFactory = $tokenFactory;
$this->httpRequest->setHeaders(
[
"Content-Type" => "application/json",
"Authorization" => "Bearer {$this->tokenFactory->getTokenId()}"
]
);
$this->httpRequest->setLogName('pivotal');
$this->requestLog = $requestLog;
}
public function get($uri, $datas = [])
{
return $this->request($uri, $datas, 'get');
}
public function post($uri, $datas = [])
{
return $this->request($uri, $datas, 'post');
}
public function useVersionNumber($versionNumber)
{
$this->checkVersionNumber($versionNumber);
$this->version = "v{$versionNumber}";
}
protected function request($uri, $datas, $method)
{
$this->prepareRequest($uri, $datas);
return $this->getResponse($method);
}
protected function prepareRequest($uri, $datas)
{
$fullUri = $this->getFullUri($uri);
$this->httpRequest->setRequestLog($this->requestLog->fromConnection($this->serviceId, $fullUri));
$this->httpRequest->setUri($fullUri);
$this->httpRequest->setDatas($datas);
}
protected function getResponse($method)
{
if ($method === 'get') {
$response = $this->httpRequest->get();
}
else if ($method === 'post') {
$response = $this->httpRequest->post();
}
return $response;
}
protected function checkVersionNumber($versionNumber)
{
if (!is_int($versionNumber)) {
throw new \InvalidArgumentException('Version number should be a positive int.');
}
}
private function getFullUri($uri)
{
$this->checkUri($uri);
return "{$this->baseUri}/{$this->version}/$uri";
}
private function checkUri($uri)
{
if (empty($uri)) {
throw new \InvalidArgumentException('Uri should not be empty.');
}
}
}
Now I'd like to write tests for this class. It seems I can't do anything else but mock, mock, and mock even more.
So far, this is what I have :
<?php
use App\Core\Request\HttpRequest;
use App\Core\Request\Log\RequestsLog;
use App\Services\Pivotal\Request\PivotalRequest;
use App\Services\Pivotal\Token\Factory\TokenFactory;
class PivotalRequestTest extends TestCase
{
/**
* @test
*/
public function full_passing_request_test()
{
$tokenId = 1;
$tokenFactoryMock = Mockery::mock(TokenFactory::class);
$tokenFactoryMock->shouldReceive('getTokenId')->once()->withNoArgs()->andReturn($tokenId);
$fullUri = 'https://api.pivotal.io/vnz/v1/livecheck';
$httpRequestMock = Mockery::mock(HttpRequest::class);
$httpRequestMock->shouldReceive('setHeaders')->once()->with([
'Content-Type' => 'application/json',
'Authorization' => 'Bearer 1'
]);
$httpRequestMock->shouldReceive('setLogName')->with('pivotal');
$httpRequestMock->shouldReceive('setRequestLog')->once();
$httpRequestMock->shouldReceive('setUri')->once()->with($fullUri);
$httpRequestMock->shouldReceive('setDatas')->once()->with([]);
$httpRequestMock->shouldReceive('get')->once()->withNoArgs();
$requestLog = new RequestsLog();
$requestLog->srv_name = 'srv_name';
$requestLog->request_name = $fullUri;
$requestLog->ip_src = 'ip_src';
$requestLog->service_id = 287;
$requestLog->request_status = 'prepared';
$requestLog->webservice_status = 'prepared';
$requestLogMock = Mockery::mock(RequestsLog::class);
$requestLogMock->shouldReceive('fromConnection')
->once()
->with(287, $fullUri)
->andReturn($requestLog);
$pivotalRequest = new PivotalRequest($httpRequestMock, $tokenFactoryMock, $requestLogMock);
$pivotalRequest->get('livecheck', []);
}
}
This looks really smelly to me, because I feel like I'm only describing the internals of the class, so instead of describing what the class should do, I describe how it does what it does (which should be of no interest, right?).
Is it common to have so many mocks (basically almost only mocks) to test a class? Is that a good practice? If not, how can I change this?
Answer: First of all, Bravo for writing unit tests! Secondly, Bravo for doing it in PHP!
Seriously though, I don't see enough people doing this type of testing.
What you're running into is referred to as test pain. Test Pain describes the amount of effort and setup required to sufficiently isolate code dependencies in order to perform unit testing.
In your case, the problem is the HttpRequest class. The class has a deep object graph that your PivotalRequest class knows too many details about.
From here, you have a couple of options:
1. Test Helpers
One approach here would be to create some helper code in your Unit tests that can easily create default values for mocked HTTP data (like request, cookies, query string, etc...) to help reduce test pain when dealing with HttpRequest.
The reason you have almost ONLY mocks as you said, is because your class is actually pretty compact and simple (which are both good). It is only the sheer size of mocking an HTTP Request compared to the amount of your actual code that makes it look this way.
As far as testing the inner-workings of your class vs. testing what it does. I would suggest renaming your test to clear this up. The convention I usually follow is something like this:
ObjectName_MethodName_GivenArguments_ShouldBehaveAsFollows
Or, for a more concrete example:
PivotalRequest_get_WithValidRequest_ShouldReturnValidResponse
By being that explicit, it becomes clear that you are really only interested in input and output and what happens in-between is less important (unless the test is focused on what is in-between).
2. Create a lightweight abstraction
If the HttpRequest class is too complicated, it could be that the code smell here is that the class does too much and should be split / broken down into smaller chunks.
Also, you could consider creating a wrapper class that is simpler to Mock such as:
class CompactHttpRequest
{
private $_httpRequest;
function __construct(HttpRequest $httpRequest)
{
$this->_httpRequest = $httpRequest;
}
public function setHttpRequestDefaults($queryString, $headers, $formData)
{
// TODO: map incoming arguments to $this->_httpRequest
}
}
Then, use the CompactHttpRequest class in the constructor of your PivotalRequest class instead of HttpRequest. In doing so, you'll only ever need to mock one method. In addition, the only aspect of that mock that you care about is that the setHttpRequestDefaults method is called. The specifics of what is passed becomes irrelevant for the purposes of the test.
3. Be less specific
In this test, the actual content of the headers being set seems like extra data that isn't necessary to specify if we're not ACTUALLY going to make the HTTP request.
4. Refactor object relationships
This is my preferred approach, as I believe it is more in the spirit of OO programming.
It seems to me like the PivotalRequest doesn't do much, as you also note in your question. That being the case, it makes more sense to me to have a class that can process different types of requests, and to make your PivotalRequest class inherit from the HttpRequest class and override the specifics like the URL being posted to, the HTTP headers, etc...
Here is an illustration I made to demonstrate what I mean:
In this way, your system becomes more flexible and all of your core logic is in one spot in the RequestProcessor class.
Overall, the best way to know what will work for your application and what won't is to decide what type of testing you will do.
I am more of a mockist tester as defined by Martin Fowler, so I am interested in having slightly stronger coupling between my tests and my implementation because it forces me to think more about what I'm doing and although it makes tests a bit more fragile, it also finds more bugs.
For more on this, check out his article here:
http://martinfowler.com/articles/mocksArentStubs.html | {
"domain": "codereview.stackexchange",
"id": 22715,
"tags": "php, unit-testing, json, mocks, phpunit"
} |
Priority queue with both decrease-key and increase-key operations | Question: A Fibonacci Heap supports the following operations:
insert(key, data) : adds a new element to the data structure
find-min() : returns a pointer to the element with minimum key
delete-min() : removes the element with minimum key
delete(node) : deletes the element pointed to by node
decrease-key(node) : decreases the key of the element pointed to by node
All non-delete operations are $O(1)$ (amortized) time, and the delete operations are $O(\log n)$ amortized time.
Are there any implementations of a priority queue which also support increase-key(node) in $O(1)$ (amortized) time?
Answer: Assume you have a priority queue that has $O(1)$ find-min, increase-key, and insert. Then the following is a sorting algorithm that takes $O(n)$ time, contradicting the $\Omega(n \log n)$ lower bound for comparison-based sorting, so no such priority queue can exist:
vector<T>
fast_sort(const vector<T> & in) {
vector<T> ans;
pq<T> out;
for (auto x : in) {
out.insert(x);
}
for (auto x : in) {
ans.push_back(*out.find_min());
out.increase_key(out.find_min(), infinity);
}
return ans;
} | {
"domain": "cs.stackexchange",
"id": 98,
"tags": "data-structures, priority-queues"
} |
Alignment of a compass needle | Question: From what I know, the compass needle aligns itself with the earth's magnetic field since Earth's geographic north pole (magnetic south) attracts the compass needle's north pole.
However, when a compass is placed near a current-carrying wire, how does it align itself in the direction of the wire's magnetic field even though there is no north or south pole to attract the tips of the compass needle?
Answer: You shouldn't think about it as being attracted towards one of the poles, but rather the needle aligns itself with the magnetic field lines. A magnet generates field lines that look like the following:
The field lines from the magnet extend throughout all space, which is how we are able to detect them on the surface of the earth. What your compass needle is doing is aligning itself with the arrows in the diagram above, which tells you in what direction the field points at that location, which also implies in what direction the south pole of the magnet is.
Magnetic fields are also generated by current-carrying wires. To understand why moving charges (currents) create a magnetic field, see this answer: How do moving charges produce magnetic fields?. Up close, these fields can be stronger than that of the earth, and therefore the needle will align itself with the net field near the wire. | {
"domain": "physics.stackexchange",
"id": 37426,
"tags": "electromagnetism"
} |
Ellingham diagram/graph for tungsten | Question: I've recently come across one of the methods to form tungsten carbide($\ce{WC}$) from wolframite ($\ce{(Fe,Mn)WO4}$) by using carbothermic reduction which was supposedly one of the first methods for extracting tungsten from its ores developed in 1783. But I wanted to know the particular temperature at which the reduction occurs, so I've searched online for the Ellingham diagram of wolframite but I've found no sources for that. There were Ellingham diagrams for many metals except for wolfram. I just want to know if anyone can cite a place where I can find it?
Answer: Wendel [1] studied the oxidation of tungsten and produced this diagram for the free energy of formation of various tungsten oxides:
Each of the lines represents free energy per mole of $\ce{O2}$, and the $\ce{WO3}$ line lies below that of every lower oxide (red, lowest line in the graph). Therefore $\ce{WO3}$ is the only thermodynamically stable oxide and the only one that would appear in an Ellingham diagram. Fitting that line onto the diagram we find that it crosses the $\ce{C/CO}$ line at about 1200°C.
This equilibrium temperature of 1200°C assumes the carbon monoxide is generated at one atmosphere partial pressure. The reaction could take place at somewhat lower temperature using a lower $\ce{CO}$ pressure. In practice, the reduction conditions are more likely driven by kinetics instead of thermodynamics. Wang et al.[2] report that carbothermic reduction at 1100°C gives a mixture of tungsten metal and carbides, which are then purified to ultrafine $\ce{WC}$ by reacting with more carbon at 1200°C:
In the current study, ultrafine and high-purity tungsten carbide (WC) powders are successfully prepared by a two-step process: carbothermic reduction of WO3 followed by carbonization reaction. The effects of the C/WO3 molar ratio, reaction temperature and reaction time on the phase transition and morphology evolution of the products are investigated. During the carbothermic reduction process, all the oxygen in yellow tungsten trioxide (WO3) is removed by carbon to generate a mixture of W, W2C and WC at 1100 °C; and then the as-prepared powder is mixed with an appropriate content of carbon black and carbonized at 1200 °C. The carbon content in the finally obtained WC powders is almost equal to the theoretical value. Furthermore, it is found that a high C/WO3 molar ratio at the first stage is beneficial for decreasing the particle size of the WC powders. When the C/WO3 molar ratio is 3.5, the single phase WC with a particle size of about 200 nm can be obtained. Therefore, this carbothermic reduction–carburization process may provide a simple, low-cost, and high efficiency route to prepare the WC powders in a large-scale.
References
1.
J. Wendel. Thermodynamics and Kinetics of Tungsten Oxidation and Tungsten Oxide Sublimation in the Temperature Interval 200°–1100°C. MS thesis submitted to Lund University (Sweden). (Lund, Sweden: European Spallation Source ESS AB, 2014), p. 14.
2.
Kai-Fei Wang, Guo-Dong Sun, Yue-Dong Wu, Guo-Hua Zhang (2019).
"Fabrication of ultrafine and high-purity tungsten carbide powders via a carbothermic reduction–carburization process".
Journal of Alloys and Compounds 784, 362-369,
ISSN 0925-8388,
https://doi.org/10.1016/j.jallcom.2019.01.055. | {
"domain": "chemistry.stackexchange",
"id": 16437,
"tags": "inorganic-chemistry, thermodynamics, reference-request, metallurgy"
} |
Quantization of the massless neutrino field | Question: If a massless neutrino or anti-neutrino is considered (in the whole post I consider neutrinos res. anti-neutrinos as mass-less), it is described by the Weyl-equation:
$$\overline{\sigma}^{\mu}\partial_\mu \chi_a =0 $$ if it is left-handed and neutrino (i.e. transforming according to $D^{(\frac{1}{2},0)}$)
or
$$\overline{\sigma}^{\mu}\partial_\mu \xi^\dagger_\dot{b} =0 $$ if it is right-handed and anti-neutrino (i.e. transforming according to $D^{(0,\frac{1}{2})}$)
Instead, a Dirac-particle is described by a bispinor (4-spinor) $$\left(\begin{array}{c} \chi_a \\ \xi^{\dagger}_\dot{b} \end{array} \right)$$ which has 4 degrees of freedom (spin-up particle, spin-down particle, spin-up anti-particle, spin-down anti-particle), whereas a solution of the Weyl-equation apparently has only one degree of freedom, as the helicity is bound to the neutrino type (neutrino or anti-neutrino).
However, the Weyl-solution has 2 components. Even worse, according to Landau/Lifschitz volume 4 (I also searched in Srednicki for such an expansion, but could not find it), the corresponding field operator of the free Weyl-equation can be expanded in positive and negative frequency solutions:
$$\chi_a = \sum_p (U(p)_a a_p e^{ipx} + V(p)_a b_p^\dagger e^{-ipx})$$
I use capital letters for the 2-spinors $U(p)$ and $V(p)$ in order to distinguish them from the well-known bispinor solutions of the Dirac-equation $u(p)$ and $v(p)$.
Q: How is such an expansion in positive and, above all, negative frequency solutions possible? It seems that in a solution $\chi_a$, which solely describes neutrinos, anti-neutrinos mix in due to the appearance of the negative frequency solutions. This is actually the point I don't understand at all.
Q: What is the relation of $U(p)_a$ and above all $V(p)_a$ with the Dirac solutions $u(p)$ and $v(p)$ ?
Q: In particular how is guaranteed that $V(p)_a$ remains a left-handed 2-spinor, i.e. doesn't turn into a right-helicity 2-spinor (which seems to be manifest as $V(p)$ is the coefficient of the negative frequency solution)
I consider this detail as important because upon taking the hermitian conjugate the field-operator turns apparently into a right-handed 2-spinor:
$$ \chi^\dagger_\dot{a} = \sum_p (U(p)_\dot{a} a^\dagger_p e^{-ipx} + V(p)_\dot{a} b_p e^{ipx})$$
Such a change of representation does not happen if the hermitian conjugate of a bispinor Dirac-solution is taken since the hermitian conjugate of a bispinor Dirac-solution transforms in a representation which is equivalent to the original (standard bispinor) one.
Answer: The massless neutrino is described by a two-component spinor $\chi_{a}$. If we take the complex conjugate we get $(\chi_{a})^{*}=\chi^{*}_{\dot{a}}$. The massless neutrino can be described as $\chi_{a}$ or as $\chi^{*}_{\dot{a}}$. There are not two independent undotted and dotted spinors $\chi_{a},\xi^{*}_{\dot{a}}$. Notice that Landau and Lifshitz volume 4 2nd edition express this point just before equation (30.6) on page 112. This should clear up all the problems posed in the question. | {
"domain": "physics.stackexchange",
"id": 61294,
"tags": "antimatter, neutrinos, dirac-equation, spinors, second-quantization"
} |
How are monocytes larger than capillaries? | Question: I have read that the average size of a capillary is about 8 micrometers. How is it possible that the 15 micrometer or so monocytes in blood do not block these vessels?
https://en.m.wikipedia.org/wiki/Blood_vessel#Vessel_size
https://www.britannica.com/science/monocyte
Answer: They aren't completely rigid and can change shape to squeeze through (see Downey et al).
If they are activated, monocytes can get stuck in capillaries and block them, which contributes to poor circulation following reperfusion after an ischemic blockage (see Engler et al).
Downey, G. P., Doherty, D. E., Schwab 3rd, B., Elson, E. L., Henson, P. M., & Worthen, G. S. (1990). Retention of leukocytes in capillaries: role of cell size and deformability. Journal of applied physiology, 69(5), 1767-1778.
Engler, R. L., Schmid-Schönbein, G. W., & Pavelec, R. S. (1983). Leukocyte capillary plugging in myocardial ischemia and reperfusion in the dog. The American journal of pathology, 111(1), 98. | {
"domain": "biology.stackexchange",
"id": 9861,
"tags": "immunology, hematology, cardiology, blood-circulation, circulatory-system"
} |
Feynman Lectures Chapter 4.2: Add or remove weights in a non-ideal machine? | Question: Excerpt:
A very simple weight-lifting machine is shown in Fig. 4-1. This machine lifts three units "strong". We place three units on one balance pan, and one unit on the other. However, in order to get it actually to work, we must lift a little weight off the left pan. On the other hand, we could lift a one-unit weight by lowering the three-unit weight, if we cheat a little by lifting a little weight off the other pan. Of course, we realise that with any actual lifting machine, we must add a little extra to get it to run. This we disregard, temporarily. Ideal machines, although they do not exist, do not require anything extra.
Figure 4.1:
This may be already answered by @mmesser314
here, but I want to ask the question from a different point of view.
During the first part of the paragraph, we are told that "little weights" must be "lifted off" the left and the right pans to get the machine to "actually
work". Assuming that in this context lift off = remove, that makes sense. But then we are abruptly told that we should
add a "little extra" if we want to use an "actual" lifting machine. I assume that actual = in real-life and extra = additional weight or force.
So which one is it if the machine is not ideal - do we add or remove weights for it to work? Do we do both? First "lift off" a little weight, let the lifting and the lowering occur and add a "little extra" to do the reverse operation?
Answer: First just a comment on the wording: Feynman says you need to remove "a little weight," not "little weights." I picture this as using a knife to shave a small amount of mass from one of the cubes. Of course the details of how you remove the mass doesn't really matter, but I just want to make sure it's clear that Feynman is not saying to remove an entire box from either scale.
A second overall comment is that I think you are reading Feynman at a very high level of precision, whereas Feynman tends to use very physical and "natural language" arguments. Just as a suggestion, you might find a different book (Landau and Lifschitz? Kleppner?) that gives more mathematical details to be less frustrating. Of course Feynman has a lot of insight and is worth reading, but I'm just raising this since this is one common reason people find Feynman to be difficult to follow (until you already have some background in what he is talking about).
Anyway here is how I interpret Feynman's words:
"lift off": remove some mass from the balance.
"actual lifting machine": one whose pivot has frictional forces that act on the balance, which dissipate energy.
"add a little extra": add some extra energy (not mass).
Putting this all together, if you "lift off" (remove) a little (infinitesimal) mass from the left plate in your figure, then the balance will start to lift the three masses. However, if there is friction in the pivot of the balance that can dissipate heat (which there will certainly be if this is an "actual lifting machine"), then the balance will slow to a stop before the masses reach their maximum possible height. Therefore one must "add a little extra" energy (beyond simply removing an infinitesimal amount of mass) to overcome friction and raise the three masses. | {
"domain": "physics.stackexchange",
"id": 73607,
"tags": "newtonian-mechanics, newtonian-gravity, energy-conservation, thought-experiment"
} |
How to write a cleaner code for this piece of php snippet? | Question: I would like to know if there is a better/shorter/cleaner/resource saving method for this snippet of php code with codeigniter? Thanks.
$code = $this->uri->segment(3);
if (!$code) {
// validation
$this->form_validation->set_rules('code', 'PassCode', 'required|trim|max_length[100]|xss_clean');
// run validation to check if validation rules are satisfied
if ($this->form_validation->run() == true) {
$code = $this->input->post('code');
$user_id = $this->user_model->getUserId($code);
if (empty($user_id)) {
$data['errorMessage'] = 'Your code is not valid.';
} else {
redirect('user/profile');
}
}
} else {
$user_id = $this->user_model->getUserId($code);
if (empty($user_id)) {
$data['errorMessage'] = 'Your code is not valid.';
} else {
redirect('user/profile');
}
}
Answer: if ($this->form_validation->run() == true)
is redundant, just use...
if ($this->form_validation->run())
Also, you can move the whole if (empty($user_id)) block outside to the end to avoid duplicating it:
$code = $this->uri->segment(3);
if (!$code) {
// validation
$this->form_validation->set_rules('code', 'PassCode',
'required|trim|max_length[100]|xss_clean');
// run validation to check if validation rules are satisfied
if ($this->form_validation->run()) {
$code = $this->input->post('code');
$user_id = $this->user_model->getUserId($code);
}
} else {
$user_id = $this->user_model->getUserId($code);
}
if (empty($user_id)) {
$data['errorMessage'] = 'Your code is not valid.';
} else {
redirect('user/profile');
} | {
"domain": "codereview.stackexchange",
"id": 1515,
"tags": "php, codeigniter"
} |
How to measure vacuum permittivity? | Question: In this question, the first answer (though I don't completely understand that answer) states that $\epsilon_0$ is the proportionality constant in Gauss' law. If that's the case, why isn't it assumed to be just "1"? This actually leads to the question of how $\mathbf{\epsilon_0}$ was measured and determined, which again brings me back to "What is vacuum permittivity?"
P.S: I made a series of questions, here. But as it was too broad, I was told to form separate questions, but I have linked everything there, in the comments, kindly take a look.
Answer: As the comment by G. Smith says, you can actually set the proportionality constant to one. But then you would have to measure electric charge in some other units.
Consider the setup of SI units. One coulomb is the charge that is carried by a current of 1 Ampere in one second. An Ampere is defined as the current that causes two infinitely long and thin wires at 1 meter from each other to attract with a force of $2 \cdot 10^{-7}$ Newtons per each meter of the length of the wires. So, this definition is kind of tied to the Lorentz force. When you ask a question like "What is the Coulomb force between two static charges in vacuum?", you get a strange constant.
In the Gaussian units, for example, the situation is different. Here the unit of charge is defined in such a way that the constant in Coulomb's law is equal to one.
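As a quick numeric sanity check (a sketch in Python; the SI constants are standard CODATA-style values I'm supplying, not something from the text): two charges of one Gaussian unit each, held 1 cm apart, should exert a force of one dyne ($10^{-5}$ N) on each other when the same setup is computed in SI units with the "odd-looking" factor $1/(4\pi\epsilon_0)$.

```python
import math

# Assumed SI constants (CODATA-style values, not from the text)
epsilon_0 = 8.8541878128e-12          # vacuum permittivity, F/m
k_e = 1 / (4 * math.pi * epsilon_0)   # Coulomb constant, ~8.988e9 N m^2 / C^2

statcoulomb = 3.33564e-10             # 1 CGS charge unit in coulombs (from the answer)
r = 0.01                              # 1 cm expressed in meters

# Coulomb's law in SI for two 1-statC charges held 1 cm apart
F = k_e * statcoulomb**2 / r**2
print(F)  # ~1e-5 N, i.e. one dyne: the CGS unit of force
```

Getting exactly the CGS unit of force back is just another way of saying that the Coulomb constant equals one in Gaussian units.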
In short, if you define the charge so that it "makes sense" in terms of meters, kilograms, and Newtons, you will get odd-looking constants in electromagnetic laws. But if you define the charge units so that electromagnetic laws look nice, then one unit of charge in this system will have an odd-looking proportionality constant to the Coulombs (1 CGS charge unit $ \approx 3.33564×10^{−10}$ C). | {
"domain": "physics.stackexchange",
"id": 62621,
"tags": "electrostatics, gauss-law, polarization, dielectric, unit-conversion"
} |
What is the electric potential at the point charge, at the source of field? | Question: $$V(r)=k_eq/r$$
What is the electric potential when the test charge is placed very, very close to the source charge, i.e. when $r=0$? Is it infinity?
Answer: In this case, why don't you just input actual values and see what happens?
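Taking that suggestion literally, here is a short script (a sketch; the 1 nC source charge is an arbitrary value I picked for illustration) tabulating $V(r) = k_e q / r$ as $r$ shrinks:

```python
import math

epsilon_0 = 8.8541878128e-12
k_e = 1 / (4 * math.pi * epsilon_0)   # Coulomb constant, ~8.988e9 N m^2 / C^2
q = 1e-9                              # a 1 nC source charge (arbitrary choice)

for r in [1.0, 1e-3, 1e-6, 1e-9]:
    V = k_e * q / r
    print(f"r = {r:.0e} m   V = {V:.3e} V")

# Every factor-of-1000 decrease in r multiplies V by 1000,
# so V grows without bound as r -> 0.
```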
You get the electric field by taking the negative potential gradient, and as you can see, it varies inversely with the square of the distance. So if the distance changes from 1 m to $10^{-3}$ m, the force becomes $10^6$ times greater. You see where I am going with this? It becomes stronger and stronger, and hence you have to do more and more work. Thus the potential will approach infinity. | {
"domain": "physics.stackexchange",
"id": 37461,
"tags": "electrostatics, electric-fields, potential-energy, singularities, coulombs-law"
} |
rosbag tools for bag operation | Question:
I recently started to work extensively with rosbag.
I also started extracting videos out of it, and began writing my own tools for timestamp manipulation (which was a nice exercise, btw).
But then I found that there already exist tools for all these task.
Before I start writing more tools for bag operations, I'd like to ask what other repositories/packages/tools exist.
Edit:
I put the (hopefully ever-growing) list into an answer, so that this question gets more attention
Please send more suggestions!
Originally posted by tik0 on ROS Answers with karma: 220 on 2018-06-04
Post score: 2
Original comments
Comment by stevejp on 2018-06-04:
There was a post on ROS Discourse last week for "Qt rosbag exporter", which is a Qt-based GUI for working with bag files.
Comment by tik0 on 2018-06-05:
Just awesome! Thanks for the hint! I'll put it in the list.
Comment by lmathieu on 2018-06-05:
In a totally different technology (web based), there are https://marvhub.com and https://github.com/ternaris/marv-robotics, which work on bag files to display information on the web (the best example I found: https://marvhub.com/#/detail/q4mwxc3epkcqpqhiyoru7xclii )
Comment by tik0 on 2018-06-11:
@lmathieu This is just mindblowing. Never thought of such possibilities! I will directly integrate this into my work! Thanks for this awesome tip.
Comment by PrieureDeSion on 2018-06-13:
@tik0 if you are looking to manipulate/log simple text-based data, here's a little something I wrote: https://gist.github.com/PrieureDeSion/f9e185ec14556e905373c5be72718d6d
It may be of use for quick processing and extraction.
Comment by tik0 on 2018-06-28:
Found another toolset from this post, which is able to trim and merge bag files: https://answers.ros.org/question/10683/is-there-a-way-to-merge-bag-files/
Answer:
So the hopefully ever-growing list of tools to work with bag files is as follows:
Native rosbag
http://wiki.ros.org/rosbag
https://github.com/ros/ros_comm/tree/kinetic-devel/tools/rosbag/scripts
http://wiki.ros.org/rosbag/Cookbook
http://docs.ros.org/kinetic/api/rosbag/html/index.html
bag tools
http://wiki.ros.org/bag_tools
rqt_bag
http://wiki.ros.org/rqt_bag
http://wiki.ros.org/rqt_bag_plugins
rqt_bag_exporter
https://gitlab.com/InstitutMaupertuis/rqt_bag_exporter
https://discourse.ros.org/t/qt-rosbag-exporter/4964
MARV Robotics (extensible data management platform)
Code: https://github.com/ternaris/marv-robotics
Example: https://marvhub.com/#/collection/bags
Example: https://marvhub.com/#/detail/q4mwxc3epkcqpqhiyoru7xclii
Matlab solutions
native robotics toolbox: https://de.mathworks.com/help/robotics/ref/rosbag.html
(tested and working well): https://github.com/unl-nimbus-lab/bag2matlab
(untested): https://github.com/bcharrow/matlab_rosbag
Various bag tools (trimming, merging, editing, ... bag files)
bagtools: https://bitbucket.org/daniel_dube/bagedit/wiki/Home
Rosbag Editor: https://github.com/facontidavide/rosbag_editor
rosbag_fixer (work around message header's missing dependency information): https://github.com/gavanderhoorn/rosbag_fixer
Thread on Importing Rosbag in Python 3
Bag export to other formats
This is not well supported. So the best way is to export the bag to CSV which can then be converted using pandas, for instance
Thread on rosbag to npz or h5 conversion
Edit
Added Matlab solutions
Add toolset for trimming and merging bag files
Add Rosbag Editor
Add rosbag_fixer
Add bag export
Originally posted by tik0 with karma: 220 on 2018-06-11
This answer was ACCEPTED on the original site
Post score: 5
Original comments
Comment by VictorLamoine on 2018-06-28:
As I found myself working more and more with rosbag files this is very useful, is there a place where we should put this list on the wiki?
Comment by tik0 on 2018-06-29:
Also thought about that. But do you have anything in mind? From my point of view the ROS wiki might not be optimal, since this topic is very generic.
Comment by gvdhoorn on 2019-07-18:
Could add gavanderhoorn/rosbag_fixer:
Replaces incomplete msg defs with the correct ones in bags with topics published by rosjava
Works around rosjava/rosjava_bootstrap#16.
Would be better to have an actual fix committed, but that would depend on someone picking that issue up and actually solving it. | {
"domain": "robotics.stackexchange",
"id": 30967,
"tags": "rosbag, ros-kinetic"
} |
Electric field created by a uniformely charged, infinitely thin, straight wire of infinite length | Question: Using Gauss's law, I find that the expression for this field is :
$$E(r) = \frac{\lambda}{2\pi r \epsilon_0}$$
where $\lambda$ is the line charge density.
However, when I try to use the direct formula :
$$E(r) = \frac{1}{4\pi\epsilon_0} \int_{-\infty}^{+\infty}\frac{\lambda}{r^2}dl$$
The integral diverges...
Is it not possible to find the expression of this field without using Gauss' law?
Why doesn't the formula just work, like it is supposed to?
EDIT : I forgot to mention that the wire is infinitely thin
Answer: Coulomb's law says
$$ \vec{E(\vec{r})} = \frac{1}{4 \pi \epsilon_0} \int \frac{\rho(\vec{r}') (\vec{r} - \vec{r}') \, d^3 r'}{|\vec{r} - \vec{r}'|^3} .$$
The charge distribution is linear (a line of charge) with constant density $\lambda$. We'll say the wire runs along the $Z$ axis and we're calculating the field along the $X$ axis. This being the case, let us fix $\vec{r}=R\,\hat{x}$ as the horizontal distance $R$ from the wire to where we're measuring the field, and $\vec{r}'= z\, \hat{z}$ as the vertical coordinate ranging from $-\infty$ to $\infty$ along the wire. This brings us to
$$ \vec{E} = \frac{\lambda \hat{x}}{4 \pi \epsilon_0} \int_{-\infty}^{\infty} \frac{R \, dz}{(R^2 + z^2)^{3/2}} ,$$
where we've canceled the $Z$ component due to symmetry considerations. Integrating this expression leads us to the proper result.
This problem is usually solved using an angle as a parameter, but this integral here is so much fun. You can of course simplify by changing to angle variables, but do play a little bit with proving that it converges, evaluating it and then taking the limit. It is also necessary to prove that this limit exists (try using l'Hôpital), and to evaluate it I'd advise you to use a geometric series in $z$.
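As a numeric cross-check (a sketch; $z = R\tan t$ is one concrete choice of the angle variables mentioned above, which turns the integrand into $\cos t / R$ on $(-\pi/2, \pi/2)$), the integral comes out to $2/R$, and multiplying by $\lambda/(4\pi\epsilon_0)$ reproduces the Gauss's-law result $E = \lambda/(2\pi\epsilon_0 R)$:

```python
import math

def wire_integral(R, n=100_000):
    """Midpoint rule for the integral of R dz / (R^2 + z^2)^(3/2) over all z,
    after the substitution z = R*tan(t), t in (-pi/2, pi/2)."""
    a, b = -math.pi / 2, math.pi / 2
    h = (b - a) / n
    # After the substitution the integrand is simply cos(t) / R
    return sum(math.cos(a + (i + 0.5) * h) for i in range(n)) * h / R

print(wire_integral(2.0))   # ~ 2/R = 1.0
print(wire_integral(0.5))   # ~ 2/R = 4.0
```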
EDIT: We're calculating the field on a circle with radius $R$, of course. Maybe that didn't come out very clearly. | {
"domain": "physics.stackexchange",
"id": 20333,
"tags": "electric-fields"
} |
How does the Naive Bayes algorithm function effectively as a classifier, despite the assumptions of conditional independence and bag of words? | Question: The Naive Bayes algorithm used for text classification relies on 2 assumptions to make it computationally speedy:
Bag of Words assumption: the position of words is not considered
Conditional Independence: words are independent of one another
In reality, neither of those conditions often holds, yet Naive Bayes is quite effective. Why is that?
Answer: The main reason is that in many cases (but not always) the model obtains enough evidence to make the right decision just from knowing which words appear and don't appear in the document (possibly also using their frequency, but this is not always needed either).
Let's take the textbook example of topic detection from news documents. A 'sports' article is likely to contain at least a few words which are unambiguously related to sports, and the same holds for many topics, as long as the topics are sufficiently distinct.
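That intuition is easy to demonstrate with a toy model (a hand-rolled sketch; the four-document "corpus" and its counts are made up for illustration, not real data): even under the independence assumption, a couple of unambiguous words dominate the log-probability sum.

```python
import math
from collections import Counter

# A tiny made-up training corpus (illustrative, not a real dataset)
train = [
    ("sports", "the team won the match with a late goal"),
    ("sports", "the coach praised the striker after the game"),
    ("politics", "the minister defended the budget in parliament"),
    ("politics", "voters debated the new election law"),
]

word_counts = {"sports": Counter(), "politics": Counter()}
doc_counts = Counter()
for label, text in train:
    doc_counts[label] += 1
    word_counts[label].update(text.split())

vocab = {w for c in word_counts.values() for w in c}

def log_posterior(label, text, alpha=1.0):
    """log P(label) + sum over words of log P(word | label), Laplace-smoothed."""
    total = sum(word_counts[label].values())
    lp = math.log(doc_counts[label] / sum(doc_counts.values()))
    for w in text.split():
        lp += math.log((word_counts[label][w] + alpha) / (total + alpha * len(vocab)))
    return lp

doc = "the striker scored a goal"
scores = {c: log_posterior(c, doc) for c in word_counts}
print(max(scores, key=scores.get))  # 'sports'
```

The unseen word "scored" penalizes both classes equally; "striker" and "goal" only ever occur under 'sports', and that alone decides the classification, with word order never entering the computation.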
In general, tasks related to the overall semantics of the text work reasonably well with unigrams (single words, unordered) as features, whether with NB or other methods. It's different for tasks which require taking syntax into account, or which require a deeper understanding of the semantics. | {
"domain": "datascience.stackexchange",
"id": 7497,
"tags": "naive-bayes-classifier, naive-bayes-algorithim"
} |
Using pre-, post-, and in-order indexes to find information about a Binary Search Tree | Question: Recently I have been studying ways of traversing a BST (in Python), and have come across the terms pre-order, post-order and in-order.
I believe that I understand the three terms pretty well, and have tried several exercises and examples, which I got right. But I have problems formulating a relation between these ways of traversing a tree and other properties of the tree.
Take this exercise as an example:
I have tried to draw several binary trees and to imagine some kind of relation between the numbers and the size of the subtrees, but have always been unable to.
Thanks for any help in advance! -- This exercise is from Open Data Structures (in pseudocode) by Pat Morin
Answer: Long story short: it is possible in constant time if the tree is a full binary tree. If not, there are some cases where there is not enough information to find the size of the subtree in constant time.
Note that you have access to the node of the tree and not only the numbers of the node $u$.
That means that you can check in constant time the pre-, in- and post-order numbers of the left and right child of $u$.
Now you can use the pre-order to find the size of the left child and the post-order to find the size of the right child. Can you see how?
Some vocabulary and details on tree traversal
A bit of vocabulary:
a descendant $v$ of a node $u$ is a node that appears in the subtree rooted in $u$. A left (resp. right) descendant is a descendant of the left (resp. right) child of $u$;
an ancestor $u$ of a node $v$ is a node such that $v$ is a descendant of $u$; $u$ is a left (resp. right) ancestor of $v$ if $v$ is a right (resp. left) descendant of $u$;
a cousin $w$ of a node $v$ is a node such that $v$ and $w$ have a common ancestor $u$, different from $v$ and $w$. $w$ is a left (resp. right) cousin of $v$ if $w$ is a left (resp. right) descendant of $u$ and $v$ is a right (resp. left) descendant of $u$.
Now here is a table showing, for each kind of relative, whether it appears before or after a node $u$ in each order:

| Which node | pre-order | in-order | post-order |
|---|---|---|---|
| left ancestor | before | before | after |
| right ancestor | before | after | after |
| left descendant | after | before | before |
| right descendant | after | after | before |
| left cousin | before | before | before |
| right cousin | after | after | after |
What does that mean? It means that for a node $u$, $u.pre$ (the pre-order number of $u$) is equal to the number of ancestors of $u$ plus the number of left cousins. Similarly, $u.post$ is the number of descendants plus the number of left cousins.
That also means that $u.post - u.pre$ is the number of descendants minus the number of ancestors.
If the tree is a full binary tree, there is a constant time solution
Now let's go back to the problem.
Consider a node $u$. If $u$ has no child, then the size of the subtree is obviously $1$. Otherwise, $u$ has a left child $v$ and a right child $w$. Note that all descendants of $v$ are also left cousins of $w$.
since $v$ and $w$ have the same number of ancestors, $w.pre - v.pre$ is equal to the size of the subtree rooted in $v$!
that also means that $w.post - v.post$ is equal to the size of the subtree rooted in $w$ (because the number of descendants of $v$ + left cousins of $v$ is equal to the number of left cousins of $w$).
That means that the size of the subtree rooted in $u$ is:
$$w.pre + w.post - v.pre - v.post + 1$$
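The identity is easy to check mechanically. Here is a sketch (the tree construction and numbering helpers are mine; pre- and post-order numbers are assigned from 0, but the formula only uses differences, so the starting index doesn't matter):

```python
class Node:
    def __init__(self, left=None, right=None):
        self.left, self.right = left, right
        self.pre = self.post = None

def number(root):
    """Assign pre- and post-order numbers, starting from 0."""
    pre = post = 0
    def walk(u):
        nonlocal pre, post
        if u is None:
            return
        u.pre = pre
        pre += 1
        walk(u.left)
        walk(u.right)
        u.post = post
        post += 1
    walk(root)

def size(u):
    """Subtree size, computed the slow recursive way for reference."""
    return 0 if u is None else 1 + size(u.left) + size(u.right)

def size_from_orders(u):
    """Constant-time subtree size for a node of a *full* binary tree."""
    if u.left is None:  # in a full tree, no left child means leaf
        return 1
    v, w = u.left, u.right
    return w.pre + w.post - v.pre - v.post + 1

def full(h):
    """A full binary tree of height h (every node has 0 or 2 children)."""
    return Node() if h == 0 else Node(full(h - 1), full(h - 1))

root = full(3)
number(root)

def check(u):
    if u is None:
        return True
    return size_from_orders(u) == size(u) and check(u.left) and check(u.right)

print(check(root))  # True
```

For every node of the full tree, the constant-time formula agrees with the recursively computed subtree size.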
If the tree is not full, some conclusions are possible, but there may be problems
Now, as Hendrik Jan pointed out in the comments, what if $v$ or $w$ doesn't exist?
if $u$ has only a left child $v$, it is possible to know the size of the right child of $v$, using the in-order number. Indeed, since $u$ has no right child, $u.in - u.post$ is equal to the number of left ancestors of $u$, which is also the number of left ancestors of $v$. That means that the number of right descendants of $v$ is $u.in - v.in - 1$ (since $u.post = v.post + 1$ because $u$ has no right child). Now three cases are possible:
$v$ has no left child. Since we know the size of the right child, we know the size of the subtrees rooted in $v$ and in $u$ (it is $u.in - v.in + 1$ for the latter);
$v$ has a left and a right child. Using the previous method, we can find the size of the subtree rooted in $v$ and then the size of the subtree rooted in $u$;
$v$ has a left child but no right child. In that case, we are stuck, as we cannot draw any conclusion using only the information in $u$ and $v$. As an example, here are two trees where a node $u$ has an only left child $v$, $u$ and $v$ have the same pre-, in- and post-order numbers, but the subtrees rooted in $v$ are not of the same size:
1
/ \_________
2 5
/ \ /
3 4 6
/
7
/
8
1
/ \_________
2 4
\ /
3 5
/
6
/
7
\
8
\
9
In both trees, the node $5$ has a pre-order of $5$, in-order of $8$ and post-order of $7$, and the node $6$ (the only left child of $5$, itself having only a left child) has a pre-order of $6$, in-order of $7$ and post-order of $6$. However, the subtree rooted in $5$ is of size $4$ in the first tree and of size $5$ in the second tree. That means that knowing only this information cannot lead to a conclusion. One could find similar examples with a left path as long as desired.
Similar conclusions can be drawn if $u$ has only a right child. | {
"domain": "cs.stackexchange",
"id": 19852,
"tags": "binary-trees, python, binary-search-trees, subtree"
} |
What to do if a potential function does not work? Amortized analysis | Question: Here is an example taken from CLRS.
Q) Consider an ordinary binary min-heap data structure with n elements supporting the instructions INSERT and EXTRACT-MIN in O(lg n) worst-case time. Give a potential function Φ such that the amortized cost of INSERT is O(lg n) and the amortized cost of EXTRACT-MIN is O(1), and show that it works.
$\Phi(H) = 2 \cdot (\text{size of heap}) = 2n$
insert:
the amortized cost has the formula
$a_n = c_n + \Phi_{n+1} - \Phi_n$
$= \log(n) + 2(n+1) - 2n = \log(n) + 2 = O(\log(n))$
Does this hold?
Delete is a bit different, because after the operation $n$ goes down by $1$, hence
$a_n = c_n + \Phi_{n-1} - \Phi_n$
$= \log(n) + 2(n-1) - 2n = \log(n) - 2 = O(\log(n))$
So this potential function is obviously wrong for delete, since the amortized cost is not $O(1)$, even though insert gave the correct bound. How do I properly show that a potential function is incorrect? Is it enough to just show this? Note: I'm not looking to solve the above question, just looking for how to disprove potential functions.
Answer: When you delete, your actual cost, or $c_n$ in your notation, is $\log(n)$.
That means you want $\Delta \phi = \phi_{n+1}-\phi_n \approx -\log(n)$.
It's not hard to find a potential function that satisfies both:
Try $\phi = n\log(n)$.
for insert:
$Cost_{insert} = \log(n) + (n+1)\log(n+1) - n\log(n)$
$= \log(n)+\log(n+1)+n\log(n+1)-n\log(n) = O(\log(n))$
$Cost_{delete} = \log(n) + (n-1)\log(n-1) - n\log(n)$
$= \log(n)-\log(n-1)+n\log(n-1)-n\log(n)$
$\leq \log(2) + n\log\left(\tfrac{n-1}{n}\right) \leq \log(2) + 0 = O(1)$
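These asymptotics are easy to check numerically (a small Python sketch; the sample values of $n$ are illustrative): with $\Phi(n) = n\log n$, the amortized insert cost grows like $2\log n$ while the amortized delete cost stays bounded.

```python
import math

# Phi(n) = n*log(n); amortized cost = actual cost + potential change
def amortized_insert(n):
    return math.log(n) + (n + 1) * math.log(n + 1) - n * math.log(n)

def amortized_delete(n):
    return math.log(n) + (n - 1) * math.log(n - 1) - n * math.log(n)

for n in (10, 1000, 10**6):
    # insert / log(n) tends to 2; delete tends to about -1, i.e. O(1)
    print(n, amortized_insert(n) / math.log(n), amortized_delete(n))
```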
All potential functions that satisfy $\phi(i) \geq 0 $, $\phi(0)=0 $ are correct, they just may not prove the amortized time you have in mind. | {
"domain": "cs.stackexchange",
"id": 13576,
"tags": "amortized-analysis"
} |
How does laughing gas (N₂O) work? | Question: Laughing gas (N2O), well, makes people laugh.
How does just a gas make us do that? There have to be some hormones at work...
So, I wanted to know how this works? What is the mechanism?
Answer: This information is all strictly for Entonox - a brand of analgesic gas comprising 50% Oxygen (O2) and 50% Nitrous Oxide (N2O), Laughing Gas. This mixture is known as 'Gas and Air' and is in very common use.
The active ingredient in Entonox is of course the nitrous oxide, so the discussion of the mechanism below refers solely to the N2O as you asked for.
Nitrous oxide enters the blood by diffusion from the alveoli whilst it is being inhaled, but does not bind with haemoglobin. It is fat soluble so quickly moves into cells, including synapse ends in the brain. Because of the stability of the compound, N2O is not metabolised by the body so has its effect as that molecule, then is eliminated by diffusion out of the lungs once inhalation has ceased (taking roughly 2 minutes for on and offset).
According to the material that BOC pharmaceuticals provide, the exact mechanism of the analgesia is not fully understood. It is known, however, to induce "inconsistent changes in the basal levels of thalamic nuclei".
N2O inhibits NMDA receptors in the brain whilst simultaneously encouraging the stimulation of the parasympathetic GABA receptors. This eventually produces an anaesthetic effect. It is also understood that N2O promotes the release of endogenous opioid neurotransmitters ('natural painkillers' e.g. endorphins) that specifically activate descending pain pathways. This inhibits the transmission of pain. In this way the analgesia provided by nitrous oxide is antinociceptive (literally pain reducing) rather than a generalised limbic depressor.
However, nitrous oxide also positively modulates potassium ion channels [ref], reducing the chance of an action potential being generated in affected neurons. Research into this area of the effects of N2O is ongoing.
Euphoria is a common side effect of N2O usage, hence the name laughing gas. This is part of wider emotional changes that can occur when nitrous oxide is being administered. For example, some people, instead of laughing, become scared or in other cases extremely aggressive towards those nearby. The emotional excitement may result in depressive or manic behaviour, even to the point of psychosis and hallucinations, especially in those who have a preexisting vulnerability to mental illness. The precise mechanism for this disinhibition is again not fully understood.
Whilst it seems a lot of the answers are missing, the course I'm taking this information from is available online at Discover Entonox - modules 8-10 have relevance to this question. It's free to register and view the materials, they're all nicely narrated with diagrams etc. | {
"domain": "biology.stackexchange",
"id": 365,
"tags": "human-biology, pharmacology"
} |
How to find impulse response for the given system? | Question:
How can I find the impulse response for the following system in time domain? I actually would like to find my mistake in my attempt. Below is what I have tried according to the answer given for this question: Why is particular solution zero for an impulse excitation signal?
The given system/circuit can be described by the following differential equation:
$$\frac{x(t)-V_c(t)}{R} - C \frac{\text{d}V_c(t)}{\text{d}t} = 0$$
Since we are looking for the impulse response, $x(t) = \delta(t)$ and $V_c(t) = h(t)$. By plugging in these values and rearranging the terms, we get:
$$\frac{\text{d}h(t)}{\text{d}t} + \frac{h(t)}{RC} = \frac{\delta(t)}{RC}$$
$h(t)$ is given to be $h_h(t) + A_0\delta(t)$ in the referred question and answer, where $h_h(t)$ is the homogeneous solution of the aforementioned differential equation. Here, $h(t)$ does not include any derivatives of $\delta(t)$ probably because the degree of the output is $\textbf{greater than}$ or equal to the degree of the input.
Homogeneous solution is found to be $h_h(t) = c_1e^{-\frac{t}{RC}}$. Therefore:
$$h'(t) = -\frac{c_1}{RC}e^{-\frac{t}{RC}} + A_0\delta'(t)$$
We can now find the unknowns by plugging $h(t)$ and $h'(t)$ into the differential equation.
$$\frac{\delta(t)}{RC} = -\frac{c_1}{RC}e^{-\frac{t}{RC}} + A_0\delta'(t) + \frac{c_1}{RC}e^{-\frac{t}{RC}} + \frac{A_0}{RC}\delta(t)$$
$$\frac{\delta(t)}{RC} = A_0\delta'(t) + \frac{A_0}{RC}\delta(t)$$
From $\frac{1}{RC}\delta(t) = \frac{A_0}{RC}\delta(t)$:
$$A_0 = 1$$
From $0 = A_0\delta'(t)$:
$$A_0 = 0$$
Why do I have such a contradiction and what is my mistake? Thank you in advance.
Answer: Since the system is causal, its impulse response has the form
$$h(t)=c_1e^{-t/RC}u(t)+A_0\delta(t)\tag{1}$$
Note the unit step function $u(t)$ in $(1)$.
From $(1)$ we obtain the derivative
$$h'(t)=-\frac{c_1}{RC}e^{-t/RC}u(t)+c_1\delta(t)+A_0\delta'(t)\tag{2}$$
Plugging $(1)$ and $(2)$ into the system's differential equation (with $\delta(t)$ as its input)
$$\frac{\delta(t)}{RC}=h'(t)+\frac{h(t)}{RC}\tag{3}$$
gives
\begin{align*}
\frac{\delta(t)}{RC} &= -\frac{c_1}{RC}e^{-t/RC}u(t)+c_1\delta(t)+A_0\delta'(t) + \frac{c_1}{RC}e^{-t/RC}u(t)+\frac{A_0}{RC}\delta(t) \\
&= \left(c_1+\frac{A_0}{RC}\right)\delta(t)+A_0\delta'(t)
\end{align*}
from which it follows that $A_0=0$ and $c_1=1/RC$.
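These values can also be confirmed numerically by approximating $\delta(t)$ with a narrow unit-area pulse and integrating the ODE (a Python sketch; the component values and step sizes are arbitrary illustration choices):

```python
import math

# Simulate dh/dt = (x(t) - h)/(R*C) with x(t) a narrow unit-area pulse
# standing in for delta(t); compare against h(t) = (1/RC) * exp(-t/RC).
R, C = 1e3, 1e-6              # arbitrary values: RC = 1 ms
RC = R * C
dt = 1e-7
width = 1e-6                  # pulse width << RC; height 1/width => unit area
h, t = 0.0, 0.0
for _ in range(int(5e-3 / dt)):       # simulate five time constants
    x = 1.0 / width if t < width else 0.0
    h += dt * (x - h) / RC            # forward Euler step
    t += dt
analytic = (1.0 / RC) * math.exp(-t / RC)
print(h, analytic)            # the two agree closely
```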
Hence, the impulse response is given by
$$h(t)=\frac{1}{RC}e^{-t/RC}u(t)\tag{4}$$ | {
"domain": "dsp.stackexchange",
"id": 12565,
"tags": "linear-systems, impulse-response, time-domain, dirac-delta-impulse"
} |
online detection of plateaus in time series | Question: I need to detect plateaus in time series data online. The data I am working with represents the magnitude of acceleration of a tri-axis accelerometer. I want to find a reference time window that I can use for calibration purposes. Because of that, the system must not move and hence only gravity should influence the system.
How can I find such plateaus or is there even a more principled approach that I can take?
Answer: I found a good solution for my problem here: https://stats.stackexchange.com/a/201315/101744
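For reference, the core idea there can be sketched with a rolling-standard-deviation detector (a Python sketch; `window` and `tol` are hypothetical parameters to tune for the actual sensor):

```python
import numpy as np

def plateau_mask(x, window=50, tol=0.02):
    """Flag samples covered by a low-variance window.

    Heuristic: with the accelerometer at rest, the magnitude hovers
    near 1 g with only sensor noise, so a rolling standard deviation
    below `tol` marks candidate calibration windows.
    """
    mask = np.zeros(len(x), dtype=bool)
    for i in range(window, len(x) + 1):   # online: one window per new sample
        if x[i - window:i].std() < tol:
            mask[i - window:i] = True
    return mask

# synthetic example: motion first, then rest near 1 g
rng = np.random.default_rng(0)
signal = np.concatenate([
    1 + 0.5 * np.sin(np.linspace(0, 20, 500)),   # moving
    1 + 0.005 * rng.standard_normal(500),        # at rest
])
mask = plateau_mask(signal)
print(mask[:400].any(), mask[700:].all())   # False True
```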
Thanks to everyone! | {
"domain": "datascience.stackexchange",
"id": 748,
"tags": "time-series, online-learning"
} |
Function execution is slow in callbacks. Why? | Question:
Hi
My (heavy) image-processing function is slow in the subscriber callback.
But if I run the same function outside ROS in release mode, it is fast (10x faster).
Why do I want to put the image-processing function in the callback?
Because I want to process each received image as soon as it arrives.
Any comments!
My code looks like:
class storedData {
public:
cv::Mat inIm = cv::Mat::zeros(376, 672, CV_8UC3);
cv::Mat outIm = cv::Mat::zeros(376, 672, CV_8UC3);
void depthCallback(const sensor_msgs::CompressedImageConstPtr& msgDepth)
{
try
{
tic2 = ros::Time::now();
imProcess(inIm, outIm); // Heavy Image Processing
imPTime = ros::Time::now().toSec() - tic2.toSec();
std::cout<<"Im process time= "<<imPTime<<std::endl;
cv::imshow("view", outIm);
cv::waitKey(1);
}
catch (cv_bridge::Exception& e)
{
ROS_ERROR("Could not convert the depth!");
}
}
};
int main(int argc, char **argv)
{
ros::init(argc, argv, "PredictiveDisplay");
ros::NodeHandle nh;
storedData obj;
cv::namedWindow("view", 0);
cv::startWindowThread();
ros::Subscriber subDepth = nh.subscribe("/depth/depth_registered/compressed", 2, &storedData::depthCallback, &obj);
ros::spin();
cv::destroyWindow("view");
}
Originally posted by JeyP4 on ROS Answers with karma: 62 on 2019-03-27
Post score: 1
Original comments
Comment by tfoote on 2019-03-27:
If you could add your instrumentation to the example it would be helpful with an example of it running fast inside the callback but slow outside the callback we'd be able to help you. Without being able to reproduce it all we can do is guess at things.
Comment by Reamees on 2019-03-28:
First thing I would try would be setting ROS_BUILD_TYPE to Release in your CMakeLists.txt. Also clean build the package. See the linked question for options.
You can read more about it here: http://wiki.ros.org/rosbuild
answers.ros.org question regarding a similar problem: catkin-compiled-code-runs-3x-slower
Comment by JeyP4 on 2019-03-29:
@gvdhoorn
Thanks for your comment.
What does simple catkin_make do? It builds in debug or release mode?
Comment by gvdhoorn on 2019-03-29:
It's not Catkin that decides this, it's CMake.
If you don't configure any CMAKE_BUILD_TYPE, CMake will use None, which sets no flags at all.
Comment by JeyP4 on 2019-03-29:
Thanks @gvdhoorn.
catkin_make -DCMAKE_BUILD_TYPE=RelWithDebInfo
magically worked for me. I can accept this as an answer, if you can convert it to answer.
Comment by tfoote on 2019-03-29:
converted to an answer. Yes, it's a little surprising, the default build has no optimizations, but actually also does not have debug info either. It's really the default that you never really want.
Answer:
First thing I would try would be setting ROS_BUILD_TYPE to Release in your CMakeLists.txt. [..] You can read more about it here: http://wiki.ros.org/rosbuild
Please don't refer to rosbuild pages. They're only kept for archival purposes.
ROS_BUILD_TYPE is not used in Catkin packages. Use CMAKE_BUILD_TYPE instead.
And don't set it in your CMakeLists.txt, but pass it as an argument to your catkin_make invocation (as: -DCMAKE_BUILD_TYPE=Release (or probably better: -DCMAKE_BUILD_TYPE=RelWithDebInfo)).
Originally posted by gvdhoorn with karma: 86574 on 2019-03-28
This answer was ACCEPTED on the original site
Post score: 3 | {
"domain": "robotics.stackexchange",
"id": 32769,
"tags": "ros, ros-kinetic, callbacks"
} |
Why is the enthalpy of a reaction equal to the difference between the enthalpies of combustion of the reactants and the products? | Question: My textbook gives me the following formula for calculating the enthalpy change of any reaction:
$$\Delta H_\mathrm{r}^\circ=\sum(\Delta H_\mathrm{c}^\circ)_\text{reactants}-\sum(\Delta H_\mathrm{c}^\circ)_\text{products}\label{b}\tag{0}$$
where $\Delta H_\mathrm{c}^\circ$ is the standard enthalpy of combustion. Note that the enthalpies of combustion for the same element must all be taken to the same product (i.e. all the nitrogen atoms, in whichever compound they appear, should be combusted to the same nitrogen oxide).
My textbook simply assumes it as a "fact". However, I have not yet found an intuitive or a mathematical way to prove it.
For starting the proof, I would write this balanced equation involving an arbitrary hydrocarbon:
$$\ce{C_xH_y + H_2 -> C_xH_{y+2}}\label{a}\tag{1}$$
The next step is obviously to write their standard combustion reactions, like this one for the first reactant:
$$\ce{C_xH_y + $\left(x + \frac{y}{4}\right)$O2 -> xCO2 + \frac{y}{2}H2O}$$
and then another two for the other two compounds, and then show that, by Hess's Law, they can be remodeled into the original equation by simple arithmetic.
However, there are certainly a lot of problems with this approach:
What if reaction $\ref{a}$ had more than one reactant, or more than one product? How do I write a proof for a general reaction involving $m$ reactants and $n$ products?
What if the reactant weren't a hydrocarbon but rather an arbitrary organic compound like $\ce{C_xH_yS_zN_pO_q}$ with many more different elements? How would we write a proof then?
I find that in trying to approach a proof for a general reaction, I have made the situation extremely complex. I wonder if an intuitive or a mathematical proof of such a complex equation is even possible. And if it is not, then how can we even assume equation $\ref{b}$ to be true? Or, is it that equation $\ref{b}$ does not apply to all organic reactions, but only a subset of them?
PS: While I was experimenting with different equations, I stumbled upon $\ce{N2 + 3H2 -> 2NH3}$. Here, if you write out the individual combustion equations and balance them out, you'll get $\Delta H_\mathrm{r}^\circ=(\Delta H_\mathrm{c}^\circ)_{\ce{N2}} + 3(\Delta H_\mathrm{c}^\circ)_{\ce{H2}}-2(\Delta H_\mathrm{c}^\circ)_{\ce{NH3}}$ as expected from the general formula. However, the interesting thing to note here is that it does not matter whether you combust nitrogen to nitrous oxide, dinitrogenpentoxide, or perhaps any other nitrogenous oxide. In the end, it will all balance out and give you the answer you're looking for. (I call them self-balancing equations.)
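As a numeric sanity check of equation $\ref{b}$ on a concrete case, the hydrogenation of ethene works out correctly (a Python sketch using approximate, rounded literature combustion enthalpies — treat the numbers as illustrative):

```python
# Check Eq. (0) on the hydrogenation C2H4 + H2 -> C2H6 using approximate,
# rounded literature standard enthalpies of combustion in kJ/mol
# (illustrative values, not authoritative data).
dHc = {"C2H4": -1411.0, "H2": -285.8, "C2H6": -1560.0}

# dH_r = sum(dHc of reactants) - sum(dHc of products)
dH_rxn = (dHc["C2H4"] + dHc["H2"]) - dHc["C2H6"]
print(dH_rxn)   # about -137 kJ/mol, the known hydrogenation enthalpy of ethene
```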
Source: KS Verma; Physical Chemistry for JEE (Advanced): Part 1; Chapter - Thermodynamics Illustration 6.56III
Answer: Here is a second approach, but with very broad assumptions. Assume a reaction
$$\ce{A -> B} \tag{1} \label{rxn1}$$
where $\text{A}$ and $\text{B}$ are not necessarily one compound each. Let's analyze an element $x$ that appears in the reactants as $x_{a}^{\alpha}$ and in the products as $x_{b}^{\beta}$, where $\alpha$ and $\beta$ are the oxidation states, and $a, b$ are the number of atoms of the element in each case.
Then, combustion of $x$ can be written as:
$$\ce{ $x_{a}^{\alpha}$ + $\frac{ay}{2c}$ O2 -> $\frac{a}{c}$ $x_{c}^{\gamma}$O_{y}}$$
The enthalpy of this reaction is
$$\Delta H_\mathrm{c}(x_a^\alpha) = \frac{a}{c}\Delta H_\mathrm{f}\left(x_{c}^{\gamma}O_{y}\right) -\Delta H_\mathrm{f}(x_a^\alpha) $$
Analogously, and assuming the product of combustion is the same:
$$\Delta H_\mathrm{c}(x_b^\beta) = \frac{b}{c}\Delta H_\mathrm{f}\left(x_{c}^{\gamma}O_{y}\right) -\Delta H_\mathrm{f}(x_b^\beta) $$
If the stoichiometric coefficient of $x$ in reaction \ref{rxn1} is $m$, the stoichiometric coefficient in the products will be $\frac{am}{b}$.
The difference of enthalpies of combustion (taking into account stoichiometric coefficients of \ref{rxn1}), is:
$$
\begin{align}
\Delta H_\text{rxn} &= m\Delta H_\mathrm{c}(x_a^\alpha) - \frac{am}{b}\Delta H_\mathrm{c}(x_b^\beta)\\
&= \frac{am}{c}\Delta H_\mathrm{f}\left(x_{c}^{\gamma}O_{y}\right) - m\Delta H _\mathrm{f}(x_a^\alpha) - \frac{am}{b}\left(\frac{b}{c}\Delta H_\mathrm{f}\left(x_{c}^{\gamma}O_{y}\right) -\Delta H_\mathrm{f}(x_b^\beta) \right)\\
&= \frac{am}{b}\Delta H_\mathrm{f}(x_b^\beta) - m\Delta H_\mathrm{f}(x_a^\alpha)
\end{align}
$$
This is the definition we all know. Applying this to every element in the reaction shows the formula is true. Take it with a grain of salt, though, since I haven't found a way to prove it for cases where the combustion product is not an oxide (I have a feeling it can be done with a charge balance, which is why I kept the oxidation states, in case someone else has an idea based on them). | {
"domain": "chemistry.stackexchange",
"id": 11220,
"tags": "physical-chemistry, thermodynamics, enthalpy"
} |
Thermodynamic equilibrium: mathematical characterisation $d P=0$ question | Question: I have a basic question on the mathematical condition characterizing a system being in thermodynamic equilibrium:
Let $P(A, B, C)$ be a thermodynamic potential with natural variables $A, B, C$ describing a given system
(e.g. : internal energy $U(S, V, N)$, Entropy $S(U, V, N)$, Helmholtz free energy $F(T, V, N)$ or Gibbs energy $G(T, p, N)$
then for a system described by $P(A, B, C)$ that is in thermodynamic equilibrium, the necessary condition is
$$d P= \frac{\partial P}{\partial A} dA +\frac{\partial P}{\partial B} dB +\frac{\partial P}{\partial C} dC=0$$
i.e. in TD equilibrium the potential is extremal (but recall that it depends on the concrete potential whether the desired extremum is a maximum or a minimum).
From a basic analysis course we know that a necessary condition for a function $P: \mathbb{R}^3 \to \mathbb{R}$ to have an extremum at a point $(A_0,B_0,C_0)$ is that its gradient at $(A_0,B_0,C_0)$ vanishes, i.e.
$$(\nabla P)_{(A, B, C)=(A_0,B_0,C_0)}= \left(\frac{\partial P}{\partial A}, \frac{\partial P}{\partial B}, \frac{\partial P}{\partial C}\right)_{(A, B, C)=(A_0,B_0,C_0)}= (0,0,0)$$
This obviously implies that at extremal points all partial derivatives with respect to the natural variables necessarily vanish.
If we now focus on the example of the internal energy $P(A,B,C)=U(S, V, N)$, then using
$$\mathrm{d} U = \frac{\partial U}{\partial S} \mathrm{d} S + \frac{\partial U}{\partial V} \mathrm{d} V + \sum_i\ \frac{\partial U}{\partial N_i} \mathrm{d} N_i\ = T \,\mathrm{d} S - p \,\mathrm{d} V + \sum_i\mu_i \mathrm{d} N_i\,$$
we have
$$T = \frac{\partial U}{\partial S},\quad p = \frac{\partial U}{\partial V},\quad \mu = \frac{\partial U}{\partial N}.$$
On the other hand, there obviously exist systems in thermodynamic equilibrium with $T, p, \mu \neq 0$.
And this is exactly my question: if I have a TD system described by natural variables $A, B, C$ and TD potential $P(A,B,C)$, and I want to determine the locus where it is in TD equilibrium, how should one read & interpret the condition
$$dP= \frac{\partial P}{\partial A} dA +\frac{\partial P}{\partial B} dB +\frac{\partial P}{\partial C} dC=0$$
correctly?
The mathematical criterion that demands that all partial derivatives vanish at extremal points leads me to an absurd statement, as I tried to describe above. That is, I think that the condition $d P=0$ has in this case to be interpreted differently than I did above. Can somebody explain the right interpretation of $d P=0$, plausibly compatible with both physics and analysis? Where is the error in my reasoning above?
Answer: At equilibrium, a thermodynamic potential is maximized or minimized with respect not to its natural variables but to whatever variables that are not externally constrained. An unconstrained variable could be: the extent to which chemical reaction has proceeded; the energy of subsystem 1 when the system is divided into two subsystems that can exchange heat with each other; etc.
Consider a thermodynamic relation concerning the Helmholtz free energy:
$$
dA = -SdT - PdV + \bigg(\frac{\partial A}{\partial X}\bigg)_{T, V} dX.
$$
Here, temperature ($T$) and volume ($V$) are natural variables of $A$, and X is some unconstrained variable. If we fix $T$ and $V$ (i.e., $dT = dV = 0$) and let the system reach equilibrium, the system will adjust itself such that $A$ is minimized with respect to $X$, making $(\partial A / \partial X)_{T, V} = 0$, i.e., $dA = 0$ for an arbitrary $dX$.
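This minimization over an internal variable can be illustrated numerically (a Python sketch; the two-level model and its parameter values are assumptions for illustration, not taken from the question):

```python
import math

# Toy illustration: at fixed T and V, A = U - T*S is minimized over an
# internal unconstrained variable X. Model: per-particle free energy of a
# two-level system, with X the fraction in the excited state of energy eps.
eps, kT = 1.0, 0.5   # arbitrary energy gap and temperature (same units)

def A(X):
    S = -(X * math.log(X) + (1 - X) * math.log(1 - X))   # mixing entropy / k
    return eps * X - kT * S

X_min = min((i / 10000 for i in range(1, 10000)), key=A)  # brute-force scan
X_boltzmann = 1 / (1 + math.exp(eps / kT))                # analytic minimizer
print(X_min, X_boltzmann)   # both about 0.119
```

The scan recovers the Boltzmann occupation, i.e. the equilibrium value where $(\partial A/\partial X)_{T,V}=0$.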
Note that if we lift the constraint $dV = 0$ as well when minimizing $A$, we will have $P = 0$, and OP found this strange. But this result isn't strange at all! Intuitively, a system with no constraint on its volume will expand forever to become infinitely large, in which case $P = 0$. | {
"domain": "physics.stackexchange",
"id": 64063,
"tags": "thermodynamics, statistical-mechanics, equilibrium"
} |
Distance to Proxima Centauri (Gaia VS New Horizons parallax program) | Question: The current best parallax measurement for the nearest star to the Solar System, Proxima Centauri, has been given by the European Space Agency's Gaia mission. This is $768.5004\; \pm \; 0.2030 \; mas$ (milliarcseconds) as of Gaia Data Release 2. This puts Proxima Centauri at a distance of $4.2441 \;\pm\; 0.0011\; ly$. Therefore the uncertainty is so small that the error spans $141.8 \; AU$, which is more or less the size of the orbit of the dwarf planet Eris. The precision is staggering, but if we want to send a light-sail mission to this system one day and explore its planets, we will need two orders of magnitude greater precision in the distance estimates (to at least start talking about how to aim at the $AU$ level).
NASA's New Horizons team has launched an interesting proposal to measure the parallax of Proxima Centauri and Wolf 359 using the Long Range Reconnaissance Imager (LORRI) onboard the New Horizons probe. Gaia was made to perform these astrometric measurements with high accuracy (less than a milliarcsecond) and New Horizons was not. But while Gaia has a $\sim 2 \; AU$ baseline for the parallax measurements, New Horizons is at $\sim 47 \; AU$ from Earth (as of the 6th of February, 2020). The team wants to get this new measurement for Proxima's parallax on April 22, 2020, using simultaneous observations from Earth and the Kuiper Belt.
I guess that the real baseline for the measurement can't be the entire $47\; AU$ between the spacecraft and Earth but something less (since the probe is not moving in the plane perpendicular to the Proxima - Earth line), but still it will be a huge increase with respect to Gaia's baseline. Still LORRI is not as prepared as Gaia to measure small angles.
So, my question is: how do these two things (larger baseline but lower ability to measure small angles for New Horizons) balance out? Will the distance to Proxima Centauri get more accurate with the New Horizons parallax program, or is Gaia going to keep the record for that measurement with its $2\; AU$ baseline? And if New Horizons gets a more accurate parallax measurement, how large should we expect the error to be compared to the current $141.8 \; AU$ uncertainty measured by Gaia?
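As a side note, the Gaia figures quoted above can be reproduced with simple first-order error propagation (a quick Python check; the unit-conversion constants are standard values):

```python
# Reproduce the Gaia DR2 numbers quoted above: parallax -> distance, and the
# full width of the +/- 1 sigma distance interval expressed in AU.
p_mas, sigma_mas = 768.5004, 0.2030
LY_PER_PC = 3.26156
AU_PER_LY = 63241.077

d_pc = 1.0 / (p_mas / 1000.0)            # d [pc] = 1 / parallax [arcsec]
d_ly = d_pc * LY_PER_PC
sigma_ly = d_ly * (sigma_mas / p_mas)    # first-order error propagation
span_au = 2 * sigma_ly * AU_PER_LY
print(round(d_ly, 4), round(span_au, 1))   # 4.2441 141.8
```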
Answer: LORRI will be used in 4x4 mode, which yields 4-arcsec pixels. The error in positions is unlikely to be better than 1%, or 40 mas, about 200x larger than Gaia's error. NH has a baseline ~20x larger, but this means it still misses Gaia by ~10x.
The date was selected for New Moon to help Earth observers find the two targets. There is no Earth analogue for LORRI, but it should be readily matched by amateur rigs. The project will use professional ~0.5 - 1.0m telescopes to get Earth obs. Simultaneous Earth obs are not strictly required, but the simultaneous times are offered to conduct the aesthetically purest demonstration of instantaneous parallaxes. | {
"domain": "astronomy.stackexchange",
"id": 4261,
"tags": "star, distances, parallax, gaia"
} |
Microphone Data Comparison for breathing data... best way to compare two files? | Question: I am currently trying to compare the amplitude vs time data between two different microphone readings (.wav files of inhales and exhales) and detect which reading produced a louder sound, on average. I was wondering what the best methodology for this would be, and if there is a good mathematical way to go about this, is there any Python support for it (i.e. a library or article about it).
For reference, the current approach we are looking into is to calculate some sort of curve that encompasses the readings, and then take the absolute value, and then calculate the area under the curve during each inhale/exhale, comparing that integral value between each reading on average.
Here is an image of what the data looks like:
Thank you
Answer: A typical loudness chain comprises:
Frequency weighting filter. For breathing, A-weighting is probably fine
RMS detector with a suitable time constant (maybe 100ms or so)
If you want a single number: suitable integration over the entire clip. Energy average will probably be ok here.
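The chain above can be sketched in a few lines (a Python illustration; the time constant and the synthetic test tones are assumptions, and the frequency-weighting stage is omitted):

```python
import numpy as np

def loudness_db(x, fs, tau=0.1):
    """RMS detector with ~100 ms time constant, then an energy average
    over the whole clip. No frequency weighting here; an A-weighting
    filter could be added in front as a refinement."""
    alpha = np.exp(-1.0 / (tau * fs))    # one-pole smoother coefficient
    ms, env = 0.0, np.empty(len(x))
    for i, s in enumerate(x):
        ms = alpha * ms + (1 - alpha) * s * s   # smoothed mean square
        env[i] = ms
    return 10 * np.log10(np.mean(env) + 1e-12)

# two synthetic "breaths": same shape, different level
fs = 8000
t = np.arange(fs) / fs
quiet = 0.1 * np.sin(2 * np.pi * 440 * t)
loud = 0.5 * np.sin(2 * np.pi * 440 * t)
print(loudness_db(loud, fs) - loudness_db(quiet, fs))   # about 13.98 dB
```

Comparing the two returned numbers tells you which recording was louder on average, which is essentially the integral-based comparison described in the question.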
Potential Python libraries:
https://github.com/csteinmetz1/pyloudnorm
https://github.com/SuperShinyEyes/spl-meter-with-RPi
I have no experience with any of these, so use at your own risk. | {
"domain": "dsp.stackexchange",
"id": 10387,
"tags": "discrete-signals, signal-analysis, audio, python"
} |
Can DNA replicate without polymerase? | Question: Would it be possible for short DNA molecules to replicate, for example, if it's heated to the point where the strands separate (as far as I know, that's what happens in PCR?) and freely floating bases could "connect" to their correspondent bases (A/T, C/G)?
I'm assuming that A/T and C/G bases strongly 'attract' each other naturally, perhaps that's where I'm wrong?
Answer: No, replication cannot happen in the absence of polymerase (on the timescales relevant to humans).
You are correct that in PCR, the first step is to separate DNA strands at 98 C. This heating dissociates the strands semi-permanently. So when you cool down the reaction, the strands do not re-associate very quickly.
When you cool down the reaction to 72 C, A/T and C/G can make temporary connections with their opposite bases. However, the free nucleotides cannot quickly form phosphodiester bonds with the growing chain of DNA in the absence of a polymerase. The nucleotide will most likely dissociate from the DNA strand before the correct bond forms. In other words, it is a kinetic problem.
However, at high temperatures, specialized conditions, and on long timescales, such as those present in earth's pre-biotic period, the answer may be different. | {
"domain": "biology.stackexchange",
"id": 12077,
"tags": "molecular-biology, dna-replication"
} |
C++ - Hash map implementation using smart pointers | Question: I am trying a simple hash map implementation in C++. I have used this and this as references. The implemented design uses a class HashEntry to manage an individual key-value pair. The class HashMap handles the map itself. The map has functions to insert (put) a key-value pair, to retrieve (get) a value based on a key and to erase (erase) a key-value pair. It also keeps track of its size and capacity. Here is the commented code:
#include <iostream>
#include <memory>
#include <vector>
#include <cstddef>
#include <stdexcept>
#include <climits>
// Class for individual entries of key-value pair
class HashEntry
{
int key_v;
int val_v;
// The smart pointer to handle multiple keys with same hash value.
// This will be used to create a linkedlist.
std::shared_ptr<HashEntry> next_v;
public:
HashEntry(int key, int val) : key_v{key}, val_v{val}
{}
int key() const
{
return key_v;
}
int val() const
{
return val_v;
}
std::shared_ptr<HashEntry> next() const
{
return next_v;
}
void set_val(int val)
{
val_v = val;
}
void set_next(std::shared_ptr<HashEntry> next)
{
next_v = next;
}
};
// Class for the Hash Map
class HashMap
{
std::vector<std::shared_ptr<HashEntry>> map_v;
std::size_t capacity_v{0};
std::size_t size_v{0};
public:
HashMap(std::size_t);
std::size_t hash_func(int);
std::size_t size() const;
void put(int, int);
int get(int);
bool erase(int);
};
HashMap::HashMap(std::size_t capacity)
{
capacity_v = capacity;
map_v.resize(capacity_v);
}
// The hashing function
std::size_t HashMap::hash_func(int key)
{
return key % capacity_v;
}
std::size_t HashMap::size() const
{
return size_v;
}
// The function to insert key-value pair
void HashMap::put(int key, int val)
{
// If capacity is reached, throw exception
if(size_v == capacity_v)
{
throw std::length_error{"Capacity exceeded!\n"};
}
std::size_t hash_value = hash_func(key);
// If the hash_value has never been set before, use that space
// for key-value pair. Otherwise, add to the list.
if(map_v[hash_value] == nullptr)
{
map_v[hash_value] = std::make_shared<HashEntry>(key, val);
}
else
{
auto node = map_v[hash_value];
std::shared_ptr<HashEntry> pre = nullptr;
while(node)
{
if(node->key() == key)
{
node->set_val(val);
return;
}
pre = node;
node = node->next();
}
pre->set_next(std::make_shared<HashEntry>(key, val));
}
size_v++;
}
// Retrieve value based on key
int HashMap::get(int key)
{
auto hash_value = hash_func(key);
auto node = map_v[hash_value];
// If node is not set, nothing to retrieve.
// Otherwise, check the key and if required, the associated list.
// If not found, report the issue.
if(node == nullptr)
{
std::cout << "Key not found! Returning INT_MIN for: " << key << "\n";
return INT_MIN;
}
if(node->next() == nullptr && node->key() == key)
{
return node->val();
}
else
{
while(node)
{
if(node->key() == key)
{
return node->val();
}
node = node->next();
}
}
std::cout << "Key not found! Returning INT_MIN for: " << key << "\n";
return INT_MIN;
}
// Remove key-value pair based on key
bool HashMap::erase(int key)
{
auto hash_value = hash_func(key);
// If no value is set against hash value, there is nothing to erase.
// Otherwise, check if keys match and if yes, proceed to erase.
// Otherwise, check the list for a match and if there is a match,
// proceed to erase.
// Otherwise, return false.
if(map_v[hash_value] == nullptr)
{
return false;
}
else if(map_v[hash_value]->key() == key)
{
map_v[hash_value] = map_v[hash_value]->next();
size_v--;
return true;
}
else if(map_v[hash_value]->next())
{
auto pre = map_v[hash_value];
auto node = map_v[hash_value]->next();
while(node)
{
if(node->key() == key)
{
pre->set_next(node->next());
size_v--;
return true;
}
pre = node;
node = node->next();
}
}
return false;
}
int main()
{
HashMap hm1{10};
std::cout << "Size: " << hm1.size() << "\n";
for(int i = 0; i < 10; i++)
{
hm1.put(i, i + 10);
}
std::cout << "Size: " << hm1.size() << "\n";
std::cout << "Get: " << hm1.get(6) << "\n";
std::cout << "Erase: " << std::boolalpha << hm1.erase(6) << "\n";
std::cout << "Size: " << hm1.size() << "\n";
// Check the output of get() after key is erased
std::cout << "Get: " << hm1.get(6) << "\n";
// Try adding a key which will have same hash_value as another key
// Also try retrieving both keys
hm1.put(15, 25);
std::cout << "Size: " << hm1.size() << "\n";
std::cout << "Get: " << hm1.get(15) << "\n";
std::cout << "Get: " << hm1.get(5) << "\n";
// Erase a pair existing in the list and see how all keys in the list behave.
std::cout << "Erase: " << std::boolalpha << hm1.erase(5) << "\n";
std::cout << "Size: " << hm1.size() << "\n";
std::cout << "Get: " << hm1.get(5) << "\n";
std::cout << "Get: " << hm1.get(15) << "\n";
return 0;
}
Questions:
1) Is the implementation correct for an open hashing (closed addressing) hash map?
2) The function get returns INT_MIN if it has no other choice. Other possible ways to handle this scenario could be using std::optional, throwing an exception, changing the function to not return anything, or returning a bool. Is there any other smarter/more elegant way?
3) Any better suggestions for hash_func? Is there any good reading material on how to select a good hash function, or is it just a matter of experience?
If possible, kindly provide general reviews and suggestions. Thank you!
Answer: I read both of your reference implementations. They look more like Java that has been literally translated into C++. Though they may work, neither of them is a very good implementation, and as such yours suffers from the same problems.
Design
Design of Hash
Your hash seems to have a limit on the number of elements.
if(size_v == capacity_v)
{
throw std::length_error{"Capacity exceeded!\n"};
}
But there does not seem to be a need for this. Each bucket in the hash is basically a list of linked elements, so you can store as many members as you like per list. So there really is no need for a size limit.
Hash functions are hard to write. The version you use is reasonable, but only under a few conditions. The main one is that the value you divide by (and thus the number of buckets) should be a prime number.
std::size_t HashMap::hash_func(int key)
{
return key % capacity_v; // here capacity should be prime.
} // This will give you a much larger chance of
// of not creating some pattern and using some
// buckets more than others.
I know this does not sound important, but there is some significant maths behind this problem (make this prime; it will save you a lot of headaches).
Alternatively you can use std::hash to generate a better number that is less likely to clash (then use the modulo operator on that to get the actual index number).
Design of C++
You use std::shared_ptr to link all the elements in the list together. Personally I would have used a raw pointer (it's self-contained and you can control all the uses). But there is an argument for using a smart pointer (just not std::shared_ptr).
In this situation there is no sharing of the pointer. Each pointer is owned by exactly one parent. So use std::unique_ptr; it will handle all the management of the pointer without the extra overhead of a shared pointer.
Design of Interface
Your interface is put/get/erase.
The only big issue I see is the get(). What happens when you get a key that does not exist? You return a magic value (which is a bit of a code smell). How do you distinguish the magic value from a real value (do you prevent the magic value from being inserted?).
There is a small issue around put(). What happens if you put a key that already exists? Simply overwrite it? Sure, that works. It might be nice to warn in this situation.
To me the put/get design has an issue in that it requires a two-phase update. If I want to modify a key, I first have to get the value associated with the key, manipulate it, and then put the value back. In C++ we usually retrieve from a container via a reference. That way, once you have the value you can manipulate it directly. This is also more efficient, as you are not re-calculating the location with a hash and a search of the list.
int& value = cont.get(23);
value = value + 5; // update in place as we are using a reference.
It even looks really nice if you overload operator[] so it looks like normal array access.
int& value = cont[23];
value = value + 5; // update in place as we are using a reference.
// Or simply
cont[23] += 5;
Code Review
I would have put this class as a private member of HashMap.
class HashEntry {}
It does not seem to be leaked by any of the HashMap interfaces (which is good) so it does not need to be exposed to the user. Also by making it private you can remove the get/set methods which are completely useless in this context.
Sure you can keep this simple first time and just use int as the key and value. But in C++ when we make containers we usually create them generically so anything can be the key or value. I would have a look at templates.
int key_v;
int val_v;
OK. You can use a smart pointer to manage this chain.
// The smart pointer to handle multiple keys with same hash value.
// This will be used to create a linkedlist.
std::shared_ptr<HashEntry> next_v;
But std::shared_ptr is the wrong smart pointer. Use std::unique_ptr.
OK I see why you had to use std::shared_ptr. You don't use references anywhere in your code. Which means that all values are copied on return. std::unique_ptr is non copyable which would be an issue here. You still need to use std::unique_ptr but you need to return by reference to make this work.
std::shared_ptr<HashEntry> next() const
Prefer to use the initializer list.
HashMap::HashMap(std::size_t capacity)
{
capacity_v = capacity; // put this in the initializer list.
map_v.resize(capacity_v);
}
This function does not change the state of the container. You can also mark it const.
std::size_t HashMap::hash_func(int key)
{
return key % capacity_v;
}
OK. You do need to search the list to see if the key exists and overwrite on a put (it would be nice to know if I did overwrite).
// The function to insert key-value pair
void HashMap::put(int key, int val)
{
while(node)
{
if(node->key() == key)
{
node->set_val(val);
return;
}
pre = node;
node = node->next();
}
BUT I think it is a bad idea to add to the end.
pre->set_next(std::make_shared<HashEntry>(key, val));
There is a concept of locality of reference. If you have used something, then you will probably use it again soon (or a value close to it). So if you put the value at the front, a subsequent get (or overwriting put) will not have to perform as long a search. So I would put the new value at the beginning of the chain.
In this function:
// Retrieve value based on key
int HashMap::get(int key)
{
I am not sure why you have this extra if statement
if(node->next() == nullptr && node->key() == key)
{
return node->val();
}
Seems redundant to me. | {
"domain": "codereview.stackexchange",
"id": 32294,
"tags": "c++, hash-map, pointers"
} |
Mapping bisulfite reads to a reduced size genome | Question: Is it worth mapping bisulfite reads (WGBS) to a reduced size genome?
I'm interested only in modifications in CpG context, thus instead of mapping to a whole genome (human genome) I would:
Extract DNA sequence around each CpG (-1000/+1000 bp)
Map reads (using bismark) to this subset
My idea is to reduce mapping time; however, I am afraid that there might be some mapping bias or other issues.
Answer: CG sequences are pretty common; I expect you'll hit almost all of the genome when looking at a 2kb window centred on every CG, so there wouldn't be much point in doing that for efficiency purposes.
dariober has pointed out that reads coming from elsewhere in the genome could be incorrectly mapped to the reduced genome with a high MAPQ score, since the best, genuine genomic region is not present. So even if you are interested in only a few small regions, you should still map to the entire genome. | {
"domain": "bioinformatics.stackexchange",
"id": 390,
"tags": "ngs, read-mapping, sequence-alignment"
} |
Membership ratio graph | Question: Does anyone know what the kind of graph in this image is called?
Each color represents a class and the height of the color, in a particular instant, represents the ratio of elements that belong to that class.
How can I produce such a graph, as an example, in R or Python?
Answer: This looks like a "proportional stacked area chart." The R Graph Gallery has a similar one here:
They have great sample code to provide guidance on how to re-create it (https://www.r-graph-gallery.com/136-stacked-area-chart.html).
"domain": "datascience.stackexchange",
"id": 6142,
"tags": "graphs, representation"
} |
How do ants follow each other? | Question: I was observing ants in my house. They all were going in a straight line, and some of the ants were coming back along the same line.
I took some water and rubbed the line with my finger, then the ants were not able to follow each other. Looks like they were confused.
My assumption is that they may have secreted some chemical.
Am I right?
If yes, which chemical is it and how is it secreted?
Answer: The chemical we are talking about here is called pheromone, trail pheromone to be specific.
A pheromone is a secreted or excreted chemical factor that triggers a social response in members of the same species. Pheromones are chemicals capable of acting outside the body of the secreting individual to impact the behavior of the receiving individuals.1
Pheromones are of mainly 9 types (13 exactly, but 4 not so common) which are:
Aggregation pheromones
Alarm pheromones
Epideictic pheromones
Releaser pheromones
Signal pheromones
Primer pheromones
Territorial pheromones
Trail pheromones
Sex pheromones
Nasonov pheromones
Royal pheromones
Calming pheromones
Necromones2
Ants, and many other animals, use trail pheromones to mark their trail as a guide for others in their gang. Other ants, catching the signal of trail pheromone, follow the way it goes and reach their gang leader. Trail pheromones are volatile compounds, so it is not very likely that you would see ants following the exact same path tomorrow or a week later. All ants release trail pheromones, so as long as ants are going through that path, the trail signal will keep getting stronger and will also tell lost ants "Hey, bro! We are going this way. Don't you want to join us?" See, for example, here3:
In the beginning, different ants follow different paths, but as soon as an ant finds the shortest path, all ants join it on that path, due to which the pheromone on the other paths evaporates, and finally they all walk in a straight line.
Of course, trail pheromones are very useful for organisms who have lost their way home, but you know, other organisms, like their predators, are not just sitting and watching them. Some predators can also catch pheromones and find out where the whole team is going, to attack them all at once. Such chemicals are known as kairomones. See this4:
A kairomone is a semiochemical, emitted by an organism, which mediates interspecific interactions in a way that benefits an individual of another species which receives it, and harms the emitter.
A semiochemical is a signalling chemical which a species X releases, but to whose presence a species Y can also respond. It's just like when you walk on the road and say "Whoa! Seems like a pizza delivery van just passed by" (obviously, the smell of pizza is not a pheromone ;)
The trail pheromones released by army ants can also be detected by a blind snake or a forest beetle, leading it right inside their nest. Obviously, all semiochemicals are not bad. Chemicals like synomones (helping both species) and allomones (just the opposite of kairomones) are also present5, but they are beyond this question's scope.
References:
Medical Definition of Pheromone
Types of Pheromones - Wikipedia
Life in Wireframe - Ant Algorithms
Kairomones - Wikipedia
These Pheromones Get The Animals That Secrete Them Killed | {
"domain": "biology.stackexchange",
"id": 6684,
"tags": "biochemistry, entomology"
} |
Has a reflected wave an arbitrary phase shift? | Question: Let an EM wave propagate in the $\hat{z}$ direction -
$$\vec{E_I}(z,t)=E_0e^{i(kz-\omega t)}\hat{x}$$
it hits a conducting surface at $z=0$ so there is a reflected wave -
$$\vec{E_R}(z,t)=E_{0R}e^{i(-kz-\omega t)}\hat{x}$$
Since the total field must vanish on the conducting surface we conclude that -
$$E_{0R}=E_0e^{i\pi}$$
However, if the conducting plane was placed at $z=L$ we would find -
$$E_{0R}=E_0e^{i(2kL+\pi)}$$
It appears that the phase difference (which is physical?) between the incoming and reflected wave is arbitrary. On the other hand, our choice of coordinates is also arbitrary. As far as the waves are concerned, the position of the conducting plane should not matter at all, so there is an apparent conflict.
Edit:
We found the two waves to be -
$$\vec{E_I}(z,t)=E_0e^{i(kz-\omega t)}\hat{x}$$
$$\vec{E_R}(z,t)=E_0e^{i(k(2L-z)-\omega t + \pi)}\hat{x}$$
Their sum is a standing wave -
$$\vec{E_I}(z,t) + \vec{E_R}(z,t)=
E_0(e^{i(kz-\omega t)} + e^{i(k(2L-z)-\omega t + \pi)})\hat{x}=
E_0e^{i(kL-\omega t)}(e^{ik(z-L)} - e^{-ik(z-L)})\hat{x}=
2iE_0e^{i(kL-\omega t)}\sin(k(z-L))\hat{x}$$
And the difference in their phases is -
$$\Delta \phi(x)=2k(L-z)+\pi$$ which at $z=L$ comes out as $\pi$ so the boundary conditions are satisfied. However in some other points the phase difference is not $\pi$
Edit2:
If the reflected wave gains only an additional phase $\pi$ an immediate conclusion is that the wavenumber must be quantised. This is odd, because if the surface is moved just a little bit further away, the standing wave will be destroyed. This will lead to a violation of the boundary conditions at the interface.
Answer: Firstly, the condition of the relative phase $\varphi_2(x,t) - \varphi_1(x,t) = \pi$ applies only to the point in space, where the mirror is located. Hence, it applies only to $x=L$, but for all times $t$. If it would apply to all points in space, the sum of the two waves would be zero. Hence, we would not obtain a standing wave, but zero amplitude everywhere in space.
Secondly, let's start @ $x=0$ with a wave travelling to the right,
$
y_1(x,t)
=e^{i(\omega t - kx)}
= e^{i \varphi_1(x,t)}
$, and a wave travelling to the left,
$
y_2(x,t)
=e^{i(\omega t + kx + \phi)}
= e^{i \varphi_2(x,t)}
$. Please note, that $\phi$ is the phase of the reflected wave at position $x=0$ (and time $t=0$ -- as time is irrelevant for the following arguments, I will omit it in the further discussion). Now, let's apply the stated boundary condition. For the point $x=L$ we get
$$
\pi = \varphi_2(L,t) - \varphi_1(L,t)
= 2kL + \phi
$$
which leads to
$\phi = \pi - 2kL$.
Let's consider each of the two terms separately:
The first term, $\pi$ is phase shift due to the reflection on an optical denser medium.
The second term, $2kL$, is the phase of the reflected waves at position $x=0$.
Look at it this way: The reflected wave, which is at time $t=0$ at $x=0$ is the "incident wave of the past" ($t<0$). This "incidence wave of the past" has travelled the distance $2L$. Therefore, it has picked up the phase $2kL$ in addition to the phase shift.
Finally, note that a clever way to express the phase of the incident and reflected wave is to use
\begin{align}
y_1(x,t)
&=e^{i(\omega t - k(x-L))}
\\
y_2(x,t)
&=e^{i(\omega t + k(x-L)+ \pi)}
\end{align}
E.g. using $L=1.2\lambda$ yields the standing-wave pattern shown in the original figure. | {
"domain": "physics.stackexchange",
"id": 65924,
"tags": "electromagnetism, waves, electromagnetic-radiation"
} |
Right after the Big Bang, how did particles overcome extreme gravity and other forces and manage to fly apart? | Question: I have read this question:
Why did the universe not collapse to a black hole shortly after the big bang?
where Lubos Motl says:
This matter has no center - it is almost uniform throughout space - and has high enough velocity (away from itself) that the density eventually gets diluted.
Now this and none of the other (there are a lot) answers answer my question specifically. I am not asking about collapse into a black hole. I am asking, right after the Big Bang, the density was extreme, thus gravity and curvature had to be extreme, maybe the escape velocity (meaning in this case the velocity needed for particles to fly apart) could have reached close to or even exceed the speed of light. But that is just gravity. There are the other forces (at that point unified if I understand correctly), that must have been holding particles together. This could involve the photon and the quark epoch as well.
Now first I thought:
Maybe it was not particles flying apart, but simply just space expanding between them. But wait. First of all, space is expanding even now. Everywhere. Contrary to popular belief, space is expanding everywhere. Even here where we are. It is just that here, the other forces are dominating. We, who are made up of matter, are held together by the other forces, which dominate over space expansion. So space is expanding right here, but the matter we are made up of stays together. No flying apart here. Space was expanding back then too. Then how was space expansion able to overcome all the other forces back then but not now?
It might be just a scale issue. Space expansion, some call it dark energy, might just be some kind of force, negative pressure, that is spread over the whole universe. It acts only on large scales. For now. But when the universe was extremely small, the scales were small too, and maybe dark energy was concentrated onto this small region, making it relatively stronger compared to the other forces.
Question:
Right after the Big Bang, how did particles overcome extreme gravity and other forces and manage to fly apart?
Answer: There are two reasons why there was no gravitational collapse in the very early universe:
The distribution of energy and matter soon after the big bang was very nearly uniform. Because of this symmetry there was no reason for gravitational collapse to happen in one place rather than another - the net gravitational force at each location netted out to something very very close to zero.
The early universe was at a very high temperature, which meant that fundamental particles were moving quickly, and gravity had very little effect on them.
Space was expanding, but this was not "overcoming" gravity. In fact, the expansion of space meant that the universe was cooling down, which assisted gravity. Like a pencil balanced on its point, the universe was in a state of unstable equilibrium, which became more unstable as it cooled.
As the universe expanded and cooled, fundamental particles combined into protons and neutrons, and then into atoms (almost all of which were hydrogen and helium atoms). This took several hundred thousand years. The very small deviations from absolute symmetry were then enough to trigger the collapse of the cooling atoms into gravitationally bound clouds, and then into the first stars and galaxies. But this process was very slow, and the first stars (called Population III stars) took hundreds of millions of years to form. | {
"domain": "physics.stackexchange",
"id": 88805,
"tags": "cosmology, gravity, space-expansion, big-bang"
} |
What is meant by "catalytic amount of a hormone"? | Question: This textbook says:
In the classic definition, hormones are secretory products of the ductless glands, which are released in catalytic amounts into the blood stream and transported to specific target cells (or organs), where they elicit physiologic, morphologic and biochemical responses.
What is meant by catalytic amount? Does it mean that amount required for catalysis? I know only renin which is catalytic and act as an enzyme, and many other hormones are non enzymatic so why the term catalytic is used in general sense?
Answer: The writer obviously means “small amount”, and has chosen his words badly as hormones do not themselves catalyse reactions.
The idea he is probably trying to convey is that a few molecules of hormone binding to their receptor can cause changes to many molecules within the target cell. This often (generally?) involves catalysis within the cell produced by the receptor activating an enzyme, so one can understand the line of thought leading to the misuse of this term. | {
"domain": "biology.stackexchange",
"id": 6765,
"tags": "biochemistry, physiology, terminology, enzymes, endocrinology"
} |
Obtaining Velocity from Acceleration | Question: I'm following a research paper (PDF via RG), part of which is about using the accelerometer of a smartphone to assist the user's positioning via WLAN.
Accelerometer is used to determine if the user is static, slow walking or fast walking.
Here is the algorithm:
For the implementation I'm using Swift on iOS. I've calculated the Euclidean norm of the accelerometer data which I receive every second.
The problem is that I'm having difficulty obtaining $v_a$ in the algorithm. How should I pass from acceleration to $v_a$ in this particular situation?
Answer: I was able to determine the user status (static, slow walking, fast walking) by calculating the variance.
The $v_a$ in the research was not velocity; it was my mistake to interpret it as such. It was the variance of the Euclidean norm of the accelerometer data.
I decreased the accelerometer update interval to 0.1s and every second I took 10 of the values and calculated their variance and compared that variance with the given thresholds.
The threshold required some tweaking since the research was conducted on a Nokia N95 where I'm on an iPhone 5. | {
"domain": "physics.stackexchange",
"id": 26235,
"tags": "kinematics, acceleration, velocity, computational-physics"
} |
openni_launch + fuerte + ubuntu 12.04: No image received | Question:
Hello,
I just installed openni_camera and openni_launch stacks. When I launch the driver and rviz using the following commands:
roslaunch openni_launch openni.launch
rosrun rviz rviz
in rviz, I see a warning telling me that it didn't receive any images. I try to echo the messages sent by /camera/rgb/image_raw, but no messages are published on that topic.
I need to mention that I do get some messages from the /camera/depth/points topic. The interesting thing here is that only a limited number of messages are published on that topic; it stops after a random number of messages.
There are a few similar problems like mine. Following the proposed answers, I suspect that my drivers might be faulty. But reinstalling them didn't solve the problem.
Here is a screen shot:
In this run, I only got 74 point cloud messages. It just stopped at that point. I also didn't get any image messages.
I deeply appreciate any help.
EDIT: --------------------------------------------------------------
I must also add that when I run the following command:
rosrun image_view disparity_view image:=/camera/depth/disparity and not rviz, I can see the camera working for only a few(5-10) seconds, and again it hangs.
-------------------------------------------------------------------------
EDIT 2: --------------------------------------------------------------
I get lots of errors and warnings but they are said to be harmless, for example
Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored
[ERROR] [1346137170.283120890]: Tried to advertise a service that is already advertised in this node [/camera/depth_registered/image_rect_raw/compressed/set_parameters]
[ WARN] [1346137171.480522722]: Using default parameters for RGB camera calibration.
In several questions on Ros Answers, I ran into the problems with the same errors and warning. They all say that ignoring these errors wouldn't do any harm.
I paste the full output here
-------------------------------------------------------------------------
EDIT 3: I cannot comment on the posts, that's why I will post this edit.
As pgorczak says, I don't believe that my problem is performance-related; I use a powerful computer. In addition to that, even if I don't start rviz or image_view and just echo, let's say, /camera/rgb/image_raw, I don't see any messages on that topic. I believe openni just freezes.
I will keep trying to see if I can get it to work. And, thanks a lot pgorczak, I am looking forward to your update.
Originally posted by yigit on ROS Answers with karma: 796 on 2012-08-27
Post score: 1
Original comments
Comment by pgorczak on 2012-08-27:
Did you check for any suspicious log messages? Try running roscore and rxconsole before the driver's launch file and see if anything shows up when it hangs.
Comment by yigit on 2012-08-27:
I updated the question. Please check edit2.
Comment by georgebrindeiro on 2012-12-21:
@yygyt were you able to ever solve your problem? I think I am having the same issue. Are you using Ubuntu 12.04 64-bit? Are you using the openni-dev unstable and ps-engine avin2 packages found here?
Comment by yigit on 2012-12-23:
Yeah, somehow... Please take a look at this reply http://answers.ros.org/question/46066/kinect-dynamic-reconfigure-does-not-work/#46450. In my case, both problems are related
Answer:
The first exception is not harmful as you can see here.
When I am using roslaunch openni_launch openni.launch, I also get messages like:
[ERROR] [1346317018.377804268]: Tried to advertise a service that is already advertised in this node [/camera/depth_registered/image_rect_raw/compressedDepth/set_parameters]
[ERROR] [1346317018.384694527]: Tried to advertise a service that is already advertised in this node [/camera/depth_registered/image_rect_raw/compressed/set_parameters]
[ERROR] [1346317018.413763705]: Tried to advertise a service that is already advertised in this node [/camera/depth_registered/image_rect_raw/theora/set_parameters]
(...)
[ WARN] [1346317020.499849507]: Camera calibration file (...)/rgb_A00364A06244047A.yaml not found.
[ WARN] [1346317020.500131130]: Using default parameters for RGB camera calibration.
[ WARN] [1346317020.500289570]: Camera calibration file (...)/depth_A00364A06244047A.yaml not found.
[ WARN] [1346317020.500426152]: Using default parameters for IR camera calibration.
Still, everything works just fine (image_view and rviz) so those shouldn't be the issue.
I think running only image_view like in your first edit and still getting the freeze pretty much rules out a performance issue--at least concerning rviz. I think (I'm not sure though) the point cloud only gets computed when you subscribe to the points topic which would mean there is pretty much nothing going on computationally when you just display an image with image_view.
You could also try a different Kinect device or make sure your Kinect works e.g. on Windows with the OpenNI examples to make sure there is no problem with the device itself.
And my last question would be: What is your version of ROS and your operating system (version)? I'm using ubuntu 12.04 with fuerte and it worked with the OpenNI-packages I got with sudo apt-get install ros-fuerte-openni-camera.
You could try the driver from openni_camera_deprecated just to see if it gives you different output or some hints on what's going on.
Kinect launch file using openni_camera/OpenNINodelet (deprecated version of the driver):
<launch>
<node pkg="nodelet" type="nodelet" name="openni_manager" output="screen" respawn="true" args="manager"/>
<node pkg="nodelet" type="nodelet" name="openni_launch" args="load openni_camera/OpenNINodelet openni_manager" respawn="true">
<!--<param name="rgb_frame_id" value="camera_rgb_optical_frame" />-->
<!--<param name="depth_frame_id" value="camera_depth_optical_frame" />-->
<param name="depth_registration" value="true" />
<param name="image_mode" value="VGA_30Hz" />
<param name="depth_mode" value="VGA_30Hz" />
<param name="debayering" value="EdgeAwareWeighted" />
<param name="depth_time_offset" value="-0.055" /> <!-- Taken from TurtleBot's kinect.launch -->
<param name="image_time_offset" value="0" />
</node>
</launch>
Originally posted by pgorczak with karma: 446 on 2012-08-29
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 10775,
"tags": "kinect, openni, ros-fuerte"
} |
Gazebo camera plugin rendering meshes in wrong order | Question:
Trying to load a URDF model in Gazebo, and everything looks good in the Gazebo window. The model includes a Kinect, and when I look at the camera output, it renders the (.STL) meshes in the wrong order. See images here.
I've tried Gazebo 4 and 5, and both the libgazebo_ros_openni_kinect.so and libgazebo_ros_camera.so plugins. Is this expected behavior?
Originally posted by drwahl on Gazebo Answers with karma: 1 on 2015-03-11
Post score: 0
Answer:
Looks like it was a virtual machine (VMWare Player) issue. I ran the same code on an Ubuntu computer and everything rendered properly.
Originally posted by drwahl with karma: 1 on 2015-03-17
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 3730,
"tags": "gazebo-camera"
} |
Why pick just a single bacterial transformed colony | Question: So after bacteria have been transformed to perhaps grow up a plasmid of interest, why pick only a single bacterial colony from a selective plate for further expansion?
I understand that this is to ensure that you are only working with a single genetic makeup, because each separate colony is derived from only a single bacterium. What I can't rationalize is that if I am trying to expand and isolate a plasmid of interest, all colonies on a selective antibiotic plate should contain my plasmid. So is it really necessary to take only a single colony, given that any colony I pick should be useful to me?
Answer: This is a matter of pragmatism in the culture process.
Taking 100 colonies instead of 1 increases the inoculation volume by a factor of 100, which then saves you perhaps 2 hours of bacterial growth time before your culture reaches the OD you want.
However, mutations and loss of plasmid in culture, while unlikely, are possible, especially if the bacteria were not cloning strains with their recombinases knocked out. In such a case, you would risk the downstream experiments being contaminated with mutants, be that protein expression, retroviral plasmid production, or something else.
Worse still, the mixture of bacteria containing mutants is less likely to be detected, since had you picked a single mutant colony, there would be absolutely no result, but a contaminated culture would produce a low but still positive result. Therefore, you may end up wasting many days or weeks troubleshooting your downstream experiments.
Therefore, picking single colonies is simply a matter of "better safe than sorry". | {
"domain": "biology.stackexchange",
"id": 4090,
"tags": "genetics, bacteriology, transformation"
} |
GeekTrust: Tame of Thrones OO | Question: I am trying to solve a problem on GeekTrust, called Tame of Thrones (here)
Shan, the gorilla king of the Space kingdom wants to rule all Six
Kingdoms in the universe of Southeros.
There is no ruler today and pandemonium reigns. Shan is sending secret
messages to all kingdoms to ask for their allegiance. Your coding
challenge is to help King Shan send the right message to the right
kingdom to win them over. Each kingdom has their own emblem and the
secret message should contain the letters of the emblem in it. Once
Shan has the support of 3 other kingdoms, he is the ruler!
And I would like to get my code reviewed.
I am trying to achieve the badges for Functional/OO Modelling and Extensibility.
Following are all the primary classes involved in the solution that I need to get reviewed.
public class Kingdom {
public final String name;
public final String emblem;
private final Set<String> allies;
private Ruler ruler;
private PostService postService;
public Kingdom(@NotNull String emblem, @NotNull String name) {
Objects.requireNonNull(emblem, ErrorMessages.EMBLEM_NOT_NULL_ERROR_MESSAGE);
Objects.requireNonNull(name, ErrorMessages.NAME_NOT_NULL_ERROR_MESSAGE);
this.emblem = emblem;
this.name = name;
this.allies = new LinkedHashSet<>();
}
public void sendMessage(String to, String body) {
Message message = new Message(this.name, to, body);
Message response = this.postService.exchange(message);
if (this.hasOtherKingdomAllied(response)) {
this.allies.add(response.from);
}
}
private boolean hasOtherKingdomAllied(Message message) {
try {
Cipher cipher = Ciphers.cipher(Ciphers.SEASAR_CIPHER_TYPE, this.postService.getEmblemFor(message.from)
.length());
String decryptedMessage = cipher.decrypt(message.body);
return MessageResponses.POSITIVE_RESPONSE.equalsIgnoreCase(decryptedMessage);
} catch (NoSuchAlgorithmException e) {
return false;
}
}
public Message allyRequest(Message message) {
Message response;
try {
Cipher cipher = Ciphers.cipher(Ciphers.SEASAR_CIPHER_TYPE, this.emblem.length());
String decryptedMessage = cipher.decrypt(message.body);
String shouldWeAlly = this.shouldWeAlly(decryptedMessage) ? MessageResponses.POSITIVE_RESPONSE:
MessageResponses.NEGATIVE_RESPONSE;
String responseBody = cipher.encrypt(shouldWeAlly);
response = new Message(this.name, message.from, responseBody);
} catch (NoSuchAlgorithmException e) {
return new Message(this.name, message.from, MessageResponses.NEGATIVE_RESPONSE);
}
return response;
}
private boolean shouldWeAlly(String message) {
return StringCompareUtils.containsIndexInsensitive(message, this.emblem);
}
public void setPostService(@NotNull PostService postService) {
Objects.requireNonNull(ruler);
this.postService = postService;
}
public Set<String> getAllies() {
return new LinkedHashSet<>(this.allies);
}
public Ruler getRuler() {
return ruler;
}
public void setRuler(@NotNull Ruler ruler) {
Objects.requireNonNull(ruler);
this.ruler = ruler;
}
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
Kingdom kingdom = (Kingdom) o;
return emblem.equals(kingdom.emblem);
}
@Override
public int hashCode() {
return Objects.hash(emblem);
}
}
public class Universe {
private static Universe universe;
private final Set<Kingdom> kingdoms;
private Kingdom rulingKingdom;
private Universe(@NotNull Set<Kingdom> kingdoms) {
Objects.requireNonNull(kingdoms);
this.kingdoms = kingdoms;
}
public static Universe getInstance() {
if (universe == null) {
synchronized (Universe.class) {
Universe.universe = new Universe(new HashSet<>(IOUtils.getAllKingdoms()));
}
}
return universe;
}
public Kingdom getRulingKingdom() {
return this.rulingKingdom;
}
public void setRulingKingdom(@NotNull Kingdom kingdom) {
Objects.requireNonNull(kingdom, ErrorMessages.RULING_KINGDOM_NOT_NULL_ERROR_MESSAGE);
this.rulingKingdom = kingdom;
}
public Set<Kingdom> getKingdoms() {
return this.kingdoms;
}
public Kingdom getKingdom(String name) {
return this.kingdoms.stream()
.filter(kingdom -> kingdom.name.equals(name))
.findFirst()
.orElseThrow(() -> new RuntimeException(String.format(ErrorMessages.KINGDOM_NOT_FOUND_ERROR_MESSAGE_FORMAT, name)));
}
}
public class PostService {
private final AddressRegistry addressRegistry;
public PostService(@NotNull Collection<Kingdom> kingdoms) {
Objects.requireNonNull(kingdoms, ErrorMessages.KINGDOMS_NOT_NULL_ERROR_MESSAGE);
this.addressRegistry = new AddressRegistry(kingdoms);
}
public Message exchange(Message message) {
Kingdom to = this.addressRegistry.getKingdomFromAddress(message.to);
return to.allyRequest(message);
}
public String getEmblemFor(String name) {
return this.addressRegistry.getKingdomFromAddress(name).emblem;
}
}
class AddressRegistry {
private final Map<String, Kingdom> registry;
AddressRegistry(Collection<Kingdom> kingdoms) {
this.registry = kingdoms.stream()
.collect(Collectors.toMap(kingdom -> kingdom.name, Function.identity()));
}
Kingdom getKingdomFromAddress(String address) {
return this.registry.get(address);
}
}
Answer: Thanks for sharing your code.
I just took a short look at it and these are my thoughts:
Avoid the Java Singleton Pattern
Your class Universe implements the Java Singleton Pattern and, like many others before you, fails at it by not taking concurrency into account.
But that is not the problem!
The problem with the Java Singleton Pattern is that it makes the instance of your class Universe a global variable. You may have heard before that global variables are considered harmful. (https://algorithmsandme.com/five-reasons-why-not-to-use-global-variables/) But especially in Java they force tight coupling between the Singleton and its users. They are the opposite of the Open/Closed principle.
Attention: This does not mean that the Singleton as a concept is bad. It means that the application (or the injection framework, if you use one) should ensure that only one instance is created at the application's runtime.
For more on this read here:
https://williamdurand.fr/2013/07/30/from-stupid-to-solid-code/
duplicated checks
In your constructors you use @NotNull annotations at the parameters and then you explicitly do a not null check.
Why? Given that your dependency injection framework works properly at the runtime of your application the additional check will never fail...
In your method Kingdom.setPostService() you even fail to check the parameter a second time and check a member variable (ruler) instead. This will lead to strange error messages at runtime.
Avoid getters and setters on classes implementing business logic
Getters and setters should only be used on Data Transfer Objects or Value Objects that only contain data and no business logic (at most some very simple validations).
On classes implementing business logic, getters and setters are violations of the encapsulation (aka information hiding) principle.
Addendum
limit scope of variables
In Kingdom.allyRequest() you declare response before the try/catch block but you only access it within the try part. You only need this declaration there for the return statement after the catch block.
But doing anything after a catch block is a code smell itself. So you should return the response as the last statement inside the try block. Then you can also move the declaration of the variable response to the line where you actually assign it a value.
{
"domain": "codereview.stackexchange",
"id": 38049,
"tags": "java, object-oriented, library"
}
Recursion memoization table
Question: The following code is for this problem.
Input
The input consists of a single line with three integers n
, s
, and k
(1 ≤ n ≤ 10,000, 1≤ k ≤ s ≤ 500
). n
is the number of throws, k
the number of different numbers that are needed to win and s
is the number of sides the die has.
Output
Output one line with the probability that a player throws at least k
different numbers within n
throws with an s
-sided die. Your answer should be within absolute or relative error at most 10−7
.
Sample Input 1
3 3 2
Sample Output 1
0.888888889
Sample Input 2
3 3 3
Sample Output 2
0.222222222
This program used 0.89s to pass all test cases. But I find that other people using Java can achieve ~0.20s. How can I improve my code?
The idea is to establish a table for the recursive call to look up previously calculated values.
import java.util.*;
public class Dice
{
static double table[][];
public static double game(int s, int r, int d)
{
if(table[r][d] != -1) return table[r][d];
double a = (1 - (double)d/(double)s) * game(s, r - 1, d + 1);
double b = ((double)d/(double)s) * game(s, r - 1, d);
double sum = a + b;
table[r][d] = sum;
return sum;
}
public static void main(String[] args)
{
Scanner in = new Scanner(System.in);
int n = in.nextInt();
int s = in.nextInt();
int k = in.nextInt();
table = new double[n+1][n+1];
for(int i = 0; i < n+1; i++)
for(int j = 0; j < n+1; j++)
table[i][j] = (double)-1;
for(int i = 0; i < n+1; i++)
for(int j = 0; j < n+1; j++)
if(i + j < k)
table[i][j] = (double)0;
for(int j = 0; j < n+1; j++)
for(int i = k; i < n+1; i++)
table[j][i] = (double)1;
System.out.println(game(s, n, 0));
}
}
Answer: Don't Repeat Yourself
Your main() function starts with 3 loops to initialize your table array. There are several problems here. First, it writes past the end of the array. If your array has n + 1 elements, then you can only write to 0 through n. Element n + 1 is out-of-bounds. So each of your loops is writing to one element of the next row, and then one extra row at the end, and one element past that, too. The loops should only go from 0 to n. This happens to work because you later read past the end of the array, too. If you're going to do that, you need to allocate n + 2 rows and columns.
Next, you're writing several elements 3 times. You can combine these 3 loops into 1 and do far less initialization. Further, you should use braces around all loop and conditional bodies because there's the potential to create errors in the future if you don't.
Also, you've switched i and j in the last loop which is highly confusing and could easily be missed by a reader.
Here's how I would write your initialization code:
table = new double [ n + 2 ][ n + 2 ];
for (int i = 0; i < n + 1; i++)
{
for (int j = 0; j < k; j++)
{
if (i + j < k)
{
table [ i ][ j ] = 0.0;
}
else
{
table [ i ][ j ] = -1.0;
}
}
for (int j = k; j < n + 1; j++)
{
table [ i ][ j ] = 1.0;
}
}
Now no element should be touched more than once, no elements past the end of the array should be written to, and no array indexes are flipped.
You're also calculating (double)d / (double)s twice. You should calculate it once and re-use this value.
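For comparison, the same memoized recursion can be filled in bottom-up, with the d/s ratio computed once per cell instead of twice — a Python sketch of the idea, not a drop-in replacement for the Java submission:

```python
def prob_at_least_k_distinct(n, s, k):
    """Probability of seeing at least k distinct values in n throws
    of a fair s-sided die (bottom-up version of the memoized recursion)."""
    # P[d] = success probability given d distinct values seen so far,
    # for the current number of remaining throws r (base case: r = 0).
    P = [1.0 if d >= k else 0.0 for d in range(n + 2)]
    for r in range(1, n + 1):
        nxt = P[:]                       # entries with d >= k stay 1.0
        for d in range(min(k, n + 1)):   # only d < k can change
            ratio = d / s                # hoisted: computed once, used twice
            nxt[d] = (1.0 - ratio) * P[d + 1] + ratio * P[d]
        P = nxt
    return P[0]
```

This fills each table cell exactly once, in O(nk) time, and reproduces the sample outputs from the problem statement (8/9 and 2/9).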
Naming
You should choose better names for your variables. i and j are fine for loop indexes as they are commonly used that way and nobody will be confused by them. However, I'd rename n, s, and k. I'd name n something descriptive like numThrows; k should be something like numNeeded, and s should be numSides.
Likewise, s, r, and d are not very helpful names for the arguments to game().
Speed
I haven't profiled your code, which is the real way to find out which parts are slow. I have seen many cases where casting variables from one type to another causes significant slowdowns. Since the bulk of your game() function is either calling itself or casting values, I recommend just making the arguments to the function be double to begin with. Of course, you can't use a double as an index into an array. So you could pass the r and d values twice, once as doubles for the arithmetic and once as ints for the array indexing.
{
"domain": "codereview.stackexchange",
"id": 29404,
"tags": "java, recursion"
}
Is it necessary to have a perfect correlation when using linear regression?
Question: I am working on predicting BMI against weight, using linear regression.
The scatter plot of the data can be found below.
As you can see in the plot, there seems to be low (or no) correlation between the two variables and thus I have doubts whether I'm using the right method. Is it necessary to have correlated data in order to use linear regression? Would you advise me to try other methods or adding features?
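One way to put a number on what the scatter plot suggests is to compute Pearson's r for the two variables before fitting anything — a minimal pure-Python sketch (the function name and sample data are illustrative, not from the question's dataset):

```python
import math

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)
```

A value near ±1 means a line through the points fits well; a value near 0 means weight alone has little linear predictive power for BMI.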
Answer: If there is no correlation between variables you cannot run regression analysis as one variable cannot predict the other. Hence, it is necessary to have correlated data in order to run linear regression.
{
"domain": "datascience.stackexchange",
"id": 11210,
"tags": "linear-regression, correlation"
}
SQL query with nested subqueries
Question: The following query is taking over 800ms to run, and returning 300 rows. When deployed to SQL Azure, it takes much longer on an affordable price tier.
SELECT
Tests.Id,
Tests.Title,
Tests.AuthorId,
Tests.[Status],
Users.FirstName + ' ' + Users.LastName AS AuthorName,
(SELECT
COUNT(1)
FROM Results LEFT JOIN Users ON Results.UserId = Users.Id
WHERE
Results.TestId = Tests.Id AND
Results.MarkedBy IS NULL AND
Results.QuestionNumber >= 1 AND
EXISTS (
(SELECT ClassName FROM UserClasses WHERE UserClasses.UserId = Users.Id)
INTERSECT
(SELECT ClassName FROM TestClasses WHERE TestClasses.TestId = Tests.Id)
INTERSECT
(SELECT ClassName FROM UserClasses WHERE UserId = @teacherId)
)
) AS UnmarkedCount,
(CASE WHEN EXISTS (SELECT 1 FROM Results WHERE Results.TestId = Tests.Id)
THEN CAST(1 AS BIT)
ELSE CAST(0 AS BIT) END
) AS AnyResults,
(SELECT Stuff((SELECT ',' + ClassName FROM
(
(SELECT ClassName FROM TestClasses WHERE TestClasses.TestId = Tests.Id)
INTERSECT
(SELECT ClassName FROM UserClasses WHERE UserId = @teacherId)
) x FOR XML PATH ('')),1,1,'')
) AS Classes
FROM
Tests INNER JOIN Users ON Tests.AuthorId = Users.Id
WHERE
Users.SchoolId = @schoolId AND Tests.Status <= 4
An overview of the schema:
Users include students and teachers.
UserClasses matches many users to many class names.
TestClasses matches many tests to many class names.
Each test in Tests can have multiple Results - one per question per student.
The query returns a list of tests, using subqueries to find:
UnmarkedCount: How many unmarked results exist for this test, where the intersection of the following is not empty:
The classes of the student of this result
The test's classes
The teacher's classes
AnyResults: Are there any results for this test?
Classes: As a comma-separated list, which of the teacher's classes are assigned to this test?
Note that if we remove the condition where three queries are intersected, the execution time is reduced to 150ms. However, this logic is required.
How could this be improved?
Further Details:
Query Execution Plan
This is an extract from the query execution plan, where the heavy lifting seems to occur. I can't see anywhere suggesting indexes.
Business logic
The procedure returns a list of all tests at a given school. For each test, it calculates:
UnmarkedCount: How many results are not yet marked for students in classes which are both allocated to this test and taught by the current user?
Classes: Which of the classes allocated to this test does the current user teach?
Answer: Let's just focus on this part, because that's where your performance goes:
SELECT
COUNT(1)
FROM Results LEFT JOIN Users ON Results.UserId = Users.Id
WHERE
Results.TestId = Tests.Id AND
Results.MarkedBy IS NULL AND
Results.QuestionNumber >= 1 AND
EXISTS (
(SELECT ClassName FROM UserClasses WHERE UserClasses.UserId = Users.Id)
INTERSECT
(SELECT ClassName FROM TestClasses WHERE TestClasses.TestId = Tests.Id)
INTERSECT
(SELECT ClassName FROM UserClasses WHERE UserId = @teacherId)
)
That pattern of EXISTS (... INTERSECT ...) is better written as a chain of INNER JOIN.
The query optimizer of your database already did that internally as well, but it chose the wrong order for the join, resulting in overly large temporary result sets. Especially when joining UserClasses straight on Results without applying the more selective filter by @teacherId first.
SELECT
COUNT(1)
FROM Results
INNER JOIN TestClasses ON
TestClasses.TestId = Tests.Id AND
TestClasses.TestId = Results.TestId
INNER JOIN UserClasses AS TeacherClass ON
TeacherClass.UserId = @teacherId AND
TeacherClass.ClassName = TestClasses.ClassName
INNER JOIN UserClasses AS UserClass ON
UserClass.UserId = Results.UserId AND
UserClass.ClassName = TestClasses.ClassName
WHERE
Results.MarkedBy IS NULL AND
Results.QuestionNumber >= 1
I reordered the JOIN clauses to ensure that the product remains as small as possible after each single step. Further, I eliminated the unnecessary join on the Users table.
However, you don't actually need full INNER JOIN either. If your database system supports that, you can safely replace the 2nd and 3rd of the INNER JOIN with LEFT SEMI JOIN operators instead.
So much for fixing the inner select. But as a matter of fact, now we don't even need to do it as a subquery any more, but can just handle it as a LEFT JOIN with COUNT and GROUP BY on the outermost query.
Whether this actually gains any performance needs to be tested.
There are also a couple of flaws in your database scheme:
Take the UserClasses table schema. You are abusing it to describe both the roles of teacher and student for any given class, without distinguishing between these two. I suspect you coded the user role into the Users schema instead, but it would have been better to store different roles in different schemes.
You are apparently storing class names as string literals in multiple schemes, this is an indirect violation of the 2NF, but even worse, it requires string comparisons to match the corresponding columns against each other. This should be refactored ASAP.
There also appears to be a possible design flaw in Results. If the same test is reused by two different classes, and a pupil is enrolled into both, his test results are now shared between both classes. Test results should probably be linked to a specific enrollment in a class, rather than just to the generic test. This also allows us to simplify this query further, as the most expensive part of joining on UserClass for querying pupil enrollment is then obsolete.
{
"domain": "codereview.stackexchange",
"id": 21878,
"tags": "performance, sql, sql-server"
}
How to get string from terminal when starting the node. C++
Question:
Hello everyone,
I'm trying to get a parameter used in the launch command of a node and use it on my c++ code. For example, when you're about to launch a node:
$rosrun [packet] [node] [string],
the [string] being the string I want to use in my c++ code of the node.
I want it to give me the string so I can choose a specific topic to subscribe to.
I'm using ROS Kinetic
Originally posted by Diatrix on ROS Answers with karma: 3 on 2018-07-18
Post score: 0
Answer:
If you're doing rosrun, you can use the argv passed into the main function of your C++ node:
int main(int argc, char** argv)
{
ros::init(argc, argv, "my_node");
if (argc > 1)
std::string value_from_cl = argv[1];
}
If you go through a launch file, you can use launch file args to pass in from the command line. For example, if you have in your launch file
<arg name="some_arg" default="my_default_value"/>
then you can run roslaunch like
roslaunch my_package my_file.launch some_arg:=some_other_value
Then to get the value to your node, you can pass it directly in the node tag in the launch file:
<node name="my_node" pkg="my_package" type="my_node" args="$(arg some_arg)"/>
You could alternatively set it with the ROS parameter server, in which case you would load it in your launch file by doing
<param name="my_param" value="$(arg some_arg)"/>
which can then be retrieved from the parameter server in your C++ code with
ros::NodeHandle nh;
std::string value_from_ps;
// Retrieve and set a default value if not found on parameter server
nh.param<std::string>("my_param", value_from_ps, "my_default_value");
// Retrieve without a default value
nh.getParam("my_param", value_from_ps);
Originally posted by adamconkey with karma: 642 on 2018-07-18
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Diatrix on 2018-07-20:
Thank you, it's everything I needed.
{
"domain": "robotics.stackexchange",
"id": 31298,
"tags": "ros, c++, ros-kinetic, parameter, rosrun"
}
Are all lift-off resist negative photoresists?
Question: Are all lift-off resist negative photoresists? I'm a bit confused as I can't find anything in my textbook/lecture slides about the type of resist used for lift-off.
Answer: There are definitely positive photo resists, but there tends to be little incentive to use them. Usually you use a negative mask with a negative resist to create a positive part. The pattern to be made tends to be a relatively small area of the photo resist on the mask, and thus it takes less time for the lithography machine to produce a negative mask, which can be a bottleneck in an R&D setting. Also, positive resists tend to be permanent, as they have chemically inter-reacted to stay in place rather than be dissolved later (like during lift-off); this is another reason positive resists are not used frequently.
{
"domain": "chemistry.stackexchange",
"id": 11544,
"tags": "nanotechnology"
}
How can I create a classifier using the feature map of a CNN?
Question: I intend to make a classifier using the feature map obtained from a CNN. Can someone suggest how I can do this?
Would it work if I first train the CNN using +ve and -ve samples (and hence obtain the weights), and then every time I need to classify an image, I apply the conv and pooling layers to obtain the feature map? The problem I find in this is that the image I want to classify may not have a similar feature map, and hence I wouldn't be able to find the distance correctly, as the order of the features may be different in the layer.
Answer: Add a fully connected layer at the end of the network and train it. After training, using a simple forward pass, you can classify your image.
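To make the pipeline concrete, here is a minimal NumPy sketch of a single forward pass (one conv filter, one pooling step, one fully connected layer; all shapes and names are illustrative, and in practice you would train the weights with a framework rather than hand-code this):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def conv2d(img, kern):
    # 'valid' cross-correlation of one single-channel image with one kernel
    kh, kw = kern.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kern)
    return out

def max_pool(x, size=2):
    # non-overlapping max pooling; trailing rows/cols that don't fit are dropped
    h, w = (x.shape[0] // size) * size, (x.shape[1] // size) * size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

def classify(img, kern, W, b):
    feat = max_pool(relu(conv2d(img, kern)))   # the feature map
    logits = W @ feat.ravel() + b              # fully connected layer
    return int(np.argmax(logits))              # predicted class index
```

Because the fully connected layer is trained jointly with (or on top of) the conv weights, every image passes through the same filters in the same order, so the "order of features" concern from the question does not arise.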
image-->conv layer-->pool layer --> fully connected layer
{
"domain": "datascience.stackexchange",
"id": 1245,
"tags": "deep-learning, neural-network, convolutional-neural-network"
}
The similarity transformation of Pauli matrices
Question: I am looking for a transformation $S$ of the Pauli matrices ($X,Y,Z$) such that
\begin{align*}
S^{-1}XS=Y, S^{-1}YS=Z, S^{-1}ZS=X.
\end{align*}
Simply put, my question is a cyclic transformation of the matrices $X\to Y\to Z\to X$.
For example, Hadamard matrix $H=1/\sqrt{2}(X+Z)$ gives a transformation $X\leftrightarrow Z, Y\leftrightarrow -Y$ (I don't care the negative factor).
If the transformation exists (or not), I want to know the proof or its mathematics behind it.
Do you have any ideas?
Answer: Here I expand my comment into a detailed answer.
As is well known, the unitary one-parameter group
$$e^{i \theta \vec{n}\cdot \vec{\sigma}/2}$$
has the property that
$$e^{i \theta \vec{n}\cdot \vec{\sigma}/2} \sigma_k e^{-i \theta \vec{n}\cdot \vec{\sigma}/2} = \sum_{j=1}^3(R_{\vec{n}}(\theta))_{kj} \sigma_j\:, \tag{1}$$
where $R_{\vec{n}}(\theta)\in SO(3)$ is a spatial rotation around the unit vector $\vec{n}$ of an angle $\theta$.
As a consequence, let $\vec{n} = \frac{1}{\sqrt{3}}(\vec{e}_1 + \vec{e}_2+ \vec{e}_3)$ be the unit vector parallel to the diagonal of a cube with a vertex at $(0,0,0)$ and standard orthonormal axes $\vec{e}_j$. Then the action of a rotation $R_{\vec{n}}(2\pi/3)$ cyclically superposes the axes.
Taking (1) into account, with $\theta = 2\pi/3$ and this choice of $\vec{n}$, we have that
$$e^{i \theta \vec{n}\cdot \vec{\sigma}/2} \sigma_k e^{-i \theta \vec{n}\cdot \vec{\sigma}/2} = \sigma_{k'}\:, \quad
e^{i \theta \vec{n}\cdot \vec{\sigma}/2} \sigma_{k'} e^{-i \theta \vec{n}\cdot \vec{\sigma}/2} = \sigma_{k''}$$
where $k,k', k''$ are a cyclic permutation of $1,2,3$.
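This cyclic action can be checked numerically before doing any algebra; a convenient closed form of the unitary (the exponential below evaluated at $\theta=2\pi/3$) is $S=\frac{1}{2}\left[I+i(\sigma_1+\sigma_2+\sigma_3)\right]$ — a NumPy sketch:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Unitary for a 2*pi/3 rotation about (1,1,1)/sqrt(3)
S = 0.5 * (I2 + 1j * (X + Y + Z))
S_inv = S.conj().T  # S is unitary, so S^{-1} = S^dagger
```

Conjugating the three Pauli matrices with this $S$ reproduces exactly the cycle $S^{-1}XS=Y$, $S^{-1}YS=Z$, $S^{-1}ZS=X$ asked for in the question.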
$$e^{i \theta \vec{n}\cdot \vec{\sigma}/2}$$
can be explicitly computed by the formula (please check my numerical computation)
$$e^{i \theta \vec{n}\cdot \vec{\sigma}/2} = \cos(\theta/2)\, I + i \sin(\theta/2)\, (\vec{n}\cdot \vec{\sigma})\:,$$ so that for $\theta = 2\pi/3$ $$e^{i \theta \vec{n}\cdot \vec{\sigma}/2} = \frac{1}{2} I + \frac{i}{2} (\sigma_1 + \sigma_2+\sigma_3)\:.$$
{
"domain": "physics.stackexchange",
"id": 98988,
"tags": "quantum-mechanics, quantum-spin, linear-algebra"
}
How to choose a boundary layer coordinate or stretching transformation in matched asymptotic expansion
Question: In matched asymptotic expansions how should one properly choose a boundary layer coordinate or stretching transformation? At the moment I am following example 2.3 from Introduction to Perturbation Methods by Holmes which uses matched asymptotic expansion to solve
\begin{align}
\varepsilon^2 y'' + \varepsilon xy' - y = -\mathrm{e}^x
\end{align}
with the boundary conditions
\begin{align}
y(0) = 2, \quad \textrm{and} \quad y(1) = 1.
\end{align}
This problem has two boundary layers at $x = 0$ and $x = 1$ and therefore the book chooses two boundary layer coordinates, that is
\begin{align}
\bar{x} = \dfrac{x}{\varepsilon^\alpha}, \quad \textrm{and} \quad \tilde{x} = \dfrac{x-1}{\varepsilon^\beta}.
\end{align}
Why should one choose these coordinates? For the boundary layer at $x = 1$, why not choose a coordinate like
\begin{align}
\tilde{x} = \dfrac{1-x}{\varepsilon^\beta} \hspace{3cm} (1)\\ \textrm{or} \hspace{3cm}\\
\tilde{x} = \dfrac{\sqrt{1^2-x^2}}{\varepsilon^\beta} \hspace{3cm} (2)
\end{align}
If I used the boundary layer coordinate in (1), then I could match the solution in the limit of $\tilde{y}(\tilde{x} \to \infty) = y(x \to 1)$ instead of $\tilde{y}(\tilde{x} \to -\infty) = y(x \to 1)$. Aside from the sign, I don't think this is any different from the original coordinate.
If I used the boundary layer coordinate in (2), then it seems like the matching would still take place at $\tilde{y}(\tilde{x} \to \infty) = y(x \to 1)$ except that the amount of stretching would be different. Are there guidelines for how I should choose my boundary layer coordinate?
Answer: There is no substantive difference between the boundary layer variable
$$\tilde{x}=\frac{x-1}{\varepsilon^{\beta}}$$ and the one you give as equation (1). They only differ by a minus sign, which you correctly observe makes no difference. In fact, I would choose the version indicated by (1) myself, so that positive values of $\tilde{x}$ fall within the $0<x<1$ domain.
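Whichever sign convention you pick, the exponent $\beta$ itself is fixed by dominant balance: substitute the stretched coordinate and keep the distinguished limit that retains the highest derivative. As a sketch (writing $Y(\tilde{x})=y(x)$ with $\tilde{x}=(x-1)/\varepsilon^{\beta}$, so $x = 1 + \varepsilon^{\beta}\tilde{x}$):
\begin{align}
\varepsilon^{2-2\beta}\, Y'' + \varepsilon^{1-\beta}\left(1+\varepsilon^{\beta}\tilde{x}\right) Y' - Y = -\mathrm{e}^{1+\varepsilon^{\beta}\tilde{x}}.
\end{align}
The only choice that keeps $Y''$ in the leading-order balance with the $O(1)$ terms is $2-2\beta=0$, i.e. $\beta=1$: a smaller $\beta$ drops both derivative terms (no layer at all), while a larger $\beta$ leaves $Y''=0$ alone, which cannot satisfy both the matching condition and the boundary condition.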
The situation with eq. (2) is a little more complicated. You can use a boundary layer coordinate like that, but it makes things somewhat more complicated. First notice that, within the boundary layer near $x=1$ you can make the approximation
$$\sqrt{1-x^{2}}=\sqrt{(1+x)(1-x)}\approx\sqrt{2(1-x)}.$$
This can be used to solve for the boundary layer structure, with (as you noticed) a different stretch factor; it essentially amounts to using the square root of your coordinate from eq. (1). The root just makes things more complicated to work with in practice.
{
"domain": "physics.stackexchange",
"id": 54304,
"tags": "perturbation-theory"
}
Game counting to 21
Question: The idea of the game is that the player and the computer take turns entering either 1, 2, or 3, and that number is added to a running total. Whoever goes over 21 loses. I need to develop a strategy that lets the computer always win.
I have a strategy worked out, but I was wondering if there is a more efficient way to do it.
import java.util.Scanner;
public class Count21 {
public static void main(String[] args) {
Scanner input = new Scanner(System.in);
final int TWENTY_ONE = 21;
int playerOneEntry;
int computerEntry = 0;
int total = 0;
System.out.println("Instructions, two players take turns by entering 1, 2, or 3 \n"+ "which is then added to a running total. Whoever makes the score go over twenty one "+ "loses\n");
while(total <= 21) {
System.out.println("Player, please enter 1, 2, or 3 >>> ");
playerOneEntry = input.nextInt();
while(playerOneEntry != 1 && playerOneEntry != 2 && playerOneEntry != 3) {
System.out.println("Player, please enter 1, 2, or 3 >>> ");
playerOneEntry = input.nextInt();
}
total += playerOneEntry;
System.out.println("The total is " + total);
if(total == 21) {
System.out.println("Player Wins!");
total = 25;
}
if(total < TWENTY_ONE) {
switch (total) {
case 1:
computerEntry = 1;
break;
case 2:
computerEntry = 1;
break;
case 3:
computerEntry = 3;
break;
case 4:
computerEntry = 3;
break;
case 5:
computerEntry = 3;
break;
case 6:
computerEntry = 2;
break;
case 7:
computerEntry = 1;
break;
case 8:
computerEntry = 1;
break;
case 9:
computerEntry = 3;
break;
case 10:
computerEntry = 2;
break;
case 11:
computerEntry = 3;
break;
case 12:
computerEntry = 2;
break;
case 13:
computerEntry = 1;
break;
case 14:
computerEntry = 3;
break;
case 15:
computerEntry = 2;
break;
case 16:
computerEntry = 1;
break;
case 17:
computerEntry = 1;
break;
case 18:
computerEntry = 3;
break;
case 19:
computerEntry = 2;
break;
case 20:
computerEntry = 1;
break;
default:
computerEntry = 1;
}
}
total += computerEntry;
System.out.println("Computer entered " + computerEntry);
System.out.println("The total is " + total);
if(total == 21) {
System.out.println();
total = 25;
System.out.println("Computer Wins!");
}
else {
System.out.println("The total is " + total);
}
}
}
}
Answer: As far as I know, the game is whoever gets to 21 or higher loses, as I have implemented. But as you may have heard of a different version, I will not complain.
First, let's take a sample run:
Instructions, two players take turns by entering 1, 2, or 3
which is then added to a running total. Whoever makes the score go over twenty one loses
Player, please enter 1, 2, or 3 >>>
1
The total is 1
Computer entered 1
The total is 2
The total is 2
Player, please enter 1, 2, or 3 >>>
3
The total is 5
Computer entered 3
The total is 8
The total is 8
Player, please enter 1, 2, or 3 >>>
1
The total is 9
Computer entered 3
The total is 12
The total is 12
Player, please enter 1, 2, or 3 >>>
1
The total is 13
Computer entered 1
The total is 14
The total is 14
Player, please enter 1, 2, or 3 >>>
3
The total is 17
Computer entered 1
The total is 18
The total is 18
Player, please enter 1, 2, or 3 >>>
3
The total is 21
Player Wins!
Computer entered 1
The total is 26
The total is 26
PLAYER WINS!?
Bugs
Your computer lost. This is because... well... explanation later.
The player wins, but the computer still plays a number.
You print "the total is..." twice
Issues
There is a resource leak, as you do not close the scanner.
Fixes
Bug #1 fix
The strategy that the computer uses is not unbeatable. What is unbeatable is (for this):
Computer starts. Note that the list of numbers on the player's turn shows all the possible totals the player can reach.
Computer: 1
Player: 2, 3, 4
Computer: 5
Player: 6, 7, 8
Computer: 9
Player: 10, 11, 12
Computer: 13
Player: 14, 15, 16
Computer: 17
Player: 18, 19, 20
Computer: 21
Player: 22... LOST!
Why does this work?
Well, the computer has control over the whole game. No matter how hard the Player tries, the computer will always play a number which results in 21.
How did I think of this? Well, I already knew the unbeatable strategy for my version of 21, which was count all the multiples of 4 (4, 8, 12, 16, 20), and after 20, the other player would be forced to lose.
This strategy works because whatever the other player says, you can counter it with another number to put it back to a multiple of 4. For example, if the other player says 3 at the beginning, then the AI will counter with 1, therefore creating the result 4.
How does it work with your version? Well, your version has 22 as maximum, so all I did was shift the "control" numbers one up, to become 5, 9, 13, 17, 21.
Now, I'm not quite done yet, as the Player can start with 1 and the computer would lose, if the Player was smart, so I made the computer start with 1, which was all that was necessary to control the game.
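To convince yourself that the shifted strategy really is unbeatable, a small brute-force check helps — a Python sketch of the same strategy (not the Java code from the review) that explores every possible sequence of player replies:

```python
def ai_move(total):
    """Computer moves first and lands on the control numbers 1, 5, 9, 13, 17, 21.
    (Only totals reachable in such a game -- never 1 mod 4 after the start --
    are ever passed in, so the result is always 1, 2, or 3.)"""
    return 1 if total == 0 else 4 - ((total - 1) % 4)

def ai_wins_from(total=0):
    """total = running sum with the AI to move; whoever reaches 22+ loses.
    Returns True only if the AI wins against *every* sequence of player replies."""
    total += ai_move(total)
    if total >= 22:
        return False                    # the AI busted
    if total == 21:
        return True                     # any legal player move now busts
    return all(ai_wins_from(total + p) for p in (1, 2, 3))
```

The exhaustive search confirms there is no line of play that beats the control-number strategy.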
Bug #2 fix
Check for player win.
Bug #3 fix
Don't print the total twice.
Issue #1 fix
Close the scanner.
Alternative Solution
As the answer of my question provides a very good alternate solution to my problem, it is much easier to modify it than try to review your code. Thank you to @200_success for the excellent answer.
The code below is copied directly from the answer:
HumanPlayer.java
import java.util.Scanner;
import java.io.PrintStream;
public class HumanPlayer implements CursedNumberGame.Player {
private static final String HELP = "help";
private final Scanner in;
private final PrintStream out;
public HumanPlayer(Scanner in, PrintStream out) {
this.in = in;
this.out = out;
}
@Override
public int play(int currentSum, int max, int avoid) {
out.printf("\nEnter a number from 1 to %d inclusive: ", max);
do {
String input = in.nextLine();
if (HELP.equals(input)) {
CursedNumberGame.displayHelp(out);
continue;
} else try {
int n = Integer.parseInt(input);
if (0 < n && n <= max) {
return n;
}
} catch (NumberFormatException notAnInt) {
}
out.print("Oops, illegal input. Try again: ");
} while (true);
}
@Override
public String toString() {
return "You";
}
}
The code below has been edited, due to the difference in the game:
CursedNumberGame.java
import java.io.PrintStream;
import java.util.NoSuchElementException;
import java.util.Scanner;
public class CursedNumberGame {
public interface Player {
/**
* Given the parameters of the game (max and avoid), and the
* current sum, chooses a number between 1 and max inclusive.
*/
int play(int currentSum, int max, int avoid);
}
private final int maxPerTurn, avoid;
public CursedNumberGame(int maxPerTurn, int avoid) {
this.maxPerTurn = maxPerTurn;
this.avoid = avoid;
}
public static void displayHelp(PrintStream out) {
out.println("The goal of this game is to stay below the number 21.\n\n"
+ "At each turn, a player chooses either \"1\", \"2\", or \"3\".\n"
+ "That number will be added to the current number.\n\n"
+ "You will be playing AI, and you will start. Try your best, but no matter how\n"
+ "hard you try, you will lose!");
}
public void run(Scanner scanner, PrintStream out) {
displayHelp(out);
Player[] players = new Player[] { new AI(), new HumanPlayer(scanner, out) }; // only line needed to change, so that the order of the players are reversed
int p, sum;
for (p = 0, sum = 0; sum < this.avoid; p = 1 - p) {
int choice = players[p].play(sum, this.maxPerTurn, this.avoid);
out.printf("%s played %d. The sum is now %d.\n", players[p], choice, sum + choice);
sum += choice;
}
out.printf("%s lost!\n", players[1 - p]);
}
private static boolean doAgain(Scanner scanner, PrintStream out) {
out.print("Do you want to play again? ");
while (true) {
char in = Character.toUpperCase(scanner.nextLine().charAt(0));
if (in == 'Y') {
return true;
} else if (in == 'N') {
return false;
}
out.print("Oops, that was not valid input. Try again: ");
}
}
public static void main(String[] args) {
CursedNumberGame game = new CursedNumberGame(3, 22); // One more here to change max
try (Scanner scanner = new Scanner(System.in)) {
do {
game.run(scanner, System.out);
} while (doAgain(scanner, System.out));
} catch (NoSuchElementException eof) {
}
System.out.println("Thanks for playing!");
}
}
AI.java
Well, the AI needs to change, if the AI must be designed differently, right?
public class AI implements CursedNumberGame.Player {
@Override
public int play(int currentSum, int max, int avoid) {
assert(max == 3);
assert(avoid == 22); // change the number to avoid to 22, which is one more than 21
return currentSum < 2 ? 1 : (5 - (currentSum % 5));
}
@Override
public String toString() {
return "The AI";
}
} | {
"domain": "codereview.stackexchange",
"id": 17323,
"tags": "java, game"
} |
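As a side note on the `AI` strategy above: for a misère subtraction game of this shape (pick 1 to max each turn; whoever reaches `avoid` loses), the textbook winning strategy is to leave the opponent on a sum congruent to `(avoid - 1) mod (max + 1)`. A minimal sketch in Python (the function name and the stalling fallback are my own):

```python
def best_move(current_sum, max_per_turn=3, avoid=22):
    # Winning strategy: leave the opponent at a sum congruent to
    # (avoid - 1) mod (max_per_turn + 1). From any other sum such a move
    # exists; from a losing position, just stall with 1.
    modulus = max_per_turn + 1
    target = (avoid - 1) % modulus          # 21 % 4 == 1 for this game
    move = (target - current_sum) % modulus
    return move if 1 <= move <= max_per_turn else 1
```

For max 3 and avoid 22 this targets the sums 1, 5, 9, ..., 21 (arithmetic mod 4), which may be worth comparing against the `% 5` arithmetic in the posted `play` method during review.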
reduction of maximum independent set to minimum distance of code | Question: Is there a reference for a direct reduction of computing the maximum independent set of a suitably constructed graph to computing the minimum distance of a linear code, when the code is specified by its parity-check matrix?
Answer: For the benefit of the others, the minimum code distance problem for codes over $\mathbb{F}_2$ is: given a matrix $H$ ($m$ by $n$, with elements from $\mathbb{F}_2$) and a positive integer $w$, does there exist a vector $x$ such that $Hx = 0$ and the Hamming weight (number of 1's) of $x$ is at most $w$?
Since both max ind set and the minimum distance of a linear code are NP-complete, there are many-one reductions in either direction. However, it doesn't seem that any of the reductions in the literature that I have seen are directly from max independent set. Actually it seems that the known reductions are from other problems related to coding. Alexander Vardy proved the NP-hardness of computing the min distance by reducing from the max-likelihood decoding problem: given a matrix $H$ over $\mathbb{F}_2$, a vector $s$ and a positive integer $w$, does there exist a vector $x$ such that $Hx = s$ and the weight of $x$ is at most $w$? This problem was shown to be NP-hard by Berlekamp and others by reduction from 3-dimensional matching (which is one of the Karp problems and can be reduced to from 3SAT). If you want to reduce max independent set to 3SAT and piece together all these reductions, I guess you can get what you want...
There is a line of work that looks at hardness of approximating the min distance problem. However, reductions there are also from other coding problems. For example, Dumer and others show a hardness of approximation result for the min distance problem by reducing from the nearest codeword problem (given a message find the nearest codeword in Hamming distance). A hardness of approximation result for the latter problem was proved by Arora and others, as far as I can tell using two methods: reducing from set cover, or reducing from label cover. | {
"domain": "cstheory.stackexchange",
"id": 1341,
"tags": "cc.complexity-theory, reference-request, graph-theory, np-hardness, coding-theory"
} |
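The decision problem defined in the answer can be made concrete with a brute-force checker — exponential in the block length, so only usable on toy instances, and taking the usual convention that the codeword must be nonzero (names below are mine):

```python
from itertools import combinations

def distance_at_most(H, w):
    # H: parity-check matrix over GF(2), given as a list of 0/1 rows.
    # Decide whether some nonzero x of Hamming weight <= w satisfies Hx = 0 (mod 2).
    n = len(H[0])
    for weight in range(1, w + 1):
        for support in combinations(range(n), weight):
            x = [1 if i in support else 0 for i in range(n)]
            if all(sum(h * xi for h, xi in zip(row, x)) % 2 == 0 for row in H):
                return True
    return False
```

For the length-3 repetition code, H = [[1,1,0],[0,1,1]], the only nonzero codeword is 111, so the checker answers False for w = 2 and True for w = 3.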
Why do circularly connected springs have a mode with zero frequency? | Question: I saw a problem with springs connected in a circle and found that one of the normal-mode frequencies is zero, and could not explain why.
Answer: The zero frequency mode corresponds to a merry-go-round of masses rotating in the same direction. No springs are being stretched/compressed so there is nothing (apart from friction of course) to stop and reverse the motion of the masses. | {
"domain": "physics.stackexchange",
"id": 27549,
"tags": "homework-and-exercises, spring, coupled-oscillators"
} |
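The answer's point can be checked directly: for masses on a ring coupled by identical springs, a uniform displacement of every mass stretches no spring, so the restoring forces vanish. A small sketch (function and variable names mine; k and m set to 1):

```python
def forces(u, k=1.0):
    # Restoring force on mass i for a ring of springs:
    #   F_i = -k * (2*u[i] - u[i-1] - u[i+1]), indices taken mod N.
    N = len(u)
    return [-k * (2 * u[i] - u[i - 1] - u[(i + 1) % N]) for i in range(N)]

# Mode frequencies are omega_q^2 = (2k/m) * (1 - cos(2*pi*q/N)); the q = 0
# mode has omega = 0: the uniform "merry-go-round" displacement below.
assert all(abs(f) < 1e-12 for f in forces([0.7] * 6))
```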
How does oxygen fight odors? (Deodorant ad) | Question: I saw an ad on tv last night for Mitchum deodorant that showed big text on the screen that said "OXYGEN eliminates all odors!" or something similar. Their website indeed claims "Our exclusive formula releases pure oxygen, a powerful odor fighter...".
How does oxygen fight odors, and how can this aspect of the deodorant be more effective than the abundance of pure O$_2$ in the air?
Answer: In my estimation, this is basically marketing nonsense. If you examine the list of ingredients, the primary active agent is invariably some aluminum or aluminum-zirconium salt, which act as anti-perspirants primarily by the mechanical actions of shrinking pores (due to their astringency) and clogging ducts in the apocrine glands.
Listed among the inactive ingredients is hydrogen peroxide, $\ce{H2O2}$. At sufficient concentrations, $\ce{H2O2}$ can be a potent oxidizing agent, and probably the source of the oxygen the ad refers to. It might indeed be able to neutralize some odors by oxidizing the molecules which cause them, thereby transforming those molecules into odorless ones. However, as it's listed among the inactive ingredients, it's almost certainly present in very low concentration. This is to be expected, since prolonged skin contact will invariably cause chemical burns unless the mixture is very dilute. Furthermore, much of the odor of sweat is the result of metabolic byproducts from skin-dwelling bacteria, chief among them certain short-chain carboxylic acids that are already maximally oxidized. The only malodorous substances in sweat that can be neutralized by oxidation, as far as I can tell, are certain amines and thiols. That said, I doubt the concentration of $\ce{H2O2}$ is sufficient to be of any use here. $\ce{H2O2}$ does also have anti-septic properties, so perhaps its purpose is bacteriostatic, but there I'm just speculating. Ultimately, I'm uncertain of its purpose in the formulation, but I'm extremely dubious that it helps substantially in eliminating odor. | {
"domain": "chemistry.stackexchange",
"id": 1361,
"tags": "redox, applied-chemistry"
} |
The differential height of the manometer and the force needed to hold the container in place are to be determined | Question: A cylindrical container equipped with a manometer is inverted and pressed into water. The differential height of the manometer and the force needed to hold the container in place are to be determined
Since the pressure at $A$ is the same as the pressure at $B$, I got
But since I can't find the value of $d$, I am unable to solve the rest of the exercise. Any suggestions?
Answer: Assume the tank is weightless, then
$F = 20\gamma_w A$
$20\gamma_w = \gamma_{mf}h$,
$SG = \dfrac{\gamma_{mf}}{\gamma_w} = 2.1$
Note the tank is pressurized internally by the floating bottom lid "A". | {
"domain": "engineering.stackexchange",
"id": 4325,
"tags": "fluid-mechanics"
} |
Physics near null infinity | Question: The concept of null infinity $\mathscr{I}$ is standard in general relativity, and more recently in the analysis of infrared structure of gravity (see e.g. the article by Strominger). I am curious about explicit examples in which the physics at or near null infinity is important, not just as a boundary condition.
For example, since observers can never reach null infinity, how relevant is any analysis (e.g. gravitational wave memory) near $\mathscr{I}^+$? Unlike $\mathscr{I}$, to me other parts of "infinity" (in the sense of Penrose diagram) has a more transparent meaning: for example, in the case of spatial infinity $i^0$ we can think about physics "far away" from an isolated object we are interested in. Similarly, the past/future timelike infinity $i^\pm$ has clear meaning in terms of observers at "early/late times" relative to some processes (e.g. scattering).
I have trouble thinking about physics near $\mathscr{I}^\pm$: certainly, massless perturbations (e.g. gravitational or electromagnetic waves) propagate at the speed of light and hence they are perturbations which will reach $\mathscr{I}^+$ for asymptotically flat spacetimes. What difference does different observers near different parts of $\mathscr{I}^+$ have? The only physical thing I know is really that massless perturbations reach $\mathscr{I}^+$ and that $\mathscr{I}^+$ can sometimes be used as initial data.
Remark: it would be best if the (potential) answer(s) are phrased in terms of observers or some experimental considerations. For example, I used scattering experiment to make sense of $i^\pm$ (since in those cases, one typically assumes e.g. in QFT that the field is asymptotically free/non-interacting), though not the only way.
Update 1: Note that, for example, in Strominger's work a lot of effort has been put in understanding gravitational memory near $\mathscr{I}$. So unless I am missing something, I don't see how that is important observationally (as he seems to claim) if we cannot come close to $\mathscr{I}$.
Answer: Future null infinity $\mathscr{I}^+$ is the natural location to analyse the observables for distant observers (even though, as Ben Crowell points out, no observer would ever reach or come close to $\mathscr{I}^+$). It is where radiation that has travelled a long time "lives". It is, after all, the limit of the (closure of the) spacetime as you follow a null ray outwards. Hence, it serves as a first approximation to what a distant observer would see. (In fact, one could define a "distant observer" as one for which the finite-distance effects are negligible.) Note that neither $i^0$ nor $i^+$ would serve for this purpose because radiation does not come near to either.
For example, when numerical relativists calculate the waveform of a black hole merger, this waveform is extracted at $\mathscr{I}^+$. Similarly, by studying the gravitational wave memory at $\mathscr{I}^+$, one obtains an approximation to the memory effects for distant observers. The effects of memory are more easily understood and formulated at $\mathscr{I}^+$ because the asymptotic symmetries of the spacetime are exact there, rather than approximate as at a finite distance. | {
"domain": "physics.stackexchange",
"id": 60835,
"tags": "general-relativity, field-theory, causality"
} |
Is there a way to publish keyboard events (using key_teleop for example) to a serial port? | Question:
Hi,
I've been looking at ways to publish keyboard commands to a UART serial port and the top result keeps pointing me to rosserial, but I'm not entirely sure this exactly fits my needs. Ideally, I want to be able to send keyboard events to my flight controller through PPM signals because I already have the hardware that would send signals from a serial port to my flight controller. This is how I envision it would go from all the materials I have:
KEYPRESS (Twist message) ==> UART SERIAL PORT (probably ttyS1) ==> PPM ENCODER (not an Arduino) ==> FLIGHT CONTROLLER
I found that I can publish Twist messages using either the key_teleop module or teleop_twist_keyboard, but I'm stuck as to how to send this to a serial port (i.e. ttyS1). Would I simply need to write my own ROS node and use pyserial (since I would be writing in python) to open the port and pass the message in? Or can I use rosserial somehow to pass in Twist messages?
Thank you in advance and please excuse me for asking a noob question.
Trixie
Originally posted by trixr4kdz on ROS Answers with karma: 45 on 2018-03-13
Post score: 0
Answer:
rosserial will assume control over your serial port, so if that port is being used by something else -- and that 'something' is not a rosserial client -- that is probably not what you want.
Would I simply need to write my own ROS node and use pyserial (since I would be writing in python) to open the port and pass the message in?
If you're basically trying to convert ROS geometry_msgs/Twist msgs into something your remote system can use, then yes, you'll probably have to write a node that does this for you.
Please take the time to first search for an existing solution / implementation though, as it's likely someone (or multiple someones) has (have) already done this.
Originally posted by gvdhoorn with karma: 86574 on 2018-03-14
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 30308,
"tags": "ros, rosserial, ros-kinetic, rosserial-python"
} |
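For the last point in the question — a small node using pyserial — the only real work is turning the `geometry_msgs/Twist` fields into whatever byte frame the PPM encoder expects. The frame format below is invented purely for illustration (your encoder defines the real protocol), and the function name is mine; in an actual node this would run inside a rospy subscriber callback and write to something like `serial.Serial('/dev/ttyS1', 115200)`:

```python
def twist_to_frame(linear_x, angular_z, scale=100):
    # Pack the two Twist fields we care about into a simple ASCII frame.
    # The "<a,b>\n" layout is a made-up placeholder, not a real PPM format.
    return "<{},{}>\n".format(int(linear_x * scale), int(angular_z * scale)).encode("ascii")
```

The subscriber callback would then be roughly `ser.write(twist_to_frame(msg.linear.x, msg.angular.z))`.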
First resonant frequency of a ring | Question: What is the first resonant frequency of a ring? It's an unusual shape because a ring doesn't have ends, it loops into itself.
Things with edges like poles and plates have a first resonance at which they are 1/2 wavelength long, because the nodes are at the edges; half of the wave fits inside. So if something is 3.4 cm long, and the material it's made from has the same speed of sound as air, then the first resonance is 5000 Hz, because that's the frequency with a 6.8 cm wavelength in that material.
But a ring does not have any edge, so is its first resonant frequency 1 wavelength as opposed to 1/2 for shapes with edges? Or does the lack of an edge have no impact on the first resonance?
For example, consider a ring with a 10 cm circumference, made from a material with a 3000 m/s speed of sound. The first resonant frequency should be 30000 Hz if it's 1 wavelength, or 15000 Hz if it's 1/2 wavelength. Which is it?
Answer: The key feature for getting resonance is coherence, so all you need is for the phase to return to the start as you go completely around. So that's one full wavelength per circumference. If you only had half a wavelength, it arrives 180 degrees out of phase, and would destructively interfere with itself. | {
"domain": "physics.stackexchange",
"id": 47631,
"tags": "acoustics"
} |
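Plugging the question's numbers into the answer's conclusion (one full wavelength must fit around the ring):

```python
speed = 3000.0          # m/s, speed of sound in the ring material
circumference = 0.10    # m
wavelength = circumference      # the phase must close after one full loop
f_first = speed / wavelength    # first resonance, approx 30000 Hz
```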
rviz crashes at startup for universal robot | Question:
I have installed the hydro-devel-c-api branch of the universal_robot package from the ros-industrial github and am attempting to run the demonstration commands from the readme file. The python demonstration runs fine and controls the UR5 well. However, when I use the roslaunch ur5_moveit_config moveit_planning_execution.launch sim:=false robot_ip:=IP_OF_THE_ROBOT command (with the proper ip address there), rviz begins to start, and then crashes. This is the terminal output from this command: pastebin.com/wTUu5xQN.
Originally posted by airplanesrule on ROS Answers with karma: 110 on 2014-06-04
Post score: 0
Answer:
This may be related to the problem described in Does anyone have a ur5 working with moveit and hydro?, and in particular to this comment:
It would seem the UR5 MoveIt cfg uses a set of links, instead of a single chain, which is different from other ROS-I cfgs. I'm not sure why it is setup like that, there might be a valid reason.
As far as I know (and as @sedwards explained on the mailinglist), the hydro-devel-c-api branch hasn't been updated with the regenerated MoveIt configurations, which could explain why you are still seeing that crash.
A related bug report on the universal_robot github repository: 41: rviz crashes when trying to launch ur5_moveit_config in hydro.
Possibly related other ROS Answers question (and answer): Problem with ikfast_moveit tutorial.
Originally posted by gvdhoorn with karma: 86574 on 2014-06-05
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 18167,
"tags": "ros, rviz, ros-industrial, universal-robot, ur-driver"
} |
Determining orbital height of a satellite just by observing angular velocity | Question: A few days ago I observed a satellite with my telescope. But when I later looked at Stellarium (after adding all available satellite databases), there was no satellite passing the spot that I observed at the given time (one was close but way too fast and should not have been visible).
Since I knew how big my FOV was and had roughly counted how many seconds it took to cross it, I thought it should be possible to determine the height of the satellite's orbit (assuming it's circular).
I had a few different approaches; one was to set the formula for orbital velocity $v = \sqrt{\frac{GM}{R+h}}$ equal to the formula for apparent size (per second to get velocity) $v = 2h \cdot \tan(a/2)$ [M=mass of earth, R=radius of earth, h=orbital height, a=angle the satellite moves in one second measured in degrees]
Using WolframAlpha, I got a very long formula as a result with cot and stuff but in the end it turned out it wasn't right, as I checked with other satellites, it gave wrong heights, and for LEO objects it gave an error since the result would be imaginary.
So I tried a different approach:
$$\sqrt{\frac{GM}{R}} = \frac{(2\pi R)}{(2\pi/\omega)}$$
[R=total radius of satellite orbit, w=angular velocity in rad/s]
(Basically orbital velocity using gravity = orbital length depending on radius / time it takes for a $2\pi$ (i.e. one complete) orbit)
In the end you should get something like
$$R = \sqrt[3]{GM/\omega^2}$$
And to get the height just subtract the earth's radius
It seems to work out better, as it can return solutions in a LEO, but it still gives the wrong solutions.
Is there a well-known equation for determining orbital radius from angular velocity? Did I just make some calculation error? Please help!
Answer: The transit time across your field of view would depend on your position on the surface of the earth as well as the speed and altitude of the satellite. That gets really complicated. I'm thinking that to estimate the altitude, you should observe it passing overhead several times and use your best estimate of the period of revolution to calculate the altitude. Don't forget to allow for your rotation with the earth. | {
"domain": "physics.stackexchange",
"id": 85341,
"tags": "newtonian-gravity, orbital-motion, angular-velocity, satellites"
} |
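As a sanity check on the asker's final formula $R=\sqrt[3]{GM/\omega^2}$: it is correct when $\omega$ is the orbital angular velocity (the answer's point is that the apparent angular rate seen from the ground is not that $\omega$). Using one revolution per sidereal day should recover the geostationary radius:

```python
import math

G = 6.674e-11                 # m^3 kg^-1 s^-2
M = 5.972e24                  # kg, mass of the Earth
omega = 2 * math.pi / 86164   # rad/s, one revolution per sidereal day

R = (G * M / omega ** 2) ** (1 / 3)   # orbital radius, ~4.216e7 m
height = R - 6.371e6                  # altitude above the surface, ~3.58e7 m
```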
What causes magnetism? | Question: Electric fields are caused by particles that have charge. Gravitational fields are caused by particles that have mass. What is the property of a particle that causes it to interact with a magnetic field?
I've read that magnetic fields are caused by charged particles that are moving in an electric field. However, since any charged particle in an electric field experiences a force on it which causes it to move, doesn't this mean that an electric field is impossible to detect without a magnetic field also being present, and vice versa? How would a magnetic field and electric field differ in this case?
I've also read that magnetism is caused by groupings of particles with the same spin state. Is this the quantum mechanical explanation, and if so then how do groupings of such particles create a magnetic field?
Answer: If all the spins are aligned, then a magnetic field is created. It is the direction of the spin that causes the attraction or repulsion (As far as my understanding goes)
An electric field can be created by a stationary charge (electrostatics). It is only when this charge moves that the magnetic field is created. | {
"domain": "physics.stackexchange",
"id": 85810,
"tags": "electromagnetism, magnetic-fields"
} |
Algorithm comparison | Question: I am learning Big O and Big Theta notation and am confused by a certain case.
I have two functions,
function 1(f1)
$$ n * n^{1/2} $$
function 2(f2)
$$ 1.001^n $$
In smaller cases (10,000), f1 is much faster than f2; however, it is the other way around for a bigger case (10,000,000). For the smaller case, I can say
$$ f2 = O(f1) $$
In the bigger case, I can say
$$ f1 = O(f2) $$
and I think this case is,
$$ f1 = \Theta(f2) $$
(If ^ this is wrong, please let me know)
In this case, can I say
$$ f1 \equiv f2 $$
Answer: No, you can't say that, because $O$ doesn't mean "For small inputs, [something] happens" and $\Theta$ doesn't mean "For some inputs, one of the functions is bigger and, for other inputs, the other one is." In mathematics, it is essential to use the actual definition of the concepts that you're working with, and to prove that the definition applies in your situation. | {
"domain": "cs.stackexchange",
"id": 13264,
"tags": "asymptotics"
} |
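The answer's point — that $O$ and $\Theta$ are statements about all sufficiently large $n$, not about a particular input size — can be seen numerically: $1.001^n$ eventually dominates every polynomial, so $n^{3/2}=O(1.001^n)$ but not conversely, and the two are not $\Theta$ of each other. A quick illustration:

```python
def f1(n):
    return n ** 1.5      # n * sqrt(n)

def f2(n):
    return 1.001 ** n    # exponential with a base barely above 1

# Which function is larger flips once n passes a crossover point; asymptotic
# notation only cares about the behaviour beyond every such point.
assert f1(10_000) > f2(10_000)      # before the crossover
assert f1(100_000) < f2(100_000)    # the exponential has overtaken for good
```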
How can I balance the following equation atomically and electrically? | Question: $$\ce{C2O4^2- + MnO2 -> Mn^2+ + CO2}$$
I think that the half reactions are
$$\ce{C2O4^2- -> CO2}$$
$$\ce{MnO2 -> Mn^2+}$$
I am supposed to balance these by adding water, $\ce{H+}$ ions, and $\ce{e-}$'s, but I'm just not sure of the method, as we've covered it extremely quickly.
First, I found the oxidation numbers for the overall equation, and I think that $\ce{C2O4^2-}$ is the reducing agent because $\ce{C}$ is oxidised from +3 to +4; I just don't know how to use that to balance the equation. Any help would be appreciated.
Answer: Deal with the two half equations separately and then combine them.
Starting with the oxalate equation: $$\ce{C2O4^{2-} -> CO2}$$
Balance the atoms: $$\ce{C2O4^{2-} -> 2CO2}$$
Now balance the charge by adding electrons: $$\ce{C2O4^{2-} -> 2CO2 + 2e-}$$
Now for the manganese dioxide reduction: $$\ce{MnO2 -> Mn^{2+}}$$
We can balance the half equations by adding water, hydrogen ions or electrons. Since water is the only one which contains oxygen we should add this first: $$\ce{MnO2 -> Mn^{2+} + 2H2O}$$
Now add hydrogen ions to balance the hydrogens: $$\ce{MnO2 +4H+ -> Mn^{2+} + 2H2O}$$
Finally add electrons the balance the charge: $$\ce{MnO2 +4H+ + 2e- -> Mn^{2+} + 2H2O}$$
Now combine the equations to cancel all the electrons. In this case there are two on both sides so we don't need to multiply any of the equations by anything.
$$\ce{MnO2 +4H+ + C2O4^{2-} -> Mn^{2+} + 2CO2 + 2H2O}$$ | {
"domain": "chemistry.stackexchange",
"id": 3230,
"tags": "electrochemistry, redox, stoichiometry"
} |
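A quick bookkeeping check on the final combined equation, tallying each element and the net charge on both sides (dictionary layout mine):

```python
# MnO2 + 4 H+ + C2O4^2-  ->  Mn^2+ + 2 CO2 + 2 H2O
left = {"Mn": 1, "O": 2 + 4, "H": 4, "C": 2, "charge": (+4) + (-2)}
right = {"Mn": 1, "O": 2 * 2 + 2, "H": 2 * 2, "C": 2, "charge": +2}
assert left == right   # every element and the net charge balance
```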
Using invariant hyperbolas to calibrate between reference frames | Question:
We can now calibrate the axes of $\tilde{O}$. In Fig. 1.11,
the axes of $O$ and $\tilde{O}$ are drawn, along with an invariant hyperbola of timelike interval $-1$ from the origin. Event $A$ is on the $t$-axis, so it has $x=0$. Since the hyperbola has the equation $$-t^2 + x^2 = -1,$$ it follows that event $A$ has $t=1$. Similarly, event $B$ lies on the $\tilde{t}$-axis, so it has $\tilde{x}=0$. Since the hyperbola has the equation $$-\tilde{t}^2 + \tilde{x}^2=-1,$$ it follows that event $B$ has $\tilde{t}=1$. We have, therefore, used the hyperbolae to calibrate the $\tilde{t}$ axis.
I have the following questions from the above:
Precisely what does it mean to calibrate axes? I think it means to establish simultaneity between two different reference frames, but, I'd like a second confirmation as well.
Secondly how were events $A$ and $B$ chosen? No mention of that was given in the book to my knowledge.
If all else fails, could you recommend another book talking about this calibration procedure between two inertial frames with coordinates?
Answer: In units where $c=1$, the Lorentz transformations are
\begin{align}
\bar{t}&=\frac{t-vx}{\sqrt{1-v^2}},\quad\bar{x}=\frac{x-vt}{\sqrt{1-v^2}}.
\end{align}
By following a similar process to what I described in this answer, I leave it to you to verify the following facts: in the $(t,x)$ coordinate system,
The $\bar{t}$-axis is drawn at a slope of $\frac{1}{v}$, while the $\bar{x}$-axis is drawn at a slope of $v$
level sets of $t$ are horizontal lines, while level sets of $\bar{t}$ are straight lines of slope $v$
level sets of $x$ are vertical lines while level sets of $\bar{x}$ are straight lines of slope $\frac{1}{v}$
$-t^2+x^2=-\bar{t}^2+\bar{x}^2$.
Now some basic facts about hyperbolas: the set of points for which $-t^2+x^2=k$ (equivalently $-\bar{t}^2+\bar{x}^2=k$), where $k\in\Bbb{R}$ is a constant fall into three categories:
$k>0$, in which case the hyperbola opens left-right
$k=0$, which are straight lines $t=\pm x$ (degenerate hyperbolas)
$k<0$, in which case the hyperbola opens top-bottom,
and obviously through each point in the plane, there is a unique such hyperbola passing through that point.
I'm not sure about the purpose of mentioning this calibration procedure using hyperbolas. Generally, when we calibrate something, it means we have some fixed scale, and we're using that to set the scale of something else. But here, we already know the full relationship between $(t,x)$ and $(\bar{t},\bar{x})$, so I'm not sure about the purpose of this. In any case, here's what we can say geometrically: if we consider the point $\mathscr{A}$ with $t(\mathscr{A})=1,x(\mathscr{A})=0$, then you can draw the unique hyperbola through the point $\mathscr{A}$ (i.e the hyperbola with $k=-(1)^2+0^2=-1$). This hyperbola will intersect the $\bar{t}$-axis at a unique point $\mathscr{B}$ which has $\bar{t}(\mathscr{B})=1,\bar{x}(\mathscr{B})=0$. So, by starting from $(t,x)=(1,0)$ we managed to geometrically figure out where the point with $(\bar{t},\bar{x})=(1,0)$ lies. We can keep going: start with the point having $(t,x)=(2,0)$ we can geometrically figure out the point with $(\bar{t},\bar{x})=(2,0)$, next we can start with the point with $(t,x)=(3,0)$ we can figure out where $(\bar{t},\bar{x})=(3,0)$ is and so on.
Similarly, starting with the point such that $(t,x)=(0,1)$ we can draw the hyperbola and figure out where $(\bar{t},\bar{x})=(0,1)$ is, and then so on with $(0,2),(0,3)$ etc. So, by having the scale markings on the $t$ and $x$ axes, we can geometrically figure out the scale markings on the $\bar{t}$ and $\bar{x}$ axes. Having said this, and while this geometry is nice on its own, it doesn't tell you anything which the Lorentz transformations do not already tell you. Hopefully this answers your questions 1 and 2.
Finally, here's a quibble with your comment following your question 1:
... I think it means to establish simultaneity between two different reference frames, but, I'd like a second confirmation as well.
No, this makes no sense. Simultaneity is not a notion to be related between two observers. Simultaneity is a concept within one observer's frame. In the $(t,x)$ coordinate system, simultaneity is described by the level sets of $t$. Or more precisely, we say two events $p,q$ in the spacetime are simultaneous with respect to $(t,x)$ if and only if they lie in the same level set of $t$ (i.e if and only if $t(p)=t(q)$). In the case we're considering, these are the horizontal lines, as I mentioned above. We can obviously make an analogous definition for the barred coordinates: two events $p,q$ are said to be simultaneous with respect to $(\bar{t},\bar{x})$ if and only if they lie in the same level set of $\bar{t}$ (i.e $\bar{t}(p)=\bar{t}(q)$), and geometrically this happens if and only if they lie on the same straight line of slope $v$. | {
"domain": "physics.stackexchange",
"id": 93643,
"tags": "special-relativity"
} |
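The calibration described in the answer can be verified numerically: boost the event with $(\bar t,\bar x)=(1,0)$ back into the unbarred frame and check that it lies on the same unit hyperbola and on the $\bar t$-axis. (Units with $c=1$; helper name mine.)

```python
import math

def boost(t, x, v):
    # Lorentz transformation with c = 1
    g = 1.0 / math.sqrt(1.0 - v * v)
    return g * (t - v * x), g * (x - v * t)

v = 0.6
# Event B has (tbar, xbar) = (1, 0); boosting with velocity -v gives its
# coordinates in the unbarred frame.
tB, xB = boost(1.0, 0.0, -v)
assert abs((-tB ** 2 + xB ** 2) - (-1.0)) < 1e-12  # same invariant hyperbola
assert abs(xB / tB - v) < 1e-12                    # B lies on the tbar-axis
```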
Why do we assume that a photon has encountered only a single collision with an electron in Compton scattering | Question: To explain the experimental results of Compton scattering theoretically, we consider a collision between a photon and a free electron and then calculate the new wavelength of the photon after the collision, which depends on the angle of deviation. Why do we assume here that the photon reaching the detector has encountered only one collision with an electron? It could have reached the detector after multiple collisions with different electrons, which would give a different $\Delta\lambda$ for the same angle. Is it because a photon colliding with multiple electrons on its path is very unlikely? Also, what is the reason for the nonzero intensity at wavelengths other than the two peak ones?
Answer: This is basically called the "thin target approximation". The experimental detection rate depends on the product of the photon flux, the target areal density, and the cross section. The achieve a suitable rate, it's better to have more photon flux than more target atoms. Indeed, if the target is too thick, there will be multiple scattering events, and the analysis will be more difficult. | {
"domain": "physics.stackexchange",
"id": 79834,
"tags": "quantum-mechanics, photons, scattering"
} |
Time dependent wave function of a particle in a gravitational field | Question: I found this great question about the solution of the Schrodinger equation for a particle in a constant gravitational field, but the solution they wanted is to the time independent Schrodinger equation.
$$E \psi=\frac{-\hbar^2}{2m}\frac{\partial^2 \psi}{\partial x^2}+mgx\psi$$
Wave function of a particle in a gravitational field
I want a solution for the time dependent Schrodinger equation for a particle in a constant gravitational field, one with dispersion, where the energy is not exactly known. How do I get it? Basically I am trying to get a solution to this equation
$$i\hbar \frac{\partial\psi}{\partial t}=\frac{-\hbar^2}{2m}\frac{\partial^2 \psi}{\partial x^2}+mgx\psi$$
Hi I am coming back to edit this question in hopes that I can direct students in the right direction. The reason I asked this question in February 2021 was due to a fundamental misunderstanding of quantum mechanics and the Schrodinger equation. I did not understand the role of the time independent Schrodinger equation and I did not see the use in decomposing the wave function into a sum of stationary states. Here is a resource that greatly helped me understand why you would want to do it and HOW to do it. I have linked to the specific page that made things click for me. https://farside.ph.utexas.edu/teaching/qmech/Quantum/node101.html
Answer: The time-dependent SE in PDE shorthand:
$$i\hbar \psi_t=-\frac{\hbar^2}{2m}\psi_{xx}+mgx\psi$$
To solve it, we use separation of variables, by assuming the Ansatz:
$$\psi(x,t)=\Psi(x)\phi(t)$$
Inserting into the PDE:
$$i\hbar \Psi(x)\phi'(t)=-\frac{\hbar^2}{2m}\phi(t)\Psi''(x)+mgx\Psi(x)\phi(t)$$
Dividing both sides by $\psi(x,t)$ we get:
$$i\hbar\frac{\phi'}{\phi}=-\frac{\hbar^2}{2m}\frac{\Psi''}{\Psi}+mgx=E$$
Where $E$ is a separation constant.
So we obtain two separate DEs, one in $t$ and one in $x$:
$$i\hbar\frac{\phi'}{\phi}=E\tag{1}$$
$$\Rightarrow \phi(t)=e^{-\frac{iEt}{\hbar}}$$
$$-\frac{\hbar^2}{2m}\frac{\Psi''}{\Psi}+mgx=E\tag{2}$$
$(2)$ is basically the DE you'll find in your link. | {
"domain": "physics.stackexchange",
"id": 76255,
"tags": "quantum-mechanics, homework-and-exercises, wavefunction, potential, schroedinger-equation"
} |
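The time factor obtained in the answer can be checked numerically against equation (1), $i\hbar\,\phi'=E\phi$, using a central finite difference (units with $\hbar=1$; the constants are arbitrary test values):

```python
import cmath

hbar, E = 1.0, 2.0
phi = lambda t: cmath.exp(-1j * E * t / hbar)   # the separated time factor

dt, t0 = 1e-6, 0.3
dphi = (phi(t0 + dt) - phi(t0 - dt)) / (2 * dt)     # central difference for phi'
assert abs(1j * hbar * dphi - E * phi(t0)) < 1e-6   # i*hbar*phi' == E*phi
```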
Creating and sending a Fixed-Width file | Question: This function will run every x seconds.
PAN: The name we are calling our fixed width file we are transmitting.
Private Sub Start()
' Get the data from the API so we can get ready to make the PAN File
Dim Baker As New PAN.Maker
Baker.SetIngredients(GetAPIData)
' Make the PAN file
Baker.Make()
' Retrieve the PAN file from the baker
Dim PAN As PANv1 = Baker.GetPAN()
' Transmit the PAN file, via FTP
Dim PANSender As New PAN.Transmitter(GetFTPDetails)
PANSender.Send(PAN)
End Sub
I create the Objects Maker and Transmitter in a PAN namespace to group the functionality. Is this wise?
I decide to use a method to set the data it needs before it can create the PAN file, rather than using a constructor. What is the preferred method in this case?
Am I using too many Objects to handle these tasks, when a more procedural structure would do?
Is the naming of the Baker and Transmitter confusing because they differ from the Objects name?
Are these straightforward and easily understandable OOP practices?
Answer:
This function will run every x seconds.
That's great. But without more context, there is no way we can identify any potential issues with the code you've provided.
Private Sub Start()
Depending on what the containing class is called, that method's name isn't very clear. And why is it Private? Who's calling it? A method like Start() sounds like something that belongs on the public interface; I'm confused.
If I were maintaining this code, I would remove every single comment you have there. Why? Because every single one of these comments is restating what the code does, and good comments should say why the code does what it's doing.
I create the Objects Maker and Transmitter in a PAN namespace to group the functionality. Is this wise?
It looks like you have very specialized objects, and that is a very good thing. Whether or not they deserve their own namespace isn't something I can comment on, because we have no idea what else you have here: for all I know, every type involved in what you've shown could be in the same namespace. It depends on the scope of your application: if it's an ERP system with 4,327 other functionalities, then yes, definitely put it in its own namespace. If it's a specialized application that only deals with "PAN" files, I don't know.
I decide to use a method to set the data it needs before it can create the PAN file, rather than using a constructor. What is the preferred method in this case?
I'm guessing you're referring to this line:
Baker.SetIngredients(GetAPIData)
What's GetAPIData? It looks like it could be a member call (a property getter?), or a field - in either case, the name is a bad one: GetAPIData would be a name for a method. Without seeing its declaration, it's very hard to tell what's going on here - heck, it could even be a delegate (in which case the name isn't too bad after all).
The point is, if it's just data, then there's no reason to not pass it through the constructor. If it's work to do, then you've done the right thing: constructors shouldn't do work.
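A language-agnostic sketch of that distinction (in Python, since the VB.NET type declarations aren't shown; the class and field names here are hypothetical, not taken from the original code):

```python
class PanMaker:
    """Hypothetical sketch: plain data goes through the constructor."""

    def __init__(self, ingredients):
        # Just data: no I/O, no parsing, no work done in the constructor.
        self.ingredients = ingredients

    def make(self):
        # Actual work (building the fixed-width contents) belongs in a method.
        return "".join(str(item).ljust(10) for item in self.ingredients)

maker = PanMaker(["a", "b"])
print(maker.make())  # each field padded to a fixed width of 10
```

The point being illustrated: construction just captures state, while `make()` does the work, so a half-initialized object can never exist.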
Am I using too many Objects to handle these tasks, when a more procedural structure would do?
That's primarily opinion-based, but VB.NET, like C#, is an object-oriented language; procedural, "script-like" code would work, but if you want SOLID code, it just won't do.
Again, we have no idea what your objects are actually doing, so it's not possible to give you an answer here, but if your objects essentially are made of one or two one-liner methods, OOP is probably overkill.
Is the naming of the Baker and Transmitter confusing because they differ from the Objects name?
I'm confused here. Baker is an object name:
Dim Baker As New PAN.Maker
And Transmitter is a type name:
Dim PANSender As New PAN.Transmitter(GetFTPDetails)
PAN looks like it's a class name, and Transmitter would be a nested type - I don't like this. If PAN is a namespace, it's a bad name and Transmitter shouldn't be fully-qualified like this.
Dim PAN As PANv1 = Baker.GetPAN()
The local variable PAN confuses things even further - is it the same PAN as seen in PAN.Transmitter? That makes Transmitter a shared method, and that's starting to smell. | {
"domain": "codereview.stackexchange",
"id": 12587,
"tags": "object-oriented, vb.net"
} |
Will I hyperventilate if I breathe twice as fast at an altitude with half as much oxygen as I am used to? | Question: Will I hyperventilate if I breathe twice as fast at an altitude with half as much oxygen as I am used to? If not twice as fast, should I breathe any amount faster on average than usual when at high altitudes?
Answer: By definition, hyperventilation is a state of increased breathing where the exhaled $CO_2$ is greater than what is produced by the body.
Except under artificial conditions or in a disease process, breathing faster than the autonomously dictated rate is going to cause hyperventilation. You have a finely tuned control of respiration that will work out the correct breathing rate for you.
Oxygen is actually irrelevant to this phenomenon, first because of the definition of hyperventilation, but also because oxygen is mostly transported by haemoglobin (98+%). This means there is a fixed quantity of oxygen that can be transported per unit volume of blood (and per breath, for a given cardiac output), and that maximum is already reached at the normal breathing rate. Breathing faster does not allow for better oxygenation of the blood, even at a higher altitude.
On the other hand, $CO_2$ is transported in the plasma: dissolved (7%), but mostly in the form of bicarbonate (93%), through the following equilibrium:
$$ H_2O + CO_2 \rightleftharpoons \underbrace{H_2CO_3}_\text{Carbonic acid} \rightleftharpoons \underbrace{HCO_3^-}_\text{Bicarbonate} + H^+ $$
In the alveoli, a quantity of $CO_2$ diffuses into the alveolar air and is breathed out, the target being to eliminate the quantity produced by the body. However, unlike oxygen transport, it is possible to overshoot by breathing faster, because removing $CO_2$ drives the equilibrium in the direction $HCO_3^- + H^+ \rightarrow H_2O + CO_2$, that is, additional $CO_2$ is eliminated through the removal of bicarbonate from the blood.
This is hyperventilation, and it leads to hypocapnia, causing that particular sensation of dizziness. It also leads to respiratory alkalosis because, as seen in the reaction equation, this incurs the loss of protons ($H^+$).
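To see numerically how blowing off $CO_2$ shifts blood pH, here is a rough sketch using the Henderson-Hasselbalch equation for the bicarbonate buffer. The specific values (normal $pCO_2 \approx 40$ mmHg, $[HCO_3^-] \approx 24$ mmol/L, $pK_a = 6.1$) are typical textbook figures, not taken from the answer above:

```python
import math

def blood_ph(hco3_mmol_l, pco2_mmhg, pka=6.1, co2_solubility=0.03):
    """Henderson-Hasselbalch equation for the bicarbonate buffer.
    co2_solubility (mmol/L/mmHg) converts pCO2 to dissolved CO2."""
    return pka + math.log10(hco3_mmol_l / (co2_solubility * pco2_mmhg))

# Normal ventilation: pCO2 ~ 40 mmHg
print(round(blood_ph(24, 40), 2))  # ~7.40, normal arterial pH

# Hyperventilation blows off CO2 (hypocapnia): pCO2 falls to ~25 mmHg
print(round(blood_ph(24, 25), 2))  # pH rises above 7.45: respiratory alkalosis
```

Bicarbonate re-equilibrates over hours to days via the kidneys, so in acute hyperventilation the drop in $pCO_2$ dominates and pH rises.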
"domain": "biology.stackexchange",
"id": 6337,
"tags": "breathing"
} |
Antimatter in Jupiter's magnetic field? | Question: My question is based on this Wikipedia article about Jupiter's magnetic field:
https://en.wikipedia.org/wiki/Magnetosphere_of_Jupiter
Jupiter's magnetic field generates plasma and accelerates ionized molecules, eventually producing collisions. Is antimatter generated naturally in this scenario?
Answer: Yes it is. See this 2012 blog post by Paul Gilster:
https://www.centauri-dreams.org/2012/05/17/powering-up-the-antimatter-engine/
It says things like this:
Antimatter in space is an idea that James Bickford (Draper Laboratory) analyzed in a Phase II study for NASA’s Institute for Advanced Concepts, for he had realized that high-energy galactic cosmic rays interacting with the interstellar medium (and also with the upper atmospheres of planets in the Solar System) produce antimatter. In fact, Bickford’s calculations showed that about a kilogram of antiprotons enter the Solar System every second, though little of this reaches the Earth. To harvest some of this incoming antimatter, you need a planet with a strong magnetic field, so Jupiter is a natural bet for Baxter’s scientists, who go there to forage.
But it also says Saturn's a better bet, and that Earth is handier, along with this:
The problem with this — and this has been noted by The Physics arXiv Blog and Jennifer Ouellette in recent days — is that PAMELA could come up with only 28 antiprotons over the course of 850 days of data acquisition. There is no question that Bickford is right in seeing how antimatter can be produced locally. In fact, the paper on the PAMELA work says this: “The flux exceeds the galactic CR antiproton flux by three orders of magnitude at the current solar minimum, thereby constituting the most abundant antiproton source near the Earth.” But does the process produce enough antimatter to make local harvesting a serious possibility?
If you want some antimatter, buy a banana. I kid you not. See the Symmetry article on antimatter from bananas. Flip Tanedo says this: “While researching natural sources of antimatter, I discovered a curious article about a naturally occurring potassium isotope that, some fraction of the time, decays via positron emission. The conclusion was that: The average banana (rich in potassium) produces a positron roughly once every 75 minutes”. Better still, buy a whole bunch of bananas! | {
"domain": "astronomy.stackexchange",
"id": 3568,
"tags": "jupiter, magnetic-field, antimatter"
} |
Cmake debug option | Question:
Hello,
I may just have searched poorly, but I didn't find this information. I'm used to Makefiles, but how do I use the options "-g" and "-Wall" in CMake with catkin? I tried writing a really dumb program with a memory leak and running it with valgrind, but I get the following:
==20197== 4 bytes in 1 blocks are definitely lost in loss record 1 of 1
==20197== at 0x4C2B1C7: operator new(unsigned long) (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==20197== by 0x400845: main (in /home/ros/groovy_ws/catkin_ws/devel/lib/stalker/debug)
And so no line numbers...
Thanks a lot !
Originally posted by Maya on ROS Answers with karma: 1172 on 2014-04-21
Post score: 2
Answer:
This will set -g for you:
catkin_make -DCMAKE_BUILD_TYPE=Debug
I think -Wall is set automatically.
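If you want -Wall regardless of the build type, you can also set it explicitly in your package's CMakeLists.txt (a sketch, placed after the project() call; add_compile_options requires CMake 2.8.12 or newer):

```cmake
# Enable warnings for every build type; -g comes from CMAKE_BUILD_TYPE=Debug.
add_compile_options(-Wall)

# On older CMake versions that lack add_compile_options, you can instead use:
# set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wall")
```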
Here are some useful CMake variables: http://www.cmake.org/Wiki/CMake_Useful_Variables
Originally posted by joq with karma: 25443 on 2014-04-21
This answer was ACCEPTED on the original site
Post score: 8
Original comments
Comment by Maya on 2014-04-21:
Thanks a lot ! I've been searching exactly for that ! | {
"domain": "robotics.stackexchange",
"id": 17720,
"tags": "catkin, cmake"
} |