| anchor | positive | source |
|---|---|---|
A situation of moving a length of copper wire in a magnetic field? | Question:
Hello,
The book says,
"when a length of copper wire PQ is moved downwards through the poles of two horizontal bar magnets as shown below. Compared to end Q, end P will have fewer electrons."
But isn't it the opposite? Using the left hand rule, for me it seems like the electrons will gather more on the end P - I mean, your thumb points downwards and index finger points right, and the middle finger points backwards which means that the electrons come towards you (the opposite direction to the conventional current).
# Also, would the situation be different if the copper wire is in a loop (continuous?)
Sorry about my horrible handwriting!
Answer: This is the phenomenon of electromagnetic induction, so you cannot use Fleming's left hand rule, you have to use the right hand rule.
Your thumb will point down as the conductor moves down, and your index finger will point to the right. If you do it properly, the current will flow from Q to P, so your explanation is correct: P has an excess of electrons and Q has a deficit of electrons.
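The directions can be double-checked numerically with a cross product. This is an illustrative sketch (not from the original post); the axes are assumed as x to the right, y toward the reader, z up, so the wire moves in −z and the field points in +x:

```python
def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

v = (0, 0, -1)   # wire velocity: downward
B = (1, 0, 0)    # magnetic field: to the right

F_positive = cross(v, B)                     # F = q v x B with q = +1
F_electron = tuple(-c for c in F_positive)   # electron carries q = -1

print(F_positive)   # (0, -1, 0): a positive charge is pushed away from the reader
print(F_electron)   # (0, 1, 0): electrons are pushed toward the reader
```

So the electrons pile up at the end of the wire nearest the reader, consistent with the reasoning above.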
Please edit left hand to right hand.
Rest is fine. | {
"domain": "physics.stackexchange",
"id": 39358,
"tags": "magnetic-fields"
} |
Implementing U-Net segmentation model without padding | Question: I'm trying to implement the U-Net CNN as per the published paper here.
I've followed the paper architecture as closely as possible but I'm hitting an error when trying to carry out the first concatenation:
From the diagram, it appears the 8th Conv2D should be merged with result of the 1st UpSampling2D operation, however the Concatenate() operation throws an exception that the shapes don't match:
def model(image_size = (572, 572) + (1,)):
# Input / Output layers
input_layer = Input(shape=image_size, batch_size=32)
""" Begin Downsampling """
# Block 1
conv_1 = Conv2D(64, 3, activation = 'relu')(input_layer)
conv_2 = Conv2D(64, 3, activation = 'relu')(conv_1)
max_pool_1 = MaxPool2D(strides=2)(conv_2)
# Block 2
conv_3 = Conv2D(128, 3, activation = 'relu')(max_pool_1)
conv_4 = Conv2D(128, 3, activation = 'relu')(conv_3)
max_pool_2 = MaxPool2D(strides=2)(conv_4)
# Block 3
conv_5 = Conv2D(256, 3, activation = 'relu')(max_pool_2)
conv_6 = Conv2D(256, 3, activation = 'relu')(conv_5)
max_pool_3 = MaxPool2D(strides=2)(conv_6)
# Block 4
conv_7 = Conv2D(512, 3, activation = 'relu')(max_pool_3)
conv_8 = Conv2D(512, 3, activation = 'relu')(conv_7)
max_pool_4 = MaxPool2D(strides=2)(conv_8)
""" Begin Upsampling """
# Block 5
conv_9 = Conv2D(1024, 3, activation = 'relu')(max_pool_4)
conv_10 = Conv2D(1024, 3, activation = 'relu')(conv_9)
upsample_1 = UpSampling2D()(conv_10)
# Connect layers
merge_1 = Concatenate()([conv_8, upsample_1])
Error:
Exception has occurred: ValueError
A `Concatenate` layer requires inputs with matching shapes except for the concat axis. Got inputs shapes: [(32, 64, 64, 512), (32, 56, 56, 1024)]
Note that the values 64 and 56 correctly line up with the architecture.
I don't understand how to implement the model as it is in the paper. If I change my code to accept an image of shape (256, 256) and add padding='same' to the Conv2D layers, the code works as the sizes are aligned.
This seems to go against what the authors specifically state in their implementation:
Could somebody point me in the right direction on the correct implementation of this model?
Answer: (figure: the U-Net architecture diagram from the paper, annotated with three coloured circles)
Let's follow the definition of each arrow:
Gray => Copy and Crop
Every step in the expansive path consists of an upsampling of the
feature map followed by a 2x2 convolution (“up-convolution”) that halves the
number of feature channels, a concatenation with the correspondingly cropped
feature map from the contracting path, and two 3x3 convolutions, each followed by a ReLU. The cropping is necessary due to the loss of border pixels in
every convolution. Paper
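That loss of border pixels can be checked with a quick arithmetic sketch (plain Python, tracking spatial sizes only, no actual layers):

```python
def unet_sizes(size=572):
    """Track the spatial size along the unpadded U-Net contracting path."""
    skip_sizes = []
    for _ in range(4):            # four down blocks
        size -= 2                 # valid 3x3 conv: -2 pixels
        size -= 2                 # second valid 3x3 conv
        skip_sizes.append(size)   # skip-connection size, taken before pooling
        size //= 2                # 2x2 max-pool halves the size
    size -= 2                     # bottleneck convs
    size -= 2
    return skip_sizes, size

skips, bottleneck = unet_sizes(572)
print(skips)        # [568, 280, 136, 64]
print(bottleneck)   # 28, which upsamples to 56
```

The 64x64 skip map therefore has to be cropped by (64 − 56) / 2 = 4 pixels per side to match the 56x56 upsampled map, which is exactly the mismatch in the error message.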
So, following the arrows (I have added 3 coloured circles to the diagram):
Blue - 28x28 is upsampled to become 56x56, and the 1024 channels are halved to 512
Red - 64x64 is cropped to 56x56, then concatenated along the feature-map axis
Black - 3x3 convolutions, followed by a ReLU | {
"domain": "datascience.stackexchange",
"id": 9251,
"tags": "keras, cnn, image-segmentation"
} |
Is the Universe really expanding at an increasing rate? | Question: Here's what I just read from Wikipedia's page on the Hubble Space Telescope:
While Hubble helped to refine estimates of the age of the universe, it also cast doubt on theories about its future. Astronomers from the High-z Supernova Search Team and the Supernova Cosmology Project used ground-based telescopes and HST to observe distant supernovae and uncovered evidence that, far from decelerating under the influence of gravity, the expansion of the universe may in fact be accelerating. The cause of this acceleration remains poorly understood; the most common cause attributed is dark energy.
I don't take this at face value because we should expect more distant objects to have higher observed speeds and therefore higher observed red-shifts. Here's why.
Let's start with a model where the Universe expanded very fast early on, but has been slowing down ever since due to gravity, as one would normally expect.
Remember that the farther away a cosmic object is, the farther back in the past we are observing it. An object 1,000 light years away, if its light is reaching us now, is being observed in the state it was in 1,000 years ago. We are effectively looking through a time machine.
So if we observe a more distant object, we're observing an older state of that object. Therefore, we are observing it at a time when the Universe was expanding faster than it is now, so it has higher red-shifts.
And isn't that what we observe today? The more distant the galaxy, the higher its red-shift? This is not inconsistent with a "normal" model where the expansion is slowing down due to gravity.
What am I missing? Why are scientists trying to explain such things with weird dark matter and dark energy that otherwise have never been detected or found evidence of and aren't needed for any other model, and in fact get in the way of our models of physics and quantum dynamics?
Answer:
I don't take this at face value because we should expect more distant
objects to have higher observed speeds and therefore higher observed
red-shifts.
That's true. That was the original Hubble discovery - the farther away things were, the faster they were moving away from us.
Here's why. Let's start with a model where the Universe expanded very
fast early on, but has been slowing down ever since due to gravity, as
one would normally expect.
Yes - that's what everybody thought following Hubble's discovery.
Remember that, the farther away a cosmic object is, the farther back
in the past we are observing it. An object 1,000 light years away, if
its light is reaching us now, is being observed in its state that
existed 1,000 years ago. We are effectively looking through a time
machine.
This is not lost on Astrophysicists.
So if we observe a more distant object, we're observing an older state
of that object. Therefore, we are observing it at a time when the
Universe was expanding faster than it is now, so it has higher
red-shifts.
OK, two points; second point first. The red-shift has to do with relative velocity, not with speeding up or slowing down. Something can be more red-shifted and slowing down, and something can be less red-shifted and speeding up, especially since the acceleration/deceleration is small compared to the relative velocity.
And the other point: let's keep in mind that we don't know what a galaxy 3 billion light years away is doing now. We can guess and we can run models, but we can only see what it was doing 3 billion years ago.
And isn't that what we observe today? The more distant the galaxy, the
higher its red-shift? This is not inconsistent with a "normal" model
where the expansion is slowing down due to gravity.
Yes, the more distant the galaxy, the higher its red-shift. But that by itself doesn't distinguish accelerating from decelerating expansion; red-shift just measures relative velocity, so you'd see that pattern either way.
What am I missing? Why are scientists trying to explain such things
with weird dark matter and dark energy that otherwise have never been
detected or found evidence of and aren't needed for any other model,
and in fact get in the way of our models of physics and quantum
dynamics?
A lot of these ideas are confusing. They're confusing to scientists too, especially when they were first discovered - so you're not alone.
Dark matter was inferred because galaxies were behaving strangely. The stars in the outer arms were observed to be moving much too fast, faster than the stars nearer the middle of the galaxy, and that made no sense. The galaxies also weighed too much, and the only way to explain it was extra mass in a kind of halo around each galaxy. But this extra mass doesn't interact with electromagnetic waves the way the mass here on Earth does, so they called it "dark matter" (and there's a lot of it, more than there is regular mass). It's not dark like dirt or coal; it's dark as in invisible, completely transparent to light, yet it has mass. They still don't know what it is; there are some OK theories, but nothing definite.
Now, dark energy - think about the big bang and all matter flying apart - the galaxies twice as far are moving away twice as fast, BUT, as you said, because of gravity, we should see the galaxies that are twice as far moving away more than twice as fast, cause the nearer the galaxy, the more time it's had to slow down - aha, they thought, if we can compare the speed of the galaxies 4 billion light years away to the speed of the galaxies 2 billion light years away to the speed 1 billion light years - etc, etc and measure it all carefully, we can measure the rate at which gravity is slowing down the universe. - that makes sense right.
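The "twice as far, twice as fast" relation is just Hubble's law, v = H0 d. An illustrative calculation (the H0 value is an assumed round number, roughly the modern estimate):

```python
H0 = 70.0  # Hubble constant, km/s per megaparsec (illustrative round value)

def recession_speed(d_mpc):
    """Hubble's law: recession speed in km/s for a distance in megaparsecs."""
    return H0 * d_mpc

print(recession_speed(100))  # 7000.0 km/s
print(recession_speed(200))  # 14000.0 km/s: twice as far, twice as fast
```

The supernova measurements described next look for deviations from exactly this linear relation.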
And with careful measurement of Type 1A supernovas, which temporarily outshine entire galaxies - with remarkable consistency (what they call a standard candle - a very bright standard candle, but a standard candle all the same) - with that, they thought they could measure the gravitational slow down of expansion - exactly what you're talking about.
The problem was, the measurements told them the opposite of what they expected to find. The measurements told them that the galaxies 2 billion light years away were traveling slightly more than half as fast as the galaxies 4 billion light years away, and so on. They checked this, cause it had to be wrong, then they re-checked it, and re-checked again and the only conclusion was, stuff out there is speeding up, not slowing down - cause that's what the telescopes tell us.
Dark energy wasn't a hare-brained scheme that mad scientists dreamt up. It was an observed reality that nobody expected (well, except just maybe for Einstein and his cosmological constant, but that's another story).
Dark energy's just a name anyway. They have to call it something, even if they're not sure what it is or how it works. | {
"domain": "astronomy.stackexchange",
"id": 4918,
"tags": "cosmology, cosmological-inflation"
} |
Converting multiple query to use parameters to avoid SQL injection | Question: I have some dropdownlist in my aspx page and I am using the choices from them in my SQL query:
query = "";
DataTable taskData = new DataTable();
connString = @""; //connection string
strClause = "";
if (!blOnLoad)
{
if (ddlTaskName.SelectedIndex > 0) //dropdownlist
{
strClause += " AND CT.ATTR2739 LIKE '%" + ddlTaskName.SelectedItem.Text + "%'";
//strClause += string.format();
}
else
{
strClause += " AND (CT.ATTR2739 LIKE '%' OR CT.ATTR2739 IS NULL)";
}
if (ddlService.SelectedIndex > 0) //dropdownlist
{
strClause += " AND SE.ATTR2821 LIKE '%" + ddlService.SelectedItem.Text + "%'";
}
else
{
strClause += " AND (SE.ATTR2821 LIKE '%' OR SE.ATTR2821 IS NULL)";
}
if (ddlStatus.SelectedIndex > 0) //dropdownlist
{
strClause += " AND CT.ATTR2812 LIKE '%" + ddlStatus.SelectedItem.Text + "%'";
}
else
{
strClause += " AND (CT.ATTR2812 LIKE '%' OR CT.ATTR2812 IS NULL)";
}
if (ddlDueDate.SelectedIndex > 0) //dropdownlist
{
strClause += " AND CONVERT(VARCHAR(14), CT.ATTR2752, 110) LIKE '%" + ddlDueDate.SelectedItem.Text + "%'";
}
else
{
strClause += " AND (CONVERT(VARCHAR(14), CT.ATTR2752, 110) LIKE '%' OR CONVERT(VARCHAR(14), CT.ATTR2752, 110) IS NULL)";
}
if (ddlOwner.SelectedIndex > 0) //dropdownlist
{
strClause += " AND UA.REALNAME LIKE '%" + ddlOwner.SelectedItem.Text + "%'";
}
else
{
strClause += " AND (UA.REALNAME LIKE '%' OR UA.REALNAME IS NULL)";
}
if (ddlClient.SelectedIndex > 0) //dropdownlist
{
strClause += " AND C.ATTR2815 LIKE '%" + ddlClient.SelectedItem.Text + "%'";
}
else
{
strClause += " AND (C.ATTR2815 LIKE '%' OR C.ATTR2815 IS NULL)";
}
if (ddlSite.SelectedIndex > 0) //dropdownlist
{
strClause += " AND SI.ATTR2819 LIKE '%" + ddlSite.SelectedItem.Text + "%'";
}
else
{
strClause += " AND (SI.ATTR2819 LIKE '%' OR SI.ATTR2819 IS NULL)";
}
if (ddlPractice.SelectedIndex > 0) //dropdownlist
{
strClause += " AND PR.ATTR2817 LIKE '%" + ddlPractice.SelectedItem.Text + "%'";
}
else
{
strClause += " AND (PR.ATTR2817 LIKE '%' OR PR.ATTR2817 IS NULL)";
}
if (ddlProvider.SelectedIndex > 0) //dropdownlist
{
strClause += " AND P.ATTR2919 LIKE '%" + ddlProvider.SelectedItem.Text + "%'";
}
else
{
strClause += " AND (P.ATTR2919 LIKE '%' OR P.ATTR2919 IS NULL)";
}
if (ddlTaskName.SelectedIndex == 0 && ddlService.SelectedIndex == 0 && ddlStatus.SelectedIndex == 0 && ddlDueDate.SelectedIndex == 0 && ddlOwner.SelectedIndex == 0 && ddlClient.SelectedIndex == 0 && ddlSite.SelectedIndex == 0 && ddlPractice.SelectedIndex == 0 && ddlProvider.SelectedIndex == 0)
{
query = strMainQuery + " WHERE CT.ACTIVESTATUS = 0";
}
else
{
query = strMainQuery + " WHERE CT.ACTIVESTATUS = 0" + strClause;
}
}
else
{
query = strMainQuery + " WHERE CT.ACTIVESTATUS = 0";
}
using (SqlConnection conn = new SqlConnection(connString))
{
try
{
SqlCommand cmd = new SqlCommand(query, conn);
SqlDataAdapter da = new SqlDataAdapter(query, conn);
myDataSet = new DataSet();
da.Fill(myDataSet);
myDataView = new DataView();
myDataView = myDataSet.Tables[0].DefaultView;
yourTasksGV.DataSource = myDataView;
yourTasksGV.DataBind();
}
catch (Exception)
{
// a try block needs at least one catch (or finally) clause to compile
throw;
}
}
I am using the dropdownlist text directly inside my SQL query, which I am sure is prone to SQL injection, and that is what I am trying to prevent. Looking around, I found that I can use string.Format() along with .Parameters.AddWithValue() to ensure there is no SQL injection.
I am not sure how to actually take my code above and change it entirety to use Parameters.
How can I achieve the use of parameters instead of taking the dropdownlist text?
Answer: The recommended way to avoid SQL injection attacks is to use parameters.
Also I can recommend that you create a stored procedure instead of using dynamic SQL.
You can pass your dropdown indexes as parameters to your SP and use the old trick of using it as conditional on the where clause.
Your select can end up like this:
select CT.columnA, CT.columnB
from tableA CT
join tableB SE on SE.idA = CT.id
where
(@ddlTaskName_SelectedIndex > 0 and (CT.ATTR2739 LIKE '%' + @ddlTaskName_SelectedItemText + '%'))
-- or (@ddlTaskName_SelectedIndex = 0 and (CT.ATTR2739 LIKE '%' OR CT.ATTR2739 IS NULL)) -- if you think it through, you'll see this line is unnecessary
and
(@ddlService_SelectedIndex > 0 and (SE.ATTR2821 LIKE '%' + @ddlService_SelectedItemText + '%'))
-- This "else" line is unnecessary too, since we don't really filter it because everything is "like %" or "null"
And so on ...
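On the C# side, those @-parameters would be attached to the command rather than concatenated into the SQL string. A hedged sketch (the names mirror the question's controls; the SQL text itself keeps the '%' wildcards, so only the raw text is passed):

```csharp
// Sketch only: pass dropdown state as parameters instead of string concatenation.
using (SqlConnection conn = new SqlConnection(connString))
using (SqlCommand cmd = new SqlCommand(query, conn))   // query contains the @parameters
{
    cmd.Parameters.AddWithValue("@ddlTaskName_SelectedIndex", ddlTaskName.SelectedIndex);
    cmd.Parameters.AddWithValue("@ddlTaskName_SelectedItemText", ddlTaskName.SelectedItem.Text);
    cmd.Parameters.AddWithValue("@ddlService_SelectedIndex", ddlService.SelectedIndex);
    cmd.Parameters.AddWithValue("@ddlService_SelectedItemText", ddlService.SelectedItem.Text);
    // ...and so on for the remaining dropdowns...

    using (SqlDataAdapter da = new SqlDataAdapter(cmd))
    {
        DataSet ds = new DataSet();
        da.Fill(ds);
        yourTasksGV.DataSource = ds.Tables[0].DefaultView;
        yourTasksGV.DataBind();
    }
}
```

With parameters, the values are sent to SQL Server as data, never parsed as part of the statement text.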
Those dropdown texts can still be tampered with, since a smart hacker may be able to change their posted values depending on your system, even though parameters stop them from being parsed as SQL.
You can fix this by setting the variables inside the SP: pass only the dropdown indexes, not the text, and use a switch/case to set those varchar variables before the select.
That can add a maintenance issue, since your dropdown texts must match the texts hardcoded in your SP. A better approach is to create a domain table for each dropdown:
Create table DropDownClientValues
(
index int
,text varchar(50)
)
And list your dropdown values directly from those domain tables; that way all your selects/SPs can refer to the exact same values.
This has the advantage of making adding/removing dropdown options very easy, with no impact on your code. Sadly, it can be a bit onerous to refactor a big app this way. | {
"domain": "codereview.stackexchange",
"id": 9393,
"tags": "c#, sql, asp.net, sql-injection"
} |
Is gravity included in "static pressure"? | Question: In the context of Bernoulli, does "static pressure" typically refer to $p$, or to $p + \rho gz$?
Answer: $p_0$ is the static pressure
$\rho g z$ is the hydrostatic pressure
$\frac{1}{2} \rho v^2$ is the dynamic pressure
Hence
$p(z) = p_0 + \rho g z + \frac{1}{2} \rho v^2$ | {
"domain": "engineering.stackexchange",
"id": 2389,
"tags": "fluid-mechanics"
} |
Analytic solution for non-flat filter design | Question: I'm trying to write an accelerometer calibration script that uses filters to convert from volts into $m/s^2$. As accelerometers tend to have non-flat response curves, this means I have to design a rather complex filter. I'm not worried about phase, as I can just apply the filter twice in opposing directions to correct for any phase offsets (like matlab's filtfilt), so the focus is on designing a filter that approximates a user-provided magnitude curve.
Ideally, the user provides a calibration curve as input into an analytic algorithm to solve for the best fitting filter poles.
I'm aware MATLAB has a filter design function, but I don't know what the underlying algorithm is (whether it's an optimizer or a closed-form solution).
So my question is...
Is there an analytic solution to my filter design problem? Or do I have to use optimisation scripts to get the best filter?
I'm not mentioning programming language here, as I want to understand the underlying math behind this.
Answer: If you are not concerned about the phase and just want to approximate a magnitude response, then your first option should be the frequency-sampling method implemented in the Matlab/Octave fir2() function.
You would provide the frequency grid and corresponding frequency response magnitude at those frequencies.
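For intuition, the frequency-sampling idea can be sketched in a few lines of plain Python. This is a stdlib-only illustration of the principle behind fir2 (not its actual implementation): sample the desired magnitude on a uniform grid, attach linear phase, and take the inverse DFT.

```python
import cmath
import math

def fir_freq_sample(mags):
    """Frequency-sampling FIR design (odd length, linear phase).

    mags: desired magnitude at N equally spaced frequencies on [0, 2*pi),
    with mags[k] == mags[N-k] so the impulse response comes out real.
    """
    N = len(mags)
    # Linear phase corresponding to a delay of (N-1)/2 samples.
    H = [cmath.rect(mags[k], -math.pi * k * (N - 1) / N) for k in range(N)]
    # Inverse DFT gives the impulse response.
    h = []
    for n in range(N):
        acc = sum(H[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N))
        h.append((acc / N).real)
    return h

# Sanity check: an all-pass magnitude yields a pure delay of (N-1)/2 samples.
h = fir_freq_sample([1.0] * 9)
print(round(h[4], 6))  # 1.0
```

A real design would also apply a window to reduce ripple between the sampled frequencies, which is part of what fir2 does for you.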
As you have also mentioned, a least-squares approach is another alternative. Indeed, by using suitable weights you can distribute the error according to your priority criteria.
Magnitude approximation based on LMS adaptive system identification is also a possible option. | {
"domain": "dsp.stackexchange",
"id": 8527,
"tags": "filter-design"
} |
Flashlights and Capacitors | Question: Why are capacitors used in camera flashes instead of connecting the flash lamp directly to the battery? How does this lead to more light?
Answer: There are 3 main reasons for using a capacitor.
First it stores the energy, so it can deliver a pulse of energy that is far larger than the battery can. Remember it may take several seconds of battery energy to fully charge the flash capacitor. Then the capacitor releases all that in less than a millisecond ($10^{-3}s$) or even just a few microseconds, so the flash bulb gets a massive jolt of energy.
Secondly, the flash capacitor stores the energy at a much higher voltage: we're talking about up to 1000V (typically around 300V), instead of the 6V from 4 AA cells.
Third, the capacitor is designed so it can deliver extremely high currents, again higher than the battery can deliver by itself.
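To put rough numbers on the energy and power (illustrative component values, not from the question):

```python
C = 1000e-6   # farads: an illustrative photoflash capacitor
V = 300.0     # volts: a typical charging voltage
t = 1e-3      # seconds: rough discharge time scale

energy = 0.5 * C * V**2    # stored energy, E = 1/2 C V^2
avg_power = energy / t     # average power during the flash

print(energy)     # about 45 J
print(avg_power)  # about 45 kW, far beyond what 4 AA cells can deliver directly
```

The battery only has to supply those 45 J slowly over a few seconds of charging; the capacitor then releases them in a millisecond.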
Finally, a warning: the charge can stay on that capacitor for a very long time. Never touch a charged flash capacitor. The energy stored in them can be lethal! | {
"domain": "physics.stackexchange",
"id": 7435,
"tags": "capacitance, camera"
} |
How to compute a limit about exponential function? | Question: In this paper, the operator $:Y_{i,x}:$ is defined as
$$
:Y_{i,x}~: = :~\exp\left( \sum_{p \in \mathbb{Z}} y_{i,-p} x^{p} \right):
$$
up to some constant coefficient in (3.43) on page 13. How do you obtain the second term on the second line of (5.18) on page 20 from (5.17) (I can only obtain the first term on the second line of (5.18))?
Edit: The equations are in the following.
Answer: We have
\begin{align}
Y_{i,ux} & \sim Y_{i,x} + :\partial_u Y_{i,ux}|_{u=1} (u-1): \\
& = Y_{i,x} + :Y_{i,ux} (\sum_{p \in \mathbb{Z}-\{0\}} y_{i,-p} p u^{p-1} x^p ) (u-1): \\
& = Y_{i,x} + :Y_{i,ux} (\sum_{p \in \mathbb{Z}-\{0\}} y_{i,-p} p u^{p-1} x^p )|_{u=1} (u-1): \\
& = Y_{i,x} + :Y_{i, x} (\sum_{p \in \mathbb{Z}-\{0\}} y_{i,-p} p x^p ) (u-1):\\
& = Y_{i,x} + :Y_{i, x} x \partial_x \log(Y_{i,x}) (u-1):.
\end{align}
Therefore
\begin{align}
& \lim_{u \to 1} S(u): \frac{Y_{i,ux}}{Y_{i,xq^{-1}}}( \prod_{e: i \to j} Y_{j, \mu_e^{-1} x} \prod_{e: j \to i} Y_{j, \mu_e q^{-1} x} ) : \\
& = \lim_{u \to 1} \frac{(1-q_1 u)(1- q_2u)}{(1-q u)(1-u)} ( : \frac{Y_{i, x}}{Y_{i,xq^{-1}}}( \prod_{e: i \to j} Y_{j, \mu_e^{-1} x} \prod_{e: j \to i} Y_{j, \mu_e q^{-1} x} ) : \\
& + : \frac{Y_{i, x}}{Y_{i,xq^{-1}}}( \prod_{e: i \to j} Y_{j, \mu_e^{-1} x} \prod_{e: j \to i} Y_{j, \mu_e q^{-1} x} ) x \partial_x \log(Y_{i,x}) (u-1) : ) \\
& = \lim_{u \to 1} \frac{(1-q_1 u)(1- q_2u)}{(1-q u)(1-u)} : \frac{Y_{i, x}}{Y_{i,xq^{-1}}}( \prod_{e: i \to j} Y_{j, \mu_e^{-1} x} \prod_{e: j \to i} Y_{j, \mu_e q^{-1} x} ) : \\
& \quad - \frac{(1-q_1 )(1- q_2 )}{(1-q ) } : \frac{Y_{i, x}}{Y_{i,xq^{-1}}}( \prod_{e: i \to j} Y_{j, \mu_e^{-1} x} \prod_{e: j \to i} Y_{j, \mu_e q^{-1} x} ) x \partial_x \log(Y_{i,x}):
\end{align}
\begin{align}
& \lim_{u \to 1} S(u^{-1}): \frac{Y_{i, x}}{Y_{i,uxq^{-1}}}( \prod_{e: i \to j} Y_{j, u\mu_e^{-1} x} \prod_{e: j \to i} Y_{j, u\mu_e q^{-1} x} ) : \\
& = \lim_{u \to 1} \frac{(1-q_1 u^{-1})(1- q_2u^{-1})}{(1-q u^{-1})(1-u^{-1})}: \frac{Y_{i, x}}{Y_{i,xq^{-1}} + :Y_{i, xq^{-1}} x \partial_x \log(Y_{i,xq^{-1}}) (u-1):} \\
& ( \prod_{e: i \to j} (Y_{j, \mu_e^{-1} x} + :Y_{j, x\mu_e q^{-1} } x \partial_x \log(Y_{j,x\mu_e q^{-1}}) (u-1):) \prod_{e: j \to i} (Y_{j, \mu_e q^{-1} x} + :Y_{j, x\mu_e q^{-1} } x \partial_x \log(Y_{j,x\mu_e q^{-1}}) (u-1):) : \\
& = \lim_{u \to 1} \frac{(1-q_1 u^{-1})(1- q_2u^{-1})}{(1-q u^{-1})(1-u^{-1})}: \frac{Y_{i, x}}{Y_{i,xq^{-1}}} (1 - x \partial_x \log(Y_{i,xq^{-1}}) (u-1)) \\
& \prod_{e: i \to j} Y_{j, \mu_e^{-1} x} (1 + x \partial_x \log(Y_{j,x\mu_e q^{-1}}) (u-1) ) \prod_{e: j \to i} Y_{j, \mu_e q^{-1} x}(1 + x \partial_x \log(Y_{j,x\mu_e q^{-1}}) (u-1) ) : \\
& \sim \lim_{u \to 1} \frac{(1-q_1 u^{-1})(1- q_2u^{-1})}{(1-q u^{-1})(1-u^{-1})}: \frac{Y_{i, x}}{Y_{i,xq^{-1}}} (1 - x \partial_x \log(Y_{i,xq^{-1}}) (u-1)) \\
& (\prod_{e: i \to j} Y_{j, \mu_e^{-1} x}) (1 + x \partial_x \sum_{e: i \to j} \log(Y_{j,x\mu_e q^{-1}}) (u-1) ) (\prod_{e: j \to i} Y_{j, \mu_e q^{-1} x}) (1 + \sum_{e: j \to i} x \partial_x \log(Y_{j,x\mu_e q^{-1}}) (u-1) ) : \\
& = \lim_{u \to 1} \frac{(1-q_1 u^{-1})(1- q_2u^{-1})}{(1-q u^{-1})(1-u^{-1})}: \frac{Y_{i, x}}{Y_{i,xq^{-1}}} (1 - x \partial_x \log(Y_{i,xq^{-1}}) (u-1)) \\
& (\prod_{e: i \to j} Y_{j, \mu_e^{-1} x}) (1 + x \partial_x \log \prod_{e: i \to j} (Y_{j,x\mu_e q^{-1}}) (u-1) ) (\prod_{e: j \to i} Y_{j, \mu_e q^{-1} x}) (1 + x \partial_x \log \prod_{e: j \to i} (Y_{j,x\mu_e q^{-1}}) (u-1) ) : \\
& \sim \lim_{u \to 1} \frac{(1-q_1 u^{-1})(1- q_2u^{-1})}{(1-q u^{-1})(1-u^{-1})}: \frac{Y_{i, x}}{Y_{i,xq^{-1}}} \prod_{e: i \to j} Y_{j, \mu_e^{-1} x} \prod_{e: j \to i} Y_{j, \mu_e q^{-1} x} \\
& - \frac{(1-q_1 )(1- q_2 )}{(1-q ) } x \partial_x ( \log(Y_{i,xq^{-1}}) - \log \prod_{e: i \to j} Y_{j, \mu_e^{-1} x} - \log \prod_{e: j \to i} Y_{j, \mu_e q^{-1} x} ) \\
& \sim \lim_{u \to 1} \frac{(1-q_1 u^{-1})(1- q_2u^{-1})}{(1-q u^{-1})(1-u^{-1})}: \frac{Y_{i, x}}{Y_{i,xq^{-1}}} \prod_{e: i \to j} Y_{j, \mu_e^{-1} x} \prod_{e: j \to i} Y_{j, \mu_e q^{-1} x} \\
& - \frac{(1-q_1 )(1- q_2 )}{(1-q ) } x \partial_x \log \frac{Y_{i,xq^{-1}} }{\prod_{e: i \to j} Y_{j, \mu_e^{-1} x} Y_{j, \mu_e q^{-1} x} }.
\end{align}
Therefore we obtain (5.18) in the paper. | {
"domain": "physics.stackexchange",
"id": 35544,
"tags": "mathematical-physics, gauge-theory"
} |
Noise cancelling speakers to cancel sound in the room | Question: A thought of mine from class, where not everyone is the "silent kid" type. The teacher would very often need to shout to keep the class barely quiet enough for the students who actually are paying attention. This drives me insane, and I often wish there was a way I could silence the entire classroom.
My idea comes from a feature headphones have, noise cancelling. The way I understand it, a microphone takes in the ambient sound and creates a similar sound at a different phase such that the two sound waves cancel each other out inside the listener's ear. My question is: can I route that other wave back out through a speaker such that the entire class can't communicate the "usual" way? An old question suggests that such a device would break conservation of energy, but I could be understanding it incorrectly.
Answer: Well, Active Noise Control has been around for quite some time and there's a lot of applications developed based on those principles.
You cannot effectively cancel a source if the main source and the cancelling source do not coincide in space. This has practical limitations, but it also depends on the wavelengths of interest: when the wavelength is large, you can achieve destructive interference over larger areas. If the distance between the two sources is well under half a wavelength, and you assume omnidirectional radiation, the two sources will always maintain their phase relationship, which means you could cancel the main source quite effectively. Of course, this works over a limited frequency range, and as already mentioned it depends on the wavelength and the distance between the two sources.
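The half-wavelength condition is easy to put numbers on (the speed of sound is assumed to be about 343 m/s at room temperature):

```python
c = 343.0  # speed of sound in air, m/s (room temperature)

def max_source_spacing(freq_hz):
    """Largest source separation that stays under half a wavelength."""
    wavelength = c / freq_hz
    return wavelength / 2

print(max_source_spacing(100))   # about 1.7 m: feasible for low-frequency rumble
print(max_source_spacing(1000))  # about 0.17 m: hopeless for speech-band noise
```

Speech energy sits largely in the hundreds-of-Hz to few-kHz range, which is why a single cancelling speaker across a classroom cannot work.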
Now, considering that you have in mind using only one speaker, which will be very far from some of the sources, the frequency range over which you could achieve attenuation would be quite small. In addition, this would produce an increase in pressure at other positions. This is what makes active noise control in three-dimensional spaces (such as rooms) quite impractical. If you could introduce a loudspeaker at each desk, then you could possibly achieve attenuation over quite a large area for some low frequencies, though this would end up being a quite complex system.
In addition to that, you would have to consider the fact that lowering low-frequency noise could potentially increase speech intelligibility, increasing the annoyance of other people talking!
I apologize for not providing in-text references. If you are interested in learning more about Active Noise Control you can start with the Active Control of Noise and Vibration books by Hansen; regarding annoyance from increased speech intelligibility, you can find more information in the Effects of Noise Reduction on Speech Intelligibility, Perceived Listening Effort, and Personal Preference in Hearing-Impaired Listeners article by Brons et al. and the Effects of Interior Aircraft Noise on Speech Intelligibility and Annoyance paper by Pearsons and Bennett. | {
"domain": "physics.stackexchange",
"id": 64357,
"tags": "energy, acoustics, electronics, electrical-engineering, noise"
} |
Cache wrapper - Generics vs Dynamic | Question: I've implemented a common wrapper pattern I've seen for the .NET cache class using generics as follows:
private static T CacheGet<T>(Func<T> refreashFunction, [CallerMemberName]string keyName = null)
{
if (HttpRuntime.Cache[keyName] == null)
HttpRuntime.Cache.Insert(keyName, refreashFunction(), null, DateTime.UtcNow.AddSeconds(600), System.Web.Caching.Cache.NoSlidingExpiration);
return (T)HttpRuntime.Cache[keyName];
}
It could then be called like so:
public static Dictionary<string, string> SomeCacheableProperty
{
get
{
return CacheGet(() =>
{
Dictionary<string, string> returnVal = AlotOfWork();
return returnVal;
});
}
}
However, the CacheGet method could be implemented using dynamic:
private static dynamic CacheGet(Func<object> refreashFunction, [CallerMemberName]string keyName = null)
{
if (HttpRuntime.Cache[keyName] == null)
HttpRuntime.Cache.Insert(keyName, refreashFunction(), null, DateTime.UtcNow.AddSeconds(600), System.Web.Caching.Cache.NoSlidingExpiration);
return HttpRuntime.Cache[keyName];
}
The questions I have:
Is there a technically (or philosophically) superior preference between these two implementations?
Are these different at runtime?
If they are both left in, which one is being called in the getter method?
Answer: First of all, I think you misspelled refresh as refreash.
Second, your usage can probably be simplified
CacheGet(AlotOfWork);
Finally, you might want to check if refreshFunction returns null and maybe log a warning then as that function returning null would cause a cache miss every time.
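Putting those points together, one possible revision is sketched below (kept to the question's API; the null guard uses ReferenceEquals because == cannot be applied to an unconstrained T):

```csharp
private static T CacheGet<T>(Func<T> refreshFunction, [CallerMemberName] string keyName = null)
{
    object cached = HttpRuntime.Cache[keyName];
    if (cached == null)
    {
        T value = refreshFunction();
        if (ReferenceEquals(value, null))
        {
            // Null is never stored, so every call would be a cache miss: worth logging.
            return default(T);
        }
        HttpRuntime.Cache.Insert(keyName, value, null,
            DateTime.UtcNow.AddSeconds(600),
            System.Web.Caching.Cache.NoSlidingExpiration);
        return value;
    }
    return (T)cached;
}
```

This also avoids the original's double indexer lookup, and the refresh result is inserted and returned in one pass.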
Now to answer your specific questions.
I'll say it, the generic implementation is better. dynamic is a great trapdoor when you get really bogged down with generics or anonymous types and there's neat things you can do with it (see Dapper) but it still has some gotchas. For example, I do not think your function with dynamic will work in most cases.
HttpRuntime.Cache stores and returns object, meaning value types are boxed and reference types upcast on the way in, and the result must be cast back on the way out. Therefore, if your function returns a User object, what is stored is typed as object, and your user.Username property will not be available until you cast, even though it's dynamic.
Yes. The generic version - with some subtle yet real differences - will run as if it was written for the type you're filling <T> with. The dynamic version will just be a "value" and let the DLR figure out how to invoke members (which again, unless you're calling ToString() or GetHashCode(), will fail). dynamic will also be slower as the runtime binding has to be done every time, though admittedly this is unlikely to be any sort of bottleneck.
Obviously I'm going to say always use the generic version in this case. | {
"domain": "codereview.stackexchange",
"id": 5987,
"tags": "c#, generics, cache"
} |
Particle in a Box: Energy Less than the Potential Energy | Question: I am reading quantum mechanics from Shankar's Principles of Quantum Mechanics. On page 157 he defines the box potential $V(x)$ as
$$
V(x) = \left\{ \begin{array}{rl}
0 &\mbox{ if $|x|< L/2$} \\
\infty &\mbox{ otherwise .}
\end{array} \right.
$$
He starts with region $III$ where $V=\infty$. For this region, he considers $V=V_0>E$ at first and writes the Schrodinger equation as
$$ \frac{d^2\psi_{III}}{dx^2} + \frac{2m}{\hbar^2} (E-V_0) \psi_{III} = 0.$$
Solving it for $\psi_{III}$, he later shows that $\psi_{III} = 0$ when $V=V_0 \to \infty$.
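For a finite $V_0 > E$, that region-III solution decays like $e^{-\kappa x}$ with $\kappa = \sqrt{2m(V_0 - E)}/\hbar$. A quick numerical sketch, with an electron and $V_0 - E = 1$ eV chosen purely for illustration:

```python
import math

hbar = 1.0545718e-34   # reduced Planck constant, J s
m_e = 9.1093837e-31    # electron mass, kg
eV = 1.602176634e-19   # one electron-volt in joules

delta = 1.0 * eV                           # V0 - E, chosen for illustration
kappa = math.sqrt(2 * m_e * delta) / hbar  # decay constant, 1/m

print(kappa)      # about 5.1e9 per metre
print(1 / kappa)  # penetration depth of roughly 0.2 nm
```

As $V_0 \to \infty$, $\kappa \to \infty$ and the penetration depth shrinks to zero, which is why $\psi_{III} = 0$ in the limit.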
My confusion is with the assumption: $V=V_0>E$.
Question: How can the potential energy $V(x)$ of a particle in region $III$ be greater than its total energy $E$?
We all know that $E=k_E + V(x)$; $E$ = total energy, $k_E$ = kinetic energy, $V(x)$ = potential energy. If the potential energy is $V_0$, shouldn't the total energy $E$ of the particle be greater than or equal to $V_0$; i.e., $E\geqslant V_0$?
Please explain.
Answer:
We all know that $E=k_E + V(x)$; $E$ = total energy, $k_E$ = kinetic energy, $V(x)$ = potential
One reason why you may be confused by this is that the equation
$$
E = \frac{1}{2}m\dot{x}^2 + V(x)
$$
comes from classical mechanics. When we are solving the Schroedinger equation
$$
H\Phi = E\Phi
$$
with Hamiltonian operator $H$, the meaning of the symbols $E$,$H$ is not necessarily the same as in classical mechanics above, even if we call $E$ energy and $H$ Hamiltonian.
Of course, there are similarities in form between the equations of classical mechanics and Schroedinger's equations, but the meaning of the symbols (their use) is different.
We interpret the symbols used in solving the Schroedinger equation with help of the Born interpretation of the function $\Phi$; it gives probability density in configuration space. With this probability density, we can then calculate expected average values of physical quantities, but not their instantaneous values; this is in contrast to classical mechanics, where we deal directly with values, not probabilities.
When we find eigenvalues $E$'s and the corresponding $\Phi$'s, we may use them to make some probabilistic statements about physical quantities of the system considered.
For example, we may calculate average electric moment or average energy:
$$
\langle \mu_x \rangle = \int \Phi^* qx \Phi\,dx,
$$
$$
\langle Energy \rangle = \int \Phi^* H \Phi\,dx,
$$
When we put in $\Phi$ corresponding to eigenvalue $E$, we obtain average energy
$$
\langle Energy \rangle = E.
$$
This shows us a possible interpretation of the eigenvalue $E$: the expected average energy of the system when it is described by $\Phi$. We may say the average energy is $E$, but there is no need to think that the instantaneous value of energy is $E$ or that the value of energy attached to every point $x$ of the configuration space is $E$. Then, there is no problem with the relation
$$
Energy = \frac{1}{2}m\dot{x}^2 + V(x),
$$
because the eigenvalue $E$ only gives average energy, not energy corresponding to some point $x$ of configuration space. | {
"domain": "physics.stackexchange",
"id": 16774,
"tags": "quantum-mechanics, energy, wavefunction, schroedinger-equation, potential-energy"
} |
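As a footnote to the particle-in-a-box answer above, a numerical sketch (not from the post; units with $\hbar = m = 1$ and box width $L = 1$ are assumed) checking that $\int \Phi^* H \Phi\,dx$ reproduces the analytic eigenvalue $E_n = n^2\pi^2/(2L^2)$:

```python
import numpy as np

# Particle in a box on [0, L] in units hbar = m = 1 (assumed for illustration).
L, n, N = 1.0, 2, 4000
x = np.linspace(0.0, L, N)
dx = x[1] - x[0]

psi = np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)   # normalized eigenstate
E_n = (n * np.pi / L) ** 2 / 2.0                     # analytic eigenvalue

# Apply H = -(1/2) d^2/dx^2 with a central second difference (interior points;
# psi vanishes at the walls, so the endpoint terms contribute nothing).
H_psi = np.zeros_like(psi)
H_psi[1:-1] = -0.5 * (psi[2:] - 2.0 * psi[1:-1] + psi[:-2]) / dx ** 2

E_avg = float(np.sum(psi * H_psi) * dx)              # <Energy> = integral of psi* H psi
print(E_avg, E_n)
```

The agreement is limited only by the finite-difference discretization error, illustrating the point above: the eigenvalue $E$ is the expected average energy for the state $\Phi$.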
Why does the velocity of the electron increase with increasing atomic number in the Bohr model of hydrogen and hydrogen-like atoms? | Question: I already know the mathematical proof which states that the velocity of the electron increases with increasing atomic number, but what is the intuition behind it?
Answer: The same reason the Earth would move faster should the mass of the sun increase: centripetal force grows. In the case of Bohr's model, the force grows as $\sim Z$,
$$
F_e = \frac{k Ze^2}{r^2}
$$
Newton's law thus results in
$$
\frac{mv^2}{r} = F_e = \frac{k Ze^2}{r^2}
$$
leading to $v \sim (Z/r)^{1/2}$; at fixed $r$ the speed therefore grows as $Z^{1/2}$, while including Bohr's quantization condition $mvr = n\hbar$ (which makes $r \propto 1/Z$) gives the familiar $v \propto Z$ | {
"domain": "physics.stackexchange",
"id": 67014,
"tags": "atomic-physics"
} |
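As a footnote to the answer above (a sketch, not from the post): adding Bohr's quantization condition $mvr = n\hbar$ to this force balance also fixes the radius, giving $r = n^2/Z$ and $v = Z/n$ in atomic units:

```python
# Bohr model in atomic units (hbar = m_e = e = k = 1, assumed for simplicity),
# combining the force balance m v^2 / r = k Z e^2 / r^2 with m v r = n hbar.
def bohr_velocity(Z, n=1):
    return Z / n              # atomic units; 1 a.u. of speed is roughly c/137

def bohr_radius(Z, n=1):
    return n ** 2 / Z

for Z in (1, 2, 3):           # H, He+, Li2+
    v, r = bohr_velocity(Z), bohr_radius(Z)
    # the force balance v^2 / r = Z / r^2 holds for every Z
    assert abs(v ** 2 / r - Z / r ** 2) < 1e-12
    print(Z, v, r)
```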
How to get the name of a entity a ray or a ray sensor collides with? | Question:
In my sensor plugin, I have a ray sensor which casts a single ray. From this ray, I would like to get the name of the entity with which the ray intersects.
I have a working workaround by creating a test_ray (physics::RayShapePtr) and calling test_ray->GetIntersection() every time my ray measures a finite distance. However, it doesn't feel like the right way to do it, and furthermore, I get occasional segfaults which are caused somehow by underlying collision checks of the test_ray->GetIntersection() call.
Is there a better way to get the entity name?
In my plugin I have access to the parent ray sensor which is of type sensors::RaySensorPtr. From the parent sensor I can access the LaserShape (physics::MultiRayShapePtr), but apparently not the single ray (tried getChild(0) already).
Cheers,
Originally posted by Hans-Joachim Krauch on Gazebo Answers with karma: 5 on 2016-04-01
Post score: 0
Answer:
I'm currently looking at Gazebo default, which may not match with your version of gazebo.
I think you can use the following API calls:
physics::MultiRayShapePtr RaySensor::LaserShape() const
RayShapePtr MultiRayShape::Ray(const unsigned int _rayIndex) const
void RayShape::GetIntersection(double &_dist, std::string &_entity)
An implementation might look like:
RaySensorPtr mySensor;
double dist;
std::string entity;
// Should check these pointers for NULL
mySensor->LaserShape()->Ray(0)->GetIntersection(dist, entity);
std::cout << "My ray intersected entity[" << entity << "] at a distance of[" << dist << "]\n";
Originally posted by nkoenig with karma: 7676 on 2016-04-01
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Hans-Joachim Krauch on 2016-04-04:
Ah ok, I see you have added this functionality in Gazebo 7. I'm bound to 6.5 however (forgot to mention this), but it helps me anyway, thanks a lot! | {
"domain": "robotics.stackexchange",
"id": 3899,
"tags": "gazebo"
} |
image type after an ifft reconstruction | Question: I reconstruct images from MRI k-space using ifft and root-sum-of-squares method.
imRef = ifftshift(ifftshift(ifft(ifft( ifftshift(ifftshift( kspacedata,1),2),[],1),[],2),1),2);
imSOS = squeeze(sqrt(sum( abs(angle(imRef)).^2, 3)));
imagesc(abs(imSOS))
k = abs(imSOS);
disp(class(k)); %double
Class of image matrix k is shown as double.
Is the image matrix k an indexed image or intensity image?
The sample values from reconstructed image k is shown below.
4.2753 4.9807 4.5435 5.6548 6.1303 5.0229
3.3805 4.6260 5.1594 5.2692 4.1187 4.4885
5.8990 5.5275 4.3493 5.6182 6.7237 4.3071
6.4885 4.6861 4.4086 3.5034 5.2378 4.7466
6.1099 4.6995 4.1673 4.7408 3.8915 5.7531
5.4006 3.1289 5.6541 5.8782 4.6568 3.8166
By referring to http://in.mathworks.com/help/matlab/creating_plots/image-types.html,
I learned that for indexed images, the data matrix is represented by integers which are indices to color map. And if it were intensity images, I feel the data range is somewhere between 0 and 1. If k represents an indexed image, how can I know the colormap associated with it?
Thanks.
Answer: FFT and IFFT are linear operators, and as such, the results only make a lot of sense in a linear intensity space, not if indexed into a non-linearly mapped space. | {
"domain": "dsp.stackexchange",
"id": 3667,
"tags": "matlab, image-processing, ifft, reconstruction"
} |
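As a footnote to the entry above, a minimal NumPy sketch (made-up coil sensitivities and sizes, not from the post) of an ifft reconstruction with the usual root-sum-of-squares combine. Note that it takes the magnitude abs(im) of each complex coil image, whereas the snippet in the question takes abs(angle(im)), and that the result is a floating-point intensity image, not an indexed one:

```python
import numpy as np

# Simple square object and four made-up smooth complex coil sensitivities.
phantom = np.zeros((64, 64))
phantom[16:48, 16:48] = 1.0
y, x = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64), indexing="ij")
sens = np.stack([np.exp(-(x - cx) ** 2 - (y - cy) ** 2) * np.exp(1j * cx * y)
                 for cx, cy in [(-0.5, -0.5), (0.5, -0.5), (-0.5, 0.5), (0.5, 0.5)]])

# Forward model: centered k-space of each coil image.
kspace = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(sens * phantom,
                         axes=(-2, -1))), axes=(-2, -1))

# Reconstruction: inverse FFT per coil, then root-sum-of-squares of magnitudes.
coil_ims = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(kspace, axes=(-2, -1))),
                           axes=(-2, -1))
rss = np.sqrt(np.sum(np.abs(coil_ims) ** 2, axis=0))

print(rss.dtype)   # float64: an intensity image with arbitrary scaling
```

Because the values are unscaled doubles rather than colormap indices, the matrix is an intensity image; any colormap applied at display time (as imagesc does) is a viewing choice, not something stored in the data.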
How does the Earth's center produce heat? | Question: In my understanding, the center of the Earth is hot because of the weight of its own matter being crushed in on itself by gravity. We can use water to collect this heat from the Earth and produce electricity with turbines. However, I'd imagine that doing this at an enormous, impossibly large scale would not cool the center of the Earth to the same temperature as the surface, since gravity is still compressing the rock together.
However, since energy cannot be created or destroyed, it seems like this energy is just coming from nowhere. I doubt the Earth's matter is being slowly consumed to generate this energy, or that the sun is somehow causing the heating.
I think that I have misunderstood or overlooked some important step in this process. If so, why (or why not) does the Earth's center heat up, and, if not, does geothermal energy production cool it down irreversibly?
Answer: Heating because of high pressure is mostly an issue in gases, where gravitational adiabatic compression can bring up the temperature a lot (e.g. in stellar cores). It is not really the source of geothermal heat.
Earth's interior is hot because of three main contributions:
"Primordial heat": energy left over from when the planet coalesced. The total binding energy of Earth is huge ($2\cdot 10^{32}$ J) and when the planetesimals that formed Earth collided and merged they had to convert their kinetic energy into heat. This contributes 5-30 TW of energy flow today.
"Differentiation heat": the original mix of Earth was likely relatively even, but heavy elements would tend to sink towards the core while lighter would float up towards the upper mantle. This releases potential energy.
"Radiogenic heat": The Earth contains a certain amount of radioactive elements that decay, heating up the interior. The ones that matter now are the ones that have half-lives comparable with the age of Earth and high enough concentrations; these are $^{40}$K, $^{232}$Th, $^{235}$U and $^{238}$U. The heat flow due to this is 15-41 TW.
Note that we know the total heat flow rather well, about 45 TW, but the relative strengths of the primordial and radiogenic heat are not well constrained.
The energy is slowly being depleted, although at a slow rate: the thermal conductivity and size of Earth make the heat flow out rather slowly. Geothermal energy plants may cool down crustal rocks locally at a faster rate, getting less efficient over time if they take too much heat. But it has no major effect on the whole system, which is far larger. | {
"domain": "physics.stackexchange",
"id": 58517,
"tags": "thermodynamics, newtonian-gravity, thermal-radiation, radioactivity, geophysics"
} |
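As a back-of-envelope check (not from the answer) of the quoted $2\cdot 10^{32}$ J, using the uniform-density estimate $U = 3GM^2/(5R)$:

```python
# Gravitational binding energy of a uniform-density Earth, U = 3 G M^2 / (5 R).
# (Uniform density is an assumption; the real Earth is centrally condensed.)
G = 6.674e-11    # m^3 kg^-1 s^-2
M = 5.97e24      # kg, Earth mass
R = 6.371e6      # m, Earth radius

U = 3.0 * G * M ** 2 / (5.0 * R)
print(f"{U:.2e} J")   # about 2.2e32 J, consistent with the figure quoted above
```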
a generalized job assignment problem | Question:
The following problem is from a past algorithms course exam and I'm using it to test my knowledge.
There are m machines and n jobs. Each machine can do a subset of the jobs. Each machine i has a capacity $C_i$, meaning that it has $C_i$ units of processing time. Each job $j$ has a demand $D_j$, meaning that it requires $D_j$ units of processing time to complete. We'd like to assign all the jobs to the machines, so that each job is assigned to only one machine, and no machine is overloaded (i.e. the total demand assigned to machine $i$ doesn't exceed its capacity $C_i$).
Input: $m$ positive numbers $C_1,\cdots, C_m$, n positive numbers $D_1,\cdots, D_n$, and for each $1\leq i\leq m$ and $1\leq j\leq n,$ a boolean variable $x_{i,j}$ indicating whether machine $i$ can do job $j$.
Output: Does there exist an assignment such that all the jobs are assigned to machines, so that each job is assigned to only one machine and no machine is overloaded?
Question: prove the above problem is NP-complete, or give an algorithm to solve the decision problem in polynomial time.
I think the problem might be NP-complete. The decision problem asks whether an assignment assigns at least k jobs, where k is a parameter to the decision problem. Clearly the problem is in NP; one can verify in polynomial time that an assignment satisfies that no machine is overloaded and each job is assigned to one machine. One can do this by checking the jobs assigned to machine i and verifying that the total sum of the $D_j$'s associated with machine i is at most $C_i$. One can then check at the same time that no job is assigned to two different machines.
But I'm not sure which NP-complete problem to reduce from. For instance, I know the following well-known problems are NP-complete: vertex cover, 3-SAT, hamiltonian cycle, set cover, hamiltonian path, clique, independent set, 3 coloring, subset sum etc.
Maybe Vertex cover would be useful?
Answer: This problem is a generalization of the decision version of the bin packing problem (BPP). While all bins in BPP have the same given capacity, the capacities of the machines in this problem are variable, and there is the further restriction that each job may only be placed on a subset of the machines.
The decision version of BPP is $\mathsf{NP}$-complete. So you have guessed correctly: this problem is $\mathsf{NP}$-complete since this problem is in $\mathsf{NP}$ as proved in the question. | {
"domain": "cs.stackexchange",
"id": 19858,
"tags": "algorithms, graphs, np-complete"
} |
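As a footnote to the answer above: consistent with $\mathsf{NP}$-completeness, only exponential-time exact algorithms are expected in general. A brute-force feasibility checker for tiny instances (a sketch with names of my own choosing, not from the post):

```python
from itertools import product

def feasible(C, D, allowed):
    """Is there an assignment of every job to a permitted machine
    such that no machine's total demand exceeds its capacity?"""
    m = len(C)
    # machines permitted for each job j (the x_{i,j} of the problem statement)
    choices = [[i for i in range(m) if allowed[i][j]] for j in range(len(D))]
    if any(not c for c in choices):
        return False              # some job has no machine that can run it
    for assign in product(*choices):   # exponential in n; fine for tiny cases
        load = [0.0] * m
        for j, i in enumerate(assign):
            load[i] += D[j]
        if all(load[i] <= C[i] for i in range(m)):
            return True
    return False

# Two machines, three jobs; machine 0 cannot run job 2.
allowed = [[True, True, False],
           [True, True, True]]
print(feasible([6, 4], [3, 3, 4], allowed))   # True  (jobs 0,1 -> 0; job 2 -> 1)
print(feasible([5, 4], [3, 3, 4], allowed))   # False (jobs 0,1 overload machine 0)
```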
VBA userform loop to gather numbers from cells on an Excel sheet | Question: I am working on a userform and have everything working the way it should but it's taking a little longer than I would like. It's looping through 42 labels for each For statement and there are six of them.
In an effort to learn how to code a little more efficient, could someone please review my code and show me some faster ways to get the job done?
Private Sub Find()
Application.ScreenUpdating = False
Dim i As Integer, k As Integer
k = 1
i = 1
For i = i To 42
C1 = "C1_" & i
Me.Controls(C1) = Sheet14.Range("I" & k).Value
If Me.Controls(C1) = "0" Then
Me.Controls(C1).ForeColor = &H8000000F
End If
k = k + 1
Next
k = 1
i = 1
For i = i To 42
C2 = "C2_" & i
Me.Controls(C2) = Sheet14.Range("J" & k).Value
If Me.Controls(C2) = "0" Then
Me.Controls(C2).ForeColor = &H8000000F
End If
k = k + 1
Next
'
k = 1
i = 1
For i = i To 42
C3 = "C3_" & i
Me.Controls(C3) = Sheet14.Range("K" & k).Value
If Me.Controls(C3) = "0" Then
Me.Controls(C3).ForeColor = &H8000000F
End If
k = k + 1
Next
k = 1
i = 1
For i = i To 42
C4 = "CL4_" & i
Me.Controls(C4) = Sheet14.Range("L" & k).Value
If Me.Controls(C4) = "0" Then
Me.Controls(C4).ForeColor = &H8000000F
End If
k = k + 1
Next
k = 1
i = 1
For i = i To 42
C5 = "C5_" & i
Me.Controls(C5) = Sheet14.Range("M" & k).Value
If Me.Controls(C5) = "0" Then
Me.Controls(C5).ForeColor = &H8000000F
End If
k = k + 1
Next
k = 1
i = 1
For i = i To 42
C6 = "C6_" & i
Me.Controls(C6) = Sheet14.Range("N" & k).Value
If Me.Controls(C6) = "0" Then
Me.Controls(C6).ForeColor = &H8000000F
End If
k = k + 1
Next
Application.ScreenUpdating = True
End Sub
Answer: i = 1 and For i = i To 42 is not how you initiate a loop.
Here is the Pseudo Code to write a For Next loop:
For i = Low_Bound to Upper_Bound
Next
This video will explain it better :Excel VBA Introduction Part 16 - For Next Loops. I recommend you watch the complete series on Youtube.
Here you set the ForeColor if one condition is met but you never reset it. Running the code twice will not work properly.
If Me.Controls(C6) = "0" Then
Me.Controls(C6).ForeColor = &H8000000F
Else
Me.Controls(C6).ForeColor = -2147483630
End If
&H8000000F is the hexadecimal code for the default ForeColor of a Userform. If you want to hide the label then change its visibility.
Me.Controls(C6).Visible = Not Me.Controls(C6) = "0"
Application.ScreenUpdating has no effect on a Userform. Use it when writing to or formatting a Range.
As a general rule of thumb, if you have a large amount of repeated code, extract it to a helper function.
Refactored Code
Private Sub UpdateLabels()
Dim Index As Long
For Index = 1 To 42
setLabel "C1_", "I", Index
setLabel "C2_", "J", Index
setLabel "C3_", "K", Index
setLabel "CL4_", "L", Index
setLabel "C5_", "M", Index
setLabel "C6_", "N", Index
Next
End Sub
Sub setLabel(Prefix As String, RefColumn As Variant, Index As Long)
With Me.Controls(Prefix & Index)
.Caption = sheet14.Cells(Index, RefColumn).Value
If .Caption = "0" Then
.ForeColor = &H8000000F
Else
.ForeColor = -2147483630
End If
End With
End Sub
Note: My goal was to make the process as simple as possible. I didn't bother loading the data into an Array because reading data from a Worksheet is an inexpensive process and we are only doing 252 lookups. The code ran virtually instantaneously on the test Userform. | {
"domain": "codereview.stackexchange",
"id": 29596,
"tags": "vba, excel"
} |
What is the ultimate "matter reaching diameter" (radius) for a laser beam with information sent out today compared to the observable universe? | Question: Diameter (radius) refers here to the distance the laser beam can travel to reach matter before Dark Energy (and the expansion of the universe) makes this impossible.
Hubble's constant implies an expanding universe in which clusters of galaxies are moving away from each other. After a certain distance travelled, the laser beam will not be able to reach the next galaxy cluster anymore because dark energy will have taken over, and even gravity will not be enough to hold galaxies together.
Is this distance bigger/equal/smaller than the observable universe? Can we already determine which galaxies are the candidates when the laser beam reaches this distance?
The observable Universe is according to Wikipedia:
Diameter: $8.8 \times 10^{26}~\mathrm{m}$ (28.5 Gpc or 93 Gly)
Answer: The distance is much smaller than the currently observable Universe, and it is possible to estimate which galaxies lie within it, except it's a lot of galaxies.
As @Ihle says in their comment above, the distance you are looking for is equal to our cosmic event horizon. This is normally defined as the distance beyond which light emitted now will never reach us, but by symmetry, it should be straightforward to see that it is the same.
The figure above by Tamara Davis (color version of a figure from this paper) shows that this distance is currently $\sim 15$ billion light years, a good deal smaller than the observable Universe. Furthermore, the size of the observable Universe will grow indefinitely in both absolute and comoving coordinates, while our event horizon will approach a finite size of $\sim 18$ gigalightyears in absolute size and zero in comoving size.
As the figure caption says, the distance currently corresponds to a redshift of around 1.8, which is easily observable with current technology. | {
"domain": "physics.stackexchange",
"id": 32669,
"tags": "cosmology, soft-question, astrophysics, universe"
} |
When does the 'standard' angular velocity formula not hold? | Question: I have read that the formula for angular velocity:
$$\dot {\vec r}=\vec \omega \times\vec r \tag{1}$$
does not hold in some situations, but the book does not specify what situation so please could you produce a list of when this formula does not hold.
If this formula does not hold is it also true that:
$$\vec \omega= \frac{\vec r \times \vec v}{|\vec r|^2} \tag{2}$$
does not hold?
Answer: Electron spin is not the result of a rotation of the electron around itself. In this case, of course (2) also doesn't hold.
In fact, one can show that there is a double implication as follows:
1) if $\vec v$ is defined as in (1) one gets
$$ \frac {\vec r \times \vec v}{r^2} = \vec {\omega} - \vec r \frac {(\vec r \cdot \vec {\omega})}{r^2}. \tag{I}$$
So, when $\vec {\omega}$ is perpendicular to $\vec r$, the equality (2) is implied.
2) On the other hand if the equality (2) is true it implies
$$r^2 (\vec {\omega} \times \vec r) = (\vec r \times \vec v) \times \vec r = \vec v \ r^2 - \vec r (\vec v \cdot \vec r). \tag{II}$$
So, if your equality (2) is true, and $\vec v$ is defined as tangential velocity, then it implies (1). Therefore if $\vec v$ is defined as tangential velocity, and (1) isn't true, (2) cannot be true, otherwise it would imply that (1) is true. | {
"domain": "physics.stackexchange",
"id": 21965,
"tags": "homework-and-exercises, rotational-dynamics, angular-velocity"
} |
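As a footnote, a quick numerical check (not from the post) of implication (I), and of the fact that equality (2) holds exactly when $\vec\omega \perp \vec r$:

```python
import numpy as np

rng = np.random.default_rng(1)
r = rng.standard_normal(3)
w = rng.standard_normal(3)          # a generic omega, not perpendicular to r
r2 = np.dot(r, r)

# Identity (I): with v defined by (1), (r x v)/r^2 = w - r (r.w)/r^2.
v = np.cross(w, r)
print(np.allclose(np.cross(r, v) / r2, w - r * np.dot(r, w) / r2))   # True

# Equality (2) then holds once w is perpendicular to r: project w onto
# the plane normal to r and repeat.
w_perp = w - r * np.dot(r, w) / r2
v2 = np.cross(w_perp, r)
print(np.allclose(np.cross(r, v2) / r2, w_perp))                     # True
```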
Kinetic Energy Evaluation Integral Evaluation Program | Question: I'm reading Ostlund's Modern Quantum Chemistry. In Appendix A, the kinetic energy integral is evaluated using the Gaussian Basis functions to be
$$
\left(A\left| -\frac{1}{2}\nabla^2 \right| B\right) =
\alpha\beta/(\alpha + \beta)[3 - 2\alpha\beta/(\alpha + \beta) |\mathbf{R}_A - \mathbf{R}_B|^2][\pi/(\alpha + \beta)]^{3/2} \\
\times \exp [-\alpha\beta/(\alpha + \beta)|\mathbf{R}_A - \mathbf{R}_B|^2]
\tag{A.11}\label{kin-en.int}
$$
So, in the integral evaluation the Gaussian functions themselves are not used, but in the computer program he is evaluating the integral using
T11=T11+T(A1(I),A1(J),0.0D0)*D1(I)*D1(J)
The function T() is calculating the equation above \eqref{kin-en.int}.
I can't understand why he is multiplying by D1 and D2 which are the Gaussian functions themselves.
$$g_\mathrm{1s}(\alpha) = (2\alpha/\pi)^{3/4}\mathrm{e}^{-\alpha\mathbf{r}^2}$$
Answer: The equation for the Kinetic Energy matrix element that you quote is for two unnormalised 1s Gaussians. The d factors contain the normalisation factors and the contraction coefficients - look more carefully at the code; you have got it slightly wrong what the d's represent.
Talking of the code: please don't use that as a model for your own Fortran - that style is getting on for 50 years out of date! | {
"domain": "chemistry.stackexchange",
"id": 13249,
"tags": "computational-chemistry"
} |
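A minimal Python transcription of (A.11) (a sketch mirroring, not reproducing, the book's Fortran; atomic units assumed). Multiplying by the squared normalization constant, which is part of what the d factors carry, recovers the textbook result $\langle T\rangle = 3\alpha/2$ for a single normalized Gaussian:

```python
import math

def T_unnorm(alpha, beta, R2):
    """Kinetic-energy integral (A.11) between two unnormalized 1s
    Gaussians; R2 = |R_A - R_B|^2 (atomic units)."""
    p = alpha * beta / (alpha + beta)
    return (p * (3.0 - 2.0 * p * R2)
            * (math.pi / (alpha + beta)) ** 1.5
            * math.exp(-p * R2))

def norm_1s(alpha):
    # normalization of g_1s(alpha) = (2 alpha / pi)^(3/4) exp(-alpha r^2)
    return (2.0 * alpha / math.pi) ** 0.75

# Single normalized Gaussian: alpha = beta, R_A = R_B.
alpha = 0.7
T = norm_1s(alpha) ** 2 * T_unnorm(alpha, alpha, 0.0)
print(T, 1.5 * alpha)   # both ~1.05, i.e. 3*alpha/2
```

A contracted matrix element is then a double sum $\sum_{ij} d_i d_j\, T(\alpha_i, \alpha_j, R^2)$, which is what the quoted loop accumulates, with the d's holding contraction coefficient times normalization.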
Adding obstacles (lines) to costmap | Question:
I have a vehicle that uses sbpl to travel through an environment. The vehicle uses lidar to populate the costmap during travel, but data may also be received through other means (visual, other vehicles). Is there a way to populate a costmap using other data?
Currently, I may have obstacle information such as x,y,z position as well as shape information (length, width, etc). The only way I can see how to populate the map at the moment is to generate a point cloud with the known obstacle information and then to publish that data.
Another question is whether there is a way to remove that same information from the costmap? For example, what if I travel through the environment near that previous target and find that it is actually not there or has moved?
Originally posted by orion on ROS Answers with karma: 213 on 2014-01-06
Post score: 0
Answer:
If you are using Hydro+, you can create a custom layer to include this information. Layered Costmap Tutorials
Originally posted by David Lu with karma: 10932 on 2014-05-14
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by nanaky on 2014-08-05:
Is this possible in Groovy anyhow?
Comment by David Lu on 2014-08-06:
There's a fork of the navigation stack with layered costmaps for Groovy here: https://github.com/DLu/navigation/tree/groovy_dev
However, the easier approach might be to just make a fake sensor that puts imaginary obstacles where you don't want the robot to go. | {
"domain": "robotics.stackexchange",
"id": 16587,
"tags": "ros, navigation, sbpl-lattice-planner, costmap-2d, move-base"
} |
My python hook for mercurial | Question: In our hg workflow we use default as testing branch and stable as the one that contains stable code. All changes are made in feature-branches that are started from latest stable. This hook checks if there is a "direct" commit to default or stable (they are denied, only merges may change contents of default and stable) or that new feature branch has been started from default (which is denied by conventions too).
Any proposal to make it more "pythonic" (or just better)?
from mercurial import context
def check(ui, repo, hooktype, node=None, source=None, **kwargs):
for rev in xrange(repo[node].rev(), len(repo)):
ctx = context.changectx(repo, rev)
parents = ctx.parents()
if len(parents) == 1:
if ctx.branch() in ['default', 'stable']:
ui.warn('!!! You cannot commit directly to %s at %s !!!' % (ctx.branch(), ctx.hex()))
return True
if parents[0].branch() == 'default':
ui.warn('!!! You cannot start your private branch from default at %s !!!' % (ctx.hex()))
return True
return False
Answer: for rev in xrange(repo[node].rev(), len(repo)):
ctx = context.changectx(repo, rev)
In Python, I generally try to avoid iterating using xrange. I prefer to iterate over what I'm interested in.
def revisions(repo, start, end):
for revision_number in xrange(start, end):
yield context.changectx(repo, revision_number)
for rev in revisions(repo, repo[node].rev(), len(repo)):
...
Although I'm not sure its worthwhile in this case.
The only other issue is your use of abbreviations like repo and rev. They aren't that bad because its pretty clear from the context what they stand for. But I'd write them fully out. | {
"domain": "codereview.stackexchange",
"id": 636,
"tags": "python"
} |
How can i copy non-binary files into my binaries directory with catkin and cmake? | Question:
Hi, I have a benchmarking executable in my project which pulls data from bagfiles. Now I want to copy the folder with my bagfiles into the directory where my binaries end up, because I have hardcoded paths to them in my code. I already tried to copy the folder with my bagfiles by adding this line to my CMakeLists.txt:
file(COPY src/benchmarking/bagfiles DESTINATION ${CMAKE_BINARY_DIR})
Unfortunately the variable CMAKE_BINARY_DIR only contains the absolute path to my source directory. Is there a variable that points to the binaries directory?
Originally posted by Robert Grönsfeld on ROS Answers with karma: 26 on 2016-07-15
Post score: 0
Original comments
Comment by gvdhoorn on 2016-07-15:
Just a suggestion: if you place your bagfiles in a (skeleton) ROS pkg, you could use rospack find $bag_pkg to parameterise the location. Could potentially remove the need for hard-coded paths.
Alternatively, you could make the paths parameters, and provide the absolute paths at runtime.
Answer:
The solution was to replace ${CMAKE_BINARY_DIR} with ${CATKIN_DEVEL_PREFIX}/${CATKIN_PACKAGE_BIN_DESTINATION}
See also this post.
Originally posted by Robert Grönsfeld with karma: 26 on 2016-07-15
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 25251,
"tags": "ros, path, catkin, binary, cmake"
} |
Tidal effect on interferometry | Question: Interferometry relies on the change in the phase of two orthogonal light beams reflected back to the source point. Assume there is an interferometer at the equator, one mirror is planted 1 mile due north and another mirror is planted 1 mile due east. Every 6 hours the axis of the east mirror-interferometer experiences a tidal effect and its length should increase. Michelson and Morley would not have detected this because their mirrors were planted on the same piece of granite which could not expand and contract. But the mirrors in the LIGO interferometer should detect this. Do they?
Answer: This exact question was considered in a 1997 LIGO paper by Raab and Fine entitled "The Effect of Earth Tides on LIGO Interferometers". Not only is there the expected semi-diurnal tide (known as the "sectorial" component of the tide) but there is a diurnal ("tesseral") component due to the Sun and Moon not being in the equatorial plane of the Earth.
The resulting length changes in the LIGO arms are very significant compared to the magnitudes of the sought-for gravitational wave events. The biggest tidal amplitudes produce changes on the order of 100 microns.
The effect was then measured in 2006-2007 and reported by Melissinos in "The Effect of the Tides on the LIGO Interferometers". This extract of a two-day period in the observing run clearly shows the diurnal and semi-diurnal components. | {
"domain": "physics.stackexchange",
"id": 94328,
"tags": "tidal-effect, interferometry, ligo, gravitational-wave-detectors"
} |
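For scale, a rough comparison with assumed round numbers (not taken from the papers): the tidal arm-length change of order 100 μm dwarfs the length change a gravitational wave of strain $h\sim 10^{-21}$ produces on a 4 km arm, which is why the tides must be tracked and removed:

```python
# Rough scale comparison with assumed round numbers (not from the papers).
L_arm = 4.0e3              # m, LIGO arm length
tidal_dL = 100e-6          # m, order of the tidal amplitude quoted above
gw_dL = 1e-21 * L_arm      # m, arm-length change at strain h ~ 1e-21

ratio = tidal_dL / gw_dL
print(f"tidal / GW length change ~ {ratio:.1e}")
```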
Rate limiting in Gazebo 1.0 | Question:
I have been using earlier releases of gazebo with some success. I'm currently experimenting with the 1.0 -RC3 version of gazebo to be included with fuerte, but I'm having trouble with rate limiting.
In the earlier versions placing the <updateRate> tag with a negative number would lock the simulation to real time. A bit of investigation into the new system shows that adding an update rate to the ode like this:
<ode update_rate="1.0">
should limit the frame rate. Despite experimenting with the values my rather simple simulation is still running at many times real time.
Any suggestions?
Originally posted by JonW on ROS Answers with karma: 586 on 2012-04-03
Post score: 0
Answer:
It's an attribute under the physics element; see the SDF documentation for Physics elements. For example, the empty world in gazebo is rate limited to 1000Hz with 1ms time step size, so it should effectively track real time.
Originally posted by hsu with karma: 5780 on 2012-04-07
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by JonW on 2012-04-19:
Thanks, that seems to have fixed the issue. | {
"domain": "robotics.stackexchange",
"id": 8856,
"tags": "gazebo"
} |
sensor_msgs/Image data to sensor_msgs/CompressedImage data | Question:
I have an industrial camera (DVP interface), and I added some code in the driver to convert the cv image into ros image data. I want to use this camera to test the fiducials package. But it needs to subscribe to a sensor_msgs/CompressedImage topic, and now the driver can only provide the sensor_msgs/Image topic. Is there any way to convert the sensor_msgs/Image data into sensor_msgs/CompressedImage data?
Originally posted by anonymous38087 on ROS Answers with karma: 9 on 2019-04-02
Post score: 1
Answer:
Is there any way to convert the sensor_msgs/Image data into sensor_msgs/CompressedImage data?
yes: the republish node of the image_transport package can convert between uncompressed and compressed images (between any registered transports actually).
Note: this will incur overhead, as it subscribes, compressed and then publishes messages.
Originally posted by gvdhoorn with karma: 86574 on 2019-04-02
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by anonymous38087 on 2019-04-02:
Thank you! It solved my problem. This is my method:rosrun image_transport republish raw in:=/usb_cam/image_raw compressed out:=/usb_cam/image_raw. Before use this node, ros-$ROS_DISTRO-image-transport and ros-$ROS_DISTRO-image-transport-plugins should be installed.
Comment by gvdhoorn on 2019-04-02:
out:=/usb_cam/image_raw
is that correct, or a typo?
I would not publish on the exact same topic. Or does republish add a subtopic (ie: /usb_cam/image_raw/compressed)?
Comment by anonymous38087 on 2019-04-02:
Because my camera driver will only generate a usb_cam/image_raw topic, and image_transport node will generate a new topic usb_cam/image_raw/compressed based on the parameter compressed of out.
Comment by sunt40 on 2020-10-19:
How to republish sensor_msgs/CompressedImage data to sensor_msgs/Image data ? I use "rosrun image_transport republish compressed in:=/usb_cam/image_raw/compressed raw out:=/usb_cam/image_raw". But thereis no data.When I run "rostopic echo /usb_cam/image_raw", There is nothing. Where is the problem ? | {
"domain": "robotics.stackexchange",
"id": 32805,
"tags": "ros, ros-kinetic, image, camera"
} |
Is there an Existing Model of Computing Rice Sufficiency? | Question: I've been researching agricultural systems out of my curiosity.
And so I've been wondering how officials know that the rice produce, or yield, will be sufficient for a certain country or small village.
My question is: "Is there an existing model for computing Sufficiency for a place based on Yield of rice, and the Demand of the Population?"
Most likely I aim for a Percentage Output, but anyway I can alter.
If there's none can you enlighten me of some Key terms, That can lead me into building one myself. Can be Studies, articles or Algorithms, and Approaches.
Answer: Rice production and yield, like those of any other agricultural crop, depend on a number of factors. These include, but are not limited to:
The variety of rice being grown
The fertility of the soil, and this can and does vary between regions in countries
The availability of water for the variety of rice
Loss due to pests or natural events (ie weather)
When new varieties of rice are developed, test plots are planted in various locations and the yield per area of land, typically tonnes per hectare, is ascertained. Once this has been done for a number of test plots and for a number of growing seasons, average numbers of tonnes per hectare can then be applied to larger/national plantings.
Combining this with data on the number of people that need to be fed and the average amount of rice each person needs, it is possible to determine the sufficiency of rice production. | {
"domain": "earthscience.stackexchange",
"id": 1223,
"tags": "agriculture"
} |
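The answer above condenses into a toy sufficiency calculation (a sketch with made-up names and numbers; real models would also account for imports, stocks, and seed retention):

```python
def rice_sufficiency_pct(area_ha, yield_t_per_ha, population,
                         per_capita_kg_per_year, loss_frac=0.0):
    """Percent of annual demand covered by production (illustrative names)."""
    production_kg = area_ha * yield_t_per_ha * 1000.0 * (1.0 - loss_frac)
    demand_kg = population * per_capita_kg_per_year
    return 100.0 * production_kg / demand_kg

# 2,000 ha at 4 t/ha with 10% post-harvest loss, feeding 60,000 people
# who each consume 100 kg of rice per year:
print(round(rice_sufficiency_pct(2000, 4.0, 60000, 100.0, loss_frac=0.10), 1))  # 120.0
```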
Increasing expansion rate and time dilation | Question: Does the increasing dispersion of matter that occurs as the universe expands impact time (and therefore observed speed) for older/further compared to newer/closer events being observed simultaneously from earth? How do physicists know that the observed increasing rate of expansion of the universe is not the effect of gravitational time dilation for earlier relative to later events?
Answer: Gravitational time dilation is due to spatial differences in the potential. In the FLRW/ΛCDM cosmology we assume spatial homogeneity and isotropy, so there is no overall time dilation. Nevertheless the observed duration of events is dilated the longer ago they took place, since the relation of the local duration to the observed duration is, like the wavelength, proportional to the growth of the scale factor. | {
"domain": "physics.stackexchange",
"id": 61141,
"tags": "general-relativity, cosmology, universe, space-expansion"
} |
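In numbers (a sketch, not from the post): the observed duration of a distant event is stretched by the same factor $(1+z)$ as the wavelength, an effect confirmed in supernova light curves:

```python
# Observed duration scales with the growth of the scale factor, i.e. (1 + z).
def observed_duration(local_duration_days, z):
    return local_duration_days * (1.0 + z)

# A light curve lasting 20 days in the supernova's rest frame appears
# to last 40 days when the supernova sits at redshift z = 1.
print(observed_duration(20.0, 1.0))   # 40.0
```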
Galaxy discovered lacking Dark Matter | Question: Recently a galaxy was discovered that contained no dark matter. I was wondering what explanation could be offered for this? This galaxy also contains only 1% as many stars as our Milky Way does. Is there a connection between these two things, the lack of many stars and the lack of dark matter?
Answer: The paper describing this finding is van Dokkum et al. (2018). In a galaxy, the ratio of the stellar mass $M_\star$ to dark matter mass $M_\mathrm{DM}$ is normally very small, increasing with mass until Milky Way-sized galaxies where it reaches $\sim1/30$, then decreasing again.
But the dwarf galaxy NGC1052–DF2 seems to contain little or no dark matter. Galaxies of that mass ($M_\star\sim2\times10^8\,M_\odot$) typically have several hundred times more DM than stars.
How the galaxy was formed is not known, but the authors speculate on a few different mechanisms that all have to do with the fact that gas, in contrast to DM, may cool and condense and thus form clouds of very high $M_\mathrm{gas}/M_\mathrm{DM}$ ratios:
NGC1052–DF2 is located near a large elliptical galaxy (NGC 1052) which could have gone through a merger event, tidally stripping a chunk of gas from one of the merging galaxies. This is consistent with NGC1052–DF2 having a large velocity wrt. the elliptical.
NGC1052–DF2 could have formed from low-metallicity gas that was swept up in quasar winds (as described in Natarajan et al. 1998).
Lastly, NGC1052–DF2 could have formed from the fragmentation of gas accreting onto the elliptical, possibly aided by shocks.
EDIT (thanks to @WayfaringStranger): A fourth possibility is that the authors have misinterpreted the data. Shortly after the paper was put out, several other papers criticized the statistical methods used by van Dokkum et al. to infer their result (Martin et al. 2018;
Famaey et al. 2018;
Laporte et al. 2018).
van Dokkum wrote in a very long reply on his blog how at least the first of these papers actually confirm, rather than refute, his results. I am not enough of a statistician to comment on who's right and who's wrong, but note that there is currently an ongoing debate on the Facebook group astrostatistics.
You also ask whether the lack of DM is related to its "lack of stars". I wouldn't say that NGC1052–DF2 "lacks stars", any more than the Milky Way "has too many stars" — it's just a small galaxy. But in general, the smaller a galaxy is, the larger the scatter in the $M_\star/M_\mathrm{DM}$ ratio is. Small galaxies, or small clumps of gas, have shallow gravitational potentials, so a small galaxy can more easily get stripped of gas, and a small chunk of gas can more easily escape a galaxy, or accretion stream, without attracting dark matter. In contrast, it would be very hard to conceive of a very large galaxy having a considerably different $M_\star/M_\mathrm{DM}$ ratio, and indeed the scatter for massive galaxies is less than a factor of two (e.g. More et al. 2010). | {
"domain": "astronomy.stackexchange",
"id": 2856,
"tags": "dark-matter"
} |
Glint effect in electromagnetic waves | Question: Two plane waves having the same frequency and different intensities:
$$E_0=Ae^{i(\omega t-kr_0)}$$ and
$$E_1=Be^{i(\omega t-kr_1)}$$
arrive at point $P=(x,y)$ from two point sources located at a distance of $r_0$ and $r_1$ from $P$. If the distance between the two point sources is $d$, a detector in the point $P$ will 'see' the resulting wave as a plane wave coming from a direction different from the center of the two sources (glint). How can I calculate this direction as a function of $r_0$ and $r_1$? Thanks.
Answer: Interesting question. It would help to understand more about your application.
If your application is radar tracking, then perhaps you are interested in Crosseye effects (Google "Crosseye", it's an angle deception countermeasure, typically intended for use against trackers). If this is the case, you are missing one puzzle piece: the sensor, i.e. the radar antenna. The instantaneous radar pointing error depends on the size of the antenna, relative to the radar wavelength, and also the beam shape of the antenna.
To calculate the instantaneous pointing error, compute the antenna's radiation pattern in the usual way, including the E Field illumination of the antenna by the two point sources (this will be the part of the sinusoidal radiation pattern [caused by the two point sources] intercepted by the radar antenna). If it is a monopulse antenna, you will have to compute both sum and difference patterns; use them to form an ideal dot product discriminator, and the near boresight zero crossing of the discriminator is the instantaneous aim point. The aim point will move as the phase and amplitude of the two sources are changed.
If your application is not radar tracking, then perhaps you are instead interested in the direction of electromagnetic power flow (the Poynting vector). This is given by E_vec X H_vec, where E_vec = E-field vector, H_vec = magnetic field vector, and "X" is the vector cross product. Make sure you place the coordinate frame origin between the point sources.
You can draw a graph of the angle of the Poynting vector as a function of angle around the centre of the point sources, or alternatively as a function of cross range position at an imagined observer. | {
"domain": "physics.stackexchange",
"id": 56156,
"tags": "homework-and-exercises, electromagnetism, electromagnetic-radiation, interference, plane-wave"
} |
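For the Poynting/power-flow reading in the answer above, the apparent arrival direction at P can also be estimated numerically from the local phase gradient of the summed field. A rough numpy sketch; the geometry, wavelength, and amplitudes are arbitrary choices, not values from the question:

```python
import numpy as np

k = 2 * np.pi                      # wavenumber for wavelength = 1 (arbitrary units)
s0 = np.array([-0.5, 0.0])         # the two point sources, separation d = 1
s1 = np.array([+0.5, 0.0])

def field(p, A, B):
    """Total field A*exp(-i k r0) + B*exp(-i k r1) at point p (time factor dropped)."""
    r0 = np.linalg.norm(p - s0)
    r1 = np.linalg.norm(p - s1)
    return A * np.exp(-1j * k * r0) + B * np.exp(-1j * k * r1)

def apparent_direction(P, A, B, h=1e-4):
    """Unit vector along the local wavevector -grad(phase) at P, via central differences."""
    gx = np.angle(field(P + np.array([h, 0.0]), A, B) /
                  field(P - np.array([h, 0.0]), A, B)) / (2 * h)
    gy = np.angle(field(P + np.array([0.0, h]), A, B) /
                  field(P - np.array([0.0, h]), A, B)) / (2 * h)
    kvec = -np.array([gx, gy])     # phase decreases along propagation for exp(-i k r)
    return kvec / np.linalg.norm(kvec)
```

With B = 0 the recovered direction is simply the line of sight to the first source; turning on the second source and sweeping its relative phase and amplitude makes the apparent direction wander, which is the glint.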
Mars Rover Kata using TDD and SOLID | Question: I am doing MarsRoverKata exercise just to train my coding skills and I came up with the following solution.
A squad of robotic rovers are to be landed by NASA on a plateau on
Mars.
This plateau, which is curiously rectangular, must be navigated by the
rovers so that their on board cameras can get a complete view of the
surrounding terrain to send back to Earth. A rover's position is
represented by a combination of an x and y co-ordinates and a letter
representing one of the four cardinal compass points. The plateau is
divided up into a grid to simplify navigation. An example position
might be 0, 0, N, which means the rover is in the bottom left corner
and facing North.
In order to control a rover, NASA sends a simple string of letters.
The possible letters are 'L', 'R' and 'M'. 'L' and 'R' makes the rover
spin 90 degrees left or right respectively, without moving from its
current spot. 'M' means move forward one grid point, and maintain the
same heading. Assume that the square directly North from (x, y) is (x, y+1).
Input (whether hard coded or input from keyboard): The first line of
input is the upper-right coordinates of the plateau, the lower-left
coordinates are assumed to be 0,0. The rest of the input is
information pertaining to the rovers that have been deployed. Each
rover has two lines of input. The first line gives the rover's
position, and the second line is a series of instructions telling the
rover how to explore the plateau.
The position is made up of two integers and a letter separated by
spaces, corresponding to the x and y co-ordinates and the rover's
orientation. Each rover will be finished sequentially, which means
that the second rover won't start to move until the first one has
finished moving. Output: The output for each rover should be its final
co-ordinates and heading.
Plateau max X and Y, Starting coordinates, direction and path for two
rovers:
5 5
1 2 N
LMLMLMLMM
3 3 E
MMRMMRMRRM
Output and new coordinates:
1 3 N
5 1 E
I was trying to follow SOLID principles in my implementation and I used TDD approach to write code.
Please criticize.
I have two main classes: MarsRover (manages main parameters of the Rover like initial position and final position) and MarsRoverNavigator (responsible for movements and spinning).
MarsRover.cs:
public class MarsRover
{
private readonly string input;
private MarsRoverNavigator marsRoverNavigator;
public MarsRover(string input)
{
this.input = input;
}
public NavigationParameters NavigationParameters { get; private set; }
public string FinalPosition { get; private set; }
public void Initialize()
{
NavigationParameters = InputValidator.GetNaviagtionParametersFromInput(input);
}
public void Navigate()
{
marsRoverNavigator = new MarsRoverNavigator(NavigationParameters);
FinalPosition = marsRoverNavigator.Navigate();
}
}
MarsRoverNavigator.cs:
public class MarsRoverNavigator
{
private readonly NavigationParameters navigationParameters;
private SpinningControl spinningControl;
private MovingControl movingControl;
public MarsRoverNavigator(NavigationParameters navigationParameters)
{
this.navigationParameters = navigationParameters;
spinningControl = new SpinningControl();
movingControl = new MovingControl();
}
public string Navigate()
{
var command = navigationParameters.Command;
foreach (var step in command)
{
DoAStep(step);
}
var result = $"{navigationParameters.CurrentCoordinates.X} {navigationParameters.CurrentCoordinates.Y} {navigationParameters.CurrentDirection}";
return result;
}
private void DoAStep(char stepCommand)
{
var newDirection = spinningControl.GetNextDirection(navigationParameters.CurrentDirection, stepCommand);
navigationParameters.UpdateCurrentDirection(newDirection);
var newCoordinates = movingControl.Move(stepCommand, navigationParameters.CurrentDirection, navigationParameters.CurrentCoordinates);
if (newCoordinates.X > navigationParameters.PlateauDimenstions.X || newCoordinates.Y > navigationParameters.PlateauDimenstions.Y)
{
throw new InvalidCommandException();
}
navigationParameters.UpdateCurrentCoordinates(newCoordinates);
}
}
NavigationParameters.cs:
public class NavigationParameters
{
public string CurrentDirection { get; private set; }
public string Command { get; }
public Coordinates PlateauDimenstions { get; }
public Coordinates CurrentCoordinates { get; private set; }
public NavigationParameters(string currentDirection, Coordinates plateauDimenstions, Coordinates currentCoordinates, string command)
{
CurrentDirection = currentDirection;
PlateauDimenstions = plateauDimenstions;
CurrentCoordinates = currentCoordinates;
Command = command;
}
public void UpdateCurrentDirection(string newDirection)
{
CurrentDirection = newDirection;
}
internal void UpdateCurrentCoordinates(Coordinates newCoordinates)
{
CurrentCoordinates = newCoordinates;
}
}
MovingControl.cs is implemented as a dictionary:
public class MovingControl
{
public Dictionary<string, Func<Coordinates, Coordinates>> MoveFunctions =
new Dictionary<string, Func<Coordinates, Coordinates>>
{
{"N", MoveNorth},
{"W", MoveWest},
{"S", MoveSouth},
{"E", MoveEast}
};
public Coordinates Move(char command, string currentDirection, Coordinates currentCoordinates)
{
if (command == 'M')
{
return MoveFunctions[currentDirection](currentCoordinates);
}
return currentCoordinates;
}
private static Coordinates MoveEast(Coordinates coordinates)
{
return new Coordinates()
{
X = coordinates.X + 1,
Y = coordinates.Y
};
}
private static Coordinates MoveSouth(Coordinates coordinates)
{
return new Coordinates()
{
X = coordinates.X,
Y = coordinates.Y - 1
};
}
private static Coordinates MoveWest(Coordinates coordinates)
{
return new Coordinates()
{
X = coordinates.X - 1,
Y = coordinates.Y
};
}
private static Coordinates MoveNorth(Coordinates coordinates)
{
return new Coordinates()
{
X = coordinates.X,
Y = coordinates.Y + 1
};
}
}
SpinningControl.cs is implemented as a Circular LinkedList:
public class SpinningControl
{
static readonly LinkedList<string> directions =
new LinkedList<string>(new[] { "N", "W", "S", "E" });
public readonly Dictionary<char, Func<string, string>> SpinningFunctions =
new Dictionary<char, Func<string, string>>
{
{'L', TurnLeft},
{'R', TurnRight},
{'M', Stay }
};
public string GetNextDirection(string currentDirection, char stepCommand)
{
return SpinningFunctions[stepCommand](currentDirection);
}
private static string TurnRight(string currentDirection)
{
LinkedListNode<string> currentIndex = directions.Find(currentDirection);
return currentIndex.PreviousOrLast().Value;
}
private static string TurnLeft(string currentDirection)
{
LinkedListNode<string> currentIndex = directions.Find(currentDirection);
return currentIndex.NextOrFirst().Value;
}
private static string Stay(string currentDirection)
{
return currentDirection;
}
}
Circular LinkedList extension:
public static class CircularLinkedList
{
public static LinkedListNode<T> NextOrFirst<T>(this LinkedListNode<T> current)
{
return current.Next ?? current.List.First;
}
public static LinkedListNode<T> PreviousOrLast<T>(this LinkedListNode<T> current)
{
return current.Previous ?? current.List.Last;
}
}
InputValidator.cs:
public static class InputValidator
{
private static Coordinates plateauDimenstions;
private static Coordinates currentCoordinates;
private static string currentDirection;
private static string command;
private static string[] inputByLines;
private const int expectedNumberOfInputLines = 3;
private const int expectedLineWithPlateauDimension = 0;
private const int expectedLineWithStartPosition = 1;
private const int expectedLineWithCommand = 2;
private const char linesDelimeter = '\n';
private const char parametersDelimeter = ' ';
private static readonly List<string> allowedDirections = new List<string> { "N", "W", "E", "S" };
public static NavigationParameters GetNaviagtionParametersFromInput(string input)
{
SplitInputByLines(input);
SetPlateauDimensions(inputByLines);
SetStartPositionAndDirection(inputByLines);
SetCommand();
return new NavigationParameters(currentDirection, plateauDimenstions, currentCoordinates, command);
}
private static void SplitInputByLines(string input)
{
var splitString = input.Split(linesDelimeter);
if (splitString.Length != expectedNumberOfInputLines)
{
throw new IncorrectInputFormatException();
}
inputByLines = splitString;
}
private static void SetPlateauDimensions(string[] inputLines)
{
var stringPlateauDimenstions = inputLines[expectedLineWithPlateauDimension].Split(parametersDelimeter);
if (PlateauDimensionsAreInvalid(stringPlateauDimenstions))
{
throw new IncorrectPlateauDimensionsException();
}
plateauDimenstions = new Coordinates
{
X = Int32.Parse(stringPlateauDimenstions[0]),
Y = Int32.Parse(stringPlateauDimenstions[1])
};
}
private static void SetStartPositionAndDirection(string[] inputByLines)
{
var stringCurrentPositionAndDirection = inputByLines[expectedLineWithStartPosition].Split(parametersDelimeter);
if (StartPositionIsInvalid(stringCurrentPositionAndDirection))
{
throw new IncorrectStartPositionException();
}
currentCoordinates = new Coordinates
{
X = Int32.Parse(stringCurrentPositionAndDirection[0]),
Y = Int32.Parse(stringCurrentPositionAndDirection[1])
};
currentDirection = stringCurrentPositionAndDirection[2];
}
private static void SetCommand()
{
command = inputByLines[expectedLineWithCommand];
}
private static bool StartPositionIsInvalid(string[] stringCurrentPositionAndDirection)
{
if (stringCurrentPositionAndDirection.Length != 3 || !stringCurrentPositionAndDirection[0].All(char.IsDigit)
|| !stringCurrentPositionAndDirection[1].All(char.IsDigit) || !allowedDirections.Any(stringCurrentPositionAndDirection[2].Contains))
{
return true;
}
if (Int32.Parse(stringCurrentPositionAndDirection[0]) > plateauDimenstions.X ||
Int32.Parse(stringCurrentPositionAndDirection[1]) > plateauDimenstions.Y)
{
return true;
}
return false;
}
private static bool PlateauDimensionsAreInvalid(string[] stringPlateauDimenstions)
{
if (stringPlateauDimenstions.Length != 2 || !stringPlateauDimenstions[0].All(char.IsDigit)
|| !stringPlateauDimenstions[1].All(char.IsDigit))
{
return true;
}
return false;
}
}
Tests around MarsRoverNavigator:
[TestFixture]
public class MarsRoverNavigatorShould
{
[TestCase("5 5\n0 0 N\nL", "0 0 W")]
[TestCase("5 5\n0 0 N\nR", "0 0 E")]
[TestCase("5 5\n0 0 W\nL", "0 0 S")]
[TestCase("5 5\n0 0 W\nR", "0 0 N")]
[TestCase("5 5\n0 0 S\nL", "0 0 E")]
[TestCase("5 5\n0 0 S\nR", "0 0 W")]
[TestCase("5 5\n0 0 E\nL", "0 0 N")]
[TestCase("5 5\n0 0 E\nR", "0 0 S")]
[TestCase("5 5\n1 1 N\nM", "1 2 N")]
[TestCase("5 5\n1 1 W\nM", "0 1 W")]
[TestCase("5 5\n1 1 S\nM", "1 0 S")]
[TestCase("5 5\n1 1 E\nM", "2 1 E")]
public void UpdateDirectionWhenPassSpinDirections(string input, string expectedDirection)
{
var marsRover = new MarsRover(input);
marsRover.Initialize();
marsRover.Navigate();
var actualResult = marsRover.FinalPosition;
actualResult.Should().BeEquivalentTo(expectedDirection);
}
[TestCase("5 5\n0 0 N\nM", "0 1 N")]
[TestCase("5 5\n1 1 N\nMLMR", "0 2 N")]
[TestCase("5 5\n1 1 W\nMLMLMLM", "1 1 N")]
[TestCase("5 5\n0 0 N\nMMMMM", "0 5 N")]
[TestCase("5 5\n0 0 E\nMMMMM", "5 0 E")]
[TestCase("5 5\n0 0 N\nRMLMRMLMRMLMRMLM", "4 4 N")]
public void UpdatePositionWhenPassCorrectInput(string input, string expectedPosition)
{
var marsRover = new MarsRover(input);
marsRover.Initialize();
marsRover.Navigate();
var actualResult = marsRover.FinalPosition;
actualResult.Should().BeEquivalentTo(expectedPosition);
}
[TestCase("1 1\n0 0 N\nMM")]
[TestCase("1 1\n0 0 E\nMM")]
public void ReturnExceptionWhenCommandSendsRoverOutOfPlateau(string input)
{
var marsRover = new MarsRover(input);
marsRover.Initialize();
marsRover.Invoking(y => y.Navigate())
.Should().Throw<InvalidCommandException>()
.WithMessage("Command is invalid: Rover is sent outside the Plateau");
}
}
Tests around input:
[TestFixture]
public class MarsRoverShould
{
[TestCase("5 5\n0 0 N\nM", 5, 5, 0, 0, "N", "M")]
[TestCase("10 10\n5 9 E\nLMLMLM", 10, 10, 5, 9, "E", "LMLMLM")]
public void ParseAnInputCorrectly(string input, int expectedXPlateauDimension, int expectedYPlateauDimension,
int expectedXStartPosition, int expectedYStartPosition, string expectedDirection, string expectedCommand)
{
var expectedPlateausDimensions = new Coordinates() { X = expectedXPlateauDimension, Y = expectedYPlateauDimension };
var expectedStartingPosition = new Coordinates() { X = expectedXStartPosition, Y = expectedYStartPosition };
var expectedNavigationParameters = new NavigationParameters(expectedDirection, expectedPlateausDimensions,
expectedStartingPosition, expectedCommand);
var marsRover = new MarsRover(input);
marsRover.Initialize();
var actualResult = marsRover.NavigationParameters;
actualResult.Should().BeEquivalentTo(expectedNavigationParameters);
}
[TestCase("10 10 5\n1 9 E\nLMLMLM")]
[TestCase("10\n5 9 E\nLMLMLM")]
[TestCase("10 A\n5 9 E\nLMLMLM")]
public void ReturnExceptionWhenWrongPlateauDimensionsInput(string input)
{
var marsRover = new MarsRover(input);
marsRover.Invoking(y => y.Initialize())
.Should().Throw<IncorrectPlateauDimensionsException>()
.WithMessage("Plateau dimensions should contain two int parameters: x and y");
}
[TestCase("1 1\n1 1\nLMLMLM")]
[TestCase("1 1\n1 N\nLMLMLM")]
[TestCase("1 1\n1\nLMLMLM")]
[TestCase("5 5\n5 A N\nLMLMLM")]
[TestCase("5 5\n5 1 A\nLMLMLM")]
[TestCase("1 1\n5 1 N\nLMLMLM")]
public void ReturnExceptionWhenWrongStartPositionInput(string input)
{
var marsRover = new MarsRover(input);
marsRover.Invoking(y => y.Initialize())
.Should().Throw<IncorrectStartPositionException>()
.WithMessage("Start position and direction should contain three parameters: int x, int y and direction (N, S, W or E)");
}
[TestCase("10 10; 5 9; LMLMLM")]
[TestCase("10 10\nLMLMLM")]
public void ReturnExceptionWhenWrongInputFormat(string input)
{
var marsRover = new MarsRover(input);
marsRover.Invoking(y => y.Initialize())
.Should().Throw<IncorrectInputFormatException>()
.WithMessage("Error occured while splitting the input: format is incorrect");
}
}
Answer: Right off the bat I want to say I love the way you broke everything out into separate classes. I also like that you included your test cases. Both of these will make the code easier to process.
Tests
Since this is TDD, lets start with the classes (note: I pulled in NUnit and FluentAssertions from nuget).
I love the use of the TestCase attribute. I don't love the default test name that NUnit gives you when you use it, though. I prefer to set the TestName property of the TestCase attribute; see https://github.com/nunit/docs/wiki/Template-Based-Test-Naming for all the possibilities. I tend to use something like TestName = {M} <reason for this case>
TestCase also has an ExpectedResult property. You can change the methods to return your expected value, instead of using .Should() assertions. In your case, most of your tests would not benefit, as your expected results are complex types most of the time.
I love that you are also testing your error conditions. Users can clearly see what invalid state looks like.
Implementation
Moving onto your implementation (note: I created a Coordinate class with X and Y integer properties. I also created the named exceptions with a hardcoded Message based on your test cases).
MarsRover
Initialize seems like a method that should be private and called by the constructor. Or, Initialize could be called by Navigate and take the input string as parameters. Definitely keep them as separate functions, but from a user perspective I prefer not to call New and Initialize both.
NavigationParameters seems like an implementation detail. I don't know that I would want to expose it as a public property. There is a test that depends on the property (MarsRoverShould.ParseAnInputCorrectly), but that doesn't need to be about the MarsRover object; it could just as easily be scoped into the NavigationParameters object.
MarsRoverNavigator
I like the encapsulation of the moving and spinning movements.
NavigationParameters
PlateauDimenstions should be spelled PlateauDimensions.
MovingControl/SpinningControl
I like the dictionary for determining which direction to travel.
I like the use of a circular list for directions. It's also a good use of extension methods to create a circular list for yourself. Any type of list should work though; it wouldn't have to be linked, since it's fixed size the entire time.
Rather than searching directions each time you move, you could be storing the most recent state in the SpinningControl. There would still be an initial search at creation, but after that holding onto the currentDirection state, and just call PreviousOrLast or NextOrFirst.
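A sketch of that fixed-size-list idea (Python here just for brevity; it mirrors the reviewed SpinningControl, where 'L' steps forward through N→W→S→E and 'R' steps backward), replacing the linked list and the repeated Find with a modular index:

```python
DIRECTIONS = ["N", "W", "S", "E"]          # same counter-clockwise order as the review

def turn(current, command):
    i = DIRECTIONS.index(current)          # a stateful version would store i instead
    if command == "L":
        return DIRECTIONS[(i + 1) % len(DIRECTIONS)]
    if command == "R":
        return DIRECTIONS[(i - 1) % len(DIRECTIONS)]
    return current                         # 'M' leaves the heading unchanged
```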
InputValidator
You start off great with a bunch of constants. But there are other magic numbers later in the file.
currentDirection, command, and inputByLines as static member variables put up a red flag for me. You only have one public method right now, but that could change in the future and suddenly the static state doesn't make sense. I'd prefer to see those passed as parameters everywhere.
allowedDirections has the same list of characters as SpinningControl. I think those could be pulled out into a shared reference. Maybe a CardinalDirections class?
General Comments and Final Thoughts
In general, member variables should be as generic as possible. For example, MovingControl.MoveFunctions: instead of using Dictionary, I would recommend IDictionary. Later if you need some change to the implementation, such as using a ReadOnlyDictionary, you can change the internal usage without affecting method signatures.
Again, I really like the way the classes/methods are broken out. And I love that tests are provided. Makes understanding the code from an outsider perspective that much easier. | {
"domain": "codereview.stackexchange",
"id": 31688,
"tags": "c#, programming-challenge, unit-testing"
} |
Why isn't BiCl5 stable? | Question: I read this in a textbook:
$\ce{Bi(V)}$ is very unstable and is a good oxidizing agent.
Why does it happen that way? Is it because in $\pu{+5}$ oxidation state $\ce{Bi}$ pulls in more electrons and hence gets reduced fast or there's a different concept?
Also then $\ce{BiCl5}$ should be more stable than $\ce{BiCl3}$ because it's getting the electrons that it needs, which is not true.
Answer: The reason behind this is mainly the inert pair effect.
In $\ce{BiCl3}$, due to the large electronegativity difference between $\ce{Bi}$ and $\ce{Cl}$, the chlorine atoms form bonds with almost pure $p$ orbitals of $\ce{Bi}$, and the lone pair on $\ce{Bi}$ is of almost pure $s$ character. Thus, the $\ce{Bi}$ atom doesn't actually utilise much of its $s$ orbital electrons in forming the bonds, which is energetically much more preferable. (If it seems non-obvious, recall Bent's rule, which says that more electronegative atoms prefer to form bonds with orbitals of more $p$ character.)
On the other hand, if $\ce{Bi}$ is present in $\ce{BiCl5}$, it has no other option other than utilising its $s$ orbitals for bonding. The equatorial bonds will mainly consist $s$ character and $p_x, p_y$ orbitals will also participate in forming the equatorial bonds. So, $s$ character in the equatorial bonds will be slightly lesser than $33$% (again due to Bent's rule). And the axial bonds will consist of mainly $p_z $ and $d_{z^2}$ orbitals and little bit of $s$ mixing may also be there as it is spherically symmetric. So, overall central atom $\ce{Bi}$ has to use its $s$ electrons in bonding which is difficult energetically.
Now, the reason that participation of the $s$ electrons in bonding is difficult is the relativistic contraction of the $s$ orbitals. They lie closer to the nucleus and are more stable in heavy elements like $\ce{Au, Hg, Tl, Pb, Bi}$, etc. Hence, the central atom needs to pay more energy to involve those stable electrons in bonding. That's why the $s$ electrons become kind of inert, and these heavy elements avoid higher oxidation states so as not to involve these inner $s$ orbitals in bonding. | {
"domain": "chemistry.stackexchange",
"id": 11433,
"tags": "inorganic-chemistry, electronic-configuration, halides, oxidation-state"
} |
How do gravitons and curved space time work together? | Question: I've heard two different descriptions of gravity, and I'm wondering how they work together.
The first is Gravitons:
"The three other known forces of nature are mediated by elementary
particles: electromagnetism by the photon, the strong interaction by
the gluons, and the weak interaction by the W and Z bosons. The
hypothesis is that the gravitational interaction is likewise mediated
by an – as yet undiscovered – elementary particle, dubbed the
graviton. In the classical limit, the theory would reduce to general
relativity and conform to Newton's law of gravitation in the
weak-field limit." -- Source
I'll admit I don't know much about them, but I assume that they would work similarly how photons do in EM.
The second, which I understand better, is given by GR: spacetime is curved by mass-energy, sort of like how putting a heavy object on a blanket curves it.
So, how would these work together? Would the curved spacetime be analogous to a "graviton field", where more massive/energetic objects produce a stronger field to which other objects are attracted, and excitations in the field produce gravitons?
Answer: At the level of understanding the data and observations we have up to now, General Relativity describes well what we perceive of the Cosmos, and Quantum Field Theory what we observe in the microcosm of elementary particles and their interactions. The two have not been joined up to now, i.e. there is no accepted unified theory that joins these two mathematical frameworks smoothly. String Theory is the only known theory that has both quantization of gravity and the group structures that can accommodate the elementary particle standard model, but it is still at the research level, due to the complexity of the mathematical systems possible.
At the moment gravitons are hypothetical particles on par with photons gluons and Z-W mesons in string theory.
String theory predicts the existence of gravitons and their well-defined interactions. A graviton in perturbative string theory is a closed string in a very particular low-energy vibrational state
in the same class as the other particles, which are also particular vibrational levels of strings.
The classical stress energy tensor that macroscopically is described by General Relativity as space-time distortions will emerge by the confluence of innumerable gravitons, in an analogous way that the electric and magnetic fields appear from the confluence of innumerable photons, which electromagnetic fields are well described by Maxwell's equations.
Even when we manage to have a Theory of Everything (ToE), each observational region will be described at its own level of mathematical complexity. For example when one is doing optics one forgets that light is made up of photons and uses the classical equations very successfully except in regions where quantization is important for understanding. | {
"domain": "physics.stackexchange",
"id": 10925,
"tags": "general-relativity, quantum-gravity, curvature, gravitational-waves"
} |
dynamicEDTOctomap.h: No such file or directory | Question:
I installed dynamicedt3d library using sudo apt-get install ros-indigo-dynamicedt3d
Included in my CMakeLists as:
set(DYNAMICEDT3D_LIBRARIES "/opt/ros/indigo/lib/libdynamicedt3d.so")
include_directories( ${DYNAMICEDT3D_LIBRARIES} )
> add_executable(get_occupancy src/get_occupancy.cpp)
> target_link_libraries(get_occupancy $(DYNAMICEDT3D_LIBRARIES))
But while doing a catkin_make, I still get the error fatal error: dynamicEDTOctomap.h: No such file or directory
#include <dynamicEDTOctomap.h>
Can somebody help me figure this out?
Originally posted by swethmandava on ROS Answers with karma: 102 on 2017-02-28
Post score: 0
Answer:
What you did was to declare your library file to be the include directory. It is clear why this fails...
To quote from the dynamicEDT3DConfig.cmake:
# Usage from an external project:
# In your CMakeLists.txt, add these lines:
#
# FIND_PACKAGE(dynamicedt3d REQUIRED )
# INCLUDE_DIRECTORIES(${DYNAMICEDT3D_INCLUDE_DIRS})
> # TARGET_LINK_LIBRARIES(MY_TARGET_NAME
> ${DYNAMICEDT3D_LIBRARIES})
You should probably also add dynamic-edt-3d as a depend to your package.xml.
Originally posted by mgruhler with karma: 12390 on 2017-03-02
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by swethmandava on 2017-07-25:
I was following a tutorial given here http://wiki.ros.org/mallasrikanth/octomap
But I think that is for an older version and the issue is solved now that I went back to simply finding package and including directories. Thanks! | {
"domain": "robotics.stackexchange",
"id": 27162,
"tags": "ros, octomap, collision-detection, catkin"
} |
Simple fft to Gaussian pulse with MATLAB | Question: I am simply trying to create a femtosecond pulse in MATLAB, exactly like in the attached image;
the carrier frequency is around 374THz, and my sampling frequency is 10 times the carrier.
The results of the fft yield nothing understandable... I tried changing some of the variables (fs, t centered around zero, fftshift...), but could not figure out what's wrong with the code.
My code follows the MATLAB fft example:
clear all ; close all ; clc
f=374.7e12;%Thz
fs=f*10; %sampling frequency
T=1/fs;
L=1000;
sigma=5e-15;
t=(0:L-1)*T; %time base
x=(exp(-(t-50e-15).^2/(2*sigma)^2)).*exp(-1i*2*pi*f*t);
subplot(2,1,1)
plot(t,real(x),'b');
title(['Gaussian Pulse \sigma=', num2str(sigma),'s']);
xlabel('Time(s)');
ylabel('Amplitude');
ylim([-1 1])
xlim([10e-15 90e-15])
NFFT = 2^nextpow2(L);
X = fft(x,NFFT)/L;
Pxx=X.*conj(X)/(NFFT*NFFT); %computing power with proper scaling
f = fs/2*linspace(0,1,NFFT/2+1); %Frequency Vector
subplot(2,1,2)
plot(f,2*abs(X(1:NFFT/2+1)))
title('Magnitude of FFT');
xlabel('Frequency (Hz)')
ylabel('Magnitude |X(f)|');
the results are :
Answer: You simply don't plot what you want to see. Note that your time domain signal is complex-valued and you modulate by a negative frequency. So the range of frequencies where things are happening are the negative frequencies (or - by periodicity - the frequencies in the range $[f_s/2,f_s]$), but those you don't plot. If you change the definition of your frequency vector and the corresponding plot command you'll see what you expect to see (i.e. a Gaussian in the frequency domain centered at $f_s-f=3.37e15$ Hz):
f = fs*linspace(0,1,NFFT); % (full range) Frequency Vector
...
plot(f,2*abs(X))
OR, simply change the negative modulation frequency to a positive one
x=(exp(-(t-50e-15).^2/(2*sigma)^2)).*exp(1i*2*pi*f*t);
and leave everything else unchanged. Then the Gaussian in the frequency domain is centered at $f=374.7$ THz. This is probably what you expected to see in the first place. | {
"domain": "dsp.stackexchange",
"id": 2085,
"tags": "matlab, fft, gaussian"
} |
Why does Pascal's principle apply to a Hydraulic jack but not to stream lines in a stream tube? | Question: Pascal's principle says that
the pressure applied at one point in an enclosed fluid under
equilibrium conditions is transmitted equally to all parts of the
fluid
and thus the pressure in e.g. a hydraulic jack is equal everywhere.
However, in a stream tube, the pressure at the narrow end will be greater than the pressure at the wider end. Therefore, Pascal's principle does not apply to a stream tube and we need to use Bernoulli's equation to model the stream tube phenomenom.
Answer: Pascal's Principle applies only to confined/enclosed fluids. When fluid is flowing through a stream tube, it is not completely confined (the tube has two open ends, after all; if it didn't, then steady flow wouldn't be possible). | {
"domain": "physics.stackexchange",
"id": 52756,
"tags": "fluid-dynamics, pressure, flow, fluid-statics"
} |
Z transform - Inverse System function - Why number of poles and zeros myst be equal? | Question: I know that if a system is causal then the system function H(z) must have :
a) a ROC that spans from the exterior of the most distant pole and
b) the number of zeros must not be greater than the number of poles
I have found this exercise:
and the solution:
Why the number of poles and zeros must be equal?
Answer: Regardless of causality and stability, if you count poles and zeros at the origin and at infinity, the total number of poles always equals the total number of zeros. I'll show a few examples to make this obvious.
First, take a polynomial
$$P(z)=(z-z_0)(z-z_1)\ldots (z-z_M)\tag{1}$$
It has $M+1$ zeros and no (finite) poles. However, it must have $M+1$ poles at $z=\infty$ (because of the term $z^{M+1}$).
Next consider an "all-pole" function
$$A(z)=\frac{1}{(z-z_0)(z-z_1)\ldots (z-z_N)}\tag{2}$$
with $N+1$ poles and no (finite) zeros. Clearly, due to the $N+1$ poles, the function $A(z)$ in $(2)$ has $N+1$ zeros at infinity, because of the term $z^{N+1}$ in the denominator.
In general, if you have
$$H(z)=\frac{(z-z_{0,0})(z-z_{0,1})\ldots (z-z_{0,M})}{(z-z_{\infty,0})(z-z_{\infty,1})\ldots (z-z_{\infty,N})}\tag{3}$$
and if $M>N$, you get $M-N$ poles at infinity. For $M<N$, you have $N-M$ zeros at infinity. Consequently, in all cases the number of poles equals the number of zeros if poles and zeros at infinity are included. | {
"domain": "dsp.stackexchange",
"id": 7490,
"tags": "z-transform, poles-zeros, causality, inverse"
} |
Finding many different minima of nonlinear cost function | Question: Given a nonlinear cost function $G(\vec{x})$ of many variables, does there exist a method that allows one to find successive local minima $\vec{x}_0, \vec{x}_1, \dots$ so that $\vec{x}_n$ is orthogonal to $\vec{x}_{n-1}, \vec{x}_{n-2}, \dots, \vec{x}_0$?
Sorry for the apparent lack of effort on my part, but the field of optimization is vast and I do not have a lot of experience. I have searched for variants of a nonlinear conjugate gradient method that have the above property, but haven't been able to find anything.
Answer: Suppose you have already found $x_0,\dots,x_{k-1}$ and now your goal is to find $x_k$. Let $\mathcal{R}$ denote the region of points that are orthogonal to $x_0,\dots,x_{k-1}$. Then your problem is to minimize $G(x_k)$ subject to the requirement $x_k \in \mathcal{R}$. This is a straightforward equality constraint in optimization, and can be handled in a number of ways. Probably the simplest is to use projected gradient descent, which is like gradient descent but in each step you project the point onto the hyperplane $\mathcal{R}$. Given how you have defined $\mathcal{R}$, the projection operation in this case is easy. | {
"domain": "cs.stackexchange",
"id": 11766,
"tags": "optimization"
} |
How can I determine the code changes in an Atmel ASF update? | Question: Atmel's ASF (Atmel Software Framework) is periodically updated by Atmel. When an update is available, Atmel Studio notifies you and asks if you want to download the update. If I install the update, this does not update any code in my existing projects.
I want to determine what code changes have been made in the new ASF compared to what I previously used in a project. And ideally I'd like to be able to cherry-pick the changes that I want to incorporate (and exclude those I don't).
The only way I know to do this now is to create new project with the updated ASF (using the ASF Wizard to add all the drivers I used previously) and manually diff the files against my existing project. Then manually copy/paste from the files I want to include.
I thought I heard that Atmel uses an internal git repository for their ASF. Is there a way to get at this directly and view the changes? Or is there a better way?
It would also be nice to view the changes before installing the ASF update.
Answer: I think what you're looking for is the ASF Conversion Wizard. According to the linked page, this wizard is supposed to be launched whenever the ASF version changes. If is not launching for you, you can open the ASF Wizard in your existing project, and under the Version tab, you can change the ASF version for your current project. Make sure the "Trigger upgrade of modules" box is checked:
Image is from ASF Wizard documentation.
For viewing changes before you upgrade the ASF version, I think the release notes for a given ASF release are your only resource. They're admittedly not very complete, but I can't find any other documentation about them.
I don't know of any way to cherry-pick different sections of an ASF release. I don't think this is a great idea anyway, unless you're willing to wade through a lot of dependencies. Not all components of the ASF are direct windows between your software and the hardware; some of it is middleware that other components of the same ASF release may rely on. By picking and choosing which aspects of the ASF to update, you risks losing APIs, structure definitions, and other code that you may need in a non-obvious way.
Last note: the only ASF git repository I can find is here: https://spaces.atmel.com/gf/project/asf/scmgit/. Unfortunately, it doesn't appear to have been updated since 2012. I can't find any more up-to-date repositories. | {
"domain": "engineering.stackexchange",
"id": 374,
"tags": "embedded-systems"
} |
Why we need earthern pot to keep water cold although open surrounding also allow evaporation! | Question: Why earthern pot keep water cold?
Many answers say it allow evaporation through pores, but evaporation can happen without pot too, in open surrounding!
So why specifically we need earthern pot?
Answer: Earthen pots have pores, and the surface area available for evaporation includes the entire pot's surface and is much greater than an open plastic or metallic container. Moreover, leaving a water container without a lid in the open is not a good idea as it can easily get contaminated. | {
"domain": "physics.stackexchange",
"id": 83128,
"tags": "thermodynamics, evaporation"
} |
A derivation in Schwinger's proper time approach | Question: I have a question in derivation of Schwinger's proper time method in chapter 2.1 of
http://link.springer.com/book/10.1007%2F3-540-45585-X
from Eq.(2.20)-Eq.(2.23) to the classical action expression after Eq.(2.23). I do not know how the second term comes out that includes $\frac{1}{4}(x'-x'')^{\alpha}e {F_{\alpha}}^{\beta}{[coth(e \mathbf{F} s)]_{\beta}}^{\gamma}(x'-x'')_{\gamma}$.I expect this term comes from the quadratic term $\frac{1}{4}\dot{x}^2$ in the Lagrangian but I just cannot extract out the above form.
Answer: The term you are asking about does come from the $\frac{1}{4}\dot{x}^\mu\dot{x}_\mu$ term of the Lagrangian.
Start by inserting the first of Eqs.(2.23) back into Eq.(2.22) to obtain the expression for $\dot{x}(\lambda)$:
$$
\dot{x}(\lambda) = e^{2e{\bf F}\lambda}\dot{x}(0) = e^{2e{\bf F}\lambda}\frac{1}{e^{2e{\bf F}s} - 1} 2e{\bf F}\; (x' - x") = \frac{2e{\bf F}\; e^{2e{\bf F \lambda}} }{e^{2e{\bf F}s} - 1} \; (x' - x")
$$
Now notice that the equation of motion
$$
\ddot{x}^\mu(\lambda) = 2e {\bf F}^{\mu\nu} \dot{x}_\nu(\lambda)
$$
implies
$$
\ddot{x}^\mu(\lambda)\dot{x}_\mu(\lambda) = 2e\; \dot{x}_\mu(\lambda) {\bf F}^{\mu\nu} \dot{x}_\nu(\lambda) = 0
$$
and
$$
\frac{d}{d\lambda}\left[ \dot{x}_\mu(\lambda)\dot{x}^\mu(\lambda) \right] = 2\; \ddot{x}^\mu(\lambda)\dot{x}_\mu(\lambda) = 0
$$
Hence we have
$$
\int_0^s{d\lambda \;\dot{x}_\mu(\lambda)\dot{x}^\mu(\lambda) } = s\; \dot{x}_\mu(0)\dot{x}^\mu(0)
$$
But from the expression for $\dot{x}(\lambda)$ above we have
$$
\dot{x}(0) = \frac{2e{\bf F}}{e^{2e{\bf F}s} - 1} \; (x' - x") = (x' - x") \frac{2e{\bf F}e^{2e{\bf F}s}}{e^{2e{\bf F}s} - 1}
$$
where the last form on the right hand side follows from $F^{\mu\nu} (x'-x")_\nu = - (x'-x")_\nu F^{\nu\mu}$. With this the action integral term becomes
$$
\int_0^s{d\lambda \;\dot{x}_\mu(\lambda)\dot{x}^\mu(\lambda) } = s\;(x' - x") \frac{2e{\bf F}e^{2e{\bf F}s}}{e^{2e{\bf F}s} - 1}\frac{2e{\bf F}}{e^{2e{\bf F}s} - 1} \; (x' - x") =\\
= (x' - x") e{\bf F}s \frac{4e{\bf F}e^{2e{\bf F}s}}{\left(e^{2e{\bf F}s} - 1\right)^2}\; (x' - x") = (x' - x") e{\bf F}s \frac{e{\bf F}}{\sinh^2(e{\bf F}s)}\; (x' - x") = \\
= - (x' - x")\; (e{\bf F}s) \frac{d}{d s}\coth(e{\bf F}s)\; (x' - x")
$$
Assuming $x'$ and $x"$ are fixed, the last expression can be rearranged to obtain
$$
\int_0^s{d\lambda \;\dot{x}_\mu(\lambda)\dot{x}^\mu(\lambda) } = (x' - x")\; e{\bf F} \coth(e{\bf F}s)\; (x' - x") - \frac{d}{d s} \left[(x' - x")\; (e{\bf F}s) \coth(e{\bf F}s)\; (x' - x") \right]
$$
The first term is the one we are looking for. The second one is not only a total time derivative, but can also be rewritten as the integral of a total time derivative, since
$$
\int_0^s{d\lambda \frac{d^2}{d \lambda^2} \left[(x' - x")\; (e{\bf F}\lambda) \coth(e{\bf F}\lambda)\; (x' - x") \right] } = \\
= \frac{d}{d s} \left[(x' - x")\; (e{\bf F}s) \coth(e{\bf F}s)\; (x' - x") \right] - \lim_{\lambda \rightarrow 0} \frac{d}{d \lambda} \left[(x' - x")\; (e{\bf F}\lambda) \coth(e{\bf F}\lambda)\; (x' - x") \right]
$$
and
$$
\lim_{\lambda \rightarrow 0} \frac{d}{d \lambda} \left[(x' - x")\; (e{\bf F}\lambda) \coth(e{\bf F}\lambda)\; (x' - x") \right] = 0
$$
But a term of the form $\int_0^s{d\lambda \frac{d^2}{d \lambda^2} \left[(x' - x")\; (e{\bf F}\lambda) \coth(e{\bf F}\lambda)\; (x' - x") \right] }$ would only add a total time derivative to the Lagrangian, so it can be safely discarded and the final result is
$$
\int_0^s{d\lambda \;\dot{x}_\mu(\lambda)\dot{x}^\mu(\lambda) } = (x' - x")\; e{\bf F} \coth(e{\bf F}s)\; (x' - x")
$$ | {
"domain": "physics.stackexchange",
"id": 25542,
"tags": "quantum-mechanics, quantum-field-theory, greens-functions"
} |
Dependence of current in electronvolts | Question: Definition: 1 eV is when an electron passes through a potential difference of 1 V and gains/loses energy.
Where is this potential difference? Is it between two plates in a apparatus setup?
Is this potential difference and applied voltage to the experimental setup/circuit, are they two different things?
If the device used to create a potential difference of 1 V used a power of 1 watt and 1 ampere current, then can we define 1 eV as being the energy gained by electron when it passes through a electric field using 1 watt power using 1 ampere current?
I am not sure if my third question makes sense. If you could help me with corrections or clarification, that would be great.
Answer: Your definition is not accurate. One electron-volt is an energy unit equivalent to the amount of energy gained (or lost) by one electron accelerated across a potential difference of 1 V. What you have stated is simply a result of the acceleration, not the definition of the electron-volt. Plus, we should be even more general and instead of using the electron, we should use a particle with a charge of 1 electronic unit, e.
The acceleration of an actual electron doesn't have to happen. Nor does there actually have to be a potential difference. Those are merely concrete items which are used to define an equivalent amount of energy. The mass energy of an electron is approximately 511,000 electron volts, but there doesn't have to be any potential of 511,000 volts for the electron to exist. | {
"domain": "physics.stackexchange",
"id": 42104,
"tags": "electromagnetism, energy, definition"
} |
Finding columns of a matrix that are the same for combinations of events | Question: How I can change this multi-loop to reduce the computation time?
A is a matrix (5 × 10000) where the fifth line contains values between 1 and 50 corresponding to different events of an experiment. My goal is to find the columns of the matrix which are the same for all possible combinations of 7 different events.
data = A(1:4,:);
exptEvents = A(5,:);
% Find repeats
[b,i,j] = unique(data', 'rows');
% Organize the indices of the repeated columns into a cell array
reps = arrayfun(@(x) find(j==x), 1:length(i), 'UniformOutput', false);
% Find events corresponding to these repeats
reps_Events = cellfun(@(x) exptEvents(x), reps, 'UniformOutput', false);
U = cellfun(@unique, reps_Events, 'UniformOutput', false);
repeat_counts = cellfun(@length, U);
kk=1;
for i1=1:50
for i2=1:50
for i3=1:50
for i4=1:50
for i5=1:50
for i6=1:50
for i7=1:50
if i1~=i2 && i2~=i3 && i3~=i4 && i4~=i5 && i5~=i6 && i6~=i7
myEvents = [i1 i2 i3 i4 i5 i6 i7];
v= b(cellfun(@(x)all(ismember(myEvents,x)),U),:);
intersection(1,kk)=v(1);
intersection(2,kk)=v(2);
intersection(3,kk)=v(3);
intersection(4,kk)=v(4);
intersection(5,kk)=i1;
intersection(6,kk)=i2;
intersection(7,kk)=i3;
intersection(8,kk)=i4;
intersection(9,kk)=i5;
intersection(10,kk)=i6;
intersection(11,kk)=i7;
kk=kk+1;
end
end
end
end
end
end
end
end
Answer: There's quite a few things that you can do to reduce computation time.
First of all, if you know that an outcome has to be the combination of seven events, you can throw out whatever has fewer of them.
Then, you can preassign the output array (growing in a loop is usually not very fast).
Finally, you want to generate all possible subsets if there are, say, 9 events from which you can choose 7.
So you'd start
%# throw out useless stuff
tooFewIdx = repeat_counts < 7;
b(tooFewIdx,:) = [];
U(tooFewIdx) = [];
repeat_counts(tooFewIdx) = [];
%# estimate how many combinations you will get
nPerms = arrayfun(@(x)nchoosek(x,7),repeat_counts);
intIdx = [0;cumsum(nPerms(:))];
%# pre-assign output
%# If this runs out of memory, try using integer formats
%# instead, e.g. zeros(...,'uint8') if no value is above 255
intersection = zeros(intIdx(end),11);
%# loop to fill intersection
for i=1:length(U)
%# fill in the experiment information
intersection((intIdx(i)+1):intIdx(i+1),1:4) = repmat(b(i,:),nPerms(i),1);
%# add all combinations of events
intersection((intIdx(i)+1):intIdx(i+1),5:11) = combnk(U{i},7);
end | {
"domain": "codereview.stackexchange",
"id": 990,
"tags": "performance, algorithm, matrix, combinatorics, matlab"
} |
Classical potential and particle creation | Question: I was reading up some references about QM/QFT and I came across this note (actually a problem set): https://www.classe.cornell.edu/~yuvalg/p4444/hw8-sol.pdf
In question 1, the author mentioned that "In the limit where particle production and annihilation can be neglected, one can use the idea of potentials. In QFT, this can be used to find how a potential is generated from particle exchange. In particular, the statement that “the photon is the carrier of the electric force” can be understood."
Can someone please elaborate more about this? In particular, why can we use the notion of potentials if particle production and annihilation can be neglected and how is a potential generated from particle exchange?
Thanks!
Answer: there is a complete discussion of Weinberg's "Lectures on Quantum Mechanics". In Chapters 7 and 8, he discusses how the non-relativistic, elastic scattering, that is neglecting creation and annihilation, can be described given a Hamiltonian $H=H_0+V(\bf{x})$ obtaining in the Born approximation (initial and final states as free particles) that the scattering amplitude depends on the potential as the expression in the notes $f(q)=\int d^3x \exp{iqx} V(x) $ with q = k-k' (transferred momentum). In the case of the creation and annihilation of particles, you may use the formalism of S-matrix $S_{\beta\alpha}$ representing the transition rate between the asymptotic initial ($\alpha$) and final ($\beta$) states. If you assume an elastic scattering generated by a particle exchange (let's say e+N->e+N, with photon exchange) you will find the propagator of the exchanged particle. The particle exchanged is a formal mathematical representation of a perturbative series. It is also virtual since it has $q^2\neq0$ (which is not the case for a real photon). Finally, for the general case with creation and annihilation, you will use a more general formalism (Scattering amplitude) which recovers the scattering on the potential in the case of non-relativistic elastic scattering.
Hope it could be helpful | {
"domain": "physics.stackexchange",
"id": 70737,
"tags": "quantum-mechanics, quantum-field-theory, potential"
} |
Executing query until there is nothing more left | Question: I often query a database to get a batch of items to procees. I do this as long as the query returns some items. I use this pattern quite a lot so I thought I create a small helper so that I don't have to implement this logic again and again.
It's a small class that executes the query until there is nothing more left:
public static class Unfold
{
public static async Task ForEachAsync<T>
(
Func<CancellationToken, Task<IList<T>>> query,
Func<IList<T>, CancellationToken, Task> body,
CancellationToken cancellationToken
)
{
while (true)
{
var result = await query(cancellationToken);
if (result.Any())
{
await body(result, cancellationToken);
}
else
{
break;
}
}
}
}
The reason why I implemented it exactly this way is:
all my queries are async
they must always return IList<T> (if they return a collection of course)
I always process a batch at a time that I then mark as processed
Example
The typical use-case is like this:
get a batch of items from a repository
process this batch
repeat until the batch is empty
async Task Main()
{
var numbers = new NumberRepository();
await Unfold.ForEachAsync
(
query: async token => await numbers.GetNumbersAsync(token),
body: ProcessBatch,
CancellationToken.None
);
}
A test repository:
public class NumberRepository
{
private readonly IList<IList<int>> _numbers = new[] { new[] { 1, 2 }, new[] { 3, 4 }, new[] { 5 }, new int[0] };
private int _batchIndex;
public Task<IList<int>> GetNumbersAsync(CancellationToken cancellationToken) => Task.FromResult(_numbers[_batchIndex++]);
}
and the processing method:
private Task ProcessBatch<T>(T item, CancellationToken cancellationToken)
{
item.Dump();
return Task.CompletedTask;
}
What do you say? Is this a good or a bad solution? Is there anything missing (but null-checks)?
Answer: Sorry, but you definitely need a null check here:
if (result.Any())
Else there is not much to comment.
About the usage:
I don't understand, why you create a lambda for the query argument:
async Task Main()
{
var numbers = new NumberRepository();
await Unfold.ForEachAsync
(
query: async token => await numbers.GetNumbersAsync(token),
body: ProcessBatch,
CancellationToken.None
);
}
Why not just:
async Task Main()
{
var numbers = new NumberRepository();
await Unfold.ForEachAsync
(
query: numbers.GetNumbersAsync,
body: ProcessBatch,
CancellationToken.None
);
}
numbers.GetNumbersAsync is awaitable already? | {
"domain": "codereview.stackexchange",
"id": 32850,
"tags": "c#, async-await"
} |
start and kill rosbag record from bash shell script | Question:
Let's say we want to record a rosbag, and run a python script from a bash script.
rosbag record -o /file/name /topic
PID=$!
sleep 2
python ./run_ROS_script.py
kill -INT $PID
so if you leave out -INT it kills the rosbag, but not nicely: it leaves the rosbag in .active status. If you add -INT (same thing as running ctrl+c on the rosbag process) then the rosbag never finishes, and doesn't leave .active status.
So ... what gives? Why can't we nicely kill this rosbag record?
Originally posted by buckley.toby on ROS Answers with karma: 116 on 2017-11-09
Post score: 5
Answer:
You could run your rosbag with a Not anonymous node name
rosbag record -o /file/name /topic __name:=my_bag
and then kill it via rosnode kill :
rosnode kill /my_bag
This assures that rosbag stops gracefully.
Originally posted by Wolf with karma: 7555 on 2017-11-09
This answer was ACCEPTED on the original site
Post score: 7
Original comments
Comment by Girmi on 2018-02-16:
The correct command that worked for me (ROS Kinetic) is:
rosbag record -o /file/name /topic __name:=my_bag
rosnode kill /my_bag
Note the double underscore before the name param (see the "Special keys" section at [Remapping Arguments](http://wiki.ros.org/Remapping Arguments))
Comment by gvdhoorn on 2018-02-16:
You're correct. I've just edited the answer by @Wolf.
Comment by stegelid on 2019-08-21:
You can set a trap in your bash script, catching SIGINT and call rosnode kill from there, eg:
#!/bin/bash
trap "rosnode kill /bagger" SIGINT
rosbag record /my_topic __name:=bagger &
roslaunch mypackage launch.launch
Comment by mch on 2021-10-28:\
rosnode kill /my_bag
This assures that rosbag stops gracefully.
That does not seem to stop gracefully though. Rosbag name is *.bag.active not just *.bag | {
"domain": "robotics.stackexchange",
"id": 29322,
"tags": "ros, rosbag, shell, bash"
} |
What is the meaning of $\mathrm{d}^4k$ in this integral? | Question: From Gerardus 't Hooft's Nobel Lecture, December 8, 1999, he states the following equation (2.1):
$$
\int \mathrm{d}^4k \frac{\operatorname{Pol}(k_{\mu})}{(k^2+m^2)\bigl((k+q)^2+m^2\bigr)} = \infty
$$
in relation to weak interactions theory, where $\operatorname{Pol}(k_{\mu})$ stands for some polynomial in the integration variables $k_{\mu}$, and then goes on to say that physically it must be a nonsense.
Why is it a nonsense?
What sort of integral is this, and how does one interpret it?
Is the $\mathrm{d}^4k$ shorthand for 4th degree integration?
At what stage and subject of a physics course does one learn about it (A pre-fresher is asking)?
Answer: The equation is a term in the calculation of a scattering probability. Obviously a scattering probability must be between zero and one, like any other type of probability. So when the calculation of a scattering probability returns a value of $\infty$ that isn't physically possible, and it shows that the method we are using to calculate the probability is incorrect.
That is what 't Hooft means by nonsense - it means the method of doing the calculation is wrong. His Nobel prize was earned showing us the correct way to do the calculation.
The parameter $k$ is a wave vector, or more precisely the special relativistic form of a wave vector. This is a 4D vector so it has four independant components normally written as $k^0$, $k^1$, $k^2$ and $k^3$. Note that the superscript is a label and doesn't mean you're raising $k$ to a power. The integration is over all possible values of each of the four components, so it's really four integrations:
$$ \int \int \int \int\,dk^0 \,dk^1 \,dk^2 \,dk^3 $$
Writing $d^4k$ is a common shorthand for this.
You are unlikely to study quantum field theory in any depth unless you do a postgraduate course in physics, though I guess some universities may offer it as a final year option. | {
"domain": "physics.stackexchange",
"id": 29249,
"tags": "quantum-field-theory, special-relativity, particle-physics, integration, calculus"
} |
Maximum independent subset for graphs with lots of edges | Question: Consider an NP-hard graph problem, like the maximum independent set problem.
Let us say I restrict my inputs to only be graphs that have $n$ vertices and at least $n^{c}$ edges, for some $c > 1$. In other words, the graphs are very connected.
Is the maximum independent set problem still hard for these graphs?
Answer: The problem is still $\mathsf{NP}$-hard. For example, take a hard instance $G = (V,E)$ of the original maximum independent set problem. Add a new vertex set $V'$ to the graph such that $|V'| = |V|$ and $V'$ forms a complete graph. Also, there are no edges between $V$ and $V'$. Let the new graph be $G' = (V' \cup V, E')$ which is also a hard instance. And, $|V' \cup V| = 2n$ and $|E'| = \Theta(n^2)$.
Formally, it holds that $G$ has an independent set off size $k$ if and only if $G'$ has an independent set of size $k+1$.
Even if you add edges between every vertex of $V$ and every vertex of $V'$, the reduced instance would be a hard instance. Then, it holds that $G$ has an independent set of size $k$ if and only if $G'$ has an independent set of size $k$. | {
"domain": "cs.stackexchange",
"id": 19537,
"tags": "complexity-theory, graphs, time-complexity, np-hard, polynomial-time-reductions"
} |
Introduction into first order logic verification | Question: I am trying to teach myself different approaches to software verification. I have read some articles. As far as I learned, propositional logic with temporal generally uses model checking with SAT solvers (in ongoing - reactive systems), but what about first order Logic with temporal? Does it use theorem provers? Or can it also use SAT?
Any pointers to books or articles for beginners in this matter is much appreciated.
Answer: First order logic is undecidable, so SAT solving does not really help. That said, techniques exist for bounded model checking of first order formulas. This means that only a fixed number of objects can be considered when trying to determine whether the formula is true or false. Clearly, this is not complete, but if a counter-example is found, then it truly is a counter-example.
The tool Alloy is one tool that allows models to be described in first-order logic (though the surface syntax is based on relationally described models) and uses bounded model checking to find solutions. A SAT solver is used under the hood. One alloy extension allows models with a temporal character, though technically it does not support temporal logic.
If you wish to explore further, for example, to verify program correctness, then you can look at program verification tools. These are generally based on Hoare logic (for reasoning about pre- and post-conditions), possibly extended with Separation logic (for reasoning about heaps). These logics are generally undecidable, so a certain amount of interaction between the human and the verification tool is required.
Some example tools are:
Verifast
Spec# | {
"domain": "cs.stackexchange",
"id": 413,
"tags": "reference-request, logic, formal-methods, sat-solvers, software-verification"
} |
Why are there six reading frames if only one strand of DNA is referred to as the ‘coding strand’? | Question: We need to consider six reading frames when considering the potential of DNA to encode protein (three frames for each strand). But only one strand is transcribed into RNA — the so-called coding strand. It would therefore seem to me that there are actually only three reading frames to consider. Why, then, do people refer to six?
Another point concerning reading frames is the definition of Open Reading Frame — ORF. One text defines ORF as:
“An ORF is a continuous stretch of codons beginning with a start codon
(usually AUG) and ending with a stop codon”
whereas another text defines it as
“An ORF is a continuous stretch of codons that do not contain a stop
codon (usually UAA, UAG or UGA)”
It seems to me that the first definition is correct. Which is the generally accepted definition for ORF?
Answer: In my opinion this question reflects two things:
The difficulty students have in appreciating the historical experimental concerns of research workers in an area that is now well understood, and, hence, how it influenced the coining of new technical terms.
The way that the use of terms has changed with time as old concerns disappear and new ones arise. Thus, a term originally used in one sense may have subsequently been adopted to mean something else, even if this does not appear strictly logical.
What is a coding strand?
This is the crux of the first question, and the answer is that the term ‘coding strand’ does not mean anything (or at least is ambiguous) without context. Thus, I think the poster is assuming a genome context, and this is the fallacy of her argument.
If one talks (or thinks) about the ‘coding strand’ of a DNA genome, one is assuming that because many DNA genomes are double-stranded, if you separated the two strands (e.g. of a small DNA virus) and performed a conceptual translation (decoded the DNA into amino acids using a genetic code with T rather than U) one strand would have all the information for the genes and the other strand would have none. Another way of saying this is that you are assuming that all the genes in the genome have the same directionality (arrow direction on a genome diagram, such as that from E. coli, below). This is hardly ever the case. (The only examples I can think of are the genomes of single-stranded RNA viruses.)
So if you use the term ‘coding strand’, you must state that this is in the context of a single gene. Each strand of DNA in a genome (e.g. E.coli) will contain sections of DNA that are coding and some that are non-coding in terms of conceptual translation. (If you look at papers describing early isolations of genes you will find the words “coding strand” generally qualified by “of the gene”
But cDNA has just one coding strand…
Historically one of the concerns was to sequence eukaryotic cDNAs, DNA copies of mRNAs. These would be monocistronic, i.e. encode a single protein. So here one strand would be coding and the other non-coding. Was it possible to reduce to three the number of reading frames it was necessary to analyse in this case? No! The fact that only one strand is coding was no help at all as there was no way of knowing which this was in the cDNA being sequencing. Likewise for a the fragment of any gene. You had sequenced a piece of DNA, cloned into some plasmid vector and there was no way of telling which strand the sequence you read out originated from. Hence you need to translate it in all six reading frames to find potential amino acid sequences.
So what is an Open Reading Frame?
Open Reading Frame is a term that is often used today in a manner distinct from the way it was used when it was coined. At the time it was coined one would be sequencing short stretches of cDNAs or virus or bacterial genes and there was a low likelihood that one would be sequencing through the C-terminus, i.e. the stop codon of the gene. One’s concern was to concentrate on reading frames that were not interupted by stop codons. This original usage is reflected in the definition of Open Reading Frame in Wikipedia:
“An ORF is a continuous stretch of codons that do not contain a stop codon (usually UAA, UAG or UGA)”
However as knowledge increased and technology improved, the focus switched to discovering genes in the genomic or long partial genomic sequences of organisms. Now one was working with long DNA sequences containing many whole genes. The focus became finding potential genes based on start and stop codons (and a cut-off length). This is reflected in the documentation for the EMBOSS program, getorf:
“An ORF may be defined as a region of a specified minimum size between two STOP codons, or between a START and a STOP codon.”
Note that even this last definition is ambiguous.
Which is correct? That is a concern of students. In the real world one must recognize that the meaning of expressions can change. If there is any ambiguity — as here — one must define the way in which you are using the expression. | {
"domain": "biology.stackexchange",
"id": 6660,
"tags": "codon, orf"
} |
E: Unable to locate package ros-jade-desktop-full | Question: I want to install ROS on my Xubuntu 16.04, Xenial Xerus. I have followed the ROS's site instruction: http://wiki.ros.org/jade/Installation/Ubuntu, and did the following: First, setup my sources.list:
sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'
Second, set up keys:
sudo apt-key adv --keyserver hkp://ha.pool.sks-keyservers.net:80 --recv-key 0xB01FA116
Then, make sure my package index is up to date:
sudo apt-get update
Last, try to install ROS jade:
sudo apt-get install ros-jade-desktop-full
And get this error:
E: Unable to locate package ros-jade-desktop-full
Where did I go wrong, and how can I get ROS (any version is ok) running on my Xubuntu 16.04?
Answer: On the page you indicated, http://wiki.ros.org/jade/Installation/Ubuntu, it is said that Jade only supports Ubuntu 14.04, 14.10 and 15.04, so I don't think it is available yet for the 16.04 version.
Either try to see via auto-completion if there is any other version available,
or compile from source.
You will have better answers on the ros site http://answers.ros.org/question/226098/ros-on-ubuntu-xenial-1604/ | {
"domain": "robotics.stackexchange",
"id": 1058,
"tags": "ros"
} |
How to improve the influence of one element of the input on the latent code in an autoencoder? | Question: I am trying to apply an autoencoder for feature extraction with an input like I=[x1,x2,x3,...,xn]. Representing the latent code after encoding as L, I want to improve the influence of one element of the input, such as x1, on L. My intention is that when x2,x3,...,xn remain constant, a small change in x1 can lead to a huge change in the code L. So what kind of autoencoder structure should be adopted to achieve this purpose? Thanks.
Answer: You could add a new term to the loss that enforces precisely what you said: compute a modified version of the batch by adding a very small variation to the $x_1$ component, compute the distance $D$ between the latent code of the original batch and that of the variation, and add a new term to the loss along the lines of $L_D = 1/D$.
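As a toy illustration of that term (my own sketch; the linear `encode` below is only a stand-in for a real encoder network):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))                 # stand-in for trained encoder weights

def encode(batch):
    return batch @ W.T                      # latent code L, shape (batch, 4)

batch = rng.normal(size=(32, 8))
bumped = batch.copy()
bumped[:, 0] += 1e-3                        # tiny variation in x1 only

# D: how far the latent code moves when only x1 changes
D = np.linalg.norm(encode(bumped) - encode(batch), axis=1).mean()
L_D = 1.0 / D                               # penalty: insensitive code -> large L_D
# the total objective would then be: reconstruction_loss + alpha * L_D
```

With a real network, $D$ would be recomputed per batch inside the training loop so that the gradient of $L_D$ flows back into the encoder weights.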
To combine the original loss and the new term, you can add them together with a weight $L + \alpha L_D$. To determine $\alpha$, you can try with different values, ensuring that the gradients of each term are not too different in magnitude. | {
"domain": "datascience.stackexchange",
"id": 12126,
"tags": "feature-engineering, feature-extraction, autoencoder"
} |
What is the heat sink in an internal combustion engine? | Question: I believe that the heat source is the spark plug because it heats up the fuel-air mixture and that the working substance is the fuel-gas mixture. Then, I know that the explosion created by the fuel forces the piston down, turning the wheels. However, I cannot identify the heat sink in an internal combustion. What does the fuel transfer its heat to to do work?
Answer: The spark plug merely initiates the chemical reaction of gasoline combining with oxygen. This reaction is what produces the heat. The combustion temperature inside the engine is typically $800-1200 ^\circ \text{C}$. The surrounding environment is the heat sink, whatever the outside temperature is. Some small engines are cooled directly by the air with fins, and most large engines have a liquid cooling system with a radiator. Either way, all of the heat produced by the engine finds its way to the air outside, either through cooling fins, a radiator, or the exhaust gases. | {
"domain": "physics.stackexchange",
"id": 26038,
"tags": "thermodynamics, heat-engine"
} |
Where does the sun set? | Question: It's commonly said that the sun sets in the west.
However, from my balcony I can clearly see that it sets significantly farther south during winter, and much farther north in the summer.
Why is this? Where does it set?
Answer: Where it sets depends on 1) your latitude and 2) the Sun's declination, which varies throughout the year between -23° (December) and +23° (June). Consider the following pictures, taken from A Quick Guide to the Celestial Sphere by Jim Kaler.
Objects that are on the celestial equator (like the Sun on approximately March 20th and September 23rd), rise in exactly East and set in exactly West:
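Quantitatively, the rise/set direction follows from $\cos A = \sin\delta/\cos\varphi$, with $A$ the azimuth measured from North, $\delta$ the declination and $\varphi$ the latitude. A small numeric sketch (my own addition; it ignores refraction and is only valid equatorward of the polar circles):

```python
import math

def sunrise_azimuth(lat_deg, dec_deg):
    """Azimuth of sunrise from North toward East; sunset is mirrored in the West."""
    lat, dec = math.radians(lat_deg), math.radians(dec_deg)
    return math.degrees(math.acos(math.sin(dec) / math.cos(lat)))

print(round(sunrise_azimuth(50, 0), 1))            # 90.0: due East at the equinoxes
print(round(90 - sunrise_azimuth(50, 23.44), 1))   # ~38.2 deg north of East/West in June
print(round(90 - sunrise_azimuth(50, -23.44), 1))  # ~-38.2: south of East/West in December
```

At higher latitudes $\cos\varphi$ shrinks, so the same declination swings the rise and set points further from due East/West, which is why the seasonal shift you see from a balcony grows the farther you live from the equator.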
However, over the year the Sun moves toward the southern celestial pole (denoted SCP) in the northern-hemisphere winter and toward the northern one (denoted NCP) in the summer. That circle then shifts, and you see the points where it crosses the horizon (corresponding to sunset and sunrise) move to the South or to the North as well: | {
"domain": "earthscience.stackexchange",
"id": 2699,
"tags": "sun, seasons"
} |
Voltage calculation at any point between two plates at different voltage | Question: I have two plates (actually they are electrical connectors on a board) and I try to find the way of getting the $V$ value at any point between two plates, $A$ and $B$.
A is at $V_o$ (in my case it is at $6\ \mathrm{kV}$) and B is at $V=0$.
distance between $A$ and $B$ is $d$.
At any point at $x$ distance from $A$.
1.- I know $E=-\Delta V/d$ --> $V$ at any point $p$, i.e. $V_p = -\int \vec{E}\cdot\mathrm{d}\vec{r}$
2.- I have found that between two plates: $$V(d) = \frac{Q}{4\pi\epsilon_od}$$
How can I use them for getting $V$ value at any distance $(x)$ between A and B conductors?
Answer: If we ignore complicated edge effects, the E-field between the two plates is constant, so the potential difference changes linearly between them. In practice, $E = \frac{V_0}{d_0}$ (where $V_0$ is the potential difference between the plates and $d_0$ is the distance between them) is the slope of the potential as a function of distance from one of the plates. | {
"domain": "physics.stackexchange",
"id": 58923,
"tags": "electric-fields, voltage, distance"
} |
Deexcitation times for ytterbium | Question: I need to find the deexcitation times for the transitions found in Figure 1 of Nature Phys. 8, 649 (2012), arXiv:1206.4507.
That is, what is the deexcitation time for the following transitions:
$$ ^2P_{1/2} \rightarrow {}^2S_{1/2} $$
$$ ^2P_{1/2} \rightarrow {}^2D_{3/2} $$
$$ ^2D_{3/2} \rightarrow {}^2S_{1/2} $$
$$ ^2D_{5/2} \rightarrow {}^2F_{7/2} $$
I've searched on google for pretty much everything I can think of, but I was not able to find a data table with these deexcitation times.
Answer: Most of what's making life difficult is that you're using the wrong terminology. The term "deexcitation" can be understood by a human but it is not standard or recommended, and it definitely won't be understood by a machine. What you're looking for are more normally called the state and transition lifetimes and probabilities.
The place to look is the NIST Atomic Spectra Databases, particularly the one for atomic lines. I can't link to the Ytterbium page (it's a dynamic page), but the $A_{ki}$ are the data you want - they are the transition probabilities, in $\mathrm s^{-1}$, for the transition. Invert those to get the lifetimes. If the information you want is not on there, there is a wealth of bibliographic information lying about the site which can probably help you find what you need.
For the specific transitions listed in the diagram, the NIST database only lists two, ${}^2 P_{1/2}\to{}^2 S_{1/2}$ and ${}^3 [3/2]_{3/2}\to{}^2 S_{1/2}$ (which you took down as ${}^2D_{3/2}\to{}^2 S_{1/2}$, but that is incorrect) for the Yb II spectra (note that this is the ytterbium ion, not the neutral):
\begin{align}
\text{Transition} & & \text{Wavelength} & & A_{ki} & & A_{ki}^{-1}\\\hline
{}^2 P_{1/2}\to{}^2 S_{1/2} & & 369\:\mathrm{nm} & & 123\:\mathrm{\mu s}^{-1} & & 8.13 \:\mathrm{ns}
\\
{}^3 [3/2]_{1/2}\to{}^2 S_{1/2} & & 297\:\mathrm{nm} & & 26.1\:\mathrm{\mu s}^{-1} & & 38\:\mathrm{ns}
\end{align}
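Inverting $A_{ki}$ to obtain the lifetime is just arithmetic; a quick sketch (my own addition) reproducing the last column of the table:

```python
def lifetime_ns(A_ki):
    """Lifetime in ns from a transition probability A_ki given in s^-1."""
    return 1e9 / A_ki

print(round(lifetime_ns(123e6), 2))   # 8.13  (the 369 nm line)
print(round(lifetime_ns(26.1e6), 1))  # 38.3  (the 297 nm line)
```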
If you want to go beyond this you can click on the reference on the far-right corner, which then lets you find all the bibliography on the species in question. Using this you can find, for example, Phys. Rev. A 60, 2829 (1999), which gives
\begin{align}
\ \ \ \ \ \ {}^2 D_{5/2}\to{}^2 F_{7/2} & &\ \ \ \ \ \ \ 3.43\:\mathrm{\mu m} & & 0.905\:\mathrm{\mu s}^{-1} & & 1.10\:\mathrm{\mu s}
\end{align}
Note that this is out of the wavelength range in the NIST database.
You can get the wavelength of the final transition, ${}^2 P_{1/2}\to{}^2 D_{3/2}$, using energy level data from Atomic Energy Levels - The Rare-Earth Elements by Martin, Zalubas and Hagan, and it comes out as $2.4\:\mathrm{\mu m}$, also outside of the range in the NIST database. If you're happy with a theoretical calculation then J. Phys. B: At. Mol. Opt. Phys. 45 145002 (2012) gives estimates from $A_{ki}= 47\:\mathrm{ms}^{-1}$ to $2.98\:\mathrm{\mu s}^{-1}$, which is not very comforting, but the real issue with this transition is that it is very hard to measure.
In particular, if you put an ytterbium ion in the ${}^2P_{1/2}$ state, what it will do almost immediately is decay via the dipole transition to the ground state, ${}^2S_{1/2}$, with very little of the population ending up in the metastable ${}^2D_{3/2}$ state. What you care about, then, is the branching ratio in the decay of the ${}^2P_{1/2}$ state, which is measured at $0.0005$ by Phys. Rev. A 76, 052314 (2007), from an overall lifetime of $8.07\:\mathrm{ns}$. | {
"domain": "physics.stackexchange",
"id": 25636,
"tags": "energy, electrons, atomic-physics, atoms"
} |
Could gravity be an emergent property of nature? | Question: Sorry if this question is naive. It is just a curiosity that I have.
Are there theoretical or experimental reasons why gravity should not be an emergent property of nature?
Assume a standard model view of the world in the very small. Is it possible that gravity only applies to systems with a scale large enough to encompass very large numbers of particles as an emergent property?
After all: the standard model works very well without gravity; general relativity (and gravity in general) has only been tested down to distances on the millimeter scale.
How could gravity emerge? For example, it could be that space-time only gets curved by systems which have measurable properties, or only gets curved by average values. In other words that the stress-energy tensor has a minimum scale by which it varies.
Edit to explain a bit better what I'm thinking of.
We would not have a proper quantum gravity as such. I.e. no unified theory that contains QM and GR at the same time.
We could have a "small" (possibly semi-classical) glue theory that only needs to explain how the two theories cross over:
the conditions and mechanism of wave packet reduction (or the other corresponding phenomena in other QM interpretations, like universe branching or decoherence or whatnot)
how this is correlated to curvature - how GR phenomena arise at this transition point.
Are there theoretical or experimental reasons why such a reasoning is fundamentally incorrect?
Answer: I'm not an expert in gravity, however, this is what I know.
There's a hypothesis about gravity being an entropic property. The paper from Verlinde is available at arXiv. That said, I would be surprised for this to be true. The reason is simple. As you probably know, entropy is an emergent property arising out of statistical probability. If you have non-interacting, adimensional particles in one half of a box, with the other half empty and separated by a valve, it's probability, thus entropy, that drives the transformation. If you look at it from the energetic point of view, the energy is exactly the same before and after the transformation.

This works nicely for statistical distributions, but when you have to explain statistically why things are attracted to each other, it's much harder. From the probabilistic point of view, it would be the opposite: the more degrees of freedom your particles have, the more entropy they have. A clump has fewer degrees of freedom, hence less entropy, meaning that, in a closed system, the existence of gravity is baffling. This is my own speculation, and I think I am wrong. The paper seems to be a pleasure to read, but I haven't had the chance to go through it. | {
"domain": "physics.stackexchange",
"id": 45048,
"tags": "general-relativity, gravity, cosmology, standard-model, research-level"
} |
Is it possible to have a 4-coloring for a non-planar graph ? | Question: I have been working on this thread Grid $k$-coloring without monochromatic rectangles, and I am aware that the four color theorem implies that all planar graphs are four colorable.
The question is whether this is a necessary condition as well, i.e. whether a graph being non-planar implies that it is not four-colorable?
Answer: Obviously not. A graph is bipartite if and only if it is 2-colorable, but not every bipartite graph is planar ($K_{3,3}$ comes to mind). | {
"domain": "cstheory.stackexchange",
"id": 109,
"tags": "graph-theory, graph-colouring"
} |
Is there an inherently ambiguous language which cannot be recognized by a deterministic LBA? | Question: Is there an inherently ambiguous language which cannot be recognized by a deterministic LBA?
For example, take $L=\{wv: w,v\in(x|y)^*, w=w^R, v=v^R\}$; is there any deterministic LBA that recognizes $L$?
Answer: Every context-free language, whether inherently ambiguous or not, is recognized by some deterministic LBA. This is Exercise 9.8 (a) in Hopcroft and Ullman's Introduction to Automata Theory, Languages, and Computation. Even more is true; see Exercise 9.8 (b). | {
"domain": "cstheory.stackexchange",
"id": 4178,
"tags": "complexity-classes, fl.formal-languages, computability, automata-theory"
} |
Is this a correct implementation of quicksort in C? | Question: I'm learning C at the moment and chose implementing quicksort as an exercise.
My code sorts correctly, but afterwards I looked at some online tutorials about quicksort, and they implement it differently.
I don't know if I misunderstood quicksort, or if I simply implemented it differently.
How I understood quicksort:
Choose pivot
bin elements according to their size (bigger/smaller than pivot)
repeat till array is sorted
I use middle element as pivot to avoid standard worst cases.
My code:
#include <stdio.h>
#include <math.h>
void quicksort(unsigned int* array, unsigned int length)
{
if (length <= 1)
{
return;
}
unsigned int length_tmp = length/2;
//pivot is middle element
unsigned int pivot = array[length_tmp];
unsigned int array_tmp[length];
unsigned int length_small = 0;
unsigned int length_big = 0;
//binning array - bigger / smaller
//left of pivot
for (unsigned int i = 0; i < length_tmp; i++)
{
if (array[i] < pivot)
{
array_tmp[length_small] = array[i];
length_small++;
}
else
{
array_tmp[length-1-length_big] = array[i];
length_big++;
}
}
//right of pivot
for (unsigned int i = length_tmp+1; i < length; i++)
{
if (array[i] < pivot)
{
array_tmp[length_small] = array[i];
length_small++;
}
else
{
array_tmp[length-1-length_big] = array[i];
length_big++;
}
}
//inserting pivot into temporary array
array_tmp[length_small] = pivot;
//copying values into array
for (unsigned int i = 0; i < length; i++)
{
array[i] = array_tmp[i];
}
//recursive function calls
quicksort(array+0, length_small);
quicksort(array+length_small+1, length_big);
return;
}
int main()
{
unsigned int array[] = {1,2,3,7,8,9,6,5,4,0};
unsigned int length = sizeof array / sizeof array[0]; //alternative: sizeof array / sizeof *array
//printing array
printf("unsorted array: ");
for (unsigned int i = 0; i < length; i++)
{
printf("%d", array[i]);
}
printf("\n");
//calling sorting function
quicksort(array, length);
//printing array
printf("sorted array: ");
for (unsigned int i = 0; i < length; i++)
{
printf("%d", array[i]);
}
printf("\n");
return 0;
}
I had posted this before on Stack Overflow, but was informed of my mistake and that I should post it here. I didn't get an answer to my question in that short time, but I did get a few pointers regarding my code, which I tried to incorporate into my solution.
Answer:
The most valuable feature of quicksort is that it sorts in-place, without a temporary array. The average space complexity of quicksort is logarithmic, whereas that of your solution is linear. The time complexity is also hurt by the copying from the temporary array back to the original.
NB: If you can afford a linear temporary, don't use quicksort. Mergesort will win hands down: it doesn't have a worst case, and besides it is stable.
Choosing the middle element for the pivot, while harmless, doesn't avoid the worst case. The performance of quicksort is affected not by where the pivot is chosen, but by where it lands after partitioning. The worst case arises when pivots consistently land near the edges of the array.
I don't see the need to treat left of pivot and right of pivot separately:
Swap the pivot with the first element (`array[0]` now holds the pivot)
Partition the entire [1..length) range in one pass
Swap the `array[0]` (which holds the pivot) with `array[length_small]`.
The length of the array should be `size_t`. There is no guarantee that `unsigned int` is wide enough to represent the size of a very large array.
The code before the recursive call implements an important algorithm, namely partition and deserves to be a function of its own. Consider
void quicksort(int *array, size_t length)
{
if (length <= 1) {
return;
}
size_t partition_point = partition(array, length);
quicksort(array, partition_point);
quicksort(array + partition_point + 1, length - partition_point - 1);
}
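The `partition` step itself (pivot to the front, one scanning pass, pivot into its final slot) can be sketched as follows. This is my own sketch, written in Python for brevity, and it keeps the middle element as pivot like the original code:

```python
def partition(a, lo=0, hi=None):
    """Lomuto-style partition of a[lo:hi]; returns the pivot's final index."""
    if hi is None:
        hi = len(a)
    mid = lo + (hi - lo) // 2
    a[lo], a[mid] = a[mid], a[lo]          # 1. move the pivot to the front
    pivot, store = a[lo], lo
    for i in range(lo + 1, hi):            # 2. one pass over the rest
        if a[i] < pivot:
            store += 1
            a[store], a[i] = a[i], a[store]
    a[lo], a[store] = a[store], a[lo]      # 3. pivot into its final slot
    return store

def quicksort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a)
    if hi - lo <= 1:
        return
    p = partition(a, lo, hi)
    quicksort(a, lo, p)
    quicksort(a, p + 1, hi)
```

A C version has exactly the same shape, with a small `swap` helper instead of tuple assignment.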
Further down, you may want to implement two improvements:
Recursion cutoff: when the array becomes small enough, insertion sort performs better
Tail call elimination (not really necessary, C compilers are good at it) | {
"domain": "codereview.stackexchange",
"id": 38040,
"tags": "c, quick-sort"
} |
Phospholipid Bilayer structural reversal | Question: What would happen if the phospholipids in the phospholipid bi-layer were reversed, the fatty acid tails now facing outwards and the phosphate heads facing inwards? I'm assuming this will not affect the protein channels, but perhaps the loss of cholesterol in the structure of the bi-layer. Would this then mean that the fluid mosaic model no longer holds?
Answer: This would have quite dramatic consequences. The layers are ordered in the way they are because of their polarity. In this arrangement, the hydrophobic tails are inside and directed towards each other, and the hydrophilic heads are oriented to the outside and inside. Since both sides of the membrane are surrounded by aqueous solutions, this is necessary to allow contact between solution and cell membrane and to allow an exchange of molecules between them. If the layers were oriented the other way, this contact and exchange wouldn't be possible. Protein channels in the membrane wouldn't be possible either, since the transmembrane domains are composed preferably of amino acids with hydrophobic sidechains, while the domains on the outside of the membrane contain more hydrophilic amino acids.
A turn like this would need a completely different composition of life - meaning it couldn't be based on water like it is. | {
"domain": "biology.stackexchange",
"id": 4364,
"tags": "biochemistry"
} |
Does Orange scale the data automatically for the linear regression with Ridge regularization | Question: I'm using the linear regression tool with the Ridge regularization. To use the Ridge regularization, I have to scale the data first. Does Orange scale the data automatically? I can't find any information about this mentioned in Orange's documentation for Ridge regularization.
In Python's scikit-learn, I have to scale the data manually before using Ridge regression. In MATLAB, the scaling is included in the ridge function. So, do I have to scale the data manually before using Ridge regression in Orange?
Thanks for your help.
Answer: Not by default, no, as shown by the `normalize=False` default here:
class Orange.regression.linear.RidgeRegressionLearner(alpha=1.0,
fit_intercept=True, normalize=False, copy_X=True, max_iter=None,
tol=0.001, solver='auto', preprocessors=None) | {
"domain": "datascience.stackexchange",
"id": 3873,
"tags": "linear-regression, orange, feature-scaling, regularization"
} |
Fourier series of cycloid | Question: What is the Fourier series representation of a cycloid?
The parametric representation of the curve is as follows.
$$
t=\dfrac{\theta-\sin\theta}{\pi}\\
x=\dfrac{1-\cos\theta}{\pi}
$$
The period is $2$, so the coefficients of the complex exponential Fourier series should be
$$
c_n=\dfrac{1}{2}\int_0^2\!x(t)e^{-jn\pi t}\,\mathrm{d}t
$$
and that's where I got stuck. I performed the substitution
$$
t=\dfrac{\theta-\sin\theta}{\pi}\implies\mathrm{d}t=\dfrac{1-\cos\theta}{\pi}\,\mathrm{d}\theta\\
c_n=\dfrac{1}{2}\int_0^{2\pi}\!\left(\dfrac{1-\cos\theta}{\pi}\right)^2e^{-jn(\theta-\sin\theta)}\,\mathrm{d}\theta
$$
but don't know how to integrate this monster. Is there any other way?
Answer: The Fourier series of the cycloid can be expressed in terms of the Bessel functions of the first kind:
$$J_n(x)=\frac{1}{\pi}\int_0^{\pi}\cos(nt-x\sin t)dt,\qquad n\in\mathbb{Z}\tag{1}$$
Using the cycloid parameterization
$$y(t)=1-\cos t,\qquad x(t)=t-\sin t\tag{2}$$
which results in a period of $2\pi$ and a maximum value of $2$, the Fourier series of $y(t)$ as a function of $x$, referred to as $f(x)$, is given by
$$\bbox[#f8f1ea, 0.6em, border: 0.15em solid #fd8105]{
\begin{align}f(x)&=\frac32+\sum_{n=1}^{\infty}\frac{J_{n+1}(n)-J_{n-1}(n)}{n}\cos(nx)\\&=\frac32-2\sum_{n=1}^{\infty}\frac{J'_{n}(n)}{n}\cos(nx)\end{align}}\tag{3}$$
where $J'_n(x)$ is the derivative of $J_n(x)$ w.r.t. $x$.
The figure below shows the cycloid and its Fourier series approximation according to $(3)$ using the first $20$ coefficients in the sum:
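As a numerical sanity check (my own addition, not part of the original derivation), one can compute the $a_n$ by direct trapezoidal integration of the integral in $(8)$ below, with no Bessel functions at all, and verify that the $20$-term sum reproduces the cycloid's maximum $f(\pi)=2$:

```python
import math

def coeff(n, steps=4000):
    """Fourier coefficient a_n by trapezoidal integration of (5)/(8)."""
    h = math.pi / steps
    s = 0.0
    for k in range(steps + 1):
        t = k * h
        g = (1 - math.cos(t)) ** 2 * math.cos(n * (t - math.sin(t)))
        s += g * (0.5 if k in (0, steps) else 1.0)
    s *= h
    return s / math.pi if n == 0 else 2 * s / math.pi

coeffs = [coeff(n) for n in range(21)]
# coeffs[0] should be 3/2, and coeffs[1] should match (13): J_2(1) - J_0(1) ~ -0.6503
f_at_pi = coeffs[0] + sum(coeffs[n] * math.cos(n * math.pi) for n in range(1, 21))
# f_at_pi should be close to 2, the cycloid's maximum at the top of the arch
```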
Proof:
Let's consider the real-valued Fourier series. Since $f(x)$ is even, all sine coefficients vanish and we get
$$f(x)=a_0+\sum_{n=1}^{\infty}a_n\cos(nx)\tag{4}$$
with
$$a_0=\frac{1}{2\pi}\int_{-\pi}^{\pi}f(x)dx\tag{5}$$
and
$$a_n=\frac{1}{\pi}\int_{-\pi}^{\pi}f(x)\cos(nx)dx,\qquad n>0\tag{6}$$
With $dx=(1-\cos t)dt$ we obtain
$$a_0=\frac{1}{2\pi}\int_{-\pi}^{\pi}y(t)(1-\cos t)dt=\frac{1}{2\pi}\int_{-\pi}^{\pi}(1-\cos t)^2dt=\frac32\tag{7}$$
and
$$\begin{align}a_n&=\frac{1}{\pi}\int_{-\pi}^{\pi}(1-\cos t)^2\cos(nt-n\sin t) dt\\&=\frac{2}{\pi}\int_{0}^{\pi}(1-\cos t)^2\cos(nt-n\sin t) dt,\qquad n>0\tag{8}\end{align}$$
where the last equality follows from the fact that the integrand is even.
Expanding
$$(1-\cos t)^2=1-2\cos t+\cos^2 t=\frac32-2\cos t+\frac12\cos 2t\tag{9}$$
and using $\cos(\alpha)\cos(\beta)=\frac12[\cos(\alpha-\beta)+\cos(\alpha+\beta)]$, the integrand in $(8)$ can be rewritten as
$$\begin{align}(1-\cos t)^2\cos(nt-n\sin t)=\frac32\cos(nt-n\sin t) -&\\\big[\cos((n-1)t-n\sin t) + \cos((n+1)t-n\sin t)\big]+\\\frac14 \big[\cos((n-2)t-n\sin t) + \cos((n+2)t-n\sin t)\big]\tag{10}\end{align}$$
Plugging $(10)$ into $(8)$ and using the definition of the Bessel function $(1)$, we can write
$$a_n=3J_n(n)-2\big[J_{n-1}(n)+J_{n+1}(n)\big]+\frac12\big[J_{n-2}(n)+J_{n+2}(n)\big],\qquad n>0\tag{11}$$
This can be further simplified using the recurrence relation (10.6.1) for $J_n(x)$:
$$J_{n-1}(x)+J_{n+1}(x)=\frac{2n}{x}J_n(x)\tag{12}$$
which can be used to eliminate $J_{n-2}(n)$ and $J_{n+2}(n)$ in $(11)$, and which finally results in
$$a_n=\frac{J_{n+1}(n)-J_{n-1}(n)}{n},\qquad n>0\tag{13}$$
Since (10.6.1)
$$J_{n-1}(x)-J_{n+1}(x)=2J'_n(x)\tag{14}$$
where $J'_n(x)$ denotes the derivative of $J_n(x)$ with respect to $x$, the coefficients $a_n$ can also be expressed as
$$a_n=-\frac{2J'_n(n)}{n},\qquad n>0\tag{15}$$
Eqs $(7)$, $(13)$, and $(15)$ establish the result $(3)$. | {
"domain": "dsp.stackexchange",
"id": 9820,
"tags": "fourier, fourier-series, periodic, integration"
} |
Could a reported rainfall pH of 3.1 actually be realistic? | Question: In this Chemistry question Is there any reason to fear personal exposure to rain with a pH of 3.1? I haven't gotten any answer, but there was some discussion about the reality of the measurement.
So I'll ask here: is a sustained rainfall with a pH of roughly 3.1 possible on Earth? Of course it would have to be related to some serious sources of pollution - perhaps intensive coal burning over a large geographic region. I say 'sustained' to rule out any fluke or highly unusual situations. Could I really fill a plastic bucket with a substantial amount of pH 3.1 water by putting it outdoors, under the sky, in the rain, in the right place on Earth?
Oh, for this question I'd like to exclude unusual situations related to volcanic eruptions as well.
Answer: A quick literature search seems to confirm Gordon's estimate, even at the scale of a whole bucket:
[T]he annual mean pH, based upon samples collected weekly during 1970-1971 and weighted proportionally to the amount of water and pH during each period of precipitation, was 4.03 at the Hubbard Brook Experimental Forest, New Hampshire; 3.98 at Ithaca, New York; 3.91 at Aurora, New York; and 4.02 at Geneva, New York. Measurements on individual rainstorms frequently showed values between pH 3 and 4 at all of these locations. Data from the National Center for Atmospheric Research included precipitation pH values as low as 2.1 in the northeastern United States during November 1964.
Patel, C. K. N., E. G. Burkhardt, and C. A. Lambert. "Acid rain: a serious regional environmental problem." Science 184 (1974): 1176-1179. | {
"domain": "earthscience.stackexchange",
"id": 911,
"tags": "rainfall, air-pollution, acid-rain"
} |
What is the $Q_y$ transition in a bacteriochlorophyll? | Question: Bacteriochlorophylls (BChl) are pigments that occur in the photosynthetic mechanisms of bacteria. I am studying some papers on the excitonic properties of BChls, and the term $Q_y$ transition comes up a lot, but I have not found any explanation of its meaning.
Can someone explain to me the meaning of this term?
Answer: The $Q_x$ and $Q_y$ transitions are electronic excitations in the conjugated $\pi$ orbitals of the BChl a molecule. They involve two different sets of conjugated bonds. The $Q_x$ involves a shorter chain of conjugated bonds, so it occurs at a higher energy/frequency. I couldn't find a really good diagram to show which bonds are involved in the $Q_x$ and which in the $Q_y$ excitations, but figure 1 in this paper has a reasonable illustration. | {
"domain": "physics.stackexchange",
"id": 6692,
"tags": "biology, quantum-chemistry, dipole"
} |
Can foxes move their ears independently? | Question: I've read that dogs can move their ears independently, i.e., point one ear in the direction of a sound without having to point the other one at the same time. Is this a common trait with other canids, such as foxes? Can most eared mammals do this, with humans being an exception (except for a few exceptional humans)?
Answer: Yes, they can. Here are a couple pics I found of foxes moving their ears independently:
This is something all canids can do (1)(2). Btw, I can move my ears independently. It's all in finding the right muscles. | {
"domain": "biology.stackexchange",
"id": 2508,
"tags": "zoology, ethology, hearing"
} |
Are CRUD webapps today the modern version of the "expert system"? | Question: From Wikipedia, citations omitted:
In artificial intelligence, an expert system is a computer system that emulates the decision-making ability of a human expert. Expert systems are designed to solve complex problems by reasoning about knowledge, represented mainly as if–then rules rather than through conventional procedural code. The first expert systems were created in the 1970s and then proliferated in the 1980s. Expert systems were among the first truly successful forms of artificial intelligence (AI) software.
An expert system is divided into two subsystems: the inference engine and the knowledge base. The knowledge base represents facts and rules. The inference engine applies the rules to the known facts to deduce new facts. Inference engines can also include explanation and debugging abilities.
CRUD webapps (websites that allows users to Create new entries in a database, Read existing entries in a database, Update entries within the database, and Delete entries from a database) are very common on the Internet. It is a vast field, encompassing both small-scale blogs to large websites such as StackExchange. The biggest commonality with all these CRUD apps is that they have a knowledge base that users can easily add and edit.
CRUD webapps, however, use the knowledge base in many, myriad and complex ways. As I am typing this question on StackOverflow, I see two lists of questions - Questions that may already have your answer and Similar Questions. These questions are obviously inspired by the content that I am typing in (title and question), and are pulling from previous questions that were posted on StackExchange. On the site itself, I can filter by questions based on tags, while finding new questions using StackExchange's own full-text search engine. StackExchange is a large company, but even small blogs also provide content recommendations, filtration, and full-text searching. You can imagine even more examples of hard-coded logic within a CRUD webapp that can be used to automate the extraction of valuable information from a knowledge base.
If we have a knowledge base that users can change, and we have an inference engine that is able to use the knowledge base to generate interesting results...is that enough to classify a system as being an "expert system"? Or is there a fundamental difference between the expert systems and the CRUD webapps?
(This question could be very useful since if CRUD webapps are acting like "expert systems", then studying the best practices within "expert systems" can help improve user experience.)
Answer: The key feature of an expert system is that the knowledge base is structured to be traversed by the inference engine. Web sites like Stack Exchange don't really use an inference engine; they do full-text searches on minimally-structured data. A real inference engine would be able to answer novel queries by putting together answers to existing questions; Stack Exchange sites can't even tell if a question is duplicate without human confirmation. | {
"domain": "ai.stackexchange",
"id": 2533,
"tags": "definitions, expert-systems"
} |
What is the correct classification of population groups of Homo sapiens? | Question: The classification system of the United States census into various races of humans is a bad way to classify population groups of humans, because both East Asians and South Asians are classified the same, yet they look different and are genetically distinct. So, has anyone proposed a more scientifically accurate classification of population groups of Homo sapiens?
Answer: Basically all the genetic variation of humans is found within the native African populations. A Norwegian and an Australian native are more closely related to each other than two random native Africans are. So the only accurate way to do it would not only group those two Asians together but also group them with Europeans and Native Americans. If you split people up enough to get two groups of "Asians", then you end up with more than a hundred races, the vast majority of which would be split among the native African populations.
Humans have very low genetic diversity in the first place, and most of our diversity is within our populations not between them, so splitting us up does not make a lot of sense. We end up doing it mostly on superficial variations.
Source | {
"domain": "biology.stackexchange",
"id": 7704,
"tags": "human-biology"
} |
Equivalence of functional and partial derivatives | Question: I am trying to derive Newton's second law from the principle of least action, that is, setting the functional derivative $\frac{\delta S}{\delta x(t)}$ equal to 0.
$$S = \int dt' \left[ \frac{m}{2} \left( \frac{dx}{dt'} \right)^2 - V(x(t')) \right] \tag{1} $$ So,
\begin{align}
\frac{\delta S}{\delta x(t)} &= \int dt' \left[ \frac{m}{2} \frac{\delta}{\delta x(t)} \left( \frac{dx}{dt'} \right)^2 -\frac{\delta V(x(t'))}{\delta x(t)} \right] \tag{2} \\
&= \int dt' \left[ m \frac{dx}{dt'}\frac{d}{dt'}\delta(t-t') - \frac{\delta V(x(t'))}{\delta x(t')}\frac{\delta x(t')}{\delta x(t)} \right] \tag{3} \\
&= - \int dt' \left[ m \frac{d^2x}{dt'^2}\delta(t-t') + \frac{\delta V(x(t'))}{\delta x(t')} \delta (t-t') \right] \tag{4} \\
&= -\left[ m \frac{d^2x}{dt^2} + \color{Red}{\frac{\delta V(x(t))}{\delta x(t)}} \right]. \tag{5} \\
\end{align}
Now that I have calculated $(5)$ and set the variation of the action equal to zero, I know that $\frac{\delta V(x(t))}{\delta x(t)}$ must be the same as $\frac{\partial V(x(t))}{\partial x(t)}$ in order to reproduce Newton's second law. How does the functional derivative turn into the partial derivative in this case?
Note: to get the second term in $(3)$, I used chain rule, but for functional derivatives.
Answer: The definition of the (integral of the) functional derivative (at least a definition that's good enough for physics-level rigor) comes from the difference between the functional evaluated on a path $x(t)$ plus an arbitrary variation $\epsilon(t)$ and the functional evaluated on the path itself, to leading order in $\epsilon$. In other words,
\begin{equation}
S[x(t)+\epsilon(t)]-S[x]=\int dt \frac{\delta S}{\delta x} \epsilon(t) + O(\epsilon^2)
\end{equation}
The fact that this definition puts the functional derivative inside of an integral is a reflection of the fact that the functional derivative is a distribution, like a Dirac delta function, it is only well defined inside of an integral.
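One way to make this concrete is a finite-dimensional discretization (a hypothetical numerical sketch, not part of the argument itself): replace the integral $\int dt\, V(x(t))$ by a sum over grid points; the functional derivative at $t_i$ then corresponds to the ordinary partial derivative with respect to $x_i$ divided by $\Delta t$, and it reproduces $\partial V/\partial x$ pointwise.

```python
import math

# Hypothetical discretization: S_V[x] = ∫ V(x(t)) dt ≈ sum_i V(x_i) * dt.
# The functional derivative δS_V/δx(t_i) then corresponds to
# (1/dt) * ∂S/∂x_i, which should equal V'(x_i).  Illustrative V(x) = x^3.

def V(x):
    return x ** 3

def S_V(xs, dt):
    return sum(V(x) for x in xs) * dt

dt = 0.1
xs = [math.sin(0.3 * i) for i in range(20)]   # an arbitrary discretized path

eps = 1e-6
for i in range(len(xs)):
    bumped = list(xs)
    bumped[i] += eps
    func_deriv = (S_V(bumped, dt) - S_V(xs, dt)) / (eps * dt)
    analytic = 3 * xs[i] ** 2                 # ∂V/∂x evaluated at x_i
    assert abs(func_deriv - analytic) < 1e-4
print("discretized δS_V/δx matches ∂V/∂x at every grid point")
```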
Now define
\begin{equation}
S_V[x(t)]=\int dt V(x(t))
\end{equation}
Then
\begin{equation}
S_V[x(t)+\epsilon(t)]=\int dt V(x+\epsilon)=\int dt\left( V(x) + \frac{\partial V}{\partial x}\epsilon+O(\epsilon^2)\right)
\end{equation}
Comparing with the definition of the functional derivative, we see we can identify
\begin{equation}
\frac{\delta S_V}{\delta x} = \frac{\partial V}{\partial x}
\end{equation}
which is the statement you need. | {
"domain": "physics.stackexchange",
"id": 24431,
"tags": "lagrangian-formalism, action, variational-calculus, functional-derivatives"
} |
How big of a temperature difference is needed to power a thermoelectric generator? | Question: I'm really curious about this and haven't been able to find a formula to calculate the types of voltages I could generate based on differences in temperature. An example of interest is The PowerPot which can charge a phone by boiling water over a campfire (snow would work better due to a larger difference).
I know that the average temperature of a campfire (well stocked) is roughly $1000^\circ C$ and through several Google searches I have very roughly estimated the average surface temperature of the planet at $15^\circ C$ and am going with my original educated guess that the temperature of the water used is roughly $20^\circ C$. This is a massive difference in temperature and I wonder if it's mandatory.
With some further research into the topic, I have found a formula which hasn't been 100% helpful.
Seebeck Coefficient
You can read more on this one here; however, to sum it up:
$$S = -\frac{\Delta V}{\Delta T}$$
Where $\Delta V$ is the voltage difference between the hot and cold sides, and $\Delta T$ is the temperature difference. The negative sign is from the negative charge of the electron and the conventions of current flow.
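A quick numeric illustration of this relation (all values below are illustrative assumptions, not manufacturer data; a single junction produces only millivolts, so practical modules wire many junctions in series):

```python
# ΔV = -S·ΔT per junction; modules chain many junctions in series.
# All numbers here are illustrative assumptions, not manufacturer data.
S = -200e-6          # assumed effective Seebeck coefficient, in V/K
n_junctions = 127    # assumed number of series-connected junctions

def open_circuit_voltage(T_hot, T_cold):
    """Open-circuit voltage of the whole module for a given temperature pair."""
    return -S * (T_hot - T_cold) * n_junctions

# Boiling water over a campfire: hot side ~100 C, cold side ~20 C.
dV = open_circuit_voltage(100.0, 20.0)
print(f"{dV:.2f} V")   # roughly 2 V before any DC-DC conversion
```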
Is the difference in temperature always required to be this drastic? How do I calculate the correct difference in temperature needed to achieve a specific voltage output? How do I calculate $\Delta V$ when both sides are ambient temperatures instead of physical (can touch it) objects?
Answer: There is a thermoelectric device that generates electricity from the difference between your skin temperature and the surrounding air temperature. | {
"domain": "physics.stackexchange",
"id": 58524,
"tags": "thermodynamics, temperature, estimation, thermoelectricity"
} |
distance travelled by a ball under the action of a spring | Question: I was trying a question which goes like this:
Two balls of the same mass are projected by compressing springs of different force constants k1 and k2 by equal amounts. The first ball is projected upwards along a smooth wall and the other along a rough horizontal floor with coefficient of friction $\mu$. If the first ball goes up by height h, then find the distance covered by the second ball in terms of k1, k2, h and $\mu$.
Since the question is based on springs, I tried to use the work done by spring, gravity and friction and get the answer but ended up with quite a long expression involving m and g terms as well. Any help would be appreciated. Thanks in advance
My try:
Work energy theorem being applied to the first case
$$\frac{1}{2}k_1x^2 - \frac{1}{2}k_1h^2 - mg(x+h) = 0$$
Work energy theorem being applied to second case
$$\frac{1}{2}k_2x^2 - \frac{1}{2}k_2y^2 - \mu mg(x+y) = 0$$
where y is the distance travelled by the second ball. I proceed by solving for x and substituting in the other equation, but however I get the mg term in the answer as well though the answer is not supposed to be in terms of mg.
PS: Moreover, I have a doubt regarding the working principle of a spring: is it correct that the net work done by a spring, in going from a compression of amount x to an elongation of the same amount x, is zero?
Answer: We can use energy conservation. For the first spring the only relevant energy at the beginning is stored in the compressed spring. When the ball reaches the highest point, all energy is transformed to potential energy. Hence you have
$$\frac{1}{2}k_1x^2 = mgh$$
For the second spring, the situation is the same at the beginning, but now we lose the energy due to friction:
$$\frac{1}{2}k_2x^2 = \mu mgd$$
Eliminating $x^2$ and solving for $d$ will give you the answer.
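Carrying out that elimination explicitly gives $d = \frac{k_2 h}{\mu k_1}$, with $m$ and $g$ cancelling, as required. A short numerical cross-check (the values chosen below are arbitrary):

```python
import math

def distance_on_floor(k1, k2, h, mu):
    """d = k2*h/(mu*k1), from eliminating x^2 between
    (1/2) k1 x^2 = m g h  and  (1/2) k2 x^2 = mu m g d."""
    return k2 * h / (mu * k1)

# Cross-check against the two energy balances with arbitrary m, g, x:
m, g = 1.7, 9.81                         # must cancel from the final answer
k1, k2, mu, x = 400.0, 900.0, 0.3, 0.05
h = 0.5 * k1 * x**2 / (m * g)            # height reached by the first ball
d = 0.5 * k2 * x**2 / (mu * m * g)       # distance covered by the second
assert math.isclose(d, distance_on_floor(k1, k2, h, mu))
print("d =", d)
```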
It seems to me that you were on the right track, but got confused when equating the energies. In both equations you have two terms for the energy stored in the compressed spring. Why? You also introduce $y$, without saying what it is. It will definitely help to make a sketch to understand the problem better. | {
"domain": "physics.stackexchange",
"id": 47235,
"tags": "homework-and-exercises, energy, energy-conservation, work"
} |
"Embedding" a language in itself | Question: Main/General Question
Let $L$ be a language. Define the languages $L_i$ with $L_0 = L$ and
$$L_i = \{xwy : xy \in L_{i-1}, w \in L\}$$
for $i \geq 1$. Consider $\hat{L} = \bigcup L_i$. So, we repeatedly "embed" $L$ into itself to obtain $\hat{L}$.
Has $\hat{L}$ been studied? Does it have a name?
Examples/Motivation
As requested in the comments, here are some examples to better illustrate what $\hat{L}$ is. Then, since no one (so far) seems to have seen this notion, I will discuss my motivation for looking at it.
Klaus Draeger beat me to adding examples. I'll put those examples from the comments here for increased visibility since they are good examples.
If $L$ is a unary language, then $\hat{L} = L^+$ (and hence is regular).
If $L = \{ab\}$, then $\hat{L}$ is the Dyck language.
Here is an alternative way to think of $\hat{L}$. Given a language $L$ over an alphabet $A$ we play the following game. We take any $w \in A^*$ and try to reduce $w$ to the empty string $\epsilon$ by repeatedly removing subwords that are in $L$. (Here we need to be a little careful about how we treat the empty string itself to make sure that this is equivalent to the definition above, but this is morally correct.)
Originally I came to define $\hat{L}$ by considering deleting powers in words. Take $L = \{w^3 : w \in A^*\}$ to be the language of cubes over the binary alphabet $A = \{a,b\}$. Then $aaabaabaabbabab \in \hat{L}$ and we can consider the following "$L$-deletion"
$$a(aabaabaab)babab \to ababab \to \epsilon.$$
Observe that not all deletions work:
$$(aaa)baabaabbabab \to baabaabbabab$$
and we are stuck with a cube-free word. So, there is another notion of "strongly $L$-deletable" which in general does not coincide with $\hat{L}$.
One final example: if $L$ is the language of squares over the binary alphabet $A = \{a,b\}$, then $\hat{L}$ is the set of strings with both an even number of $a$'s and an even number of $b$'s. Clearly this condition is necessary. One way to see it is sufficient is to consider deleting squares and recall that every binary word of length 4 or greater contains a square. Here $\hat{L}$ is regular.
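For short words this equivalence is easy to brute-force (a sketch, not part of the original question: memoized recursion that tries deleting every square subword):

```python
from functools import lru_cache
from itertools import product

@lru_cache(maxsize=None)
def in_L_hat(w):
    """True iff w reduces to the empty word by repeatedly deleting
    square subwords uu, i.e. membership in L-hat for L = squares."""
    if w == "":
        return True
    n = len(w)
    for i in range(n):
        for j in range(i + 2, n + 1, 2):     # only even-length subwords
            half = (j - i) // 2
            if w[i:i + half] == w[i + half:j] and in_L_hat(w[:i] + w[j:]):
                return True
    return False

# Over {a,b}, membership should coincide with "even #a and even #b".
for n in range(7):
    for tup in product("ab", repeat=n):
        w = "".join(tup)
        even = w.count("a") % 2 == 0 and w.count("b") % 2 == 0
        assert in_L_hat(w) == even
print("verified for all binary words of length <= 6")
```

The exhaustive search is exponential in the worst case, so this is only a checker for small instances, but it makes the parity characterization concrete.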
For larger alphabets this type of argument fails since there are arbitrarily long square-free words. With alphabets of size $k \geq 3$ I can show $\hat{L}$ is not regular using Myhill-Nerode and the fact that there are arbitrarily long square-free words, but I have not been able to say much more. I was hoping that looking at it in this more abstract way could shed some light on the situation (and this more abstract definition seems interesting in its own right).
Answer: This question is related to the so-called insertion systems.
An insertion system is a special type of rewriting system whose rules are of the form $1 \rightarrow r$ (where $1$ denotes the empty word) for all $r$ in a given language $R$. Let us write $u \rightarrow_R v$ if $u = u'u''$ and $v = u'ru''$ for some $r \in R$. Let us denote by $\buildrel{*}\over\rightarrow_R$ the reflexive transitive closure of the relation $\rightarrow_R$. The closure of a language $L$ of $A^*$ under $\buildrel{*}\over\rightarrow_R$ is the language
$$
[L]_{\buildrel{*}\over\rightarrow_R} = \{ v \in A^* \mid \text{ there exists $u \in L$ such that $u \buildrel{*}\over\rightarrow_R v$} \}
$$
Recall that a well quasi-order on a set $E$ is a reflexive and
transitive relation $\leqslant$ such that for any infinite sequence $x_0,
x_1, \ldots$ of elements of $E$, there are two integers $i < j$ such
that $x_i \leqslant x_j$. The following theorem is proved in [1]:
If $R$ is a finite set of words such that the language $A^* \setminus
A^*RA^*$ is finite, then the relation $\buildrel{*}\over\rightarrow_R$ is a well quasi-order on $A^*$ and $[L]_{\buildrel{*}\over\rightarrow_R}$ is regular.
[1] W. Bucher, A. Ehrenfeucht and D. Haussler, On total regulators generated by derivation relations, Theor. Comput. Sci. 40, 2-3 (1985), 131– 148. | {
"domain": "cstheory.stackexchange",
"id": 3427,
"tags": "reference-request, fl.formal-languages"
} |
primitive recursive functional equivalence | Question: Given two primitive recursive functions is it decidable whether or not they are
the same function? For example, let's take sorting algorithms A and B which are
primitive recursive. While there are many algorithms for sorting they all
describe the same relation. Given two primitive recursive implementations of A,
and B, can they be proven to represent the same function? Please note that this
question is not about unrestricted recursion, and as such not limited by the
properties of Turing machines.
I know that if you have two functions that halt, and have a finite domain they
can be proven to be the same function because you can simply try every possible
input, and compare the output of each function. My confusion is when working
with things working on say natural numbers because they are not finite.
If this is not decidable for the primitive recursive functions is it possible
for weaker classes, say the elementary recursive functions? I also know that
this IS possible for weaker things like finite state machines, and deterministic
pushdown automata. Thanks.
Answer: It is well known that equivalence is undecidable even for CFGs resp. PDAs (see even Wikipedia). This provides a proof that the same property is undecidable for every model of any superset of CFL (by a simple special case reduction).
Since solving the word problem for any given CFL is clearly primitive recursive (by virtue of your favorite parsing algorithm), this includes the set of primitive recursive functions/algorithms. | {
"domain": "cs.stackexchange",
"id": 6250,
"tags": "computability, decision-problem, primitive-recursion"
} |
Why doesn't the gravitational potential energy and the gravitational acceleration yield the same velocity? | Question: This might be a really dumb question, but I am having trouble figuring it out. Imagine a small object revolving around a planet. The acceleration of the object due to the planet is $ \frac F {m_2} = \frac{Gm_1}{r^2}$, so the tangential velocity $v$ is $\sqrt{\frac {Gm_1}{r}}$. I have also learned that the the gravitational potential energy is $\frac {Gm_1m_2}{r}$. If we convert this to kinetic energy $\frac 1 2 m_2v^2$, and solve for $v$ we get $v=\sqrt{2\frac {Gm_1}{r}}$. Where is the 2 coming from? Did I make a mistake in assuming I can convert the potential energy to kinetic energy?
Answer: Yes, that procedure doesn't make sense. If I hold a 1 kg mass 2 meters off of the ground it has about 20 J of potential energy, assuming that I choose the potential energy to be zero at ground level. Can I convert that to kinetic energy in order to determine its current speed?
It's also worth noting that the gravitational potential energy formula is $U(r) = \color{red}{\mathbf-} \frac{Gm_1m_2}{r}$, where we have chosen $U(\infty)=0$.
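Numerically, the two speeds in the question differ by exactly the factor $\sqrt{2}$ (a quick illustrative check; the constants are rounded and the orbital radius is an arbitrary choice):

```python
import math

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2 (rounded)
M = 5.972e24     # mass of the Earth, kg (rounded)
r = 6.771e6      # m; an orbit at roughly 400 km altitude (arbitrary choice)

v_circular = math.sqrt(G * M / r)            # from a = GM/r^2 = v^2/r
v_from_infinity = math.sqrt(2 * G * M / r)   # falling from rest at infinity

print(f"circular orbit speed : {v_circular:.0f} m/s")
print(f"fall from infinity   : {v_from_infinity:.0f} m/s")
assert math.isclose(v_from_infinity / v_circular, math.sqrt(2))
```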
The kinetic energy (and corresponding speed) you have calculated does have an interpretation - it's the speed an object would have once it reached the radial position $r$ if you released it from rest at $R_0\rightarrow \infty$. More carefully, if you release an object from rest in space and let it fall towards Earth, its speed at $r$ increases as the release point gets further away. The speed you calculated is the upper limit of these impact speeds. | {
"domain": "physics.stackexchange",
"id": 75653,
"tags": "energy, gravity"
} |
Is there conservation of momentum if there's conservation of energy? | Question: The equation for conservation of momentum:
$$m_1\vec{v}_1 + m_2\vec{v}_2 = m_1\vec{u}_1 + m_2\vec{u}_2$$
The equation for conservation of energy:
$$\frac 12m_1v_1^2 + \frac 12m_2v^2_2 = \frac 12m_1u_1^2 + \frac 12m_2u^2_2$$
Then if it's an elastic collision on one dimension (conservation of energy and also conservation of momentum):
$$v_1 - v_2 = -(u_1 - u_2)$$
My question is:
If the case is that there is conservation of energy, will there be conservation of momentum?
In both cases, can you explain why?
Also, let's say they give you a question where there is a collision.
How can you prove there is conservation of energy?
Sometimes the questions tell you that a collision was elastic. But if not, how do you prove it?
Answer: Momentum is always conserved in a collision, due to Newton's 2nd and 3rd laws. Prove this to yourself by setting up a problem where there is an interaction between two colliding objects. Since the forces between these objects are equal in magnitude and opposite in direction, it is easily seen that the accelerations experienced by each object during the collision are inversely proportional to the objects' masses. When the velocities are calculated after the collision based on the reasons given above, the total momentum before the collision is seen to be equal to the total momentum after the collision, for all collisions.
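The standard 1D collision formulas make this easy to verify (a sketch; the elastic outcome follows from momentum plus kinetic-energy conservation, the perfectly inelastic one from momentum alone):

```python
def elastic_1d(m1, v1, m2, v2):
    """Post-collision velocities for a 1D elastic collision."""
    u1 = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    u2 = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return u1, u2

m1, v1, m2, v2 = 2.0, 3.0, 5.0, -1.0
u1, u2 = elastic_1d(m1, v1, m2, v2)

p0 = m1 * v1 + m2 * v2
p1 = m1 * u1 + m2 * u2
ke0 = 0.5 * m1 * v1**2 + 0.5 * m2 * v2**2
ke1 = 0.5 * m1 * u1**2 + 0.5 * m2 * u2**2

assert abs(p0 - p1) < 1e-12                 # momentum: always conserved
assert abs(ke0 - ke1) < 1e-12               # kinetic energy: elastic only
assert abs((v1 - v2) + (u1 - u2)) < 1e-12   # relative-velocity relation

# Perfectly inelastic: momentum still conserved, kinetic energy is not.
u = p0 / (m1 + m2)
assert 0.5 * (m1 + m2) * u**2 < ke0
print("momentum conserved in both; KE conserved only in the elastic case")
```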
To determine if a collision is elastic, you necessarily have to calculate the total kinetic energy before the collision and the total kinetic energy after the collision. If and ONLY if the kinetic energy after the collision is equal to the kinetic energy before the collision, can you say that the collision was elastic. | {
"domain": "physics.stackexchange",
"id": 29254,
"tags": "classical-mechanics, energy-conservation, momentum, conservation-laws"
} |
How to solve a polynomial of the form y = ax^3 + bx^2 + cx + d using the incremental algorithm in computer graphics | Question: I am studying Computer Graphics and need to design an incremental algorithm for solving the polynomial $y = ax^3 + bx^2 + cx + d$, and then implement that in OpenGL. The input will be the values of $a, b, c, d$ and the desired output is a line/curve to be drawn. The values of $x$ would be in the range $1\leq x\leq100$. The algorithm needs to be very efficient, hence I am required to use only the addition operation, as multiplication is less efficient.
It would be similar to this technique, but here the polynomial to be considered is the one given above. I have searched a lot on the Internet but cannot find the required solution, because most of the examples solve the equation $y = mx+b$.
Can anyone kindly guide me how to solve it or which method should be applied to solve it?
Answer: You have f(x). Let g(x) = f(x+1) - f(x). Let h(x) = g(x+1) - g(x). Let k(x) = h(x+1) - h(x). It turns out that k(x) is a constant.
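A sketch of this forward-difference scheme in code (the seed values are computed once with multiplications; the loop body then uses only additions, as the question requires):

```python
def cubic_by_forward_differences(a, b, c, d, x_start=1, x_end=100):
    """Yield (x, f(x)) for f(x) = a x^3 + b x^2 + c x + d using only
    additions inside the loop (the third difference k is constant)."""
    f = lambda t: a * t**3 + b * t**2 + c * t + d
    x = x_start
    fx = f(x)                       # seed values, computed once up front
    g = f(x + 1) - fx               # first forward difference
    h = (f(x + 2) - f(x + 1)) - g   # second forward difference
    k = 6 * a                       # third difference of a cubic
    while x <= x_end:
        yield x, fx
        fx += g       # f(x+1) = f(x) + g(x)
        g += h        # g(x+1) = g(x) + h(x)
        h += k        # h(x+1) = h(x) + k
        x += 1

# Cross-check against direct evaluation over the required range.
a, b, c, d = 2, -3, 5, 7
for x, fx in cubic_by_forward_differences(a, b, c, d):
    assert fx == a * x**3 + b * x**2 + c * x + d
print("forward differences match direct evaluation for 1 <= x <= 100")
```

With integer coefficients the scheme is exact; with floating-point coefficients rounding accumulates over long runs, which is worth keeping in mind for rasterization.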
Calculate f(x), g(x), h(x) and k(x) for x = 1. Then you calculate f(x+1) = f(x) + g(x), g(x+1) = g(x) + h(x), h(x+1) = h(x) + k(x), and k(x) is a constant. | {
"domain": "cs.stackexchange",
"id": 11003,
"tags": "algorithms, graphics, polynomials"
} |
General solution to the wave equation in 1+1D | Question: In a book I'm reading it states,
$y(x,t) = f(x+vt) + g(x-vt)$ is a solution to the one dimensional wave equation,
But upon differentiating with respect to time and $x$ twice I arrive at:
$$\frac{\partial^2y}{\partial t^2} = v^2\frac{\partial^2f}{\partial t^2} + v^2\frac{\partial^2g}{\partial t^2}$$
$$\frac{\partial^2y}{\partial x^2} = \frac{\partial^2f}{\partial x^2} + \frac{\partial^2g}{\partial x^2}$$
How can this solve the wave equation if it has different differentials on each side of the equation?
Answer: Indeed this is the general solution to the wave equation. Both f and g are solutions on their own, and their combination is also a solution. We can show this using d'Alembert's approach. You said both sides have different differentials; if we change to new variables that each depend on both x and t, we can obtain the same derivatives on both sides.
Let's take $a = x+vt$ and $b = x-vt$. Now a and b are both functions of $x$ and $t$, $ a = a(x,t), b = b(x,t)$.
Now $y(x,t)$ can be written in terms of functions of $a$ and $b$.
And define a new function which is $\widetilde{y}(a,b) = y(x(a,b),t(a,b))$.
Coming back from $\widetilde{y}$ is also possible.
$y(x,t) = \widetilde{y}(a(x,t),b(x,t))$, just simply put x and t instead of a an b.
Now using above relation let's take a derivative
$$
\frac{\partial y}{\partial x} = \frac{\partial\widetilde{y}}{\partial a}\frac{\partial a}{\partial x} + \frac{\partial \widetilde{y}}{\partial b}\frac{\partial b}{\partial x}
$$
But we know that
$$
\frac{\partial a}{\partial x} = \frac{\partial b}{\partial x} = 1
$$
The second derivative becomes
$$
\frac{\partial^2 y}{\partial x^2} = (\frac{\partial }{\partial a} + \frac{\partial }{\partial b})(\frac{\partial }{\partial a} + \frac{\partial }{\partial b})\widetilde{y} \\
= \frac{\partial^2 \widetilde{y}}{\partial a^2} + \frac{\partial^2 \widetilde{y}}{\partial b^2} + 2\frac{\partial^2 \widetilde{y}}{\partial a \partial b}
$$
By the same token,
$$
\frac{\partial^2 y}{\partial t^2} = v^2(\frac{\partial^2 \widetilde{y}}{\partial a^2} + \frac{\partial^2 \widetilde{y}}{\partial b^2} - 2\frac{\partial^2 \widetilde{y}}{\partial a \partial b})
$$
If we put these equations into our wave equation, $\partial^2 y/\partial t^2 = v^2\,\partial^2 y/\partial x^2$, the $\frac{\partial^2 \widetilde{y}}{\partial a^2}$ and $\frac{\partial^2 \widetilde{y}}{\partial b^2}$ terms cancel and we get
$$
\frac{\partial^2 \widetilde{y}}{\partial a \partial b} = 0
$$
This equation tells us that the derivative of $\widetilde{y}$ with respect to a does not depend on b, or vice versa. Therefore, we can write $\widetilde{y}$ as
$$
\widetilde{y}(a,b) = f(a) + g(b)
$$
where f and g are some arbitrary functions. Simply put x and t instead of a and b to get y.
$$
{y}(x,t) = f(x+vt) + g(x-vt)
$$
So we have just found our general solution.
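As a quick numerical sanity check (the profiles f and g and the speed v below are arbitrary choices), central second differences confirm that any such combination satisfies the wave equation:

```python
import math

v = 3.0                            # arbitrary wave speed
f = lambda u: math.sin(u)          # arbitrary left-moving profile
g = lambda u: math.exp(-u * u)     # arbitrary right-moving profile

def y(x, t):
    return f(x + v * t) + g(x - v * t)

eps = 1e-4
def d2(func, u):
    """Central second difference approximation of func''(u)."""
    return (func(u + eps) - 2.0 * func(u) + func(u - eps)) / eps**2

for x in (-1.0, 0.3, 2.0):
    for t in (0.0, 0.7):
        ytt = d2(lambda s: y(x, s), t)
        yxx = d2(lambda s: y(s, t), x)
        assert abs(ytt - v**2 * yxx) < 1e-4
print("f(x+vt) + g(x-vt) satisfies y_tt = v^2 y_xx")
```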
You can also find more about its history by looking up the 1D wave equation. | {
"domain": "physics.stackexchange",
"id": 62208,
"tags": "waves, differential-equations, calculus"
} |
Is breathing a reflex action or is it an intrinsic process? | Question: The process of breathing is controlled by respiratory centers in the brain stem. Do these centers have an innate activity, i.e., just send out signals to breathing muscles intrinsically, and have the rate and manner in which they do so modified by various regulatory factors?
Or are they driven by imbalances (in levels of oxygen, carbon dioxide, hydrogen ions) like a reflex? Let's say that hypothetically these levels remain static in an acceptable state such that this reflex is no longer needed, would breathing stop since there's no longer a driving motive or would it continue because the respiratory centers have an intrinsic activity?
Answer: While the ultimate purpose of breathing could be considered to be the maintenance of a balance of the substances you are referring to (such as blood oxygen, carbon dioxide, and hydrogen ions), the blood levels of these substances do not directly control the production of action potentials within the motor neurons that promote the contraction of the diaphragm and intercostal muscles.
The propagation of these action potentials is initiated by signals from the medullary respiratory center, specifically the neurons in the dorsal respiratory group (DRG) and the ventral respiratory group (VRG). In the VRG, a complex of neurons known as the pre-Bötzinger complex is responsible for generating the signals that cause the rhythmic muscle contractions involved in breathing:
The respiratory rhythm generator is located in the pre-Bötzinger complex of neurons in the upper part of the VRG. This rhythm generator appears to be composed of pacemaker cells and a complex neural network that, acting together, set the basal respiratory rate.
Vanders Physiology, p473, 15th ed.
So breathing is indeed, as you mention, a process that is controlled by innate neuronal activity but regulated by the concentrations of PO2, PCO2, and H+ concentrations. I recommend you read pages 473 to 477 of Vanders Physiology, which explains these controls in detail, some of which do involve reflexes, say, if O2 concentration in the blood strays too low.
Interestingly, expiration, which is typically a result of actions potentials ceasing and respiratory muscles relaxing, can be controlled by a reflex, known as the Hering-Breuer reflex, during strenuous exercise when the lung is inflated by a large tidal volume. Stretch receptors in the airway in this case are activated, causing the inhibition of inspiratory neurons in the DRG.
Source: Vanders Physiology, 15th ed, section 13.9: "Control of Respiration" | {
"domain": "biology.stackexchange",
"id": 10525,
"tags": "physiology, neurophysiology, respiration, breathing, pulmonology"
} |
Proof for Kolmogorov complexity is uncomputable using reductions | Question: I am looking for a proof that Kolmogorov complexity is uncomputable using a reduction from another uncomputable problem. The common proof is a formalization of Berry's paradox rather than a reduction, but there should be a proof by reducing from something like the Halting Problem, or Post's Correspondence Problem.
Answer: You can find two different proofs in:
Gregory J. Chaitin, Asat Arslanov, Cristian Calude: Program-size Complexity Computes the Halting Problem. Bulletin of the EATCS 57 (1995)
In Ming Li and Paul M.B. Vitányi, An Introduction to Kolmogorov Complexity and Its Applications, it is presented as an exercise (with a hint on how to solve it that is credited to P. Gács by W. Gasarch in a personal communication, Feb 13, 1992).
I decided to publish an extended version of it on my blog. | {
"domain": "cstheory.stackexchange",
"id": 3233,
"tags": "computability, reductions, kolmogorov-complexity"
} |
3D calibration in OpenCV without chessboard images? | Question:
It takes me a long time to get functions to work in OpenCV so I'd like to know whether my overall plan makes sense before I dive into the details of trying to make it happen. (OpenCV 2.3.1, Windows 7, C++) I'd be appreciative of any advice.
Problem:
I work at a skeet range & want to learn 3D information about the flight of the clay targets until they're hit.
Data:
The two cameras (eventually there will be a few more) are yards apart so it's not practical to make a chessboard large enough for them to both see.
There are several trees between 50 and 100 yards on each side of the sloping hill target area that cover the field of view of each camera at least horizontally. I've measured distance to specific spots (like the junction of 1st left limb with trunk) on each of them.
Plan
Put the tree positions into an objectPoints vector as Point3f objects
Find the points they appear on each camera's image at and put those Point2f objects into an imagePoints vector for each camera
Stereo calibrate
Questions
Is my plan even in the ballpark?
If it is
would it be better to calibrate each camera by itself with a chessboard that's feet from the camera then pass the intrinsic and distCoeffs matrices to stereoCalibrate?
If I stereoCalibrate without a chessboard what should I pass as Size to the function?
Thank you for any help.
Charles
Originally posted by Cherry 3.14 on ROS Answers with karma: 41 on 2012-04-07
Post score: 3
Original comments
Comment by Kevin on 2012-04-07:
Cool project ... hope you can get it to work out!
Answer:
1.) Yes, it would be MUCH better to handle the intrinsic calibration of each camera beforehand with a reliable target. If you only have a few manually measured points, you won't have good coverage in your entire image.
2.) Have you considered making a circles grid by gluing targets (I'm assuming they are bright orange or obviously colored) onto a contrasting background? We know that in order for your application to work, you need to be able to detect the targets at a distance. If you have a good intrinsic calibration for each camera, you don't have to have a very large stereo calibration target. You could arrange 3x3 or 4x4 targets onto a board and move to various positions/distances. This way you get a better calibration through many board positions but fewer board points.
You'll also need to consider the specifics of your application. Ultimately, you need these steps:
Good 3D projection model for each individual camera (intrinsic calibration)
Calibration of the position and orientations between multiple cameras (extrinsic calibration)
Detection of the target in the background scene
Optimization between two or more cameras to determine position relative to the camera array.
I don't think that a traditional stereo algorithm will work very well, and if it does, it will be overkill and not easily extensible to multiple cameras.
Likely, you'll want to work in this order:
Use camera_calibration to reliably and accurately calibrate each camera.
Write a detector for the targets in the original, unrectified images. If you have good targets, this will likely just be a color threshold and finding blob centroids.
Perform a calibration by detecting a target grid with multiple cameras. This will give you the transform between each of the cameras. You can use the openCV stereo calibration or the pcl svd transform estimation for this.
Now you can work on detecting the 3D position of a single target.
Convert the coordinates to the rectified image.
Once you know the centroid of the target in rectified coordinates, you can calculate the 3D ray from the camera that goes through the center of the target.
Finally, write some sort of optimizer to output a 3D point from the intersection of the N camera rays. You'll likely not have perfect intersections, so you'll surely have to converge on something close. You may even be able to determine the distance to the target using the measured size in pixels if you have sufficiently high-resolution cameras.
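For that last optimization step, a linear least-squares intersection of the rays is often enough. Below is a sketch in plain Python, independent of OpenCV (in C++ the same 3x3 normal equations could be handed to cv::solve); the camera positions and target are made-up test values:

```python
def closest_point_to_rays(origins, directions):
    """Least-squares 3D point minimizing total squared distance to N rays.
    Normal equations: A p = b with A = sum(I - d d^T), b = sum((I - d d^T) o)."""
    A = [[0.0] * 3 for _ in range(3)]
    b = [0.0] * 3
    for o, d in zip(origins, directions):
        norm = sum(c * c for c in d) ** 0.5
        d = [c / norm for c in d]            # unit direction vector
        for i in range(3):
            for j in range(3):
                m = (1.0 if i == j else 0.0) - d[i] * d[j]
                A[i][j] += m
                b[i] += m * o[j]
    # Solve the 3x3 system by Gauss-Jordan elimination with partial pivoting.
    M = [A[i] + [b[i]] for i in range(3)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(3):
            if r != col:
                factor = M[r][col] / M[col][col]
                M[r] = [a - factor * c for a, c in zip(M[r], M[col])]
    return [M[i][3] / M[i][i] for i in range(3)]

# Two cameras at known positions, both seeing the clay at (10, 20, 30).
cams = [(0.0, 0.0, 0.0), (50.0, 0.0, 0.0)]
target = (10.0, 20.0, 30.0)
rays = [tuple(t - c for t, c in zip(target, cam)) for cam in cams]
p = closest_point_to_rays(cams, rays)
print([round(v, 6) for v in p])   # close to [10.0, 20.0, 30.0]
```

Note that A becomes ill-conditioned when the rays are nearly parallel (cameras seeing the target from almost the same direction), which is another reason to spread the cameras out.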
Originally posted by Chad Rockey with karma: 4541 on 2012-04-07
This answer was ACCEPTED on the original site
Post score: 4
Original comments
Comment by Cherry 3.14 on 2012-04-13:
OK, I'll do calibration on each camera before I calibrate the camera's together. Could this intrinsic calibration be done before the camera is mounted if the camera isn't jostled, the lens rotated or the camera's body torqued when it's mounted?
Whether the cameras are extrinsically calibrated with the object points I've measured (+- inch at hundreds of yards) or with object points on pattern boards that are held at different locations that both cameras can see ...
What does the calibration generate?
I know your answer includes it but I didn't understand how to use what the extrinsic calibration generates to determine a real-world 3D location of a point from 2D image locations on both of those cameras. How do I?
My guess is that StereoCalibration outputs rotation matrix and translation vector. These two items are put into StereoRectify which will output a transformation matrix. This transformation matrix is the input array to PerspectiveTransform which gives the function the values it needs to calculate one 3D location from two 2D image points from those cameras. For every point frame for which I detect a clay target in both cameras (I can actually do object detection) PerspectiveTransform outputs an array of Point3f objects in real-world units from an array of 2D image points. | {
"domain": "robotics.stackexchange",
"id": 8901,
"tags": "calibration, opencv"
} |
Death of a Black Hole ≠ Rebirth? | Question: So we all know that Stephen Hawking proved that black holes do evaporate over time through Hawking radiation. But that energy is released in very small, discrete amounts, and there will be a time in the universe when there are no stars left and we have only black holes. And here is where my question comes in: after the black hole era of the universe, won't we see a small black hole that evaporates fast, and couldn't all of the energy released form stars or more?
Answer: A star needs more than "energy" to form. It needs fuel, in the form of hydrogen.
In this distant era, there will no longer be clouds of hydrogen to collapse to form stars. So there will be no more stars.
The actual amount of energy released by a black hole is large (it is equal to the mass-energy of the black hole). But most of that energy is released very slowly at a very low power. Only a comparatively small amount of energy is released explosively at the end of the black hole's life. Most of the energy is released as long-wavelength photons which dissipate into the universe. At the end of the black hole's life, higher energy photons and other particles will be produced, but nowhere near enough to form a star. | {
"domain": "astronomy.stackexchange",
"id": 7094,
"tags": "black-hole, hawking-radiation"
} |
Buckling of a timber column | Question: I am wondering how the "buckling number" of a timber column is altered by having a vertical element attached to it. Blue line is a roof, green a concrete wall and black is the timber column and the beam.
I am basically asking what happens to "K" as described here:
https://en.wikipedia.org/wiki/Buckling#Columns
or how the effective length of the collumn is altered in the "buckling equation".
Answer: The beam creates a pin constraint where it connects to the column. Therefore take the top section, which is longer, and apply $K=0.699$, which corresponds to a fixed-pinned column.
with
$$P_{\rm cr} = \frac{\pi^2 E I}{ (K L_{23})^2 } $$
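Plugging in numbers (all values below are illustrative assumptions, not taken from the question):

```python
import math

# Illustrative values only (assumptions, not taken from the question):
E = 11e9          # Pa, modulus of elasticity of the timber
b = h = 0.10      # m, a 100 mm x 100 mm square cross-section
L23 = 3.0         # m, length of the upper (fixed-pinned) segment
K = 0.699         # effective length factor for fixed-pinned ends

I = b * h**3 / 12                            # second moment of area
P_cr = math.pi**2 * E * I / (K * L23)**2     # Euler critical load

print(f"I    = {I:.3e} m^4")
print(f"P_cr = {P_cr / 1e3:.1f} kN")         # on the order of 200 kN here
```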
This works even though the connection to the beam is a fixed connection because the beam can deflect easier than it can expand or contract. | {
"domain": "physics.stackexchange",
"id": 94745,
"tags": "forces, classical-mechanics, everyday-life, stress-strain"
} |
Can consuming acidic drinks help kill bacteria in the stomach? | Question: All around us we hear and read articles which suggest that soft drinks or carbonated beverages are acidic and damage our body but we still consume them (or atleast a significant proportion of us do). Why doesn't consuming such drinks during a bacterial infection in stomach help to kill all the bacteria?
Answer: I don’t know if soft drinks can help ward off bacterial infections in the stomach; however, if you’re not taking antibiotics, other acidic compounds can be helpful for that. One of these is apple cider vinegar; you will find many videos on YouTube about it. Another is betaine-HCl, which is actually hydrochloric acid, the same acid we have in our stomach, but it’s more expensive than ACV.
There is a big misconception regarding stomach acid problems and their treatment, and people should first check whether they have low or high acid level in their stomach, as both have almost the same symptoms. You can do that with a simple bicarbonate test.
I was diagnosed with mild GERD (gastroesophageal reflux disease), and I was surprised to learn that, contrary to popular belief, this common digestive problem does not arise from excess acid in the stomach but on the contrary from having too low acid, which causes the valve that we have at the bottom of our esophagus, called LES, to relax, so that the acidic content of the stomach goes up and reaches the delicate mucosa of the esophagus triggering the burning sensation, especially after a heavy meal.
Having low acid is something you want to avoid, not only because you are more prone to digestive problems due to improperly broken down proteins, but also because you are more prone to infections - our food is not 100% sterile; bacteria are everywhere, and even after washing or cooking there will always be some in our food. They don't create problems for us simply because they get killed in the stomach.
The worst part of the GERD epidemic in the Western world, due to excess and bad eating, is that doctors prescribe you acid reducers (so-called proton pump inhibitors; they go by various brand names and contain omeprazole, lansoprazole, etc.) which further lower the acid content of the stomach. Doctors should know that, right? Well, these drugs put you at higher risk of infections, given that in the long term they can lower the acid content in the stomach down to 5% of the normal level - which is like inviting bacteria to a party. As if this were not enough, once you take them it is very difficult to stop and get rid of their side effects, as there will be a rebound effect after a few days, with symptoms even worse than before you started the treatment, so you are literally bound to them. The only way to get rid of them is decreasing the dosage very slowly over a long period of time, usually months, while taking substitutes.
My doctor didn’t give me acid reducers, instead he told me that if I like soft drinks I can try coke zero, which has zero calories but still retains the acidity necessary to raise the stomach acid and help with digestion, and it works! Later I found that the same effect is given by apple cider vinegar and also vitamin C, aka ascorbic acid, and now I tend to prefer the vitamin over the rest. | {
"domain": "biology.stackexchange",
"id": 9750,
"tags": "health, bacterial-toxins, gut-bacteria"
} |
Implementation of a Smart To-do List Algorithm | Question: This is a small program to implement the FVP algorithm (outlined here). I'm still quite new to C++ and don't have a strong grasp of basically any concepts. Concepts I tried to use in this program:
OOD
Header files
Lambda functions
std::list and std::vector
I would be grateful for any suggestions on code style, any bugs you see, and any other advice you might have.
main.cpp
#include "fvpalgorithm.h"
int main() {
FVPAlgorithm algorithm;
algorithm.run();
}
fvpalgorithm.h
#pragma once
#include <string> // Need strings for Task.
#include <vector> // Need to define an object of type vector here, so must include in the .h.
#include <list>
struct Task { // Need to give Task struct a body in header.
std::string taskName;
};
class FVPAlgorithm {
private:
std::list<Task> longList; // List of Tasks
std::vector<std::list<Task>::iterator> shortList; // Vector of iterators to tasks in longList
void addTasks();
void selectTasks(std::list<Task>::iterator startIterator);
void promptToDo(std::list<Task>::iterator task);
//std::list<Task>::iterator compareTasks(std::list<Task>::iterator startIterator);
public:
void run();
void printAllTasks();
void printShortList();
};
fvpalgorithm.cpp
#include "fvpalgorithm.h"
#include <iostream>
/*
----- The algorithm -----
Create a longlist of all tasks.
Add the first task to the shortlist.
Iterate through each task - ask if user would rather do that
than the last task on the shortlist (Which is the first task in the list, in this case)
If user says no, go to next task.
If user says yes, add task to shortlist.
Continue until no tasks left on longlist.
Tell user to complete last task added to shortlist.
When user has completed last task added to shortlist, remove it from the longlist
and begin iterating through longlist again from the index below the task that was just removed.
Ask if the user wants to do it more than the last task on the shortlist.
If the user decides they want to do the last item on the longlist, then just tell them to do the next task
on the shortlist after they finish it (since there are no more tasks on the longlist that they
didn't already turn down in favour of the second-last item on the shortlist).
Allow for items being added to end of list.
-------------------------
*/
void FVPAlgorithm::addTasks() {
std::cout << "Please add task names. Enter q to quit adding tasks." << std::endl;
std::string taskInput = "";
while (taskInput != "q") {
std::getline(std::cin, taskInput);
if (taskInput != "q") {
longList.push_back(Task{ taskInput });
std::cout << "Added task." << std::endl;
}
}
std::cout << "\nFinished adding tasks. The following tasks were added:" << std::endl;
printAllTasks();
}
void FVPAlgorithm::printAllTasks() {
for (std::list<Task>::iterator it = longList.begin(); it != longList.end(); ++it) {
std::cout << it->taskName << std::endl;
}
}
void FVPAlgorithm::printShortList() {
for (std::vector<std::list<Task>::iterator>::iterator it = shortList.begin(); it != shortList.end(); ++it) {
std::cout << (*it)->taskName << std::endl;
}
}
void FVPAlgorithm::selectTasks(std::list<Task>::iterator startIterator) {
auto compareTasks = [this](std::list<Task>::iterator it) {
std::string shortlistedTaskName = shortList.back()->taskName;
char userChoice = NULL;
for (it; it != longList.end(); ++it) {
std::cout << "Would you like to do " << it->taskName << " more than " << shortlistedTaskName << "? (Y/N)" << std::endl;
std::cin >> userChoice;
while (true) {
if (userChoice == 'Y' || userChoice == 'y') { // User wants to do this task more than the current leader.
shortList.push_back(it); // Add this task to the end of the shortlist.
return it; // Returns the task we stopped on.
}
else if (userChoice == 'N' || userChoice == 'n') { break; } // User doesn't want to, move on.
else std::cout << "Please enter Y or N." << std::endl; break;
}
userChoice = NULL;
}
return it;
};
std::list<Task>::iterator latestTaskChecked = compareTasks(std::next(startIterator, 1)); // longList.begin() is the first element of the vector, and then increments by 1, for second element.
while (latestTaskChecked != longList.end()) { // If we didn't go through all of the tasks the first time,
latestTaskChecked = compareTasks(++latestTaskChecked); // Start comparing again from the next task after the one we stopped at.
}
}
void FVPAlgorithm::promptToDo(std::list<Task>::iterator task) {
// Instruct user to do the given task.
std::cout << "You should do " << task->taskName << ". Enter anything when done." << std::endl;
std::string doneTask;
std::cin >> doneTask;
std::cout << "Good job!" << std::endl;
}
void FVPAlgorithm::run() {
// Add tasks to the longlist.
addTasks();
// Begin algorithm loop.
while (!longList.empty()) { // While we still have tasks left to do,
if (shortList.empty()) { // If we have nothing on the shortlist,
shortList.push_back(longList.begin()); // Add the first task to the shortlist
selectTasks(shortList.back()); // Add any more tasks the user would like, after the last item in shortList.
promptToDo(shortList.back());
}
if (&*shortList.back() != &longList.back()) { // If last item in shortlist isn't last item in longlist,
std::list<Task>::iterator lastCompletedTask = shortList.back(); // Make note of the task we just finished,
shortList.pop_back(); // and delete it from the shortlist.
selectTasks(lastCompletedTask); // Compare everything after last task we just finished.
longList.erase(lastCompletedTask); // Delete the last completed task.
promptToDo(shortList.back());
}
else { // The last item in the shortlist is the last item in the longlist,
longList.pop_back(); // so pop them both off,
shortList.pop_back();
promptToDo(shortList.back()); // and prompt to do next-last task.
}
}
std::cout << "No tasks remaining!" << std::endl;
}
Answer: Couple of small things:
Use emplace_back rather than push_back when you just have the parameters for the constructors:
longList.push_back(Task{ taskInput });
// This is better written as:
longList.emplace_back(taskInput);
The difference between the two:
push_back(Task{ taskInput });.
This creates a "Task" object as an input parameter. It then calls push_back(). If the Task type object is movable (it is) then it is moved into the list otherwise it is copied into the list.
emplace_back(taskInput);
This creates an object in place in the list. This means the Task object in the list is created at the point and place it is needed without needing to copy anything.
The emplace_back() is preferred (but only very slightly). This is because if the object being put in the container is not movable then it will be copied (copies can be expensive). So it is preferred to create the object in place.
Now, since the parameter 'taskInput' is never going to be used again, we could also use std::move() to move the string to the constructor, potentially avoiding a copy of the string.
longList.emplace_back(std::move(taskInput));
Prefer the range based for for looping over containers:
for (std::list<Task>::iterator it = longList.begin(); it != longList.end(); ++it) {
std::cout << it->taskName << std::endl;
}
Can be simplified to:
for (auto const& task: longList) {
std::cout << task.taskName << "\n";
}
So what is happening here?
The range based for works with any object that can be used with std::begin(obj) and std::end(obj). These methods by default simply call the begin/end method on obj.
So:
for (auto const& item: cont) {
// CODE
}
Can be considered as shorthand for:
{
auto end = std::end(cont);
for (auto iter = std::begin(cont); iter != end; ++iter) {
auto const& item = *iter;
// CODE
}
}
Prefer to use "\n" rather than std::endl.
The difference here is that std::endl flushes the stream (after adding the '\n' character). It is usually ill advised to manually flush streams (unless you have done the testing). This is because humans are bad at deciding when a stream needs to be flushed, and the stream will be flushed automatically when it needs to be.
One of the biggest complaints from beginners about C++ is that std::cout is not as fast as printing to stdout in C. The main culprit of this is usually down to inappropriate flushing of the std::cout buffer. Once that is fixed, the speed of these streams is nearly identical.
Don't copy strings if you just need a reference:
std::string shortlistedTaskName = shortList.back()->taskName;
This copies the string into shortlistedTaskName. If you just need a short reference to the value use a reference.
std::string& shortlistedTaskName = shortList.back()->taskName;
// ^^^ This is a reference to the object on the right.
for (it; it != longList.end(); ++it) {
^^ Does nothing.
// write like this.
for (; it != longList.end(); ++it) {
Don't use NULL. This is old school C for a null pointer. Unfortunately it is actually the number 0 and can thus accidentally be assigned to numeric types. Which is confusing as they are not pointers.
In C++ we use nullptr to refer to the null pointer. It can only be assigned to pointer objects and thus is type safe.
Don't use NULL to represent nothing.
char userChoice = NULL;
That is not a concept in C++. Here userChoice is a variable. It exists and will always have a value. The trouble is that char is a numeric type, so assigning NULL to userChoice gave it the integer value 0, which is the same as the char value '\0'.
You can leave it unassigned or put a default value it in. In this context since you are about to read into it I would just leave it unassigned.
char userChoice;
As long as you write into it before reading its value everything is OK.
Reading from a stream can fail.
std::cin >> userChoice;
Reading a stream can fail. Even the std::cin input can get an EOF signal which means nothing more can be read.
So always check the result of the read.
if (std::cin >> userChoice) {
// Something was successfully read into the character.
}
I don't see why you need this loop.
while (true) {
if (userChoice == 'Y' || userChoice == 'y') {
return it;
}
else if (userChoice == 'N' || userChoice == 'n') {
break;
}
else std::cout << "Please enter Y or N." << std::endl;
break;
}
You could simplify this to:
if (userChoice == 'Y' || userChoice == 'y') {
return it;
}
else if (userChoice != 'N' && userChoice != 'n') {
std::cout << "Please enter Y or N." << "\n";
} | {
"domain": "codereview.stackexchange",
"id": 37871,
"tags": "c++, beginner, algorithm, vectors"
} |
How to maximize top speed potential for a given motor? | Question: I have an engineering background however I am new to motors. I am looking to understand what the options exist for maximizing the top speed a motor can produce (say for an RC vehicle) without simply using the "brute force" option of "buy a bigger, heavier, more expensive motor with a higher number of maximum RPMs". Because after a point there is a step function in terms of size and cost that make selection impractical depending on application.
Seems like there is plenty of motorization in the world, so this has to be a solved problem--I just need some pointers to pursue learning more. Specifically: what options exist to maximize motor output for speed? What are the advantages and disadvantages of each?
All useful help appreciated--thank you!! Happy to clarify if anything is not clear.
Answer: Through my own research since originally posting I have learned what seems to be a rather practical answer. While one could provide some sort of a mechanical advantage in conjunction with a motor, the reality is that it would be less efficient than simply using a larger motor.
The tradeoff is essentially that losses of a larger motor are merely electrical in nature, whereas the losses of nearly any conceivable mechanical advantage (for a specific example, let's suppose a step-up gearbox) will likely be in the form of heat loss. Another potential "loss" could be in quality or reliability issues due to introducing more moving parts into the system. So at the end of the day, especially from a systems-level perspective, trading up in motor size, construction, or performance will almost always provide the better option. I say "almost always", because there very well could be an overly specific application out there where the only option is to pursue a mechanical advantage (such as if you had no control over choice of motor, or other odd physical space constraints).
In conclusion, the old adage still applies: sometimes all you need is simply a larger "hammer". | {
"domain": "robotics.stackexchange",
"id": 2597,
"tags": "motor, mechanism, torque, actuator, gearing"
} |
amcl laser localization | Question:
Hi
I have a question.
I have created a map using slam_gmapping package, then I save that map as map.yaml using rosrun map_server map_saver -f map.
Is it possible to use a laser and the map I have created with amcl for localization?
I appreciate your answer :)
Originally posted by acp on ROS Answers with karma: 556 on 2013-04-12
Post score: 0
Answer:
Short answer: yes.
Longer answer: Yes, it is possible. Use the map_server to publish your saved map, start amcl and use rviz to estimate the inital pose. Then move your robot around and see, if the localization is working. :)
Originally posted by Ben_S with karma: 2510 on 2013-04-12
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by acp on 2013-04-12:
well, I do the following: I use rosrun map_server map_server map.yaml, then in another console I use rosrun amcl amcl scan. Then the only topic I see in rviz under Pose is /move_base_simple/goal. I'm supposed to see the amcl_pose topic under Pose, right?
Comment by Ben_S on 2013-04-12:
According to the docs, amcl publishes the pose as geometry_msgs/PoseWithCovarianceStamped. But you can also add displays for the particlecloud (geometry_msgs/PoseArray) and tf. Through tf, amcl should publish the most likely transform between your map and odom.
Comment by acp on 2013-04-12:
well, i have 2 issues, 1.-how i can set the initial pose 2--why my map gets rotated after I have published it and visualized in rviz?
Comment by Ben_S on 2013-04-12:
1) In rviz you should have a "2d pose estimate" button. 2) Depending on your fixed frame, the visualized map can rotate if the best guess by amcl changes. Try setting the fixed frame to /map in rviz.
Comment by acp on 2013-04-14:
Well, it seems as if everything is working; the only issue now is how to see the estimated poses produced by amcl in rviz? In other words, how can I see that amcl is working?
Comment by Ben_S on 2013-04-14:
Add a display of type PoseArray to rviz, set the topic to /particlecloud and then drive around a little bit. You then should see all the pose-guesses of amcl as small arrows in rviz.
Comment by acp on 2013-04-15:
Well, it seems as if PoseArray is working under rviz; however, from the beginning the amcl algorithm spreads particles even outside the map. Is that normal? Or is there a way to fix it?
Comment by acp on 2013-04-15:
Since amcl publishes the pose as geometry_msgs/PoseWithCovarianceStamped, how can I see the poses with covariance under rviz?
Comment by Ben_S on 2013-04-15:
Thats normal. You can use the "2d pose estimate" in rviz to give it a hint, where to start. (Click the button and drag the arrow on the map.) Regarding the visualization of PoseWithCovarianceStamped: I dont know if that is possible. (rviz does not seem to have a builtin display for that...)
Comment by acp on 2013-04-18:
Well, it seems as if the navigation is working properly. Just one issue: 1.-when I set the '2d pose estimate' and then '2d nav goal' the robot goes to the goal with a lot of rotations; is there any way to make the robot go in a straight line?
Comment by FuerteNewbie on 2013-11-21:
I guess visualizing PoseWithCovarianceStamped is not available in rviz currently but you can 'rostopic echo amcl_pose' to see the message that feedback to you every time the filter update. | {
"domain": "robotics.stackexchange",
"id": 13792,
"tags": "localization, navigation, laser, amcl"
} |
If both neutron stars and white dwarf stars can have the same mass, what determines what a star of that mass will become when it "dies"? | Question: My understanding is that roughly 1.4 solar masses is the upper limit for white dwarf stars, and that the lower bound for neutron stars is around 1.1 solar masses. Is there any way to tell what a star will form upon death, knowing that its mass will end up being in between these two bounds?
Answer: Yes, there are theoretical models of stellar evolution that tell us what to expect.
Broadly, we expect that stars with an initial mass less than 8 solar masses ($8M_{\odot}$) will end their lives as white dwarfs. So I think there is a misconception in your question - the progenitors of white dwarfs and neutron stars are usually a lot more massive than what ends up in the stellar remnant. So a star of initial mass $1.1<M/M_{\odot} <1.4$ will always end up as a white dwarf.
The reason for this $8 M_{\odot}$ upper limit is that below it, the cores of such stars never achieve the temperatures required for carbon fusion. Instead, electron degeneracy pressure is able to support the carbon/oxygen core (of mass about $\leq 1.1M_{\odot}$), whilst the outer envelope is lost in a stellar wind and planetary nebula. (Note that white dwarfs more massive than this need to have accreted mass, usually as part of a binary system).
Stars with initial mass larger than $10M_{\odot}$ do not form an electron degenerate core and are able to contract and heat up sufficiently to ignite carbon and subsequent elements until a core of iron peak elements is formed. This may then collapse to form a neutron star or possibly a black hole for very massive stars.
There is a grey area at $8-10M_{\odot}$, where it may be possible to form massive oxygen/neon white dwarfs, or they might explode as electron capture supernovae leaving behind neutron stars - it just depends how massive the core can become and whether the oxygen is able to ignite in a degenerate configuration. The remnants here, whether they be white dwarfs or neutron stars could have very similar masses.
Either way, although these models are well understood, there are sufficient theoretical uncertainties (at the $\pm 1 M_{\odot}$ level), that observational tests and empirical confirmation of the exact relationship between initial mass and the type and mass of the remnant is still desirable. | {
"domain": "physics.stackexchange",
"id": 23552,
"tags": "stars, neutrons"
} |
Question related to direction of current | Question: I believe we have all been told that, by convention, current flows from high potential to low potential, but in reality current is a flow of electrons, which flow from low to high potential. Now, if current really flows from low to high, then the potential should increase across a resistor or any other electrical device, which means that the potential energy is being increased rather than decreased. This contradicts a resistor creating an energy loss. Where am I going wrong here?
Answer: Dale has addressed your misconception about the direction of current. There is a further misconception in your question, which is that negatively charged particles moving from low to high potential increase in potential energy. In fact, the opposite is true. The change in potential energy $\Delta U$ for a charge $q$ that moves through potential difference $\Delta V$ is
\begin{align}
\Delta U = q \Delta V.
\end{align}
When $q$ is negative — take for example $q = - e$ for an electron — then
\begin{align}
\Delta U = - e \Delta V.
\end{align}
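For a concrete number, take an electron ($q = -e$, with $e \approx 1.6\times10^{-19}\ \mathrm{C}$) moving through a potential rise of $\Delta V = +1\ \mathrm{V}$:
\begin{align}
\Delta U = (-1.6\times10^{-19}\ \mathrm{C})(+1\ \mathrm{V}) = -1.6\times10^{-19}\ \mathrm{J},
\end{align}
a decrease in potential energy, even though the electron moved to higher potential.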
The change in potential energy will be negative if the potential difference $\Delta V$ is positive. | {
"domain": "physics.stackexchange",
"id": 68728,
"tags": "electric-current, electrical-resistance"
} |
Is Buchberger's algorithm or Wu's method valuable theoretically when we have the Tarski–Seidenberg theorem? | Question: Is Buchberger's algorithm or Wu's method valuable theoretically when we have the Tarski–Seidenberg theorem? In other words, could the Tarski–Seidenberg theorem subsume Buchberger's algorithm and Wu's method?
Actually, algorithms of both Buchberger and Wu may be just deduced from variants of Hilbert's Nullstellensatz.
Answer: For Buchberger, it depends what you want it for, but generally speaking the answer is no. First, as pointed out on the Wikipedia article, the complexity upper bound given by Tarski-Seidenberg is horrendous, whereas Buchberger's algorithm is exponential space, which is optimal (since ideal membership is EXPSPACE-complete).
Second, Tarski-Seidenberg is for semi-algebraic sets over the reals (that is, allowing $\leq, <, =, \neq$), whereas Buchberger's algorithm works not only for the reals, but for polynomials over any field, or even over other rings (such as $\mathbb{Z}$). With minor modifications, Buchberger's algorithm even works in various noncommutative analogues of polynomial rings.
Third, Gröbner bases (and hence, Buchberger's algorithm) can be used for many more things besides quantifier elimination. For example, intersecting ideals, quotienting ideals, computing syzygy modules of ideals, proof systems (hence algorithms) for tautologies, coding theory, group cohomology, applying toric geometry to algebraic geometry (where we think of the initial ideal as a way of deforming an arbitrary variety into a toric variety, and thereby learn things about the original variety that are easier to deduce for the toric one), the list goes on...
(I am less familiar with Wu's method.) | {
"domain": "cstheory.stackexchange",
"id": 4169,
"tags": "decidability, automated-theorem-proving, proof-assistants"
} |
Form of the Classical EM Lagrangian | Question: So I know that for an electromagnetic field in a vacuum the Lagrangian is $$\mathcal L=-\frac 1 4 F^{\mu\nu} F_{\mu\nu},$$ the standard model tells me this. What I want to know is if there is an elementary argument (based on symmetry perhaps) as to why it has this form. I have done some searching/reading on this, but have only ever found authors jumping straight to the expression and sometimes saying something to the effect that it is the "simplest possible".
Answer: The Lagrangian for electromagnetism follows uniquely from requiring renormalizability and gauge invariance (plus parity and time-reversal invariance).
U(1) gauge Invariance
If you require your Lagrangian to be locally invariant under symmetry operations of the unitary group U(1), that is, under
$$\phi\to e^{i\alpha(x)}\phi$$
all derivatives $\partial_\mu$ have to be replaced by the covariant derivative $D_\mu = \partial_\mu+ieA_\mu$, where, in order to preserve local invariance, the gauge field is introduced. Loosely speaking, this is necessary to make fields at different spacetime points comparable. Since two points may have an arbitrary phase difference, due to the fact that we can set $\alpha(x)$ as we wish, something has to compensate this difference before we can compare fields, which is what differentiation basically does. This is similar to parallel transport in general relativity (the mathematical keyword is "connection"; see the Wikipedia article Connection). The gauge field $A_\mu$ transforms as $A_\mu \to A_\mu - \frac{1}{e}\partial_\mu\alpha(x)$.
Now the question is what kind of Lagrangians we can build with this requirement. For matter (i.e. non-gauge) fields it's easy to construct gauge invariant quantities by just replacing the derivatives with the covariant derivatives, i.e.
$$\bar{\psi}\partial_\mu\gamma^\mu\psi\to \bar{\psi}D_\mu\gamma^\mu\psi$$,
this will yield kinetic terms for the field (the part with the normal derivative), and interactions terms between matter fields and the gauge field.
Gauge-Field only terms
the remaining question is how to construct terms involving only the gauge field and no matter fields (i.e. the 'source-free' terms your question is about). For this we must construct gauge-invariant combinations of $A_\mu$.
Once $\alpha(x)$ is chosen we can imagine starting from a point and walking on a loop back to that same point (this is called a Wilson loop). This must necessarily be gauge invariant, since any phase that we pick up on the way out we must also lose on the way back. It turns out that this is exactly the term $F_{\mu\nu}$, i.e. the field strength. (The calculation is a little longer; see Peskin & Schroeder page 484.) Actually this is only true for abelian symmetries such as U(1); for non-abelian ones such as SU(3) we will get some interaction terms between the gauge fields, which is why light does not interact with itself but gluons do.
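For the abelian case this can also be verified directly from the definition $F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu$: under $A_\mu \to A_\mu - \frac{1}{e}\partial_\mu\alpha$,
$$F_{\mu\nu} \to F_{\mu\nu} - \frac{1}{e}\left(\partial_\mu\partial_\nu - \partial_\nu\partial_\mu\right)\alpha = F_{\mu\nu},$$
since partial derivatives commute, so any term built from $F_{\mu\nu}$ alone is automatically gauge invariant.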
Bilinear mass terms such as $A_\mu A^\mu$ are not gauge invariant (in the end this is why the Higgs mechanism is needed).
Renormalizability
If we wish that our theory is renormalizable, we can only include terms into the lagrangian up to mass dimension 4. Now listing all terms up to mass dimension 4 we arrive at
$$\mathcal{L} = a\cdot\bar{\psi}D_\mu\gamma^\mu\psi - m\bar{\psi}\psi - b\cdot F_{\mu\nu}F^{\mu\nu} + d\cdot \epsilon^{\alpha\beta\gamma\delta}F_{\alpha\beta}F_{\gamma\delta}$$
the last term involves the anti-symmetric tensor $\epsilon^{\alpha\beta\gamma\delta}$ and is therefore not time and parity invariant.
Note that we have not included linear terms here since we will be expanding around a local minimum anyways, so that the linear term will vanish.
Conclusion
if we require U(1) gauge invariance and renormalizability (mass dimension up to 4) and time and parity invariance we only get
$$\mathcal{L} = a\cdot\bar{\psi}D_\mu\gamma^\mu\psi - m\bar{\psi}\psi - b\cdot F_{\mu\nu}F^{\mu\nu}$$
In the source-free case this is
$$\mathcal{L} = - b\cdot F_{\mu\nu}F^{\mu\nu}$$
The overall factor $\frac{1}{4}$ is not important. | {
"domain": "physics.stackexchange",
"id": 97328,
"tags": "electromagnetism, lagrangian-formalism, symmetry, field-theory, gauge-theory"
} |
total noise power of a resistor (all frequencies) | Question: Let's calculate the power generated by Johnson-Nyquist noise (and then immediately dissipated as heat) in a short-circuited resistor. I mean the total power at all frequencies, zero to infinity...
$$(\text{Noise power in bandwidth }df\text{ at frequency }f) = \frac{V_{rms}^2}{R} = \frac{4hf}{e^{hf/k_BT}-1}df$$
$$(\text{Total noise power}) = \int_0^\infty \frac{4hf}{e^{hf/k_BT}-1}df $$
$$=\frac{4(k_BT)^2}{h}\int_0^\infty \frac{\frac{hf}{k_BT}}{e^{hf/k_BT}-1}d(\frac{hf}{k_BT})$$
$$=\frac{4(k_BT)^2}{h}\int_0^\infty \frac{x}{e^x-1}dx=\frac{4(k_BT)^2}{h}\frac{\pi^2}{6}$$
$$=\frac{\pi k_B^2}{3\hbar}T^2$$
i.e. temperature squared times a certain constant, 1.893E-12 W/K^2.
Is there a name for this constant? Or any literature discussing its significance or meaning? Is there any intuitive way to understand why total blackbody radiation goes as temperature to the fourth power, but total Johnson noise goes only as temperature squared?
Answer: I think you've just derived the Stefan-Boltzmann law for a one-dimensional system. The T^4 comes from three dimensions. The more dimensions the quanta can populate, the higher the power of T you get. | {
"domain": "physics.stackexchange",
"id": 4165,
"tags": "thermal-radiation, noise"
} |
Caesar cipher in C | Question: I've written a simple program that encrypts and decrypts a string using the Caesar cipher.
Here are some examples showing how it works:
$ cae encrypt "Hello World!" 5
Mjqqt Btwqi!
$ cae decrypt "Mjqqt Btwqi!" 5
Hello World!
$ cae encrypt "Hello World!"
Ebiil Tloia!
$ cae decrypt "Ebiil Tloia!"
Hello World!
If no key is given, it will encrypt or decrypt the string with three shifts.
The code is organized in a single main.c file and a Makefile.
main.c
#include <stdio.h>
#include <getopt.h>
#include <ctype.h>
#include <stdlib.h>
#include <string.h>
#define LEFT_SHIFT 1
#define RIGHT_SHIFT 0
#define DEFAULT_KEY 3
/* It shifts a character by 'key' positions to right.
*
* Arguments:
* chr - the character (it must be between A-Z or a-z);
* key - the number of shifts (max 25).
*
* Return value:
* The shifted character.
*
* Note:
* If chr is not between A-Z or a-z or key is greater than 25, there is
* undefined behaviour.
*/
char __shift_chr(char chr, const unsigned int key)
{
const int letters = 26;
int diff = isupper(chr) ? 'A' : 'a';
return (chr - diff + key) % letters + diff;
}
/* It takes a string and shifts every character by 'key' positions to left or
* right.
*
* Arguments:
* chiper - the string to shift;
* key - the number of shifts for every character (between 1 and 25).
* mode - LEFT_SHIFT (to left) or RIGHT_SHIFT (to right).
*
* Return value:
* It returns zero on success, otherwise one on error if the key is not
* between 1 and 25 or chiper is a NULL pointer.
*/
int __shift_str(char *chiper, unsigned int key, int mode)
{
const short int letters = 26;
if (!chiper || (key < 1 && key > 25))
return 1;
/* Because chiper function works only with right shift. */
if (mode == LEFT_SHIFT)
key = letters - key;
while (*chiper != '\0') {
if (isalpha(*chiper))
*chiper = __shift_chr(*chiper, key);
chiper++;
}
return 0;
}
/* It encrypts a null-terminated byte string with `key` left shifts.
*
* Arguments:
* - *str - the null-terminated byte string;
* - key - number of shifts (between 1 and 25).
*
* Return value:
* Zero on success, otherwise a positive value.
*/
int cae_encrypt(char *str, const short int key)
{
return __shift_str(str, key, LEFT_SHIFT);
}
/* It decrypts a null-terminated byte string.
*
* Arguments:
* - *str - the null-terminated byte string;
* - key - number of shifts (between 1 and 25).
*
* Return value:
* Zero on success, otherwhise a positive value.
*/
int cae_decrypt(char *str, const short int key)
{
return __shift_str(str, key, RIGHT_SHIFT);
}
int main(int argc, char *argv[])
{
if (argc != 4 && argc != 3) {
printf("usage: cae <encrypt|decrypt> <message> [key]\n\n");
return 1;
}
unsigned int key;
if (argc == 3)
key = DEFAULT_KEY;
else
key = strtoul(argv[3], NULL, 0);
if (key < 1 || key > 25) {
fprintf(stderr, "key must be between 1 and 25\n\n");
return 1;
}
char *str = argv[2];
char *command = argv[1];
if (!strcmp(command, "encrypt")) {
cae_encrypt(str, key);
} else if (!strcmp(command, "decrypt")) {
cae_decrypt(str, key);
} else {
fprintf(stderr, "invalid command\n\n");
return 1;
}
/* We don't check the return value because we checked the arguments
* before we passed them to the function, so it will always return 0.
*/
printf("%s\n", str);
return 0;
}
Makefile
.PHONY = clean all tests
SHELL = /bin/sh
CC ?= gcc
CFLAGS = -Wall -Wextra -g
EXEC = cae
OBJECTS = main.o
all : $(EXEC)
$(EXEC) : $(OBJECTS)
$(CC) -o $@ $^
clean :
$(RM) *.o $(EXEC)
I think it is not necessary to move the functions __shift_chr and __shift_str into a new file, because they are small.
Also, I forced the program to encrypt with left shifts (and decrypt with right shifts), but it would be possible to do it the other way around.
Answer: The big assumption
This program works as advertised only if the host character coding has contiguous a..z and A..Z represented as single bytes. That works for the majority of codings used today, including ASCII, ISO 8859.x, and even UTF-8. It will fail on EBCDIC systems, though. You could probably write a small test for the problem platforms (e.g. assert('z'-'a' == 25); assert('Z'-'A' == 25);).
Includes
The program includes <getopt.h> but never uses it. Consider moving all except <ctype.h> down to immediately before main() if you might want to use the functions with a different main() (e.g. GUI or Curses interface).
Naming
Identifiers beginning with underscores, and those containing double underscores, are reserved to the implementation for any purpose. That means they can even be macros! So __shift_str and __shift_chr need to be changed. Given that __shift_chr is used only once, you might consider inlining it (that would obviate the following item).
letters constant
Instead of declaring letters separately in __shift_str and __shift_chr, it would be better to declare it only once - perhaps as a global.
Bug - error checking
This error checking is flawed, because key cannot both be less than 1 and greater than 25:
if (!chiper || (key < 1 && key > 25))
The correction is
if (!chiper || key < 1 || key > 25)
Or (better)
if (!chiper || key <= 0 || key >= letters)
This will have been masked in your testing by the (correct) version in main().
isalpha() matches all letters
In the C locale, you probably get away with isalpha(). In general, we can't assume that it matches only un-accented a..z and A..Z.
Prefer plain int
Use short int only for optimized storage of values. For calculation and argument passing, use int - it may be faster and result in smaller code; more importantly, it reduces the surprises caused by promotions you didn't expect.
No need for separate mode argument.
Instead of modifying key within shift_str, it can be done within decrypt():
int cae_decrypt(char *str, const int key)
{
const int letters = 26;
return cae_encrypt(str, letters-key);
}
Check the result of strtoul()
You should check that result of the conversion is not zero. It's probably a good idea to pass a str_end parameter, and check that it points to end of string (and not equal to the input argv[3]) to require exactly a number, and nothing else. You might want to check it's in range for int before assigning it.
Use EXIT_SUCCESS and EXIT_FAILURE
Since we include <stdlib.h>, we may as well take advantage of these macros to better express the return from main().
Prefer puts(str) to printf("%s\n", str)
Since we just want to print a string value and newline, let's use the simpler function; printf() is overkill here.
Usage message should go to stderr
If the wrong number of arguments are passed, we get a message on standard output. This should be on standard error, like the other messages. This avoids situations like the traffic sign that had "Sorry I'm not in the office" instead of the true translation of "This road will be closed"!
The Makefile
The Makefile is very good. Just two things I'd change:
It's not necessary to write SHELL=/bin/sh - that's the default (and it's not affected by an inherited environment variable).
You can use $(LINK.c) instead of $(CC) to produce the binary.
I approve of -Wall -Wextra, and suggest also -Wwrite-strings -Warray-bounds. You might get some benefit from -Wconversion, too (e.g. the assignment from strtoul() into an unsigned int). | {
"domain": "codereview.stackexchange",
"id": 27356,
"tags": "c, caesar-cipher"
} |
Will a new planet form if Jupiter's influence on asteroid belt will diminish in a few billion years? | Question: I know that tidal forces are pushing Jupiter farther from the Sun, but I couldn't find exactly the yearly amount. In a few billion years would this effect (and subsequent decrease in gravity pull) allow the formation of a new planet from the asteroid belt or would Jupiter take the asteroids with it farther away?
Answer: The total mass of the asteroid belt is just about 4% of the mass of our Moon.
Even if the asteroids don't collide with other planets in the meanwhile, the mass is too low to form a planet in the sense of the 2006 IAU definition.
Even if Ceres were to accrete all the asteroids in the asteroid belt, its radius wouldn't even double, hence it would stay smaller than Mercury. But Ceres is already large enough to fit the present IAU definition of a planet, with the exception that it hasn't cleared its neighbourhood.
With a Stern-Levison parameter of $8.32 \cdot 10^{-4}$, Ceres is too small to fulfill the third IAU criterion for a planet, and too small to clear its neighbourhood.
Even a dwarf planet of the total mass of the asteroid belt couldn't fulfill the criterion on a pure Stern-Levison parameter basis, hence wouldn't be able to keep its neighbourhood clear. Its Stern-Levison parameter would be about $7.5\cdot 10^{-3}$. That's well below 1, the estimated value necessary to fulfill the third IAU criterion, but well above the Stern-Levison parameter of $2.95 \cdot 10^{-3}$ for Pluto.
"domain": "astronomy.stackexchange",
"id": 356,
"tags": "gravity, jupiter, tidal-forces, asteroid-belt"
} |
If I let a brick fall on the ground, does it create more heat or give more kinetic energy to the earth? | Question: Part of the kinetic energy of the falling brick will be converted to heat of the surrounding air and the earth. Another part of the kinetic energy will be transferred to the earth, influencing its speed a tiny bit. Which part is larger, and why?
I am interested in the case that the brick does not bounce at all.
Answer: The amount of energy that goes into heat vs macroscopic kinetic energy depends on the elasticity of the collision: if the brick bounces almost as high as where it was dropped from, only a small amount of its kinetic energy right before impact goes to heat. If it just thuds into the ground, almost all of it goes into heat. In the former case, it's important to note that after the collision, the kinetic energy remains with the brick. If you think of it as a two-body system with the earth at rest immediately before impact, in a perfectly elastic collision the earth ends up with about twice the momentum the brick had. So after the collision, given a 3-kg brick and a 6-trillion-trillion-kg earth, the earth is going at about one trillion-trillionth the speed of the brick. As kinetic energy is proportional to speed squared, you can see that the kinetic energy transferred to the earth is pretty negligible.
EDIT AFTER COMMENTS
Looking at the perfectly inelastic case in the same frame (earth at rest the instant before impact), the final speed of the earth is about half as large as in the elastic case above (so half a trillion-trillionth of the speed of the brick). Thus the earth ends up with half a trillion-trillionth of the kinetic energy the brick had. The fraction of the kinetic energy of the brick that goes into heat is: $$\frac{9999999999999999999999995}{10000000000000000000000000}$$ | {
"domain": "physics.stackexchange",
"id": 91295,
"tags": "thermodynamics, energy, energy-conservation, collision"
} |
Which way do Feynman's plates precess? | Question: I assume people are familiar with the story of Feynman watching students toss dinner plates in the air in the cafeteria, and how working out the relation between the spin rate and the precession rate in a nonstandard way helped him get out of his physics slump.
What I'm interested in is predicting the direction of precession based on how you spin the plate when tossing it. A quick calculation using Euler angles suggests that spin $\dot{\psi}$ and precession rate $\dot \phi$ are of opposite sign (taking $\theta$ to be acute) :
$$ \frac{-(I_3 - I_1)}{I_3}\cos \theta \dot \phi = \dot \psi$$
Here the principal moments of inertia satisfy $I_3 \approx 2I_1$, assuming the plate is basically a disk, so for a small tilt angle $\theta$ between the constant angular momentum vector $\vec L$ and the axis of symmetry $\hat e_3$ (about which $\psi$ is measured) we get
$$ \dot \phi \approx -2\dot \psi$$
If you're right-handed and toss the plate with a clockwise spin as seen from above, or counterclockwise as seen from below, it would seem that it should precess the other way. But I've tried it and the plate precesses in the same direction that I spun it. It looks counterclockwise from below. So what's the deal?
Answer: The wobble in the case of Feynman's wobbling plate is (when not referred to as 'wobble') referred to as 'torque free precession'. I will refer to it as 'wobble'.
To describe the direction of the wobble I will use the following: create a vector through the center of the plate, perpendicular to the plate. As the plate wobbles that vector sweeps out a cone. The point of that vector sweeps out a circle. The circling motion of that vector point is in the same direction as the spin of the plate.
It's not clear whether you will agree or disagree with that, since you do not specify what you count as the direction of the wobble.
About the mechanism of the wobbling:
Only a sufficiently rigid object will show sustained wobbling. Conversely, if the material can flex then the flexing will dissipate energy. Extreme example: spinning pizza dough. The energy of any wobble will dissipate very rapidly (I expect in a turn or so), and from that point on the spinning pizza dough is spinning in a constant plane.
During the wobbling/nutation there are shifting stresses in the plate. If the plate is sufficiently rigid those forces do no work, and there is no dissipation of energy.
The mechanics of the wobbling is the mechanics of nutation.
For the mechanics of nutation see my 2012 answer about the mechanism of gyroscopic precession
(The onset of gyroscopic precession involves nutation.)
See also:
Youtube video titled Feynman's wobbling plate. It shows an actual plate - with markers on it - being actually thrown. The wobbling sweeps out a cone in the same direction as the direction of spin. | {
"domain": "physics.stackexchange",
"id": 84409,
"tags": "newtonian-mechanics, rigid-body-dynamics, precession"
} |
unknown_publisher: Ignoring transform for child_frame_id "link1" | Question:
Hello. Currently I want to simulate an arbitrary robot, one that does not provide any ROS-related files at all, in a ROS environment.
Most manufacturers provide a 3D CAD file (.step).
I did some searching on how to make a URDF file from a CAD file, and finally figured out that there IS an add-in function in SolidWorks (wiki.ros.org/sw_urdf_exporter).
However, unfortunately, it shows the error messages below.
[ERROR] [1542609991.430536188]: Ignoring transform for child_frame_id "link1" from authority "unknown_publisher" because of a nan value in the transform (-nan -nan -nan) (-nan -nan -nan -nan)
[ERROR] [1542609991.430645045]: Ignoring transform for child_frame_id "link1" from authority "unknown_publisher" because of an invalid quaternion in the transform (-nan -nan -nan -nan)
[ERROR] [1542609991.430670924]: Ignoring transform for child_frame_id "link2" from authority "unknown_publisher" because of a nan value in the transform (-nan -nan -nan) (-nan -nan -nan -nan)
[ERROR] [1542609991.430689004]: Ignoring transform for child_frame_id "link2" from authority "unknown_publisher" because of an invalid quaternion in the transform (-nan -nan -nan -nan)
https://github.com/Jungduri/HX300L.git
Here is the code that I made.
OS: Ubuntu 16.04 LTS (VMware)
ROS: kinetic
Any comments will helpful.
Thank you
Originally posted by jungduri on ROS Answers with karma: 13 on 2018-11-19
Post score: 1
Original comments
Comment by nabihandres on 2019-10-25:
that is correct, you need to check the correct axis in the joint.
Answer:
First issue: some of the joints don't have their axis property set correctly (this one for instance). That would be something to check and fix.
Originally posted by gvdhoorn with karma: 86574 on 2018-11-19
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by jungduri on 2018-11-19:
@gvdhoorn, It works. It makes matrix all zero? So it didn't work?
Comment by gvdhoorn on 2018-11-19:
Sometimes the SolidWorks2URDF plugin has difficulty detecting the correct mates/axis. It's not a perfect plugin (but what software is).
Comment by rsankhye97 on 2023-01-12:
I'm getting a similar error and it would be worth mentioning what you mean by axis property being correctly set? How should it be set? For instance here is my code. What's going wrong here https://github.com/rsankhye97/gantry_urdf/tree/main/src/end_effector_slide_gantry | {
"domain": "robotics.stackexchange",
"id": 32070,
"tags": "ros, gazebo, rviz, ros-kinetic, joint-state-publisher"
} |
How can extra (non-curled up) dimensions be hidden from us? | Question: Wikipedia says:
If extra dimensions exist, they must be hidden from us by some
physical mechanism. One well-studied possibility is that the extra
dimensions may be "curled up" at such tiny scales as to be effectively
invisible to current experiments.
I'm just curious, could there be any plausible "hiding" mechanism except the above?
Is there any theory dealing with such other possibilities?
Answer: There are models where the extra dimensions don't need to be curled up.
The main issue with extra dimensions is, 'why don't the particles/fields we interact with travel in those directions?' We have extremely good limits on standard model particles (electrons, photons) travelling in extra dimensions.
However, it is possible to imagine a string inspired scenario where standard model fields are confined to a 'brane'--that is, some 3 dimensional spatial surface living in a higher dimensional space. Then the particles we interact with simply are not free to travel in the extra dimension.
Then the issue becomes confining gravity to the brane. The experimental constraints on gravity are already much weaker than those on the standard model fields--this is the essence behind the proposal of Arkani-Hamed, Dimopoulos, and Dvali (ADD) (http://arxiv.org/abs/hep-ph/9803315). In that proposal the extra dimensions are still curled up, but are allowed to be much larger than the limits coming from observations of standard model particles.
There are other models though where gravity looks four dimensional to us but the extra dimensions are not infinite. Probably the most famous example is the Randall Sundrum model (http://arxiv.org/abs/hep-ph/9905221). There, the extra dimensions are 'warped,' and the warping has the net effect of having gravity be localized near our brane. They are not compact, however. Another example is the model of Dvali, Gabadadze, and Porrati (DGP), with a brane living in an infinite (non-compact) flat space-time (http://arxiv.org/abs/hep-th/0005016). The way gravity is confined in that case is that there are two gravitational constants, a "4d" one living on the brane and a "5d" one living in the bulk, and the relative sizes of the constants are chosen so that on short distances on the brane the gravitational force is dominated by the 4d gravitational constant.
None of these models have any observational evidence in their favor. In the case of the DGP model there are also various theoretical problems--there is superluminal propagation around certain backgrounds, and there is a "ghost instability" around the background you would have wanted to use for cosmology. | {
"domain": "physics.stackexchange",
"id": 20175,
"tags": "string-theory, spacetime, spacetime-dimensions, compactification, branes"
} |
When was it determined that Type 1 Diabetes is an autoimmune disease? | Question: I just found out today that type 1 diabetes is an autoimmune disease. When was this discovered?
Answer: This question has two answers: the difference was first described in 1936 by Harold Percival Himsworth, who described it in this article.
At this time it was established that there are two forms of diabetes, one sensitive to insulin while the other is not.
The terms Diabetes type 1 and 2 were established somewhere between 1974 and 1976; for details see the review "The discovery of type 1 Diabetes".
"domain": "biology.stackexchange",
"id": 1641,
"tags": "history, autoimmune, diabetes-mellitus"
} |
Help with Fanuc M-710iC/50 | Question:
Hi,
This is a continuation of the conversation from the github issue of fanuc_driver as per the request of the maintainer. You can find the original conversation here.
Answer to the questions raised by gavanderhoorn there:
Ok, so steps 1 & 2 of Running the ROS-Industrial programs on your Fanuc robot are successful? The robot visualised in RViz shows the same motion as the real one.
Yes. I made sure that RViz was reading the robot states, and as I manually moved the robot joints using the pendant, I could see it move in RViz.
Is this still with only rosstate running on the robot?
Again: you're only running rosstate at this point?
If you answered yes to all my questions, then you should essentially move to step 3 of the tutorial I linked ("On the TP, start the ros TPE program"). In order for the robot to execute your trajectories, you need to be running the ros TP program. rosstate, as the name implies, only broadcasts joint states, it cannot perform any motion.
Yes. I was only running rosstate at that point. After reading your response I ran ros from TPE (which, correct me if I'm wrong, has RUN ROS_STATE, RUN ROS_MOVESM, CALL ROX_RELAY).
For example, I still get controller takes too long error:
[ INFO] [1438288838.217184191]: ParallelPlan::solve(): Solution found by one or more threads in 0.122578 seconds
[ INFO] [1438288838.217382544]: manipulator[RRTkConfigDefault]: Starting planning with 1 states already in datastructure
[ INFO] [1438288838.217470676]: manipulator[RRTkConfigDefault]: Starting planning with 1 states already in datastructure
[ INFO] [1438288838.232510276]: manipulator[RRTkConfigDefault]: Created 9 states
[ INFO] [1438288838.272224274]: manipulator[RRTkConfigDefault]: Created 37 states
[ INFO] [1438288838.272384298]: ParallelPlan::solve(): Solution found by one or more threads in 0.055077 seconds
[ INFO] [1438288838.274225942]: SimpleSetup: Path simplification took 0.001782 seconds and changed from 3 to 2 states
[ERROR] [1438288841.993446399]: Controller is taking too long to execute trajectory (the expected upper bound for the trajectory execution was 3.708576 seconds). Stopping trajectory.
[ INFO] [1438288841.993557245]: MoveitSimpleControllerManager: Cancelling execution for
[ INFO] [1438288842.099845429]: ABORTED: Timeout reached
If you can tell me which particular
variant of the M-710iC you have, I
might have an unreleased support pkg
for it.
Let me know if you need any more details about the robot.
I just realized when setting things up, for the TPE Programs section, I just left everything at default. I am referring to this because I didn't know how to edit it using the teach pendant (in particular, I didn't know how to get the colon in 1:ROS-I rdy). I'm not sure if this will cause any problems but thought I should let you know about it.
EDIT
Ok. Please tell me exactly how you
setup your workspace, which versions
of the packages you installed, what
you installed, and what you are trying
to run (launch files, etc). Please
update your original question with
that information.
I followed the tutorials fully. The file I'm trying to run is moveit_planning_execution.launch under fanuc_lrmate200ic_moveit_config. I have all the directories relating to lrmate200ic and have removed the others (I assumed this would be the closest to the one I have).
After powering on the controller, I launch the ros TPE program and get the following:
229354 RSTA Waiting for ROS state prox
29354 RREL Waiting for ROS traj relay
After launching moveit_planning_execution.launch, I get the following:
29398 I RSTA Connected
29398 I RREL Connected
I noticed that after quitting Moveit! I get an error (in red):
29448 E RSTA jnt_pkt_srlise err: -67208
If I launch Moveit! again, I see that only RSTA launches and says waiting:
29448 I RSTA Waiting for ROS state prox
and RREL does not launch. I had to reboot the controller and launch roscore again to get everything right. Anyway, I set execution_duration_monitoring to false and ran execute from Moveit! and noticed an error motn-049 attempt to move w/o calibrated. I found out how to calibrate it and actually saw the robot move after giving the robot command. Although the robot moved, where it moved and where Moveit! thought it moved did not match up (which I'm guessing is because I am using a different model for the robot, the 200iC instead of the 710iC). Either way I think I have it somewhat working.
Forgive me asking, but how much
experience do you have with these
robots? Editing a TP program should
not be too difficult.
I don't have any experience at all. I started working with these robots a week ago (part of my internship). I have a couple of questions:
I don't know how to run the robot in AUTO mode. Could you please direct me on how to do it?
Do you have the URDF for the M710ic/50 model? If you do, could you please share it.
Thanks!
Originally posted by Sudarshan on ROS Answers with karma: 113 on 2015-07-30
Post score: 0
Original comments
Comment by gvdhoorn on 2015-09-04:
@Sudarshan: could you please mark this question as answered?
Answer:
Edit (2015-09-04): this turned out to be user error. Everything was installed and set up correctly, but as the controller was not in auto mode, the operator has to keep the deadman switch and SHIFT depressed, or else the programs running on the controller will be paused.
Failing to do so resulted in the issues described in this Q/A. An off-list troubleshooting session quickly got to the root cause of the problems described, and everything started to work correctly.
Ok. Please tell me exactly how you setup your workspace, which versions of the packages you installed, what you installed, and what you are trying to run (launch files, etc). Please update your original question with that information.
The manipulator we are using is Fanuc Robot M-710iC/50.
I like the large image you included, but just the model name would've sufficed :). We could be needing all that space for question updates pretty soon.
Yes. I was only running rostate at that point. After reading your response I ran ros from TPE (which, correct me of I'm wrong, has RUN ROS_STATE, RUN ROS_MOVESM, CALL ROX_RELAY).
The ros TPE program does start the programs you mention. Can you add what kind of feedback you get on the TP once you've started the ros program? I'm looking for things like
12345 I RSTA Connected
12345 I RREL Connected
If you don't see that, something is wrong, and trajectory execution will not work no matter what you try.
RViz is still able to read the states from the robot but even now sending commands to the robot does not work.
I'm just going to include this for completeness, but do know that if your controller is not in AUTO mode, you'll have to keep pressing SHIFT + keep the deadman depressed for trajectory execution to work.
[..] I still get controller takes too long error:
With the way the current driver works, you'll always get that error (we cannot get 100% performance out of the controller), so I'd advise you to either allow way more time for execution (use the scaling factor) or disable the execution monitoring completely. See the fanuc_driver/Troubleshooting - Robot stops at seemingly random points during trajectory execution section for info on how to that (the linked ROS Answers post has an example that shows how to edit your launch files).
I just realized when setting things up, for the TPE Programs section, I just left everything at default. I am referring to this because I didn't know how to edit it using the teach pendant
You'll want to make sure that none of the used flags, integer or position registers is used by other programs on the controller. If the fanuc_driver Karel and TP programs are the only one running on the controller, the defaults are probably free to use.
(in particular, I didn't know how to get the colon in 1:ROS-I rdy). I'm not sure if this will cause any problems but thought I should let you know about it.
Forgive me for asking, but how much experience do you have with these robots? Editing a TP program should not be too difficult.
edit:
I have all the directories relating to lrmate200ic and have removed the others (I assumed this would be the closed to the one I have).
The LR Mate 200 robots are some of the smallest Fanuc has, both in reach and in max payload. The M-710iC/50 is a moderately large robot with much lower (velocity) limits, so I would've gone for the files of the M-10iA for a first test.
I noticed that after quitting Moveit! I get an error (in red):
29448 E RSTA jnt_pkt_srlise err: -67208
That is normal, as closing RViz (not MoveIt actually) takes down all of the other nodes, including fanuc_driver, which then disconnects from the programs running on the controller. That is what that 'error' tells you.
If I launch Moveit! again, I see that only RSTA launches and says waiting:
29448 I RSTA Waiting for ROS state prox
and RREL does not launch.
That is not ok. Both RSTA and RREL should notice that you disconnected. Did you let go of either SHIFT or the deadman in the middle of a trajectory execution? If yes, it could be that RREL got hung up waiting for the last traj pt to be completed, which prevents it from detecting a closed socket.
Letting go of either SHIFT or the deadman causes the controller to pause both the ROS and ROS_MOVESM programs, so unless you continue or restart them, things will not work properly anymore.
Btw: neither of them 'launches': they are still running, just waiting for new connections.
I had to reboot the controller and launch roscore again to get everything right.
That is not necessary: just ABORT all programs on the controller by using Fctn→ABORT ALL on the TP. You might have to use that twice, depending on in which state the programs are. Then restart the ROS side if necessary.
I don't have any experience at all. I started working with these robots a week ago (part of my internship).
Then please do yourself a favour and spend some time getting familiar with the robot, controller and the TP (ideally by following a course, or learning from someone with experience) before attempting to use it with ROS. Without experience, judging what is 'normal' behaviour for the machine is hard, and you could end up damaging equipment, others or yourself. Using our tutorials is not a good way to learn how to handle these dangerous robots.
This may sound pedantic, but I've seen too many near-accidents with these things to ignore it.
I don't know how to run the robot in AUTO mode. Could you please direct me on how to do it?
This is typically done using the selector switch on the control cabinet. Please do read the safety instructions before using it.
Do you have the URDF for the M710ic/50 model? If you do, could you please share it.
I've just pushed a fanuc_m710ic_support package to the fanuc_experimental repository. Installation and use is identical to all other Fanuc support packages. No IKFast plugins nor MoveIt configurations are included at this time. See also wiki/fanuc_m710ic_support.
Note that this package is experimental: I've checked it as much as I can, but you are responsible for making sure that joint limits and the general configuration correspond to the specific variant you have there.
Originally posted by gvdhoorn with karma: 86574 on 2015-07-30
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 22343,
"tags": "ros, ros-industrial, fanuc"
} |
Multidimensional array made in c++ | Question: In some of my past c++ projects, I would sometimes end up having to use a multidimensional array. However, I often would spend more than a needed amount of time on how to store the multidimensional array.so as a result, I decided to create a multidimensional array c++ class.
here is the code
mdarray.hh
#pragma once
#include <iostream>
#include <cstdint>
#include <vector>
#include <iterator>
#include <cmath>
#include <type_traits>
template <typename T, std::uint64_t N>
class multidimensional_array {
std::vector<multidimensional_array<T, N - 1>> _data{0};
template <typename Size1, typename... Sizes>
void _resize(const Size1 &size1, const Sizes &... sizes) {
_data.resize(size1);
for (auto &item : _data) {
item.resize(sizes...);
}
}
template <typename Size1>
void _resize(const Size1 &size1) {
_data.resize(size1);
}
public:
// iterators
using iterator =
typename std::vector<multidimensional_array<T, N - 1>>::iterator;
using const_iterator =
typename std::vector<multidimensional_array<T, N - 1>>::const_iterator;
  // constructors
multidimensional_array() = default;
virtual ~multidimensional_array() = default;
template <typename... Sizes>
multidimensional_array(const Sizes &... sizes) {
resize(sizes...);
}
multidimensional_array(
const std::vector<multidimensional_array<T, N - 1>> &Items)
: _data(Items) {}
// access to data
public:
std::vector<T> data() { return _data; }
T *raw_data() { return _data.begin()->raw_data(); }
multidimensional_array<T, N - 1> &operator[](std::uint64_t index) {
return _data.at(index);
}
public:
// access to size
std::uint64_t size() { return _data.size(); }
template <typename... Sizes>
void resize(const Sizes &... sizes) {
static_assert(sizeof...(sizes) <= N,
"the number of parameters is more the number of dimensions");
_resize(sizes...);
}
template <typename Size>
void resize(const Size &size1) {
_resize(size1);
}
// helper functions
public:
void fill(iterator first, iterator last, const T &item) {
for (; first != last; first++) {
first->fill(first->begin(), first->end(), item);
}
}
void fill(const T &item) { fill(begin(), end(), item); }
template <typename F>
void for_each(F &&function) {
for (auto &item : _data) {
item.for_each(function);
}
}
// iterators
public:
iterator begin() noexcept { return _data.begin(); }
const_iterator cbegin() const noexcept { return _data.cbegin(); }
iterator end() noexcept { return _data.end(); }
const_iterator cend() const noexcept { return _data.cend(); }
};
// class for 1d array
template <typename T>
class multidimensional_array<T, 1> {
private:
std::vector<T> _data{0};
public:
// iterators
using iterator = typename std::vector<T>::iterator;
using const_iterator = typename std::vector<T>::const_iterator;
  // constructors
multidimensional_array(const std::uint64_t &size) : _data(size){};
multidimensional_array(const std::vector<T> &Items) : _data(Items){};
multidimensional_array() = default;
virtual ~multidimensional_array() = default;
template <typename Size>
multidimensional_array(const Size &size1) {
resize(size1);
}
// access to data
public:
std::vector<T> &data() { return _data; }
T *raw_data() { return _data.data(); }
T &operator[](std::uint64_t index) { return _data.at(index); }
// helper functions
public:
void fill(iterator first, iterator last, const T &item) {
for (; first != last; first++) {
*first = item;
}
}
void fill(const T &item) { fill(begin(), end(), item); }
template <typename F>
void for_each(F &&function) {
for (auto &item : _data) {
function(item);
}
}
public:
// access to size
std::uint64_t size() { return _data.size(); }
template <typename Size>
void resize(const Size &size1) {
_data.resize(size1);
}
// iterator
public:
iterator begin() noexcept { return _data.begin(); }
const_iterator cbegin() const noexcept { return _data.cbegin(); }
iterator end() noexcept { return _data.end(); }
const_iterator cend() const noexcept { return _data.cend(); }
};
Answer: std::vector<multidimensional_array<T, N - 1>> _data{0};
What does this line do? (Possible hint: {0} is a braced initializer sequence consisting of a single int, and int is implicitly convertible to multidimensional_array<T, N-1>. Or is this an anti-hint? Can you tell, without asking a compiler?)
If you want to create an empty vector, just use vector's default constructor:
std::vector<multidimensional_array<T, N - 1>> _data;
or convert from an empty initializer-list:
std::vector<multidimensional_array<T, N - 1>> _data = {};
See "The Nightmare of Initialization in C++."
virtual ~multidimensional_array() = default;
Yikes! Why does this class need a vtable? Are you intending to inherit from it? Please don't!
template <typename Size1, typename... Sizes>
void _resize(const Size1 &size1, const Sizes &... sizes) {
_data.resize(size1);
for (auto &item : _data) {
item.resize(sizes...);
}
}
template <typename Size1>
void _resize(const Size1 &size1) {
_data.resize(size1);
}
If you're allowed to use C++17 if constexpr, then you can write this without the "recursion", as:
template<class... Sizes>
void _resize(size_t head, Sizes... tail) {
_data.resize(head);
if constexpr (sizeof...(tail) != 0) {
for (auto&& elt : _data) {
elt.resize(tail...);
}
}
}
template <typename... Sizes>
multidimensional_array(const Sizes &... sizes) {
resize(sizes...);
}
This constructor should be marked explicit. Otherwise, declarations like these will compile without complaint:
multidimensional_array<int, 1> a = 3;
multidimensional_array<int, 1> b {3};
In fact, you should probably forbid constructing a multidimensional_array<T, N> with any number of size parameters other than N. And, to avoid the ambiguity of
multidimensional_array<int, 1> b {3};
entirely, let's just use a factory method instead of a constructor. Result:
template<class T, size_t N>
class multidimensional_array {
public:
template<class... Sizes,
std::enable_if_t<sizeof...(Sizes) == N, int> = 0,
std::enable_if_t<(std::is_convertible_v<Sizes, size_t> && ...), int> = 0
>
static multidimensional_array with_dimensions(Sizes... sizes) {
multidimensional_array a;
a.resize(sizes...);
return a;
}
[...]
};
auto a1 = multidimensional_array<int, 1>::with_dimensions(2);
auto a2 = multidimensional_array<int, 1>{0, 0};
auto a3 = multidimensional_array<int, 2>::with_dimensions(3, 3);
Consider what should happen if the caller passes dimensions (0,0) or (-1,-1) or so on.
std::vector<T> data() { return _data; }
It is surprising that data() returns a copy of the data; that's not how std::vector::data() or std::array::data() work. I would expect to see also an overload of data() for const arrays:
std::vector<T>& data() { return _data; }
const std::vector<T>& data() const { return _data; }
...oh wait, except that doesn't work at all, because _data is not a vector<T>; it's a vector<multidimensional_array<T,N-1>>.
The moral of this story is that you should always test your code! C++ templates especially, because if you don't instantiate them, you'll never know if they even type-check at all.
T *raw_data() { return _data.begin()->raw_data(); }
This compiles, but is extremely scary, because it sounds (to me) like it ought to give a view onto a contiguous array of (Sizes * ...) objects of type T, but really it only gives a view onto the first linear "row" of the data; the rest of the data is stored somewhere else, non-contiguous with that row.
In fact, I would recommend that you not provide the data() accessor either, because the C++20 STL adds a notion of "contiguous container" which is triggered by seeing if the container has a plausible-looking .data() method (e.g. vector, array, string). Since your container is non-contiguous, you should probably avoid the word .data() — the same way you'd avoid the word .begin() for something that didn't return an iterator.
T &operator[](std::uint64_t index) { return _data.at(index); }
should have a const overload too. And you should almost certainly use size_t, not uint64_t, just to be idiomatic and to match the size_type of std::vector.
void for_each(F &&function)
Consider providing a const overload of for_each.
The STL-ese way of passing a callback is just F function — pass by value — because usually the function is just a stateless lambda or something equally cheap to copy. If the caller really wants pass-by-reference, all they have to do is wrap their function in std::ref.
Consider adding a static_assert(N >= 2) to the primary template. N==0 should be a hard error.
Your repetition of public: is harmless but unidiomatic. We generally just have one big public: section and one big private: section (and personally I put them in that order, but reasonable people may vary on that).
Quick, off the top of your head, which parts of your design break when you instantiate multidimensional_array<bool, 2>? Which parts break unsalvageably?
Write unit tests! Pay particular attention to const — like, write a test that verifies that a[3] = b[3] compiles when b is const (but not when a is const).
Hmm, that reminds me: did you want a[3] = b[3] to compile? I should be able to assign the entire array at once a = b, and I should be able to assign one T object at a time a[i][j] = t, but should I also be able to assign one row at a time a[i] = r?
What about this?
auto mat = multidimensional_array<int, 2>::with_dimensions(3, 3);
mat[0].resize(2);
// Is `mat` now a 3x3 array with one corner cut out?
| {
"domain": "codereview.stackexchange",
"id": 38170,
"tags": "c++"
} |