| anchor | positive | source |
|---|---|---|
ROS + OpenStreetMap (osm_cartography package) | Question:
Hi,
Does anyone know how to work with ROS and OpenStreetMap? I have ROS Indigo.
The package is here:
http://wiki.ros.org/osm_cartography
I would like to have something like on this picture:
http://wiki.ros.org/osm_cartography?action=AttachFile&do=get&target=prc-rviz-small.png
My code for now is the following but doesn't work:
#include <ros/ros.h>
#include <geographic_msgs/GetGeographicMap.h>
int main(int argc, char **argv)
{
    ros::init(argc, argv, "osm");
    ros::NodeHandle n;
    ros::ServiceClient client = n.serviceClient<geographic_msgs::GetGeographicMap>("geographic_msgs");
    geographic_msgs::GetGeographicMap srv;
    srv.request.url = "package://home/user/map.osm";
    if (client.call(srv))
    {
        ROS_INFO("connected");
    }
    else
    {
        ROS_ERROR("failed");
    }
    return 0;
}
Thank you so much in advance.
Originally posted by lilouch on ROS Answers with karma: 93 on 2015-06-04
Post score: 1
Original comments
Comment by Zargol on 2015-11-23:
After launching the file, it does not show any picture. Should I run RViz, and then what should I add in RViz?
Answer:
That package comes with a demo launch file (source) which starts the visualization, and you could start from there.
Originally posted by Chris L with karma: 61 on 2015-08-07
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 21837,
"tags": "rviz"
} |
Everyone loves Fibonacci | Question: I was bored and burnt out on more serious projects.... until I saw this Computerphile video on YouTube. Now I'm inspired again. Now I feel like spending some time implementing that recursive algorithm in a little console program. Unfortunately, recursion is unbearably slow in this case. So, I ended up writing a small class that implements a loop algorithm to find any $F_n$.
I'm just learning C#, so I'm most concerned with best practices and style. I want to get a good foundation though, so all feedback is welcome.
Particular Concerns
Is my indentation correct?
Have I named my variables horribly? (I think they smell a bit.)
Which implementation is better? I'm feeling like it's six of one or a half doz. of the other.
Should I be concerned about hitting the upper boundary of the ulong type? Fibonacci numbers grow very quickly. Is there a better representation of very large numbers?
Fibonacci
class Fibonacci
{
private int _n;
private ulong _value;
private ulong _FnMinus1;
private ulong _FnMinus2;
public int n
{
get { return _n; }
set { _n = value; }
}
public ulong Value
{
get {return _value;}
//set { _value = value; }
}
public void Calculate()
{
if (n <= 0)
{
throw new ArgumentOutOfRangeException("n","Can't calculate Fn when n is less than or equal to zero.");
}
else if (n == 1 || n == 2)
{
_value = 1;
}
else
{
//initialize previous results
_FnMinus1 = 1;
_FnMinus2 = 1;
for (int i = 0; i < (n - 2); i++)
{
_value = _FnMinus1 + _FnMinus2;
_FnMinus2 = _FnMinus1;
_FnMinus1 = _value;
}
}
}
//constructors
public Fibonacci(int n)
{
_n = n;
Calculate();
}
public Fibonacci() { }
}
Implementation1
class Program
{
static void Main(string[] args)
{
Fibonacci fib = new Fibonacci();
for (int i = 1; i <= 50; i++)
{
fib.n = i;
fib.Calculate();
Console.WriteLine(fib.Value);
}
Console.WriteLine("Press enter to close...");
Console.ReadLine();
}
}
Implementation2
class Program
{
static void Main(string[] args)
{
for (int i = 1; i <= 50; i++)
{
Fibonacci fib = new Fibonacci(i);
Console.WriteLine(fib.Value);
}
Console.WriteLine("Press enter to close...");
Console.ReadLine();
}
}
Update: Follow up question here.
Answer: Indentation and formatting are fine, as is the algorithm itself.
Your naming is inconsistent, compare n and Value; _value and _FnMinus1. There are a variety of conventions to follow, but consistency is the most important thing.
_FnMinus1 and _FnMinus2 are only used in one method, so they can be made local to that method.
The API is a little odd:
fib.n = i;
fib.Calculate();
int result = fib.Value;
It would be more convenient if I could do this with one line, like int result = fib.Calculate(i), which would return the value. Then it seems that n and Value are unnecessary.
Furthermore, right now there is nothing that keeps n and Value synchronized. The class should maintain the invariant that Value is always the nth Fibonacci number. You could do the work in the n assignment or mark Value as dirty when n changes. Either way you essentially get rid of Calculate, so that's another approach. I prefer getting rid of n and Value; I just mention this in case you think it makes sense to keep them.
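That one-call, stateless shape could look like the following sketch — written in Python rather than C# purely for brevity (Python's arbitrary-precision integers also sidestep the ulong worry, for which C# offers System.Numerics.BigInteger):

```python
def fibonacci(n):
    """Return the nth Fibonacci number (F1 = F2 = 1), computed iteratively."""
    if n <= 0:
        raise ValueError("n must be positive")
    a, b = 1, 1  # F1, F2
    for _ in range(n - 2):
        a, b = b, a + b  # slide the two-term window forward
    return b if n >= 2 else a

print(fibonacci(50))  # 12586269025 -- already past what a 32-bit integer can hold
```

The caller never touches any internal state, so there is nothing to get out of sync.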
Between Implementation 1 and Implementation 2, it's pretty much a wash. 1 has the benefit of only making one allocation. Since the class is (going to be) stateless, it's perfectly reusable. On the other hand 2 declares fib inside the loop, which minimizes its scope (which is good). | {
"domain": "codereview.stackexchange",
"id": 8175,
"tags": "c#, beginner, fibonacci-sequence"
} |
Understanding a rule related to Young tableau | Question: This is my first time learning Young tableau, and I am struggling with a specific rule.
I am reading it from this lecture note.
The rule I am confused about is the very last point at the end of page 2. I have pasted a screenshot of it here for easy reference. Could someone explain a bit more why the tableau shown in the last sub-bullet point should be rejected? It says that, going from right to left in each row, the number of $b$'s should not be more than the number of $a$'s, but in this specific tableau there is one each of $a$ and $b$. So, why is it not allowed?
Answer: You aren’t considering the phrase “at any given box position”. Starting on the right of the first row with $b$, at that position there has been one $b$ and zero $a$’s, which is forbidden. | {
"domain": "physics.stackexchange",
"id": 72693,
"tags": "representation-theory"
} |
Multivariate Time series analysis: When is a CNN vs. LSTM appropriate? | Question: I have multiple features in a time series and want to predict the values of the same features for the next time step. I have already trained an LSTM which is working okay, but takes a bit long to train.
So now my question: is it reasonable to use a CNN instead of an LSTM, even though it is a time series? Are there any indicators for when you should never switch to a CNN?
Answer: Is it reasonable to use a CNN instead of an LSTM, even though it is a time series?
Yes, it is. Convolutional Neural Networks are applied to any kind of data in which neighboring information is supposedly relevant for the analysis of the data.
CNNs are very popular with images, where data is correlated in space, and with video, where correlation happens both in space and time.
Are there any indicators for when you should never switch to a CNN?
CNNs are limited in a sense: they have a rigid, forward structure.
If you are trying to perform:
classification in sequences that have varying length ($N$ to $1$);
trying to output another sequence whose length has no fixed proportion to the length of the input ($N$ to $M$).
Simple feed-forward neural networks will fail (due to dimension inconsistency). So, in that case, you should use a recurrent neural network such as an LSTM. | {
"domain": "datascience.stackexchange",
"id": 8037,
"tags": "time-series, cnn, lstm"
} |
Parallelizing WGCNA k-means clustering and merging smaller clusters | Question: WGCNA carefully takes advantage of server parallelization in several functions, and I can switch this on with:
enableWGCNAThreads(nThreads=28)
That speeds up some functions significantly, but the two bottleneck functions in the pipeline (in my experience) are part of the "Projective k-means" operation, primarily "..k-means clustering.." and "..merging smaller clusters..".
Unfortunately, parallelization does not work for those two functions.
Am I missing something? How can I parallelize these functions? Otherwise I'm wasting HPC resources and cluster time.
For reference, I'm working on a 28-processor HPC node with 96GB RAM, and I'm trying to run the analysis on a 15,000 × 15,000 correlation matrix.
Answer: The bad news is that, indeed, projectiveKMeans is not parallelized and I am not sure how much of it is (easily) parallelizable. The good news is that with 15k features (genes) and 96GB RAM you don't need to run preclustering at all. Just analyze the whole data set in one block.
If you for some reason insist on splitting the data into blocks, make sure the BLAS that R uses on the cluster is a fast BLAS, ideally one that can parallelize matrix operations. A lot of time in the merging of small clusters is spent calculating singular value decompositions, which becomes much faster with a good BLAS. HPC clusters will often have Intel MKL available, or the admins could install OpenBLAS. | {
"domain": "bioinformatics.stackexchange",
"id": 1208,
"tags": "r, wgcna"
} |
Textbook proof error? Runge Lenz | Question:
I was reading this proof in my textbook. They say that $$\vec{r} \cdot \dot{\vec{r}} = |\vec{r}||\dot{\vec{r}}|.$$ Doesn't that mean $\vec{r}$ is parallel to $\dot{\vec{r}}$, and if so, then the line before $3.80$ is $0$ ($\vec{r}$ crossed with $\dot{\vec{r}}$ is $0$). How can $\vec{r}$ possibly not be parallel to $\dot{\vec{r}}$ as this is the only way the proof makes sense?
Answer: You don't have the notation quite right. The end result is that
$$\vec{r}\cdot\dot{\vec{r}} = |\vec{r}|\frac{d|\vec{r}|}{dt}.$$
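To see where this identity comes from, differentiate $\vec{r}\cdot\vec{r} = |\vec{r}|^2$ with respect to time (both sides are the same scalar, written two ways):
$$\frac{d}{dt}\left(\vec{r}\cdot\vec{r}\right) = 2\,\vec{r}\cdot\dot{\vec{r}}, \qquad \frac{d}{dt}|\vec{r}|^2 = 2\,|\vec{r}|\,\frac{d|\vec{r}|}{dt},$$
so $\vec{r}\cdot\dot{\vec{r}} = |\vec{r}|\,\frac{d|\vec{r}|}{dt}$ holds with no assumption that $\vec{r}$ and $\dot{\vec{r}}$ are parallel.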
In other words, $\dot{r}$ is the rate of change of the distance from the origin, not the speed of the object. If the object is undergoing circular motion about the origin at a constant speed, $\dot{r} = 0$. | {
"domain": "physics.stackexchange",
"id": 65495,
"tags": "newtonian-mechanics, differentiation, notation, projectile, laplace-runge-lenz-vector"
} |
Defining Model in Kalman for Getting Position Using | Question: I am stuck on defining the system model, i.e. choosing my state vector and input vector, for navigating using only a navaid and a tactical-grade INS. My guess is that position is my only state and the INS output (acceleration or velocity) is the input vector. What are the state vector and the input vector in my case? Since I just need 2D position, which sensor should be used for the update step and which for the predict step? Should I use two 1-D Kalman implementations, one for latitude and one for longitude, or not?
Answer: In general, for position-related Kalman filters, you want your state vector to contain $x$, and $\dot{x}$ (location and velocity) components. See, for example, the Wikipedia page or this question and answer here.
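For concreteness, here is a minimal 1-D constant-velocity filter with position-only measurements, written out by hand in Python; the time step and noise variances are made-up illustrative values, not something derived from the asker's sensors:

```python
import random

# Minimal 1-D constant-velocity Kalman filter, state = [position, velocity].
# dt, q (process noise) and r (measurement noise) are illustrative values only.
dt, q, r = 1.0, 1e-4, 1.0

x = [0.0, 0.0]                # initial state estimate
P = [[1.0, 0.0], [0.0, 1.0]]  # initial covariance

def step(x, P, z):
    # Predict: x' = F x, P' = F P F^T + Q, with F = [[1, dt], [0, 1]]
    xp = [x[0] + dt * x[1], x[1]]
    Pp = [[P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q,
           P[0][1] + dt * P[1][1]],
          [P[1][0] + dt * P[1][1],
           P[1][1] + q]]
    # Update with the position measurement z (H = [1, 0])
    S = Pp[0][0] + r
    K = [Pp[0][0] / S, Pp[1][0] / S]
    y = z - xp[0]
    xn = [xp[0] + K[0] * y, xp[1] + K[1] * y]
    Pn = [[(1 - K[0]) * Pp[0][0], (1 - K[0]) * Pp[0][1]],
          [Pp[1][0] - K[1] * Pp[0][0], Pp[1][1] - K[1] * Pp[0][1]]]
    return xn, Pn

random.seed(0)
true_v = 2.0
for t in range(1, 200):
    z = true_v * t + random.gauss(0.0, 1.0)  # noisy position only
    x, P = step(x, P, z)

print(round(x[1], 2))  # velocity estimate, close to the true 2.0
```

The same structure extends to latitude and longitude either by running one such filter per axis, or by stacking both axes into a single four-state filter.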
If you have a measure of velocity, then it can certainly also be an input to the Kalman filter. | {
"domain": "dsp.stackexchange",
"id": 7145,
"tags": "filter-design, kalman-filters, least-squares"
} |
Better code for error handling in NodeJs and Express | Question: I have this code for a post request, for when the user wants to change his password. Because of all the cases and the page renders that follow, the code came out really ugly. Is there a better way to structure this? (It works fine and does what I want.)
// Check if old password is correct
SQL.getUserFromDB(request.session.username).then(function (results) {
// Hash and compare with stored hash
bcrypt.compare(request.body.oldPw, results[0].password, function (error, result) {
// Log possible error
if (error) console.log(error);
if (result === true) {
// Check if new passwords are both the same
if (request.body.newPw === request.body.newPw2) {
// Call mysql function
SQL.changeUserPassword(request.session.username, request.body.newPw).then(function () {
response.render('pages/changePassword', {
user: request.session.username,
text: 'Passwort erfolgreich geändert.'
});
}).catch(function (error) {
console.log(error);
if (error == 'pw') {
response.render('pages/changePassword', {
user: request.session.username,
text: 'Neues Passwort zu unsicher.'
});
} else {
// Render error page
response.render('pages/changePassword', {
user: request.session.username,
text: 'Fehler beim Ändern des Passworts.'
});
}
});
} else {
// Render error page
response.render('pages/changePassword', {
user: request.session.username,
text: 'Neue Passwörter stimmen nicht überein!'
});
}
} else {
// Render error page
response.render('pages/changePassword', {
user: request.session.username,
text: 'Altes Passwort stimmt nicht überein!'
});
}
});
// Catch sql errors
}).catch(function (error) {
if (error) console.log(error);
response.render('pages/errors/loginFailed');
});
I tried just setting the text in the different cases and rendering one page with the text at the bottom, this didn't work however.
Answer: What you first need to do is convert all non-promise/callback-based operations into promises, or wrap them in one. This way, you can write seamless promise-based code. For instance, convert bcrypt.compare() into:
const bcryptCompare = (pw1, pw2) => {
return new Promise((resolve, reject) => {
bcrypt.compare(pw1, pw2, (error, result) => {
if(error) reject(error)
else resolve(result)
})
})
}
There are libraries that do this for you, and newer versions of libraries usually support promise-based versions of their APIs. But if you ever need to craft this yourself, this is how it's generally done.
Next, I would separate express from this logic. This way, you're not constrained with Express. Instead of accessing request, pass only the needed information to your function. Instead of immediately sending a response, consider throwing error objects instead. This way, you can defer your response logic to your routing logic.
Next, you could turn to async/await for cleaner code. It's just syntax sugar on top of Promises, allowing asynchronous code to look synchronous.
// Custom error class to house additional info. Also used for determining the
// correct response later.
class PasswordMismatch extends Error {
constructor(message, user){
super(message) // must call super() before using `this` in an Error subclass
this.user = user
}
}
class UserDoesNotExist extends Error {...}
class PasswordIncorrect extends Error {...}
// More custom error classes here
// Our logic, which knows nothing about express and only requires a few
// pieces of information for our operation.
const changePassword = async (username, oldPw, newPw, newPw2) => {
// Throwing an error inside an async function rejects the promise it returns
if (newPw !== newPw2) {
throw new PasswordMismatch('Neue Passwörter stimmen nicht überein!', username)
}
const results = await SQL.getUserFromDB(username)
if (!results) {
throw new UserDoesNotExist('...', username)
}
const result = await bcryptCompare(oldPw, results[0].password)
if (result !== true) {
throw new PasswordIncorrect('...', username)
}
try {
await SQL.changeUserPassword(username, newPw)
} catch(e) {
throw new PasswordChangeFailed('...', username)
}
}
// In your express router
app.post('route', async (request, response) => {
// Gather the necessary data from the request
const username = request.session.username
const oldPw = request.body.oldPw
const newPw = request.body.newPw
const newPw2 = request.body.newPw2
try {
await changePassword(username, oldPw, newPw, newPw2)
// If the operation didn't throw, then we're good. Happy path.
response.render('pages/changePassword', {
user: username,
text: 'Passwort erfolgreich geändert.'
});
} catch (e) {
// If any of the operations failed, then we determine the right response
// In this case, we used a custom error. We can get the data from it.
// You can have more of these for the other custom classes.
if (e instanceof PasswordMismatch) {
response.render('pages/changePassword', {
user: e.user,
text: e.message
});
} else {
// Generic 500 error, if all else fails.
}
}
}) | {
"domain": "codereview.stackexchange",
"id": 35141,
"tags": "javascript, node.js, authentication, express.js"
} |
Is sublimation a property of all substances? | Question: I wonder if sublimation can be done only on certain substances like iodine, or it can be done on all substances.
In other words, can sublimation be done, for example, on water? If it cannot, then why?
Answer: Look up the term “phase diagram”. At different temperatures and pressures different phases are favored. Think about water. At 0 °C and 1 atm water and ice are equally favorable. There can be freezing or melting. At other temperatures and pressures there can be sublimation and deposition.
The sublimation of water is involved in freeze drying and freezer burn of ice cream.
It turns out that iodine sublimes at room temperature and atmospheric pressure but there’s nothing special about iodine compared to other substances on a fundamental level. | {
"domain": "chemistry.stackexchange",
"id": 1644,
"tags": "physical-chemistry, vapor-pressure, sublimation"
} |
Equipartition theorem contradiction for coordinate transformation | Question: The equipartition theorem states the following:
Let $H$ be the Hamiltonian describing a system and $x_i, x_j$ be canonical variables. Then, for a canonical ensemble with temperature $T$, it follows that:
$$\langle x_i \frac{\partial H}{\partial x_j}\rangle = \delta_{ij}kT.$$
Now consider a system with two degrees of freedom $x_1, x_2$ and canonical momenta $p_1$, $p_2$, with the Hamiltonian being
$$H(x_1,x_2,p_1,p_2) = \frac{p_1^2+p_2^2}{2m} + \kappa(x_1 - x_2)^2.$$
One can also introduce center of mass coordinates, $R = (x_1 + x_2)/2, r = x_1-x_2$, with the transformed Hamiltonian being:
$$H(R,r,p_R,p_r) = \frac{p_R^2}{2(m+m)} + \frac{p_r^2}{2(m/2)} + \kappa r^2.$$
The equipartition theorem tells us in the first case:
$$2H = p_1\frac{\partial H}{\partial p_1} + p_2\frac{\partial H}{\partial p_2} + x_1\frac{\partial H}{\partial x_1} + x_2\frac{\partial H}{\partial x_2} \Rightarrow \langle H\rangle = 2kT.$$
For the second case (center of mass coordinates):
$$2H = p_R\frac{\partial H}{\partial p_R} + p_r\frac{\partial H}{\partial p_r} + r\frac{\partial H}{\partial r} \Rightarrow \langle H\rangle = (3/2)kT.$$
Where is my mistake?
Answer: Indeed, even for finite-size systems, it would be problematic, since going to the center-of-mass frame is just a change of variables with Jacobian one (and a nice symplectomorphism, so everything should be consistent). And this is the Hamiltonian of a diatomic particle; it was worked on during the 1880s.
Two problems can arise. First, the Hamiltonian is free in the center-of-mass frame. That is problematic because it means that you'll need to regularize the integral to get sensible answers (by adding a box, for example, or by taking the infinite-size limit after performing the integral). Secondly, the bounds in your center-of-mass frame are highly non-trivial.
In the derivation of the generalized equipartition theorem, you use the fact that some quantities vanish at the boundaries (see the Wikipedia article on the equipartition theorem), while here the situation is a bit more complex, because the integration over $R$ depends on the value of the integration boundaries on $r$ (see this MSE post). Of course, at the end you take the limit of infinite size, but still, I think it invalidates the proof using integration by parts given on the Wikipedia page (see Edit 1).
If you do it by hand, you will obtain $\dfrac{3}{2}k_b T$ as expected.
So everything is fine. Note that the NVT or NVE ensemble considers that the center of mass is also a variable that can fluctuate. You are right (I think it was your thought process) to think that, in a realistic system, the center of mass would be conserved, as would the momentum and the angular momentum. This is for example what happens in molecular dynamics, and it has to be accounted for; see for example this article. As mentioned by Quillo, however, for large system sizes, the difference between a system conserving the momentum, center of mass and angular momentum (molecular dynamics with infinite size) and one that does not (for example, if we simulate the NVT ensemble through Monte Carlo) is not measurable.
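The $\frac{3}{2}kT$ result can also be checked by brute force: averaging the potential $\kappa(x_1-x_2)^2$ with Boltzmann weight over a finite box $[-l,l]^2$ gives roughly $kT/2$, not the $kT$ a naive per-coordinate count would suggest, so together with $kT$ from the two momenta the total is indeed close to $\frac{3}{2}kT$. A small sketch (the units $\beta=\kappa=1$ and the box size are arbitrary choices):

```python
import math

# <V> with V = (x1 - x2)^2, Boltzmann-weighted over the box [-l, l]^2.
# beta = kappa = 1 and l = 8 are illustrative choices only.
beta, l, n = 1.0, 8.0, 400
h = 2 * l / n

num = den = 0.0
for i in range(n):
    x1 = -l + (i + 0.5) * h  # midpoint rule in x1
    for j in range(n):
        x2 = -l + (j + 0.5) * h
        v = (x1 - x2) ** 2
        w = math.exp(-beta * v)
        num += v * w
        den += w

print(round(num / den, 2))  # ~0.5 (= kT/2), slightly below due to the finite box
```

The small deficit below $0.5$ is exactly the boundary effect discussed above: near the edges of the box the accessible range of $r = x_1 - x_2$ is truncated.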
Edit 1: proof that the wikipedia proof does not hold in this case.
Wikipedia places itself in the NVT ensemble:$$1 = {\displaystyle \dfrac{1}{Z}\int e^{-\beta H(p,q)}d\Gamma =\dfrac{1}{Z}\int d[x_{k}e^{-\beta H(p,q)}]d\Gamma _{k}-\dfrac{1}{Z}\int x_{k}{\frac {\partial e^{-\beta H(p,q)}}{\partial x_{k}}}d\Gamma ,}$$
Where $d\Gamma = dx_kd\Gamma_k$. Then they integrate by part and obtain:
$${\displaystyle {\dfrac{1}{Z}}\int \left[e^{-\beta H(p,q)}x_{k}\right]_{x_{k}=a}^{x_{k}=b}d\Gamma _{k}+{\dfrac{1}{Z}}\int e^{-\beta H(p,q)}x_{k}\beta {\frac {\partial H}{\partial x_{k}}}d\Gamma =1,}$$
They argue that the first integral is 0 either because the Hamiltonian diverges when evaluated at $a$ and $b$ or that $x_a = x_b = 0$.
They then find that: $$\beta\langle x_j \frac{\partial H}{\partial x_j}\rangle = 1$$ as expected.
Now, let's assume your Hamiltonian, without the kinetic part:
$$H = (x -y)^2 = r^2$$ this implies that:
$$\lim_{l\rightarrow\infty}\dfrac{1}{Z}\int_{-l}^{l}dx\int_{-l}^{l}dy e^{-\beta (x -y)^2} = \lim_{l\rightarrow\infty}\dfrac{1}{Z}\int_{-l}^{l}dR\int_{-2(l - R)}^{2(l - R)}dr e^{-\beta r^2} = 1$$
If we turn to the proof given by wikipedia with this Hamiltonian, we have the following integration by part:
$$\lim_{l\rightarrow\infty}{\dfrac{1}{Z}}\int_{-l}^{l} dR \left[e^{-\beta r^2}r\right]_{-2(l - R)}^{2(l - R)}$$
And this won't be 0 (I might have messed up the boundary terms for the integrals... But in any case, the final result won't be 0 but something like $2/\pi$.)
Thus you see that the Wikipedia argument does not hold when phase-space coordinates are interdependent. This also happens for the momenta, so the equipartition theorem does not hold for them either. | {
"domain": "physics.stackexchange",
"id": 98414,
"tags": "thermodynamics, statistical-mechanics, coordinate-systems, phase-space"
} |
Coriolis Effect and the Space Shuttle | Question: The Coriolis effect is a well-known phenomenon, important in meteorology and ocean-current forecasting. In addition to location (latitude), it depends on velocity and duration. I assume that commercial aircraft autopilot inertial guidance systems have the ability to compensate for Coriolis, and that even intercontinental missiles are designed with guidance systems that provide Coriolis capability for target accuracy. Was it necessary to provide space shuttles a means to deal with the Coriolis effect during re-entry?
Answer: I am not particularly an expert either, but my understanding is that shuttle flight is a very active process compared to ballistic motion, so any effects the Coriolis effect might have can just as well be considered as additional errors in the trajectory, which is being adjusted. There's an active feedback loop at work: "observe flightpath -> identify desired corrections -> correct and observe."
Contrast this with pure ballistic motion (e.g., a cannonball), where only initial quantities can be manipulated, and you have to account for this if it's significant (i.e., naval artillery charts). | {
"domain": "physics.stackexchange",
"id": 3912,
"tags": "rocket-science, estimation, coriolis-effect"
} |
Analytical solution to first-order rate laws | Question: Does anyone know the analytical solutions to the following rate laws:
1) $\frac{\mathrm{d}\rho_\text{W}}{\mathrm{d}t} = -\rho_\text{W} \cdot (K_1+K_2+K_3)$
2) $\frac{\mathrm{d}\rho_\text{T}}{\mathrm{d}t} = \rho_\text{W} \cdot K_2 - (K_4+K_5) \cdot \rho_\text{T}$
3) $\frac{\mathrm{d}\rho_\text{C}}{\mathrm{d}t} = \rho_\text{W} \cdot K_3 + \rho_\text{T} \cdot K_5$
As an example, I have determined the following analytical solution for this rate law:
$\frac{\mathrm{d}\rho_\text{W}}{\mathrm{d}t} = -\rho_\text{W} \cdot (K_1+K_2+K_3) \qquad \Rightarrow \qquad \rho_\text{W} = \rho_{\text{W}0} \cdot \operatorname{e}^{-(K_1+K_2+K_3) \cdot t}$
where $\rho_{\text{W}0} = $ initial density, $K = $ reaction rate constant, $t =$ time
The scheme for the reactions are:
$\ce{wood} \xrightarrow{K_{1}}{} \ce{gas}$
$\ce{wood} \xrightarrow{K_{2}}{} \ce{tar}$
$\ce{wood} \xrightarrow{K_{3}}{} \ce{char}$
$\ce{tar} \xrightarrow{K_{4}}{} \ce{gas}$
$\ce{tar} \xrightarrow{K_{5}}{} \ce{char}$
Any help with the three rate laws above would be greatly appreciated.
Answer: This is the solution I get:
$$\begin{align}\rho _T(t) &= -\frac{K_2 \rho _{w0} \left(e^{\left(-K_1-K_2-K_3\right) t}-e^{t (-K_4-K_5)}\right)}{K_1+K_2+K_3-K_4-K_5}\\\rho _W(t) &= e^{\left(-K_1-K_2-K_3\right) t} \rho _{w0}\\\rho_C(t) &=\tiny{-\frac{\rho _{w0} e^{-t \left(K_1+K_2+K_3+K_4+K_5\right)} \left(K_2^2 K_5 e^{\left(K_1+K_2+K_3\right) t} \left(e^{t (K_4+K_5)}-1\right)+K_2 \left(K_5 (K_4+K_5) \left(e^{t (K_4+K_5)}-e^{t \left(K_1+K_2+K_3+K_4+K_5\right)}\right)+K_1 K_5 e^{\left(K_1+K_2+K_3\right) t} \left(e^{t (K_4+K_5)}-1\right)-K_3 \left(-(K_4+2 K_5) e^{t \left(K_1+K_2+K_3+K_4+K_5\right)}+K_5 e^{\left(K_1+K_2+K_3\right) t}+(K_4+K_5) e^{t (K_4+K_5)}\right)\right)+K_3 (K_4+K_5) \left(-K_1-K_3+K_4+K_5\right) \left(e^{t (K_4+K_5)}-e^{t \left(K_1+K_2+K_3+K_4+K_5\right)}\right)\right)}{\left(K_1+K_2+K_3\right) (K_4+K_5) \left(-K_1-K_2-K_3+K_4+K_5\right)}}\end{align}$$
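As a sanity check, the closed-form $\rho_T(t)$ above can be compared against a direct numerical integration of the rate equations; the rate constants and initial density below are arbitrary illustrative values, not data:

```python
import math

# Arbitrary illustrative values, not from any real pyrolysis data.
K1, K2, K3, K4, K5 = 0.30, 0.20, 0.10, 0.05, 0.15
rho_w0 = 1.0
a, b = K1 + K2 + K3, K4 + K5  # wood and tar decay constants

def rho_T_analytic(t):
    # Equivalent rearrangement of the closed-form solution above.
    return K2 * rho_w0 * (math.exp(-b * t) - math.exp(-a * t)) / (a - b)

# Small-step Euler integration of the coupled ODEs as a cross-check.
rho_w, rho_t = rho_w0, 0.0
dt, t_end = 1e-4, 5.0
for _ in range(int(t_end / dt)):
    rho_w, rho_t = (rho_w + (-a * rho_w) * dt,
                    rho_t + (K2 * rho_w - b * rho_t) * dt)

print(abs(rho_t - rho_T_analytic(t_end)) < 1e-3)  # True: the two agree
```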
Solving this by hand isn't that hard, as the coupling is unidirectional. So we can solve for $\rho_W$ first and substitute it into the equation for $\frac{d\rho_T}{dt}$.
Now we have an equation of the form $\frac{dy}{dt}=Ay+Be^{kt}$, which has a solution of the form $y=Ce^{At}+De^{kt}$, where $C, D$ can be found by back-substitution. Next, we substitute the values for $\rho_T, \rho_W$ into the last equation, and we get an equation of the form $\frac{d\rho_C}{dt}=f(t)$ ($f(t)$ here is some combination of exponentials), which can easily be solved by integrating $f(t)$. | {
"domain": "chemistry.stackexchange",
"id": 576,
"tags": "reaction-mechanism, kinetics, analytical-chemistry, catalysis"
} |
Error: package 'my_examples_pkg' not found | Question:
I have a launch file that looks like this:
<launch>
<node pkg="my_examples_pkg" type="cmd_vel_publisher.py" name="cmd_vel_publisher" output="screen" />
</launch>
...and a publisher that looks like this:
#! /usr/bin/env python
import rospy
from geometry_msgs.msg import Twist
rospy.init_node('cmd_vel_publisher')
pub = rospy.Publisher('/cmd_vel', Twist, queue_size=10)
cmd = Twist()
cmd.angular.x = 0
cmd.linear.z = 0
while not rospy.is_shutdown():
pub.publish(cmd)
rospy.spin()
However, when I try to run the launch file with the following command:
roslaunch my_examples_pkg my_examples_pkg.launch
I receive the following error:
[my_examples_pkg.launch] is neither a launch file in package [my_examples_pkg] nor is [my_examples_pkg] a launch file name
and when I try to run the individual python publisher file with the following:
rosrun my_examples_pkg cmd_vel_publisher.py
I receive the following error: [rospack] Error: package 'my_examples_pkg' not found
I'm new to ROS so I apologize if this is a really basic question, but its been plaguing me for a while now. Most of the resolved questions I see online direct me to do:
source devel/setup.bash
However, this doesn't resolve my issue as I still receive the errors mentioned above afterwards. Any help/insights/guidance is greatly appreciated. Thanks in advance.
Originally posted by wynnblevins on ROS Answers with karma: 1 on 2020-10-14
Post score: 0
Original comments
Comment by wynnblevins on 2020-10-14:
Oh my gosh, I'm so embarrassed. Yes I had run catkin_make but the issue was caused by a copy paste error package.xml. Specifically, the name field was set to the incorrect value. If you would like to get credit for the answer I will mark your answer as correct! I so appreciate the help, like I said, I'm very new to ROS!
Answer:
Do you have a package named my_examples_pkg? Is it in your workspace? Has it been built with catkin_make or catkin build? What is the name of the package in the package.xml and CMakeLists.txt (<name> tag and project() method respectively)?
Originally posted by jarvisschultz with karma: 9031 on 2020-10-14
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 35631,
"tags": "roslaunch, ros-kinetic, rosrun"
} |
Chemistry 12: Calculating changes in enthalpy | Question: I was wondering if I could please get some help with this:
In a coffee-cup calorimeter $\pu{100.0 mL}$ of $\pu{1.0 M}\ \ce{NaOH}$ and $\pu{100.0 mL}$ of $\pu{1.0 M}\ \ce{HCl}$ are mixed. Both solutions are originally at $\pu{24.6 ^\circ C}$. After the reaction, the temperature is $\pu{31.3 ^\circ C}$. Assuming all solutions have a density of $\pu{1.0 g/cm^3}$ and a specific heat capacity of $\pu{4.184 J/(g ^\circ C)}$, what is the enthalpy change for the neutralization of $\ce{HCl}$ by $\ce{NaOH}$?
This is what I did, but I'm not sure if it's correct:
$$
\begin{align}
q(\text{surroundings}) &= m \cdot c \cdot \Delta T\\
&= \pu{200 g} \cdot \pu{4.184 J/(g ^\circ C)} \cdot (\pu{31.3 ^\circ C} - \pu{24.6 ^\circ C})\\
&= \pu{5.61 kJ}\\[2em]
q(\text{system}) &= -q(\text{surroundings}) = n \cdot \Delta H\\
\pu{-5.61 kJ} &= \pu{0.1 mol} \cdot \Delta H
\end{align}
$$
Thus: $\Delta H = \pu{-56.1 kJ/mol}$
Answer: In general, homework is not welcome here in the forum, which is probably why your question was voted down. However, since you have already given a solution here and only ask for a confirmation, I want to be so nice and reply to your question.
The short answer is: Yes!
In more detail, the following can be said: The heat from the neutralization added to the heat to warm the solution added to the heat to warm the calorimeter has to be equal to zero.
Thus: $Q(\text{neutralization}) + Q(\text{warm the solution}) + Q(\text{warm the calorimeter}) = 0$
In your example, it is assumed for simplicity that the calorimeter does not consume heat to heat up. Therefore $Q(\text{warm the calorimeter}) = 0$.
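Numerically, a quick check of the arithmetic (values taken directly from the problem statement; the formula itself is derived below):

```python
m = 200.0           # g: 100 mL NaOH + 100 mL HCl at 1.0 g/cm^3
c = 4.184           # J/(g*degC), specific heat capacity
dT = 31.3 - 24.6    # degC, temperature rise
n = 0.100           # mol of HCl neutralized (0.1000 L x 1.0 M)

q_solution = m * c * dT        # heat absorbed by the solution, in J
dH = -q_solution / n / 1000.0  # kJ per mole of reaction

print(round(dH, 1))  # -56.1
```

which reproduces the $\pu{-56.1 kJ/mol}$ you obtained.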
Thus:
$$
\begin{align}
Q(\text{neutralization}) + Q(\text{warm the solution}) &= 0\\
n \Delta H + m c \Delta T &= 0\\
n \Delta H &= - m c \Delta T\\
\Delta H &= -\frac{m c \Delta T}{n}\\
&= \frac{\pu{200 g} \cdot \pu{4.184 J/(g ^\circ C)} \cdot (\pu{31.3 ^\circ C} - \pu{24.6 ^\circ C})}{\pu{0.1 mol}}\\
&= \pu{-56.1 kJ/mol}
\end{align}
$$ | {
"domain": "chemistry.stackexchange",
"id": 12620,
"tags": "enthalpy"
} |
Problems with python when trying to control a Nao robot with a Kinect | Question:
My team is trying to implement the nao_openni package from WPI (code here). Following the readme file we've gotten to step 5e, where we get a Python import error.
"$ rospack find [PACKAGE]" finds the all packages that it says it depends on.
"$ echo $PYTHONPATH" returns:
/opt/ros/cturtle/ros/core/roslib/src:/home/[username]/Desktop/aldebaran-sdk-1.6.13-linux-i386/lib
The error when running:
"$ ./nao_ni_walker.py" (and ./nao_walker.py and ./nao_sensors.py) is:
File "./nao_walker.py", line 51, in <module>
from nao_ctrl.msg import MotionCommandBtn
ImportError: No module named nao_ctrl.msg
We aren't very familiar with Python, I've been trying to read about it to figure out what's going on but it hasn't been working so far.
We're fairly new to ROS, so any help would be appreciated.
Originally posted by tecedge on ROS Answers with karma: 13 on 2011-02-16
Post score: 1
Original comments
Comment by tecedge on 2011-02-22:
That worked for that error. Thanks again everyone. We're on to another error now though. If we can't figure it out in a few days I'll be back.
Comment by tecedge on 2011-02-16:
Alright. I have to go to class right now but I'll try that later. Thanks a lot for the help everyone.
Comment by Eric Perko on 2011-02-16:
You likely don't have permissions to write to that directory (check with 'ls -lh /opt/ros/cturtle/stacks/nao_0.2'). If you installed nao_ctrl from a source checkout, you should install it to a directory that you have write access to. You could also try running 'sudo make' to force a make to happen.
Comment by tecedge on 2011-02-16:
When running rosmake nao_ctrl there are errors that say gcc is broken, the interesting part can be found here: http://pastebin.com/ZmqKa0vJ. EDIT: There seems to be something wrong with the permissions too.
Comment by tfoote on 2011-02-16:
If you're unsure, it's best to just run the command again.
Comment by tecedge on 2011-02-16:
I believe my teammate did it before, is there a way that to check to see that it's been done before?
Answer:
Have you done 'make' in the nao_ctrl package (or 'rosmake nao_ctrl') -- if not, you won't have the message headers generated yet.
Originally posted by fergs with karma: 13902 on 2011-02-16
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 4770,
"tags": "kinect, nao"
} |
Nextflow (DSL v2): how to best synchronize multiple outputs from a single process | Question: I have a workflow that needs to:
Generate .fastq files from .bams (while preserving the @RG group from the original bam)
Split the .fastqs
Align
etc.
The only way I could think to preserve the @RG group was to print it to a file during step 1 and emit the file. I can't figure out how to synchronize the emitted @RG file with the .fastqs, however. I've tried several variations without luck.
Questions:
How do I synchronize the emitted @RG file (sample_RG_file) with the split .fastqs?
Is there a better way?
Bonus question (may be worth a separate post): does Nextflow not stage files until the actual .bash script starts? I tried to operate on the sample_RG_file within the script using Groovy, but got an error that the file didn't exist. (The code was working before I found I needed to synchronize sample_RG_file with the split .fastqs.)
Other variations:
The code was 'working' before I found I needed to synchronize sample_RG_file with the split .fastqs. I was originally passing GENERATE_FASTQS_PROC.out.sample_RG as a separate parameter to ALIGN_FASTQ_BWA_PROC, but it was always passing the same (first) file.
I also tried GENERATE_FASTQS_PROC.out.sample_RG.flatten() in the .map operator, but that had the same error.
Current error:
The code below fails with the following error:
Error executing process > 'REALIGN_SAMPLES_WF:ALIGN_FASTQ_BWA_PROC (1)'
Caused by:
Not a valid path value type: groovyx.gpars.dataflow.DataflowBroadcast (DataflowBroadcast around DataflowStream[?])
Semi-pseudo code
Workflow
workflow my_WF {
take:
sample_input_file_ch
original_ref
main:
GENERATE_FASTQS_PROC(sample_input_file_ch, original_ref)
reads_per_job = 1_000_000
fq_split_reads_and_RG_ch = GENERATE_FASTQS_PROC.out.fastqs
.splitFastq(by: reads_per_job, pe: true, file: true)
.map {
fq1, fq2 ->
tuple( fq1, fq2, GENERATE_FASTQS_PROC.out.sample_RG )
}
.view()
ALIGN_FASTQ_BWA_PROC(fq_split_reads_and_RG_ch, ...)
}
GENERATE_FASTQS_PROC
process GENERATE_FASTQS_PROC {
input:
path(sample_input_file)
val(original_ref)
output:
tuple path("**/*_R1.fastq"), path("**/*_R2.fastq"), emit: fastqs
path("**/*_RG.txt"), emit: sample_RG
script:
"""
generate_fastqs.sh $sample_input_file $original_ref $task.cpus $mem_per_thread
"""
}
ALIGN_FASTQ_BWA_PROC
process ALIGN_FASTQ_BWA_PROC {
/*
* Publish results. 'mode: copy' will copy the files into the publishDir
* rather than only making links.
*/
// publishDir("${params.results_dir}/01-REALIGN_SAMPLES", mode: 'copy')
label 'REALIGN_SAMPLES'
input:
tuple path(fq1), path(fq2), path(sample_RG_file)
val(align_to_ref)
val(align_to_ref_tag)
val(original_ref)
val(output_format)
output:
path '*.{bam,cram}', emit: final_alignment
script:
/*
* Tried to read the RG file using Groovy, but
* got error that the file didn't exist.
*
* Does Nextflow not stage files until the actual .bash script starts?
*/
// println "sample RG file: $sample_RG_file"
// file_lines = sample_RG_file.readLines()
// sample_name = file_lines[0]
// sample_RG = file_lines[1]
"""
lines=()
while read line
do
lines+=("\\\$line")
done < $sample_RG_file
sample_name="\\\${lines[0]}"
sample_RG="\\\${lines[1]}"
bash realign_bwa-split.sh $fq1 $fq2 \\\$sample_name \"\\\$sample_RG\" ${align_to_ref} ${output_format} $task.cpus
"""
}
Update
Per @Steve's suggestion, I'm trying to have GENERATE_FASTQS_PROC create the split .fastq files using Unix split, but I'm still struggling with various concepts.
Suppose I have this emitted from my process:
[sample_A_RG.txt, [s1_R1.split_00.fq, s1_R2.split_00.fq, ..., s1_R1.split_XX.fq, s1_R2.split_XX.fq]]
In my mind, I want to pull out the sample_A_RG.txt file from the Channel and extract the @RG. Then I want to call:
[<all_fastqs_for_sample_A>]
.map{
// pull out split ID numbers
}
.groupTuple()
I believe this would group _R1 and _R2 for each split set, and then call
[[00.fq1, 00.fq2], ..., [XX.fq1, XX.fq2]]
.combine(sample_RG_var)
to get [[@RG, 00.fq1, 00.fq2], ..., [@RG, XX.fq1, XX.fq2]].
Seems like it would be so much easier if I could use Channel factories and operators (e.g., .fromFilePairs().combine( rg )) in the output:.
Getting files into a process is relatively straightforward, but getting them out in a reasonable manner is less so.
Answer: Nextflow channels guarantee that items are delivered in the same order as they are sent. So if a process declares two (or more) output channels, the items that are emitted should already be synchronized. This is true unless of course one of the channels sets the optional output attribute.
How do I synchronize the emitted @RG file (sample_RG_file) with the
split .fastqs?
Since the splitFastq operator can create n outputs for each input, you need some way to combine the @RG files using some key. This key, for example, could be the basename of the sample input file. This, I think, would work nicely given the input file should just have a '.bam' extension. Another option, would be to supply some other value in the input declaration and just use that in your output declaration. If we went with the former, your output declaration might look like:
output:
tuple val(sample_input_file.baseName), path("sample_R1.fastq"), path("sample_R2.fastq"), \
emit: fastqs
tuple val(sample_input_file.baseName), path("sample_RG.txt"), \
emit: sample_RG
Then, the body of your workflow might look like:
workflow {
reads = GENERATE_FASTQS_PROC(sample_input_file_ch, original_ref)
reads.fastqs
.splitFastq( by: params.reads_per_job, pe: true, file: true )
.combine( reads.sample_RG, by: 0 )
.map { sample, fq1, fq2, rg_txt -> tuple(fq1, fq2, rg_txt) }
.set { fq_split_reads_and_RG_ch }
...
}
You'll notice that I removed your fuzzy glob patterns from your output declaration above. Your workflow definition suggests that the generate_fastqs.sh script produces a single pair of FASTQ files and a single readgroup file for a given set of inputs. However, as you know, glob patterns can match one or more output files. When declaring output files, it's always best to name these guys explicitly. If you do use a glob pattern and for some reason your process was to produce two files that match a given pattern, you'll instead get back a tuple, which would break the pipeline.
Is there a better way?
Since you need the @RG tags with the split FASTQ files, I don't think there is. However, I generally avoid using the splitting operators on large files. My preference is always to pass "work" off (including IO) to a separate process. A separate process can also provide increased flexibility if/when requirements change.
Does Nextflow not stage files until the actual .bash script starts? I tried to operate
on the sample_RG_file within the script using Groovy, but got an error
that the file didn't exist.
Nextflow evaluates the code in the script definition before returning the script string to be executed. This is why it's possible to implement a conditional script. Basically, you want to avoid all IO operations here. The script string to be executed has not even been decided at this point.
Update
I'm not sure how you plan to operate per sample. IIUC, [<all_fastqs_for_sample_A>] is basically a Channel of lists, where each list contains all split FASTQ files for a given sample. Using map here to pull out the split ID numbers followed by the groupTuple operator might not give the expected results. You might also run into the problems with calling groupTuple, since, without a size attribute, it will basically wait for all outputs before emitting the grouped result:
You should always specify the number of expected elements in each
tuple using the size attribute to allow the groupTuple operator to
stream the collected values as soon as possible. However there are use
cases in which each tuple has a different size depending on the
grouping key. In this case use the built-in function groupKey that
allows you to create a special grouping key object to which it’s
possible to associate the group size for a given key.
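Setting the channel machinery aside for a moment, the bookkeeping being discussed here — groupTuple and combine(by: 0) — is just key-based grouping and joining. A rough plain-Python analogy (for intuition only; Nextflow channels are asynchronous streams, and the file names below are made up):

```python
from collections import defaultdict

# (key, value) pairs as they might arrive on a channel
pairs = [("s1", "s1.00.fq"), ("s2", "s2.00.fq"), ("s1", "s1.01.fq")]

# groupTuple: collect values sharing a key
groups = defaultdict(list)
for key, value in pairs:
    groups[key].append(value)

# combine(by: 0): join each pair with per-key metadata
meta = {"s1": "s1_RG.txt", "s2": "s2_RG.txt"}
joined = [(k, v, meta[k]) for k, v in pairs]

print(dict(groups))  # {'s1': ['s1.00.fq', 's1.01.fq'], 's2': ['s2.00.fq']}
```

The groupKey wrapper discussed above exists precisely because, in the streaming setting, the grouping step cannot otherwise know when a key's group is complete.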
Part of the problem is that we don't know what the generate_fastqs.sh script does and how the other scripts work. If you are aligning using BWA MEM and have samtools available in your $PATH, it might even be easier to produce interleaved output (since BWA MEM can read interleaved FASTQs) using the samtools fastq command. This would save some work having to group each list of FASTQs into pairs, ensuring they are ordered as expected. This would be reasonably involved.
Not sure what BAMs you need to process, or if you're looking for a more generic solution, but I thought I'd throw in an alternative solution you might find works well for you or that you may not have yet considered. Since BAMs often contain multiple readgroups, I find that if I split by readgroup (using samtools split), flatten the output Channel, then run samtools collate followed by samtools fastq, I can achieve somewhat uniform mapping walltimes (assuming the input BAMs are similar) without needing to chunk the reads and then merge the alignments back together. If your input BAMs are not similar at all, then you will likely need to chunk to generate uniform alignment jobs.
Below is an example, that shows how to create a groupKey, followed by the transpose, join/combine and groupTuple pattern. There are some caveats and there is some room for optimization:
nextflow.enable.dsl=2
params.original_ref = './ref.fasta'
params.reads_per_job = 10000
process samtools_view_header {
tag { "${sample_name}:${bam.name}" }
cpus 1
memory 1.GB
input:
tuple val(sample_name), path(bam)
output:
tuple val(sample_name), path("${bam.baseName}.header.txt")
"""
samtools view \\
-H \\
-o "${bam.baseName}.header.txt" \\
"${bam}"
"""
}
process samtools_name_sort {
tag { "${sample_name}:${bam.name}" }
cpus 4
memory 8.GB
input:
tuple val(sample_name), path(bam)
output:
tuple val(sample_name), path("${bam.baseName}.nsorted.bam")
script:
def avail_mem = task.memory ? task.memory.toGiga().intdiv(task.cpus) : 0
def mem_per_thread = avail_mem ? "-m ${avail_mem}G" : ''
"""
samtools sort \\
-@ "${task.cpus - 1}" \\
${mem_per_thread} \\
-n \\
-o "${bam.baseName}.nsorted.bam" \\
-T "${bam.baseName}.nsorted" \\
"${bam}"
"""
}
process samtools_fastq {
tag { "${sample_name}:${bam.name}" }
cpus 1
memory 2.GB
input:
tuple val(sample_name), path(bam)
output:
tuple val(sample_name), path("${bam.baseName}.fastq")
"""
samtools fastq \\
-O \\
-T RG,BC \\
-0 /dev/null \\
"${bam}" \\
> \\
"${bam.baseName}.fastq"
"""
}
process split_fastq {
tag { "${sample_name}:${fastq.name}" }
cpus 1
memory 1.GB
input:
tuple val(sample_name), path(fastq)
output:
tuple val(sample_name), path("${fastq.baseName}.${/[0-9]/*5}.fastq")
"""
split \\
--suffix-length=5 \\
--additional-suffix=".fastq" \\
-d \\
-l "${params.reads_per_job*4}" \\
"${fastq}" \\
"${fastq.baseName}."
"""
}
process bwa_index {
tag { fasta.name }
cpus 1
memory 12.GB
input:
path fasta
output:
tuple val(fasta.name), path("${fasta}.{amb,ann,bwt,pac,sa}")
"""
bwa index "${fasta}"
"""
}
process bwa_mem {
tag { "${sample_name}:${fastq.name}" }
cpus 8
memory 12.GB
input:
tuple val(sample_name), path(fastq), path(header)
tuple val(idxbase), path(bwa_index)
output:
tuple val(sample_name), path("${fastq.baseName}.bam")
script:
def task_cpus = task.cpus > 1 ? task.cpus - 1 : task.cpus
"""
bwa mem \\
-p \\
-t ${task_cpus} \\
-C \\
-H <(grep "^@RG" "${header}") \\
"${idxbase}" \\
"${fastq}" |
samtools view \\
-1 \\
-o "${fastq.baseName}.bam" \\
-
"""
}
process samtools_coordinate_sort {
tag { "${sample_name}:${bam.name}" }
cpus 4
memory 8.GB
input:
tuple val(sample_name), path(bam)
output:
tuple val(sample_name), path("${bam.baseName}.csorted.bam")
script:
def avail_mem = task.memory ? task.memory.toGiga().intdiv(task.cpus) : 0
def mem_per_thread = avail_mem ? "-m ${avail_mem}G" : ''
"""
samtools sort \\
-@ "${task.cpus - 1}" \\
${mem_per_thread} \\
-o "${bam.baseName}.csorted.bam" \\
-T "${bam.baseName}.csorted" \\
"${bam}"
"""
}
process samtools_merge {
tag { sample_name }
cpus 1
memory 4.GB
input:
tuple val(sample_name), path(bams)
output:
tuple val(sample_name), path("${sample_name}.bam{,.bai}")
"""
samtools merge \\
-c \\
-p \\
"${sample_name}.bam" \\
${bams}
samtools index \\
"${sample_name}.bam"
"""
}
workflow {
original_ref = file( params.original_ref )
bwa_index( original_ref )
Channel.fromPath('./data/*.bam')
| map { tuple( it.baseName, it ) }
| set { sample_bam_files }
samtools_view_header( sample_bam_files )
sample_bam_files
| samtools_name_sort
| samtools_fastq
| split_fastq
| map { sample_name, fastq_files ->
tuple( groupKey(sample_name, fastq_files.size()), fastq_files )
}
| transpose()
| combine( samtools_view_header.out, by: 0 )
| set { realignment_inputs }
bwa_mem( realignment_inputs, bwa_index.out )
| samtools_coordinate_sort
| groupTuple()
| map { sample_name, bam_files ->
tuple( sample_name.toString(), bam_files )
}
| samtools_merge
| view()
}
There are some caveats however, in no particular order:
It unravels your shell scripts. I think this is a better solution though because it gives you control over each of the components.
No CRAM support. But this should be easy to add if needed.
A single process to read a BAM header is wasteful, especially if the input BAM is large. More time will be spent localizing the file than actually doing work.
Replace samtools sort -n with samtools collate. There's really no need to name sort. You may prefer to pipe the output of samtools collate into samtools fastq to avoid writing an extra file. This might be useful if you need to operate on an extremely large number of large BAMs.
The 'samtools_fastq' step should write gzipped output with low compression. This means replacing coreutils split with something else. A solution using Biopython could be useful here to read and write gzipped output in a single pass.
The last 'map' command in the workflow is unnecessary. I just thought I'd show that calling .toString() on the GroupKey gives you back an actual String object.
Also, it's not necessary to pass around a tuple of 'sample name' and paths as process inputs and outputs. There's no harm in doing this though. When you are more confident with Nextflow Channels, refactoring these processes should be easy.
My samtools sort processes ensure that they do not oversubscribe the nodes that they land on. They use approximately 320% - 380% CPU, depending on the size of the BAM. The resource requests for the other processes are not optimized at all.
With newer versions of samtools, BAM indexing can be done at the same time as merging, with the --write-index option. | {
"domain": "bioinformatics.stackexchange",
"id": 2076,
"tags": "nextflow"
} |
why do light bulbs explode when in contact with water? | Question: Is it true that when water pours on a light bulb it will explode? If so does this apply to all light bulbs and how does that happen.
Answer: If the bulb is lit when you pour water on it, it will undergo thermal shock. Some parts cool down and shrink, while other parts are hot. This causes very large thermal stresses, which can break the bulb. Depending on the material and construction of the bulb, this may or may not result in a spectacular implosion (once the bulb cracks, the vacuum inside will be filled with air and the water you poured on the bulb). The water will hit the very hot internal parts, and boil instantaneously. This can make a big mess. Certainly it is not an experiment you should do without extreme precautions.
I used to make CT tubes in a previous life. These are large vacuum tubes with a massive anode that can soak up several MJ of energy (enough for an entire CT scan), at which point they reach a temperature of around 1200 C (meaning they glow "white hot"). One of the safety tests we had to do was to evaluate the behavior of the tube and its enclosure if the vacuum envelope failed and the insulating oil surrounding the tube (which acted both as dielectric insulator and cooling medium) would be sucked into the vacuum, and instantaneously come to a boil against the anode. It was one of the most spectacular experiments I have ever participated in; it was done in a sealed room, with video cameras and other instrumentation all around, in their own sealed boxes. The outermost container of the tube had a baffle that could absorb the expanding oil in normal conditions - in abnormal conditions it would cause boiling oil to spray out, safely directed away from the patient.
Believe me, you don't want to do that experiment unless you really, really know what you are doing. | {
"domain": "physics.stackexchange",
"id": 28348,
"tags": "optics, electricity"
} |
How is the time complexity of an algorithm with successive-nested loops measured? | Question: Does the time complexity of the following pseudo-algorithm simplify as $O(n^2 log(n)) = O(n^2)$?
SOME ALGORITHM
GET n
FOR (i=1; i<=n; i++)
{
calculationOfN[i] // some polylogarithmic time calculation
FOR (j=1; j<=n; j++)
{
// some linear calculation
}
FOR (k=0; k<=n; k++)
{
// some linear calculation
}
// some statement
}
// some statement
Answer: Assuming your linear calculations are actually constant-time calculations, then yes, the final time complexity of the algorithm will be $O(n^2)$ (but not for the reason you seem to be indicating).
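As a quick sanity check on the loop structure itself, the calculations can be replaced by counters (a toy sketch — the actual work inside each loop body is abstracted away):

```python
def count_ops(n):
    """Count how often each level of the pseudo-algorithm executes."""
    outer = inner = 0
    for i in range(1, n + 1):
        outer += 1              # one polylogarithmic calculation per i
        for j in range(1, n + 1):
            inner += 1          # body of the j-loop
        for k in range(0, n + 1):
            inner += 1          # body of the k-loop
    return outer, inner

# outer grows like n, inner like 2n^2, so the n^2 term dominates
print(count_ops(100))  # (100, 20100)
```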
Your polylogarithmic time calculation (running in time $O(\log^{k}n)$) is run $n$ times (one for each $i$); each of your secondary $O(1)$ time computations is run $O(n^2)$ times (one for each $(i,j)$ pair and $(i,k)$ pair respectively). Altogether, this leads to a time complexity of $O(n\log^{k}n + n^2)$ (notably, not $O(n^2\log^{k}n)$). Since $n^2$ dominates $O(n\log^{k}n)$, this is $O(n^2)$. | {
"domain": "cs.stackexchange",
"id": 9421,
"tags": "algorithm-analysis, runtime-analysis"
} |
Electric potential of sphere | Question:
(a) I am a little confused about this part. The point at A to B isn't radial. The electric field is radially outward, but if I look at the integral
$$\int_{a}^{b}\mathbf{E}\cdot d\mathbf{s} = \int_{a}^{b}\frac{\rho r}{3\epsilon_0}\mathbf{\hat{r}}\cdot d\mathbf{s}$$
The vectors ds and r can't always be in the same direction, so do I have to expand the dot product in terms of norms? I am afraid to do so. So my wishful-thinking answer (since it says it is worth 2 marks) is
$$\int_{a}^{b}\mathbf{E}\cdot d\mathbf{s} = \int_{r}^{R} \frac{\rho r}{3\epsilon_0}dr$$
(b) Okay this one isn't too bad, but i am extremely paranoid. So I went back to the definition of potential
$$V = k\int\frac{dq}{d}$$ Since the density is uniform, I simply get $V = \dfrac{kQ}{d}$
Now I just substitute $r$ into the equation and get $V = \dfrac{kQ}{r} = \dfrac{Q}{4\pi \epsilon_0r}$.
Note that "d" is the radial distance. I avoided using r or R because the picture uses r and R.
Thank you for reading
Answer: Why are you looking for a radial surface..? Look it as an Equipotential surface (a surface where all points are at same constant electric potential) as it comes with sphere. Hence, you can assume the points A to B as radial to find the potential difference. | {
"domain": "physics.stackexchange",
"id": 4258,
"tags": "homework-and-exercises, electrostatics, electric-fields, potential, gauss-law"
} |
Approximating a Gimbal using Springs and Dampers | Question: I recently asked a question about odd results from a transfer function, and while looking over my work, I conjectured that maybe I'm having a more fundamental problem. Since this is a design problem I don't expect anyone to give a full-fledged answer, but any hints or prodding in the right direction would be appreciated. Here goes.
I'm looking to model a one-axis gimbal in Simulink, which requires a transfer function to plug into my simulation. The gimbal has the following properties that I need to capture in a transfer function:
Pushing the gimbal (giving it an impulse $\delta$) will move it some finite distance $x$. This is equivalent to having an impulse response that converges to a finite value.
The gimbal has a resonant frequency of $\omega$. Since I will be using active PID control on it in Simulink, if I attempt to drive the gimbal with a forcing frequency near $\omega$, I should expect high amplitude oscillations in the position $x$.
These two responses combined will produce a very realistic expectation: pushing on the gimbal will move it (though damping will slow it down to a halt if the force is removed), but if you get near the resonance frequency of the structure, you'll excite the first elastic mode and have large-amplitude oscillations.
Below is a picture of what I thought the system should look like, approximated with springs and dampers (because finding the transfer functions of these elements is easy to compute):
My rationale is as follows: imagine just the mass $m$ and the damper $c$. This will produce the first response required, having the impulse response converge to a finite non-zero value. To force the resonancy condition, I should attach to the mass a spring-and-dashpot so that I can use the simple equation of:
$$\ddot{x} + 2\zeta\omega\dot{x} + \omega^2x = 0$$
However, if you look at the link I've provided, I'm having a whole host of problems reproducing the two required conditions in my model. I think something is wrong with how I've modeled the system, but I don't quite know what's going wrong.
Any help would be greatly appreciated. Further, if anyone knows of papers that describe modeling of one-axis gimbal systems (I've found plenty on two-axis gimbal systems, but those are way too complicated for what I'm trying to accomplish here), that would also be helpful.
Answer: In my "day job" I do a lot of (mostly nonlinear!) dynamics modelling and simulation of mechanical systems. One thing that always raises a "red flag" for me is degrees of freedom with no mass, especially if they are at places where somebody is trying to apply a load to the model, as you have done.
I'm assuming that in the physical structure there is a rod or shaft connecting the "payload" (i.e. the mass $m$) to the "motor" (which applies the force $F$). The shaft will have a torsional stiffness (modelled by your $k$) but it also has some mass. The rotating part of the motor also has mass - possibly more than the connecting shaft.
The model should be capable of representing the basic behaviour of the system if you remove all the damping.
With no damping, your model has no "resonant frequency" at all. It is just a disconnected mass, with a massless spring attached to it. Trying to create a non-zero-frequency vibration mode from nowhere by adding the dampers isn't likely to work.
But if you add a second mass at the point where the force is applied, representing the real mass of the shaft and the rotating part of the motor, then (again ignoring the dampers) you now have a two-degree-of-freedom system with two modes. One mode is a zero-frequency mode corresponding to free rotation at constant angular velocity, the other is your "resonance" with a non-zero frequency $\omega$.
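This two-mode picture is easy to confirm numerically: for two masses joined by a single spring with no ground connection, the stiffness matrix is singular, so one natural frequency is exactly zero (rigid rotation) and the other is the elastic resonance. A sketch with arbitrary example values:

```python
import numpy as np

m1, m2, k = 2.0, 1.0, 50.0            # arbitrary example values
M = np.diag([m1, m2])                  # mass matrix
K = np.array([[k, -k], [-k, k]])       # free-free spring: det(K) = 0

# undamped generalized eigenproblem: K v = w^2 M v
w_squared = np.sort(np.linalg.eigvals(np.linalg.inv(M) @ K).real)

print(w_squared)  # first ~0 (rigid-body mode); second = k*(1/m1 + 1/m2)
```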
The two modes now represent the two "interesting" things that physical structure does, and two dampers will act to control those two modes. It's a reasonable guess that $c$ will mainly damp out the rigid body rotation, and $d$ will mainly damp out the unwanted vibration in the drive shaft.
Note: The $m$ and $k$ in your model would create a vibration mode if you applied a fixed-amplitude displacement to the end of the spring, instead of a force. That would be equivalent to mounting $m$ and $k$ on a vibration testing table and shaking the end of the spring. But if you try to use the model in that way, the damper $c$ is irrelevant, because it has a prescribed displacement at both ends (one end fixed, the other end moving). But the way I interpret your diagram and description, $F$ is meant to represent a force, not a displacement. | {
"domain": "engineering.stackexchange",
"id": 701,
"tags": "mechanical-engineering, dynamics, systems-design"
} |
What is wrong with this definition of ordered state? | Question: It is written in my book that a disorederd state is more probable than an ordered state and hence every system tends to move spontaneously to a state of higher disorder or higher probability.
But I think it depends on us since we can define any state as an ordered or disordered state.
Suppose we have 3 unbiased coins and all are tossed at once.
Now let's say that there is a man A and he defines an ordered state as having three heads at a time. For him the definition of entropy holds good, since the probability of getting the ordered state (all heads) is less than that for a disordered state.
Now say there is a man B and he defines an ordered state as a state with at least one head. Now, we know that the probability of getting an ordered state for that man is more than the probability of getting a disordered state.
How is this possible ? The definition of entropy is not favourable for B.
What is wrong with this intuition ? Do we need to change the definition of entropy ? Or am I wrong somewhere ?
Answer: The words "ordered" and "disordered", in relation to entropy, are the source of a lot of confusion and are not even always an accurate description.
In your coin example, a more typical use of the word "ordered" would be to say that all three coins are the same (either heads or tails). If you start in an ordered state, and each coin randomly flips, it is unlikely the system will remain in an ordered state, since there are 6 states with the 3 coins not all having the same face, and only 2 states where all 3 coins have the same face.
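Enumerating the microstates makes the count explicit (Python used only to count):

```python
from itertools import product

states = list(product('HT', repeat=3))               # all 2^3 = 8 microstates
ordered = [s for s in states if len(set(s)) == 1]    # HHH or TTT
disordered = [s for s in states if len(set(s)) > 1]

print(len(states), len(ordered), len(disordered))    # 8 2 6
```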
A more abstract, but also more correct, description of entropy is in terms of the microstates of the system. Entropy is the logarithm of the number of microstates that are consistent with the observed macroscopic properties of the system. In equilibrium, the macroscopic properties (energy, pressure, volume, chemical potential, etc) will be such that there are more microstates consistent with these properties than any other set of macroscopic properties. | {
"domain": "physics.stackexchange",
"id": 75455,
"tags": "thermodynamics, entropy, terminology, definition, thought-experiment"
} |
Circular vs Linear Convolution | Question: When deriving DFT from DTFT,we sample the frequency domain with uniformly spaced samples,hence adding periodicity to time domain.
But DFT requires a limited support: we take only 1 period.
Does that make circular convolution the same as a linear convolution with the signal's periodic extension?
Answer: Convolution in DFT is still circular.
Think of the DFT as taking the 1st period (in time and in frequency) of the DFS (discrete Fourier series). In DFS, both the time sequence and the frequency sequence are N-periodic, and the circular convolution applies beautifully.
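This is easy to verify numerically: multiplying two N-point DFTs and inverting yields exactly the N-point circular convolution, while zero-padding both sequences first recovers the linear convolution. A quick check with NumPy (arbitrary short sequences):

```python
import numpy as np

def circ_conv(x, h):
    """Direct N-point circular convolution via modular indexing."""
    N = len(x)
    return np.array([sum(x[m] * h[(n - m) % N] for m in range(N))
                     for n in range(N)])

x = np.array([1.0, 2.0, 3.0, 4.0])
h = np.array([1.0, 0.0, 1.0, 0.5])

# pointwise product of DFTs <-> circular convolution
via_dft = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)).real
assert np.allclose(via_dft, circ_conv(x, h))

# zero-pad to length N1 + N2 - 1 and the same trick gives linear convolution
L = len(x) + len(h) - 1
lin = np.fft.ifft(np.fft.fft(x, L) * np.fft.fft(h, L)).real
assert np.allclose(lin, np.convolve(x, h))
```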
I personally think all properties in terms of DFS, and then consider the 1st period when speaking DFT. | {
"domain": "dsp.stackexchange",
"id": 5576,
"tags": "dft, convolution, dtft"
} |
How to Bootstrap dataset for 10000 AUC scores? | Question: I am new to ML and trying to learn the nuances. I work on a binary classification problem with 5K records. Label 1 is 1554 and Label 0 is 3554.
What I currently do is
1) split the data into train(70%) and test(30%)
2) initiate a model --> logreg=LogisticRegression(random_state=41)
3) run 10 fold cv --> logreg_cv=GridSearchCV(logreg,op_param_grid,cv=10,scoring='f1')
4) fit the model --> logreg_cv.fit(X_train_std,y_train)
5) Do prediction --> y_pred = logreg_cv.predict(X_test_std)
Now my question is, how to generate 10000 AUC scores.
I read that people usually do this to get a confidence interval of their train and test performance AUC scores.
So, I would like to know how to do this?
I know that bootstrap means generating random samples with replacement from the same dataset. But do we still have to split them into train and test? That looks no different from CV. How do we generate 10000 AUCs and get a confidence interval?
Can you help?
Answer: In this question of stats exchange you can see an answer to your question of when to use bootstrap over CV.
You can see a simple tutorial of how to do Bootstrap in Python in this link
How to generate 10k AUC Scores?
AUC is a performance metric, and what you are going to measure is the performance of your model on 10k bootstrap resamples. For that, draw a sample of the same size as the dataset, with replacement, 10k times, and measure the AUC on each resample:
scores = []
for i in range(10_000):
    # bootstrap: resample the whole dataset, with replacement
    sample = df.sample(n=df.shape[0], replace=True, random_state=i)
    X = sample.drop(columns='target')
    y = sample.target
    preds = logreg.predict_proba(X)[:, 1]  # AUC needs scores, not class labels
    scores.append(roc_auc_score(y, preds))
lower, upper = np.percentile(scores, [2.5, 97.5])  # 95% confidence interval | {
"domain": "datascience.stackexchange",
"id": 6723,
"tags": "machine-learning, deep-learning, statistics, cross-validation, machine-learning-model"
} |
Proof by Turing Reduction | Question: I need to proof the following by turing reduction.
Given two languages:
$Q= \{(\langle M_1 \rangle , \langle M_2 \rangle ) \mid L(M_1) = L(M_2)\}$
$I= \{\langle M \rangle \mid \;\vert L(M) \vert = \infty \}$
Where $\langle M_1 \rangle$ is the encoding of Turing Machine $M_1$.
I need to show that $ I \le_T Q $ ($I$ can be reduced to $Q$ using a Turing Machine).
So far I came up with the following idea, which I think isn't correct.
Create Turing Machine $M'$ so that a mapping from $Q$ to $I$ is found:
$M'$ takes as input a random word $u$ for example $aab$
$M'$ removes the input ($u$) from its tape
$M'$ guesses words $v$ and $w$
$M'$ checks if $v \in L(M_1)$ and $ w \in L(M_2)$ if and only if $v = w$
Answer: Your solution doesn't quite make sense for many reasons:
The reduction should take $\langle M \rangle$ as input, and using an oracle to $Q$, decide whether $\langle M \rangle \in I$.
The reduction should be deterministic (so step 3 is invalid).
The function of the arbitrary word $u$ is not explained; indeed, steps 1 and 2 seem useless.
You don't explain what $M_1,M_2$ are in step 4 (are you trying to reduce from $Q$ to $I$?).
You don't explain how to implement step 4.
Also, you are missing a proof that your reduction works. Without a proof, your solution is wrong even if it could be completed to a correct solution.
One solution goes along the following lines (I'm assuming $L(M)$ consists of all inputs on which $M$ halts; I also say that $n \in L(M)$ is accepted by $M$):
Given $M$, let $M'$ be the machine that on input $n$, simulates $M$ on $n$, and then (if $M$ halted) tries to find an $m > n$ accepted by $M$ (if there is no such $m$, it never halts); you can implement this using the technique of dovetailing.
Give $M$ and $M'$ to the $Q$ oracle, and return its answer.
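The dovetailing in step 1 can be sketched with Python generators standing in as toy bounded simulations: advance run 1 one step, then runs 1–2, then 1–3, and so on, so a halting run is eventually found even though other runs never halt. The "machines" here are made-up stand-ins, not real TM simulations:

```python
from itertools import count

def dovetail(make_run, start):
    """Return some argument > start whose run halts, found by dovetailing.

    make_run(i) returns a generator; exhausting it models 'halts on i'.
    """
    runs = {}
    for stage in count(1):
        runs[start + stage] = make_run(start + stage)  # admit one new run
        for arg in sorted(runs):
            try:
                next(runs[arg])        # advance this run by one step
            except StopIteration:
                return arg             # this run halted

def toy_machine(i):
    """Halts (after i steps) only on multiples of 7; loops forever otherwise."""
    if i % 7 == 0:
        for _ in range(i):
            yield
    else:
        while True:
            yield

print(dovetail(toy_machine, 10))  # -> 14
```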
The idea is that $L(M)$ is infinite iff $L(M) = L(M')$. I'll leave you the details. | {
"domain": "cs.stackexchange",
"id": 5045,
"tags": "complexity-theory, turing-machines, reductions"
} |
Change of heat capacity fluid when you add solvent | Question: I am considering a liquid for which I know $C_p$ or $C_v$. I am wondering how this changes when you add a (minor) amount of solvent to the liquid. Is there a general theory around describing how a random liquid's heat capacity changes when you add a random solvent? Or is it a very specific process for the particular liquid and solvent one considers? References to literature would be very much appreciated!
Answer: If I understand you correctly, by solvent you mean any liquid other than your 'main' liquid. If so, it is pretty straightforward, since it is a simple mixture of two liquids. If the heat capacity of the main fluid is defined as 'energy required to change the temperature of a mass unit of an object by 1 K', then the heat capacity of the mixture is mass fraction-averaged:
$C_m = g_1 C_1 + g_2 C_2$
If it is defined as 'energy required to change the temperature of one mole of an object by 1 K', i.e. it is molar heat capacity, then the molar heat capacity of the mixture is mole fraction-averaged:
$C_m = x_1 C_1 + x_2 C_2$
$g$ and $x$ are mass and mole fractions of the components. | {
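A minimal numeric sketch of the mass-fraction rule above, assuming the simple mixing rule from this answer (the component values — water at about 4.18 and ethanol at about 2.44 J g⁻¹ K⁻¹ — are only illustrative):

```python
def mixture_heat_capacity(c1, c2, g1):
    """Mass-fraction-weighted heat capacity of a binary mixture.

    The same formula applies to molar heat capacities if g1 is a mole fraction.
    """
    g2 = 1.0 - g1
    return g1 * c1 + g2 * c2

# e.g. 90% water (~4.18 J/(g K)) + 10% ethanol (~2.44 J/(g K)) by mass
cp_mix = mixture_heat_capacity(4.18, 2.44, 0.90)  # ~4.01 J/(g K)
```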
"domain": "physics.stackexchange",
"id": 11999,
"tags": "thermodynamics, physical-chemistry"
} |
Using parser_combinators to parse a string | Question: I am playing with Rust, and am trying to write a simple parser.
I need to parse the string:
"0123,456789"
into a structure:
Passport { label : i8 = 0, body : i32 = 123456789 }
I am using parser_combinators and my code is working, but very ugly.
How can I rewrite this code?
extern crate parser_combinators as pc;
use pc::*;
use pc::primitives::{State, Stream};
fn main() {
match parser(parse_passport).parse("0123,456789") {
Ok((r,l)) => println!("{} {} {}", r.label, r.body, l),
Err(e) => println!("{}", e)
}
}
struct Passport {
label : i8,
body : i32,
}
fn parse_passport<I>(input: State<I>) -> ParseResult<Passport, I> where I: Stream<Item=char> {
let mut label = digit().map(|i : char| (i as i8) - ('0' as i8));
let mut fst = many1(digit()).map(|string : String| string.parse::<i32>().unwrap());
let (i,input) = match label.parse_state(input) {
Ok((x,n)) => (x,n.into_inner()),
Err(e) => return Err(e)
};
let (f,input) = match fst.parse_state(input) {
Ok((x,n)) => (x,n.into_inner()),
Err(e) => return Err(e)
};
let (_,input) = match satisfy(|c| c == ',').parse_state(input) {
Ok((x,n)) => (x,n.into_inner()),
Err(e) => return Err(e)
};
let (s,input) = match fst.parse_state(input) {
Ok((x,n)) => (x,n),
Err(e) => return Err(e)
};
let p = Passport { label : i, body : f * 1000000 + s };
Ok((p, input))
}
Answer: Some small style nits to fix before we dive into the meat. When specifying a variable and a type, there should be space after the :, but not before:
// label : i8,
label: i8,
I break where clauses onto the next line, with each type restriction on its own line as well:
// fn parse_passport<I>(input: State<I>) -> ParseResult<Passport, I> where I: Stream<Item=char> {
fn parse_passport<I>(input: State<I>) -> ParseResult<Passport, I>
where I: Stream<Item=char>
{
Let the compiler use type inference as much as possible. You don't always need to specify the type of a closure variable, for instance:
// let mut label = digit().map(|i : char| (i as i8) - ('0' as i8));
let mut label = digit().map(|i| (i as i8) - ('0' as i8));
Using try! is going to be your biggest win for clarity:
// let (i,input) = match label.parse_state(input) {
// Ok((x,n)) => (x,n.into_inner()),
// Err(e) => return Err(e)
// };
let (i, input) = try!(label.parse_state(input));
let input = input.into_inner();
That's as far as I got with generic Rust knowledge. After reading the docs a bit, I came up with this though (note: this is my first time using this library, so I make no claims to use it well!):
fn parse_passport<I>(input: State<I>) -> ParseResult<Passport, I>
where I: Stream<Item=char>
{
let label = digit().map(|i| (i as i8) - ('0' as i8));
fn to_number(string: String) -> i32 { string.parse().unwrap() }
let fst = many1(digit());
let comma = satisfy(|c| c == ',');
let mut parser = label.and(fst.clone().map(to_number)).and(comma).and(fst.map(to_number));
let ((((i, f), _), s), input) = try!(parser.parse_state(input));
let p = Passport { label: i, body: f * 1000000 + s };
Ok((p, input))
}
I'm certain there's a better way though. The nested tuples are a bit off-putting. Also, I feel like there should be a nice way to map over the result of the parsing so we didn't have to unpack and repack the Result, including the input. There's the straightforward but awkward:
parser.parse_state(input).map(|((((i, f), _), s), input)| {
(Passport { label: i, body: f * 1000000 + s }, input)
})
This is the final thing I ended up at:
extern crate parser_combinators as pc;
use pc::*;
use pc::primitives::{State, Stream};
fn main() {
match parser(parse_passport).parse("0123,456789") {
Ok((r, l)) => println!("{} {} {}", r.label, r.body, l),
Err(e) => println!("{}", e)
}
}
struct Passport {
label: i8,
body: i32,
}
fn parse_passport<I>(input: State<I>) -> ParseResult<Passport, I>
where I: Stream<Item=char>
{
let label = digit().map(|i| (i as i8) - ('0' as i8));
fn to_number(string: String) -> i32 { string.parse().unwrap() }
let fst = many1(digit());
let comma = satisfy(|c| c == ',');
let mut parser = label.and(fst.clone().map(to_number)).and(comma).and(fst.map(to_number));
parser.parse_state(input).map(|((((i, f), _), s), input)| {
(Passport { label: i, body: f * 1000000 + s }, input)
})
} | {
"domain": "codereview.stackexchange",
"id": 12978,
"tags": "parsing, rust"
} |
What is the difference between activity and reactivity when discussing elements? | Question: I screwed up on a true/false question that stated that lithium is the most active alkali metal. I answered false, because I remembered that the lower you go in this group, the more explosive the element is.
It seems, however, I was thinking of reactivity and not activity. I know about the activity series, and it's something I wouldn't like to memorize, but I would like a deeper explanation of the difference between activity and reactivity. Why do we have these two terms?
Answer: Activity is an ordered series, showing which element will replace another element. For example, in a solution of copper sulfate in water, a piece of iron quickly becomes coated with copper as the iron displaces copper from the solution because iron is more active. No explosion took place.
$\ce{Fe(solid iron) + CuSO4 (dissolved in water) -> Cu(solid copper) + FeSO4 (dissolved)}$
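The ordering logic of the activity series can be sketched in a few lines of Python. The excerpt below is illustrative only (ordered here by standard reduction potentials; textbook activity series sometimes order the most active metals slightly differently):

```python
# Most active first; a metal displaces any metal listed below it.
ACTIVITY_SERIES = ["Li", "K", "Ca", "Na", "Mg", "Al",
                   "Zn", "Fe", "Pb", "Cu", "Ag", "Au"]

def displaces(metal, other):
    """True if `metal` displaces `other` from a solution of its salt."""
    return ACTIVITY_SERIES.index(metal) < ACTIVITY_SERIES.index(other)

displaces("Fe", "Cu")  # True: iron plates out copper from CuSO4
```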
Reactivity is less clearly defined. In general, it means how easily some chemical change takes place. For example, chlorine, oxygen and sodium are all very reactive, easily attaching to other elements or replacing elements in compounds. Argon and helium are not reactive, and rarely, if ever, combine with other substances. Nitrogen triiodide, $\ce{NI3}$ and acetylene, $\ce{C2H2}$, are very reactive, but in a way opposite that of sodium or oxygen: rather than attaching to other substances, forming molecules, these compounds want to break up their molecules. In fact, even touching $\ce{NI3}$ will make it decompose violently... Now there's your explosion. | {
"domain": "chemistry.stackexchange",
"id": 6734,
"tags": "metal, reactivity"
} |
Fulling–Davies–Unruh effect with non-uniform acceleration | Question: Can there be some version of the Fulling–Davies–Unruh effect, in which the accelerating observer is moving with a non-uniform acceleration? Can someone refer some papers to read?
If there cannot be such a version of the effect under consideration and we can say that with certainty, what is the reason?
Any help will be appreciated.
Answer: I see two possible, reasonable, and interesting interpretations for this question. I'll answer both of them. I'll start with the easier one, and then move on to the hardest.
Can a non-uniformly accelerating observer perceive the Minkowski vacuum as having particles?
I'm pretty sure the answer is yes. The notion of particle is extremely observer-dependent and it would be actually surprising if they were to agree with an inertial observer without being inertial themselves. Deep down, the notion of particle is related to the observer's notion of what is positive energy, which is then related to the observer's worldline (see this answer I wrote a while ago). If they have different notions of what is positive energy, they will have different notions of particle. I would find it surprising for a non-inertial observer to actually see no particles in the Minkowski vacuum.
Can a non-uniformly accelerating observer perceive the Minkowski vacuum as being thermal?
This requires the observer to not only see particles, but also for them to obey a thermal spectrum. That's a lot more difficult. I find it unlikely, because the thermal spectrum is related to the observers moving along the orbits of a timelike Killing field in the spacetime. Namely, accelerating observers move along worldlines composed of events related by Lorentz boosts. This symmetry is related to the occurrence of a thermal spectrum (a 1991 paper by Kay and Wald discusses how the existence of such a symmetry ends up leading to a thermal spectrum). In the Minkowski spacetime, there are ten independent Killing fields (the maximum possible). The possible symmetries we could then exploit would be:
timelike translation: this is the usual notion of time and inertial observers move along these lines,
3 boosts: in the regions in which they are timelike, these correspond to uniformly accelerated observers moving toward different directions;
3 spacelike translations: these are spacelike and no observer can move along them;
3 rotations: these are spacelike and no observer can move along them.
Hence, there are no other available symmetries left, which means it seems unlikely for other observers to also experience thermality. Furthermore, this lack of symmetry can make the analysis somewhat more challenging, which explains why there is such a huge focus on uniform acceleration.
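For concreteness (standard textbook material, added here for illustration): in 1+1 Minkowski coordinates the boost Killing field is

$$\xi = x\,\partial_t + t\,\partial_x,$$

and its orbits in the right wedge $x > |t|$,

$$t(\tau) = \frac{1}{a}\sinh(a\tau), \qquad x(\tau) = \frac{1}{a}\cosh(a\tau), \qquad x^2 - t^2 = \frac{1}{a^2},$$

are precisely the worldlines of observers with constant proper acceleration $a$ — the observers who see the Minkowski vacuum as thermal at the Unruh temperature $T = a/2\pi$ (in natural units).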
"domain": "physics.stackexchange",
"id": 95457,
"tags": "research-level, qft-in-curved-spacetime, unruh-effect"
} |
How can I get data in a callback function | Question:
I have a subscriber receiving data from a topic. The message type is std_msgs::Bool. How can I access the data?
I tried ".data" which didn't give me the content.
Originally posted by dustsnow on ROS Answers with karma: 1 on 2012-09-28
Post score: 0
Original comments
Comment by Peter Listov on 2012-09-29:
Do you want to recieve data in a callback function? Then try bool var = msg->Data;
Answer:
Why don't you look at the tutorials for subscribers and publishers? Also take a look at the bool message to figure it out.
Originally posted by Kevin with karma: 2962 on 2012-09-28
This answer was ACCEPTED on the original site
Post score: 5 | {
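Expanding on the comment above: the payload of a std_msgs/Bool message lives in a lowercase `data` field (`msg->data` in roscpp, `msg.data` in rospy). A minimal sketch of the access pattern, using a stand-in class instead of the real ROS import so it runs anywhere:

```python
class Bool:
    """Stand-in for std_msgs.msg.Bool; the real message also exposes `data`."""
    def __init__(self, data=False):
        self.data = data

def callback(msg):
    # In a real node this handler would be registered with e.g.
    # rospy.Subscriber("my_topic", Bool, callback)
    value = msg.data  # lowercase `data`, not `Data`
    return value
```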
"domain": "robotics.stackexchange",
"id": 11173,
"tags": "ros, std-msgs, publisher"
} |
Why is distance taken to be perpendicular to the direction of force when calculating torque? | Question: According to my textbook:
Moment of a force = force $\times$ perpendicular distance of the line of action of the force from the fulcrum.
I don't understand why it is defined that way (especially the mathematics behind it). Why is the distance taken to be perpendicular to the direction of the force when calculating torque, regardless of its direction?
For example, in this question:
The answer (given by the marking scheme) is supposedly $240$ $\text{N}$ $\times 1.2$ $\text{m}$ or $288$ $\text{Nm}$. The way I usually tackle questions like these would be to resolve the vector into its components and multiply the magnitude of its vertical component by the distance.
How does the method used above by the marking scheme work? Is there a proof which shows it does?
Answer: The torque $\vec \tau$ by some force ${\bf \vec F}$ about a pivot is defined as $$\boldsymbol {\vec \tau}={\bf \vec r}\times {\bf \vec F}$$ where ${\bf \vec r}$ is the vector going from the pivot (or fulcrum) to the point of application of the force.
Mathematically, you know that the magnitude of the cross product of two vectors ${\bf \vec a}$ and ${\bf \vec b}$ is given as follows: $$\left|{\bf \vec a} \times {\bf \vec b}\right| = ab\sin\theta$$ where $\theta$ is the angle between the vectors (measured when vectors are tail to tail).
Therefore, you have that the magnitude of torque is $$\tau = F\underbrace{r \sin\theta}_\text{lever arm}$$
Now, because of trigonometry, you can see that $r\sin\theta$ is the $1.2 \ \rm m$ distance you highlighted. See below:
Recall that $\sin(\pi - x)=\sin x$, and you can make sense of the above, to realize that the lever arm is indeed $r\sin(\pi-\theta)=r\sin\theta$. | {
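A quick numerical check that the two methods agree, using a hypothetical geometry consistent with the marking scheme's numbers ($|\vec F| = 240$ N along the x-axis, $|\vec r| = 1.5$ m with $\sin\theta = 0.8$, so the lever arm is $1.2$ m — the actual $r$ and $\theta$ from the original figure are not recoverable here):

```python
import math

F = (240.0, 0.0)                      # force along x
r_len, sin_t, cos_t = 1.5, 0.8, 0.6   # assumed geometry (sin^2 + cos^2 = 1)
r = (r_len * cos_t, r_len * sin_t)    # position vector of application point

# Method 1: magnitude of the 2D cross product r x F
tau_cross = abs(r[0] * F[1] - r[1] * F[0])

# Method 2: |F| times the lever arm r*sin(theta)
tau_lever = math.hypot(*F) * (r_len * sin_t)

# both give ~288 N m, matching the marking scheme
```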
"domain": "physics.stackexchange",
"id": 79035,
"tags": "classical-mechanics, torque"
} |
Determining orthogonal polygons in a 3D integer lattice | Question: The last couple of days I've been trying to get a simple algorithm for uniquely determining orthogonal polygons in 3D integer space, having already determined coplanar points.
By the time we got here we know each point is unique and there are no collinear points.
For example, given the coplanar points pts (lists or tuples)
pts = [
[0, 0, 0], [4, 0, 0],
[1, 1, 0], [2, 1, 0], [3, 1, 0], [4, 1, 0],
[1, 2, 0], [2, 2, 0], [3, 2, 0], [4, 2, 0],
[0, 3, 0], [4, 3, 0]
]
# Generate orthogonal polygons
# 3 +-----------+
# | |
# 2 | +__+ +--+
# | | | |
# 1 | +--+ +__+
# | |
# 0 +-----------+
# 0 1 2 3 4
# result = [
# [
# [0, 0, 0], [0, 3, 0], [4, 3, 0], [4, 2, 0],
# [3, 2, 0], [3, 1, 0], [4, 1, 0], [4, 0, 0]
# ], [
# [1, 1, 0], [1, 2, 0], [2, 2, 0], [2, 1, 0]
# ]
# ]
I found the basis of an algorithm for this in a paper authored by Joseph O'Rourke
The following seems to work for the few tests I set up (however it doesn't deal with orientation).
def polygonise(pts: list) -> list:
# axes gives us the varying columns (we can ignore z for this)
axes = {i:set(c) for i,c in enumerate(zip(*pts)) if len(set(c))!=1}
a, b = axes.keys()
# construct dicts of axis-columns
pts_by_axis = {axis: {col: [] for col in cols} for axis,cols in axes.items()}
for pt in pts:
for axis in axes:
pts_by_axis[axis][pt[axis]].append(pt)
# each axis-column (sorted by the other dimension)
# will now be paired off into edges, and constructed into axis-edge-dicts
edges = {}
for axis in axes:
other = a if axis != a else b
for col,pt_list in pts_by_axis[axis].items():
pt_list.sort(key=lambda p:p[other])
for i in range(0,len(pt_list),2):
v1 = tuple(pt_list[i])
v2 = tuple(pt_list[i + 1])
edges[tuple((axis,v1))] = other, v2
edges[tuple((axis,v2))] = other, v1
# now derive the polygons, starting at any point.
axis, edge = next(iter(edges.keys()))
result, poly = [], [edge]
while edges:
if (axis, edge) not in edges:
# This polygon is now closed. keep it, and move to the next one.
result.append(poly[:-1]) # if wanted looped, don't slice
poly = []
axis, edge = next(iter(edges.keys()))
else:
# Get the next edge. change the axis (edge is on a different axis)
axis, new_edge = edges.pop((axis,edge))
# delete any old reversed edge.
del edges[(axis, edge)]
edge = new_edge
# add the edge to the existing polygon
poly.append(edge)
# capture the final polygon.
result.append(poly[:-1]) # if wanted looped, don't slice.
return result
if __name__ == "__main__":
from mpl_toolkits.mplot3d.art3d import Poly3DCollection
import matplotlib.pyplot as plt
from random import shuffle
pts = [
(0, 0, 0), (4, 0, 0),
(1, 1, 0), (2, 1, 0), (3, 1, 0), (4, 1, 0),
(1, 2, 0), (2, 2, 0), (3, 2, 0), (4, 2, 0),
(0, 3, 0), (4, 3, 0)
]
# shuffle just to make it different.
shuffle(pts)
result = polygonise(pts)
# all the below is just to show the result
fig = plt.figure()
subplot = fig.add_subplot(111, projection='3d')
subplot.axis('off')
subplot.scatter([0.6, 3.1], [0.7, 3.3], [0, 0], alpha=0.0)
subplot.add_collection3d(Poly3DCollection(result, edgecolors='k', facecolors='b', linewidths=1, alpha=0.2))
fig.tight_layout()
plt.show()
I think that the loops could be easier, and I'm not convinced that I need to store edges twice, (which means I have to delete the unused one).
Any ideas for tidying the polygonise() method would be welcome.
J. O'Rourke, "Uniqueness of orthogonal connect-the-dots", Computational Morphology, G.T. Toussaint (editor), Elsevier Science Publishers, B.V.(North-Holland), 1988, 99-104. found here
Answer: Type Hints
+1 for using type hints. But they could be a little more informative.
def polygonise(pts: list) -> list:
Ok. The function takes a list of pts, but what are pts? It returns a list, but a list of what?
from typing import Sequence
Point = Sequence[int]
Polygon = list[Point]
def polygonise(points: list[Point]) -> list[Polygon]:
...
Now we're giving the reader a lot of information. A point is a sequence of integers. Could be a tuple, could be a list. Not important. Instead of pts, we're a bit more explicit and say points. Finally, the function returns a specific kind of list: a Polygon list.
You can help the reader out even more by including a nice """docstring""" for the function.
Axes
axes = {i:set(c) for i,c in enumerate(zip(*pts)) if len(set(c))!=1}
I really, really hate this statement. It takes the list of points, and explodes each point as an argument for zip. Then, it takes the first coordinate of each of those points, and returns it as a tuple. enumerate adds an index to this, to keep track of which coordinate axis we are processing. Then, we assign those two things to i, c ... really opaque variable names. axis, coordinates might be clearer. Then we turn the coordinates into a set, determine the len of the set, and if that is not 1, we turn the coordinates into a set again, for constructing the dictionary.
Yikes! That's a lot to unpack.
Then we take the axes, and assign them to two variables a and b, hoping there are exactly 2 axes.
Perhaps we should add some error checking.
num_coord = len(points[0])
if any(len(pt) != num_coord for pt in points):
raise ValueError("All points should have the same number of coordinates")
axes = ...
if len(axes) != 2:
raise ValueError("All coordinates must be in an orthogonal 2D plane")
a, b = axes.keys()
That gives me a lot more confidence in the code.
I still don't like the axes = ... statement. I would re-write it, and maybe use the walrus operator (:=) to avoid creating the set twice, but I'm about to significantly change it.
Useless code
edges[tuple((axis,v1))] = other, v2
Here, (axis, v1) is creating a tuple, to pass to the tuple(...) function. That seems ... odd. You could simplify it to:
edges[axis, v1] = other, v2
Alternate implementation
Axes
Instead of a dictionary, let's just extract a list of axis which have varying coordinate data. A simple comparison of each coordinate of all points with the coordinate of the first point, for each coordinate, will tell us if that axis is important or not.
axes = [axis for axis, first in enumerate(points[0])
if any(pt[axis] != first for pt in points)]
if len(axes) != 2:
raise ValueError("All coordinates must be in an orthogonal 2D plane")
a, b = axes
This is a much simpler concept. No sets are being created (twice). No dictionaries.
Sorted Edges
The pts_by_axis is very complex, a dictionary of dictionaries of arrays.
Let's rework this, by simply sorting the points list. We'll sort it two ways, first by the (a, b) coordinate, then by the (b, a) coordinate. We'll use a helper function to generate the sort keys:
def point_key(first: int, second: int):
def key(pt):
return pt[first], pt[second]
return key
Now we can say sorted(points, key=point_key(a, b)), and we'll have the points sorted in one direction. sorted(points, key=point_key(b, a)) will sort it in the other direction.
Since the points always come in pairs, to form edges, let's add another helper function, to extract pairs of points from the sorted array.
def in_pairs(iterable):
it = iter(iterable)
return zip(it, it)
With these, we can build our sorted edge structure. But instead of keying the edges dictionary with a tuple to encoding the axis of the edge, let's just use two dictionaries:
a_edge = {}
for v1, v2 in in_pairs(sorted(points, key=point_key(a, b))):
a_edge[v1] = v2
a_edge[v2] = v1
b_edge = {}
for v1, v2 in in_pairs(sorted(points, key=point_key(b, a))):
b_edge[v1] = v2
b_edge[v2] = v1
The loop at the end does some check to close a polygon and start the next one. Really, you have two loops. Loop while there are still edges. Inside that loop, loop until you've made a polygon. With that structure, the code reads much better:
result = []
while a_edge:
v1 = next(iter(a_edge.keys()))
poly = []
while v1 in a_edge:
v2 = a_edge.pop(v1)
v3 = b_edge.pop(v2)
a_edge.pop(v2)
b_edge.pop(v1)
poly.extend((v1, v2))
v1 = v3
result.append(poly)
return result
Note that I use poly = [] instead of poly = [v1], which means no slice is required at the end to remove the extra point.
Also, I'm popping v2 out of the a_edge dictionary, not an edge, since what is popped out is just a vertex. I find that much clearer.
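For completeness, here is one way the pieces above assemble into a full function (lightly tested against the example from the question; note the points must be hashable — e.g. tuples — because vertices are used as dictionary keys):

```python
from typing import Sequence

Point = Sequence[int]
Polygon = list[Point]

def point_key(first: int, second: int):
    def key(pt):
        return pt[first], pt[second]
    return key

def in_pairs(iterable):
    it = iter(iterable)
    return zip(it, it)

def polygonise(points: list[Point]) -> list[Polygon]:
    """Trace orthogonal polygons through coplanar lattice points."""
    axes = [axis for axis, first in enumerate(points[0])
            if any(pt[axis] != first for pt in points)]
    if len(axes) != 2:
        raise ValueError("All coordinates must be in an orthogonal 2D plane")
    a, b = axes

    # edges along each axis, stored in both directions
    a_edge, b_edge = {}, {}
    for v1, v2 in in_pairs(sorted(points, key=point_key(a, b))):
        a_edge[v1], a_edge[v2] = v2, v1
    for v1, v2 in in_pairs(sorted(points, key=point_key(b, a))):
        b_edge[v1], b_edge[v2] = v2, v1

    result = []
    while a_edge:
        v1 = next(iter(a_edge))
        poly = []
        while v1 in a_edge:
            v2 = a_edge.pop(v1)   # follow an a-axis edge ...
            v3 = b_edge.pop(v2)   # ... then a b-axis edge
            a_edge.pop(v2)        # drop the reverse directions
            b_edge.pop(v1)
            poly.extend((v1, v2))
            v1 = v3
        result.append(poly)
    return result
```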
"domain": "codereview.stackexchange",
"id": 40686,
"tags": "python, python-3.x"
} |
Why can I put my hand through sand but not a table? | Question: I've read in books that one can't put one's hand through a table because the table offers a "Normal Reaction" to the hand. And it is also stated that this force is electromagnetic in nature. But what is this force? Can it be explained using classical electromagnetism "in terms of something I'm more familiar with"?
Moreover, if it is this force that stops my hand from going through the table, why is it that certain other solid substances, like sand, don't produce the same result?
Also, is there any limit to this Normal reaction? I mean on pushing too hard my hand might go through the table. Is there any upper limit to this force?
Answer: You usually cannot push your hand through the table, because it's a single solid. The atoms are held together by covalent bonds, which are electromagnetic in nature. Sand on the other hand is grainy - the $SiO_2$ grains do not interact with each other and are only held "in place" because of gravity. You can run your hand through sand similar to driving a truck through a classroom full of tables, just by pushing them aside.
There is an upper limit to the normal force exerted by the table. When you push enough, the wood will start to bend slightly and finally crack. The exact amount depends on the kind of wood used, its "age"/environmental history and the geometry of your table.
"domain": "physics.stackexchange",
"id": 23769,
"tags": "electromagnetism, classical-mechanics, forces"
} |
Oxidation state of nitrogen in HN3 | Question: The book asks me to find the oxidation state of nitrogen in the compound below (the structure is not the one given in Wikipedia for HN3):
To find the oxidation state using Lewis structure, I have to compare the electronegativity of the bonded atoms and assign the bonding electrons to the more electronegative atom. Then, the difference between the number of electrons in element state and bonded state will give the oxidation number.
Here, the bonding electrons of the nitrogens joined by a double bond will stay in between both atoms and will not be assigned to either one. The $\ce{N-H}$ bonding electrons will be assigned to $\ce{N}$. What about the two $\ce{N-N}$ bonding electrons?
Answer: This answer uses electronegativities for the calculation of oxidation states as proposed in the Expanded Definition of the Oxidation State by Hans-Peter Loock in 2011.
Comparing Electronegativities
The following table shows an excerpt from Pauling electronegativities ($\chi_{\mathrm{Pauling}}$):
$$\begin{array}{cc}
\hline
\text{Element} & \chi_{\mathrm{Pauling}}\\
\hline
\ce{H} & 2.20\\
\ce{N} & 3.04\\
\hline
\end{array}$$
Looking at $\ce{N-H}$ Bond (heteroatomic)
The table shows that the nitrogen atom is more electronegative than the hydrogen atom. It will therefore attract the electrons more strongly than hydrogen does, and thus polarize the bond. Imagining the gedankenexperiment of a hypothetical heterolytic bond cleavage - brought about by applying a pulling force to the atoms until they get separated - it is assumed that the electrons are allocated to the nitrogen atom during separation.
Looking at $\ce{N-N}$ and $\ce{N=N}$ Bonds (homoatomic)
Looking again at the previous gedankenexperiment, now with two identical atoms bound to each other: since they have, of course, the same electronegativity, there is no polarization of the bond. It is assumed that separation would therefore result in a homolytic bond cleavage, always allocating one electron to each of the two nitrogen atoms for each bond respectively.
Summing up
Now, having a look on the complete structure of the molecule in question, and applying the previously states rules:
Last thing to do is calculating the atoms hypothetical charge after separation, which is to be equatable with the oxidation state:
$$ \text{Oxidation state} = N_\mathrm{i}(\ce{e-}) - N_\mathrm{f}(\ce{e-}) $$
With $ N_\mathrm{i}(\ce{e-}) $ representing the number of electrons in a free atom, and $ N_\mathrm{f}(\ce{e-}) $ the number after separation (one should not forget the lone pairs).
One will end up with the following oxidation states for the different (nitrogen) atoms, with the last entry representing the mean oxidation state of all nitrogen atoms:
$$\begin{array}{cccc}
\hline
\text{Atom} & N_\mathrm{i}(\ce{e-}) & N_\mathrm{f}(\ce{e-}) &\text{Oxidation state}\\
\hline
\ce{H} & 1 & 0 & +1\\
\hline
\ce{N-1} & 5 & 5 & 0\\
\ce{N-2} & 5 & 5 & 0\\
\ce{N-3} & 5 & 6 & -1\\
\hline
\ce{N} & - & - & -\frac{1}{3}\\
\hline
\end{array}$$
And the structure with assigned oxidation states: | {
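The bookkeeping in the table above can be reproduced with a few lines of Python (electron counts taken directly from the table):

```python
def oxidation_state(free_atom_electrons, assigned_electrons):
    # Oxidation state = N_i(e-) - N_f(e-), per the formula above
    return free_atom_electrons - assigned_electrons

# H keeps 0 electrons; the three nitrogens keep 5, 5 and 6 (lone pairs included)
states = {
    "H":   oxidation_state(1, 0),   # +1
    "N-1": oxidation_state(5, 5),   #  0
    "N-2": oxidation_state(5, 5),   #  0
    "N-3": oxidation_state(5, 6),   # -1
}
mean_nitrogen = sum(states[n] for n in ("N-1", "N-2", "N-3")) / 3  # -1/3
```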
"domain": "chemistry.stackexchange",
"id": 5159,
"tags": "inorganic-chemistry, molecular-structure, oxidation-state, pnictogen"
} |
How to automatically detect blurred document images | Question: The question I have is related to image quality evaluation. Suppose you are supposed to perform OCR (Optical Character Recognition) on a set of document images. Some images, however, are very blurred, and for these images blind deconvolution is needed. However, there are good images that do not need deconvolution. Then my question is: could I find a criterion to automatically separate these images into two groups, blurred images and clear images, and then treat them differently when performing OCR?
Answer: Yes, but doing so is difficult. You are basically assuming that when a certain feature is sufficiently present in an image, you can apply a filter and realize an improvement in OCR accuracy. This implies that you can find the right feature, apply the right filter, and evaluate OCR accuracy correctly. Since there are lots of features that might be relevant (edge prominence, high-frequency energy, PSF estimation, unfiltered image OCR accuracy metrics (as @Matt mentions), etc.), lots of filters available (blind deconvolution, among others), and you'll need lots of ground truth images to prove system effectiveness, you are looking at a lot of work and experimentation to create a system that works well.
You also need to make sure you are not working against the OCR software itself, which may do its own image processing to improve its accuracy. In other words, make sure that artifacts introduced by your deblurring do not make OCR accuracy worse, even if the deblurring was done "well" from a deblurring perspective.
All that being said, it should be possible to do this. You might even be able to do this adaptively, by automatically and iteratively adjusting parameters and approaches to optimize the result. | {
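As one concrete example of the "high-frequency energy" feature mentioned above — offered as an illustration, not as this answer's own method — the variance of a discrete Laplacian is a popular sharpness score; the threshold separating the two groups would have to be tuned on ground-truth scans of your documents:

```python
import numpy as np

def laplacian_variance(img):
    """Variance of a 4-neighbour discrete Laplacian: a common sharpness score.

    Low values suggest little high-frequency energy, i.e. a blurred page.
    """
    lap = (-4.0 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

def looks_blurred(img, threshold):
    # the threshold is data-dependent and must be tuned per document corpus
    return laplacian_variance(img) < threshold
```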
"domain": "dsp.stackexchange",
"id": 871,
"tags": "image-processing, discrete-signals"
} |
Software mutexes and consensus number 1 | Question: There are numerous possibilities to implement a software mutex using only shared memory (e.g. Dekker's algorithm).
Recently I've read the following from a paper "Wait-free Synchronization" by M. Herlihy:
there exists no two-process consensus protocol using multireader/multiwriter atomic registers.
And then the theorem: "Read/Write registers have consensus number 1".
I guess I am missing something in my understanding of the definitions. Shared memory is an array of atomic registers, right? And Wikipedia says Dekker's algorithm is deadlock and starvation free. So how is that possible with consensus number 1?
Answer: Herlihy's result, indeed the whole paper, is about wait-free synchronization. Corollary 1 states that there is no wait-free two-process consensus protocols using atomic registers. Wait-free is defined informally at the beginning of the introduction, and defined formally in the automata model in §2.3.
Dekker's algorithm is not wait-free. The busy-wait loop to enter the critical section is a wait in the sense of Herlihy. It's possible to have P0 set its wants_to_enter flag, then P1 sets its own, then P0 sees that P1 wants to enter and one of the processes busy-waits (depending on the turn value).
Think of wait-free as “must work even with a hostile scheduler”, i.e. with a scheduler that always switches between processes at the most inconvenient time. Dekker's algorithm requires a scheduler that's at least weakly fair, i.e. guaranteeing that if a process is schedulable then it's eventually scheduled (as opposed to being locked in an order of execution that causes a livelock, such as never exiting the busy-wait loop). Wait-free algorithms complete in finite time with any scheduler that doesn't completely lock up the system if there's work to do (no fairness requirement).
Busy-wait loops require the scheduler to be preemptive (otherwise the loop runs forever). In practice a busy-wait loop would be written with a yield call, and even non-preemptive schedulers try to avoid immediately re-scheduling a process that just yielded. But even with a preemptive scheduler, busy-wait is generally bad: if nothing else it uses up CPU time and electricity. A case where Dekker's algorithm doesn't work at all is if the two processed have different priorities and the scheduler never schedules the process that would enter the loop because the busy-waiting process has a higher priority.
A naive solution to avoid waiting is to put in a time delay. It isn't a good solution because in practice it means an additional useless delay when there is contention on the mutex, and in theory the delay isn't enough since the scheduler could take that long to make its decision and so end up scheduling the busy-wait loop only anyway. A good solution that avoids busy-waiting altogether is to add a primitive that lets a process block until some shared variable is modified; with such a primitive, I think it's easy to achieve an infinite consensus number.
It is instructive to contrast Dekker's algorithm with a wait-free consensus solution, such as the one Herlihy gives for two processes (fig. 7).
prefer[P] := input
if RMW(r,f) = v
then return prefer[P]
else return prefer[Q]
end if
RMW is a primitive operation with a consensus number of at least 2, for example test-and-set. There's no busy-wait in the consensus protocol itself: both processes run pretty much instantly, without ever waiting for the other one. In a real scenario, the process that doesn't acquire the mutex may well wait for it; but the process that does acquire the lock isn't stopped, and in practical terms that means at least an expected performance gain both when there is contention and when there isn't.
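A small simulation of the protocol above (a Python lock stands in for the hardware atomicity of the RMW primitive; the protocol itself never waits):

```python
import threading

class TestAndSet:
    """Simulated atomic read-modify-write register (consensus number >= 2)."""
    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()  # stands in for hardware atomicity

    def rmw(self, new):
        with self._lock:
            old, self._value = self._value, new
            return old

def decide(register, prefer, me, other, v=0):
    """Herlihy's fig. 7: the first process to flip the register wins,
    and both processes return the winner's input -- with no busy-waiting."""
    if register.rmw(1) == v:
        return prefer[me]
    return prefer[other]

register = TestAndSet(0)
prefer = {0: "apple", 1: "banana"}  # prefer[P] := input, per the protocol
```

Whichever process reaches the register first, both return the same (winner's) value.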
"domain": "cs.stackexchange",
"id": 7373,
"tags": "concurrency, mutual-exclusion"
} |
What is a resonating valence bond (RVB) state? | Question: There's something known as a "resonating valence bond" (RVB) state, which plays a role in at least some attempts to understand physics of high-$T_c$ superconductors. This, roughly, involves a state that's in a superposition (hence the "resonating" part of the name, if I understand correctly) of different ways to pair electrons into strongly-bonded spin singlets. My question is: what is a more precise definition of this sort of state? What's the underlying physics, when does it arise, and why is it interesting?
Points an answer might address: is there a simple toy model for which this is the ground state, that sheds light on what sort of system it could arise in? Is there an interesting continuum limit, in which we can characterize this state in a more field-theoretic language? Are there particular kinds of instabilities such a state tends to be subject to?
I think I know where I would start digging if I wanted to really understand this for myself, but mostly I'm asking it to probe the community and see what kind of expertise might be lurking here, since there haven't been so many condensed matter questions.
Answer: RVB states were first coined in 1938 by Pauling in the context of organic materials and they were later extended to metals. Anderson revived the interest in this concept in 1973 when he claimed that they explained the Mott insulators. (Mott, not Matt and not Motl, which is a shame because I was born in 1973.) He wrote a new important paper in 1987 in which he described the copper oxide as an RVB state.
If one has a lattice of atoms etc. and there is qubit at each site - e.g. the spin of an electron - then the RVB state in the Hilbert space of many qubits is simply
$$|\psi\rangle = \sum |(a_1,b_1) (a_2,b_2) \dots (a_N,b_N)\rangle$$
which is a tensor product of "directed dimer" states of qubits which are simple singlets
$$|M,N\rangle = \frac{1}{\sqrt{2}} \left( |\uparrow_M\downarrow_N\rangle - |\downarrow_M\uparrow_N\rangle \right) $$
The sum defining the RVB state goes over all ways of dividing the lattice into (vertical or horizontal or whatever dimensions you have) pairs of adjacent lattice sites. For each such pair, one puts the corresponding two qubits (usually spins) into the singlet state.
The singlet state above is $M,N$-antisymmetric, so one has to be careful about the signs. So all the tensor factors $(a_i,b_i)$ above are oriented and the orientation always goes in such a way that $a_i$ is a white site on a chessboard while $b_i$ is a black site on a chessboard, in the usual chessboard method to divide the lattice into two subsets.
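To make this concrete, here is a small numerical toy (my own illustration, not part of the original answer): on a 2x2 plaquette of four spins there are exactly two nearest-neighbour dimer coverings, and the toy RVB state is their equal-weight superposition. The site labels and singlet orientations below are an arbitrary choice.

```python
import numpy as np
from itertools import product

def dimer_state(pairs, n=4):
    """Tensor product of oriented singlets (|up_a down_b> - |down_a up_b>)/sqrt(2)
    over the given list of (a, b) site pairs, as a vector over n spins."""
    state = np.zeros(2 ** n)
    norm = (1 / np.sqrt(2)) ** len(pairs)
    # sum over the two terms of each singlet factor
    for choices in product([0, 1], repeat=len(pairs)):
        spins = [0] * n
        sign = 1
        for (a, b), c in zip(pairs, choices):
            if c == 0:
                spins[a], spins[b] = 0, 1   # up_a down_b
            else:
                spins[a], spins[b] = 1, 0   # down_a up_b, with a minus sign
                sign = -sign
        index = int("".join(map(str, spins)), 2)
        state[index] += sign * norm
    return state

# The two dimer coverings of the plaquette: horizontal {(0,1),(2,3)} and
# vertical {(0,2),(1,3)}; the toy RVB state is their symmetric superposition.
rvb = dimer_state([(0, 1), (2, 3)]) + dimer_state([(0, 2), (1, 3)])
```

Every term has total S_z = 0, and the resulting state is invariant under the symmetries that exchange the two coverings, which is the "democracy" the answer describes.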
Because all singlet states used in the RVB states are made out of nearest neighbors, it looks like a liquid, which is why the resulting material in this state is called the RVB liquid. (Imagine molecules in a liquid - they also like to interact with some neighbors only. If one doesn't rely on distant molecules to neutralize the spin, it is "liquid-like".)
The idea - related to the name - is that the information about the terms defining $|\psi\rangle$ is the information about which adjacent lattice sites are connected - these are the valence bonds ("valence" because the nearest neighbors are interacting through their valence electrons or degrees of freedom). However, a general term of this type could be claimed to evolve into another similar state where different links (valence bonds) are included to the creation of the singlets. If one tries to allow the valence bonds to jump anywhere - and switch from vertical to horizontal directions - one obtains a "resonating" system. This symmetrization (symmetric superposition) of all possibilities is a usual way to obtain a quantum eigenstate of the lowest energy, assuming that the different terms are able to change into each other by a transition amplitude.
The funny feature of this liquid state is that it is invariant under all the translations - those allowed by the lattice - and the rotations - allowed by the lattice, if any. This is very different from a particular chosen way of dividing the lattice sites (qubits) into pairs. By summing over all ways of dividing into pairs, we reach a certain degree of "democracy" that gives the state very different and special properties - in comparison with some particular "vertical crystals" or other ways you could orient the singlets.
Or you may look at it from the other side. It is somewhat nontrivial to construct singlet states of the material, and the RVB state is the most democratic one. It is often useful to look at mathematical guesses that look special and the RVB state was no exception.
You seem to be interested in the high-$T_c$ superconductors. I believe that the critical paper in this direction was this 1987 paper
http://prb.aps.org/abstract/PRB/v35/i16/p8865_1
by Kivelson, Rokhsar, and Sethna. They asked a simple question - what are the excitations above the RVB state. A fascinating feature was that the excitations inherit only 1 of 2 key properties of the electron: there are spin-1/2 fermionic excitations - like the electron - but the shock is that they're electrically neutral; and there are charged excitations - like the electron - but they're spin-0 bosons (similar to solitons in polyacetylene).
It's a cool property that by choosing a pretty natural state, you may obtain totally unfamiliar excitations - of course, it's a common theme in condensed matter physics. I suppose that if they can talk about the mass of the excitations, and they're non-negative, they also have a Hamiltonian for which the state occurs, and they show that it is stable along the way. But you must read the full paper.
I haven't mentioned the high-$T_c$ punch line yet. Of course, the bosonic charged excitations may produce a Bose gas and this Bose gas could exist at high temperatures.
But of course, one must be careful not to get carried away. The RVB state is not the only one that one can construct out of the spins. The experimental attempts to produce a full-fledged RVB liquid remained inconclusive, to put it lightly, and some previously believed applications of the RVB liquid are no longer believed to be true. For example, it was believed that the RVB state is a description of the disordering of anti-ferromagnets, but especially after a 1991 paper by Read and my ex-colleague Sachdev, the spin-Peierls description became much more likely.
An interesting theoretical by-product of the RVB considerations was a set of things related to cQED - strong coupling compact quantum electrodynamics - with a $\pi$-flux RVB state in the continuum limit. This bizarre theory also has the neutral spin-1/2 excitations; an infinite bare coupling; and has been studied nicely for $SU(N)$ and $Sp(2k)$ gauge groups. It must be assumed that the spin-Peierls ordering doesn't develop in the system.
Best wishes
Lubos | {
"domain": "physics.stackexchange",
"id": 3599,
"tags": "condensed-matter, research-level, topological-order"
} |
Which Oxygen atom in HCOOH (formic/methanoic acid) does Carbon donate its electrons to, to obtain a partial positive charge? | Question: I was wondering if the Carbon atom in HCOOH (methanoic/formic acid) forms a positive partial charge by donating its electrons to both the Oxygen atoms, since they both possess a higher electronegativity value, or just one of them.
I was also curious about how the single bond and double bond between Carbon and the Oxygen atoms would affect the electron donation.
Answer: Electronic effects are cumulative. For example, the central carbon atom in an orthoester such as HC(OEt)3 is surrounded by 3 oxygen atoms whose effects add up.
In the case of formic acid (which is what you described in your question), yes the carbon atom is donating its electron density to BOTH oxygen atoms. However, the oxygen atom involved in the C=O bond will attract more electrons, because the double-bond is shorter. | {
"domain": "chemistry.stackexchange",
"id": 11270,
"tags": "electronegativity, dipole"
} |
Average FFT Magnitude in bins | Question: I am analyzing sounds of daily activities recorded by a smartphone. For example walking, getting up from bed, falling, running etc.. (one at the time).
Let's take one of them. My goal is to
convert from time domain to frequency domain and then
divide it in bins of 10 Hz wide
calculate the average FT magnitude in each of these bins.
Questions:
When I use fft Matlab function, Do I have to choose NFFT>N and power of 2 (Y = fft(signal,NFFT)/N;), or can I just use Y = fft(signal); ?
In my case, is signal = detrend(signal); necessary before doing FFT?
In order to have 10 Hz wide bins, using 4096 Hamming Window is the way to do so?
Do my plots need further processing in order to achieve my goal?
[signal,fs]=audioread(filename); % fs = 44100
N = length(signal); % N = 94144
%% Amplitude Spectrum
Y = fft(signal);
Y_mag = abs(Y);
Fbins = ((0: 1/N: 1-1/N)*fs).';
figure;
subplot(4,1,1);
plot(Fbins,Y_mag);
title('FFT MAGNITUDE RESPONSE');
xlabel('FREQUENCY (HERTZ)');
ylabel('MAGNITUDE');
%% Single-Sided Amplitude Spectrum
NFFT = 2^nextpow2(N);
Y_NFFT = fft(signal,NFFT)/N;
f_NFFT = fs/2*linspace(0,1,NFFT/2+1);
subplot(4,1,2);
plot(f_NFFT,2*abs(Y_NFFT(1:NFFT/2+1)))
title('Single-Sided Amplitude Spectrum of y(t)')
xlabel('Frequency (Hz)')
ylabel('|Y(f)|')
%% Pwelch default values
subplot(4,1,3);
pwelch(signal,[],[],[],fs);
title(sprintf('Pwelch default values'));
%% Pwelch Hamming Window 4096
len = 4096; % different sizes of Hamming Window
subplot(4,1,4);
pwelch(signal,len,[],len,fs);
title(sprintf('Pwelch Hamming Window Size :: %d',len));
Thank you very much
Answer:
When I use fft Matlab function, Do I have to choose NFFT>N and power
of 2 (Y = fft(signal,NFFT)/N;), or can I just use Y = fft(signal); ?
Not necessarily, but it is better to have it as a power of 2 for faster computation.
Also, NFFT>N will make your response look smoother (for visual purposes only), since the true frequency resolution does not change with NFFT>N. You can also look into zero padding, as that is effectively what NFFT>N does.
As also mentioned in the document of FFT:
The execution time for fft depends on the length of the transform. The
time is fastest for powers of two and almost as fast for lengths that
have only small prime factors. The time is typically several times
slower for lengths that are prime, or which have large prime factors.
In my case, is signal = detrend(signal); necessary before doing FFT?
It depends on whether you want to measure the frequency content in the 0 to 10 Hz range (specifically, from your description)
If the DC component is not needed, then you can apply signal = detrend(signal);
In order to have 10 Hz wide bins, using 4096 Hamming Window is the way
to do so?
No. First of all, windowing is used to decrease the side lobes that occur due to spectral leakage. There is an effect of main-lobe broadening due to side-lobe suppression, but I think this is not what you need. Also remember that the window is applied to the raw time signal before zero padding.
You have to play with the number of samples to get your desired resolution (the FFT bin spacing is fs/N, so at fs = 44100 Hz you need roughly 4410 samples for 10 Hz spacing).
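As a concrete sketch of the stated goal, averaging the one-sided FFT magnitude in 10 Hz wide bins, here is one way to do it (in Python/NumPy rather than Matlab; the function name and defaults are illustrative):

```python
import numpy as np

def binned_fft_magnitude(signal, fs, bin_width=10.0):
    # One-sided FFT magnitude and the corresponding frequency axis
    n = len(signal)
    mag = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    # Assign each frequency to its 10 Hz wide bin and average within bins
    idx = (freqs // bin_width).astype(int)
    sums = np.bincount(idx, weights=mag)
    counts = np.bincount(idx)
    return sums / np.maximum(counts, 1)
```

Note that the averaging itself does not improve resolution: if the underlying bin spacing fs/N is coarser than 10 Hz, some output bins will simply be empty.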
Do my plots need further processing in order to achieve my goal?
I cannot guess this, but perhaps from the above answers you can work out what has to be done to achieve your specific goal.
Hope that this answer might help. | {
"domain": "dsp.stackexchange",
"id": 5175,
"tags": "matlab, fft, signal-analysis, fourier-transform, frequency"
} |
Are voltages discrete when we zoom in enough? | Question: Voltages are often thought of as continuous physical quantities.
I was wondering whether by zooming in a lot, they are discrete.
I feel like the answer to the above question is yes as voltages in the real world are generated by actions of electrons. Can someone give me a more formal proof or a disproof?
Whether voltages are discrete or continuous can have some impact on the correctness of devices such as the analog-to-digital converter.
For example, if voltages in the real world are continuous, then the Buridan principle[1] says that there cannot be a perfect analog to digital converter because such a device makes a discrete decision from continuous inputs.
[1] : Lamport, L. (2012). Buridan’s Principle. Found Phys 42, 1056–1066. http://link.springer.com/article/10.1007/s10701-012-9647-7
(It would be great if someone could also answer a related question https://electronics.stackexchange.com/questions/126091/is-there-an-adc-with-a-finite-bound-on-measurement-time)
Answer: For static charges, the relationship is V (voltage) = Q (charge) / C (capacitance). Capacitance is a function of the shape, size and distance between objects, which are all continuous values. (Well, I suppose you could argue that shape and size are quantized to the atomic spacing of the object's material, but you can't say the same thing for distance.) So even though the charge term is quantized, the capacitance — and therefore, the voltage — is not. | {
"domain": "physics.stackexchange",
"id": 80746,
"tags": "electromagnetism, electric-circuits, electrons, voltage, discrete"
} |
Celestron NexStar 6 SE vs. Orion XT8 | Question: I'm interested in purchasing a new telescope and have narrowed it down to the NexStar 6 SE vs. the Orion XT8.
I'd like to be able to get into astrophotography and read that having the Go-To (auto-tracking) is a must -- this would eliminate the Orion XT8. I really like the large(r) aperture of the XT8, but having the Go-To seems nice.
Is the image quality sharper on one or the other?
Answer: Neither of these telescopes is really suitable for astrophotography. The XT8 has no tracking at all. The 6SE has tracking, but it is basically an altazimuth mount, so that the field of view rotates as it tracks, limiting you to very short exposures. Also, its drives use a spur gear system, which has a lot of backlash, making it really poor at guiding. While people have managed to take photographs with it, it requires superhuman efforts.
Quite honestly, I usually recommend that beginners in astronomy avoid photography in their first couple of years. Most beginners are so overwhelmed by basic astronomy, learning to operate the telescope and finding objects in the sky, that trying to take photographs is just too much. I'd recommend that you purchase either of these scopes (because they're both excellent) and put photography on the back burner. Once you know your way around the sky, you can buy a proper equatorial mount and remount either of these scopes for photography. The problem is not with the scopes themselves but with their mounts, which are simply neither designed nor suited for photography. When you do come to try photography, you will find that the minimum equatorial mount alone costs considerably more than either one of these telescopes: you're looking at a minimum of $1000 for a very basic astrophotography-capable mount.
As to the scopes themselves, I own a 6SE myself, and also have an 8-inch Dobsonian. Both are very useful telescopes, and both will be excellent for visual observation. The XT8 will show you more because of its larger aperture: more detail and brighter images. The main advantage of the 6SE is its compact size and portability. Its optics are quite good, but cannot compare with the XT8. If GoTo appeals to you, the XT8 is now available in a GoTo version, the XT8g. There is also the intermediate IntelliScope XT8i, which is my personal favourite. Its computer guides you to any object in the sky, but it uses manual power rather than motors, so is very quiet and low on battery consumption.
I disagree with Florin Andrei that the 6SE is "not much more than a toy." I have been very pleased with mine: it is well made and has good optics in a very solid compact portable package. | {
"domain": "physics.stackexchange",
"id": 3127,
"tags": "astronomy, telescopes, astrophotography"
} |
How do laser rangefinders work when the object surface is not perpendicular to the laser beam? | Question: I find the functioning of a laser rangefinder confusing.
The explanation usually goes like this: "you shine a laser beam onto the object, the laser beam gets reflected and gets back to the device and time required for that is used to calculate the distance".
Okay. But the object surface can be uneven and not perpendicular to the laser beam so only a tiny fraction of beam energy is reflected back to the device. And there's plenty of other radiation around, sunlight included.
How does a rangefinder manage to "see" that very weak reflected signal in a reliable manner?
Answer:
so only a tiny fraction of beam energy
is reflected back to the device.
This tiny fraction is enough. With respect to ambient light:
One can modulate the laser beam, and filter the voltage
of the receiving photodiode for this modulation frequency and phase.
Another precaution is to have a light filter in front of the
receiving photodiode which only lets the wavelength of the laser pass.
I think both precautions are used.
And of course the receiving photodiode is focussed to a spot of some
centimeters diameter around the laser spot.
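The modulate-and-filter idea can be sketched numerically as a toy lock-in demodulation (my own illustration; all the numbers here are made up, not taken from any real rangefinder):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1_000_000                     # sample rate (Hz), illustrative
f_mod = 10_000                     # assumed laser modulation frequency (Hz)
t = np.arange(0, 0.1, 1 / fs)      # 0.1 s of samples

weak_return = 1e-3 * np.sin(2 * np.pi * f_mod * t)   # tiny reflected signal
ambient = 0.5 + 0.05 * rng.standard_normal(t.size)   # sunlight offset + noise
measured = weak_return + ambient

# Multiply by the in-phase reference and average (a crude low-pass filter):
# the DC ambient level and the broadband noise average away, while the
# component at the modulation frequency and phase survives.
amplitude = 2 * np.mean(measured * np.sin(2 * np.pi * f_mod * t))
```

Even though the reflected signal is hundreds of times weaker than the ambient light, the demodulated estimate recovers its amplitude, which is the point of modulating the beam.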
Try pointing the range finder at a mirror; in that case the range finder should fail, except if the mirror happens to reflect the beam precisely back to the range finder (which is rather unlikely). The reason is that from a (clean) mirror you don't get a diffuse (stray) reflection. | {
"domain": "physics.stackexchange",
"id": 47572,
"tags": "optics, energy, electromagnetic-radiation, laser"
} |
Legendre Transformation of Lagrangian with constraints | Question: I have problems with obtaining a Hamiltonian from a Lagrangian with constraints. My overall goal is to find a Hamiltonian description of three particles independent of any Newtonian Background and with symmetric constraints for positions and momenta. For this I start with the 3-particle Lagrangian
$$L= \frac{1}{2} \sum _{i=1}^3 \dot{x}_i^2 - \frac{1}{2\cdot 3} (\sum _{i=1}^3 \dot{x}_i)^2 - V(\{x_i - x_j\})$$
which only depends on relative variables, who are however still defined with respect to an absolute reference frame. To get rid of these (unphysical) dependencies I define new variables:
$$x_1 - x_2 = q_3\\
x_2 - x_3 = q_1 \\
x_3 - x_1 = q_2\\
x_1 + x_2 + x_3 = q_{cm}.$$
The reverse transformation is not uniquely definded. We can choose
$$x_1 = \frac{1}{3} \left( q_{cm} + q_3 - q_2 \right) \\ x_2 = \frac{1}{3} \left( q_{cm} + q_1 - q_3 \right) \\ x_3 = \frac{1}{3} \left( q_{cm} + q_2 - q_1 \right)$$
along with the constraint
$$ q_1 + q_2 + q_3 = Q = 0.$$
From this I can derive
$$ \dot{q}_1 + \dot{q}_2 + \dot{q}_3 = \dot{Q} = 0.$$
I now want to rewrite the Lagrangian in the new Variables. After a little work with the sums I arrive at
$$ \tilde L(q_i, \dot{q}_i) = \frac{1}{6}\left(\dot q_1^2 + \dot q_2^2 + \dot q_3^2\right) - V(q_1,q_2,q_3) $$
But now i don't know: Is the new Lagrangian of the form
$$L_{tot} = \tilde L + \alpha Q$$
or
$$L_{tot} = \tilde L + \alpha Q + \beta \dot{Q}~?$$
In a next step, and this is the core of my question, I would like to obtain the Hamiltonian and the conjugate momenta from this Lagrangian, but I have no idea how to treat the constraints. Is it possible to arrive at a Hamiltonian where the constraint $Q=0$ holds along with a constraint for the conjugate momenta? For any help I'd be extremely grateful!
Another way of doing this could be Legendre-transforming the original Lagrangian and then finding a canonical transformation that has the same effect. But how that could be achieved is even more mysterious to me.
Regarding my background: I am writing my Master's thesis in physics about quantum reference frames. I have some knowledge of singular Lagrangians and constrained Hamiltonian systems (as treated in the first chapters of Henneaux and Teitelboim's "Quantization of Gauge Systems"), and I know the very basics of differential geometry, but I am not deeply versed in that topic.
Answer: On a mathematical level, a Lagrange multiplier in the Lagrangian is no different from a "real" coordinate whose velocity does not appear in the Lagrangian, such as $A_0$ in the context of Maxwell field theory. One can therefore subject a Lagrangian containing a Lagrange multiplier to the standard Hamilton-Dirac procedure and obtain a corresponding constrained Hamiltonian. I'll sketch out the Hamilton-Dirac analysis for this Lagrangian and leave the details to you.
The transformed Lagrangian is
$$
L = \frac{1}{6} (\dot{q}_1^2 + \dot{q}_2^2 + \dot{q}_3^2) - V(q_1, q_2, q_3) + \alpha (q_1 + q_2 + q_3),
$$
where $\alpha$ is a Lagrange multiplier.
One can construct a Hamiltonian that generates the same equations of motion by treating all of the variables, including the Lagrange multiplier, as having conjugate momenta:
\begin{align}
p_i \equiv \frac{\partial L}{\partial \dot{q}_i} = \frac{1}{3} \dot{q}_i \ \ (i = 1,2,3) & p_\alpha \equiv \frac{\partial L}{\partial \dot{\alpha}} = 0
\end{align}
Since the last of these quantities vanishes identically, it is therefore a primary constraint of the model.
The base Hamiltonian of the model is then (as usual)
$$
H_0 = \sum p_i \dot{q}_i - L = \frac{3}{2} \left( p_1^2 + p_2^2 + p_3^2 \right) + V(q_1, q_2, q_3) - \alpha (q_1 + q_2 + q_3)
$$
but this Hamiltonian will not, in general, generate the correct equations of motion (i.e. the evolution will generally leave the "constraint surface" $q_1 + q_2 + q_3 = 0$.)
To obtain a Hamiltonian that generates the correct equations of motion, we first construct the augmented Hamiltonian
$$
H_A = H_0 + u p_\alpha
$$
where $u$ is an auxiliary Lagrange multiplier, left arbitrary for now. We must now see whether the requirement that the system stays on the constraint surface places any requirements on $u$. To do this, we take the Poisson brackets of the primary constraint $p_\alpha = 0$ with the augmented Hamiltonian $H_A$. This will lead to a secondary constraint:
$$
0 = \dot{p}_\alpha = \{ p_\alpha, H_A \} = q_1 + q_2 + q_3.
$$
So we have to have $q_1 + q_2 + q_3 = 0$ to preserve the primary constraint.
This secondary constraint must also be preserved by the time evolution, which gives rise to another secondary constraint, which gives rise to another, and so forth. However, in this case, eventually one arrives at an equation that can be solved for the unknown Lagrange multiplier $u$. (I haven't gone through the algebra carefully, but it looks like you will be able to express $u$ in terms of $\alpha$ and the second derivatives of $V$ with respect to $q_i$.)
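For the record, here is how I believe the chain works out in this particular model (a quick sketch, not checked against the references). The consistency conditions read

$$\dot p_\alpha = \{p_\alpha, H_A\} = q_1+q_2+q_3 \approx 0,$$
$$\frac{d}{dt}\left(q_1+q_2+q_3\right) = 3\left(p_1+p_2+p_3\right) \approx 0,$$
$$\frac{d}{dt}\left(p_1+p_2+p_3\right) = 3\alpha - \sum_i \frac{\partial V}{\partial q_i} \approx 0,$$
$$\frac{d}{dt}\left(3\alpha - \sum_i \frac{\partial V}{\partial q_i}\right) = 3u - 3\sum_{i,j}\frac{\partial^2 V}{\partial q_i \partial q_j}\, p_j \approx 0,$$

using $\dot q_j = 3 p_j$ and $\dot\alpha = u$. The last condition fixes $u = \sum_{i,j} \left(\partial^2 V/\partial q_i \partial q_j\right) p_j$ rather than generating a further constraint, which terminates the chain.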
The full Hamiltonian is then equal to the augmented Hamiltonian with the auxiliary Lagrange multiplier $u$ set equal to this value. In general, one would have to add in the so-called first-class constraints—those which commute with all other constraints—at this stage as well, along with Lagrange multipliers for them. However, I do not believe this model has any first-class constraints.
Further reading:
The best reference I know for this is Dirac's Lectures on Quantum Mechanics (a set of lecture notes from the mid-50s, and not to be confused with his better-known Principles of Quantum Mechanics.) An excellent summary of the procedure can also be found in Appendix B of
Isenberg & Nester, "The effect of gravitational interaction on classical fields: A Hamilton-Dirac analysis." Annals of Physics (NY) 107, pp. 56–81 (1977).
Alternately, you could look my recent paper that discusses this technique for constrained field theories. However, it focuses on a field-theory context and I do not go into as much detail about the procedure there.
Seifert, "Constraints and degrees of freedom in Lorentz-violating field theories", Phys. Rev. D99 045003 (2019). arXiv:1810.09512. | {
"domain": "physics.stackexchange",
"id": 69147,
"tags": "homework-and-exercises, classical-mechanics, lagrangian-formalism, hamiltonian-formalism, constrained-dynamics"
} |
Clojure Neural Network | Question: After reading this article about Neural Networks I was inspired to write my own implementation that allows for more than one hidden layer.
I am interested in how to make this code more idiomatic - for example, I read somewhere that in Clojure you should rarely need to use the for macro (not sure if this is true or not) due to the functions in the standard library - and in whether there are any performance improvements. The simple example below runs fairly quickly, but it is a very small one (an XOR network).
Implementation:
(ns neural-net-again.ann
(:refer-clojure :exclude [+ - * == /])
(:use clojure.core.matrix)
(:use clojure.core.matrix.operators))
(set-current-implementation :vectorz)
(defn activation-fn [x] (Math/tanh x))
(defn dactivation-fn [y] (- 1.0 (* y y)))
(defn get-layers
[network]
(conj (apply (partial conj [(:inputs network)]) (:hidden network)) (:outputs network)))
(defn generate-layer
[neurons next-neurons]
(let [values (vec (repeat neurons 1))
weights (vec (for [i (range neurons)] (vec (repeatedly next-neurons rand))))]
{:values values :weights weights}))
(defn generate-network
[& {:keys [inputs hidden outputs]}]
(if (empty? hidden)
{:inputs (generate-layer (inc inputs) outputs) :outputs (generate-layer outputs 1)} ; add one to inputs for an extra bias neuron
(loop [current-layer (first hidden)
next-layer (first (rest hidden))
others (rest (rest hidden))
network {:inputs (generate-layer (inc inputs) (first hidden))}] ; add one to inputs for extra bias neuron
(if (nil? next-layer)
(-> network
(update-in [:hidden] #(conj % (generate-layer current-layer outputs)))
(assoc :outputs (generate-layer outputs 1)))
(recur next-layer (first others) (rest others) (update-in network [:hidden] #(conj % (generate-layer current-layer next-layer))))))))
(defn activate-layer
[{:keys [values weights]}]
(->> (transpose weights) ; group weights by neuron they point to
(mapv #(* values %))
(mapv #(reduce + %))
(mapv activation-fn)))
(defn forward-propagate
[network inputs]
(let [network (assoc-in network [:inputs :values] (conj inputs 1)) ; add one to the inputs for the bias neuron
layers (get-layers network)]
(loop [current-layer (first layers)
layers (rest layers)
all-layers []]
(if (empty? layers) ; we are at the output layer. Stop forward propagating
{:inputs (first all-layers) :hidden (rest all-layers) :outputs current-layer}
(let [layers (assoc-in (vec layers) [0 :values] (activate-layer current-layer))] ; sets the layer aboves values
(recur (first layers) (rest layers) (conj all-layers current-layer)))))))
(defn threshold-outputs
[network]
(update-in network [:outputs :values] (partial mapv #(if (< % 0.1) 0 (if (> % 0.9) 1 %)))))
(defn output-deltas
[network expected]
(let [outputs (get-in network [:outputs :values])]
(assoc-in network [:outputs :deltas] (* (mapv dactivation-fn outputs) (- expected outputs)))))
(defn layer-deltas
[layer layer-above]
(assoc layer :deltas (* (mapv dactivation-fn (:values layer)) (mapv #(reduce + %)
(* (:deltas layer-above) (:weights layer))))))
(defn adjust-layer-weights
[layer layer-above rate]
(assoc layer :weights (+ (:weights layer) (* rate (mapv #(* (:deltas layer-above) %) (:values layer))))))
(defn back-propagate
[network expected rate]
(let [layers (get-layers (output-deltas network expected))]
(loop [layer (last (butlast layers))
layer-above (last layers)
layers (butlast layers)
all-layers [layer-above]]
(if (nil? layer)
{:inputs (last all-layers) :hidden (reverse (rest (butlast all-layers))) :outputs (first all-layers)}
(let [updated-layer (-> layer
(layer-deltas layer-above)
(adjust-layer-weights layer-above rate))]
(recur (last (butlast layers)) updated-layer (butlast layers) (conj all-layers updated-layer)))))))
(defn train
[network data times rate]
(loop [i 0
net network]
(if (< i times)
(recur (inc i) (reduce (fn [network sample] (-> network
(forward-propagate (:inputs sample))
(back-propagate (:outputs sample) rate))) net data))
net)))
Example usage:
(def xor-data [{:inputs [1 0] :outputs [1]}
{:inputs [0 1] :outputs [1]}
{:inputs [1 1] :outputs [0]}
{:inputs [0 0] :outputs [0]}])
(-> (generate-network :inputs 2 :hidden [2] :outputs 1)
(train xor-data 500 0.2)
(forward-propagate [1 0]))
Answer: I'm the author of core.matrix, so hopefully I can give you some tips from that perspective.
If you want to improve performance, it's much better to use vectors in an optimised format throughout (vectorz-clj is a fine choice) rather than mixing in Clojure vectors everywhere. This saves the overhead of converting to/from Clojure vectors all the time, which is possibly your biggest performance bottleneck in this code. Typically, these will be significantly (probably 10-30x) faster than regular Clojure vectors for numerical operations with core.matrix
Here's an illustration of the difference:
;; add with regular Clojure vectors
=> (let [v (vec (range 1000))] (time (dotimes [i 1000] (add v v))))
"Elapsed time: 625.66391 msecs"
;; add with Vectorz vectors
(let [v (array (range 1000))] (time (dotimes [i 1000] (add v v))))
"Elapsed time: 18.917637 msecs"
Some more specific tips:
Use (array ....) instead of (vec ....) to produce Vectorz format vectors (actually it will produce whatever format you have set as your current implementation, so you can switch back and forth as needed)
Use the core.matrix function emap (element map) rather than mapv. This should produce vectors in the format of the first vector argument, so will maintain Vectorz types. Even better, find a specialised function that does what you want: (add x y) is likely to be much faster than (emap + x y)
activate-layer looks like a big bottleneck. It would be much better written as an array operation that exploits matrix multiplication. I think (mmul (transpose weights) values) should do the trick. To make this extra quick, I suggest storing the weights in pre-transposed format, then you can just do (mmul transposed-weights values)
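For readers more used to array programming in general, the same transformation can be illustrated in NumPy terms (a hypothetical analogue for illustration, not core.matrix code): the per-neuron map/reduce in activate-layer is exactly one matrix-vector product.

```python
import numpy as np

def activate_layer_loopy(values, weights):
    # Direct translation of the reviewed activate-layer: group weights by
    # the neuron they point to, multiply, sum, then apply tanh per neuron.
    return np.array([np.tanh(np.sum(values * col)) for col in weights.T])

def activate_layer_matvec(values, weights):
    # The same computation expressed as a single matrix-vector product.
    return np.tanh(weights.T @ values)
```

Both give identical results; the matvec form hands the inner loops to the optimized library implementation, which is the same win mmul provides in core.matrix.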
I see you are using the core.matrix operators for +, -, * etc. That's fine, but be aware that they are somewhat slower than the equivalent clojure.core operators if you are applying them to single numbers rather than whole arrays. Normally I use the named core.matrix functions instead (add, sub, mul etc.) if there is any risk of confusion. | {
"domain": "codereview.stackexchange",
"id": 5561,
"tags": "functional-programming, clojure, lisp, machine-learning, neural-network"
} |
Monitoring webpages to be notified | Question: I have been writing a monitoring script where I check whenever there are changes on a webpage. When a change has happened, I want to be notified by a printout that there is a difference.
I have also written a spam-filter function to handle flapping that could be caused by a cache issue, where a repo count can go from an old count to a new one and back... Here is the code
import sys
import threading
import time
from datetime import datetime, timedelta
from typing import Union
import requests
from bs4 import BeautifulSoup
from loguru import logger
URLS: set = set()
def filter_spam(delay: int, sizes: dict, _requests) -> Union[dict, bool]:
"""Filter requests to only those that haven't been made
previously within our defined cooldown period."""
# Get filtered set of requests.
def evaluate_request(r):
return r not in _requests or datetime.now() - _requests[r] >= timedelta(seconds=delay)
if filtered := [r for r in sizes if evaluate_request(r)]:
# Refresh timestamps for requests we're actually making.
for r in filtered:
_requests[r] = datetime.now()
return _requests
return False
class MonitorProduct:
def __init__(self):
self._requests: dict[str, datetime] = {}
self.previous_state = {}
def doRequest(self, url):
while True:
if url not in URLS:
logger.info(f'Deleting url from monitoring: {url}')
sys.exit()
response = requests.get(url)
if response.status_code == 200:
soup = BeautifulSoup(response.text, 'html.parser')
if soup.find("span", {"data-search-type": "Repositories"}): # if there are sizes
self.compareData({
'title': soup.find("input", {"name": "q"})['value'],
'repo_count': {soup.find("span", {"data-search-type": "Repositories"}).text.strip(): None}
})
else:
logger.info(f"No diff for {url}")
else:
logger.info(f"Error for {url} -> {response.status_code}")
time.sleep(30)
def compareData(self, data):
if self.previous_state != data:
if filter_spam(3600, data['repo_count'], self._requests):
logger.info(f"The data has been changed to {data}")
self.previous_state = data
# mocked database
def database_urls() -> set:
return {
'https://github.com/search?q=hello+world',
'https://github.com/search?q=python+3',
'https://github.com/search?q=world',
'https://github.com/search?q=wth',
}
if __name__ == '__main__':
while True:
db_urls = database_urls() # get all urls from database
diff = db_urls - URLS
URLS = db_urls # Replace URLS with db_urls to get the latest urls
# Start the new URLS
for url in diff:
logger.info(f'Starting URL: {url}')
threading.Thread(target=MonitorProduct().doRequest, args=(url,)).start()
time.sleep(10)
The code is working pretty well; however, I have written a mocked database that will be polled every 10 s (if I need to raise this, please explain why). The reason I wrote a mocked database is to be able to show you how it works. The idea will be to read from a database (PostgreSQL through peewee). I also used GitHub as an example, because it was the easiest way for me to get the values I want to compare against. I am aware that there is an API for GitHub, but in my situation I would like to gain a better knowledge of BeautifulSoup, therefore I use bs4.
To mention it again, the point for me with this script is that I would like to get notified whenever something has been changed on the webpage.
I hope I could get a feedback from you lovely code reviewer!
Answer: URLS should not be global.
There's a better name for filter_spam, since it isn't spam you're filtering; but I can't get more inventive than "change filter". The algorithm here is probably inefficient - you have no mechanism to purge requests once they've exceeded your cooloff. Another big problem is that this method, though it exists in the global namespace, mutates a class member. Better to only return bool, and also to be a member of your class. An improved algorithm could keep a combined FIFO queue and hash table, represented by an ordered dict.
It's important that you use a session instead of making an isolated get() every time.
Put your response in context management.
Don't check status_code in your case; just check ok.
Do as much computation up-front as possible for BeautifulSoup, using both a strainer and a CSS sieve.
You're not looking at a very useful element span. Have you noticed that often it returns a numeric contraction, 1M? You should look at the h3 instead.
I don't understand why you're passing around your repo_count as a dictionary of a single integer key to a None value. Don't do this.
Don't if self.previous_state != data; your filter does that for you.
Don't bake https://github.com/search into your database; only your search term varies so only use that.
It's good that you have a __main__ guard but it isn't enough. All of those symbols are still globally visible. Make a main() function.
You already have a thread context object, MonitorProduct, so why also require a url parameter? That can be moved to the object.
exit() doesn't seem like what you actually want. If you want to exit the thread, just return.
Suggested
import locale
import logging
import soupsieve
import threading
import time
from collections import OrderedDict
from datetime import datetime
from typing import Callable
from bs4 import BeautifulSoup
from bs4.element import SoupStrainer
from requests import Session
logger = logging.getLogger()
class MonitorProduct:
STRAINER = SoupStrainer(name='main')
SIEVE = soupsieve.compile('div.position-relative h3')
def __init__(self, term: str, is_alive: Callable[[str], bool]) -> None:
self.is_alive = is_alive
self.term = term
self.requests: OrderedDict[int, datetime] = OrderedDict()
self.session = Session()
def do_request(self) -> None:
while True:
if not self.is_alive(self.term):
logger.info(f'Deleting term from monitoring: "{self.term}"')
return
with self.session.get(
url='https://github.com/search',
params={'q': self.term},
headers={'Accept': 'text/html'},
) as response:
if not response.ok:
logger.info(f"Error for {self.term} -> {response.status_code}")
continue
soup = BeautifulSoup(markup=response.text, features='html.parser', parse_only=self.STRAINER)
result_head = self.SIEVE.select_one(soup)
if result_head:
result = result_head.text.strip()
logger.debug(f'Results for {self.term}: {result}')
repo_count = locale.atoi(result.split(maxsplit=1)[0])
else:
repo_count = 0
self.compare_data(repo_count)
time.sleep(30)
def compare_data(self, repo_count: int) -> None:
if self.filter_dupes(repo_count):
logger.info(f"The data for term {self.term} has been changed to {repo_count}")
def filter_dupes(self, repo_count: int, delay: int = 60 * 60) -> bool:
"""Filter requests to only those that haven't been made
previously within our defined cooldown period."""
# Purge old entries
now = datetime.now()
while self.requests:
oldest = next(iter(self.requests.values()))
if (now - oldest).total_seconds() > delay:
self.requests.popitem(last=False)
else:
break
# Check current entry
if repo_count in self.requests:
return False
self.requests[repo_count] = now
return True
# mocked database
def database_terms() -> set[str]:
return {
'hello world',
'python 3',
'world',
'wth',
}
def main() -> None:
logging.basicConfig(level=logging.DEBUG)
terms: set[str] = set()
# Needs to match the locale of github.com
locale.setlocale(locale.LC_ALL, 'en_US.UTF-8')
while True:
db_terms = database_terms() # get all terms from database
diff = db_terms - terms # the newly added terms since the last poll
terms.clear()
terms |= db_terms # Replace terms with db_terms to get the latest terms
# Start the new URLS
for url in diff:
logger.info(f'Starting URL: {url}')
threading.Thread(
target=MonitorProduct(url, terms.__contains__).do_request
).start()
time.sleep(10)
if __name__ == '__main__':
main()
Output
INFO:root:Starting URL: wth
INFO:root:Starting URL: hello world
INFO:root:Starting URL: world
INFO:root:Starting URL: python 3
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): github.com:443
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): github.com:443
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): github.com:443
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): github.com:443
DEBUG:urllib3.connectionpool:https://github.com:443 "GET /search?q=wth HTTP/1.1" 200 None
DEBUG:root:Results for wth: 1,550 repository results
INFO:root:The data for term wth has been changed to 1550
DEBUG:urllib3.connectionpool:https://github.com:443 "GET /search?q=python+3 HTTP/1.1" 200 None
DEBUG:root:Results for python 3: 101,658 repository results
DEBUG:urllib3.connectionpool:https://github.com:443 "GET /search?q=world HTTP/1.1" 200 None
INFO:root:The data for term python 3 has been changed to 101658
DEBUG:root:Results for world: 1,981,644 repository results
INFO:root:The data for term world has been changed to 1981644
DEBUG:urllib3.connectionpool:https://github.com:443 "GET /search?q=hello+world HTTP/1.1" 200 None
DEBUG:root:Results for hello world: 1,733,873 repository results
INFO:root:The data for term hello world has been changed to 1733873 | {
"domain": "codereview.stackexchange",
"id": 43646,
"tags": "python, python-3.x, multithreading, beautifulsoup, python-requests"
} |
Magnetic field arising due to electron's orbital motion around the nucleus | Question: My book writes -
The magnetic field in a reference frame fixed with the electron, arising from the orbital motion of the electron with velocity $\vec{v}$ in the electric field $\vec{E}$ (due to the nucleus) is given by
\begin{equation}
\vec{B}=\frac{1}{c^2}(\vec{E}\times\vec{v})
\end{equation}
My doubts are the following:
• Is this $\vec{E}$ the old Coulomb field, given by $\frac{1}{4\pi\epsilon_0}\frac{Ze}{r^2}$ ($Z$ being the atomic number of nucleus)?
• Why the factor $\frac{1}{c^2}$? Is there any relativistic situation involved?
• And lastly, how do we arrive at this equation?
Answer: The answer to the first 2 questions is yes.
The "derivation" comes from the EM fields of a point charge moving at constant speed, see derivation here.
However, you can also get to the final expression "classically". Consider the proton orbiting the electron as a circular loop of current, which will generate a magnetic field according to the usual formula $$ B = \frac{\mu_0 I}{2r}.$$
The current $I$ is $dq/dt$ where $q$ is the nucleus' charge $Ze$ and $t$ is the period of oscillation $2\pi r / v$ ($v$ the velocity), which gives an effective magnetic field strength: $$ B = \frac{\mu_0 Z e v}{4\pi r^2} = \frac{1}{\epsilon_0 c^2}\frac{Z e v}{4\pi r^2}, $$ where in the second step I used $\mu_0 \epsilon_0 = 1/c^2$.
Finally, if I remember correctly, there should be an overall factor of $1/2$ at the end shoved somewhere in the fine structure. This is due to Thomas precession, an actual and classically-irreproducible relativistic effect that arises from changing between rotating frames of reference. See derivation here.
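As a quick numerical sanity check of the formula above (the Bohr-model values for the hydrogen ground state used here are an illustrative assumption, not from the original answer):

```python
import math

mu0 = 4e-7 * math.pi   # T m / A
e = 1.602e-19          # C, elementary charge
v = 2.19e6             # m/s, Bohr-model electron speed (= alpha * c)
r = 5.29e-11           # m, Bohr radius
Z = 1                  # hydrogen

# Effective field at the electron from the "orbiting" nucleus
B = mu0 * Z * e * v / (4 * math.pi * r ** 2)
print(B)   # roughly 12.5 T
```

This is the familiar order of magnitude quoted for the internal field responsible for spin-orbit splitting.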
"domain": "physics.stackexchange",
"id": 93317,
"tags": "quantum-mechanics, magnetic-fields, electric-fields, atomic-physics, spin-orbit"
} |
How come {ww} isn't regular when {uv | |u|=|v|} is? | Question: As we know, using the pumping lemma, we can easily prove the language $L = \{ w w \mid w \in \{a,b\}^* \}$ is not a regular language.
However, the language $L_1 = \{ w_1 w_2 \mid |w_1| = |w_2| \}$ is a regular language. Because we can get the DFA like below,
DFA:
--►((even))------a,b---------►(odd)
       ▲                        |
       |-----------a,b----------|
My question is, $L = \{ w w \mid w \in \{a,b\}^* \}$ also has the even length of strings ($|w|=|w|$, definitely), so $L$ still can have some DFA like the one above. How come is it not a regular language?
Answer: well, there are things that a DFA can do, and things that DFA cannot do. A DFA is quite a simple machine, and it has no access to "memory". The only "memory" it has is its current state, i.e., a very limited memory. A DFA can do tasks that require finite amount of memory, but nothing more than that.
To check if the input is of even length - is very simple task. It requires only 1 bit of memory (odd/even). therefore, it can be done by a DFA.
However, for the second language $\{ ww \mid w\in\Sigma^*\}$ the DFA needs to check that the first $w$ is identical to the second $w$. How can you do that with no memory? In fact, since $w$ can be of any length, a (one-pass) machine that checks that the two copies of $w$ are the same must have infinite memory (to accommodate any $w$..). But a DFA has only limited memory, and thus cannot solve this task.
at least, this is the intuitive explanation. | {
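The "one bit of memory" claim can be made literal: here is the two-state parity DFA from the question as a Python sketch.

```python
def accepts_even_length(s):
    # The DFA's entire memory is one bit: are we in the "even" state?
    state_even = True                 # start state (also the accepting state)
    for _ in s:
        state_even = not state_even   # every input symbol flips the parity
    return state_even
```

No comparable finite-memory trick exists for checking that the second half of the input equals the first, which is exactly the intuition in the answer above.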
"domain": "cs.stackexchange",
"id": 942,
"tags": "regular-languages, finite-automata"
} |
Powershell script to iterate through folder and restore SQL backups | Question: I am working on a PowerShell script which will loop through a directory containing SQL Server database backup .bak files; move the files to a different location; restore each of those files to the local SQL instance using OSQL.exe; output the SQL output to a log file.
This works OK, but I am willing to improve it.
$sourceBackupDirectory = "C:\FTP"
$sqlDatabaseLocation = "C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\"
$backupsLocation = "C:\DatabaseRestore\"
Get-ChildItem $sourceBackupDirectory -Filter "*.bak" -Recurse | Move-Item -Destination $backupsLocation #move files
$backupsLocationFiles = Get-ChildItem $backupsLocation -Filter "*.bak" #load files into array
$todayUK = Get-Date -format yyyyMMdd
$logsFolder = "LogFiles\"
$logFile = $backupsLocation + $logsFolder + $todayUK + ".txt"
foreach ($file in $backupsLocationFiles)
{
$filePath = $backupsLocation + $file
$file = [io.path]::GetFileNameWithoutExtension($file)
#$filePath
#$file
RestoreDB $file $filePath
}
function global:RestoreDB ([string] $newDBName, [string] $backupFilePath)
{
[string] $dbCommand = "RESTORE DATABASE [$newDBName] " +
"FROM DISK = N'$backupFilePath' " +
"WITH FILE = 1, " +
"MOVE N'$newDBName' " +
"TO N'$sqlDatabaseLocation" + "$newDBName.mdf', " +
"MOVE N'$newDBName" + "_log' " +
"TO N'$sqlDatabaseLocation" + "$newDBName" + "_log.ldf', " +
"NOUNLOAD, REPLACE, STATS = 5"
Write-Host $dbCommand
OSQL.EXE -E -Q $dbCommand >> $logFile #this ouputs the SQL output into a logfile.
}
Answer: Functions
They must be declared before they are called. This script would fail if run from a clean session the first time. Just make sure you have functions declared before they are called. At the beginning of a script is a natural in PowerShell. I came from a VBS background so this was odd for me at first too.
Improved string concatenation
In your function
The format operator would be of great use here. Especially since you are putting the same variable in several places. Like in Quills answer use of here-string helps with the new lines. Note that is it not required you can just use and open and closing quotes to wrap the whole thing up.
$dbCommand = "RESTORE DATABASE [{0}]
FROM DISK = N'{1}'
WITH FILE = 1,
MOVE N'{0}'
TO N'{2}{0}.mdf',
MOVE N'{0}_log'
TO N'{2}{0}_log.ldf',
NOUNLOAD, REPLACE, STATS = 5" -f $newDBName, $backupFilePath, $sqlDatabaseLocation
One could argue that readability is reduced but I think this makes it clearer. It removed all the + used for concatenation. No more need for repeated variables in the string. Casting it as a [string] is also redundant but that is just being nitpicky.
For log files
PowerShell is forgiving when it comes to variable expansion in strings. Since you are just dealing with string variables you can just do something simple like this.
$logFile = "$backupsLocation$logsFolder$todayUK.txt"
You can also use the Combine static method from [system.io.path]. This has the feature of being more resilient to the presence and lack of backslashes. It is not simple string concatenation so it is a little more robust.
$logFile = [System.IO.Path]::Combine($backupsLocation, $logsFolder, "$todayUK.txt")
Both of these commands would produce the same output.
Simplify gathering files.
Move-Item has a passthru switch so that it will pass the moved item down the pipeline. No need to requery to get the files again.
$backupsLocationFiles = Get-ChildItem $sourceBackupDirectory -Filter "*.bak" -Recurse |
Move-Item -Destination $backupsLocation -PassThru
Loop Calling Function
The loop where you are calling the function has some room for improvement.
You are using $file and changing its meaning from a file object then turning it into a simple string. Not really a best practice. Does not matter since you don't even really need it.
File objects returned from Get-ChildItem have a basename property which is exactly what you are trying to get from [io.path]::GetFileNameWithoutExtension($file). While not what it was designed for Join-Path will also build the path for you without relying on string concatenation.
foreach ($file in $backupsLocationFiles){
$backupPath = Join-Path -Path $backupsLocation -ChildPath $file.Name
RestoreDB -newDBName $file.BaseName -backupFilePath $backupPath
} | {
"domain": "codereview.stackexchange",
"id": 18289,
"tags": "sql-server, powershell"
} |
Centripetal force equation doubt | Question:
In a centrifuge, $a_c$ should be constant. If $m$ increases, the $r$ will increase in order to maintain a constant $a_c$.
Constant centripetal acceleration is given by
$a_c={ v^2 \over r}$
and $a_c = \omega^2 r$
But the conflict between these two equations arises when we increase $r$.
In the first equation - if we increase $r$, $v$ should increase so that $a_c$ remains constant.
But in the second equation - since $\omega$ is constant in a centrifuge, if we increase $r$ then $a_c$ is no longer constant.
I don't understand how this is happening and would appreciate any help. I have also looked at other similar questions on this site, but they do not answer this conflict. This is not a duplicate.
Also, does anyone know how to mathematically show that radius increases when mass increases?
Answer: The misconception lies in what you define $a_c$ for. Imagine that you put two balls with mass $M$ and $m$ in the centrifuge ($M\gt m$). When the centrifuge reaches constant angular velocity $\omega$, let's say those balls are respectively $R$ and $r$ away from the centre of rotation. From observation we know that $R\gt r$.* Now let's calculate the centripetal acceleration of the two masses. $$M\Rightarrow a_c=R\omega^2$$ $$m\Rightarrow a_c=r\omega^2$$ This yields that the centripetal acceleration of $M$ is greater than that of $m$, which means that the sentence
In a centrifuge, $a_c$ should be constant
makes no sense.
*Physics concepts are based on observations; observations are not built from physics concepts.
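A one-line numeric check of the same point: at a fixed angular velocity $\omega$, $a_c=\omega^2 r$ scales with $r$, so it cannot be the same for balls at different radii (the numbers below are arbitrary):

```python
omega = 100.0                  # rad/s, shared by everything in the drum
a_inner = omega ** 2 * 0.05    # ball at r = 0.05 m
a_outer = omega ** 2 * 0.10    # ball at r = 0.10 m
print(a_inner, a_outer)        # the outer ball has twice the acceleration
```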
"domain": "physics.stackexchange",
"id": 82014,
"tags": "newtonian-mechanics, kinematics, acceleration, rotational-kinematics"
} |
Which target variable should I use? | Question: I have a problem where I want an LSTM to predict the resistance of a body. This value can also be calculated if we know the drag coefficient and the speed of that body. In my case, at inference time, the speed is known, meaning that I can do the following:
• predict the drag coefficient, and then calculate the resistance accordingly
• predict the resistance directly
Which one should I use as my learning target?
Answer: Interesting question!
Short (but maybe naive) answer
Experiment with both options and see which performs best!
Longer answer
predict the drag coefficient, and then calculate the resistance accordingly
If you do this, your network will try to optimize something different than your actual goal (which is the resistance). This means that your model will not "care" if the resistance you eventually calculate is any good, which can result in strange results.
predict the resistance directly
This would be better from a machine learning perspective as your model's goal will be the same as yours, however, you will lose the advantage that you have by knowing how the resistance is calculated.
Solution A
Predict both and then have a final step to decide what your final resistance will be. With LSTM, this is definitely possible, your target will just become 2 numbers instead of 1.
Solution B
The best solution, in my opinion, would be to have the LSTM output a single number (which would act as the drag coefficient), and then, add a layer which calculates the resistance using the known formula so that you can backpropagate on the entire thing, and you get the best of both worlds. In PyTorch this can be done rather elegantly. The big caveat is that the formula to calculate the resistance needs to be differentiable. | {
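A framework-free toy version of Solution B, assuming (purely for illustration) the drag equation $R = \tfrac{1}{2}\rho A c_d v^2$: the model's output $c_d$ is pushed through the known formula, and the gradient of the resistance loss flows back into $c_d$ by the chain rule.

```python
rho, A = 1.225, 2.0        # assumed constants (air density, frontal area)
v, R_target = 10.0, 100.0  # known speed and the measured resistance target

cd = 1.0                   # stand-in for the network's scalar output
lr = 1e-5
for _ in range(2000):
    R = 0.5 * rho * A * cd * v ** 2                          # "formula layer"
    loss = (R - R_target) ** 2                               # loss on resistance
    dloss_dcd = 2 * (R - R_target) * 0.5 * rho * A * v ** 2  # chain rule
    cd -= lr * dloss_dcd   # this gradient would flow on into the LSTM
```

In a real PyTorch model the formula layer would just be these arithmetic ops on a tensor, and autograd would supply the chain rule for you.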
"domain": "datascience.stackexchange",
"id": 7399,
"tags": "machine-learning, lstm, feature-selection"
} |
To which distance is punching shear reinforcement placed in a mat foundation? | Question: As I understand it, a thinner mat may be possible by providing a nominal amount of vertical reinforcement, compared to a mat without vertical reinforcement; the vertical reinforcement is used to resist punching shear at d/2 from columns or shear walls at foundation level.
Now my question is to what distance (or offset) from the column to stop placing these vertical reinforcement is it at a distance d/2 ?
Please if possible provide reference in ACI Code.
What I need is the critical section outside the slab shear reinforcement: at what distance from the column does it lie?
Answer: For a more complete discussion see: ACI 421.1R-08 - Guide to Shear Reinforcement for Slabs.
But in principle you need to continue to provide punching shear reinforcement until the shear capacity of the slab is sufficient to prevent punching shear at that perimeter.
From ACI 421.1R-08 this means:
You need to check at $d/2$ from the column face to determine if shear reinforcement is required. If it is then you need to continue to check perimeters at a distance of $\alpha d$ from the column face until you find that (in SI units)
$$\frac{\nu_u}{\phi} \leq \frac{0.17\lambda\sqrt{{f'}_c}}{2}$$
Then the outermost perimeter of shear reinforcement must be at least $\alpha d - \frac{d}{2}$ from the column face.
$\nu_u$ is maximum shear stress due to factored forces. $\phi$ is the strength reduction factor = 0.75. ${f'}_c$ is the specified concrete compressive strength. $\lambda$ is the modification factor for the reduced mechanical properties of lightweight concrete (=1 for normal weight concrete). $d$ is the effective depth of slab. | {
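For illustration, the stopping criterion above can be wrapped in a small helper (SI units, stresses in MPa; the sample values used in testing it are made up, not from ACI):

```python
import math

def can_stop_shear_reinforcement(v_u, f_c, phi=0.75, lam=1.0):
    # True once the factored shear stress v_u (MPa) at a checked perimeter
    # satisfies v_u / phi <= 0.17 * lam * sqrt(f'c) / 2
    return v_u / phi <= 0.17 * lam * math.sqrt(f_c) / 2
```

One would call this at successive perimeters $\alpha d$ from the column face until it returns True, then detail the outermost reinforcement perimeter accordingly.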
"domain": "engineering.stackexchange",
"id": 1422,
"tags": "structural-engineering, civil-engineering, geotechnical-engineering"
} |
Which algorithms have runtime recurrences like $T(n) = \sqrt{n}\,T(\sqrt{n}) + O(n)$? | Question: The algorithms using the "divide and conquer" (wiki) design strategy often have the time complexity of the form $T(n) = aT(n/b) + f(n)$, where $n$ is the problem size. Classic examples are binary search ($T(n) = T(n/2) + O(1)$) and merge sort ($T(n) = 2T(n/2) + O(n)$).
Do you know any algorithms (probably using "divide and conquer") that have the time complexity of the form $T(n) = \sqrt{n} \cdot T(\sqrt{n}) + O(n)$?
Answer: Think of an algorithm which does something linear with an integer list of length $n$, for example computes the maximum. Afterwards, the algorithm divides the list of length $n$ into $\sqrt{n}$ lists of length $\sqrt{n}$ and starts the algorithm on each of them. The result of the algorithm is, for example, the product of the computed maximum and the results of the $\sqrt{n}$ lists. For the base case, a list of length $1$, you can return the value of the only element.
This algorithm has the time complexity you asked for.
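A sketch of such an algorithm (assuming a non-empty integer list; when $n$ is not a perfect square the chunk sizes are only approximately $\sqrt{n}$):

```python
import math

def solve(xs):
    # Base case: a single element
    if len(xs) <= 1:
        return xs[0]
    m = max(xs)                  # the O(n) linear pass
    k = math.isqrt(len(xs))      # ~sqrt(n) chunks of size ~sqrt(n)
    prod = 1
    for i in range(0, len(xs), k):
        prod *= solve(xs[i:i + k])
    return m * prod              # combine: maximum times the sub-results
```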
"domain": "cs.stackexchange",
"id": 3564,
"tags": "algorithms, algorithm-analysis, reference-request, runtime-analysis, recurrence-relation"
} |
Frequency Modulation to arbitrary Audio signals | Question: I understand how FM synthesis works, but I was wondering how to FM arbitrary samples or part of samples. I.E any frame of audio samples.
Further more the modulator can itself be an arbitrary frame of samples itself.
How is this done in an algorithmic manner?
Answer: I have, in the past, used delay modulation on arbitrary audio signals, but the modulation waveform itself was not arbitrary; it was a sinusoid. But we could speed this modulation oscillator up to an audio rate.
Now that I think of it, I tried hooking that modulation oscillator up to a pitch detector. This was a quarter century ago, so I don't remember everything.
I s'pose you could derive that delay modulation signal from the input and likely LPF it before applying it to the instantaneous delay of the signal going out. | {
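A rough sketch of audio-rate delay modulation in Python (linear interpolation and zero padding outside the buffer are assumptions for illustration; `mod` can be an arbitrary signal in [-1, 1], which covers the "arbitrary modulator" case from the question):

```python
import math

def delay_modulate(x, mod, depth):
    # Read x at a position delayed by 0..depth samples, driven by mod[n]
    out = []
    for n in range(len(x)):
        pos = n - depth * (1 + mod[n]) / 2   # mod in [-1, 1] -> delay in [0, depth]
        i = math.floor(pos)
        frac = pos - i
        a = x[i] if 0 <= i < len(x) else 0.0
        b = x[i + 1] if 0 <= i + 1 < len(x) else 0.0
        out.append(a + frac * (b - a))       # linear interpolation
    return out
```

In practice one would low-pass filter `mod` first, as suggested above, to keep the instantaneous delay smooth.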
"domain": "dsp.stackexchange",
"id": 6965,
"tags": "audio, frequency-spectrum, frequency-modulation"
} |
Best practices for normalizing up training, validation, and test sets | Question: I was reading up on how to normalize my training, validation, and test sets for a neural network, when I read this snippet:
An important point to make about the preprocessing is that any
preprocessing statistics (e.g. the data mean) must only be computed on
the training data, and then applied to the validation / test data.
E.g. computing the mean and subtracting it from every image across the
entire dataset and then splitting the data into train/val/test splits
would be a mistake. Instead, the mean must be computed only over the
training data and then subtracted equally from all splits
(train/val/test).
(source: http://cs231n.github.io/neural-networks-2/)
Does this mean the following?
1. Split my training set T into training set T1 & validation set V1
2. Find the mean/var of T1: mean_T1, var_T1
3. Normalize T1, V1, and my testing set with mean_T1, var_T1.
4. Train & test accordingly...
Thanks...
Answer: Yes, that's what it means. Basically, mean_T1 and var_T1 become part of the model that you're learning. So, same as you'd apply machine learning to the training set to learn a model based on the training set, you'll compute the mean and variance based on the training set. | {
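In code, the recipe looks like this (a pure-Python sketch; a library scaler such as scikit-learn's StandardScaler fit on T1 only does the same job):

```python
def fit_scaler(rows):
    # Compute per-feature mean and std from the TRAINING split only (T1)
    n = len(rows)
    cols = list(zip(*rows))
    mu = [sum(c) / n for c in cols]
    sd = [(sum((v - m) ** 2 for v in c) / n) ** 0.5 for c, m in zip(cols, mu)]
    return mu, sd

def apply_scaler(rows, mu, sd):
    # Apply the SAME statistics to train, validation, and test splits
    return [[(v - m) / s for v, m, s in zip(r, mu, sd)] for r in rows]

t1 = [[1.0, 10.0], [3.0, 30.0]]   # training split T1
v1 = [[2.0, 20.0]]                # validation split V1
mu, sd = fit_scaler(t1)           # statistics learned from T1 only
t1_n = apply_scaler(t1, mu, sd)
v1_n = apply_scaler(v1, mu, sd)
```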
"domain": "cs.stackexchange",
"id": 14745,
"tags": "machine-learning, neural-networks, statistics"
} |
Reality of the Wavefunction - Complex numbers and Degrees of Freedom in Configuration space | Question: On the "reality" of the wavefunction, there seem to be two schools of thought on why treating $\psi$ as something more than a mathematical tool is erroneous:
$\psi$ involves complex numbers. Only Real numbers correspond to measurable quantities.
$\psi$ in configuration space has more degrees of freedom than physical space, therefore cannot correspond to physical reality.
My question is as follows:
There's nothing magical or special about $i$. Complex numbers are just as "real" as Real numbers. Both are components of our logical system of computation and together define the number plane - why are physical measurements limited to corresponding to only 50% of the number plane?
This is a follow-on to the question raised in "Reality" of EM waves vs. wavefunction of individual photons - why not treat the wave function as equally "Real"?
Answer:
why are physical measurements limited to corresponding to only 50% of the number plane?
This makes no sense: the real line is a set of measure zero in the complex plane. It does not represent 50% of it.
If you want to go that way, I would say that $\mathbb R^n$ is "as real" as $\mathbb R$ anyway, so why restrict to $\mathbb C=\mathbb R^2$? why not to use $\mathbb R^3, \mathbb R^4,\cdots$?
The reason to use a complex wave-function is that it allows us to efficiently model the fact that Nature seems to add amplitudes, not probabilities. The physics of this are very well exemplified by the double slit experiment: try to think about that experiment without math. Now try to make up a mathematical model that reproduce the observed characteristics of the experiment. You should convince yourself that complex numbers do the work better than anything else, and this is the reason we want to introduce them.
After all, QM is just a mathematical model that is intended to reproduce observed phenomena. We use the model that works best, and standard QM is the best model we could find. It uses complex number as a mathematical tool, but physics is about modeling Nature, and not about finding the deep internal gears that make Nature work: reality is not mathematical. Nature doesn't work with complex numbers; we humans use them to model Nature. | {
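The "add amplitudes, not probabilities" point from the double-slit discussion, in a few lines:

```python
import cmath

def intensity(phi):
    # Two unit-amplitude paths with relative phase phi
    a1 = 1.0
    a2 = cmath.exp(1j * phi)
    return abs(a1 + a2) ** 2   # add amplitudes first, then square

# Adding probabilities instead would give |a1|**2 + |a2|**2 = 2 everywhere,
# with no interference fringes at all.
```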
"domain": "physics.stackexchange",
"id": 36078,
"tags": "quantum-mechanics, wavefunction, complex-numbers"
} |
Double bond or carbanion for nucleophillic attack | Question: I came across this question:
According to me the Hydride ion should take hydrogen from left (unsubstituted carbon) because carbanion will be more stable there and this carbanion will then act as a nucleophile and attack $\ce{CH3I}$ giving product (b).
However according to the answer, hydride ion takes hydrogen from the right (Substituted carbon) and then enol form is made and then the double bond attacks $\ce{CH3I}$. Why so?
Answer: First, there is a little mistake in your reasoning. You said that the carbanion is more stable on the unsubstituted side but actually, forming an enolate on the more substituted side will give a more stable substituted alkene. Remember, the negative charge is not concentrated on carbon. It is concentrated on the oxygen atom as the system is conjugated. That is why electrostatic interactions tend to occur on the oxygen while orbital (HOMO-LUMO) interactions occur on the carbon.
Now, an asymmetric ketone can form two regioisomeric enolates, kinetic and thermodynamic:
The kinetic enolate is the one that forms the quickest. Often, it is promoted by LDA, which is a strong but hindered base, at low temperatures, typically $\pu{-78^\circ C}$. Because it is more hindered, it will attack the less hindered and more accessible alpha hydrogen and form the less substituted enolate.
On the other hand, thermodynamic enolates tend to form at higher temperatures with unhindered bases like $\ce{NaH}$. | {
"domain": "chemistry.stackexchange",
"id": 15814,
"tags": "organic-chemistry, carbonyl-compounds, halides, nucleophilic-substitution"
} |
Newton’s third law | Question:
We have boxes, and we pick up a piece of wood and put it between our hand and the box. We start pushing the wood with a 20 N force. This force is transferred from the wood to the box, and the box returns 20 N to the wood (Newton's third law). So we have 20 N from left to right and 20 N from right to left acting on box 1. Why, then, does the wood (box 1) start moving? (You can use any force instead of 20 N.) (Please check the image.)
Answer: You can't say that the contact force has a magnitude of 20 N.
To find the contact force, first find the acceleration of the system, which will be $\frac{\text{net external force}}{\text{net mass}}$. After this, you can draw the individual free-body diagrams to find the contact force. This I leave to you.
The major error was directly stating that the first block exerts 20 N on the second block.
"domain": "physics.stackexchange",
"id": 83587,
"tags": "newtonian-mechanics, forces, free-body-diagram"
} |
What is the explanation for bright lines in light-diffraction by a straight edge? | Question: I’m just studying diffraction by a straight edge.
I did not find any explanation for the origin of the bright lines l' and l". The line l' is visible in relation to the laser beam at an angle >30°, while the line l" is visible up to 180°. This can be seen if a circular screen is placed instead of a flat screen.
In this regard, I am interested in the following questions:
How is the origin of bright lines l' and l” interpreted?
Are they just a continuation of the diffraction pattern f or is l' created by diffraction and l" by the reflection of the incident light beam?
How is it that the line l" stretches (bends) at such a wide angle (180°)? Maybe partly diffraction and partly reflection?
What is their usual name? Can it be said that these are diffraction lines?
Answer: The incident EM wave has a plane wavefront, and within the extent of the beam it *induces* an essentially homogeneous current along the edge of the obstacle. That edge current, in turn acting as an antenna, radiates a cylindrical wave. The two waves, the incident plane wave and the indirectly induced cylindrical wave, interfere; the result is that the field extends into the geometrical-optics shadow with a fluctuating amplitude, as is shown below in @Farcher's intensity diagram.
"domain": "physics.stackexchange",
"id": 78897,
"tags": "visible-light, diffraction"
} |
sync between disparity image and rectified image | Question:
Hi,
I'm running stereo_image_proc to get rectified images as well as disparity images. And then I have a bunch of processing nodes that work with these. For a particular application, I need the pair of them together (rectified image and the corresponding disparity image).
Now since these are both published on individual topics, due to whatever reason (latency, processing etc.) sometimes they go out of sync. And so I have some code to read the headers and decide if these are the same pair or not. This seems not ideal to me.
Is there a better way to get a rectified image and the corresponding disparity image or point cloud message "together" somehow? Does image_transport have something for this?
Thank you.
Originally posted by 2ROS0 on ROS Answers with karma: 1133 on 2017-04-20
Post score: 0
Answer:
Google: "ros topic synchronize" -> message_filters
Originally posted by NEngelhard with karma: 3519 on 2017-04-20
This answer was ACCEPTED on the original site
Post score: 1 | {
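The core idea behind message_filters' ApproximateTimeSynchronizer (pair up messages whose header stamps fall within a "slop" window) can be sketched without ROS; the function below is an illustration of the matching idea, not the actual API:

```python
def pair_by_stamp(a_msgs, b_msgs, slop):
    # a_msgs, b_msgs: lists of (timestamp, payload), sorted by timestamp
    pairs = []
    j = 0
    for ta, pa in a_msgs:
        # advance j while the next b message is at least as close to ta
        while j + 1 < len(b_msgs) and abs(b_msgs[j + 1][0] - ta) <= abs(b_msgs[j][0] - ta):
            j += 1
        if b_msgs and abs(b_msgs[j][0] - ta) <= slop:
            pairs.append((pa, b_msgs[j][1]))   # a matched (rectified, disparity) pair
    return pairs
```

This replaces the manual header comparison from the question with a single pass over both streams.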
"domain": "robotics.stackexchange",
"id": 27673,
"tags": "stere-image-proc"
} |
Daily stand-up list | Question: Like most teams, we take turns to speak in our daily stand-ups. When we gather around the sprint board, it's easy to know who speaks when - we start at one end and pass the role along the line.
At present, we're all working remotely, so there's no physical ordering that we would agree on. And we would get tired of using the same order every day, leading to diminished engagement. So we start the meeting by publishing the day's speaking sequence. I started with a random shuffle of the team members, but soon realised it would be better if any of us could generate the list, and we'd all get the same result. So I now use the date as a seed for shuffling.
I use a MD5 hash to quickly spread the entropy all around the random word (just using date +%F unmodified resulted in the same sequence every day, because all the variation is near the end), and xxd to convert its hexadecimal output into raw bytes.
#!/bin/bash
set -eu -o pipefail
team=(Alice Bob Charlie Dan Erin Frank)
# Header for today
date +%A:%n
# Team members in today's order
printf '%s\n' "${team[@]}" |
shuf --random-source <(date +%F|md5sum|xxd -r -p)
Answer: Some suggestions:
#!/usr/bin/env bash is a more portable shebang line.
I would use long options for set and xxd for readability. So set -o errexit -o nounset -o pipefail and xxd -revert -plain.
If you quote the array members you can easily use full names.
In case any remote workers are in another time zone offset you'll want to use a fixed one, such as TZ=Pacific/Auckland date +%F. If you want to be on the safe side, get the time zone where your meeting time is midday at that offset (in the same hemisphere if possible, to avoid a 2 hour offset half of the year). That way, anyone running this code within 12 hours of the meeting time will get the same result.
The output of md5sum is already completely scrambled. Passing it to xxd doesn't make the randomness input any more random. As mentioned in the comments it may be that shuf only uses a very small part of that randomness, in which case xxd is justified. Unfortunately man shuf doesn't go into any detail about --random-source.
You can randomize arguments to shuf: shuf --echo --random-source <(date +%F|md5sum|xxd -r -p) -- "${team[@]}" achieves the same without printf.
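As an aside (my own sketch, not part of the original review): the "anyone can regenerate today's list" idea is easy to prototype in Python as well; the team names and the fixed date string below are purely illustrative.

```python
import hashlib
import random

def todays_order(team, iso_date):
    """Deterministic daily shuffle: everyone who runs this with the same
    date string gets the same speaking order."""
    # Hash the date so consecutive days yield unrelated seeds, mirroring
    # the `date +%F | md5sum` trick in the shell version.
    seed = int(hashlib.md5(iso_date.encode()).hexdigest(), 16)
    order = list(team)
    random.Random(seed).shuffle(order)
    return order

team = ["Alice", "Bob", "Charlie", "Dan", "Erin", "Frank"]
print(todays_order(team, "2020-05-01"))
```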
--random-source=<(…) would make it obvious that it is a key/value pair rather than a flag followed by a file argument. | {
"domain": "codereview.stackexchange",
"id": 40549,
"tags": "bash, shuffle"
} |
What are the units of P and V when R is expressed in terms of J/(mol K)? | Question: I have learnt the ideal gas equation $PV=nRT$. However, I have some doubts regarding the units of the quantities in this equation.
When we take the gas constant $R=8.314\ \mathrm{J\ mol^{-1}\ K^{-1}}$, what should be the units of $P$ and $V$?
Answer: The units for pressure $P$ are pascals ($\mathrm{Pa}$) and volume $V$ is metres cubed ($\mathrm m^3$). The proof can be found by the derivation provided from the University of Waterloo's page The Ideal Gas Law, reproduced below:
Rearrange the formula to make $R$ the subject, hence:
$$R = \frac{PV}{nT}$$
Now rewrite the Ideal Gas Law in terms of its units:
$$\mathrm{J\ mol^{-1}\ K^{-1}} = \frac{\mathrm{Pa\ m^3}}{\mathrm{mol\ K}}$$
or
$$\mathrm{J\ mol^{-1}\ K^{-1}} = \mathrm{Pa\ m^3\ mol^{-1}\ K^{-1}}$$
Given that $\mathrm{Pa} = \mathrm{N\ m^{-2}}$, rewrite that into the equation:
$$\mathrm{J\ mol^{-1}\ K^{-1}} = \mathrm{N\ m^{-2}\ m^3\ mol^{-1}\ K^{-1}}$$
Now the term $\mathrm{N\ m^{-2}\ m^3}$ can be simplified to $\mathrm{N\ m} = \mathrm J$, as work (unit $= \mathrm J$) = force × distance ($\mathrm{N\ m}$)
Hence, the right hand side becomes $\mathrm{J\ mol^{-1}\ K^{-1}}$ as required | {
"domain": "chemistry.stackexchange",
"id": 800,
"tags": "gas-laws, units"
} |
Why $F = m(v_f - v_0)/2$? | Question: Force is directly proportional to mass and velocity and inversely proportional to time so why don't we write $F=1/t+m+v-v_0$ where $m$ is mass, $v$ is final velocity, and $v_0$ is initial velocity?
Answer:
Force is directly proportional to mass and velocity and inversely proportional to time so why don't we write $F=1/t+m+v-v_0$
As others mentioned, the units don’t work. However, suppose we modify it to $$F=k_t/t+k_m m+k_v(v-v_0)$$ where the various $k$ are constants with appropriate dimensions that make each term a force.
Now, you have an equation that is dimensionally consistent. However, it is not directly proportional to mass and velocity and inversely proportional to time. If you double $m$ then according to your formula you do not double $F$. Instead you get $$k_t/t+k_m 2m+k_v(v-v_0) \ne 2 F$$
For $F$ to be proportional to $m$ means $F=km$, and similarly with the other factors.
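A trivial numeric illustration of that distinction (my own sketch; the constants and values are arbitrary):

```python
def additive(m, t, dv, k_t=1.0, k_m=1.0, k_v=1.0):
    # The (dimensionally patched) additive formula from above
    return k_t / t + k_m * m + k_v * dv

def proportional(m, t, dv):
    # Average force from the impulse-momentum theorem: F = m * (v - v0) / t
    return m * dv / t

m, t, dv = 2.0, 3.0, 5.0
# Doubling the mass doubles a force that is proportional to mass...
assert proportional(2 * m, t, dv) == 2 * proportional(m, t, dv)
# ...but not one that merely contains an additive mass term.
assert additive(2 * m, t, dv) != 2 * additive(m, t, dv)
```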
Also, one nitpick. Force is not proportional to velocity but the average force is proportional to the change in velocity. Those are slightly different statements. | {
"domain": "physics.stackexchange",
"id": 92876,
"tags": "newtonian-mechanics, forces, mass, acceleration"
} |
Python - Normalized cross-correlation to measure similarites in 2 images | Question: I'm trying to measure per-pixel similarities in two images (same array shape and type) using Python.
In many scientific papers (like this one), normalized cross-correlation is used. Here's an image from the ict paper showing the wanted result:
(b) and (c) are the 2 input images, and (d) is the per-pixel confidence.
I only used OpenCV before to do template matching with normalized cross correlation using cv2.matchTemplate function, but in this case it seems to be a really different use of cross correlation.
Do you know if I can approach this result using Python and image-processing libraries (NumPy, OpenCV, SciPy, etc.), and the logic behind this use of cross-correlation?
Thanks.
Answer: I guess you can compute for each pixel the correlation coefficient between patches centered on this pixel in the two images of interest. Here is an example where I downloaded the figure attached here and tried to compute the correlation in such a way. The output looks different from the one of the article, but it was to be expected since the resolution is very different.
from skimage import io, feature
from scipy import ndimage
import numpy as np
def correlation_coefficient(patch1, patch2):
product = np.mean((patch1 - patch1.mean()) * (patch2 - patch2.mean()))
stds = patch1.std() * patch2.std()
if stds == 0:
return 0
else:
product /= stds
return product
im = io.imread('faces.jpg', as_grey=True)  # note: newer scikit-image versions spell this as_gray
im1 = im[16:263, 4:146]
sh_row, sh_col = im1.shape
im2 = im[16:263, 155:155+sh_col]
# Registration of the two images
translation = feature.register_translation(im1, im2, upsample_factor=10)[0]  # moved to skimage.registration.phase_cross_correlation in newer scikit-image
im2_register = ndimage.shift(im2, translation)
d = 1
correlation = np.zeros_like(im1)
for i in range(d, sh_row - (d + 1)):
for j in range(d, sh_col - (d + 1)):
correlation[i, j] = correlation_coefficient(im1[i - d: i + d + 1,
j - d: j + d + 1],
im2[i - d: i + d + 1,
j - d: j + d + 1])
io.imshow(correlation, cmap='gray')
io.show()
The code above is a naive and slow implementation of the correlation, as the two for loops are very slow. For faster execution, you could for example port the script to Cython.
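Alternatively (my own sketch, not from the original answer), the per-pixel windowed correlation can be vectorized with box filters from scipy.ndimage, avoiding the Python loops entirely:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def windowed_correlation(im1, im2, d=1):
    """Per-pixel correlation coefficient over (2d+1)x(2d+1) windows,
    computed with running box averages instead of explicit loops."""
    im1 = np.asarray(im1, dtype=float)
    im2 = np.asarray(im2, dtype=float)
    size = 2 * d + 1
    m1 = uniform_filter(im1, size)
    m2 = uniform_filter(im2, size)
    # E[xy] - E[x]E[y], and the per-window variances
    cov = uniform_filter(im1 * im2, size) - m1 * m2
    v1 = uniform_filter(im1 * im1, size) - m1 * m1
    v2 = uniform_filter(im2 * im2, size) - m2 * m2
    denom = np.sqrt(np.clip(v1, 0, None) * np.clip(v2, 0, None))
    out = np.zeros_like(cov)
    np.divide(cov, denom, out=out, where=denom > 1e-12)
    return out
```

Unlike the loop version, this also evaluates the window near the borders (scipy's default edge reflection), so values there differ slightly from the naive code.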
In the article, I think the idea is to measure whether face expressions look similar or not. If a pixel has a large correlation index between two images, it means that the region of the face where this pixel is located does not change much between the images.
If you use this method on good-resolution images, you should increase the patch size for more accurate results (d=2 or 3). | {
"domain": "dsp.stackexchange",
"id": 7432,
"tags": "python, opencv, image-processing, cross-correlation"
} |
Hamiltonian written using Pauli matrices for a two-level system | Question: I have the following homework question:
Two electrons are tightly bound to different neighboring sites in a certain solid. They are, therefore, distinguishable particles which can be described in terms of their respective Pauli spin matrices $\sigma^1$ and $\sigma^2$. The Hamiltonian of these electrons takes the form:
$$H=-J(\sigma_x^1 \sigma_x^2 + \sigma_y^1 \sigma_y^2 )$$
where J is a constant.
How many energy levels does the system have?
Now the solution given to the homework starts by rewriting the Hamiltonian as follows:
$$H=-J/2 ( ( \sigma^1 + \sigma^2 )^2 - (\sigma^1)^2 - (\sigma^2)^2 - (\sigma_z^1 + \sigma_z^2 )^2 + (\sigma_z^1)^2 + (\sigma_z^2)^2 )$$
The solution seems to assume the logic leading from the way the Hamiltonian is formulated initially to this second expression is obvious. I can't honestly see how they got from the way the Hamiltonian was formulated initially to that second expression, perhaps because I'm not clear on what it means to rewrite the Hamiltonian in terms of Pauli matrices as they have done. What am I missing here? How did they convert the first expression into the second one?
Answer: (I could be wrong but...) this seems like just clever rewriting with somewhat obscure notation. For instance
$$
(\sigma^1+\sigma^2)^2=(\sigma^1)^2+(\sigma^2)^2+2\sigma^1\sigma^2
$$
while
\begin{align}
(\sigma^1)^2&=(\sigma^1_x)^2+(\sigma^1_y)^2+(\sigma^1_z)^2\, ,\tag{1}\\
\sigma^1\sigma^2&=\sigma_x^1\sigma_x^2+\sigma_y^1\sigma_y^2+\sigma_z^1\sigma_z^2\tag{2}
\end{align}
You can see that (2) contains bits of your $H$ so if you add and subtract smartly terms in this way you should land on your feet.
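As a quick numerical sanity check (my own addition, using explicit 4×4 matrices in numpy): with the relative signs $-(\sigma_z^1+\sigma_z^2)^2 + (\sigma_z^1)^2 + (\sigma_z^2)^2$ on the $\sigma_z$ terms, the rewritten expression reproduces $-J(\sigma_x^1\sigma_x^2+\sigma_y^1\sigma_y^2)$ exactly:

```python
import numpy as np

# Single-qubit Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def on1(m):  # operator acting on electron 1
    return np.kron(m, I2)

def on2(m):  # operator acting on electron 2
    return np.kron(I2, m)

J = 1.0
H = -J * (on1(sx) @ on2(sx) + on1(sy) @ on2(sy))

# (sigma1 + sigma2)^2, (sigma1)^2, (sigma2)^2 and the sigma_z combinations
total_sq = sum((on1(m) + on2(m)) @ (on1(m) + on2(m)) for m in (sx, sy, sz))
sq1 = sum(on1(m) @ on1(m) for m in (sx, sy, sz))   # = 3 * identity
sq2 = sum(on2(m) @ on2(m) for m in (sx, sy, sz))   # = 3 * identity
Sz = on1(sz) + on2(sz)

H2 = -J / 2 * (total_sq - sq1 - sq2
               - Sz @ Sz + on1(sz) @ on1(sz) + on2(sz) @ on2(sz))

assert np.allclose(H, H2)
print(np.round(np.linalg.eigvalsh(H), 10))  # three distinct levels: -2J, 0 (twice), +2J
```

The eigenvalue printout also answers the "how many energy levels" part: three distinct levels, with the middle one doubly degenerate.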
There is an implicit “vector” notation here in that $\sigma^1$ is the “vector” $\vec\sigma^1=(\sigma_x^1,\sigma_y^1,\sigma_z^1)$ so that
$(\sigma^1)^2$ is $\vec\sigma^1\cdot\vec \sigma^1$ as per (1) while something like $\sigma^1\sigma^2$ is $\vec\sigma^1\cdot\vec\sigma^2$ as per (2) | {
"domain": "physics.stackexchange",
"id": 44011,
"tags": "quantum-mechanics, homework-and-exercises, quantum-spin, hamiltonian-formalism, notation"
} |
Does PETSc really give speedup? | Question: I searched for linear solver libraries and found out about the PETSc library, which is considered to be a powerful and useful library. PETSc consists of implementations of various iterative methods with preconditioners and sparse-matrix storage methods. All the methods are implemented both sequentially and in parallel using MPI.
I was very glad for the creators of PETSc. I downloaded and then installed it. However, when I started reading user's guide I encountered the following text:
PETSc should not be used to attempt to provide a “parallel linear solver” in an otherwise sequential
code. Certainly all parts of a previously sequential code need not be parallelized but the matrix
generation portion must be parallelized to expect any kind of reasonable performance. Do not expect
to generate your matrix sequentially and then “use PETSc” to solve the linear system in parallel.
I was surprised! Did the PETSc developers really parallelize only the matrix-generating part? What is the benefit of using PETSc as a parallel solver if the linear-system-solving part runs sequentially?
Answer: You misread the text. The authors of PETSc are just telling you that you can't avoid Amdahl's law.
They have done their best to parallelize every aspect of the linear solver. But a real program is not just a call to a linear solver. First you generate a matrix and then you pass the matrix to the linear solver. If your matrix generator is slow, your whole program will be slow.
For example, suppose your original program spends 1000 seconds generating the matrix $A$ and vector $b$ and then you call a linear solver. Your old (sequential) linear solver took 1000 seconds to find $x$ such that $Ax = b$. Now you replace your old sequential linear solver with PETSc. Suppose the PETSc authors did such a good job that the PETSc parallel linear solver finds $x$ in just 1 second! Now how long does it take your program to run? 1001 seconds. You got less than 2x speedup! You need to do some work on your matrix generation code if you want to get a better speedup.
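That arithmetic is just Amdahl's law; here is a tiny sketch of it (my own illustration):

```python
def total_runtime(serial_s, parallel_s, solver_speedup):
    """Amdahl's law: only the part you actually parallelized gets faster."""
    return serial_s + parallel_s / solver_speedup

before = total_runtime(1000, 1000, 1)     # old sequential solver: 2000 s total
after = total_runtime(1000, 1000, 1000)   # 1000x faster solver:   1001 s total
print(before / after)                     # overall speedup is only ~2, not 1000
```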
Pretty much the authors of PETSc are just telling you that a parallelized linear solver library is not a magic bean. | {
"domain": "cs.stackexchange",
"id": 1499,
"tags": "parallel-computing, linear-algebra"
} |
Moravec, Harris noisy window | Question: Harris and Stephens write about the interest window of Moravec:
"The response is noisy because the window is binary and rectangular", and suggests applying a Gaussian window.
My Question: Why is the response not noisy after applying a Gaussian window? Is it because a Gaussian filter removes noise from a picture, or because a circular window is better for sampling somehow?
Thank you.
Answer: I've added a link to the paper you're quoting in the question.
The issue is that both the Moravec corner detector, and the change to it suggested first in that paper:
use the derivative of the image. ANY difference operation will tend to enhance higher frequency noise in the image. The idea with the Gaussian smoother is to reduce this effect. The two operations (Gaussian smoothing and difference operation) are often combined into a single operation: the difference of Gaussians.
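To see the noise issue concretely, here is a small 1-D illustration (my own sketch, not from the paper): a plain finite difference of a noisy ramp is dominated by the noise, while a derivative-of-Gaussian filter (smoothing and differencing in one kernel) recovers the underlying slope much more cleanly.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(0)
x = np.arange(500, dtype=float)
noisy_ramp = 0.5 * x + rng.normal(0.0, 1.0, x.size)  # underlying slope is 0.5

raw = np.diff(noisy_ramp)                                 # plain differencing
smooth = gaussian_filter1d(noisy_ramp, sigma=3, order=1)  # derivative of Gaussian

# Away from the borders, the smoothed estimate is far less noisy.
print(raw[20:-20].std(), smooth[20:-20].std())
```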
As the paper suggests, the anisotropy of the Moravec detector is why the differential is used, rather than the up, down, and diagonal differences in the Moravec algorithm. The smoothing is introduced to counter the noise issue. | {
"domain": "dsp.stackexchange",
"id": 3322,
"tags": "local-features, detection, corners, harris"
} |
What's wrong in my approach towards solving this problem? | Question: This is the problem that I have to solve:
The figure shows a block $A$ of mass $6m$ having a smooth semicircular groove of radius $a$, placed on a smooth horizontal surface. A block $B$ of mass $m$ is released from a position in the groove where its radius is horizontal. Find the speed of the bigger block when the smaller block reaches its bottommost position.
My solution: let $v_1$ be the speed of the smaller block relative to the bigger block when it reaches the bottom-most position, and at this instant let $v_2$ be the speed of the bigger block.
Then by conservation of linear momentum in horizontal direction we get $6mv_2 = m(v_1 - v_2)$.
Now by applying work-energy theorem on the smaller block relative to the bigger block, we get
$$ mga = \frac{1}{2}mv_1^2$$
$$ \implies v_1 = \sqrt{2ga}$$
So by substituting $v_1$ in the linear momentum equation we get $$v_2 = \frac{\sqrt{2ga}}{7} $$
The correct answer given is $v_2 = \sqrt{\frac{ga}{21}} $. I don't know why my approach is wrong. Can anyone point out what is wrong? I think that I didn't include all the work done in the work-energy theorem as the reference frame of the bigger block is non-inertial.
Answer:
Now by applying work-energy theorem on the smaller block relative to the bigger block
This is the problem. You used the big block as the reference frame; however, the big block itself is accelerated. This complicates the problem. You can't apply the work-energy theorem as you did because there would be another fictitious force acting on the small block.
The better approach is to use the table as the reference frame. Define $v_1$ and $v_2$ as the horizontal speeds of the small and big blocks respectively with respect to the table. In this case, the energy and momentum conservation consideration gives respectively,
\begin{align}
\frac{1}{2}m v_1^2 + \frac{1}{2}(6m)v_2^2 &= mga, \\
mv_1 &= 6mv_2,
\end{align}
where the first equation says that the gravitational potential is transformed to be the kinetic energies of the small and big block. Solving the equations, you will get the correct answer. | {
"domain": "physics.stackexchange",
"id": 60579,
"tags": "homework-and-exercises, newtonian-mechanics, momentum, energy-conservation"
} |
Why do bumble bees run repeatedly into walls? | Question: I have noticed on dozens of occasions that bumble bees will run repeatedly into a wall. Usually between 1 - 5 times before clearing the obstacle or flying away.
Credit: Spine Films
This source suggests that this might be due to "unpredictable, unsteady airflow ... [that results in them being] buffeted back and forth by wind."
However, I've seen this occur in very clear, calm weather.
Further, I do not see other genera of bees colliding into things with anywhere near the frequency of bumble bees.
So what's going on here?
(FYI: their eyes, though primitive, are not likely the reason. Nor is their intelligence).
Answer: Crall et al., 2015, studied bumblebee collision avoidance (note the senior authors are the same from your link). It seems that, contrary to popular belief and experience, bumblebees are actually quite good at avoiding collisions, but they are also in an environment that has a lot of obstacles, so collisions still happen frequently.
There is a lot of interesting content in that paper, but one of the interesting discussions was about types of collision avoidance by body size. The authors note that for larger animals, collision avoidance strategies tend toward reversal/slowing down to reduce impact, whereas in smaller animals, strategies tend toward direction-change. This might be because collisions at high speeds are more dangerous for larger animals, so slowing down is the priority. Accelerating in a lateral or vertical direction, instead, would result in a full-force collision if the acceleration was insufficient to avoid the object (I have to admit, though, your gif doesn't seem to show any deceleration).
Larger individuals in their study performed more slowly on their obstacle course than smaller ones, but it seemed like all the bees were pretty good at avoiding obstacles.
Based on this study, I don't know that we can conclude definitively why bumblebees repeatedly run into walls, but given how well they navigate an obstacle course like this I am guessing it isn't simply because they are incapable, but rather has to do with a few things:
Walls are somewhat unexpected obstacles; bumble bees are probably best evolved for navigating around foliage that has a limited horizontal size, and once headed home they are expecting a direct route to be most efficient. A good transit strategy in that case would be to fly slightly laterally in the event of a collision to find a new opening, but not deviate too much from the original path of travel.
Speed/collision tradeoff. Getting home fast can have some obvious advantages because it increases the effective foraging potential. If collisions can be mitigated or aren't damaging, it might make sense to fly at high speed and just deal with the collisions as they come.
Bumblebees are big. Therefore, it's harder for them to stop, and they might use a strategy for collision avoidance that reduces impacts rather than avoids impacts altogether.
Cognitive biases in humans. Bumblebees make quite the racket when they run into things. You're going to hear it each and every time if you are nearby. The same isn't necessarily true for other bees or other insects.
References
Crall, J. D., Ravi, S., Mountcastle, A. M., & Combes, S. A. (2015). Bumblebee flight performance in cluttered environments: effects of obstacle orientation, body size and acceleration. Journal of Experimental Biology, 218(17), 2728-2737. | {
"domain": "biology.stackexchange",
"id": 7372,
"tags": "zoology, entomology, ecology, ethology"
} |
What does HClO2 aqueous solution decompose to in an open environment? | Question: I looked up the decomposition of diluted $\ce{HClO2}$ aqueous solution (used in a disinfectant) to find out whether it is safe to use on some substances, such as metal, wood, leather or skin. However, the chain I found leads to $\ce{HClO4},$ which is a very strong acid.
$$
\begin{align}
\ce{5 HClO2 &→ 4 ClO2 + HCl + 2 H2O}\\
\ce{4 HClO2 &→ 2 ClO2 + HClO3 + HCl + H2O}\\
\ce{3 HClO2 &→ 2 HClO3 + HCl}\\
\ce{2 HClO2 &→ HOCl + HClO3}\\
\ce{HClO2 &→ HCl + O2}
\end{align}
$$
Then there is
$$\ce{3HClO3 → 2ClO2 + HClO4 + H2O}$$
(Source: Inorganic Chemistry by Garg and Singh [1], Google book preview)
The $\ce{HClO2}$ solution is diluted (I do not know the exact number, but maybe similar to mild bleach), sprayed to substances' surface, at room or body temperature.
Can the balances in this case lead to $\ce{HClO4}?$
On the other hand, $\ce{HCl}$ recombines to make $\ce{Cl2}$:
$$
\begin{align}
\ce{HClO2 + 3 HCl &→ 2 Cl2 + 2H2O}\\
\ce{HOCl + HCl &→ Cl2 + H2O}
\end{align}
$$
So, can the remaining $\ce{HCl}$ be concentrated enough to corrode metal or cause harms to skin?
From a consumer's point of view, $\ce{HClO2}$ seems to be a complete disinfectant for all of water, surfaces (by producing $\ce{HOCl}$), and air (by producing $\ce{ClO2}$). However, the issue left for concern is the byproduct acids.
Updates: I found a list of patents about producing $\ce{HClO2}$ solutions to be used as disinfectants, which are likely the one I am talking about in this question. The patents claimed that the disinfectants are safe to human, can be used to clean food-processing utilities, and even as food additives. The exact reasons for the claims are not clear to me.
https://patents.google.com/patent/US9516878B2/en
https://patents.google.com/patent/AU2013205834B2/en
https://patents.google.com/patent/EP2999490A2/en
https://patents.google.com/patent/US20160106106A1/en
https://patents.google.com/patent/US20160113282A1/en
References
Garg, R.; Singh, R. Inorganic Chemistry; McGraw-Hill Education: New Delhi, 2015. ISBN 978-1-259-06285-8.
Answer: Per Wikipedia on Chlorous acid to quote:
The pure substance is unstable, disproportionating to hypochlorous acid (Cl oxidation state +1) and chloric acid (Cl oxidation state +5):
$\ce{2 HClO2 → HClO + HClO3}$
Also, here are some interesting comments on the decomposition of chlorous acid in "Kinetics and Mechanism of the Decomposition of Chlorous Acid" from J. Phys. Chem. A 2003, 107, pages 6966-6973, to quote:
Of the many mechanistic models tested, the one that fit best included the following reactive intermediates: HOCl, Cl2O2, Cl2O3, •ClO, •OH. The stoichiometric ratio of ClO2 produced to Cl(III) consumed varies with pH and [Cl-]. Reaction of Cl2O3 with Cl(III) yields chlorate exclusively. Reaction of Cl2O3 with Cl- favors ClO2 over chlorate, but does not entirely exclude chlorate, because it is produced by hydrolysis of Cl2O2. Invoking Cl2O3 explains the variation in stoichiometric ratio as well as the maximum observed in the initial rate of ClO2 formation as a function of pH. The kinetics of chlorous acid decomposition cannot be quantitatively fit through the last stages of the reaction without postulating a first-order decomposition. Scission of chlorous acid to give short-lived hydroxyl and chlorine-(II) monoxide is a plausible route for this process [...]
Several groups of investigators[5-7] have found
that in the absence of chloride ion the stoichiometry of the
decomposition of chlorous acid is given by reaction A:
$\ce{4 HClO2 -> 2 ClO2 + ClO3- + Cl- + 2 H+ + H2O}$ (A)
The stoichiometry of the decomposition of chlorous acid in the
presence of chloride ion is given by reaction B:
$\ce{5 HClO2 -> 4 ClO2 + Cl- + H+ + 2 H2O}$ (B)
Also, to quote:
Earlier studies,[9,18] in agreement with our present results, have also found the formation of more chlorate than predicted from reaction A. Reaction C
$\ce{3 HClO2 -> 2 ClO3- + Cl- + 3 H+}$ (C)
also plays a role in determining the stoichiometry at higher $\ce{HClO2}$ concentrations."
So, several intermediate products and depending on chloride presence possibly $\ce{ClO2}$, which is a problematic explosive and toxic gas, along with the strength of the $\ce{HClO2}$, which can introduce chlorate at higher chlorous acid concentrations.
Not a particularly good path to an acid as you are also inquiring about. As a disinfectant, the transient creation of the powerful disinfecting HOCl and associated radicals, may actually make it weaker, albeit more stable (considering end products) than hypochlorous acid, in my opinion.
[EDIT] To answer a comment question as to whether ClO2 is safe, here is a statement from the CDC, Public Health Statement for Chlorine Dioxide and Chlorite, to quote:
If you are exposed to chlorine dioxide or chlorite, many factors will determine whether you will be harmed. These factors include the dose (how much), the duration (how long), and how you come in contact with them. You must also consider any other chemicals you are exposed to and your age, sex, diet, family traits, lifestyle, and state of health.
Interestingly, ClO2, which is a stable free radical, has found commercial application as an odor removal agent, likely due to its interaction with organics to create volatile organic chlorides (I would suspect that the presence of light would be catalytic). These VOCs are much more of a long-term health danger (as in carcinogenic), in my opinion. | {
"domain": "chemistry.stackexchange",
"id": 13429,
"tags": "inorganic-chemistry, acid-base, decomposition"
} |
How to name the compound with the formula Br3C-CHCl2? | Question:
Name the compound:
How should I name $\ce{Br3C-CHCl2}$? Is it 1,1,1-tribromo-2,2-dichloroethane or 2,2,2-tribromo-1,1-dichloroethane?
Answer: The first name is correct (1,1,1-tribromo...). Counting the substituents from either end, you would get 1 for the first substituent going from either direction. As a tie-breaker, look at the next substituent, which would give you 1,1 going from the bromine end, and 1,1 going from the chlorine end. Since the tie is still not broken, you go to the third substituent. Going from the bromine end you would have 1,1,1 but 1,1,2 going from the chlorine side. So, you number the carbon with the bromines number 1, giving you 1,1,1-tribromo-2,2-dichloroethane.
As a side note, if the carbon on the right had three chlorines instead of two, this method wouldn't work (you'd get 1,1,1,2,2,2 numbering from either direction). In this circumstance you would use alphabetical order to break the tie, with bromo- having higher priority than chloro. That would give you 1,1,1-tribromo-2,2,2-trichloroethane. I know you didn't ask this, but it could prove useful to know! | {
"domain": "chemistry.stackexchange",
"id": 4973,
"tags": "organic-chemistry, nomenclature"
} |
Explain to a 4-year old why people below the equator are not tilted or suspended upside down | Question: So I just bought a globe for my extremely inquisitive, soon-to-be 4-year-old son. He knows the very basics of the solar system, knows that we live on planet Earth, and also knows the concept of gravity.
While I was explaining to him what the illuminated equator line is, he asked me: how do people stand below this line? (Look at the globe from a 4-year-old's eyes and you will wonder the same.)
I explained to him that the Earth is not as small a sphere as this globe depicts, and that almost everywhere it will be relatively flat. But then he said there are so many places at the bottom of this globe where, if you live there, the Earth will be above you and gravity will hold you upside down.
How do I explain it to him?
Answer: Explain that "down" is more or less toward the center of the Earth, and that direction changes with location. (For a four year old, skip the "more or less" part. It's just toward the center of the Earth, period.) The direction of "down" doesn't change much on trips across town, but it does change with larger trips.
If you have taken your son on a long trip (just a few hundred miles or kilometers will do), show your son those two locations on the globe. The direction of "down" at that remote location and the direction where you live point in slightly different directions for a trip of a few hundred miles, vastly different directions for a longer trip. Ask him if he felt tilted sideways at that remote location. | {
"domain": "astronomy.stackexchange",
"id": 2678,
"tags": "gravity, earth"
} |
Diagonalising the Hamilton operator, why does this magic work? | Question: Let the Hamilton operator $H= \omega_1 a_1^\dagger a_1 + \omega_2 a_2^\dagger a_2 + \frac{J}{2} (a_1^\dagger a_2 + a_1 a_2^\dagger)$ be given; of course, $a_j$ and $a_j^\dagger$ are the annihilation and creation operators, respectively. This operator can be rewritten as
$$
H=
\begin{pmatrix} a_1^\dagger & a_2^\dagger \end{pmatrix}
\begin{pmatrix}
\omega_1 & J/2\\ J/2& \omega_2
\end{pmatrix}
\begin{pmatrix} a_1 \\ a_2 \end{pmatrix}
$$
The Eigenvalues of the matrix in the middle, which may be rewritten as
$$
\begin{pmatrix}
\omega_0 - \frac{\Delta}{2} & J/2\\ J/2& \omega_0 + \frac{\Delta}{2}
\end{pmatrix}
$$
where $\omega_0 = \frac{\omega_1 + \omega_2}{2}$ and $\Delta = \omega_2 - \omega_1$, are
$$
\lambda_{\pm} = \omega_0 \pm \frac{1}{2}\sqrt{J^2 + \Delta^2}.
$$
Question: Could these be the Eigenvalues of that thing $H$ that maps vectors from an infinite dimensional space to others? Why is that? How could the diagonalisation of the little $2\times2$-matrix have anything to do with diagonalising $H$, whose true matrix is infinite dimensional?
Please mention the subject that rigorously explains this and some references about that.
Answer: Using the Schwinger realization, this $H$ can be rewritten as
$$
H=A\hat N+B\hat J_z+C\hat J_y+D\hat J_x
$$
where
\begin{align}
\hat N&=\frac{1}{2}\left(\hat a^\dagger_1\hat a_1+
\hat a^\dagger_2\hat a_2\right)\, ,\\
\hat J_z&=\frac{1}{2}\left(\hat a^\dagger_1\hat a_1-
\hat a^\dagger_2\hat a_2\right)\, ,\\
\hat J_+&=\hat a_1^\dagger \hat a_2\, , \\
\hat J_-&=\hat J_+^\dagger\, .
\end{align}
You can verify that $\hat N$ commutes with everything else so that basically inside each subspace of fixed eigenvalue $n$ of $\hat N$ your $H$ is just proportional to a rotation $R$ of the $\hat J_z$ operator. The change of basis that will give you this rotated $\hat J'_z=R J_z R^{-1}$ is just given by the eigenvectors of your matrix.
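A quick numerical check of the 2×2 eigenvalue formula (my own addition; the parameter values are arbitrary):

```python
import numpy as np

w1, w2, J = 0.8, 1.5, 0.6
w0, delta = (w1 + w2) / 2, w2 - w1

M = np.array([[w1, J / 2],
              [J / 2, w2]])

lam = np.linalg.eigvalsh(M)  # ascending order
expected = np.array([w0 - 0.5 * np.hypot(J, delta),
                     w0 + 0.5 * np.hypot(J, delta)])
assert np.allclose(lam, expected)
```

Since the coupling is a simple beam-splitter-like rotation of the two modes, the full (infinite-dimensional) $H$ becomes $\lambda_+ b_+^\dagger b_+ + \lambda_- b_-^\dagger b_-$ in the rotated basis, so (for parameters where both $\lambda_\pm$ are positive) its spectrum is $n_+\lambda_+ + n_-\lambda_-$ for non-negative integers $n_\pm$.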
My favourite source for this is the (old but nice) textbook by Gordon Baym. | {
"domain": "physics.stackexchange",
"id": 90584,
"tags": "quantum-mechanics, hilbert-space, hamiltonian, linear-algebra, eigenvalue"
} |
Generic data for ListView | Question: I'm in the midst of writing a custom ListView for our application.
In the process of getting this question answered, I realized I needed to separate my column-generation logic from the data population. This led to the need to preserve the ColumnDataBuilder<TDATA> builder object that I used to create the columns, so that I could keep track of the type of my data.
Ideally, usage will look something like this:
_builder = MyListView.ColumnsFor<ReceiptHeaderForShipping>();
mlvShippingList.createColumns(_builder.Create("Receipt", 60, x => x.receiptNumber),
_builder.Create("Cust", 100, x => x.customer.LastName + ", " + x.customer.FirstName),
_builder.Create("Total", 60, MyListView.ColumnType.Currency, x => x.subTotal)
);
And then later
mlvShippingList.populateData(_builder, data);
In addition, the MyListView will have to know what type of data it was built with, in order to know what type of ColumnDataBuilder the IMyColumnData comes from, so that it can cast to it in order to sort.
From MyListView:
public partial class MyListView : ListView
{
public static ColumnDataBuilder<T> ColumnsFor<T>(IEnumerable<T> data)
{
return new ColumnDataBuilder<T>();
}
public void populateFromData<TDATA>(IEnumerable<TDATA> data, params ColumnDataBuilder<TDATA>.IMyListViewColumnData[] columns)
{
createColumns(columns);
populateData(data, columns);
}
public void createColumns<TDATA>(params ColumnDataBuilder<TDATA>.IMyListViewColumnData[] columns)
{
Columns.Clear();
foreach (var col in columns)
{
var ch = new ColumnHeader
{
Text = col.Name,
Width = col.Width,
Tag = col,
};
// Other formatting goes here
this.Columns.Add(ch);
}
}
private void populateData<TDATA>(IEnumerable<TDATA> data, params ColumnDataBuilder<TDATA>.IMyListViewColumnData[] columns)
{
Items.Clear();
var parsedData = data.Select(row => CreateListViewItem(columns, row));
this.Items.AddRange(parsedData.ToArray());
}
public class ColumnDataBuilder<T>
{
internal List<IMyListViewColumnData> columns = new List<IMyListViewColumnData>();
public interface IMyListViewColumnData
{
string Name { get; }
int Width { get; }
ColumnType Type { get; }
string GetDataString(T dataRow);
}
public delegate TOut FormatData<out TOut>(T dataIn);
public class MyListViewColumnData<TOut> : IMyListViewColumnData
{
public string Name { get; private set; }
public int Width { get; private set; }
public ColumnType Type { get; private set; }
private readonly FormatData<TOut> _dataFormatter;
public MyListViewColumnData(string name, int width, ColumnType type, FormatData<TOut> dataFormater)
{
_dataFormatter = dataFormater;
Type = type;
Width = width;
Name = name;
}
public string GetDataString(T dataRow)
{
object data = _dataFormatter(dataRow);
switch (Type)
{
case ColumnType.String:
case ColumnType.Integer:
case ColumnType.Decimal:
return data.ToString();
case ColumnType.Date:
return ((DateTime)data).ToShortDateString();
case ColumnType.Currency:
return ((decimal)data).ToString("c");
case ColumnType.Boolean:
return (bool)data ? "Y" : "N";
default:
throw new ArgumentOutOfRangeException();
}
}
}
#region Factory Methods
public IMyListViewColumnData Create<TOut>(string name, int width, ColumnType type, FormatData<TOut> dataFormater)
{
var col = new MyListViewColumnData<TOut>(name, width, type, dataFormater);
columns.Add(col);
return col;
}
public IMyListViewColumnData Create(string name, int width, FormatData<DateTime> dataFormater)
{
return Create(name, width, ColumnType.Date, dataFormater);
}
// More type-specific factories go here
#endregion
}
}
Am I totally off base at this point? Is this a reasonable track to go down?
Answer: I've done something like this a few times (a lot even), but most times it was a complete waste of time. You will probably not earn the time you save by not doing WebForms markup, even if it's terribly boring. If you have like 100-200 entity types to display in similar lists, it might be a useful path, but if it's 10 - go with good old aspx/ascx markup. If you don't need a lot of business rules and/or scalability in some form, go for markup only with SqlDataSource and WYSIWYG listviews. If you need a fancy UI, go with Telerik.
Given you actually have multiple customers with different list needs for the same entities in the same product, this might be "the only" way, though.
With regards to your code, it seems slim and clean enough. But beware when going further with this - you might get a lot of conflicting requirements. As you say, it's a good thing to separate responsibilities, so keep doing that if you really want to make something "re-usable". (It won't be, but still..) | {
"domain": "codereview.stackexchange",
"id": 2799,
"tags": "c#"
} |
Why are inhibitory connections often used in virtual neural networks when they don't seem to exist in real life neural networks? | Question: Something I like about neural network AI is that we already have a blueprint for it - we know in great detail how different types of neurons work, and we don't have to invent the AI, we can just replicate what we already know works (neural networks) in a simplified way and train it.
So what's confusing me right now is why many popular neural models I've seen when studying neural networks include both inhibitory and stimulating connections. In real neural networks, from my understanding, there is no negative signal being transferred; rather, the signals sent between neurons are comparable to values between 0.1 and 1. There's no mechanism for an inverse (inhibiting) signal to be sent.
For example, in this network (seen overlaying a snake being simulated), the red lines represent inhibiting connections, and the blue lines represent stimulating neurons:
Is this just an inconsequential detail of neural network design, where there's really no significant difference between a range of 0 to 1 and a range of -1 to 1? Or is there a reason that connections in our simulated neural networks benefit from being able to express a range from -1 to 1 rather than just 0 to 1?
Answer: I think you are confused. The reason why some neural networks have neurons with an output in range (-1, 1) all depends on the activation function used. Some networks even have neurons with an output range of (-Infinity, +Infinity) (aka the identity function).
I advise you to take a look at this list: activation functions w/ ranges.
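The range differences are easy to see by evaluating a few common activation functions directly (a small plain-Python illustration; the helper names are mine, not from any particular library):

```python
import math

def sigmoid(x):
    # logistic function: output range (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def relu(x):
    # rectified linear unit: output range [0, +inf)
    return max(0.0, x)

samples = [-5.0, -1.0, 0.0, 1.0, 5.0]
sig_vals  = [sigmoid(x) for x in samples]
tanh_vals = [math.tanh(x) for x in samples]  # output range (-1, 1)
relu_vals = [relu(x) for x in samples]

assert all(0.0 < v < 1.0 for v in sig_vals)   # sigmoid can never go negative
assert any(v < 0.0 for v in tanh_vals)        # tanh can produce "inhibitory" values
assert all(v >= 0.0 for v in relu_vals)       # relu is non-negative
```

So whether a connection can carry a negative ("inhibitory") activation depends entirely on which function the network uses.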
Nowadays a lot of neural networks have a mixture of different activation functions. E.g. combining ReLU with Sigmoid or TanH.
This neural network library even shows that for most simple problems, neural networks prefer a variety of activation functions.
The red lines just mean that that connection has a low value in comparison with other connections.
There is no rule of thumb for activation functions, some problems benefit from a certain function, some don't. | {
"domain": "ai.stackexchange",
"id": 253,
"tags": "neural-networks"
} |
Print a linked list in reverse | Question: I'm working on the problem of printing a linked list in reverse without destroying it. I'm wondering if there are any better ideas, in terms of both time complexity and space complexity. I also welcome bug reports and code style advice.
class LinkedListNode:
    def __init__(self, value, nextNode):
        self.value = value
        self.nextNode = nextNode
    def print_reverse(self):
        if not self.nextNode:
            print self.value
            return
        else:
            self.nextNode.print_reverse()
            print self.value
if __name__ == "__main__":
    head = LinkedListNode('a', LinkedListNode('b', LinkedListNode('c', LinkedListNode('d', None))))
    head.print_reverse()
Answer: Code Style notes:
follow PEP8 style guide, in particular:
one empty line between the class methods
two empty lines after the class definition and before the next block
use the print() function instead of a statement - this way you stay compatible with Python 3.x
rename nextNode to next_node
don't forget about the docstrings
Other improvements:
you should probably explicitly inherit from object to have a new-style class (reference)
instead of a print_reverse(), how about you create a generator method that would yield nodes instead of printing - this can come in handy later when you actually need to use the node values, not just have them on the console
With all the suggestions applied (except the docstrings):
class LinkedListNode(object):
    def __init__(self, value, next_node):
        self.value = value
        self.next_node = next_node
    def reverse_traversal(self):
        if not self.next_node:
            yield self.value
        else:
            for node in self.next_node.reverse_traversal():
                yield node
            yield self.value
if __name__ == "__main__":
    head = LinkedListNode('a', LinkedListNode('b', LinkedListNode('c', LinkedListNode('d', None))))
    for node in head.reverse_traversal():
        print(node)
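To convince yourself the generator really walks the list tail-first, here is a quick self-contained check (Python 3; a minimal sketch following the class above, with the else branch folded away, since yielding after the recursion covers the single-node case too):

```python
class LinkedListNode(object):
    def __init__(self, value, next_node):
        self.value = value
        self.next_node = next_node

    def reverse_traversal(self):
        # recurse into the tail first, then yield this node's value
        if self.next_node:
            for node in self.next_node.reverse_traversal():
                yield node
        yield self.value

head = LinkedListNode('a', LinkedListNode('b', LinkedListNode('c', None)))
result = list(head.reverse_traversal())
assert result == ['c', 'b', 'a']
```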
As a side note, if you would be on Python 3.3+, this part:
for node in self.next_node.reverse_traversal():
    yield node
can be rewritten using the generator delegation:
yield from self.next_node.reverse_traversal() | {
"domain": "codereview.stackexchange",
"id": 23526,
"tags": "python, python-2.x, linked-list"
} |
Explaining XGBoost functioning to non-technical people | Question: I have been tasked to explain the principle of the XGBoost algorithm to non-technical people (think 1-2 slides in a powerpoint presentation to upper management).
I am currently working with the original papers: here for the paper specific to XGBoost and here for the original boosting idea by Friedman.
I have come up with a short explanation of the boosting principle: the model is built iteratively, concentrating on where the previous model made errors. XGBoost takes advantage of the tree architecture and tools (splitting, pruning) to do so.
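If one more slide is allowed, the "concentrate on previous errors" loop can even be sketched without any library; here each weak learner is just a constant fit to the current residuals, a purely illustrative stand-in for the small trees XGBoost uses:

```python
# Toy boosting: each round "fits" a learner (here the residual mean, standing
# in for a shallow tree) and adds a shrunken copy of it to the ensemble.
y = [3.0, 5.0, 7.0, 9.0]        # targets
pred = [0.0] * len(y)           # ensemble prediction, starts at zero
learning_rate = 0.5

for _ in range(20):
    residuals = [yi - pi for yi, pi in zip(y, pred)]
    learner = sum(residuals) / len(residuals)   # fit to the current errors
    pred = [pi + learning_rate * learner for pi in pred]

best_constant = sum(y) / len(y)   # the best a constant model can do
assert all(abs(pi - best_constant) < 1e-3 for pi in pred)
```

Each round shrinks the remaining error, which is exactly the "focus on what the previous model got wrong" story in code form.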
What would you add to this short description ?
Answer: The xgboost docs come with a nice introduction and examples. If it is really only about a few slides, this may be a good start: https://xgboost.readthedocs.io/en/latest/tutorials/model.html | {
"domain": "datascience.stackexchange",
"id": 5891,
"tags": "xgboost"
} |
ROS Answers SE migration: ROS over QNX | Question:
Has anyone tried to use ROS over QNX?
I would like to use ROS topics to export data (i.e. joints) from an existing system that runs over QNX.
Originally posted by dgerod on ROS Answers with karma: 113 on 2013-02-05
Post score: 2
Answer:
As far as I have found online, there have been some people who tried, but no success has been reported. Some references:
An email thread in 2009.
realtime ROS discussion in May 2013
I'm about to look into porting catkin to QNX, since I have received fairly strong interest in enabling ROS, or at least its build system, on QNX.
Originally posted by 130s with karma: 10937 on 2013-06-04
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by dgerod on 2013-06-05:
Good. I am interested in your progress...
Comment by 130s on 2013-07-24:
As in this thread, catkin is ported on QNX, and there's also a progress in getting ROS run there too. It seems, however, to have stopped progressing at rosdep at the moment. | {
"domain": "robotics.stackexchange",
"id": 12745,
"tags": "ros, real-time"
} |
What (actually) is Jupiter doing to this year's Perseids meteor shower? | Question: After reading the Astronomy Magazine article Perseid meteor shower set for its best show in nearly 20 years and NASA's Look Up! Perseid Meteor Shower Peaks Aug. 11-12, I understand that Jupiter's gravity is affecting this year's possible peak rate of Perseid meteors.
I think it is - in simple terms - deflecting some of them out of their general orbit, a bit more towards earth, so that we intercept more than usual. But I am having difficulty visualizing this. Is this deflection in the plane of the ecliptic, or out of plane? At our closest approach, how far away are we from the orbit of this dust?
A good diagram would really help! I found this at space.com credited to Jeremie Vaubaillon, but it's not helping me with the 'big picture'.
Also found this here and credited to NASA/JPL:
Answer: To put it in simple terms, a comet's debris trail or dust trail is like a ring of debris that generally follows the entire orbit of the comet, more concentrated close to the comet, more spread out further away, but it can be thought of, for simplicity, as an entire ring of debris, kind of, a miniature asteroid belt with a fixed elliptical path. The picture below isn't perfect, cause it gives the impression that orbits are all the same when there's some spreading out, but it's the gist of it. (Note, the debris field and comet in the picture below isn't Swift-Tuttle/Perseid)
The reason for this is that any debris that's knocked off the comet at a low velocity has pretty much the same orbit as the original comet, and over hundreds or thousands of orbits, it tends to fill up the entire comet's orbit with mostly tiny particles, some larger ones. The debris field or dust trail from the comet generally follows in the comets orbit.
This picture is a bit more accurate as to what the Earth is passing through.
https://lintvwpri.files.wordpress.com/2016/08/perseid_base4.png?w=650&h=366
The more compact the dust trail or debris field, the more meteors, but the smaller the target the Earth has to hit, so odds are greater that the Earth will miss the field entirely. The more spread out, the greater chance the Earth hits the debris field, but more spread out means we get fewer meteors.
Earth's orbit crosses the Perseid (elliptical ring of debris) or Swift-Tuttle's debris field about the same time every year because Earth's and Swift-Tuttle's orbits nearly intersect.
This year, because of the placement of Jupiter, the debris field (which we can kind of think of as a ring) got bent by Jupiter, and Earth is hitting closer to the center of the debris field, which means more meteors. I can't say in which plane the field got bent, because it depends on its orbital relation to Jupiter and I couldn't find that, but it was the ring of debris that got bent, so Earth will hit it more directly this year than usual.
Because the ring of debris is pretty much permanent, Jupiter bends the ring every 11 years as it passes closest to it, and occasionally that bend works in Earth's favor and gives us more meteors. So it's a rare event: it might not happen again for 11 years, and it might be over 100 years before Jupiter does this for us again.
See article.
Typically Earth just grazes by Swift-Tuttle’s debris field, but our
planet will be even closer to the particle stream this year thanks to
some help from our neighbor Jupiter. The gas giant occasionally gets
close to the comet stream, and its immense gravity pushes the debris
nearer to Earth. | {
"domain": "astronomy.stackexchange",
"id": 1744,
"tags": "orbit, comets, meteor-shower"
} |
How can I merge some maps? | Question:
I want to know how I can merge two or three maps into one file; the maps were created using Hector SLAM.
Originally posted by programmer on ROS Answers with karma: 61 on 2014-02-26
Post score: 0
Original comments
Comment by AbuIbra on 2014-02-26:
What do you mean by "merge"?
Comment by dornhege on 2014-02-26:
Are they already aligned correctly given the origins in the yaml?
Comment by AbuIbra on 2014-02-26:
What you can do is load the three maps using the map server. And then subscribe to the 3 of them and consider one as your reference-map. Then, compute the rotations necessary (to align obstacles for example) and then merge the 2 other maps with the reference. Of course you have to pay attention to the sizes and origins of the maps.
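The cell-by-cell combination step described in that comment can be sketched for two already-aligned occupancy grids (a hypothetical illustration, not a ROS API; rotation and origin handling are omitted, and cells use the usual convention of -1 unknown, 0 free, 100 occupied):

```python
UNKNOWN = -1

def merge_cells(a, b):
    # Known data beats unknown; otherwise keep the more pessimistic
    # (higher occupancy) value so obstacles are never erased.
    if a == UNKNOWN:
        return b
    if b == UNKNOWN:
        return a
    return max(a, b)

def merge_grids(reference, other):
    # element-wise merge of two equally sized, already-aligned grids
    return [[merge_cells(a, b) for a, b in zip(row_r, row_o)]
            for row_r, row_o in zip(reference, other)]

reference = [[0, UNKNOWN], [100, 0]]
other     = [[UNKNOWN, 0], [0, 100]]
merged = merge_grids(reference, other)
assert merged == [[0, 0], [100, 100]]
```

The hard part in practice is the alignment (rotation, translation, resolution), which is exactly what MapStitch automates.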
Answer:
MapStitch allows merging two previously non-aligned maps, provided they have sufficient overlap, are consistent and have similar appearance for the overlapping areas. We haven't been using it a lot recently, but it worked in experiments we did a few months ago. You might also want to check out the "robocup" branch, as this contains potentially useful bugfixes. Unfortunately, I'm not able to look after it myself currently.
Originally posted by Stefan Kohlbrecher with karma: 24361 on 2014-02-26
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 17095,
"tags": "slam, navigation, hector, hector-slam, ros-groovy"
} |
Why can we consider the rest mass of the photon to be zero? | Question: Because a photon has momentum, we can calculate its relativistic mass according to special relativity. Also according to this theory, the relativistic mass is proportional to the rest mass once the speed is fixed, which means the relativistic mass can be nonzero only when the rest mass is nonzero. So we cannot consider the rest mass of the photon to be zero. However, once we consider that the photon travels at the speed of light, the proportionality factor between the relativistic mass and the rest mass, $\gamma$, runs to infinity. So, for the momentum to stay finite, the rest mass has to be zero.
This seems contradictory.
Answer: The photon is an elementary particle of mass zero in the table of the Standard Model of particle physics. In particle physics the four vector algebra of special relativity is used in the SM to model all experimental observations.
This is how the invariant mass of a particle is defined by the four vector $(E,p_x,p_y,p_z)$. Composite particles add up to one four vector and the composite has an invariant mass.
Note what happens when the invariant mass is zero: the relation reduces to $E=pc$. When the momentum is zero, i.e. in the rest frame of the particle, one gets $E=m_0c^2$.
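The four-vector bookkeeping is easy to check numerically; a minimal sketch in units with $c=1$:

```python
import math

def invariant_mass(E, px, py, pz):
    # m^2 = E^2 - |p|^2  (units with c = 1)
    m2 = E * E - (px * px + py * py + pz * pz)
    return math.sqrt(max(m2, 0.0))

# A photon has |p| = E, so its invariant mass is zero at any energy
assert invariant_mass(5.0, 5.0, 0.0, 0.0) == 0.0

# A massive particle at rest: E = m0 (an electron, 0.511 MeV, for example)
assert abs(invariant_mass(0.511, 0.0, 0.0, 0.0) - 0.511) < 1e-12

# Two back-to-back photons: the composite four-vector has nonzero mass
E1 = E2 = 2.0
assert invariant_mass(E1 + E2, E1 - E2, 0.0, 0.0) == 4.0
```

The last case illustrates the "composite particles add up to one four vector" point: two massless photons can together form a system with nonzero invariant mass.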
The relativistic mass is not used in particle physics because of confusions like the one in your question. One has to use the four vectors to be consistent with special relativity. The relativistic mass is an outdated concept.
The SM has been validated in most experiments up to now (the exceptions pointing to searches for new physics). The data are consistent with the mass of the photon being zero. | {
"domain": "physics.stackexchange",
"id": 84966,
"tags": "special-relativity, photons, mass, mass-energy"
} |
Condition for true periodicity | Question: In Vibrations and Waves, French writes that
The condition for any sort of true periodicity in the combined motion is that the period of the component motions be commensurable- i.e., there exist two integers $n_1$ and $n_2$ such that $$T=n_1T_1=n_2T_2$$
But what is the justification for this?
Answer: You really should put sufficient details in the question that someone can understand the context without actually going to the book in question.
Here we have a combined signal such that $$x=x_1+x_2$$ where $$x_1=A_1 \cos(\omega_1 t)$$ $$x_2=A_2 \cos(\omega_2 t)$$ so $x_1$ and $x_2$ are periodic with periods $T_1$ and $T_2$ respectively. This implies that $$x_1(t+n T_1)=x_1(t)$$ $$x_2(t+nT_2)=x_2(t)$$ for all integers $n$. Also, if $n_1$ and $n_2$ are integers then $n_1 n$ and $n_2 n$ are also integers. So if $x$ is periodic then $$x(t+nT)=x(t)$$ $$x_1(t+nT)+x_2(t+nT)=x_1(t)+x_2(t)$$ $$x_1(t+nT)+x_2(t+nT)=x_1(t+n_1 n T_1)+x_2(t+n_2 n T_2)$$ so we have $$nT=n_1nT_1$$ and $$nT=n_2 n T_2$$ and thus $$T=n_1 T_1 = n_2 T_2$$ | {
"domain": "physics.stackexchange",
"id": 90368,
"tags": "oscillators, vibrations"
} |
Special Relativity: Finding the Euler Lagrange of a massive particle | Question: Knowing that $$\tag{1} L= -mc\sqrt{-\eta_{ab}\frac{d\xi^a}{d\lambda}\frac{d\xi^b}{d\lambda}}$$
we get
$$\tag{2} p_a=\frac{\partial L}{\partial(d\xi^a/d\lambda)} = m\eta_{ab}u^b.$$
How come? If I differentiate this $L$ with respect to
$$\tag{3} d\xi^a/d\lambda$$
I get a whole different answer. Shouldn't it be
$$\tag{4} p_a= (-mc)\left(-\eta_{cd}\frac{d\xi^c}{d\lambda}\frac{d\xi^d}{d\lambda}\right)^{-1/2}\left( -\eta_{ab}\frac{d\xi^b}{d\lambda} \right) ~?$$
How was this performed?
Answer: Your last expression (4) is equal to (2); you just have to realize what it says. $\lambda$ isn't $\tau$, and $$u^c = \frac{d\xi^c}{d \tau} = \frac{d\xi^c}{d \lambda} \frac{d\lambda}{d \tau}$$ If you look back at your Lagrangian and how it was derived, you should be able to say what $d\lambda/d\tau$ is.
To be very explicit, the action of a free relativistic particle is
$$ S = -mc^2\int d\tau = -mc^2\int \frac{d\tau}{d\lambda} d\lambda $$
where we have used a reparametrization into a general parameter $\lambda$. Now you have $$L = -mc^2 \frac{d \tau}{d \lambda}$$
When you compare this with your equation (1) and with the knowledge $d\lambda/d\tau= (d\tau/d\lambda)^{-1}$, you should get the expression for $u^c$ in terms of $\lambda$-parametrization very easily. When you put this expression into (2), you will get (4). | {
"domain": "physics.stackexchange",
"id": 16904,
"tags": "homework-and-exercises, special-relativity, lagrangian-formalism, momentum"
} |
Imaginary Bonding Interactions | Question: Usually in chemistry, we deal with bonding interactions. That is, if I have the diatomic A-A molecule or A-B molecule, there's a favorable interaction (i.e., a bond) and a prototypical MO diagram like this:
As I'd say in a lecture, one orbital goes down in energy, one orbital goes up in energy. I can use Hückel theory to define the energies of the bonding and anti-bonding interactions:
$$e_{ab} = e_a + \frac{H_{ab}^2}{e_a - e_b}$$
$$e^*_{ab} = e_b + \frac{H_{ab}^2}{e_b - e_a}$$
Now, consider if $H_{ab}$ is imaginary... we'd use the same equations to get a very unusual MO diagram:
How would you describe such an interaction? It's clearly not the traditional favorable bonding. It's not a non-bonding interaction. And it's not the traditional anti-bonding case like $\ce{He2}$, because if I put 2 electrons into this system, the anti-bonding orbital isn't filled.
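Evaluating the two Hückel expressions with complex arithmetic makes the oddity explicit: a purely imaginary $H_{ab}$ makes $H_{ab}^2$ negative, so both level shifts flip sign relative to the usual bonding/anti-bonding pattern (illustrative Python with made-up energies):

```python
def huckel_levels(e_a, e_b, H_ab):
    # Two-level perturbative Hückel estimate from the expressions above
    e_bond = e_a + (H_ab ** 2) / (e_a - e_b)
    e_anti = e_b + (H_ab ** 2) / (e_b - e_a)
    return e_bond, e_anti

e_a, e_b = -10.0, -8.0   # made-up monomer orbital energies (eV)

# Real coupling: the lower level drops, the upper level rises (levels repel)
lo, hi = huckel_levels(e_a, e_b, 1.0)
assert lo < e_a and hi > e_b

# Purely imaginary coupling: H_ab^2 = -|H_ab|^2, so both shifts reverse
lo_i, hi_i = huckel_levels(e_a, e_b, 1.0j)
assert lo_i.real > e_a and hi_i.real < e_b
```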
I ask the question because in a survey we've performed using DFT, we find many such combinations.
I'm leaning towards calling it an imaginary bond, but if there are known discussions, I'd be curious to read references.
Answer: The short answer is that I need to come up with better wording to describe the interaction and that Hückel is surprisingly bad for these systems.
I'll give a slightly longer answer for now, and add a reference as we submit the paper.
First, an example system:
So "A" is a strongly electron-deficient dinitrothiophene ring (an electron acceptor) and "B" is a strongly electron-rich diaminothiophene ring. The figure in the question is a subset of the entire MO diagram, focusing on the HOMOs of the monomers and the HOMO and HOMO-1 of the dimer.
Why do I think this is a strange system? Well, the particle-in-a-box and Hückel models are usually pretty good at explaining conjugated organic molecules. If you make a longer conjugated system, the HOMO should go up in energy, and the HOMO-LUMO gap goes down (delocalization).
The alternative, localization, would have the orbitals remain at the same energy level:
Both of those "limits" can be explained in the Hückel model by changing the $H_{ab}$ or $\beta$ parameter (depending on how you write the equation, the symbol is different). In the localized case, $H_{ab} = 0$.
What's strange about these strong donor-acceptor cases is that as I said, it's not really delocalized, or localized.
Instead, as one comment correctly deduced, the DFT calculation (B3LYP/6-31G*) shows a large degree of charge transfer, but not necessarily localization. The dipole moment is computed to be >7 Debye.
If one attempts to use Hückel to analyze these results, you must infer that $H_{ab}$ is imaginary. This indicates an "open system" because electrons are indeed transferred - from monomer "B" to monomer "A" creating the large dipole moment.
My overly-simplified question was probably a bit confusing, but:
It's interesting that no simple model captures charge transfer unless you allow Hückel to have imaginary (non-physical) $H_{ab}$ parameters. (NB, my colleagues aren't too happy with this concept, suggesting it simply indicates Hückel is a poor choice.)
It's surprising that in prototypical conjugated systems, I can't (yet) find a discussion about this alternative to delocalization and localization. We find it commonly as the difference in energy level increases. | {
"domain": "chemistry.stackexchange",
"id": 6239,
"tags": "bond, quantum-chemistry, molecular-orbital-theory, reference-request"
} |
Faster way of getting 2D frequency amplitudes than DFT? | Question: I'm making a small program which gets the DFT of an image to get a general idea of the image's overall orientation. It does this by rotating a line in a radar sweep type pattern, keeping track of which angle has the highest sum of frequency amplitudes. (It actually only does $0-180^\circ$ though because the testing line is symmetrical across the origin)
Anyways, I was wondering, is there a less computationally expensive way to get frequency amplitudes of a 2D image than doing a 2D DFT?
I know that it is separable so can do better than the naive $\mathcal O(N^4)$ operation - I think it becomes $\mathcal O(N^3)$. I also know that the FFT can do a 1D DFT in $\mathcal O(N \log N)$ instead of $\mathcal O(N^2)$. Lastly, I know that it's massively parallelizable so could be (massively) multithreaded on either CPU or GPU.
Those things aside, are there any less computationally expensive ways to get frequency amplitudes of a 2D image than to do a full 2D DFT?
Answer: Note that the computational complexity of a 2D FFT would be $\mathcal O(N^2\log N)$ instead of $\mathcal O(N^4)$ for the naive 2D DFT.
That said, since you mention that you want to obtain spectrum information along beams of a radar-like pattern, it sounds like you are really more interested in computing the 1D FFT of each such beam. In that case, for $M$ beams the computational complexity would be $\mathcal O(MN \log N)$.
As an illustration, lets process the following figure:
from which we shall extract a set of $M$ beams with the following sample matlab code:
% Number of beams to extract
M = 200;
% Convert the input image "img" to polar coordinates
c = size(img)/2+1;
N = max(size(img));
angles = 2*pi*[0:M-1]/M;
radius = [0:floor(N/2)-1];
imgpolar = zeros(length(radius), length(angles));
for ii=1:length(radius)
    xi = min(max(1, floor(c(2)+radius(ii)*cos(angles))), size(img,2));
    yi = min(max(1, floor(c(1)-radius(ii)*sin(angles))), size(img,1));
    imgpolar(ii,:) = img(sub2ind(size(img), yi, xi));
end
% Compute the FFT for each beam angle
ImgFD = fft(imgpolar,[],1);
figure(1);
freqs = [0:size(ImgFD,1)-1]/size(ImgFD,1);
surf(angles, freqs, 10*log10(abs(ImgFD)+1), 'EdgeColor', 'None');
view(2);
colormap("gray");
xlabel('Beam angle (radians)');
ylabel('Normalized frequency');
to yield:
which can be collapsed to the sum of amplitudes as a function of the beam angles to give:
SumAmplitudes = sum(abs(ImgFD),1);
figure(2);
hold off; plot(angles, 10*log10(SumAmplitudes+1));
xlabel('beam angle (radians)');
ylabel('Sum of amplitudes (dB)');
As a side note, if you can use the sum of squared amplitudes along those beams (instead of the sum of amplitudes), then you can do it directly in the spatial domain thanks to Parseval's theorem (which would bring the computational complexity down to $\mathcal O(MN)$, dominated by the conversion to polar coordinates). The equivalence (for the sum of squared amplitudes) can be seen using the following code:
% Compare the result of square amplitude summation in the frequency domain vs spatial domain
SumFD = sum(abs(ImgFD).^2,1)/size(ImgFD,1);
SumSD = sum(abs(imgpolar).^2,1);
figure(3);
hold off; plot(angles, 10*log10(SumFD+1), 'b');
hold on; plot(angles, 10*log10(SumSD+1), 'r:');
xlabel('beam angle (radians)');
ylabel('Sum of squared amplitudes (dB)');
legend('Frequency domain', 'Spatial domain', "location", "southwest");
Notice the overlap of the curves computed in the spatial and frequency domains:
Update:
If you are in fact computing beams of a radar-like pattern in a 2D frequency plot, as you seem to suggest in this other post, then your best bet comes back to doing a 2D FFT, which would be order $\mathcal O(N^2 \log N)$. You could then perform the conversion to polar form for the sum of frequency coefficients, which would add a small $\mathcal O(N^2)$, so the result is still dominated by the 2D FFT. | {
"domain": "dsp.stackexchange",
"id": 4094,
"tags": "image-processing, fourier-transform"
} |
Are the Kerr-Schild coordinates for the Schwarzschild metric related to the standard coordinate system for the metric? | Question: In another question, reference is made to the Kerr-Schild coordinate system for the Schwarzschild metric, and there is a question about whether this system is directly related to the standard metric one by an exact coordinate transformation, rather than some sort of linearization of Einstein's equations, because as formulated, it looks very much like some sort of linearization:
$$g_{ab} = \eta_{ab} + C k_{a}k_{b}$$
where $\eta$ is the Minkowski metric, $C = \frac{2M}{r}$ and (choosing spherical coordinates) $k = (1, \mp 1, 0, 0)$
So, what is going on?
Answer: Yes, it absolutely is an exact coordinate transformation!
Take the expression above, and write it as a line element (also, take $\epsilon$ to be either plus or minus 1 to avoid writing $\pm$ over and over), and take $d\Omega^{2} = \left(d\theta^{2} + \sin^{2}\theta\,d\phi^{2}\right)$:
$$ds^{2} = -\left(1-\frac{2M}{r}\right)dt^{2} + 2dt dr\left( -\frac{2M\epsilon}{r}\right) + \left(1+\frac{2M}{r}\right)dr^{2} + r^{2}d\Omega^{2}$$
By inspection of this expression, we can see that the $g_{tt}$ element is already in Schwarzschild form, and the angular parts are already correct. We just need the off-diagonal elements gone, and we need a different $g_{rr}$. With all of these constraints, we guess that we cannot do very much, and are pretty much restricted to a coordinate change of the form
$$t = T + f(r)$$
where we expect $T$ to be the Schwarzschild time. Substituting this ansatz into our line element, we have:
$$ds^{2} = -\left(1-\frac{2M}{r}\right)dT^{2} + 2dTdr\left(-\frac{2M\epsilon f^{\prime}}{r}\left(1-\frac{2M}{r}\right) -\frac{2M\epsilon}{r}\right) + \left(-f^{\prime 2}\left(1-\frac{2M}{r}\right) -\frac{4M\epsilon f^{\prime}}{r}+ 1+\frac{2M}{r}\right)dr^{2} + r^{2}d\Omega^{2}$$
The off-diagonal term vanishes if we have:
$$f^{\prime} = -\frac{\epsilon \frac{2M}{r}}{1-\frac{2M}{r}}$$
so, a convenient choice for $f$ is $-2M\epsilon {\rm ln}\left(\frac{r}{2M} -1\right)$
Finally, all we have to do is check whether our expression for $g_{rr}$ works out. Substituting the value for $f^{\prime}$, we have:
$$g_{rr} = -\left(-\frac{\frac{2M}{r}}{1-\frac{2M}{r}}\right)^{2}\left(1-\frac{2M}{r}\right) -\frac{4M\epsilon}{r}\left(-\frac{\epsilon \frac{2M}{r}}{1-\frac{2M}{r}}\right)+ 1+\frac{2M}{r}$$
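Before simplifying by hand, one can sanity-check numerically that this expression collapses to the Schwarzschild value (a small sketch, using $\epsilon^2=1$):

```python
def g_rr(r, M, eps):
    u = 2.0 * M / r
    fprime = -eps * u / (1.0 - u)      # the f'(r) that kills the cross term
    return (-fprime ** 2 * (1.0 - u)   # -f'^2 (1 - 2M/r)
            - 2.0 * u * eps * fprime   # -4 M eps f' / r
            + 1.0 + u)                 # +1 + 2M/r

M = 1.0
for eps in (1.0, -1.0):
    for r in (3.0, 5.0, 10.0):
        schwarzschild_grr = 1.0 / (1.0 - 2.0 * M / r)
        assert abs(g_rr(r, M, eps) - schwarzschild_grr) < 1e-12
```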
The second term is $-2$ times the first term, and multiplying the last term by $\frac{1-2M/r}{1-2M/r}$ and adding everything together, we find that we do indeed end up with $\frac{1}{1-2M/r}$, and we are done. $T$ is indeed the Schwarzschild coordinate time. | {
"domain": "physics.stackexchange",
"id": 91675,
"tags": "general-relativity, black-holes"
} |
Why is fused silica superior to N-BK7 in terms of thermal lensing? | Question: In optics lab, the two commonly used types of glasses are N-BK7 and fused silica. The lab wisdom has been that fused silica is superior to N-BK7 in terms of thermal lensing.
And indeed it seems so. Our collimator lens for a 10W fiber amplifier looks like it's made of N-BK7, and when a beam profiler is situated 0.5 meter away from the collimator, the initially collimated beam with 1.7 mm diameter focuses to 0.6 mm diameter.
But from what I learned from the RP Photonics entry on thermal lensing, the figure-of-merit for thermal lensing is the ratio of the thermo-optic coefficient to the thermal conductivity. That is, you want the ratio $\frac{dn}{dT} / \kappa$ to be as small as possible.
However, if you look at the thermal properties of different glasses (at 1064 nm), fused silica has $\frac{dn}{dT} \sim 11\times 10^{-6} / K$ and N-BK7 has $\frac{dn}{dT} \sim 2 \times 10^{-6} / K$. On the other hand, they have similar thermal conductivity ($1.4\, W/(m\cdot K)$ for FS and $1.1\,W/(m\cdot K)$ for N-BK7).
Based on what I've said so far, it seems N-BK7 should have better resistance to thermal lensing! But it does not in practice. Why?
EDIT: Based on the answer below, I looked out for extinction coefficient $k$ (or absorption coefficient $\alpha = \frac{4\pi k}{\lambda}$, whichever is available).
According to this data, N-BK7 has absorption coefficient of roughly $10^{-3}/cm$ at 1064 nm.
According to this website, fused silica has absorption coefficient between $10^{-4}$ and $10^{-5}$, per cm, at 1064 nm.
EXTRA QUESTION: For those familiar with the optics industry, can you tell me why N-BK7 is popular, and in which uses it is superior to fused silica?
Answer:
"But from what I learned from the RP Photonics entry on thermal
lensing, the figure-of-merit for thermal lensing is the ratio of
thermo-optic coefficient to thermal conductivity."
You should also take into account the extinction coefficient: if there were no losses, the medium would not heat up at all. However, I did not compare extinction of the two media.
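That point can be made quantitative with the numbers quoted in the question, treating the deposited heat as proportional to the absorption coefficient and using $\alpha \cdot (dn/dT)/\kappa$ as a rough proxy (my own back-of-envelope construction, not a figure of merit from RP Photonics):

```python
# Rough thermal-lensing proxy: heat deposited ~ alpha, lens strength ~ dn/dT,
# heat removal ~ kappa.  Values are the ones quoted in the question (1064 nm).
glasses = {
    # name: (dn/dT [1/K], kappa [W/(m*K)], alpha [1/cm])
    "N-BK7":        (2e-6,  1.1, 1e-3),
    "fused silica": (11e-6, 1.4, 1e-4),  # upper end of the quoted alpha range
}

score = {name: alpha * dndT / kappa
         for name, (dndT, kappa, alpha) in glasses.items()}

# Despite the larger dn/dT, fused silica wins because it absorbs ~10-100x less
assert score["fused silica"] < score["N-BK7"]
```

With these numbers the one-to-two-order-of-magnitude difference in absorption outweighs the factor-of-five difference in thermo-optic coefficient.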
Note that your source suggests the "figure-of-merit" "for high-power gain media", whereas your collimator lens is probably a passive device. I don't know if this is important though. | {
"domain": "physics.stackexchange",
"id": 57092,
"tags": "thermodynamics, optics, material-science, glass"
} |
Turbofan: Efficiency wrt. the bypass ratio: "a lot of slow air > a little faster air"? | Question: Reading / viewing up on how jet engines work, this video explains at the 9:02 mark that, for turbofan engines, ".. it is more aerodynamically efficient to have a lot of air moving relatively slowly rather than a little air moving relatively quickly." I.e. a lot of air bypasses the combustion chamber and is only accelerated by the main frontal fan.
This claim has me puzzled, given that kinetic energy is dependent on the square of the velocity, i.e. $E_k = \frac12 m v^2$, I would have guessed the inverse. Does "aerodynamically efficient" mean something very specific here?
Answer: I am not an aerodynamics specialist, so the following is almost certainly a huge oversimplification (or maybe downright wrong), but I think it might help with intuition.
Suppose you have an amount of energy $E$ available to spend, and you are trying to accelerate an object of mass $M$. Suppose you can impart the energy in the form of kinetic energy to an air mass of mass $m$. What speed will this accelerate the mass $M$ to?
By energy and momentum conservation, you have
$$E=\frac{1}{2}m v_m^2+\frac{1}{2}M v_M^2$$
and
$$mv_m+Mv_M=0$$
giving
$$v_M=\sqrt{\frac{2 e m}{M (m+M)}}$$
which as a function of air mass $m$ looks like this:
Plot[Sqrt[(2 e m)/(M (m + M))] /. {e -> 1, M -> 100}, {m, 0, 500}, PlotRange -> All]
which means that according to this simple model, it's more energy efficient to use large values of $m$, ie, to eject large volumes of slow air rather than a small volume of fast air.
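The monotonic behavior in that plot is easy to reproduce numerically; a Python sketch of the same formula:

```python
import math

def v_aircraft(E, M, m):
    # Speed imparted to mass M when energy E goes into ejecting air mass m
    return math.sqrt(2.0 * E * m / (M * (m + M)))

E, M = 1.0, 100.0
speeds = [v_aircraft(E, M, m) for m in (1.0, 10.0, 100.0, 1000.0)]

# For a fixed energy budget, more (slower) air gives the aircraft more speed
assert all(a < b for a, b in zip(speeds, speeds[1:]))
```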
Aerodynamic efficiency is not necessarily a measure of how much energy you impart to the ejected air per unit of fuel consumed, but rather of how much momentum you impart to the aircraft per unit of fuel consumed. | {
"domain": "physics.stackexchange",
"id": 17618,
"tags": "thermodynamics, fluid-dynamics, aerodynamics"
} |
An algorithm that determines if a regular language accepts all strings of its alphabet | Question: Let $L$ be a regular language over the alphabet $\Sigma$. I'm trying to find an algorithm to tell whether $L=\Sigma^{*}$, i.e., whether $L$ contains every string over its alphabet. I think this algorithm involves converting the language to a DFA, but I'm not sure what to do from there. I have only recently begun learning about regular languages and complexity, so help would be appreciated.
Answer: If we had an algorithm that decides whether $L(M)=\varnothing $ or not, then it could be used for deciding whether $L(M)=\Sigma^*$ or not for some DFA $M$ (by creating from $M$ the DFA $M'$ such that $L(M')=L(M)^c$, so that $L(M)=\Sigma^*$ iff $L(M')=\varnothing$).
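As a side note, the emptiness test can also be decided by a plain reachability search, which together with complementation gives a direct universality check: $L(M)=\Sigma^*$ iff every state reachable from the start state of the complete DFA is accepting. A sketch (my own code, not part of the original answer):

```python
from collections import deque

def is_universal(alphabet, delta, start, accepting):
    """L(M) = Sigma* iff every reachable state is accepting.
    delta: dict mapping (state, symbol) -> state; DFA assumed complete."""
    seen, queue = {start}, deque([start])
    while queue:
        q = queue.popleft()
        if q not in accepting:
            return False          # some string leads here and is rejected
        for a in alphabet:
            nxt = delta[(q, a)]
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return True

# DFA over {0,1} accepting everything: one accepting state looping to itself
delta_all = {("s", "0"): "s", ("s", "1"): "s"}
assert is_universal("01", delta_all, "s", {"s"})

# DFA rejecting any string containing '1': not universal
delta_no1 = {("s", "0"): "s", ("s", "1"): "d",
             ("d", "0"): "d", ("d", "1"): "d"}
assert not is_universal("01", delta_no1, "s", {"s"})
```

This runs in time linear in the size of the DFA, avoiding the exponential enumeration of all strings up to the pumping length.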
For deciding whether $L(M)=\varnothing $ or not, suppose $p$ is the pumping length for $M$; then check for every string in the set $A=\left \{ x\in \Sigma^*: |x|<p+1 \right \}$ whether it is in $L(M)$ or not. If $A\cap L(M)\not =\varnothing $ then $L(M)\not = \varnothing $. If $A\cap L(M)=\varnothing $ then $L(M)=\varnothing $, because if there existed a string $x$ such that $|x|>p$ and $x\in L(M)$, then by pumping down on $x$ we could reach a string $y$ such that $y\in A\cap L(M)$, and this leads to a contradiction. | {
"domain": "cstheory.stackexchange",
"id": 3490,
"tags": "ds.algorithms, fl.formal-languages, automata-theory"
} |
Electrons fly on certain orbits around nucleus. why do not they radiate waves and crash to the nucleus? | Question: How to fill the gap between classical physics and quantum physics?
Answer: Adopting Planck's idea of quantized energy levels, Bohr hypothesized that in a hydrogen atom there can be only certain values of total energy (electron kinetic energy plus potential energy) for the electrons.
These allowed energy levels correspond to different orbits for the electrons as it moves around the nucleus, the larger orbits associated with larger total energies.
Bohr assumed that an electron in one of these orbits does not radiate electromagnetic waves. For this reason, the orbits are called stationary orbits or stationary states.
Bohr recognized that radiation-less orbits violated the laws of classical physics. But the assumption of such orbits was necessary, because the traditional laws indicated that an electron radiates electromagnetic waves as it accelerates around a circular path, and the loss of the energy carried by the waves would lead to the collapse of the orbit.
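The allowed energies Bohr obtained for hydrogen follow the standard formula $E_n = -13.6\ \text{eV}/n^2$; a quick numeric illustration (my addition, not part of the original answer):

```python
def bohr_energy_eV(n):
    # total (kinetic + potential) energy of the n-th Bohr orbit in hydrogen
    return -13.6 / n ** 2

# larger orbits carry larger (less negative) total energy
levels = [bohr_energy_eV(n) for n in (1, 2, 3)]
assert levels[0] < levels[1] < levels[2]

# photon energy for the n=2 -> n=1 transition (Lyman-alpha), ~10.2 eV
assert abs((bohr_energy_eV(2) - bohr_energy_eV(1)) - 10.2) < 0.01
```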
"domain": "physics.stackexchange",
"id": 37917,
"tags": "atomic-physics"
} |
How to Install ROS Kinetic Ubuntu 16.04 | Question:
I tried to install ROS following the instructions here.
After finally entering this command:
sudo apt-get install ros-kinetic-desktop-full
I was presented with this error:
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation:
The following packages have unmet dependencies: ros-kinetic-desktop-full :
Depends: ros-kinetic-desktop but it is not going to be installed
Depends: ros-kinetic-perception but it is not going to be installed
Depends: ros-kinetic-simulators but it is not going to be installed
E: Unable to correct problems, you have held broken packages.
Originally posted by yodigi7 on ROS Answers with karma: 21 on 2016-09-16
Post score: 2
Original comments
Comment by jdmartin86 on 2016-12-19:
I'm having the same problem -- tried following Davide Faconti's solution, and it didn't help.
Comment by Qinsheng on 2017-09-23:
I have same problem and tried everythings but no luck!
Answer:
Hi Yodigi,
I had the same problem with the exact same three depends messages after upgrading to 16.04.
When you run
sudo apt-get update && sudo apt-get upgrade
do you get a message about packages being "held back"? I fixed my issue by running
sudo apt-get install < packages-held-back >
which installs the packages and all of its dependencies. From what I've read, individually installing the packages is safer than running
sudo apt-get dist-upgrade
I also had an issue with openni not configuring but the following commands solved the issue
sudo apt-get remove libopenni0
sudo apt-get purge libopenni*
and finally, running
sudo apt-get install ros-kinetic-desktop-full
ran successfully.
Originally posted by klindgren with karma: 56 on 2016-12-28
This answer was ACCEPTED on the original site
Post score: 4
Original comments
Comment by youssef desouky on 2020-07-15:
this worked for me sudo apt-get dist-upgrade
thankyou | {
"domain": "robotics.stackexchange",
"id": 25778,
"tags": "ros, ros-kinetic, ubuntu, ubuntu-xenial"
} |
word2vec - log in the objective softmax function | Question: I'm reading a TensorFlow tutorial on Word2Vec models and got confused with the objective function. The base softmax function is the following:
$P(w_t\mid h) = \text{softmax}(\text{score}(w_t, h)) = \frac{\exp[\text{score}(w_t, h)]}{\sum_{w'} \exp[\text{score}(w',h)]}$, where $\text{score}$ computes the compatibility of word $w_t$ with the context $h$ (a dot product is commonly used). We train this model by maximizing its log-likelihood on the training set, i.e. by maximizing $J_{ML} = \log P(w_t\mid h) = \text{score}(w_t,h) - \log \bigl(\sum_{w'} \exp[\text{score}(w',h)]\bigr)$
But why did the $\log$ disappear from the $\text{score}(w_t,h)$ term?
Answer: No, the logarithm doesn't disappear. Starting from
$$J_{ML} = \log P(w_t\mid h) = \log\frac{\exp[\text{score}(w_t,h)]}{\sum_{w'}\exp[\text{score}(w',h)]},$$
the log of a quotient splits into a difference:
$$J_{ML} = \log\bigl(\exp[\text{score}(w_t,h)]\bigr) - \log\Bigl(\sum_{w'}\exp[\text{score}(w',h)]\Bigr).$$
Since $\log(\exp(x)) = x$, the first term reduces to $\text{score}(w_t,h)$, giving
$$J_{ML} = \text{score}(w_t,h) - \log\Bigl(\sum_{w'}\exp[\text{score}(w',h)]\Bigr).$$
So the logarithm is still applied; it simply cancels the exponential in the numerator.
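The cancellation is easy to confirm numerically; a small check (my addition):

```python
import math

def log_softmax(scores, i):
    # log P(w_i | h) = score_i - log(sum_j exp(score_j))
    return scores[i] - math.log(sum(math.exp(s) for s in scores))

scores = [1.0, 2.0, 0.5]
# direct evaluation: log of the softmax probability itself
direct = math.log(math.exp(scores[0]) / sum(math.exp(s) for s in scores))
assert abs(log_softmax(scores, 0) - direct) < 1e-12
```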
"domain": "datascience.stackexchange",
"id": 3672,
"tags": "nlp, word2vec, mathematics"
} |
The operator $A(L)= \{w \mid ww \in L\}$ | Question: Consider the operator $A(L)= \{w \mid ww \in L\}$. Apparently, the class of context-free languages is not closed under $A$. Still, after a lot of thinking, I can't find any CFL for which $A(L)$ wouldn't be a CFL.
Does anyone have an idea for such a language?
Answer: As an amplification of sdcvvc's hint: You can find a context-free language $L$ such that $A(L) = \{a^m b^m c^m \mid m\geq 0\}$, which as you probably know is not context-free.
So we want to find $L$ such that if $ww\in L$, then $w$ has the form $a^m b^m c^m$. We might as well have $L$ be a sublanguage of $a^*b^*c^*a^*b^*c^*$. Now we just need to put enough restrictions on the words in $L$ so that words of the form $ww$ in $L$ will have the form we require, while ensuring $L$ is still context-free.
Partial spoiler:
A word $a^{m_1} b^{m_2} c^{m_3} a^{m_4} b^{m_5} c^{m_6}$ has the form $ww$ if and only if $m_1 = m_4$, $m_2 = m_5$ and $m_3 = m_6$. Now we can add some restrictions on the $m_i$ for words in $L$ to force them to all be equal in words of the form $ww$. (Hint: we can require $m_1 = m_2$, but then we can't require $m_1 = m_3$ or $m_2 = m_3$, because then $L$ wouldn't be context-free anymore. This isn't a problem though, because we can make use of some of the equalities we already have between the $m_i$.) | {
"domain": "cs.stackexchange",
"id": 193,
"tags": "formal-languages, context-free, closure-properties"
} |
Number of qubits in quantum circuits in ibm qiskit | Question: How to find number of Qubits ?
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister
bob = QuantumRegister(8,'b')
alice = ClassicalRegister(2,'a')
eve = QuantumRegister(4,'e')
qc = QuantumCircuit(bob, alice, eve)
I know the syntax
QuantumCircuit(2, 2), i.e. with 2 parameters, but I am not sure about the 3-parameter form.
Answer: TL;DR: With QuantumCircuit.num_qubits:
>>> print(qc.num_qubits)
12
A list of them, via QuantumCircuit.qubits:
>>> print(qc.qubits)
[Qubit(QuantumRegister(8, 'b'), 0),
Qubit(QuantumRegister(8, 'b'), 1),
Qubit(QuantumRegister(8, 'b'), 2),
Qubit(QuantumRegister(8, 'b'), 3),
Qubit(QuantumRegister(8, 'b'), 4),
Qubit(QuantumRegister(8, 'b'), 5),
Qubit(QuantumRegister(8, 'b'), 6),
Qubit(QuantumRegister(8, 'b'), 7),
Qubit(QuantumRegister(4, 'e'), 0),
Qubit(QuantumRegister(4, 'e'), 1),
Qubit(QuantumRegister(4, 'e'), 2),
Qubit(QuantumRegister(4, 'e'), 3)]
The QuantumCircuit constructor documentation can be found here:
regs (list(Register) or list(int) or list(list(Bit))) – The registers to be included in the circuit.
This means that the registers bob, alice, and eve are included. Their lengths are 8, 2, and 4 respectively. However, alice is a classical register, which does not add to the number of qubits. So, $8+4=12$.
"domain": "quantumcomputing.stackexchange",
"id": 5081,
"tags": "qiskit, quantum-circuit, experimental-realization, bb84"
} |
How do charge carriers "know" how much voltage to use for work in a specific component? | Question: I understand the basic concepts of voltage pretty well, I think. However, one thing has been bugging me which I can’t seem to figure out.
How does a specific charge carrier "know" how much voltage to use as work in a specific component?
As an example, say a bulb of resistance $10\ \Omega$ and an EMF of $12\ V$. As expected, the bulb will "use up" all of that $12\ V$ and convert it to light and thermal energy. However, say another $5\ \Omega$ bulb is added in series. How does the charge carrier know not to use all $12\ V$ on the first $10\ \Omega$ bulb, but rather $8\ V$ to save $4\ V$ for the other one?
Or, as another example, with a potential divider, how do the charge carriers know how much voltage to "drop off" at the $100\ \Omega$ resistor (in the diagram below), before they reach the potentiometer?
I understand that voltage is a potential energy, and so it’s just a measure of where in the electric field it is, but does this mean adding a component or adjusting the potentiometer, etc. changes some property of the field? Or am I misunderstanding voltage entirely?
Answer: This is communicated to the current through surface charges on the surface of conductors and at the interface between conductors of different conductivities.
So, for example, consider a circuit consisting of a battery and a single resistor of low conductivity, connected by wires of high conductivity. At the interface between the positive wire and the resistor there are positive charges, and at the interface between the negative wire and the resistor there are negative charges. These positive and negative charges set up an E field within the resistor, which drives the current at each point in the resistor.
Now, add a second identical resistor, directly attached to the first. Since there is no change in conductivity there is no surface charge between the two, and now the surface charges from the positive and negative wires are further apart. This reduces the E field and thereby reduces the current compared to the previous circuit.
Other, more complicated circuits rapidly become harder to analyze, but the principle holds. It is the surface charges and their associated fields that communicate this information. This communication, of course, happens at the speed of light or slower, during the transient before the approximations of circuit theory become valid.
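The 8 V / 4 V split from the question is just Ohm's law applied to the whole series loop; a quick check (my addition):

```python
def series_voltages(emf, resistances):
    # one current flows through the loop; each resistor drops I * R_i
    current = emf / sum(resistances)
    return [current * r for r in resistances]

drops = series_voltages(12.0, [10.0, 5.0])
assert abs(drops[0] - 8.0) < 1e-9 and abs(drops[1] - 4.0) < 1e-9
```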
"domain": "physics.stackexchange",
"id": 98629,
"tags": "electric-circuits, electric-fields, electric-current, electrical-resistance, voltage"
} |
$F_{ST}$ when considering a multi-allelic locus | Question: Sewall Wright defined the $F_{ST}$ in a metapopulation as being:
$$F_{ST} = \frac{\text{Var}(p)}{\bar p (1-\bar p)}$$
, where $p$ is a vector of frequencies of a given allele and $\bar p$ and $\text{Var}(p)$ are the mean and variance of this vector.
For example, consider a metapopulation made of 4 subpopulations. The allele frequencies in these 4 subpopulations are $p=[0.2, 0.5, 0.8, 0.3]$. $\bar p$ is the mean of $p$ ($\bar p = 0.45$) and $\text{Var}(p)$ is the variance of $p$ ($\text{Var}(p)=0.07$, using the sample variance).
Question
Can we define $F_{ST}$ for a multiple allele locus? Or should $F_{ST}$ be defined for each allele independently?
Answer: Short answer: yes, people have formulated ways to estimate $F_{ST}$ for multiallelic loci, e.g. microsatellites. For a review, see here.
Specifically, Nei could define $F_{ST}$ for multiple alleles as
$F_{ST} = \frac{(H_t - H_s)}{H_t}$,
which is to say the proportion of total heterozygosity that is across rather than within populations. This is agnostic to the number of alleles, as it just uses heterozygosity. I believe there are other formulations as well.
There exist alternate statistics such as $G_{ST}$ and $R_{ST}$ (links under each) which are devised specifically for the multiallelic problem, see also the review for further discussion. Basically, microsatellites have high mutation rates such that they deflate the statistic; this will likely be the case for most multiallelic loci.
It's not clear though which statistic works best in practice as far as I can tell. | {
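Wright's single-allele formula from the question can be checked numerically; with the sample variance (ddof = 1) this reproduces the 0.07 in the text (my addition):

```python
def wright_fst(p, ddof=1):
    # F_ST = Var(p) / (pbar * (1 - pbar))
    n = len(p)
    mean = sum(p) / n
    var = sum((x - mean) ** 2 for x in p) / (n - ddof)
    return var / (mean * (1 - mean))

p = [0.2, 0.5, 0.8, 0.3]
assert abs(sum(p) / 4 - 0.45) < 1e-9          # pbar = 0.45
assert abs(wright_fst(p) - 0.07 / (0.45 * 0.55)) < 1e-9
```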
"domain": "biology.stackexchange",
"id": 10715,
"tags": "evolution, population-dynamics, population-biology, population-genetics, fst"
} |
What does a light wave look like? (3D model) | Question: What does a light wave look like?
The only models I can seem to find online are 2D waves, they just look like sin() graphs.
I have seen the models of the two components of "light waves" (electric field and magnetic field) and they are represented on a 3D Cartesian coordinate system, but they are still just two 2D waves.
Surely light isn't really FLAT like this is it?
I guess I have always assumed it pulses out in all directions, stronger then weaker, as it travels, giving off the shape that you see during a sonic boom:
I have drawn what I picture to be a 3D model of the electric field of a light wave as it travels from left to right:
(cone image from: http://www.presentation-process.com/images/3d-powerpoint-cone.jpg)
Is this an accurate representation of what the 3D radiation emitting from a light beam looks like, like a 3D wave? (Obviously it would be more wavey, using a 3D cone graphic to create this diagram caused the edges to look spiked and sharp, a better object to use would have been something like a bullet (3D parabola) but I'm not the best with photoshop).
Also, if this is a somewhat accurate model of a light wave pulsing in 3D, what does the magnetic wave look like in model form? Do they just overlap, with possibly a slightly larger or smaller amplitude but the same maxima and minima locations along t (the x-axis in my model)?
Answer: The first 2-D image you posted is a typical simplification for teaching purposes. In it, they use the height of the sine wave to represent magnitude, and the directions of the sine waves to show how the fields point relative to each other. The light itself however is not itself at all cone-like. You have to imagine this sine wave existing at multiple points in space, not localized in this cone-like volume. This can be difficult to visualize.
A common method for visualizing these kinds of fields is a 3-D vector field plot.
Length or colour will typically correspond to amplitude, and the arrows show direction of the electric field. The magnetic field is always perpendicular, and the amplitude is proportional, so there's little sense in plotting both together. This one I've included shows how the field actually permeates a volume.
Out of interest, here's a gif of dipole radiation. This is just a 2-D slice of the field, and doesn't show vector direction, but is a very good visualization of a more real-life kind of radiation. Colour corresponds to magnitude in this case. | {
"domain": "physics.stackexchange",
"id": 51936,
"tags": "quantum-mechanics, visible-light, waves, electromagnetic-radiation, wavefunction"
} |
Can a photon be emitted with a wavelength > 299,792,458 meters, and would this violate c? | Question: Just curious if the possibility exists (not necessarily spontaneously) for a photon with a wavelength greater than the distance component of c to be emitted, and would this inherently violate the scalar c?
Answer: $$c=\nu\lambda$$
The waves will still travel at $c$. Changing $\lambda$ changes $\nu$, not $c$.
If, in SI units, $c<\lambda$, then $\nu<1 \text{ Hz}$. These can exist, though we don't come across them often. Ultra-redshifted light coming from sources near a black hole have such frequencies (A source just entering the event horizon gets redshifted all the way to $\nu=0,\lambda=\infty$).
Update: As leftaroundabout pointed out, such low frequency waves are possible when dealing with the transmission of electromagnetic fields. So shaking a charged balloon or any small current can give such photons. | {
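A quick check of $c=\nu\lambda$ for such long wavelengths (my addition):

```python
c = 299_792_458.0  # speed of light, m/s

def frequency(wavelength_m):
    # c = nu * lambda: changing lambda changes nu, never c
    return c / wavelength_m

assert frequency(c) == 1.0       # lambda numerically equal to c gives nu = 1 Hz
assert frequency(2 * c) < 1.0    # lambda > 299,792,458 m means a sub-hertz wave
```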
"domain": "physics.stackexchange",
"id": 2359,
"tags": "quantum-mechanics, electromagnetic-radiation, speed-of-light, photons"
} |
Real PR2 plug and unplug procedure | Question:
Hello,
We now have a PR2 in our lab and we are cautiously reading the tutorials and learning how to use it the right way. But in our search, we were unable to find a clear information about the way to plug and unplug the PR2 to the mains. I read that it is recommended to let the robot plugged when not used, but what about the plug and unplug procedures ? Are there particular things to know about this point ?
Thanks for reading,
Originally posted by Erwan R. on ROS Answers with karma: 697 on 2013-09-20
Post score: 0
Answer:
From the PR2 Manual (http://pr2support.willowgarage.com/wiki/PR2%20Manual/Chapter3):
When plugging in the PR2, always attach the power cord to the AC inlet on the robot before attaching it to the AC outlet on the wall. Unplug the cord from the wall before unplugging from the robot.
Plugging a live AC cord into the back of the robot can cause arcing, which can deteriorate the power inlet on the PR2.
Originally posted by fergs with karma: 13902 on 2013-09-20
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by dornhege on 2013-09-21:
It should be noted that that's actually happened, so it's not just some theoretical safety advice.
Comment by Erwan R. on 2013-09-22:
Thanks for your answer and sharing your experience. I finally found this quotation in the manual after asking the question. I think It should be more visible on the online documentation (as it is a safety procedure and should not appear only there : http://pr2support.willowgarage.com/wiki/PR2%20Manual/Chapter3#What.27s_in_the_box) | {
"domain": "robotics.stackexchange",
"id": 15601,
"tags": "pr2"
} |
Tilt maze solution | Question: If one does not know tilt maze, the game looks like this. Basically one can travel in 4 directions (North, East, South and West), but the travel's endpoint is the furthest non-obstructed block. If the solution has multiple routes, then any single solution is returned, which need not be optimal.
Please verify complexity: O(row * col). I'm looking for code review, good practices, optimizations etc.
final class CoordinateTiltMaze {
private final int row;
private final int col;
CoordinateTiltMaze(int row, int col) {
this.row = row;
this.col = col;
}
public int getRow() {
return row;
}
public int getCol() {
return col;
}
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null) return false;
if (getClass() != o.getClass()) return false;
final CoordinateTiltMaze coordinateTiltMaze = (CoordinateTiltMaze)o;
return row == coordinateTiltMaze.row && col == coordinateTiltMaze.col;
}
@Override
public int hashCode() {
return row + col;
}
@Override
public String toString() {
return row + " : " + col;
}
}
public final class TiltMaze {
private boolean[][] maze;
static int ctr = 0;
public TiltMaze(boolean[][] maze) {
if (maze.length == 0) throw new IllegalArgumentException("The maze should have length.");
this.maze = maze;
}
/**
*
* Returns the path from source to destination.
* If the solution has multiple routes, then any single solution is returned, which need not be optimal. Please
* verify complexity - O(row * col). Looking for code review, good practices, optimizations etc.
*
*
* @param startRow
* The row index of the start position.
* @param startCol
* The col index of the start position
* @param endRow
* The row index of the end position
* @param endCol
* The column index of the end position
* @return The path from source to destination, or an empty set if none exists
* @throws IllegalArgumentException
* on invalid input.
*/
public Set<CoordinateTiltMaze> getPath (int startRow, int startCol, int endRow, int endCol) {
verify(startRow, startCol, endRow, endCol);
final Set<CoordinateTiltMaze> coordinateSet = new LinkedHashSet<CoordinateTiltMaze>();
if (processPoint(startRow, startCol, endRow, endCol, coordinateSet)) return coordinateSet;
return Collections.EMPTY_SET;
}
private void verify(int startRow, int startCol, int endRow, int endCol) {
if (rowOutOfBound(startRow)) throw new IllegalArgumentException("The startRow: " + startRow + " is out of bounds.");
if (columnOutOfBound(startCol)) throw new IllegalArgumentException("The startCol: " + startCol + " is out of bounds");
if (rowOutOfBound(endRow)) throw new IllegalArgumentException("The endRow: " + endRow + " is out of bounds.");
if (columnOutOfBound(endCol)) throw new IllegalArgumentException("The endCol: " + endCol + " is out of bounds");
}
private boolean rowOutOfBound(int startRow) {
return startRow < 0 || startRow >= maze.length;
}
private boolean columnOutOfBound(int startCol) {
return startCol < 0 || startCol >= maze[0].length;
}
private boolean processPoint (int row, int col, int endRow, int endCol, Set<CoordinateTiltMaze> coordinateSet) {
final CoordinateTiltMaze coord = new CoordinateTiltMaze(row, col);
if (row == endRow && col == endCol) {
coordinateSet.add(coord);
return true;
}
if (coordinateSet.contains(coord)) {
return false;
}
coordinateSet.add(coord);
for (CoordinateTiltMaze neighbor : getNeighbors(row, col)) {
if (processPoint (neighbor.getRow(), neighbor.getCol(), endRow, endCol, coordinateSet)) return true;
}
coordinateSet.remove(coord);
return false;
}
private List<CoordinateTiltMaze> getNeighbors(int row, int col) {
final List<CoordinateTiltMaze> nodes = new ArrayList<CoordinateTiltMaze>();
// north.
for (int i = row - 1; i >= -1; i--) {
if (isObstacle(i, col)) {
nodes.add(new CoordinateTiltMaze(i + 1, col));
break;
}
}
// east.
for (int i = col + 1; i <= maze[0].length; i++) {
if (isObstacle(row, i)) {
nodes.add(new CoordinateTiltMaze(row, i - 1));
break;
}
}
// south
for (int i = row + 1; i <= maze.length; i++) {
if (isObstacle(i, col)) {
nodes.add(new CoordinateTiltMaze(i - 1, col));
break;
}
}
// west
for (int i = col - 1; i >= -1; i--) {
if (isObstacle(row, i)) {
nodes.add(new CoordinateTiltMaze(row, i + 1));
break;
}
}
return nodes;
}
private boolean isObstacle(int newRow, int newCol) {
// in bounds of board.
if (newRow < 0 || newCol < 0 || newRow >= maze.length || newCol >= maze[0].length) {
return true;
}
// 1's in a matrix represent obstacles.
if (!maze[newRow][newCol]) {
return true;
}
return false;
}
public static void main(String[] args) {
boolean[][] m = {
{true, true, false, false},
{true, true, true, true},
{true, false, false, true}
};
TiltMaze tiltMaze = new TiltMaze(m);
for (CoordinateTiltMaze coordTiltMaze : tiltMaze.getPath(0, 0, 2, 3)) {
System.out.println(coordTiltMaze);
}
}
}
Answer: hashCode implementation
Your hashCode method leaves some things to be desired.
The more unique a hashCode is, the better. Even though your CoordinateTiltMaze class fulfills the equals & hashCode contract, if an object has the hashCode 9 it could mean that it is (5, 4), (4, 5), (3, 6), (1, 8) and so on...
A better implementation would be:
@Override
public int hashCode() {
final int prime = 31;
int result = 1;
result = prime * result + row;
result = prime * result + col;
return result;
}
This will make the resulting hashCode a lot more unique. In fact, none of the previous objects that had the same hashCode will now have a unique hashCode. And you will still fulfill the hashCode & equals contract.
(Hint: Your IDE almost certainly provides a way to automatically generate a good hashCode implementation, I used Eclipse to generate the above one)
Code duplication in getNeighbors
You have four sets of code in your getNeighbors method, one for each direction. I believe that you can get rid of some of this code duplication by using an Direction4 enum.
Naming
Your naming of CoordinateTiltMaze was a bit confusing for me for a moment. Is it a Maze for a CoordinateTilt? No! It's a Coordinate for a TiltMaze! I think TiltMazeCoordinate, MazeCoordinate, or simply Coordinate would be a better name. Or even Point. Because this class is not coupled to your TiltMaze at all, and this is good. It is totally re-usable in all of your projects where you are working with 2d coordinates. | {
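The four copied loops in getNeighbors collapse into one loop over direction deltas, which is the idea behind the suggested Direction4 enum. A Python sketch of the same logic (my addition, not a drop-in for the Java code; note it also excludes the no-move case):

```python
# (dr, dc) deltas replace the four hand-written North/East/South/West loops
DIRECTIONS = {"N": (-1, 0), "E": (0, 1), "S": (1, 0), "W": (0, -1)}

def neighbors(maze, row, col):
    # slide from (row, col) until the next cell is a wall or out of bounds
    rows, cols = len(maze), len(maze[0])
    result = []
    for dr, dc in DIRECTIONS.values():
        r, c = row, col
        while 0 <= r + dr < rows and 0 <= c + dc < cols and maze[r + dr][c + dc]:
            r, c = r + dr, c + dc
        if (r, c) != (row, col):
            result.append((r, c))
    return result

# same maze as the main() method above
maze = [[True, True, False, False],
        [True, True, True,  True],
        [True, False, False, True]]
assert set(neighbors(maze, 0, 0)) == {(0, 1), (2, 0)}
```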
"domain": "codereview.stackexchange",
"id": 6024,
"tags": "java, algorithm, pathfinding"
} |
Is {⟨M⟩ : L(M) ∈ NP} ∈ NP? | Question: Intuitively I think the answer is no, since I don't think every certificate can be checked in polynomial time, but I don't know how to give a formal proof. Is the statement true? Why or why not?
Answer: Your language is not even decidable.
Given a Turing machine $T$, define $M_T$ as a Turing machine that simulates $T$ on an empty input, and then decides any $\mathsf{EXPSPACE}$-complete language $L_C$ of your choice (notice that the function $f(T) = M_T$ is computable).
Notice that $L(M_T) = L_C$ if $T$ halts (where $L_C \not\in \mathsf{NP}$), and $L(M_T) = \emptyset$ otherwise.
Suppose now that there exists a Turing machine $M^*$ that decides $L$.
Then, you could design a Turing machine that solves the halting problem. When given input $T$, this Turing machine behaves as follows: it computes $M_T$; it simulates $M^*$ with input $M_T$; it rejects if $M^*$ accepts, otherwise it accepts.
Notice that if $M^*$ accepts then you must have $L(M_T) = \emptyset$, showing that $T$ does not halt.
On the contrary, if $M^*$ rejects, then $L(M_T) \not\in \mathsf{NP}$, implying that $L(M_T)=L_C$ and hence $T$ halts.
"domain": "cs.stackexchange",
"id": 16641,
"tags": "formal-languages, turing-machines"
} |
Laser Cutter Power/Wavelength to Cut Paper | Question: I've been reading up on laser cutting and it seems that 808nm at around 2 watts is typical to cut paper. How would one calculate the wavelength and power required to cut an arbitrary material?
Answer: Here are some of the relevant bits of physics/questions to ask:
To cut a material, it needs to absorb heat faster than it can lose it.
heat is conducted away: this is typically linear with temperature gradient
heat can be radiated away: this is more important at higher temperatures (follows $T^4$ relationship)
laser power may be reflected or absorbed: the right wavelength will be the one that is "mostly absorbed" and will depend on the material.
the mechanism for "cutting" matters: is it melting, burning, or enhancing a chemical reaction (etching, oxidation)
does the cut have to be very small (narrow)?
does the surrounding material deform if it gets hot?
how many meters of material do you want to cut per unit time?
Based on the above considerations it is hard to give a general "formula" for the power and wavelength. I would say that higher power means you can cut faster - which in turn means that you can make a narrower cut without heating up the surrounding material (it didn't have time to heat up). This is also a cleaner cut. As for the wavelength - typically you will use a laser that can efficiently generate a lot of power (CO2 laser can have efficiency up to 20% but a longer wavelength, around 10 µm), and that is well absorbed by the material. You also want to be able to focus it to a small spot - both to increase the power density and to make a fine cut. The longer the wavelength, the harder it is to focus something because of diffraction... | {
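As a rough illustration of the $T^4$ radiation term above (my numbers are assumptions, not from the answer): even at paper-charring temperatures, a millimetre-scale spot radiates far less than a couple of watts, so a ~2 W beam can deposit heat faster than it is lost radiatively.

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiated_power(area_m2, temperature_K):
    # blackbody radiative loss P = sigma * A * T^4
    return SIGMA * area_m2 * temperature_K ** 4

# hypothetical 1 mm^2 spot at ~700 K: only ~0.014 W radiated
p = radiated_power(1e-6, 700.0)
assert 0.0 < p < 2.0
```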
"domain": "physics.stackexchange",
"id": 41079,
"tags": "laser"
} |