anchor | positive | source |
|---|---|---|
Unexpected jamming of a misaligned assembly | Question: I have a small assembly with several parts misaligned. By turning the grey wheel, the grey block is to rotate about its axis (Axis 1). The problem I have is that the grey wheel can't be turned further anti-clockwise given the current position. It seems to be jammed.
The red connecting rod can freely slide in the yellow bore, which itself can rotate with the rod in the grey bore. I'd expect the red rod to extend further and turn the yellow connector.
Does anyone know why the wheel might be jammed or how it could be modified to allow continuous rotation?
Answer: It isn't clear what is constraining the motion of the gray ring (blue in the video), but it seems to be rotating on its own axis. If so, the mechanism jams because the pin at the end of the red rod hits the end of the slot in the orange L-bracket. Regardless of the position of the yellow piece WRT the gray bar, there is a fixed maximum distance between the yellow pin and the red pin. | {
"domain": "engineering.stackexchange",
"id": 714,
"tags": "mechanical-engineering"
} |
Fusing IMU sensor with odometry | Question: Would fusing Odometry estimation with IMU sensor increase the accuracy of estimation for planar differential drive robots?
Answer: Yes.
An example of this can be found here.
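A minimal sketch of what such a fusion buys you, using a simple complementary filter (an EKF is the more common tool in practice; every rate, bias, and jitter value below is invented purely for illustration):

```python
import math

# Complementary filter: trust the gyro over short time scales (smooth, but
# its bias makes pure integration drift) and the odometry heading over long
# time scales (drift-free here, but jittery from wheel slip).
dt, alpha = 0.01, 0.98
omega_true = 0.5            # true turn rate (rad/s)
gyro_bias = 0.01            # constant gyro bias (rad/s)

theta_true = theta_fused = theta_gyro_only = 0.0
for k in range(1000):                       # 10 s of simulated motion
    theta_true += omega_true * dt
    gyro_rate = omega_true + gyro_bias      # biased gyro measurement
    odom = theta_true + 0.05 * math.sin(2 * math.pi * 5 * k * dt)  # jittery odometry heading
    theta_gyro_only += gyro_rate * dt       # gyro-only dead reckoning drifts
    # blend: gyro integration at high frequency, odometry at low frequency
    theta_fused = alpha * (theta_fused + gyro_rate * dt) + (1 - alpha) * odom
```

The fused estimate ends up far closer to the true heading than either dead-reckoning alone, which is the sense in which the answer's "Yes" holds.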
Depending on how good your modeling is, you could also use the IMU to help detect wheel slippage. | {
"domain": "robotics.stackexchange",
"id": 2070,
"tags": "odometry, sensor-fusion, estimation"
} |
ROS Answers SE migration: keras for ROS | Question:
Hi!
I want to deploy and use on ROS (Melodic or Noetic) a simple deep learning network I developed in Python (in the Spyder environment).
Do you have any examples or references for doing this?
Everything I saw is about TensorFlow and is very complex.
Thanks a lot.
Ben
Originally posted by bengao on ROS Answers with karma: 1 on 2021-04-09
Post score: 0
Original comments
Comment by Ranjit Kathiriya on 2021-04-09:
Don't think about Keras, TensorFlow, OpenCV, point clouds, or any of that. The most important thing you should think about is the data and how you will send it to ROS or your robot.
Answer:
Hello Ben,
I want to deploy and use on ROS (Melodic or Noetic) a simple deep learning network I developed in Python (in the Spyder environment).
There are a few steps:
Keep your Keras model in a file that runs as a ROS node, then check the form of the data and publish it.
After publishing, you can use that data however you want.
Just for example (a drone with hand signs): I have hand-sign data based on a video stream (right, left, up, and down), and I have trained a Keras model on it. I would suggest making the full model a single node and publishing its output. After publishing, let's say I subscribe and receive a string,
and based on that particular string I can decide the movements and send them to the drone via tf, and it performs the action.
In place of a string, you can use any message type, such as Image, a transformation, a coordinate, or a point cloud, or you can even create your own custom message.
Does this make sense? This is just an overview; if you want an in-depth understanding, write a comment.
Originally posted by Ranjit Kathiriya with karma: 1622 on 2021-04-09
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by bengao on 2021-04-09:
Hi Karma!
I think I understand this. It's the first time I will use it.
I will try, and if necessary I will come back to complete my problem.
Thanks a lot!
Comment by Ranjit Kathiriya on 2021-04-12:
Hello @bengao,
If you think you got your answer, could you tick this as the answer to help others?
Yes! You can drop comments if you have any trouble implementing this. If you feel this answer is good, please vote it up. Thanks! | {
"domain": "robotics.stackexchange",
"id": 36300,
"tags": "ros-melodic"
} |
Chemical Formula Parser | Question: As an exercise, I wrote a simple context free parser modeled from a pushdown automaton for chemical formulas such as NO2 or Fe(NO3)3.
To explain a bit about the decisions made:
The state functions (pstart, pchem and pnum) are lambdas because they carry around a lot of context. They could be methods of a class, it doesn't really matter.
The state functions have the input character injected because I realized halfway that I have to process some more after the line ended and I don't want to duplicate the state functions.
I am mostly concerned about the general messiness of the parser. There is no obvious way to know where variables are mutated, control flow is all over the place and consequently I find it difficult to reason about the program. Is there a better code structure?
Finally, I'd like to know if I caught all illegal syntax. Note repeating chemicals like NN is allowed.
#include<iostream>
#include<stack>
#include<unordered_map>
#include<string>
#include<iterator>
#include<cctype> // for std::isupper, std::islower, std::isdigit
using namespace std::literals;
enum class state_t { start, num, chem, error };
double parse(std::string line, const std::unordered_map<std::string, double>& m)
{
auto b = line.begin(), e = line.end();
std::stack<double> stk;
int num = 0;
std::string chem;
state_t state = state_t::start;
auto pstart = [&](char c) {
if(std::isupper(c))
{
chem = ""s + c;
state = state_t::chem;
return true;
}
else if(std::isdigit(c))
{
if(stk.empty())
state = state_t::error;
else
{
num = c - '0';
state = state_t::num;
return true;
}
}
else if(c == '(')
{
stk.push(-1);
return true;
}
else if(c == ')')
{
double sum = 0;
while(!stk.empty() && stk.top() > 0)
{
sum += stk.top();
stk.pop();
}
if(stk.empty())
state = state_t::error;
else
{
stk.pop();
stk.push(sum);
return true;
}
}
else
state = state_t::error;
return false;
};
auto pnum = [&](char c) {
if(std::isdigit(c))
{
num = 10 * num + c - '0';
return true;
}
else
{
stk.top() *= num;
state = state_t::start;
}
return false;
};
auto pchem = [&](char c){
if(std::islower(c))
{
chem += c;
return true;
}
else
{
if(auto it = m.find(chem); it != m.end())
{
stk.push(it->second);
state = state_t::start;
}
else
state = state_t::error;
}
return false;
};
while(b != e)
{
switch(state)
{
case state_t::start:
if(pstart(*b))
b++;
break;
case state_t::num:
if(pnum(*b))
b++;
break;
case state_t::chem:
if(pchem(*b))
b++;
break;
default:
return -1;
}
}
switch(state)
{
case state_t::num:
pnum('\n');
break;
case state_t::chem:
pchem('\n');
break;
}
if(state == state_t::error)
return -1;
double sum = 0;
while(!stk.empty() && stk.top() > 0)
{
sum += stk.top();
stk.pop();
}
if(stk.size() > 0) // expected ')'
return -1;
return sum;
}
int main()
{
std::unordered_map<std::string, double> m;
m["Na"] = 23.5;
m["Cl"] = 35.5;
m["O"] = 16;
m["N"] = 14;
m["Fe"] = 55.8;
std::string line;
while(getline(std::cin, line))
std::cout << parse(std::move(line), m) << '\n';
}
Answer: Using a state machine is fine, but it is usually harder to get right than writing the parser with a focus on the grammar. You're also mixing a computation with the parsing, which adds more information to consider when analyzing the code.
I would recommend separating the parser code from the computation, and then writing the parser to strictly follow the grammar you want to parse. I'll try to illustrate what I mean by giving a simplified version of your parser. Say you have a grammar:
formula = *(group)
group = element [ count ]
element = uppercase [ lowercase ]
count = "0" .. "9"
You can now give a function for each non-terminal:
// formula = *(group) ; * becomes while (...) ...
std::list<group> parse_formula(std::stringstream& s)
{
std::list<group> rv;
while (!s.eof())
{
rv.push_back(parse_group(s));
}
return rv;
}
// group = element [ count ]
group parse_group(std::stringstream& s)
{
group g;
g.element = parse_element(s);
try
{
g.count = parse_count(s);
}
catch (const parse_failed&)
{
g.count = 1;
}
return g;
}
// element = uppercase [lowercase]
std::string parse_element(std::stringstream& s)
{
if (!std::isupper(s.peek()))
{
throw parse_failed(...);
}
std::string element(1, static_cast<char>(s.get()));
if (std::islower(s.peek()))
{
element += static_cast<char>(s.get());
}
return element;
}
// count = [0-9]
unsigned parse_count(std::stringstream& s)
{
if (!std::isdigit(s.peek()))
{
throw parse_failed(...);
}
unsigned rv;
s >> rv; // this actually violates the grammar as it reads multiple digits
return rv;
}
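The same one-function-per-grammar-rule structure can also be sketched in Python (a toy version for comparison, extended to cover the nested parentheses and multi-digit counts of the original question; the atomic masses are illustrative):

```python
# Each function implements one grammar rule and returns (value, next_index).
MASS = {"Na": 23.0, "Cl": 35.5, "O": 16.0, "N": 14.0, "Fe": 55.8, "H": 1.0}

def parse(formula):
    mass, i = parse_sequence(formula, 0)
    if i != len(formula):
        raise ValueError(f"unexpected character at position {i}")
    return mass

def parse_sequence(s, i):
    """sequence = *( group ), stopping at ')' or end of string."""
    total = 0.0
    while i < len(s) and s[i] != ")":
        m, i = parse_group(s, i)
        total += m
    return total, i

def parse_group(s, i):
    """group = ( element | "(" sequence ")" ) [ count ]"""
    if s[i] == "(":
        m, i = parse_sequence(s, i + 1)
        if i >= len(s) or s[i] != ")":
            raise ValueError("expected ')'")
        i += 1
    else:
        m, i = parse_element(s, i)
    if i < len(s) and s[i].isdigit():
        n, i = parse_count(s, i)
        m *= n
    return m, i

def parse_element(s, i):
    """element = uppercase *( lowercase )"""
    if not s[i].isupper():
        raise ValueError(f"expected element at position {i}")
    j = i + 1
    while j < len(s) and s[j].islower():
        j += 1
    name = s[i:j]
    if name not in MASS:
        raise ValueError(f"unknown element {name!r}")
    return MASS[name], j

def parse_count(s, i):
    """count = 1*digit"""
    j = i
    while j < len(s) and s[j].isdigit():
        j += 1
    return int(s[i:j]), j
```

Note how the sum is computed on the fly here; the C++ version above instead builds a list of groups first, which keeps parsing and computation separate.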
You can then iterate over the list of groups and compute the sum. | {
"domain": "codereview.stackexchange",
"id": 33359,
"tags": "c++, parsing"
} |
Why does the speed of sound decrease at high altitudes although the air density decreases? | Question: I understand that the speed of sound is inversely proportional to the density of the medium as shown here and as answered for this question.
The problem now is that the speed of sound in air actually decreases with altitude although the density of the air decreases. This is shown here and here.
I understand that the speed of sound also depends on the elasticity, but I'm not sure how this can change for air.
So what is actually happening? How can the speed of sound decrease although the density has also decreased?
Answer: Wikipedia gives a pretty much straightforward answer. In an ideal gas, the speed of sound depends only on the temperature:
$$ v = \sqrt{\frac{\gamma \cdot k \cdot T}{m}} $$
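Plugging in numbers makes the point concrete: density never enters, only temperature. (Values below assume dry air, $\gamma \approx 1.4$ and a mean molar mass of about 28.96 g/mol.)

```python
import math

gamma = 1.4                       # adiabatic index of (mostly diatomic) air
k = 1.380649e-23                  # Boltzmann constant, J/K
m = 28.96e-3 / 6.02214076e23      # mean mass of one air molecule, kg

def speed_of_sound(T):
    """Speed of sound in an ideal gas at temperature T (kelvin)."""
    return math.sqrt(gamma * k * T / m)

v_sea = speed_of_sound(288.15)    # standard sea-level temperature -> ~340 m/s
v_alt = speed_of_sound(216.65)    # temperature near 11 km altitude -> ~295 m/s
```

The drop from roughly 340 m/s to 295 m/s mirrors the decrease reported in the question, driven entirely by the lower temperature aloft.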
So it neither decreases nor increases with altitude per se; it just follows the air temperature, as can be seen in this graph: | {
"domain": "physics.stackexchange",
"id": 61209,
"tags": "fluid-dynamics, waves, acoustics, density, air"
} |
Sign conventions | Question: I had my first class of Applied Physics for Electrical engineering. And I am stuck in something very basic. The sign conventions. They are pretty confusing. The teacher said the position of origin din't matter. But some basic calculation I ran oppose that idea. I am attaching a picture. Please do tell me if something is technically wrong or am I too dumb?
Answer: The displacement in your first example should be negative, since you took origin to be above the final point and the acceleration to be negative going downwards. | {
"domain": "physics.stackexchange",
"id": 34014,
"tags": "kinematics, reference-frames, conventions, coordinate-systems"
} |
How do I use dimensional analysis to find the ratio of potentials at the center and corner of a uniformly charged cube? | Question: The problem goes like this, from Purcell's electromagnetism:
Consider a charge distribution that has the constant density $ρ$ everywhere inside a cube of edge $b$ and is zero everywhere outside that
cube. Letting the electric potential $φ$ be zero at infinite distance
from the cube of charge, denote by $φ_0$ the potential at the center of
the cube and by $φ_1$ the potential at a corner of the cube. Determine
the ratio $φ_0/φ_1$.
In solving it, the author used dimensional analysis to argue that the potential at the center has to be directly proportional to the total charge divided by the side length of the cube, and proceeded from there. I don't know why this is true. Can someone explain his insight and tell me how to use dimensional analysis to solve such problems in general?
Answer: The electric potential due to a point charge $q$ at the origin is (in SI units)
$$
\phi_\text{point}(r) = \frac{1}{4\pi\epsilon_0} \frac{q}{r}
$$
This tells us that the product $\epsilon_0\phi$ must always have the same units as a charge per unit length. Which charge, and which length, are going to depend on the details of the integral you do over the charge distribution. The integral also might give you some stupid high-algebra factor like $\frac{2^5}{7}$ that depends on your geometry. But the dimensionful part of your integral is always going to come directly from the dimensionful part of your problem.
In this case you only have two parameters:
charge density $\rho$, with units $\text{charge}/\text{length}^3$
cube size $b$, a length
There is exactly one way to multiply these parameters to get charge per unit length, so every expression for a potential in this problem will be proportional to $\rho b^2$.
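A crude numerical check makes the scaling concrete: with $\rho$ fixed, doubling $b$ must multiply every potential in the problem by 4. A midpoint-rule sum over the cube (with $\rho$ and $1/4\pi\epsilon_0$ set to 1 for illustration) shows exactly that, since every term scales as $(2h)^3/(2r) = 4\,h^3/r$:

```python
import math

def center_potential(b, n=8):
    """Crude midpoint-rule estimate of the potential at the center of a
    uniformly charged cube of side b, in units where rho/(4*pi*eps0) = 1."""
    h = b / n
    total = 0.0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                x = -b / 2 + (i + 0.5) * h
                y = -b / 2 + (j + 0.5) * h
                z = -b / 2 + (k + 0.5) * h
                total += h**3 / math.sqrt(x * x + y * y + z * z)
    return total
```

The same $b^2$ scaling holds for the corner potential, which is what makes the ratio $\varphi_0/\varphi_1$ a pure number independent of $b$ and $\rho$.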
Beware that older E&M texts may use CGS units, rather than SI units. The beginner’s guide to CGS units for electromagnetism is that $4\pi\epsilon_0 \equiv 1$ is dimensionless. The advanced approach to CGS units is to replace $\epsilon_0$ with an expression involving the fine-structure constant. That’s kind of a rabbit hole; tread carefully. | {
"domain": "physics.stackexchange",
"id": 83298,
"tags": "electromagnetism, dimensional-analysis, linear-systems"
} |
navigation without a map? | Question:
Hi
I am trying to implement the navigation stack on a custom robot without a priori static map.
Can anyone give me a guideline to do this?
I have set the static_map field to false in the Global Costmap Configuration file.
If I understand the tutorial correctly, I have to include a map node in my launch file, but I don't have one...
I am new to ROS and would appreciate any help I can get :)
Originally posted by gertr on ROS Answers with karma: 31 on 2011-09-20
Post score: 2
Original comments
Comment by gertr on 2011-09-20:
I have tried to create an empty *.pgm file and pointing the launchfile to it, but this results in my map-server-1 process dying...
Comment by 2ROS0 on 2014-07-31:
Do you also have the corresponding *.yaml ? (that might be the reason for it dying)
But I want to know: what is the use of having a separate map if it is empty?
Answer:
A map is not necessary for running the navigation stack. All you need is a costmap, which can be created by costmap_2d. You can take a look at erratic_2dnav_local for an example configuration.
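For concreteness, a global costmap that works without a static map is typically configured as a rolling window centered on the robot. A sketch along these lines (parameter names from costmap_2d; the frame names and values are illustrative and depend on your robot):

```yaml
global_costmap:
  global_frame: odom          # no map frame exists without a static map
  robot_base_frame: base_link
  static_map: false
  rolling_window: true        # costmap window follows the robot
  width: 10.0                 # size of the rolling window, meters
  height: 10.0
  resolution: 0.05
```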
Originally posted by Dan Lazewatsky with karma: 9115 on 2011-09-20
This answer was ACCEPTED on the original site
Post score: 5 | {
"domain": "robotics.stackexchange",
"id": 6728,
"tags": "navigation, mapping, costmap, costmap-2d"
} |
Pre-envelope of ${\Pi}_{a}(t)\cos(2{\pi}f_{0}t)$ | Question: I want to find the pre-envelope of
$$x(t) = {\Pi}_{a}(t)\cos(2{\pi}f_{0}t)$$
where
I found the Fourier transform to be
$$ \begin{align} X(f) & \triangleq \mathfrak{F}[{\Pi}_{a}(t)\cos(2{\pi}f_{0}t)] \\ & = a\text{ sinc}(2a(f-f_{0})) + a\text{ sinc}(2a(f+f_{0})) \\ \end{align} $$
So $$X_{+}(f) = 2u(f)X(f)$$
But I don't know how to continue from here.
Is there an easier way to find the pre-envelope using another method?
Answer: The pre-envelope is also called analytic signal. Its Fourier transform is indeed given by the expression in your question:
$$X_+(f)=2X(f)u(f)\tag{1}$$
where $X(f)$ is the Fourier transform of the original signal, and $u(f)$ is the unit step function. Obviously, $X_+(f)$ has only positive frequency components. The analytic signal $x_+(t)$ with its Fourier transform given by $(1)$ is necessarily a complex-valued signal:
$$x_+(t)=x(t)+j\mathcal{H}\{x(t)\}\tag{2}$$
where $\mathcal{H}\{\cdot\}$ denotes the Hilbert transform. Note that the analytic signal is not the same as the complex envelope. For a band pass signal $x(t)$, the complex envelope is a low pass signal, whereas the analytic signal is a band pass signal.
From $(2)$, in order to compute the analytic signal you need to compute the Hilbert transform of $x(t)$:
$$\hat{x}(t)=\mathcal{H}\{x(t)\}=\frac{1}{\pi}\int_{-\infty}^{\infty}\frac{x(\tau)}{t-\tau}d\tau\tag{3}$$
For the given signal you get
$$\hat{x}(t)=\frac{1}{\pi}\int_{-a}^a\frac{\cos(2\pi f_0\tau)}{t-\tau}d\tau\tag{4}$$
The result of $(4)$ can be written in terms of the cosine integral $\text{Ci}(x)$ and the sine integral $\text{Si}(x)$:
$$\hat{x}(t)=\frac{1}{\pi}\left\{\cos(2\pi f_0t)\left[\text{Ci}(2\pi f_0(t+a))-\text{Ci}(2\pi f_0(t-a))\right]+\\\sin(2\pi f_0t)\left[\text{Si}(2\pi f_0(t+a))-\text{Si}(2\pi f_0(t-a))\right]\right\}\tag{5}$$
I don't think that anybody expected you to come up with that awful expression. Anyway, for large values of $f_0$, the first term in $(5)$ becomes very small, and the second term converges to $\sin(2\pi f_0t)\Pi_a(t)$. So for large $f_0$ you get the expected result
$$x_+(t)\approx x(t)+j\sin(2\pi f_0t)\Pi_a(t)=\Pi_a(t)e^{j2\pi f_0 t}\tag{6}$$
Obviously, the representation
$$x(t)=\text{Re}\left\{\Pi_a(t)e^{j2\pi f_0 t}\right\}\tag{7}$$
is always valid, but the complex-valued signal $\Pi_a(t)e^{j2\pi f_0 t}$ is not an analytic signal; it's just a good approximation of the analytic signal for large values of $f_0$.
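Numerically, the pre-envelope is usually computed exactly via $(1)$: take the DFT, keep the DC and Nyquist bins, double the positive-frequency bins, zero the negative ones, and transform back (this is what e.g. `scipy.signal.hilbert` does via the FFT). A stdlib-only sketch with a naive $O(N^2)$ DFT, using illustrative values of $N$, $f_0$ and $a$:

```python
import cmath
import math

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def pre_envelope(x):
    """Analytic signal via X+(f) = 2 X(f) u(f): keep DC and Nyquist bins,
    double the positive-frequency bins, zero the negative ones (N even)."""
    N = len(x)
    X = dft(x)
    H = [1.0] + [2.0] * (N // 2 - 1) + [1.0] + [0.0] * (N // 2 - 1)
    return idft([h * v for h, v in zip(H, X)])

N, f0, a = 128, 16.0, 0.25   # samples; carrier cycles per unit window; pulse half-width
t = [n / N - 0.5 for n in range(N)]
x = [math.cos(2 * math.pi * f0 * tt) if abs(tt) <= a else 0.0 for tt in t]
xp = pre_envelope(x)
# Re{x+(t)} = x(t) holds by construction; Im{x+(t)} approximates the
# (windowed, periodic) Hilbert transform of x
err = max(abs(v.real - u) for v, u in zip(xp, x))
```

The construction recovers the real part exactly; the imaginary part shows the Gibbs-type edge behavior hidden in the Ci/Si terms of $(5)$.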
Note that for $x(t)=m(t)\cos(2\pi f_0t)$ with $m(t)$ a band-limited function, i.e. $M(f)=0$ for $|f|>B$, the complex-valued signal $m(t)e^{j2\pi f_0 t}$ is an analytic signal, as long as $f_0>B$. The problem with the function given in your question is that $\Pi_a(t)$ is not band-limited. | {
"domain": "dsp.stackexchange",
"id": 3471,
"tags": "hilbert-transform"
} |
Abuse/Misuse of C# BackgroundWorker? | Question: I have finished a program, and it does what I want it to do, but I feel I am "doing it wrong", even though it's seemingly efficient enough. I have prepared a small example of what I feel I am handling wrong with the backgroundworker class and would like to see if I could have handled this any more cleanly.
First, I have a form with 1 button, 1 statusstrip, 1 toolstripprogressbar, and 1 toolstriplabel. I update the statusstrip items while the backgroundworker is running, as well as show messageboxes across classes. See below.
Form1.cs
namespace StatusStripTest
{
public partial class Form1 : Form
{
public Form1()
{
InitializeComponent();
}
private void prepareToRun()
{
if (!backgroundWorker1.IsBusy)
{
backgroundWorker1 = new BackgroundWorker();
backgroundWorker1.DoWork += delegate
{
Looping.loop(statusStrip1);
};
backgroundWorker1.RunWorkerAsync();
}
}
private void button1_Click(object sender, EventArgs e)
{
prepareToRun();
}
}
}
Looping.cs
namespace StatusStripTest
{
public class Looping
{
public static void loop(StatusStrip strip)
{
Form1 Form1 = new Form1();
string alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz";
while (true)
{
MessageBox.Show("Time to loop");
foreach (char c in alphabet)
{
StripHandler.UpdateProgress(strip, alphabet.IndexOf(c), alphabet.Length);
StripHandler.UpdateStatus(strip, c.ToString());
}
}
}
}
}
StripHandler.cs
namespace StatusStripTest
{
public class StripHandler
{
public static void UpdateStatus(StatusStrip ss, String Status)
{
ss.Invoke((MethodInvoker)delegate
{
if ((ss.Items[1] as ToolStripStatusLabel) != null)
{
ss.Items[1].Text = Status;
}
});
}
public static void UpdateProgress(StatusStrip ss, long position, long len)
{
ss.Invoke((MethodInvoker)delegate
{
if ((ss.Items[0] as ToolStripProgressBar) != null)
{
ToolStripProgressBar tspb = (ToolStripProgressBar)ss.Items[0];
tspb.Minimum = 0;
tspb.Maximum = (int)len;
tspb.Value = (int)position;
}
});
}
}
}
Am I passing the statusstrip incorrectly?
Answer: Here is a technique to do everything your three classes do in one file and one class. Konrad's answer makes good points, but the code below shows how to accomplish your tasks.
Since you have no need to change your BackgroundWorker's event handlers, code them up in the Form's constructor:
private const string alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz";
public Form1() {
InitializeComponent();
backgroundWorker1 = new BackgroundWorker();
backgroundWorker1.WorkerReportsProgress = true;
backgroundWorker1.WorkerSupportsCancellation = true;
backgroundWorker1.DoWork += new DoWorkEventHandler(backgroundWorker_DoWork);
backgroundWorker1.ProgressChanged += new ProgressChangedEventHandler(backgroundWorker_ProgressChanged);
}
I also defined alphabet as a constant, since it will never change.
Now, you need to know what backgroundWorker_DoWork and backgroundWorker_ProgressChanged are.
This version is closest to the code you posted, using a foreach loop on the alphabet string. Every time you access the next character, that character's index has to be extracted and the length of the fixed array has to be determined again:
private void backgroundWorker_DoWork_obsolete(object sender, DoWorkEventArgs e) {
var obj = (BackgroundWorker)sender;
while (!obj.CancellationPending) {
foreach (char c in alphabet) {
float calc = ((float)alphabet.IndexOf(c) / alphabet.Length) * 100;
obj.ReportProgress(Convert.ToInt32(calc));
}
}
}
The modified version below is going to create the string using a value that is supplied as the argument (so this code can now be used for other string values). Since your code basically just walks down the length of the string of characters, I modified this version to use a more efficient indexed for loop (more efficient because it does not have to extract the index or calculate the string length every time):
private void backgroundWorker_DoWork(object sender, DoWorkEventArgs e) {
var obj = (BackgroundWorker)sender;
string alphas = e.Argument.ToString();
int len = alphas.Length;
while (!obj.CancellationPending) {
for (int i = 0; i < len; i++) {
float calc = ((float)i / len) * 100;
obj.ReportProgress(Convert.ToInt32(calc));
}
}
}
Notice in both examples that I use while (!obj.CancellationPending) instead of while(true). Now, if you wanted to include a Cancel button on your form, you could simply wire it up like this:
private void cancel_Click(object sender, EventArgs e) {
backgroundWorker1.CancelAsync();
}
This would halt your BackgroundWorker loop.
Now, the backgroundWorker_ProgressChanged routine is simple:
private void backgroundWorker_ProgressChanged(object sender, ProgressChangedEventArgs e) {
toolstripprogressbar.Value = e.ProgressPercentage;
}
To execute this code, just be sure to initialize your toolstripprogressbar before calling RunWorkerAsync.
private void prepareToRun() {
if (!backgroundWorker1.IsBusy) {
toolstripprogressbar.Minimum = 0;
toolstripprogressbar.Value = 0;
toolstripprogressbar.Maximum = alphabet.Length;
backgroundWorker1.RunWorkerAsync(alphabet);
}
}
private void button1_Click(object sender, EventArgs e) {
prepareToRun();
}
A few extra notes:
The BackgroundWorker.ReportProgress is overloaded to accept an object userState and the BackgroundWorker.ProgressChangedEventArgs contains this UserState object. So, if you wanted to pass the actual item to some Label control called lblProgress, that is possible:
private void backgroundWorker_DoWork2(object sender, DoWorkEventArgs e) {
var obj = (BackgroundWorker)sender;
string alphas = e.Argument.ToString();
int len = alphas.Length;
while (!obj.CancellationPending) {
foreach (char c in alphas) {
float calc = ((float)alphas.IndexOf(c) / len) * 100;
obj.ReportProgress(Convert.ToInt32(calc), c);
}
}
}
private void backgroundWorker_ProgressChanged2(object sender, ProgressChangedEventArgs e) {
toolstripprogressbar.Value = e.ProgressPercentage;
lblProgress.Text = e.UserState.ToString();
}
If you wanted to get fancy, wire up the BackgroundWorker.RunWorkerCompletedEventHandler:
backgroundWorker1.RunWorkerCompleted += new RunWorkerCompletedEventHandler(backgroundWorker_RunWorkerCompleted);
...add this little piece of code to hide your Progress Bar and enable your Cancel Button:
private void backgroundWorker_RunWorkerCompleted(object sender, RunWorkerCompletedEventArgs e) {
toolstripprogressbar.Visible = false;
btnCancel.Enabled = true;
}
...and modify your prepareToRun() statement:
private void prepareToRun() {
if (!backgroundWorker1.IsBusy) {
toolstripprogressbar.Minimum = 0;
toolstripprogressbar.Value = 0;
toolstripprogressbar.Maximum = alphabet.Length;
backgroundWorker1.RunWorkerAsync(alphabet);
if (backgroundWorker1.IsBusy) {
toolstripprogressbar.Visible = true;
btnCancel.Enabled = false;
}
}
}
Can you tell I have used a lot of BackgroundWorkers?
UPDATE:
Another note, based on the comment you provided. The DoWorkEventArgs variable e has an object variable called Argument that corresponds to whatever input you supply in backgroundWorker1.RunWorkerAsync(any_object).
That means any_object can be a class like this:
class MyParameters {
public int Minimum { get; set; }
public int Index { get; set; }
public int Maximum { get; set; }
public string TextIn { get; set; }
public string TextOut { get; set; }
}
To use it, simply cast it back to what you passed in:
private const int BG_MSG_1 = 1;
private void backgroundWorker_DoWork3(object sender, DoWorkEventArgs e) {
var obj = (BackgroundWorker)sender;
var p = (MyParameters)e.Argument; // "params" is a reserved keyword in C#, so use another name
p.Minimum = 0;
p.Index = 0;
p.Maximum = p.TextIn.Length;
while (!obj.CancellationPending) {
foreach (char c in p.TextIn) {
p.Index++;
p.TextOut = string.Format("Processed {0}", c);
obj.ReportProgress(BG_MSG_1, p);
}
}
}
Notice I have reused ReportProgress's int parameter (ProgressPercentage) as a way to route different kinds of messages within the ProgressChanged event handler.
Now, you can do any calculations you need back in your main thread by extracting your MyParameters out of the UserState:
private void backgroundWorker_ProgressChanged3(object sender, ProgressChangedEventArgs e) {
if (e.ProgressPercentage == BG_MSG_1) {
var p = (MyParameters)e.UserState;
float calc = ((float)p.Index / p.Maximum) * 100;
toolstripprogressbar.Value = Convert.ToInt32(calc);
lblProgress.Text = p.TextOut;
}
}
Now, you are only limited by how complex you want to make whatever class you want to pass back and forth ...or you could create a class for passing data in and another class for passing data out. Really, it's all up to you. | {
"domain": "codereview.stackexchange",
"id": 5762,
"tags": "c#, thread-safety"
} |
Rviz segfaulting on startup | Question:
When starting Rviz (rosrun rviz rviz) it produces a splash screen and immediately segfaults with :
$ rosrun rviz rviz
[ INFO] [1371112927.848817565]: rviz revision number 1.8.17
[ INFO] [1371112927.848909490]: compiled against OGRE version 1.7.3 (Cthugha)
[ INFO] [1371112927.885475931]: Loading general config from [/home/****/.rviz/config]
Segmentation fault (core dumped)
I am using ROS Fuerte, with Ubuntu 12.04 LTS and an Intel graphics card (which as far as I understand may be the issue)
I tried deleting the config files, different OGRE_RTT_MODE settings, reinstalling, enabling/disabling OpenGL and updating drivers, but so far no luck.
UPDATE : Downloaded the latest update today (19/6/2013) and everything seems to work OK
When running it under GDB (I am sorry about the wall of text):
(gdb) run
Starting program: /opt/ros/fuerte/stacks/visualization/rviz/bin/rviz
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
[ INFO] [1371112634.850998769]: rviz revision number 1.8.17
[ INFO] [1371112634.851094314]: compiled against OGRE version 1.7.3 (Cthugha)
[New Thread 0x7fffe36ba700 (LWP 8202)]
[New Thread 0x7fffe2eb9700 (LWP 8204)]
[New Thread 0x7fffe26b8700 (LWP 8205)]
[New Thread 0x7fffe1eb7700 (LWP 8206)]
[New Thread 0x7fffe16b6700 (LWP 8211)]
[ INFO] [1371112634.928360124]: Loading general config from [/home/****/.rviz/config]
[New Thread 0x7fffc85fb700 (LWP 8214)]
[New Thread 0x7fffc7dfa700 (LWP 8215)]
[New Thread 0x7fffc75f9700 (LWP 8216)]
[New Thread 0x7fffc6df8700 (LWP 8217)]
[New Thread 0x7fffc65f7700 (LWP 8218)]
[New Thread 0x7fffc5df6700 (LWP 8219)]
[New Thread 0x7fffc55f5700 (LWP 8220)]
[New Thread 0x7fffc4df4700 (LWP 8221)]
[Thread 0x7fffc85fb700 (LWP 8214) exited]
[Thread 0x7fffc5df6700 (LWP 8219) exited]
[Thread 0x7fffc4df4700 (LWP 8221) exited]
[Thread 0x7fffc55f5700 (LWP 8220) exited]
[Thread 0x7fffc6df8700 (LWP 8217) exited]
[Thread 0x7fffc65f7700 (LWP 8218) exited]
[Thread 0x7fffc75f9700 (LWP 8216) exited]
[Thread 0x7fffc7dfa700 (LWP 8215) exited]
Program received signal SIGSEGV, Segmentation fault.
0x00007fffee014cc0 in xcb_glx_get_string_string_length ()
from /usr/lib/x86_64-linux-gnu/libxcb-glx.so.0
And the backtrace :
#0 0x00007fffee014cc0 in xcb_glx_get_string_string_length ()
from /usr/lib/x86_64-linux-gnu/libxcb-glx.so.0
#1 0x00007ffff2b814e4 in ?? () from /usr/lib/x86_64-linux-gnu/mesa/libGL.so.1
#2 0x00007ffff2b7ef4c in ?? () from /usr/lib/x86_64-linux-gnu/mesa/libGL.so.1
#3 0x00007fffca46d96f in Ogre::GLSupport::initialiseExtensions (this=0x6c30a0)
at /tmp/buildd/ros-fuerte-visualization-common-1.8.4/debian/ros-fuerte-visualization-common/opt/ros/fuerte/stacks/visualization_common/ogre/build/ogre_src_v1-7-3/RenderSystems/GL/src/OgreGLSupport.cpp:53
#4 0x00007fffca4a5b9a in Ogre::GLXGLSupport::initialiseExtensions (
this=0x6c30a0)
at /tmp/buildd/ros-fuerte-visualization-common-1.8.4/debian/ros-fuerte-visualization-common/opt/ros/fuerte/stacks/visualization_common/ogre/build/ogre_src_v1-7-3/RenderSystems/GL/src/GLX/OgreGLXGLSupport.cpp:418
#5 0x00007fffca463365 in Ogre::GLRenderSystem::initialiseContext (
this=0x7fffe0dc7a58, primary=0x7fffe0dcc2e8)
at /tmp/buildd/ros-fuerte-visualization-common-1.8.4/debian/ros-fuerte-visualization-common/opt/ros/fuerte/stacks/visualization_common/ogre/build/ogre_src_v1-7-3/RenderSystems/GL/src/OgreGLRenderSystem.cpp:1061
#6 0x00007fffca46857b in Ogre::GLRenderSystem::_createRenderWindow (
this=0x7fffe0dc7a58, name=..., width=1, height=1, fullScreen=false,
miscParams=0x7fffffffd550)
at /tmp/buildd/ros-fuerte-visualization-common-1.8.4/debian/ros-fuerte-visualization-common/opt/ros/fuerte/stacks/visualization_common/ogre/build/ogre_src_v1-7-3/RenderSystems/GL/src/OgreGLRenderSystem.cpp:1017
#7 0x00007ffff601a96e in Ogre::Root::createRenderWindow (this=0x7fffe0db88d8,
name=..., width=<optimized out>, height=<optimized out>,
fullScreen=<optimized out>, miscParams=<optimized out>)
at /tmp/buildd/ros-fuerte-visualization-common-1.8.4/debian/ros-fuerte-visualization-common/opt/ros/fuerte/stacks/visualization_common/ogre/build/ogre_src_v1-7-3/OgreMain/src/OgreRoot.cpp:1199
#8 0x00007ffff7ad2192 in rviz::RenderSystem::makeRenderWindow (this=0x787630,
window_id=65011713, width=1, height=1)
at /tmp/buildd/ros-fuerte-visualization-1.8.17/debian/ros-fuerte-visualization/opt/ros/fuerte/stacks/visualization/rviz/src/rviz/ogre_helpers/render_system.cpp:237
#9 0x00007ffff7ad2a7a in rviz::RenderSystem::RenderSystem (this=0x787630)
at /tmp/buildd/ros-fuerte-visualization-1.8.17/debian/ros-fuerte-visualization/opt/ros/fuerte/stacks/visualization/rviz/src/rviz/ogre_helpers/render_system.cpp:68
#10 0x00007ffff7ad2b35 in rviz::RenderSystem::get ()
at /tmp/buildd/ros-fuerte-visualization-1.8.17/debian/ros-fuerte-visualization/opt/ros/fuerte/stacks/visualization/rviz/src/rviz/ogre_helpers/render_system.
cpp:55
#11 0x00007ffff7ae2e11 in rviz::QtOgreRenderWindow::QtOgreRenderWindow (
this=0x804780, parent=0x6a77b0)
at /tmp/buildd/ros-fuerte-visualization-1.8.17/debian/ros-fuerte-visualization/opt/ros/fuerte/stacks/visualization/rviz/src/rviz/ogre_helpers/qt_ogre_render_window.cpp:30
#12 0x00007ffff7b2c902 in rviz::RenderPanel::RenderPanel (this=0x804780,
parent=<optimized out>)
at /tmp/buildd/ros-fuerte-visualization-1.8.17/debian/ros-fuerte-visualization/opt/ros/fuerte/stacks/visualization/rviz/src/rviz/render_panel.cpp:55
#13 0x00007ffff7abac5c in rviz::VisualizationFrame::initialize (this=0x6a77b0,
display_config_file=..., fixed_frame=..., target_frame=...,
splash_path=..., help_path=..., verbose=false,
show_choose_new_master_option=false)
at /tmp/buildd/ros-fuerte-visualization-1.8.17/debian/ros-fuerte-visualization/opt/ros/fuerte/stacks/visualization/rviz/src/rviz/visualization_frame.cpp:224
#14 0x00007ffff7acb483 in rviz::VisualizerApp::init (this=0x7fffffffdfb0,
argc=1, argv=<optimized out>)
at /tmp/buildd/ros-fuerte-visualization-1.8.17/debian/ros-fuerte-visualization/opt/ros/fuerte/stacks/visualization/rviz/src/rviz/visualizer_app.cpp:210
---Type <return> to continue, or q <return> to quit---
#15 0x0000000000402d2e in main (argc=1, argv=0x7fffffffe108)
at /tmp/buildd/ros-fuerte-visualization-1.8.17/debian/ros-fuerte-visualization/opt/ros/fuerte/stacks/visualization/rviz/src/rviz/main.cpp:39
Originally posted by AndreasLydakis on ROS Answers with karma: 140 on 2013-06-12
Post score: 0
Answer:
UPDATE : Downloaded the latest update today (19/6/2013) and everything seems to work OK
Originally posted by AndreasLydakis with karma: 140 on 2013-06-18
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 14547,
"tags": "ros, rviz, ros-fuerte"
} |
Why does the bubbling of a soda bottle pulsate? | Question: If I take a bottle of carbonated water and open the cap slightly to allow the gas to escape, there will be a sudden rush of bubbles foaming to the top of the liquid for a few seconds, then a partial lull for a few seconds, then another smaller rush of bubbles, then a smaller lull, etc, like a damped oscillation. What causes this pulsation?
Answer: A Computer Model for Soda Bottle Oscillations: "The Bottelator" describes a model for this. That's a pay site, but you can find the text here.
The model used in the paper is that for a bubble to form it must nucleate and then grow to a size big enough to rise to the surface. This process takes a finite time. When the cap is loosened, the decrease in pressure causes a burst of nucleation, but after a few seconds the bubbles bursting on the surface raise the pressure again, and this inhibits nucleation. As the gas escapes through the loosened cap the pressure falls again and nucleation restarts. The process repeats until the amount of gas being released by the bubbles is no longer enough to inhibit nucleation.
Note that the paper doesn't go into the physics of the nucleation and growth process. It just assumes that this is the basis of the oscillation. However the assumption seems reasonable. Suppose you take a bottle of soda and momentarily loosen then tighten the caps. You'll see a burst of fizzing. Loosen then tighten the cap again and you'll get another burst, and so on. In this case you're causing the pressure to oscillate by manually controlling the gas flow, but you can see how you'd get a similar effect if the gas flow though a slightly loosened cap was just right.
It would be interesting to measure the pressure above a saturated CO$_2$ solution as a function of gas flow through the cap. I don't have the kit to do this, but it wouldn't be a difficult experiment. | {
"domain": "physics.stackexchange",
"id": 2852,
"tags": "pressure, everyday-life"
} |
Would the electron cyclotron-maser emission mechanism affect Proxima b's ability to retain an atmosphere? | Question: In a recent arXiv preprint, Pérez-Torres et al. "Monitoring the radio emission of Proxima Centauri" claim the detection of radio emissions synchronised with the orbit of the planet Proxima b. They explain this as the result of electron cyclotron-maser (ECM) emissions, making the Proxima–Proxima b system a scaled-up analogue of the Jupiter–Io or Jupiter–Ganymede systems.
Would this mechanism make it more difficult for Proxima b to retain an atmosphere?
Answer: This mechanism, actively emitting in the radio wavelengths, is certainly negligible for the overall atmospheric energetics at Proxima b. One can conclude this by taking the band luminosities from the cited paper ($\rm 2.51\times10^{20}\,erg/s$, p.3, first paragraph) and comparing them to the solar constant at the planet's orbit, which should amount to $\rm 9\times10^{23}\,erg/s$.
The latter is however only the thermal energy contribution given by the stellar effective temperature, there might be other significant chunks of energy contributed in the UV and X-bands.
The luminosities thus considered would be important for driving both bulk escape and Jeans escape. Being negligible in the radio range shows that this process is unimportant for the retention of a bulk atmosphere via thermal mechanisms.
In terms of non-thermal mechanisms, it's possible to imagine that there would be some resonant absorption of the ECM emission at the planetary radius, driving scattering of heavier species. Considering however the scaling of the cyclotron frequency $f = qB/(2\pi m)$, and the fact that the magnetic field is weaker at the planet than in the emission region at the star, the scattered particles would have to be lighter than electrons. Such particles do not exist in atmospheres in significant numbers, only in particle accelerators.
Hence we can conclude that non-thermal processes driven by ECM heating will also not play a role for atmospheric escape. | {
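As a quick sanity check on the energetics argument, the ratio of the quoted radio luminosity to the intercepted thermal power can be computed directly (both values are the figures quoted above):

```python
# Order-of-magnitude comparison using the figures quoted in the answer (erg/s).
L_radio = 2.51e20    # ECM band luminosity from the cited paper
L_thermal = 9e23     # thermal power intercepted at the planet's orbit

ratio = L_radio / L_thermal
print(f"radio/thermal ~ {ratio:.1e}")  # a few parts in ten thousand
```

At roughly 3 parts in $10^4$ of the thermal input, the ECM contribution is indeed negligible for bulk atmospheric energetics.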
"domain": "astronomy.stackexchange",
"id": 5094,
"tags": "exoplanet, red-dwarf, radio, stellar-atmospheres"
} |
Planet with acid ice caps | Question: I remember reading an old astronomy book mentioning a certain planet and the main point of interest was that this planet has or may have polar ice caps but of acid. I can't remember the name though, what planet or planets fit this description?
Now, there are some very strange worlds out there, though I'm unsure whether this one is all that strange. Is this something of an oddity, or is a planet containing frozen acid not that uncommon?
Answer: Of the planets, two have ice caps: Earth and Mars. The Earth's ice caps are water ice, and Mars has ice caps of carbon dioxide and some water ice. The CO2 on Mars would give a weakly acidic solution if melted, but it would not be a caustic acid.
Titan has hydrocarbon lakes at its poles, and other planets are too hot, or are uniformly frozen. | {
"domain": "astronomy.stackexchange",
"id": 3166,
"tags": "planet, identify-this-object"
} |
How can I make consecutive waitForMessage wait for the consecutive messages? | Question:
To my surprise, the following simple program
#include "ros/ros.h"
#include "std_msgs/Int32.h"
int main(int argc, char** argv)
{
ros::init(argc, argv, "test");
ros::NodeHandle nh;
std_msgs::Int32::ConstPtr ret;
ret = ros::topic::waitForMessage<std_msgs::Int32>("/topic");
ROS_INFO("%d", ret->data);
ret = ros::topic::waitForMessage<std_msgs::Int32>("/topic");
ROS_INFO("%d", ret->data);
ret = ros::topic::waitForMessage<std_msgs::Int32>("/topic");
ROS_INFO("%d", ret->data);
ros::spin();
return 0;
}
Prints 3 0s when I run a single rostopic pub /topic std_msgs/Int32 "data: 0".
One workaround is to add a ros::Duration(1).sleep() between the waits but that doesn't work for my situation.
How can I have waitForMessage actually wait for 3 publishes, provided that I don't know the interval between publishes?
Originally posted by Rufus on ROS Answers with karma: 1083 on 2020-12-29
Post score: 0
Answer:
To my surprise, the following simple program: [..] Prints 3 0s when I run a single rostopic pub /topic std_msgs/Int32 "data: 0".
The behaviour you observe is actually as it should be, and the cause is not waitForMessage(..) doing something "wrong", but your expectations are incorrect.
The following is the output of the command you ran:
$ rostopic pub /topic std_msgs/Int32 "data: 0"
publishing and latching message. Press ctrl-C to terminate
Notice the "and latching message" part of what rostopic pub prints.
Latched publishers (wiki/roscpp/Overview/Publishers and Subscribers - Publisher Options) will republish the last published message upon subscription by new subscribers, even if those subscribers were not online when the initial message was published.
So in your case, rostopic pub publishes "the 0" once when it is started, and then subsequently again for every ros::Subscriber created by ros::topic::waitForMessage(..) (as that function creates a new subscriber every time it is run).
Receiving the same message with every waitForMessage(..) seems to be as it should.
One workaround is to add a ros::Duration(1).sleep() between the waits [..]
That would surprise me actually, as that's not how latched publishers work.
How can I have waitForMessage actually wait for 3 publishes, provided that I don't know the interval between publishes?
I wouldn't know, as that's not what that function is for. How would it distinguish between "the same 0" and another message, which happens to also carry a 0?
As a final comment: please note that waitForMessage(..) is not intended to be used to periodically "sample" a message stream, or to implement some sort of rate-limiter (ie: calling waitForMessage(..) repeatedly in a while loop with a ros::Rate or something similar).
It is a fairly costly operation which is typically only used to get some initial state from a topic, or when really only a single message is needed for some reason.
In almost all other cases it would be better to use something like message_filters and/or regular subscribers.
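To make the latching behaviour concrete, here is a toy Python sketch (not real roscpp, just an illustration of the semantics described above): a latched publisher replays its last message to every new subscriber, so each waitForMessage-style call immediately receives the same value.

```python
class LatchedPublisher:
    """Toy model of a latched topic (not roscpp): replays the last message
    to every newly attached subscriber."""
    def __init__(self):
        self.last = None
        self.callbacks = []

    def publish(self, msg):
        self.last = msg
        for cb in self.callbacks:
            cb(msg)

    def subscribe(self, cb):
        self.callbacks.append(cb)
        if self.last is not None:
            cb(self.last)            # latching: replay on subscription

def wait_for_message(pub):
    """Toy waitForMessage: make a fresh subscriber, take one message, detach.
    (The real waitForMessage blocks until a message arrives; here we assume
    one was already latched.)"""
    got = []
    pub.subscribe(got.append)
    pub.callbacks.pop()              # tear down the temporary subscriber
    return got[0]

pub = LatchedPublisher()
pub.publish(0)                       # like one latched `rostopic pub`
print([wait_for_message(pub) for _ in range(3)])  # [0, 0, 0]
```

Each "wait" creates a fresh subscriber, and the latch hands it the same old message, which is exactly the behaviour observed in the question.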
Originally posted by gvdhoorn with karma: 86574 on 2020-12-29
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by Rufus on 2020-12-30:
Further testing shows the sleep workaround doesn't actually work.
Comment by Rufus on 2020-12-30:
Curiously, it appears waitForMessage does not work as expected for non-latching messages, even if waitForMessage occurs before the publishing of the non-latching message, I've asked that as a separate question
Comment by gvdhoorn on 2020-12-30:\
Curiously, it appears waitForMessage does not work as expected for non-latching messages, even if waitForMessage occurs before the publishing of the non-latching message
waitForMessage(..) works fine for "non latched messages".
Perhaps you are expecting things to work in a way they don't. | {
"domain": "robotics.stackexchange",
"id": 35917,
"tags": "ros-melodic"
} |
Solving problems that DTM can't solve | Question: Let L be a problem that DTM can't solve. Can we prove that there is an abstract machine that can solve this problem?
Here, L is not Halting problem or Hilbert's tenth problem (because we proved that algorithms for these don't exist).
In other words, is there an abstract machine more powerful than DTM? If not, how do we prove it?
Answer: We define computable functions to be those computable by a (deterministic) Turing machine, and we identify algorithms with computable functions. Therefore, by fiat, any algorithm can be implemented on a Turing machine.
Whether this is the correct definition of the intuitive concept of algorithm is the subject of the Church–Turing thesis. The fact that many models of computation are known to be equivalent strengthens the thesis, but since it's not a mathematical statement, it is impossible to prove. | {
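For concreteness, here is a minimal deterministic Turing machine simulator (an illustrative Python sketch; the encoding conventions are my own) showing the kind of machine the definition refers to:

```python
def run_tm(tape, rules, state="q0", blank="_"):
    """Run a deterministic single-tape Turing machine until it halts.

    rules maps (state, symbol) -> (write, move, next_state), move in {"L", "R"}.
    """
    cells = dict(enumerate(tape))
    head = 0
    while state != "halt":
        sym = cells.get(head, blank)
        write, move, state = rules[(state, sym)]
        cells[head] = write
        head += 1 if move == "R" else -1
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, blank) for i in range(lo, hi + 1)).strip(blank)

# A one-state machine that complements a binary string:
rules = {
    ("q0", "0"): ("1", "R", "q0"),
    ("q0", "1"): ("0", "R", "q0"),
    ("q0", "_"): ("_", "R", "halt"),
}
print(run_tm("0110", rules))  # 1001
```

A function is computable, in the sense used above, exactly when some finite rule table like this evaluates it.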
"domain": "cs.stackexchange",
"id": 12075,
"tags": "computability, undecidability"
} |
How can I protect the ester group from reduction? | Question: If I process this reaction the NaBH4 will also reduced the ester group to alcohol. I want to convert the Nitro (-NO2) functional group to Amine (-NH2) without disturbing the Ester group. So, my question is that, Which protecting agent can be used here to convert Me-3-Nitrobenzoate into Me-3-Aminobenzoate. I will appreciate, if someone could help me please.
Answer: I suggest you use SnCl2·2H2O in EtOAc or EtOH as described by Bellamy & Ou, as the ester group is untouched by the reaction conditions. I have used this method many times in high yield.
Various other methods, such as hydrogenation over 5% Pd-C or Pt in EtOAc or EtOH, or refluxing in ethanol with indium and ammonium chloride or with sodium dithionite, will also work.
"domain": "chemistry.stackexchange",
"id": 17342,
"tags": "organic-chemistry, organic-reduction"
} |
What is this yellow sticky excretion from a moth? Details in photos | Question: Saw this moth on some grass the other day and took some bad phone photos. I'd like to know what the yellow excretion is. I believe it is an excretion from the moth (as opposed to for example, some bubble gum someone dropped on the ground), because it was attached to the tail end of the moth instead of its whole body.
I tried to label the pictures as best I could, you may have to look closely. You can only see a tiny portion of the moth wing in the first picture, below.
The yellow thing was quite sticky but not as hard as bubble gum, as it stopped the moth from flying away. It was frantically flapping its wings trying to get away from some ants, which seemed to be very attracted to the yellow thing. At first I thought the swarm of ants were eating the dead moth, but the moth was alive and the ants were more interested in the yellow sticky thing. I got a thin stick and pulled away some of the sticky stuff and the moth was free enough to crawl away, taking the yellow sticky thing with it, leaving a thin trail behind, like when you stretch out bubble gum. It seemed to be very sticky to the moth and my stick, and very tensile, because unlike a spider's web, I was able to draw it out more when I tried to pull it away from the moth to free it.
I am so curious to know what it might be. I don't think it was made by ants, because it seemed to be attached to, or coming from the actual moth, although, the ants did not seem to be stuck to it.
Please add "moths" to the tags if you have enough 'reputation level' to do so.
Thank you
Answer: Some moths exude a yellow lymph excretion when they are threatened.
https://www.nature.com/articles/219747a0
The photo is not very clear; it's difficult to see the species of the moth. What did the moth do afterwards? Did it fly away? If the moth was looking ill, it could be that it had a parasite.
https://www.insidescience.org/news/meet-moth-produces-both-bird-and-ant-repellants | {
"domain": "biology.stackexchange",
"id": 10699,
"tags": "species-identification, entomology, lepidoptera"
} |
Proving Something is a Rank-4 Tensor | Question: Whilst going over my undergraduate notes on General Relativity, I came across the Quotient Rule for tensors: Briefly, if $\mathbf{X \, A} = \mathbf{B}$ with $\mathbf{B}$ being a non-zero tensor and $\mathbf{A}$ any arbitrary tensor, then $\mathbf{X}$ is a tensor as well. For now, we can assume all tensors considered are Cartesian. I am trying to apply this rule to the following example.
Let's say the components of $\mathbf{B}$ are given by
$$ B_{ij} = X_{ijkl} A_{kl}.$$ I want to verify that $\mathbf{X}$ is a tensor as well. I start off by writing out the transformation of $\mathbf{B}$ under a "rotation" to new, primed coordinates:
$$ B_{ij}' = X_{ijkl}' A_{kl}'.$$ Since $\mathbf{A}$ and $\mathbf{B}$ are rank-2 tensors, their components transform like so:
$$B_{ij}' = a_{i p } a_{j q} B_{pq} \\ A_{kl}' = a_{kx } a_{ly} A_{xy}.$$ I substitute these into the previous equation to get
$$X_{ijkl}' a_{kx}a_{ly} A_{xy} = a_{ip}a_{jq}B_{pq},$$ and I further substitute in $\mathbf{B}$, so I have
$$X_{ijkl}' a_{kx}a_{ly} A_{xy} = a_{ip}a_{jq}X_{pqkl}A_{kl}.$$ Here, I am unsure how to proceed. I believe the problem is well-defined, and I know that in order to show $\mathbf{X}$ is a tensor, I need to show it transforms something like
$$X_{ijkl}' = (\mathrm{four \, terms})\, X_{pqkl}$$ with 4 free indices. How do I get these extra terms? Any help is greatly appreciated.
Answer: So, when you applied the equation $B_{ij}=X_{ijkl}A_{kl}$ to get $X_{ijkl}' a_{kx}a_{ly} A_{xy} = a_{ip}a_{jq}X_{pqkl}A_{kl}$, note that in the expression $X_{ijkl}A_{kl}$, $k$ and $l$ are essentially dummy indices, and can be renamed to whatever we want. In particular, the next step is much clearer if we rename them to $x$ and $y$!
Then we instead get
$$X_{ijkl}' a_{kx}a_{ly} A_{xy} = a_{ip}a_{jq}X_{pqxy}A_{xy}$$
Now, we use the fact that this equation is assumed to hold for an arbitrary tensor $A$, and so holds componentwise for all $x,y$. (Explicitly, you can imagine getting each equation for each particular $x,y$ by considering $A$ to be a basis tensor, i.e. such that it is $1$ in its $xy$ component and $0$ otherwise.) So we have, for every $x,y$,
$$X_{ijkl}' a_{kx}a_{ly} = a_{ip}a_{jq}X_{pqxy}$$
Now we use the fact that these are Cartesian tensors, and so $a_{ij}$ is the inverse of $a_{ji}$, i.e. $\delta_{ik}=a_{ij}a^{-1}_{jk}=a_{ij}a_{kj}$. (This is much clearer with upper and lower indices and in a coordinate basis, so you can essentially use only one symbol, and not need to worry about transposes!) So, we have
$$\begin{align} X_{ijkl}' a_{kx}a_{ly}a_{mx}a_{ny} &= a_{ip}a_{jq}X_{pqxy}a_{mx}a_{ny} \\
X_{ijkl}' \delta_{km}\delta_{ln} &= a_{ip}a_{jq}a_{mx}a_{ny}X_{pqxy}\\
X_{ijmn}' &= a_{ip}a_{jq}a_{mx}a_{ny}X_{pqxy}\end{align}$$
and we're done. (The swap in the order of $X$ and $a$ on the rhs from step 1 to 2 is just commutativity of ordinary multiplication for a nice layout, not any meaningful tensor operation.)
You might find this PDF helpful, which slips in the inversion at a slightly different step! (For them, $X$ is $A$, and $A$ is $\xi$.) | {
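The derivation can also be spot-checked numerically. Below is a hypothetical 2-D example (random components and an arbitrary rotation angle, not from the original post): if $X$ transforms with four rotation factors as derived, then $B' = X'A'$ agrees with rotating $B$ directly.

```python
import math, random

random.seed(0)
d = 2                                   # 2-D Cartesian example
th = 0.7                                # arbitrary rotation angle
a = [[math.cos(th), -math.sin(th)],
     [math.sin(th),  math.cos(th)]]

X = [[[[random.random() for _ in range(d)] for _ in range(d)]
      for _ in range(d)] for _ in range(d)]
A = [[random.random() for _ in range(d)] for _ in range(d)]

def rot2(T):
    """Rank-2 transformation: T'_{ij} = a_{ip} a_{jq} T_{pq}."""
    return [[sum(a[i][p] * a[j][q] * T[p][q]
                 for p in range(d) for q in range(d))
             for j in range(d)] for i in range(d)]

# B_{ij} = X_{ijkl} A_{kl}
B = [[sum(X[i][j][k][l] * A[k][l] for k in range(d) for l in range(d))
      for j in range(d)] for i in range(d)]

# X'_{ijmn} = a_{ip} a_{jq} a_{mx} a_{ny} X_{pqxy}
Xp = [[[[sum(a[i][p] * a[j][q] * a[m][x] * a[n][y] * X[p][q][x][y]
             for p in range(d) for q in range(d)
             for x in range(d) for y in range(d))
        for n in range(d)] for m in range(d)]
      for j in range(d)] for i in range(d)]

Ap, Bp = rot2(A), rot2(B)
Bp_via_X = [[sum(Xp[i][j][k][l] * Ap[k][l]
                 for k in range(d) for l in range(d))
             for j in range(d)] for i in range(d)]

err = max(abs(Bp[i][j] - Bp_via_X[i][j]) for i in range(d) for j in range(d))
print(err < 1e-12)
```

The agreement relies on the rotation matrix being orthogonal ($a_{kx}a_{km}=\delta_{xm}$), which is exactly the inversion step used in the derivation.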
"domain": "physics.stackexchange",
"id": 77971,
"tags": "general-relativity, tensor-calculus, vectors"
} |
collecting and organizing rosbag files | Question:
Recording rosbag files is great. After multiple recording sessions, I'm left with all these bag files that are really only telling me date/time by direct observation. I can do rosbag info to get more details. I was wondering if anyone had a best practice for collecting, storing, and organizing rosbag files. I'm thinking of a database containing rosbag files with additional metadata including some text field for a description of what was recorded.
Maybe this problem has already been solved.
Thanks in advance.
Originally posted by jeremya on ROS Answers with karma: 275 on 2015-10-05
Post score: 2
Answer:
Two (commercial) alternatives (I've no connections with either):
Marv Robotics, by Ternaris (also presented at ROSCon16 (video))
BotBags: Announcing BotBags, the cloud rosbag storage service
Originally posted by gvdhoorn with karma: 86574 on 2017-02-01
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by lucasw on 2019-11-11:
It looks like BotBags no longer exists, but maybe https://github.com/cruise-automation/webviz is suitable (not sure if it plays back bag files that are local or online)? | {
"domain": "robotics.stackexchange",
"id": 22745,
"tags": "rosbag"
} |
Ros eats up RAM while recording rosbag remotely | Question:
I am running my ros application on a robot (arm) with limited RAM (1 GB).
I configured my robot as the rosmaster. The rosbag recording was happening on my local Linux machine, whose ROS master was the robot.
Whenever I start my rosbag record, I can see that the available RAM space on my ARM board shoots down, eventually causing a bad alloc error.
I do not understand why invoking a subscriber (rosbag) on my local computer would consume RAM on the robot.
Any thoughts or pointers would be much appreciated.
This is a cross compiled ros kinetic version on QNX running within a realtime loop. The data to be sent over to the ros are written to a lock free ring buffer from which ros publisher running on a separate thread reads and publishes.
Originally posted by mhariharasudan on ROS Answers with karma: 70 on 2018-01-11
Post score: 2
Answer:
The ROS publisher has an outbound queue. If messages are published more quickly than they can be transferred over the network, the outbound queue will fill up until it starts dropping messages.
You can compute the rough amount of RAM that the queue will use by multiplying the average message size by the queue size.
Note the publisher queues are per-subscriber, so if you have multiple subscribers you should budget for each subscriber separately.
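As a back-of-the-envelope illustration of that budget (the numbers below are hypothetical, not from this thread):

```python
def queue_memory_bytes(avg_msg_size, queue_size, num_subscribers):
    """Rough worst-case RAM held by per-subscriber outbound queues."""
    return avg_msg_size * queue_size * num_subscribers

# Hypothetical numbers: 1 MB messages, queue_size=100, two subscribers.
budget = queue_memory_bytes(1_000_000, 100, 2)
print(f"{budget / 1e6:.0f} MB")  # 200 MB
```

On a 1 GB board, even a modest per-subscriber queue can dominate memory once a slow remote subscriber lets the queue fill, which matches the slowly growing usage described in the question.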
Originally posted by ahendrix with karma: 47576 on 2018-01-12
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by mhariharasudan on 2018-01-12:
Wouldn't this queue size be fixed at compile time? If so, why would the RAM usage slowly spike up, causing a memory crash? This seems to be either a memory leak or dynamic allocation.
Also, the RAM usage starts only when the rosbag record is run on the client machine.
Comment by gvdhoorn on 2018-01-12:\
Wouldn't this queue size be fixed at compile time.
no, it's configured upon publisher creation.
This seems to be either memory leak or dynamic allocation
it's dynamic allocation.
RAM usage starts only when the rosbag record is run on the client machine.
yes. That's when queueing starts.
Comment by mhariharasudan on 2018-01-12:
Does this mean that the messages published by the publishers are actually "published" only when there is a subscriber for them? I am more interested in understanding why the RAM usage dynamically increases at runtime and is not fixed at compile time.
Comment by mhariharasudan on 2018-01-12:
Even if it is dynamically allocated, they should be pre-allocated when the publishers are created and not when the execution starts?
Comment by ahendrix on 2018-01-12:
No. ROS does not do any pre-allocation of buffers.
Comment by ahendrix on 2018-01-12:
Since the number of subscribers is not known at compile time and buffers are per-subscriber, there's no way to know how many buffers to pre-allocate. Since the message size is not always known, there isn't enough information to know the buffer size to pre-allocate.
Comment by ahendrix on 2018-01-12:
If you want control over this type of feature, perhaps you should look at ROS2, which is based on DDS. | {
"domain": "robotics.stackexchange",
"id": 29735,
"tags": "ros, rosbag, ros-comm, realtime"
} |
Effect of windowing on FFT magnitude | Question: I have 2 sin waves:
Fs = 1500;
N = 100;
t = (0:N-1)'/Fs;
sinwave = sin(2*pi*15*t);
sinwaveshifted = sin(2*pi*15*t + 1.3);
If I get the magnitude at 15Hz using an fft (need to use an FFT that is a power of 2 for reasons beyond my control):
fftsin = fft(sinwave,128);
fftsinshift = fft(sinwaveshifted,128);
% 2 corresponds to 15Hz
abs(fftsin(2)) = 51.819;
abs(fftsinshift(2)) = 41.6151;
Why the difference? I'm assuming because my bins do not line up perfectly there is some leakage occurring. Also, I haven't done any windowing.
If I do window my signals (I used a flat top because I read that was best in terms of magnitude accuracy), these are my results:
w = window(@flattopwin, 100);
sinwin = sinwave .* w;
sinshiftwin = sinwaveshifted .* w;
fftsin = fft(sinwin ,128);
fftsinshift = fft(sinshiftwin ,128);
% 2 corresponds to 15Hz
abs(fftsin(2)) = 2.7283;
abs(fftsinshift(2)) = 17.822;
Why the huge difference?
Answer: I presume you are correct in that the small difference in the original signals' FFTs is due to leakage. However, the huge difference in the windowed results arises because you simply do not have enough data. Your original signals only have data for one period of the sinusoid. When you window this, you are clamping the edges of your signal to zero, effectively leaving you with only the middle portion of the signal. So you are left with signals that have only a fraction of a single period's worth of information, definitely not enough to have any accuracy in the extracted frequency analysis. To illustrate this, here are MATLAB images of your original functions before and after windowing:
and you can easily see the loss of much of the meaningful frequency information from the signals. A better candidate for windowing would be something like below, using 10 periods of your sinusoid signals:
which demonstrates the balance between retaining frequency information and clamping the edges to reduce leakage. This paper provides a more in depth look at different windowing functions and their performance on different classes of signals if you are interested. | {
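The leakage in the original (un-windowed) example can be reproduced without MATLAB. The pure-Python sketch below evaluates the same 128-point DFT bin directly (0-based bin 1, i.e. MATLAB's fftsin(2)) and shows the phase-dependent magnitudes reported in the question:

```python
import cmath, math

Fs, N, NFFT = 1500, 100, 128

def dft_bin_mag(x, k, nfft):
    """|X[k]| of an nfft-point DFT of x zero-padded to nfft samples."""
    return abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / nfft)
                   for n in range(len(x))))

t = [n / Fs for n in range(N)]
s0 = [math.sin(2 * math.pi * 15 * ti) for ti in t]
s1 = [math.sin(2 * math.pi * 15 * ti + 1.3) for ti in t]

m0 = dft_bin_mag(s0, 1, NFFT)   # MATLAB's fftsin(2) is 0-based bin 1
m1 = dft_bin_mag(s1, 1, NFFT)
print(round(m0, 3), round(m1, 3))
```

Because 15 Hz falls between the 128-point bins (spaced 1500/128 ≈ 11.7 Hz apart), the magnitude sampled at a fixed bin shifts with the phase of the sinusoid, exactly the leakage effect discussed above.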
"domain": "dsp.stackexchange",
"id": 857,
"tags": "fft, matlab, window-functions"
} |
Using 4-SUM to improve Subset Sum $O(2^{n/2})$ runtime | Question: I read k-Sum $k \geq 3$ can be solved in $\approx O(n^{k/2})$
If this is right, why do we not have a subset-sum solver with time complexity $O(2^{n/4})$ or better?
That is, enumerate all the subsets in each of 4 splits, then apply 4-SUM to find whether some combination, one sum from each of the four partial lists, adds up to the target value?
Answer: Work through the math; you are off by a factor of two. Also, you might be confusing yourself by using the same letter $n$ for two different things.
Suppose we have a list of $N$ numbers. You are proposing to split that into four chunks, where each chunk has $N/4$ numbers. Enumerating all of the subsets of a single chunk takes $O(2^{N/4})$ time. Now, you have a 4-sum problem where you are working with 4 lists, each containing $M=2^{N/4}$ items. This 4-sum problem can be solved in $O(M^{k/2}) = O(M^{4/2}) = O(M^2)$, and $M^2 = 2^{N/2}$, so your proposed algorithm runs in $O(2^{N/2})$ time.
There is an algorithm for subset-sum that runs in time $O(2^{N/2})$ and space $O(2^{N/4})$. The algorithm is due to Schroeppel and Shamir. However, its running time is still $O(2^{N/2})$, not $O(2^{N/4})$.
I don't think there is any known subset-sum algorithm whose running time is non-trivially better than $O(2^{N/2})$. Blum, Kalai, and Wasserman have solved a related problem in subexponential time, but it's a different problem. | {
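For reference, the $O(2^{N/2})$ baseline is easy to see in code. Below is a sketch in Python (function names are my own) of the classic two-list meet-in-the-middle subset sum; splitting further into four lists and running 4-SUM lands back at the same $O((2^{N/4})^2) = O(2^{N/2})$ bound, as explained above.

```python
from itertools import combinations
from bisect import bisect_left

def subset_sum_mitm(nums, target):
    """Meet-in-the-middle subset sum: O(2^(N/2)) time, O(2^(N/2)) space here.

    (Schroeppel-Shamir reduces the space to O(2^(N/4)) using priority queues.)
    """
    half = len(nums) // 2
    left, right = nums[:half], nums[half:]

    def all_subset_sums(chunk):
        sums = set()
        for r in range(len(chunk) + 1):
            for combo in combinations(chunk, r):
                sums.add(sum(combo))
        return sums

    right_sums = sorted(all_subset_sums(right))
    for s in all_subset_sums(left):
        i = bisect_left(right_sums, target - s)
        if i < len(right_sums) and right_sums[i] == target - s:
            return True   # note: the empty subset is allowed on both sides
    return False

print(subset_sum_mitm([3, 34, 4, 12, 5, 2], 9))   # True (4 + 5)
print(subset_sum_mitm([3, 34, 4, 12, 5, 2], 30))  # False
```

Each half contributes $2^{N/2}$ subset sums, and the sorted lookup makes the pairing step the dominant cost.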
"domain": "cs.stackexchange",
"id": 12124,
"tags": "algorithms, time-complexity"
} |
The direction of the drag force | Question: From my reading, the drag force is opposite in direction to the velocity vector, but can't the pushing of the "air particles" in some direction be much tougher than in another, i.e, for some reason, the air is denser in some direction? Wouldn't then $\angle (\mathbf{F_d}, \mathbf{v})$ be not necessarily $\pi$?
Answer: In everyday life the word drag tends to mean quadratic drag, and this is a complex process involving turbulent flow. The quadratic drag equation isn't a fundamental one but instead is an approximation. This makes it a poor example for your question.
It's easier to see this if you consider just the force when you make a fluid flow, as this is nice and simple. For the fluid flow we use a strain rate $\dot\gamma$ rather than just a velocity, and if we make the fluid flow at this rate then we get a stress given by:
$$ \tau = \mu \dot\gamma $$
where $\mu$ is the viscosity of the liquid. For simple fluids the viscosity is just a number and the force is always opposite to the direction of motion as you describe. However for non-Newtonian fluids it is possible for the viscosity to be different in different directions. In that case we have to replace the viscosity by the viscous stress tensor.
This isn't as complicated as it sounds. A tensor is just something we can use to relate two vectors e.g.
$$ \mathbf a = \mathbf T \mathbf b $$
where $\mathbf a$ and $\mathbf b$ are vectors and $\mathbf T$ is the tensor. For the simple fluid $\mathbf T$ is just the viscosity, which is a rank $0$ tensor, and for more complicated fluids $\mathbf T$ would be a second rank tensor that is typically written as a matrix.
Anyhow the point of all this is that when the stress and strain are related by a second rank tensor the force and velocity are not necessarily colinear and the angle between them can be different from $\pi$. Nematic liquid crystals would be an example of a fluid like this. | {
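To see numerically how a tensor coefficient breaks the antiparallel alignment, here is a small 2-D illustration (the tensor values are made up for the example):

```python
import math

# Hypothetical anisotropic drag tensor T (the a = T b relation above), 2-D:
T = [[2.0, 0.0],
     [0.0, 1.0]]
v = [1.0, 1.0]

# Drag force F = -T v
F = [-(T[0][0] * v[0] + T[0][1] * v[1]),
     -(T[1][0] * v[0] + T[1][1] * v[1])]

dot = F[0] * v[0] + F[1] * v[1]
ang = math.degrees(math.acos(dot / (math.hypot(*F) * math.hypot(*v))))
print(f"{ang:.1f} deg")  # ~161.6 deg, not 180
```

With a scalar (isotropic) coefficient the same calculation would give exactly 180°; the anisotropy is what tilts the force away from $-\mathbf v$.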
"domain": "physics.stackexchange",
"id": 63964,
"tags": "newtonian-mechanics, projectile, drag"
} |
"Time jumped forward" in gmapping | Question:
I'm using slam_gmapping as in this tutorial, but with a live scanner, and I'm getting messages like "Time jumped forward by [0.010211] for timer of period [0.010000], resetting timer (...)".
I am running it from a single computer (except rqt_logger_level) so I guess clock synchronisation shouldn't be the cause. But I updated both clocks with ntpdate to be sure (which didn't help).
I noticed that the problem seems to get worse when there is more load on the computer. Could the load be the cause? And in that case, is there a way to increase the size of the allowed time jumps? (Or is there another way to reduce the load for gmapping?)
Originally posted by WLemkens on ROS Answers with karma: 33 on 2013-10-20
Post score: 0
Answer:
Hi WLemkens,
you can follow the answer at
http://answers.ros.org/question/94671/error-in-creating-2d-global-map-of-the-environment/#99961
Let me know if it works.
Originally posted by cognitiveRobot with karma: 167 on 2013-11-11
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 15917,
"tags": "slam, navigation, slam-gmapping, gmapping"
} |
How accurate are NASA's eclipse predictions? | Question: NASA's predictions for the August 2017 solar eclipse show that about half of Kansas City will see a total eclipse and the other half won't (they'll just see a "near" total eclipse, perhaps even Baily's beads?).
I thought this would be a great way of testing how accurate NASA's eclipse path prediction is: have people in Kansas City report, with location, whether they saw a total eclipse or not (of course, many people in Kansas City will probably head into the total eclipse zone just to see it, but hopefully few will remain outside it).
It then occurred to me that someone must've already done this for previous eclipses (I know it's done for lunar occultations of other stars).
However, googling only tells me that NASA's 2017 path predictions are more accurate than ever before, but not actually HOW accurate they are.
Where can I find data about accuracy of eclipse predictions?
Answer: The path of totality, which eclipse maps show to be 70 miles wide, is actually narrower than that, by up to 1 mile, according to some experts.
An article by the Kansas City Star discusses this:
Those maps, provided by NASA and others, show a crisply defined, 70-mile-wide path of totality where the moon will block 100 percent of the sun. But they are not as precise as they appear, at least on their edges.
The southern edge of the path as shown on the maps could be off by as little as the length of a football field or as much as a half-mile, eclipse mapping experts say. Likewise for the northern edge, meaning the path of totality might be just 69 miles wide.
“This is an issue. This is really an issue, but it’s not advertised. … Yeah, all the maps are wrong,” Mike Kentrianakis, who is the solar eclipse project manager for the American Astronomical Society and who routinely consults with NASA, told The Star.
Xavier Jubier, a French engineer whose calculations have been used to create the interactive Google maps of the eclipse, confirmed to The Star by email that the actual path of the totality is slightly narrower than the 70 miles shown on current maps.
Ernest Wright, who created maps and other multimedia presentations on the eclipse for NASA, said he thought the map might be narrower by about 100 meters, slightly longer than a football field. He also said it’s possible that Kentrianakis is correct in his estimation that the path is narrower by a half-mile or more.
The article explains the reason the path of totality is narrower than the eclipse maps indicate is because they use a 41-year old value for the radius of the Sun, which is now known to be too small a value:
Wright explained that eclipse maps are made based on what is known about the relative sizes and positions of the moon and the sun. “We have really good information about the orbit of the moon, the positions of the sun, the positions of the Earth. All of that is really well nailed down,” Wright said. “In order to get more accuracy, we need to take into account the mountains and valleys on the moon, and the elevations on the Earth. And we’re starting to do that, as well.”
The size of the moon, in fact, has been measured to within a meter, and its position in the heavens has been measured to within a centimeter. “But the last sort of uncertainty might surprise you,” Wright said. “It’s the size of the sun.”
Jubier said that the current maps are accurate using the 696,000-kilometer radius and other standards agreed upon in 1976 at a meeting of the International Astronomical Union. "This is perfectly accurate but we know it does use a solar diameter that is not large enough. Why don't we change the value(?)" Jubier wrote. "Well simply because the IAU (International Astronomical Union) has not yet approved a new value. This is part of the research we're doing and for which we're looking for funding."
He continued, “So technically speaking if the Sun is larger than the adopted IAU value, and we know it is, the eclipse path is necessarily narrower and our tools can simulate this, yet the standard maps for the public will still retain the currently adopted solar radius until a new value has been accepted." | {
"domain": "astronomy.stackexchange",
"id": 2413,
"tags": "solar-eclipse"
} |
Can we create custom gene/protein? | Question: Is it possible to create any custom gene or protein we want with current technology?
I have a protein sequence or a gene sequence of about 4000 bp written down on my computer. Is there any way to "print" it as a real gene molecule or protein molecule (for example, using chemistry, enzymes, and automated machines)? What are the challenges?
Answer: Yes, there are well established methods for synthesizing DNA with any sequence you want. Several commercial companies will accept DNA sequence (a text file) and generate the DNA for you. Genscript for example is well known.
Synthesizing the protein can be a bit more tricky, depending on what it looks like and what you want it for --- proteins are way more heterogeneous than DNA. There are commercially available cell-free protein synthesis systems that might be useful if you are merely interested in a peptide. Most people express the protein of interest in cultured cells though. In either case there is some biochemistry work to purify the produced protein. Genscript provides protein synthesis services as well. (No, I'm not paid by them, just don't have the energy to google other companies :)
In most cases, this works fine, and with a bit of tweaking you can produce quite large amounts. But for some proteins that depend on posttranslational modification for their function, expression can be difficult: see Can any enzyme be produced? | {
"domain": "biology.stackexchange",
"id": 4758,
"tags": "genetics, molecular-biology, proteins"
} |
Output of an LTI system given its transfer function and input | Question:
Given the transfer function $$T(s) = \frac{100}{1 + \frac{s}{10^{6}}}$$ and the input $$v_i(t) = 0.1 \sin(100t)$$ find the output, $v_o(t)$.
My approach was to use $v_o(t) = \mathcal{L}^{-1}\left\{T(s)\, V_i(s)\right\}$, where $V_i(s) = \mathcal{L}\left\{v_i(t)\right\}$. This gives
$$v_o(t) = \left(\frac{10^5}{10^8+1}\right) \mathrm{e}^{-10^6 \,t} - \left(\frac{10^5}{10^8+1}\right) \cos\left(100\,t\right) + \left(\frac{10^9}{10^8+1}\right) \sin\left(100\,t\right)$$
in MATLAB. However, my textbook does the following:
Which one is correct? The difference between the two functions is of the order $10^{-3}$, and the first function is not reducible to the second in MATLAB.
Edit:
This is the whole question — Example 1.5 from Sedra & Smith's Microelectronic Circuits (7th ed.):
Answer: You cannot solve this problem using the Laplace transform. The reason is that the Laplace transform of the input signal doesn't exist. You could use the Fourier transform, but in this case there's an even simpler way to determine the output signal. You need to know one important property of linear time-invariant (LTI) systems: their response to a sinusoidal input is a sinusoidal signal with the same frequency, but with its amplitude and phase altered according to the system's frequency response evaluated at the input frequency.
So for an input signal
$$x(t)=A\sin(\omega_0t+\phi)\tag{1}$$
the output is given by
$$y(t)=A\big|H(j\omega_0)\big|\sin\left(\omega_0t+\phi+\arg\big\{H(j\omega_0)\big\}\right)\tag{2}$$
where
$$H(j\omega)=\big|H(j\omega)\big|e^{j\arg\{H(j\omega)\}}\tag{3}$$
is the system's frequency response. | {
"domain": "dsp.stackexchange",
"id": 7457,
"tags": "fourier-transform, linear-systems, frequency-response, transfer-function, laplace-transform"
} |
Electric Field Homework Question | Question: Can a charged particle move through an electric field that exerts no force on it?
Answer: No, it is not possible when $\vec{E} \neq 0$, by definition of the electric field. Given any non-zero $\vec{E}$ and $q$, the electric force experienced will be $\vec{F} = q\vec{E}$.
Of course, you can consider specific regions where $\vec{E}=0$, i.e. between 2 equal charges. | {
"domain": "physics.stackexchange",
"id": 13728,
"tags": "homework-and-exercises, forces, electric-fields"
} |
FastAPI repository endpoint for version control system | Question: I am creating an API for a version control system. This is my first time creating an API (and a web project in general) and I wanted to get some feedback. I plan to connect this API to a frontend to create website. I choose to share this part of the code because the rest of the code follows this structure. For reference: Users have repositories ; repositories have commits, files and branches; branches point to a specific commit; commits and files have a many to many relationship.
router = APIRouter(prefix= "/{user_name}", tags=["repository"])
@router.post("/", response_model= schemas.RepositoryResponseSchema ,status_code=status.HTTP_201_CREATED)
def create_repo(user_name: str, repo_name : str,
db: Session = Depends(database.get_db), current_user = Depends(oauth2.get_current_user)):
if current_user.name != user_name:
raise HTTPException(status_code=status.HTTP_403_FORBIDDEN, detail="User does not have permission to create a repository for another user")
user = crud.get_one_or_error(db, models.User, name= user_name)
repo = crud.create_unique_or_error(db, models.Repository, name= repo_name, creator= user)
master_branch = crud.create_unique_or_error(db, models.Branch, name= "master", head_commit_oid = None, repository_id= repo.id)
repo.branches.append(master_branch)
repo.current_branch_id = master_branch.id
db.commit()
return repo
@router.put("/{repository_name}/change-branch", response_model=schemas.ChangeBranchResponse, status_code=status.HTTP_200_OK)
def change_branch(user_name: str, repository_name: str, branch_name: str,
db: Session = Depends(database.get_db), current_user = Depends(oauth2.get_current_user)):
if current_user.name != user_name:
raise HTTPException(status_code=status.HTTP_403_FORBIDDEN, detail="User does not have permission to change a branch for another user")
user = crud.get_one_or_error(db, models.User, name= user_name)
repo = crud.get_one_or_error(db, models.Repository, name= repository_name, creator= user)
branch = crud.get_one_or_error(db, models.Branch, name= branch_name, repository_id= repo.id)
repo.current_branch = branch
db.commit()
return schemas.ChangeBranchResponse(repo.name, branch_name=branch.name)
@router.get("/{repository_name}/tree", status_code=status.HTTP_200_OK)
def get_tree_for_repo(user_name: str, repository_name: str, db: Session = Depends(database.get_db)):
"""Return the graph formed by commits and branches
o → o → o → o → o → o
↓ ↑
o ← dev_branch master_branch
"""
user = crud.get_one_or_error(db, models.User, name= user_name)
repo = crud.get_one_or_error(db, models.Repository, name= repository_name, creator_id= user.id)
# Create an empty directed graph
graph = nx.DiGraph()
for branch in repo.branches:
head = branch.head_commit_oid
commit = crud.get_one_or_error(db, models.Commit, oid=head, repository_id= repo.id)
while commit:
graph.add_node(commit.oid, label=f"Commit: {commit.commit_message}")
if commit.parent_oid:
graph.add_edge(commit.parent_oid, commit.oid)
else:
                root = commit.oid # Returning this could be good for drawing better graphs
commit = crud.get_one_or_none(db, models.Commit, oid=commit.parent_oid, repository_id= repo.id)
return json_graph.node_link_data(graph)
I am unsure with some of my design choices and I am looking for some feedback.
Answer: design of URL namespace
@router.post("/", ... )
def create_repo( ...
This seems like the wrong URI.
Prefer "/{repository_name}".
As stated it seems like we're creating a user.
Also, "lint": prefer
$ black *.py
over the idiosyncratic spacing choices found in OP.
boilerplate
We naturally see these a fair amount in signatures:
def ...
db: Session = Depends(database.get_db),
current_user = Depends(oauth2.get_current_user)):
if current_user.name != user_name:
raise HTTPException(status_code=status.HTTP_403_FORBIDDEN, detail="... permission ...")
Could an @authenticated decorator maybe subsume them?
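A framework-free sketch of what such a decorator could look like (the `ForbiddenError` stand-in and the keyword-only signature are assumptions for illustration; wiring this into FastAPI's dependency injection takes extra care, since FastAPI inspects endpoint signatures):

```python
from functools import wraps

class ForbiddenError(Exception):
    """Stand-in for fastapi.HTTPException(403) so the sketch runs framework-free."""

def require_owner(handler):
    """Reject any call where the authenticated user is not the path's user_name."""
    @wraps(handler)
    def wrapper(*args, user_name, current_user, **kwargs):
        if current_user.name != user_name:
            raise ForbiddenError(
                "User does not have permission to act on another user's resources")
        return handler(*args, user_name=user_name, current_user=current_user, **kwargs)
    return wrapper
```

Each endpoint then shrinks to its actual business logic, with the ownership check applied once.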
Import nit: It might be convenient to add aliases like
from status import HTTP_403_FORBIDDEN
conventional branch name
master_branch = crud.create_unique_or_error( ... , name= "master", ... )
There are good reasons why GitHub and others
have long ago ditched that name, in favor of main.
You might choose similar defaulting.
insert
repo = crud.create_unique_or_error( ...
repo.branches.append(master_branch)
You chose to omit imports and a lot of other essential Review Context.
For example, each import is implicitly a pypi docu-pointer.
Here, I do not know what that last line did.
My fear is that the ORM has a list that we tacked the branch onto,
and the DB table has a JSON column containing a list or something
like that.
In other words, that the DB is de-normed.
I encourage you to
normalize,
to INSERT a separate row for each and every branch.
Later on you'll be glad that you did.
Avoid aggregates such as "list" or "array" columns,
as they make relational queries harder, and harder to optimize.
backend vs UI
def change_branch( ...
This is a mapping from "(user, repo)" to "default branch".
It's not obvious to me that the backend has to offer
such a service at all.
It feels more like a frontend UI kind of thing.
And it seems like the UI might have multiple preference settings
to keep track of, beyond just "current branch".
Suppose I merge a pair of branches, big enough that conflict resolution
takes more than a second. Meanwhile I do some other operation
on another branch. Do I need to worry about racing default branch name?
I would expect that each operation would explicitly mention
things like (user, repo, branch), rather than letting branch
default to the current one. The UI does such defaulting, sure,
but not sending explicit branch name to the backend API seems a bit off.
Instead of a branch name,
maybe operations like merge send an unnamed hash to the backend?
more boilerplate
@router.get("/{repository_name}/tree", status_code=status.HTTP_200_OK)
We will want to default that 200, right?
Possibly with a decorator, or with a souped-up router object.
scaling
def get_tree_for_repo( ...
graph = nx.DiGraph()
for branch in repo.branches:
head = branch.head_commit_oid
commit = crud.get_one_or_error( ... )
while commit:
graph.add_node( ... )
For a "hello world" repo I'm sure that's very nice.
You will want to test against realworld historic repos with
a few years of commits in them.
Not linux kernel, but maybe from say an Apache project,
something biggish.
You will soon discover that users will wish to
limit their exploration of the graph to a certain
date range, or the last K commits resulting in a certain tag ID.
Else traversing the graph will take all day.
Your design should bear in mind that some queries
will want to quickly chase digraph arrows in the "opposite" direction.
Also, when reporting on a giant graph,
the crud.get_one_... is insanity.
I'm looking at N roundtrips to the DB for N commits.
Figure out how to issue a single giant SELECT ... WHERE ...
so the database backend planner knows you want everything.
It will find an appropriate access path,
gather up the result rows, and stream them at you
as fast as TCP can go, rather than doing a stutter stop
next row next row for each of N rows.
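To make the shape concrete: fetch every commit of the repo in one query (in SQLAlchemy, something along the lines of db.query(models.Commit).filter_by(repository_id=repo.id).all(), using hypothetical names matching the OP's models), then assemble the graph in memory. A DB-free sketch of the in-memory half:

```python
def build_commit_graph(commits):
    """Build node labels and parent->children edges from rows fetched in ONE query.

    commits: iterable of (oid, parent_oid, message) tuples -- a stand-in for
    the ORM rows; note there are no per-commit round trips to the database.
    """
    labels, edges = {}, {}
    for oid, parent_oid, message in commits:
        labels[oid] = f"Commit: {message}"
        if parent_oid is not None:
            edges.setdefault(parent_oid, []).append(oid)
    return labels, edges
```

The same dict of edges also answers "who are this commit's children?", i.e. the digraph arrows in the opposite direction.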
documentation
It's unclear what the various concepts in your VC system are.
So document them, and include URL of documentation in your source code.
Hard way would be to document from scratch.
Easier way would be to stand on the shoulders of giant Linus,
who in turns stands on the shoulders of others.
Write down that your system is "just like git",
or more likely that it is "git lite" and then delineate the limits. | {
"domain": "codereview.stackexchange",
"id": 45498,
"tags": "python, api, fastapi"
} |
Why does this E2 reaction occur on the less substituted carbon? | Question:
In compound C it eliminates the beta hydrogen attached to the less substituted carbon, rather than attacking the beta hydrogen attached to the tertiary carbon. Why is this the case?
Answer: E2 elimination is stereospecific for anti elimination. In both variants of compound B that you have drawn, the OTs and the proton on the tertiary centre are syn to each other, meaning that the elimination is disfavoured.
For this reason, the NaOMe deprotonates at the secondary centre, generating the products as drawn, despite the fact that this goes against Zaitsev's rule, which favours the most substituted alkene.
"domain": "chemistry.stackexchange",
"id": 4726,
"tags": "organic-chemistry, halides, hydrocarbons, elimination"
} |
Understanding SURF Features Calculation Process | Question: So, I was reading the paper on SURF (Bay, Ess, Tuytelaars, Van Gool: Speeded-Up Robust Features (SURF)) and I can not comprehend this paragraph below:
Due to the use of box filters and integral images, we do not have to
iteratively apply the same filter to the output of a previously filtered
layer, but instead can apply box filters of any size at exactly the
same speed directly on the original image and even in parallel
(although the latter is not exploited here). Therefore, the scale
space is analysed by up-scaling the filter size rather than iteratively
reducing the image size, figure 4.
This is figure 4 in question.
PS: The paper has an explanation of integral image, however the whole content of the paper is based on the particular paragraph above. If anybody has read this paper, can you briefly mention what is going on here. The whole mathematical explanation is quite intricate to have a good grasp first up, so I need some assistance. Thanks.
Edit,couple of issues:
1.
Each octave is subdivided into a constant number of scale levels.
Due to the discrete nature of integral images, the minimum scale
difference between 2 subsequent scales depends on the length lo of the
positive or negative lobes of the partial second order derivative in
the direction of derivation (x or y), which is set to a third of the
filter size length. For the 9x9 filter, this length lo is 3. For two
successive levels, we must increase this size by a minimum of 2 pixels
(one pixel on every side) in order to keep the size uneven and thus
ensure the presence of the central pixel. This results in a total
increase of the mask size by 6 pixels (see figure 5).
Figure 5
I could not make sense of the lines in the given context.
For two successive levels, we must increase this size by a minimum of
2 pixels (one pixel on every side) in order to keep the size uneven
and thus ensure the presence of the central pixel.
I know they are trying to do something with the length of the image: if it's even, they are trying to make it odd, so that there is a central pixel which will enable them to calculate the maximum or the minimum of the pixel gradient. I am a bit iffy about its contextual meaning.
2.
In order to calculate descriptor Haar wavelet is used.
How does the middle region have a low $\sum dx$ but a high $\sum |dx|$?
3.
What is the necessity of having an approximate filter?
4.
I have no issue with the way that they found out the size of the filter. They "did" something empirically. However, I have some nagging issue with this piece of line
The output of the 9x9 filter, introduced in the previous section, is
considered as the initial scale layer, to which we will refer as scale
s = 1.2 (approximating Gaussian derivatives with σ= 1.2).
How did they find the value of σ? Moreover, how is the calculation of scaling done, as shown in the image below? The reason I mention this image is that the value of s = 1.2 keeps recurring, without any clear statement of its origin.
5.
The Hessian Matrix represented in terms of L which is the convolution of second order gradient of Gausssian filter and the image.
However, the "approximated" determinant is said to contain only terms involving the second-order Gaussian filter.
The value of w is:
My question is why the determinant is calculated as above, and what the relationship is between the approximate Hessian and the Hessian matrix.
Answer: What's SURF?
In order to correctly understand what is going on, you also need to be familiar with SIFT: SURF is basically an approximation of SIFT. Now, the real question becomes: what's SIFT?.
SIFT is both a keypoint detector and a keypoint descriptor.
In the detector part, SIFT is essentially a multi-scale variant of classical corner detectors such as the Harris corner, and that has the ability to auto-tune the scale. Then, given a location and a patch size (derived from the scale), it can compute the descriptor part.
SIFT is very good at matching locally affine pieces of images, but it has one drawback: it is expensive (i.e., long) to compute.
A large amount of time is spent in computing the Gaussian scale-space (in the detector part), then in computing histograms of the gradient direction (for the descriptor part).
Both SIFT and SURF can be seen as difference of Gaussians with automatic scale (i.e., Gaussian sizes) selection. Thus, you first construct a scale-space where the input image is filtered at different scales. The scale-space can be seen as a pyramid, where two consecutive images are related by a scale change (i.e., the size of the Gaussian low-pass filter has changed), and scales are then grouped into octaves (i.e., a big change in the size of the Gaussian filter).
In SIFT, this is done by repeatedly filtering the input with a Gaussian of fixed width until the scale of the next octave is reached.
In SURF, you do not suffer any runtime penalty from the size of the Gaussian filter thanks to the use of the integral image trick. Thus, you compute directly the image filtered at each scale (without using the result at the previous scale).
The approximation part
Since computing the Gaussian scale-space and the histograms of the gradient direction is long, it is a good idea (chosen by the authors of SURF) to replace these computations by fast approximations.
The authors remarked that small Gaussians (like the ones used in SIFT) could be well approximated by square integrals (also known as box blur).
These rectangle averages have the nice property to be very fast to obtain thanks to the integral image trick.
Furthermore, the Gaussian scale-space is actually not used per se, but to approximate a Laplacian of Gaussians (you can find this in the SIFT paper).
Thus, you do not need just Gaussian-blurred images, but derivatives and differences of them. So, you just push a bit further the idea of approximating a Gaussian by a box: first derive a Gaussian as many times as needed, then approximate each lobe by a box of the correct size. You will eventually end up with a set of Haar features.
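The integral-image trick that makes those boxes cheap is worth seeing in code. A minimal plain-Python sketch (not SURF's actual implementation): build the summed-area table once, then any box sum costs exactly four lookups, regardless of box size, which is why up-scaling the filter is free.

```python
def integral_image(img):
    """Summed-area table with a zero border: ii[y][x] = sum of img[:y] rows, [:x] cols."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            # prefix above this row plus running sum along this row
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def box_sum(ii, y0, x0, y1, x1):
    """Sum of img[y0:y1][x0:x1] in four lookups -- constant time per box."""
    return ii[y1][x1] - ii[y0][x1] - ii[y1][x0] + ii[y0][x0]
```

A box filter of any size is then a handful of `box_sum` calls, all at the same speed.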
Increment by 2
This is just an implementation artifact, as you have guessed. The goal is to have a central pixel.
The feature descriptor is computed with respect to the center of the image patch to be described.
Middle region
When going from a black ray to a white ray, you have something like $\sum_{\text{all pix in column}} \partial x = A$. Then, going from white to black, you have the opposite sum: $\sum_{\text{all pix in column}} \partial x = -A$. Thus, you have a small $\sum \partial x$ for the window, but a higher sum of the magnitudes.
Magic number
The first scale is obtained by applying a blur with $\sigma = 1.2$ (or 1.4 in some papers). This is because a natural (real) sharp image can be considered as being the result of the convolution of an ideal (without aliasing) image with a blur kernel of width $\sigma = 1.2$. I can't really remember where it comes from, but it was also explicitly studied in Guoshen Yu's work on A-SIFT, so you might check this page. | {
"domain": "dsp.stackexchange",
"id": 1424,
"tags": "image-processing, computer-vision, multi-scale-analysis"
} |
Is this code thread-safe - Singleton Implementation using Concurrent Dictionary | Question: class Connection
{
private string param1;
private string param2;
    private static readonly ConcurrentDictionary<Tuple<string, string>, Connection>
        activeConnections = new ConcurrentDictionary<Tuple<string, string>, Connection>();
private Connection()
{
//Prevent instantiation
}
private Connection(string param1, string param2)
{
this.param1 = param1;
this.param2 = param2;
}
public static Connection getInstance(string param1, string param2)
{
        Connection conn = activeConnections.GetOrAdd(new Tuple<string, string>(
            param1, param2), new Connection(param1, param2));
return conn;
}
}
Answer:
Never create two connection objects with the same parameters. If one exists, use it.
If you really need to guarantee this, then I think you will need to use locking instead of ConcurrentDictionary.
If it's okay to create duplicate Connections (that will never be used) in rare circumstances, then you can use an overload of GetOrAdd() that takes a lambda that creates the Connection:
return activeConnections.GetOrAdd(
Tuple.Create(param1,param2), _ => new Connection (param1, param2));
With your current code, every time you call getInstance(), a new Connection is created and then most of the time thrown away. | {
"domain": "codereview.stackexchange",
"id": 10999,
"tags": "c#, thread-safety, singleton"
} |
Mass and distance of the bodies of the solar system? | Question: This might be a bit of a historical question in nature.
Obviously, given that we know the constant G, the mass of the sun, and the distance between a solar body and the sun, we can calculate its mass. Ditto if one of the other variables is missing.
What I don't understand however is how we managed to find the initial variables that we used to calculate all other variables? E.g. how did we find the distance between the Earth and sun and the sun's mass?
Sorry if the question is more historical than physical, but I couldn't find a place that describes how we arrived at our current knowledge.
Answer: $G$ was historically calculated from the Cavendish experiment, involving balls and a torsion balance. The earth's mass was actually calculated before the sun's mass. Using the assumption that the earth was a sphere, its circumference and thus its radius could be determined through geodesy, as was done historically even before Newton. The acceleration of an object, and thus the average force exerted on the object, is easy to compute, and this was done by Galileo and others. With these facts, the earth's mass could be calculated. We can use Kepler's laws and geometry to determine the earth-sun distance, or parallax. Since we obviously know the period, from this, we can easily determine the sun's mass. Finally, as for determining the distances to other space bodies, for planets we use Kepler's laws and geometry or parallax and for other bodies in modern times we use more sophisticated apparatus. | {
"domain": "physics.stackexchange",
"id": 22143,
"tags": "astronomy, history, solar-system"
} |
Why does the conductivity $\sigma$ decrease with the temperature $T$ in a semi-conductor? | Question: We performed an undergrad experiment where we looked at the resistance $\rho$ and Hall constant $R_\text H$ of a doped InAs semiconductor with the van der Pauw method. Then we cooled it down to around 40 K and did temperature-dependent measurements up to around 270 K. We were asked to create the following three plots from our measurements and interpret them.
This is conductivity $\sigma = 1 / \rho$ versus the inverse temperature $T^{-1}$. I see that increasing the temperature (to the left) decreases the conductivity. I do understand that higher temperatures do that since the electrons (or holes) have more resistance due to phonon scattering. However, since higher temperatures mean a higher amount of free electrons, I would think that $\sigma$ should go up, not down.
http://chaos.stw-bonn.de/users/mu/uploads/2013-12-07/plot1.png
The density of holes $p = 1/(e R_\text H)$ does increase with the temperature, that is what I would expect:
http://chaos.stw-bonn.de/users/mu/uploads/2013-12-07/plot2.png
And the electron mobility $\mu = \sigma R_\text H$ decreases with the temperature as well:
http://chaos.stw-bonn.de/users/mu/uploads/2013-12-07/plot3.png
Now, I am little surprised that even though $p$ goes up with $T$, $\mu$ and $\sigma$ go down with $T$. Are the effects of phonon scattering and other things that increase the resistance that strong?
Answer: Phonon scattering goes up a lot as temperature increases -- faster than electron numbers increase in the conduction band.
Keep in mind that phonons obey the Bose-Einstein distribution, so their numbers scale like
$$N_{BE}=\frac{1}{e^{\frac{\hbar\omega}{k_b T}}-1}$$
In the large $T$ limit, this becomes
$$\frac{k_b T}{\hbar\omega}$$
So their numbers roughly scale linearly with temperature at "high temperature". For phonons, "high temperature" means above the Debye temperature, but that's only ~650K for silicon; you're a good chunk of the way there at room temperature.
However, electrons follow a Fermi-Dirac distribution, so you'd expect their numbers to scale like
$$N_{FD}=\frac{1}{e^{\frac{\epsilon}{k_b T}}+1}$$
In the large T limit, this goes to $\frac{1}{2}$.
There's also a chemical potential for the electrons that limits their numbers. Phonons have no such restriction; given the energy, you can have as many phonons as you want.
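A quick numerical check of those two limits (a sketch with energy expressed in units of $k_bT$, so the argument below stands for $\hbar\omega/k_bT$ or $\epsilon/k_bT$):

```python
import math

def n_be(x):
    """Bose-Einstein occupation; x = (energy) / (k_b * T)."""
    return 1.0 / (math.exp(x) - 1.0)

def n_fd(x):
    """Fermi-Dirac occupation; x = (energy) / (k_b * T)."""
    return 1.0 / (math.exp(x) + 1.0)

# As T grows (x -> 0), the phonon number blows up roughly like 1/x,
# i.e. linearly in T, while the electron occupation saturates at 1/2.
```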
Even if you're not talking about high temperatures, note that $N_{BE}>N_{FD}$ is always true. | {
"domain": "physics.stackexchange",
"id": 10934,
"tags": "scattering, semiconductor-physics, phonons, cryogenics"
} |
Is it possible for bubbles to exist in vacuum? | Question: In the case of a bubble, the outside pressure is less than the inside pressure.
If that is the case, can bubbles exist in a vacuum? I am not sure, but this should be true if a vacuum has zero pressure.
Answer: Yes, a bubble can exist in vacuum. A bubble itself has surface tension which tries to minimize the surface area, i.e. tries to push inward. It is small compared with the atmosphere on Earth though. But in the vacuum, there is no pressure from the outside and very little pressure from the inside. Thus, the surface tension becomes significant.
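To put rough numbers on this, one standard quantification is the Young-Laplace excess pressure; for a thin soap film with two surfaces it is $\Delta P = 4\gamma/r$ (the value $\gamma \approx 0.072\ \mathrm{N/m}$, roughly that of water, is just an illustrative assumption):

```python
def soap_bubble_excess_pressure(gamma, radius):
    """Young-Laplace excess pressure for a thin soap film (two surfaces): 4*gamma/r.

    gamma: surface tension in N/m (illustrative value assumed below);
    radius: bubble radius in m.
    """
    return 4.0 * gamma / radius

# A 1 cm bubble with gamma ~ 0.072 N/m holds only ~29 Pa above ambient,
# tiny next to Earth's ~101 kPa atmosphere, so in vacuum this small excess
# pressure is all the surface tension has to balance.
```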
The final shape will be in equilibrium with the inner pressure. If the inner pressure is too large, the bubble will simply burst. That said, the pressure allowed inside is very small compared to the atmosphere. | {
"domain": "physics.stackexchange",
"id": 93946,
"tags": "pressure, fluid-statics, vacuum, surface-tension, bubbles"
} |
Which variables are controlled in the mammalian womb to ensure a healthy environment? | Question: Recently, researchers have had some success with artificial wombs. Which aspects of the womb are difficult to replicate through technology?
Answer: It seems you are referring to this recent advance:
http://www.sciencemag.org/news/2017/04/fluid-filled-biobag-allows-premature-lambs-develop-outside-womb
https://www.nature.com/articles/ncomms15112
Involving putting premature lambs (delivered via caesarean) in bags of nutritive fluid and connecting the umbilical cord to an apparatus to oxygenate the blood.
Your question asks which variables are controlled in a mammalian womb, which I think is a misleading way to look at what a womb does and thus what the difficulties are with reproducing it. Namely, it makes it sound like the womb is a container whose purpose is to keep environmental variables (temperature, pressure, etc) at a good value for the development of a fetus. The problem is the womb doesn't just contain the fetus, it feeds it via the placenta. And the placenta is a structure built by fetus during embryogenesis, that connects to the lining of the uterus and enmeshes itself with it so that there is a huge surface of contact between the blood vessels of the mother and the blood vessels of the placenta, so oxygen and nutrients can pass through the mother's blood vessels to the placenta's and from there to the fetus via the umbilical cord. This allows the fetus to get all the nutrients and oxygen it needs to develop.
The "Biobag" these researchers created for these lambs is designed for fetuses who are advanced enough in their development that the main problem they have with surviving outside the womb is that their lungs are immature; the main purpose of this contraption is to allow oxygenation via their umbilical cord while keeping their lungs filled with fluid. And it also seems like they're being fed via that fluid, which suggests they're mature enough to get nutrition via their digestive system.
The paper gives quite a few details of the challenges involved in even that comparatively low level of uterine service. Infection, setting up the cannulas and oxygenating system so that the blood flow is just right and doesn't overtax the fetal heart, making sure there is no clotting, haemorrhage or brain damage...
I imagine the main challenge of making an actual artificial womb that's usable from implantation on would be emulating implantation itself, with artificial endometrial lining that a placenta could naturally develop in and intermingle its blood vessels with and somehow accurately reproducing the flow of blood that's necessary to feed the placenta. If we assume an artificial womb for a fetus that already has an umbilical cord, so we basically need to emulate the placenta's job not the uterus', but can be used earlier than Partridge et al.'s Biobag, then I imagine all the concerns they had about cannula size and getting the blood flow just right would be a million times harder with tinier, less-mature umbilical cords and fetuses. Not to mention if they need to put nutrients in the blood and not just oxygen.
"domain": "biology.stackexchange",
"id": 7068,
"tags": "reproduction, human-physiology, mammals, pregnancy"
} |
Do CNN convolution and pooling layers get backpropogated? | Question: I can't find a simple answer to this by Googling which leads me to think the answer is no, but I want to be sure...
In a feed forward network, all of the layers of weights get backpropogated, but what happens in a convolutional neural network on the backprop step? Is it only the feedforward part of the network (after the convolution and pooling layers) that gets backpropped? Which would mean that the convolutional layers are a form of static feature extraction...
Answer: All layers of a neural network take part in the back-propagation process.
This includes the convolutional layers and the pooling layers. In general, every step of the network that the input has to go thru, the back propagation goes thru as well (in reverse order).
However, not all layers contain trainable parameters. For example standard pooling layers (max-pooling, average-pooling) and standard activation layers (sigmoid, ReLU, softmax) don't have any parameters to adjust. They still take part in the back propagation, contributing their partial derivatives, but they just have no weights that can be updated.
The convolutional layers do contain weights that are updated during the process (the parameters of the filter and their bias).
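A tiny sketch of why a pooling layer participates in backprop without owning weights: in max-pooling, the backward pass just routes the upstream gradient to the input that won the max; there is nothing to update.

```python
def maxpool_forward(window):
    """Forward pass over one flat pooling window: returns (max value, argmax index)."""
    idx = max(range(len(window)), key=lambda i: window[i])
    return window[idx], idx

def maxpool_backward(upstream_grad, idx, n):
    """Backward pass: the gradient flows only to the winning input.

    Note there are no parameters here -- nothing gets updated, yet the
    layer still propagates gradients to earlier (trainable) layers.
    """
    grad = [0.0] * n
    grad[idx] = upstream_grad
    return grad
```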
Note: I assume that what you refer to as the "feed forward part" are the
fully-connected layers that are usually placed as the final layers of
a network. In standard CNNs, the whole network is "feed forward" (including the convolutional layers);
it just means that the input goes through a sequential pipeline until
becoming the final output. | {
"domain": "datascience.stackexchange",
"id": 5725,
"tags": "machine-learning, neural-network, cnn, convolution"
} |
Why does the spring constant not depend on the mass of the object attached? | Question: It is said that:
$$ F = -m\omega^2 x = -kx, $$
so $k=m\omega^2$. Since $k$ is the spring constant it doesn't depend on the mass of the object attached to it, but here $m$ signifies the mass of the object. Then how is $k$ independent of the mass attached?
Answer: $\omega$ isn't a constant of the spring, but it actually depends on the mass you attach to the spring. $\omega$ refers to the frequency of oscillation of the attached mass. The formula for $\omega$ for an attached mass $m$ is $\sqrt{\frac{k}{m}}$, where $k$ is the spring constant. If you use $\omega=\sqrt{\frac{k}{m}}$ in the formula, $m$ cancels out, leaving only $k$. | {
"domain": "physics.stackexchange",
"id": 70764,
"tags": "newtonian-mechanics, mass, harmonic-oscillator, spring"
} |
Proper way to shutdown /rosout (roscore) | Question:
Hello,
I have a project where I start Gazebo and several other ROS nodes.
I want to be able to shut them all down at some point. I have a node that waits for a certain message and then shuts all other nodes down, followed by a self-shutdown call like:
system("rosnode kill NAME_OF_NODE"); // shutdown other nodes
ros::shutdown(); // shutdown this node
This does work for everything except the /rosout node. The /rosout node always respawns if killed with rosnode kill.
What I tried is to start a roscore in the starting program before the other roslaunch files are executed, and then at the end, when all nodes have finished cleanly, to kill the roscore with killall like:
system("roscore &"); // start roscore
... // starting other nodes with roslaunch
system("killall roscore"); // kill roscore
But this does seem like a dirty way. Is there a better way to do this?
Originally posted by JulianR on ROS Answers with karma: 1 on 2017-06-05
Post score: 0
Original comments
Comment by Akhilesh on 2017-06-06:
Hi @Gieseltrud, try http://answers.ros.org/question/250182/terminate-ros-program/#250224
Answer:
My solution is to use a launch file to start all the relevant nodes, with the "terminator" node set to required="true". When the terminator node exits, it will trigger all of the other nodes to close if they are using things like ros::ok() to determine if they should keep running. This only works when roscore has been started by this launch file.
Originally posted by Dale McConachie with karma: 91 on 2017-06-07
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by JulianR on 2017-06-08:
This does work fine. There is still a red message informing about the initiation of the shutdown, but I guess that is just how it is.
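A minimal sketch of the launch-file pattern described in this answer; the package and node names here are hypothetical:

```xml
<launch>
  <!-- hypothetical worker node -->
  <node pkg="my_pkg" type="worker_node" name="worker" />
  <!-- when this required node exits, roslaunch initiates shutdown of
       every node it started (including the roscore it spawned, if any) -->
  <node pkg="my_pkg" type="terminator_node" name="terminator" required="true" />
</launch>
```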
"domain": "robotics.stackexchange",
"id": 28061,
"tags": "c++, rosout, rosnode, roscore, respawn"
} |
When can we handle a quantum field like a classical field? | Question: I am curious that, are there any criterion to justify the use of a classical field to describe a fundamentally quantum field? To rephrase in another way, when we can take the classical limit of a quantum field (I am not asking how to take the classical limit)?
For instance, usually in the slow-roll inflation, the scalar field is taken as classical when it is far from the bottom of potential. Why can we do this? Are there any argument to show the quantum effects are negligible?
I think this is also related to the use of wave-packet in quantum mechanics.
Answer: A clean way to make the concept of a classical field precise it to phrase things in terms of a quantum effective action: given a generating function of connected and renormalised Green functions, $W(J)$, with
$$
e^{W(J)} = \int \mathcal{D}\phi\,e^{-I[\phi]+\int d^dx\,J\phi}
$$
the quantum effective action, $\Gamma[\varphi]$, is the Legendre transform (when it exists),
$$
W(J) = -\Gamma[\varphi]+\int J\varphi, \quad {\rm where}\quad \varphi(J)\equiv \frac{\delta W(J)}{\delta J}.
$$
It is assumed here that $\varphi(J)$ can be inverted (at least within perturbation theory) in the sense that one can derive from it an explicit expression for $J(\varphi)$. So we must assume that $J(\varphi)$ exists and is single-valued.
Then the quantum effective action contains exact information about the full quantum theory.
Suppose now that $\bar{\varphi}$ solves the full quantum equations of motion of $\Gamma[\varphi]$, that is:
$$
\frac{\delta \Gamma[\varphi]}{\delta \varphi}\Big|_{\varphi=\bar{\varphi}}=0.
$$
It then follows from the Legendre transform above that this solution for the full quantum effective action is the one-point function:
$$
\frac{\delta W(J)}{\delta J}\Big|_{J=0}=\bar{\varphi}.
$$
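Spelling out the intermediate step: varying $\Gamma[\varphi] = -W(J(\varphi)) + \int J(\varphi)\,\varphi$ with respect to $\varphi$ and using $\delta W/\delta J = \varphi$ gives
$$
\frac{\delta \Gamma[\varphi]}{\delta \varphi} = -\frac{\delta W}{\delta J}\,\frac{\delta J}{\delta \varphi} + \frac{\delta J}{\delta \varphi}\,\varphi + J = J(\varphi),
$$
so the stationarity condition $\delta\Gamma/\delta\varphi\,\big|_{\bar{\varphi}} = 0$ is equivalent to $J(\bar{\varphi}) = 0$, and evaluating $\varphi = \delta W/\delta J$ at $J = 0$ identifies $\bar{\varphi}$ with the one-point function.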
Everything would be consistent if in our original path integral we were computing quantum fluctuations around the full quantum corrected background, i.e. if we expanded $\phi=\bar{\varphi}+\tilde{\phi}$ and integrated over $\tilde{\phi}$ in the defining path integral of $W(J)$. This is a non-trivial requirement, but it is understood how to do this.
So the quantity $\bar{\varphi}$ is an exact onshell field that minimises $\Gamma(\varphi)$ and $\varphi$ is the corresponding offshell field that appears in the generic action $\Gamma(\varphi)$. Now comes a crucial point: information about the full quantum theory is contained in the tree level diagrams of $\Gamma(\varphi)$, i.e. in the "classical equations of motion" of $\Gamma(\varphi)$. Furthermore, when you compute $\Gamma(\varphi)$ within perturbation theory you will find that you can rearrange it in a loop expansion (i.e. expansion in $\hbar$, forget about Wilsonian effective actions here),
$$
\Gamma(\varphi) = \sum_{\ell=0}^{\infty}\Gamma_{\ell}(\varphi),
$$
where $\ell$ denotes the loop order. The $\ell=0$ term is usually a manifestly local action and the higher order ($\ell>0$) terms are often highly non-local (especially in massless theories). (For massive theories the seemingly non-local terms can be written as an infinite superposition of local terms but this is not possible for massless theories. It is this latter expansion that makes the link to Wilsonian renormalisation, because here this superposition is organised in terms of energy scales. So all these concepts are very closely and intimately related.)
So finally we can answer your question: when the higher loop ($\ell>0$) terms in the expansion of $\Gamma(\varphi)$ are negligible the full dynamics is essentially captured by the classical equations of motion of the $\ell=0$ term, $\Gamma_0(\varphi)$. It is this quantity that one usually identifies with the dynamics of the classical field $\varphi$, and, e.g., in the context of inflation $\varphi$ (or, onshell $\bar{\varphi}$) would be the inflaton. To leading order it is often the case that $\Gamma_0(\varphi)$ coincides with the bare action $I(\phi)$ (in form) when the latter is treated classically. This is why we can consider classical fields and their classical dynamics, and this has its origins in the full quantum theory. It is often the case however that in the literature people do not distinguish between $\Gamma_0(\varphi)$ and $I(\phi)$, and this is what causes all the confusion that many people have. At least this is my understanding. | {
"domain": "physics.stackexchange",
"id": 46952,
"tags": "quantum-field-theory, field-theory, classical-field-theory"
} |
Measuring instruments and significant figures | Question: To start with, I am familiar with the rules that we have for sig-figs.
I have two chemistry books by different authors (One by McMurry and the other one by Nivaldo J Tro).
I was doing some casual reading; I am familiar with sig-figs (or that's what I thought).
According to the first book, let's say I measure the thickness of the cover of a hardbound version of a book with a ruler that is accurate up to the nearest millimeter, and I find that it is 3±1 mm thick, or 0.3±0.1 cm thick. The book says I can claim just one significant figure when I use a ruler because it is accurate only up to 0.1 cm, or 1 mm. The book clearly wants to say that I can't claim more than one sig fig if I use the ruler. For example, I can't say that the cover is about 3.2 mm or 3.4 mm thick, because of the limitations of the ruler: it cannot resolve anything finer than 0.1 cm, or 1 mm.
On the other hand, if I use an instrument that is accurate up to 0.01 mm (Vernier caliper, for example), I find that the book is 3.25±0.01 mm thick. It means I can claim three sig figs if I use a Vernier caliper because it is accurate up to 0.01 mm. Out of these three sig figs (3 2 5), 3 and 2 are certain digits and 5 is uncertain.
The other book, however, says that if I use a ruler (accurate up to the nearest mm) to measure the cover of the book, I can be sure that the cover is at least 3 mm thick, and then I can mentally divide the little space for a mm on the ruler into 10 parts and estimate that the cover is about 3.2 mm thick (or 3.3 mm, whatever). Now I can say that the cover is about 3.2 mm thick, out of which 3 is the certain digit and 2 is uncertain. That's what the book says.
In other words, there are two sig-figs according to the 2nd book when we use a ruler to measure the thickness of the cover of a hardbound book. On the other hand, the first book says that we can claim only one sig-fig if we use the same ruler that is accurate up to the nearest mm.
I am confused, which one is correct? From what I understand about significant figures, I would think that the first book is correct in saying that we can't have two sig-figs if we use a ruler that can accurately measure up to the nearest mm. But the second book says that we can have two sig-figs.
Please help me with it
Answer: For most common measurements, the last significant digit is the first uncertain digit. The eye plus a ruler can estimate to $\pu{0.1 mm}$, so it is $\pu{ 3.2 +- 0.1 mm}$, not $\pu{3 +- 1 mm}$.
There is a big difference between, e.g., some 1st-generation digital voltmeter displaying only integer values of voltage, and an analogue voltmeter with integer voltage marks, where you can estimate the position between marks. The 1st book speaks about the former, the 2nd book about the latter. Common sense must tell you that you know the thickness better than just "somewhere within $\pu{ 2-4 mm}$".
In large samples, where the error of the standard-deviation estimate is small, the uncertainty can overlap 2 digits. That applies e.g. to massive and very accurate parallel measurement of physical values, like $1.12345678 \pm 0.00000034$, by convention written as $1.12345678(34)$.
"domain": "chemistry.stackexchange",
"id": 13790,
"tags": "significant-figures"
} |
confusing about simple ros c++ code | Question:
Hi,
I have a simple question about C++ syntax. This is the main.
int main(int argc, char** argv)
{
ros::init(argc, argv, "joint_trajectory_action_node");
ros::NodeHandle node;//("~");
JointTrajectoryExecuter jte(node);
ros::spin();
return 0;
}
And this is part of the constructor of JointTrajectoryExecuter
public:
JointTrajectoryExecuter(ros::NodeHandle &n) :
node_(n),
action_server_(node_, "joint_trajectory_action",
boost::bind(&JointTrajectoryExecuter::goalCB, this, _1),
boost::bind(&JointTrajectoryExecuter::cancelCB, this, _1),
false),
has_active_goal_(false)
node_ is defined as ros::NodeHandle node_;
I don't understand "node_(n)".
Can anyone give me any hint?
Thanks!
Originally posted by AdrianPeng on ROS Answers with karma: 441 on 2013-05-31
Post score: 0
Original comments
Comment by mortonjt on 2013-05-31:
I think you may be referring to an initializer list
Answer:
If you look at the ros::NodeHandle documentation, you can find a copy constructor that matches that signature:
ros::NodeHandle::NodeHandle(const NodeHandle &rhs)
Essentially, this creates a new NodeHandle instance that inherits a copy of all the parameters from the original NodeHandle. In the JointTrajectoryExecuter class you reference, this creates a local NodeHandle variable within the class that is a copy of the NodeHandle created when the node is first initialized. The class can then use this local copy for handling actions, messages, etc.
In general, this isn't necessary, as all NodeHandle objects in a given node point to the same underlying object. But this version does allow the new NodeHandle to inherit the namespace of the original NodeHandle, if one was specified.
Originally posted by Jeremy Zoss with karma: 4976 on 2013-05-31
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 14385,
"tags": "ros, c++, nodehandle"
} |
FIR filter design with matlab firls() function | Question: I have defined the following linear phase filter of order 4 (real and symmetric) in matlab:
h_d = [0.8367 -1.537 1 -1.537 0.8367]
I calculate the frequency response (for many points) and then I use firls function (which is supposed to give the least-squares solution) to find a linear phase FIR filter (5 taps) that matches the desired response. However the filter is nowhere close to h_d. What am I doing wrong?
Here is the code:
f = [0:1/(2^14-1):1];
[a,~] = freqz([0.8367 -1.537 1 -1.537 0.8367],1,f*pi);
a=abs(a);
n = 4; % Filter order
b_lp = firls(n,f,a);
[h,w] = freqz(b_lp,1,512);
figure(1)
plot(f,20*log10(abs(a)),'-s')
hold on
plot(w/pi,20*log10(abs(h)))
I appreciate any insights.
Answer: The function firls() is meant to design filters with piecewise constant magnitude responses. So in practice you use only a few frequency points and the corresponding desired magnitude values, and the function computes a linear interpolation between the given frequency points. Of course, in theory your call to firls is correct, but I guess that the resulting system of linear equations becomes ill-conditioned.
I wrote a function lslevin.m, which can be used in the way you intended to use firls:
h = [0.8367 -1.537 1 -1.537 0.8367];
[H,w] = freqz(h,1,2048); % that's more than enough frequency points
h2 = lslevin(5,w,H,ones(size(H)));
[H2,w] = freqz(h2,1,2048);
plot(w/pi,abs(H),w/pi,abs(H2),'r--') | {
"domain": "dsp.stackexchange",
"id": 9596,
"tags": "matlab, filters, discrete-signals, filter-design, finite-impulse-response"
} |
Grouping with non-sequential index (datetime) [Pandas] [Python] | Question: I am working in Python. I have a DataFrame whose index is of type DateTime; the index times are not continuous. You can see that the first three rows are consecutive, and after the third row it jumps directly to minute 50. The entire DataFrame has this pattern.
datos_frecuencia
<class 'pandas.core.frame.DataFrame'>
State
2021-05-07 19:45:00 1.0
2021-05-07 19:46:00 -1.0
2021-05-07 19:47:00 0.0
2021-05-07 19:50:00 -1.0
2021-05-07 19:51:00 1.0
2021-05-07 19:52:00 1.0
2021-05-07 19:55:00 1.0
2021-05-07 19:56:00 -1.0
2021-05-07 19:57:00 -1.0
2021-05-07 20:00:00 -1.0
2021-05-07 20:01:00 1.0
2021-05-07 20:02:00 -1.0
2021-05-07 20:05:00 -1.0
2021-05-07 20:06:00 1.0
2021-05-07 20:07:00 -1.0
2021-05-07 20:10:00 0.0
2021-05-07 20:11:00 -1.0
2021-05-07 20:12:00 1.0
2021-05-07 20:15:00 -1.0
2021-05-07 20:16:00 -1.0
2021-05-07 20:17:00 -1.0
2021-05-07 20:20:00 -1.0
2021-05-07 20:21:00 -1.0
2021-05-07 20:22:00 -1.0
2021-05-07 20:25:00 -1.0
2021-05-07 20:26:00 0.0
2021-05-07 20:27:00 -1.0
I need to group this DataFrame into groups of 3 rows and sum the "State" column within each group.
I have tried using resample (), as follows.
datos_frecuencia["State"].resample("3min").sum()
2021-05-07 19:54:00 0
2021-05-07 19:57:00 -1
2021-05-07 20:00:00 -1
2021-05-07 20:03:00 -1
2021-05-07 20:06:00 0
2021-05-07 20:09:00 -1
2021-05-07 20:12:00 1
2021-05-07 20:15:00 -3
2021-05-07 20:18:00 -1
2021-05-07 20:21:00 -2
2021-05-07 20:24:00 -1
2021-05-07 20:27:00 -1
But the result is not what is expected, since this way resample() produces timestamps that do not exist in the original DataFrame. For example resample shows: 2021-05-07 20:03:00 -1, but minute 3 of hour 20 does not appear in the main DataFrame.
You would need it to be grouped as follows, taking the sum of the "state" column:
State
[2021-05-07 19:45:00 1.0
2021-05-07 19:46:00 -1.0
2021-05-07 19:47:00 0.0]
[2021-05-07 19:50:00 -1.0
2021-05-07 19:51:00 1.0
2021-05-07 19:52:00 1.0]
[2021-05-07 19:55:00 1.0
2021-05-07 19:56:00 -1.0
2021-05-07 19:57:00 -1.0]
[2021-05-07 20:00:00 -1.0
2021-05-07 20:01:00 1.0
2021-05-07 20:02:00 -1.0]
[2021-05-07 20:05:00 -1.0
2021-05-07 20:06:00 1.0
2021-05-07 20:07:00 -1.0]
[2021-05-07 20:10:00 0.0
2021-05-07 20:11:00 -1.0
2021-05-07 20:12:00 1.0]
[2021-05-07 20:15:00 -1.0
2021-05-07 20:16:00 -1.0
2021-05-07 20:17:00 -1.0]
[2021-05-07 20:20:00 -1.0
2021-05-07 20:21:00 -1.0
2021-05-07 20:22:00 -1.0]
[2021-05-07 20:25:00 -1.0
2021-05-07 20:26:00 0.0
2021-05-07 20:27:00 -1.0]
The end result should be a dataFrame with the following data:
State
2021-05-07 19:45:00 0.0
2021-05-07 19:50:00 1.0
2021-05-07 19:55:00 -1.0
2021-05-07 20:00:00 -1.0
2021-05-07 20:05:00 -1.0
2021-05-07 20:10:00 0.0
2021-05-07 20:15:00 -3.0
2021-05-07 20:20:00 -3.0
2021-05-07 20:25:00 -2.0
Do you know of a function in Pandas with which this result can be obtained?
Answer: Try this: reset the index first (the groupby below needs a positional integer index, not the DatetimeIndex), then group the rows in blocks of 3 by integer division of the positional index:
df = datos_frecuencia.reset_index()  # the datetime index becomes a column named "index"
df.groupby(df.index // 3).agg({"index": "min", "State": "sum"})
This keeps the first timestamp of each 3-row block and sums "State", which yields exactly the expected DataFrame shown in the question.
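For completeness, here is a self-contained sketch of the positional-grouping approach, with the data reconstructed from the question (the column name "index" is what pandas produces when resetting an unnamed DatetimeIndex):

```python
import pandas as pd

# Rebuild the question's data: blocks of 3 consecutive minutes, every 5 minutes.
times = [pd.Timestamp("2021-05-07 19:45") + pd.Timedelta(minutes=5 * b + m)
         for b in range(9) for m in range(3)]
state = [1, -1, 0,  -1, 1, 1,  1, -1, -1,  -1, 1, -1,  -1, 1, -1,
         0, -1, 1,  -1, -1, -1,  -1, -1, -1,  -1, 0, -1]
df = pd.DataFrame({"State": state}, index=times)

# Positional grouping: reset the index, then group every 3 rows together.
g = df.reset_index()                       # datetime index -> column "index"
result = g.groupby(g.index // 3).agg({"index": "min", "State": "sum"})
# result["State"].tolist() -> [0, 1, -1, -1, -1, 0, -3, -3, -2]
```

The per-group sums match the expected DataFrame in the question, with each group labelled by its first timestamp.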
"domain": "datascience.stackexchange",
"id": 9543,
"tags": "python, pandas"
} |
Randomly assign n elements to n agents such that each agent only knows its own element | Question: Problem
I'm working on an app that involves shuffling and distributing playing cards to players. As a challenge, I tried to solve this problem in a way that doesn't require a trusted intermediary.
In other terms, the task is to find a distributed algorithm that
uniquely assigns $n$ agents numbers $1..n$
allows each agent to know nothing about the assignment but its own
when revealing the assignment, allows other players to verify the assignment
We also assume that knowing other's assignment is an advantage for each agent, and revealing its own prematurely a disadvantage. Agents are also assumed to be able to talk with each other in a way hidden from all other agents.
Partial solution
The solution I came up with works only under the assumption that adversaries do not collaborate.
The idea is to create a set of $n$ nonces, and assign each agent exactly one nonce. The set is then passed from agent to agent in an agreed upon order, hidden from all others, until each agent has received the set exactly once. Each time an agent receives the set, it swaps its nonce with a new one, memorizes the new nonce, and confirms receipt of the set to the others. This entire procedure is done twice, at which point all agents have received the set at least once after all other agents swapped their nonces, making it impossible to recognize and hence map the nonces to the other agents.
When the last agent receives the set the second time, it shares it with everyone, and all agents confirm to the others that their nonce is contained in the set. The agents then assign a number to each nonce in the set based on an agreed upon random seed, giving us the required unique assignment.
To allow ownership verification, instead of the nonces, agents put the hash value of their nonce on the set, revealing the actual nonce only when verification is required.
The problem with this solution is that if adversaries are allowed to collaborate, each time an adversary receives the set, they can compare their versions, identify changes and potentially derive which nonce belongs to other agents, allowing them to know what number got assigned to them.
All ideas are appreciated!
Answer: This problem is part of a set of problems known as Mental Poker.
There is an excellent and short article by Shamir, Rivest and Adleman that describes a practical way to implement such an algorithm.
The abstract is gold:
Can two potentially dishonest players play a fair game of poker without using any cards (e.g. over a phone)?
This paper provides the following answers:
No. (Rigorous mathematical proof supplied.)
Yes.(Correct & complete protocol given.)
Basically, from an information-theoretic point of view, playing such a game is impossible without a trusted third party, but it is feasible to design a protocol using well-known hard to reverse cryptographic functions.
The algorithm works as follows:
Suppose you have $n$ nonces (numbers from $1$ to $n$). Each participant in the protocol has access to secret functions $e_i$ and $d_i$, which are used to encrypt and decrypt data. These functions should satisfy a few properties, which we will analyze later.
The first participant is given the full set of nonces. It encrypts each nonce with its secret function $e_1$, shuffles them randomly, and passes the resulting data to the second participant.
The second participant does the same, except that its input is not the raw nonces but a random permutation of the encrypted nonces. It also encrypts each element with its own secret function $e_2$ and shuffles the data again.
Then the third participant, and so on, until all participants have shuffled and encrypted the data with their secret functions.
Overall the setup process is:
data = [1..n]
for i in [1..n]:
data = [e_i(nonce) for nonce in data]
shuffle(data)
After this setup, each element of data is a nonce encrypted with all secret functions $e_i$, in a random order unknown to any participant.
Notice that at this point it is impossible for any participant to deduce a nonce without the help of the other participants. And it is enough for one of the participants to shuffle the data completely at random to remove any possible bias in the resulting order, even if some malicious participant tries to bias it.
The $i$-th participant is assigned the nonce at position $i$. To recover this nonce, participant $i$ asks each other participant $j \neq i$, in turn, to decrypt the value with their secret function $d_j$. At the end participant $i$ holds its nonce encrypted only with its own secret function $e_i$, so it can recover it using $d_i$.
Recovering nonce $i$:
nonce_i_encrypted = data[i]
for j in [1..n]:
if j == i:
continue
nonce_i_encrypted = d_j(nonce_i_encrypted)
nonce_i = d_i(nonce_i_encrypted)
At the end of this procedure each participant knows its own nonce, but no other participant knows anything about it.
Later, a player can prove that they really control a given nonce either by publishing their secret functions $e_i$ and $d_i$, so everyone can recompute all the values locally, or by decrypting the value data[i] after the first step but before the second step, publishing it, and then using this published value as the input of the second step to fully decrypt it.
The functions $e_i$ and $d_i$ should have the following properties:
$e_i(x)$ is the encrypted version of a message $x$ under key $i$,
$d_i(e_i(x)) = x$ for all messages $x$ and keys $i$,
$e_i(e_j(x)) = e_j(e_i(x))$ for all messages $x$ and keys $i$ and $j$,
Given $x$ and $e_i(x)$ it is computationally impossible for a cryptanalyst to derive $i$, for all $x$ and $i$,
Given any messages $x$ and $y$, it is computationally impossible to find keys $i$ and $j$ such that $e_i(x) = e_j(y)$.
As stated in the paper, most of these assumptions are fairly standard except for property 3), which says the encryption functions must commute.
They propose a simple encryption scheme that satisfies these properties. Say all participants agree on some large prime number $P$ (it is ok if it is fixed in the protocol), and each participant secretly chooses a random number $k_i$ such that $gcd(k_i, P - 1) = 1$. Let $q_i$ be the value such that $k_i \cdot q_i \equiv 1 \mod(P -1)$. Then participant $i$ can use secret functions:
$$e_i(x) = x^{k_i} \mod(P)$$
$$d_i(y) = y^{q_i} \mod(P)$$
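A minimal runnable sketch of this commutative-exponentiation scheme (requires Python 3.8+ for the modular inverse via pow(k, -1, m); the prime, group size, and nonce values are arbitrary choices for the demo):

```python
from math import gcd
from random import randrange, shuffle, seed

seed(7)                       # deterministic demo
P = 2**61 - 1                 # a large prime all participants agree on

def make_keypair():
    # secret exponent k coprime with P-1, plus its inverse q mod P-1
    while True:
        k = randrange(3, P - 1)
        if gcd(k, P - 1) == 1:
            return k, pow(k, -1, P - 1)

def enc(x, k): return pow(x, k, P)   # e_i(x) = x^k mod P
def dec(y, q): return pow(y, q, P)   # d_i(y) = y^q mod P

n = 4
keys = [make_keypair() for _ in range(n)]
nonces = list(range(2, 2 + n))       # avoid 0 and 1, which are fixed points

# Setup: every participant encrypts each element and shuffles.
data = nonces[:]
for k, _ in keys:
    data = [enc(x, k) for x in data]
    shuffle(data)

# Participant i recovers the nonce at position i with everyone's help.
def recover(i):
    v = data[i]
    for j, (_, q) in enumerate(keys):
        if j != i:
            v = dec(v, q)            # the others strip their layers...
    return dec(v, keys[i][1])        # ...then i strips its own

assert sorted(recover(i) for i in range(n)) == nonces
```

Commutativity is what makes this work: encryption layers can be removed in any order, since $(x^{k_1 k_2})^{q_1} = x^{k_2} \bmod P$ regardless of who encrypted first.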
Some caveats about this algorithm: participants can't cheat by colluding in a way that helps them learn another peer's nonce (unless, of course, $n-1$ participants collude and only one nonce remains). However, malicious participants can attack the protocol by not participating, effectively stalling the dealing process, since many actions are required from each participant while the nonces are being dealt. They can also produce gibberish, but this can be mitigated by extending the protocol to detect which participant was the culprit and penalizing that participant at a higher level.
I implemented this algorithm for a poker game in NEAR blockchain. You can see the code in rust here. Notice there isn't any trusted third party in a blockchain, but there is a trusted computing environment which all participants can use as a mechanism to run this protocol. | {
"domain": "cs.stackexchange",
"id": 16898,
"tags": "distributed-systems"
} |
Photons and phonons | Question: A few months ago I asked about phonons. I got some very good answers but I still have difficulty getting an intuition for phonons, while somehow photons, which in many ways are similar and which I realize I hardly understand anything about, seem more accessible to intuition.
In Ron Maimon's answer to the question "What exactly is a quantum of light?" he asserts that
A quantum of light of wavelength λ is the minimum amount of energy which can be stored in an electromagnetic wave at that wavelength
and
the classical wave is a superposition of a large number of photons
Translating this to vibrations in a crystal lattice, could we say that a phonon is the minimal amount of energy which can be stored in an lattice vibration in a given mode and that a classical vibration is a superposition of a large number of phonons?
I hope I am correct when I say that the electromagnetic field can interact with matter through the absorption of a photon, and it is this interaction that makes the photon into something particle-like. Do we have the same for phonon-interactions? I.e. that when a crystal vibration interacts with matter it does so by the creation/destruction of whole phonons at a time, which may also get absorbed at more or less precise locations, e.g. the energy of a single phonon is absorbed by a localized electron.
Finally I would like to understand how phonon exchange can effectively establish an attractive force between electrons, but I cannot say I have any intuition for how photons mediate the electromagnetic force either. I am afraid that for the moment this is beyond the scope of my background.
Answer: Here are some wordy, math-free answers.
a phonon is the minimal amount of energy which can be stored in an lattice vibration in a given mode
Sounds good.
I.e. that when a crystal vibration interacts with matter it does so by the creation/destruction of whole phonons at a time, which may also get absorbed at more or less precise locations, e.g. the energy of a single phonon is absorbed by a localized electron.
A phonon is a periodic motion of the atoms in a solid, so I'd argue that it's always interacting with matter since it is matter in motion.
Localization of phonons is a tricky business. The textbook derivations for phonons result in vibrations (waves) that extend through the whole material. However, they're usually treated as localized.
You can make any function by adding up waves of different wavelengths (the waves form a basis), so you can build up localized phonon "packets" from lattice vibrations of different frequencies. Unlike photons, phonons have non-linear dispersion relations -- meaning that waves of different frequencies travel at different speeds (unlike light where all frequencies travel at the same speed, at least in a vacuum), so the packets will eventually fall apart if left alone. However, they can stick together long enough that they can be thought of as particles. If the frequencies in the packet are of a narrow range, you can think of the packet as having a frequency equal to the average frequency of its constituent waves.
This localization makes sense if electrons are likewise localized. If an electrons scatters with a phonon, and that electron is localized, that means the electron is only really interacting with nearby atoms. So, any lattice vibration the electron creates should be initially localized to that region too.
I should add that a major form of phonon scattering is with other phonons. It turns out that you can't have two-phonon processes (two phonons colliding and create two other phonons); you can only have three-phonon processes and higher (e.g. two phonons merge to create a third). You don't have to think of these processes as being localized in space.
Finally I would like to understand how phonon exchange can effectively establish an attractive force between electrons
The atoms in a lattice are charged, so they can pull on nearby electrons. If several atoms are pulling on one electron, then the atoms are effectively pulling on each other and are brought closer together. If the electron is moving, it can leave a wake of atoms that are closer together (a phonon). Atoms being closer together means more positive charge in an area, and that in turn can draw in another electron -- effectively attracting the electrons together.
See
http://hyperphysics.phy-astr.gsu.edu/hbase/solids/coop.html | {
"domain": "physics.stackexchange",
"id": 11149,
"tags": "quantum-mechanics, wave-particle-duality, phonons, quasiparticles"
} |
Is it possible to transfer magnetic force? | Question: I wonder if you can transfer magnetic force, i.e. magnetize a metal and demagnetize the former magnet. Is this possible? If so, how you do it?
Answer: One way to do this is with coupled inductors. The changing magnetic field of one inductor creates an EMF and current in the other inductor, causing that one to emit a magnetic field. | {
"domain": "physics.stackexchange",
"id": 81802,
"tags": "electromagnetism"
} |
Protect Routes/Page with NextJS | Question: I have a simple project that protects the website's routes/pages using if/else statements and by wrapping each page in a withAuth() function, but I'm not sure whether that is the best way to protect routes with Next.js. I also noticed a delay of about 2-3 seconds before the route is protected, during which visitors or unregistered users can see the page's content before being redirected to the login page.
Is there a way to get rid of it or make the check faster so that unregistered users don't see the page's content? Is there a better approach to safeguard certain routes in the Next.js framework?
Code
import { useContext, useEffect } from "react";
import { AuthContext } from "@context/auth";
import Router from "next/router";
const withAuth = (Component) => {
const Auth = (props) => {
const { user } = useContext(AuthContext);
useEffect(() => {
if (!user) Router.push("/login");
});
return <Component {...props} />;
};
return Auth;
};
export default withAuth;
Sample of the use of withAuth
import React from "react";
import withAuth from "./withAuth";
function sample() {
return <div>This is a protected page</div>;
}
export default withAuth(sample);
Answer: Obviously AuthContext from @context/auth is a blackbox to me here, so I'm unable to know if there's a server side auth capability. However, if you're able to retrieve the session server side, then with Next.js, you can use getServerSideProps to redirect to /login if there's no user found. This would prevent the user from seeing any protected content on the page as you mentioned.
For example:
// pages/myPage.js
import { getUser } from "@context/auth";
export async function getServerSideProps(context) {
const { user } = await getUser();
if (!user) {
return {
redirect: {
destination: "/login",
permanent: false,
},
};
}
return {
props: { user },
};
}
const MyPage = ({ user }) => {
return <div>This is a protected page</div>;
};
export default MyPage;
In general, on the client side (if you're unable to do this server side), I think that avoiding a higher order component pattern will improve readability. I'd create a simple composition instead, something like this:
import { useContext, useEffect, useState } from "react";
import { AuthContext } from "@context/auth";
import Router from "next/router";
const Authorised = ({ children }) => {
const { user } = useContext(AuthContext);
useEffect(() => {
if (!user) Router.push("/login");
}, [user]);
if (!user) {
return null;
}
return children;
};
const Sample = () => (
  <Authorised>
    <div>This is protected content</div>
  </Authorised>
);
Due to the fact you mentioned a delay (before the redirect?), I'd suggest you render a loading component, rather than null, for a better user experience, until you have retrieved the user session.
I think it's also worth you reading the docs from Next.js on authentication, if you haven't done so already. | {
"domain": "codereview.stackexchange",
"id": 42499,
"tags": "javascript, security, react.js"
} |
More efficient maximum bipartite matching | Question: I've been looking into weighted matching in bipartite graphs and am currently looking at maximum matchings in weighted bipartite graphs. As I've been reading and poking around at different books and papers, I've noticed that the Hungarian/Kuhn's algorithm tends to get mentioned a lot as a solution/approach. My sticking point is that it runs in $O(n^3)$ time and I've read in a few sources that there are better implementations.
My challenge is that these sources will often just mention these better approaches at the very end of a discussion and offer little insight into the implementation. So I guess my question is: are there indeed better than $O(n^3)$ runtimes? I think I remember coming across references that $O(nm\log n)$ and better exist, just having a hard time finding either reference implementations or more direction. Any help would be appreciated, especially examples as that's how I tend to grok things best after going through the theory.
Answer: The Hungarian method can be implemented in time $O(mn + n^2 \log n)$. Details are found for example in Alexander Schrijver's book on Combinatorial Optimization, Chapter 17. For a more implementation-oriented guide to this version of the algorithm, have a look in Chapter 7 of Mehlhorn and Näher's book.
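For concreteness, here is a compact Python sketch (my own, not from either book) of the classic $O(n^3)$ shortest-augmenting-path formulation of the Hungarian method for the square min-cost assignment problem; the $O(mn + n^2 \log n)$ bound comes from running the same Dijkstra-style searches with Fibonacci heaps:

```python
def hungarian(cost):
    """Min-cost perfect assignment for a square cost matrix; O(n^3)."""
    n = len(cost)
    INF = float("inf")
    u = [0] * (n + 1)          # row potentials
    v = [0] * (n + 1)          # column potentials
    p = [0] * (n + 1)          # p[j] = row matched to column j (1-indexed)
    way = [0] * (n + 1)        # back-pointers for the augmenting path
    for i in range(1, n + 1):
        p[0] = i
        j0 = 0
        minv = [INF] * (n + 1)
        used = [False] * (n + 1)
        while True:            # Dijkstra-like search for the cheapest path
            used[j0] = True
            i0, delta, j1 = p[j0], INF, -1
            for j in range(1, n + 1):
                if not used[j]:
                    cur = cost[i0 - 1][j - 1] - u[i0] - v[j]
                    if cur < minv[j]:
                        minv[j], way[j] = cur, j0
                    if minv[j] < delta:
                        delta, j1 = minv[j], j
            for j in range(n + 1):
                if used[j]:
                    u[p[j]] += delta
                    v[j] -= delta
                else:
                    minv[j] -= delta
            j0 = j1
            if p[j0] == 0:
                break
        while j0:              # augment along the found path
            j1 = way[j0]
            p[j0] = p[j1]
            j0 = j1
    match = [0] * n            # match[row] = assigned column
    for j in range(1, n + 1):
        match[p[j] - 1] = j - 1
    total = sum(cost[i][match[i]] for i in range(n))
    return total, match

total, match = hungarian([[4, 1, 3], [2, 0, 5], [3, 2, 2]])
assert total == 5 and sorted(match) == [0, 1, 2]
```

For a maximum-weight matching, negate the entries (or subtract each entry from the largest weight) before calling it.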
The running time is dominated by the time it takes to compute $n$ shortest paths via Dijkstra's algorithm. Each invocation of Dijkstra's algorithm costs $O(m + n \log n)$. (This assumes that Dijkstra's algorithm is implemented with Fibonacci heaps; the simpler implementation based on standard heaps would give $O(m \log n)$ for each invocation.) | {
"domain": "cs.stackexchange",
"id": 12725,
"tags": "algorithms, graphs, reference-request, weighted-graphs, bipartite-matching"
} |
Black body/Sun radiation - λmax | Question: The Sun's effective temperature is 5778K. Using Wien's law we can calculate the wavelength λmax in which we observe the maximum amount of radiation received from the black body. After doing the calculation, we see that λmax=500nm approximately.
We can find that an absorption line of Hydrogen is at 486nm. By comparing the two wavelengths we can see that they are very close. Can we deduce that λmax is approximately 500nm because the Sun is burning Hydrogen at this point in its lifecycle?
Answer: The answer is no, which you can see because plenty of counterexamples exist. For example, a main-sequence star with spectral type B burns hydrogen and has an effective temperature of 10,000-30,000 K. A type-B star with an effective temperature of 15,000 K has a peak blackbody wavelength of 199 nm, which is nowhere close to any hydrogen spectral lines. The fact that the Sun's effective temperature is kind of (but not particularly) close to a particular hydrogen line is a coincidence, and, given that there are many hydrogen lines from the UV to the IR range, not a particularly rare one, either.
As to why your deduction isn't true, think of it this way: the effective temperature of a star is the temperature of the outer surface of the star. Fusion reactions do not occur at the surface of the star, but rather in the core, where the temperature is much higher (the specifics here depend on several variables, as the dynamics within a star depend heavily on its size and the presence of heavier elements). As such, there is no reason to expect any sort of direct relation between the temperature of the surface and the element being burned. In fact, for most stars on the main-sequence, which burn primarily hydrogen, you can find either a red giant or a white dwarf (or both!) with the same effective temperature; red giants primarily burn heavier elements, and white dwarfs don't burn anything at all. So it should be clear that the effective temperature alone can't tell you anything about the nuclear reactions taking place.
In addition, the spectral lines present in the stellar atmosphere also aren't really related to the nuclear reactions taking place. These absorption lines come about because electrons attached to atoms in a certain state are excited by radiation of a specific frequency into a higher energy state. This requires that atoms with electrons still attached in lower energy states exist (bare ions do not produce spectral lines), and as such, occurs at energies/temperatures far lower than the temperatures required by/energy released by nuclear fusion. The intensity of a spectral line in a stellar atmosphere at a given temperature tells you about the composition of the outer layer, which is not always influenced by the fuel being burned. For example, large stars have a radiative outer layer, which means that convection to the surface doesn't really occur and the fusion products tend to stay in the core, not really influencing the composition of the outer layer. Midrange stars have a radiative core and a convective outer layer, so there is only limited transport between the core and the surface. Only in the smallest stars does convection from the core to the surface occur. | {
"domain": "physics.stackexchange",
"id": 53181,
"tags": "electromagnetic-radiation, temperature, astrophysics, thermal-radiation, wavelength"
} |
Doubt in ROS-2 design document about topics | Question:
In the ROS-2 design page "https://design.ros2.org/articles/ros_middleware_interface.html", it is written by @Dirk Thomas as
The DDS DataReader and DataWriter as
well as DDS topics are not exposed
through the ROS API.
My question is: how can we say that the DDS topics are not exposed through the ROS API when, using "ros2 topic list", we do access the topics? Do these topics not belong to the DDS implementation? What was the actual concept meant to be conveyed by the quoted text above?
I would appreciate help in clearing this doubt.
Originally posted by Aakashp on ROS Answers with karma: 41 on 2020-12-28
Post score: 0
Original comments
Comment by gvdhoorn on 2020-12-29:
@Aakashp: please don't close questions to mark them as resolved. Instead, accept the best answer by clicking the checkmark to the left of the answer.
Answer:
What you see in the ros2 topic list are the ROS topics and not the DDS topics. Under the hood it is implemented with DDS by default but the topics are not exactly the same. And you don't have access to the raw DDS DataReader and DataWriter objects.
Originally posted by tfoote with karma: 58457 on 2020-12-28
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Aakashp on 2020-12-29:
Thanks @tfoote for clearing my doubt. | {
"domain": "robotics.stackexchange",
"id": 35913,
"tags": "ros2"
} |
Analysis of motion of block attached to spring | Question:
A block is attached to a spring of constant $k$ and natural length $\ell_0$. One end of the spring is fixed and the block is given a velocity $v_0$. The system is in a horizontal plane. The maximum extension of the spring is given as $\ell_0/4$. The question is to find the velocity of the block at maximum extension of the spring.
My textbook has used conservation of angular momentum about the fixed point since torque of all forces is $0$.
I had tried using centripetal force $= mv^2/r$ at the maximum position since it can be assumed to move in a circle of radius $r = \ell_0+\ell_0/4$. Along with conservation of mechanical energy.
However this method does not give the correct answer.
Can anyone please explain why my method is wrong?
Answer: The block does not move in a circle with radius $r=\frac{5}{4}\ell_0$. This is clearly seen as the block starts in a position with radius $\ell_0$ but ends up at a point of maximum extension where the radius is $\frac{5}{4}\ell_0$.
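To see numerically why the textbook's angular-momentum approach is the right one, here is a small velocity-Verlet sketch (my own, with the illustrative choice $m = k = \ell_0 = 1$; $v_0$ is chosen so that the maximum extension comes out to exactly $\ell_0/4$). Conservation of angular momentum about the fixed point predicts the speed at maximum extension to be $v_0\ell_0/(5\ell_0/4) = 4v_0/5$, since the velocity there is purely tangential:

```python
import math

m, k, l0 = 1.0, 1.0, 1.0
v0 = (5.0 / 12.0) * l0 * math.sqrt(k / m)  # chosen so that r_max = 5*l0/4

def accel(x, y):
    """Acceleration from the spring force -k*(r - l0) directed along -r_hat."""
    r = math.hypot(x, y)
    a_over_r = -k * (r - l0) / (m * r)
    return a_over_r * x, a_over_r * y

# Velocity-Verlet integration; start at r = l0 moving tangentially with speed v0.
x, y, vx, vy = l0, 0.0, 0.0, v0
ax, ay = accel(x, y)
dt = 1e-4
r_max, speed_at_max = 0.0, 0.0
for _ in range(200_000):
    vx += 0.5 * dt * ax
    vy += 0.5 * dt * ay
    x += dt * vx
    y += dt * vy
    ax, ay = accel(x, y)
    vx += 0.5 * dt * ax
    vy += 0.5 * dt * ay
    r = math.hypot(x, y)
    if r > r_max:
        r_max, speed_at_max = r, math.hypot(vx, vy)

assert abs(r_max - 1.25 * l0) < 1e-3        # maximum extension is l0/4
assert abs(speed_at_max - 0.8 * v0) < 1e-3  # angular momentum gives v = 4*v0/5
```

The simulated radius oscillates between $\ell_0$ and $5\ell_0/4$, confirming that the trajectory is not a circle of fixed radius.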
Assuming that the local behavior of the mass around the point of maximum extension is centripetal is not valid. We can see this as the magnitude of force changes as a consequence of Hooke's law and the fact that the length of the spring changes from its maximal length. That is, $F(t_0\pm\textrm{d}t)\neq F(t_0)$, where $F(t_0)$ is the force on the mass in the radial direction at the time $t=t_0$, when the spring is at maximum extension. In centripetal motion, we always have $F(t\pm\textrm{d}t)=F(t)=\frac{mv^2}{r}$. | {
"domain": "physics.stackexchange",
"id": 59388,
"tags": "newtonian-mechanics"
} |
Finding out modulation index and DC offset | Question: I have a question from my teacher, and I cannot understand how to find the modulation index from the figure.
The question provides a figure like this:
The information signal is a sinusoidal test signal with peak amplitude $6\ \rm V$ applied to an AM-DSB-C modulator; the Fourier spectrum of the modulated signal is shown above.
The solution is like this:
As $\displaystyle \frac {\frac {Acm}{4}}{\frac{Ac}{2}}=\frac{3}{4}$, $m=1.5$, so the modulation index is $1.5$
Moreover, as $\displaystyle m=\frac{x}{c}$ and the peak amplitude of $s(t)$ is $6\ \rm V$, the DC offset ($c$) is $4\ \rm V$.
I know where the $\displaystyle \frac {Ac}{2}=4$ comes from,
as $s_{AM-DSB-C}(t)=A\big(s(t)+c\big)\cos(2 \pi f_c t)$
\begin{align}
\mathcal F\left\{s_{AM-DSB-C}(t)\right\} &=S_{AM-DSB-C}(f)\\
&=\frac{A}{2}\big[S(f-f_c)+S(f+f_c)\big] + \frac {Ac}{2} \big[\delta (f-f_c) + \delta (f+f_c)\big]
\end{align}
So, from the second term we can get $\displaystyle\frac {Ac}{2}=4$.
But I cannot understand how the solution gets $\displaystyle \frac {Acm}{4}=3$.
Answer: The sinusoidal test signal is given by
$$
s(t) = V_0\cos(2\pi f_0 t)
$$
and its Fourier transform by
$$
S(f) = \frac{V_0}{2}\big[\delta(f-f_0) + \delta(f+f_0)\big].
$$
Inserted in $S_\mathrm{AM-DSB-C}$ it yields
$$
S_\mathrm{AM-DSB-C}(f) = \frac{AV_0}{4}\big[\delta(f-f_c-f_0) + \delta(f-f_c+f_0) + \delta(f+f_c-f_0) + \delta(f+f_c+f_0) \big] + \frac {Ac}{2} \big[\delta (f-f_c) + \delta (f+f_c)\big].
$$
Comparing that with the figure given we see that $AV_0/4 = 3$ and $Ac/2 = 4$. Additionally, $f_0 = 10\text{ kHz}$, $f_c = 100\text{ kHz}$. Using $m = V_0/c$ and $V_0 = 6\text{ V}$ we find
$$
m = 1.5, c = \frac{V_0}{m}=4\text{ V}.
$$ | {
"domain": "dsp.stackexchange",
"id": 681,
"tags": "fourier-transform, digital-communications, modulation"
} |
How important is initial state for local search optimisation? | Question: I have been enjoying Pascal van Hentenryck's Discrete Optimisation course and we're in Week 4 on the wonders of Local Search algorithms for combinatorial optimisation.
I'm wondering how important the initial configuration is to the quality of solution achievable in a fixed time or, somewhat equivalently, how quickly a solution of a given quality can be reached.
So if I have a good heuristic or intuition for how to arrange things in the first place, is it helpful to devote some processing time to setting that up or is it all dominated by the effect of the local search process?
For example, in the Cartesian Travelling Salesman Problem (where we're working in a 2D plane and the cost of a journey is simply the straight-line distance) I might "feel" that a good route could roughly follow a clockwise sweep from the centre of the space. So I could use this to set my initial tour (i.e. order the nodes by their angle from the mean of all points). This intuition might be rubbish for certain instances and great for others; I was hoping to see a study where (let's say random TSP) instances had been solved by following a heuristic first state as opposed to a completely random (but legal) first state.
Answer: A good initial state can often be helpful. I would guess that there is a significant chance that spending extra time to find a good initial state will be useful.
All we can say in general is that "it depends". It depends on the specific problem and the shape of the fitness landscape. If the problem has few or no local minima, the search might not be very sensitive to the initial state. If it has many local minima, the search might be very sensitive to the start state, and a good start state might help again. There are no universal answers that can be supplied.
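As a concrete way to run such an experiment yourself (my own sketch, not from the course): build the angular-sweep tour described in the question and run one greedy 2-opt improvement pass over it; swapping in a random starting permutation lets you compare initial states on the same instance:

```python
import math
import random

def tour_length(pts, order):
    """Total length of the closed tour visiting pts in the given order."""
    return sum(math.dist(pts[order[i]], pts[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def angular_start(pts):
    """Heuristic initial tour: sort the points by angle around the centroid."""
    cx = sum(p[0] for p in pts) / len(pts)
    cy = sum(p[1] for p in pts) / len(pts)
    return sorted(range(len(pts)),
                  key=lambda i: math.atan2(pts[i][1] - cy, pts[i][0] - cx))

def two_opt_pass(pts, order):
    """One greedy 2-opt sweep; each applied move strictly shortens the tour."""
    order = order[:]
    n = len(order)
    for i in range(n - 1):
        for j in range(i + 2, n):
            a, b = order[i], order[i + 1]
            c, d = order[j], order[(j + 1) % n]
            if (math.dist(pts[a], pts[c]) + math.dist(pts[b], pts[d])
                    < math.dist(pts[a], pts[b]) + math.dist(pts[c], pts[d])):
                order[i + 1:j + 1] = reversed(order[i + 1:j + 1])
    return order

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(60)]
start = angular_start(pts)
improved = two_opt_pass(pts, start)
# By construction the pass never makes the tour longer.
assert tour_length(pts, improved) <= tour_length(pts, start)
```

Comparing how much the pass improves the angular-sweep tour versus a random permutation, over many seeded instances, gives exactly the "try it and see" evidence for how much the initial state matters.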
Usually the only way to answer these questions is to try it and see. With heuristics, there aren't any provable guarantees and it's often hard to make many predictions based on theory or analysis. | {
"domain": "cs.stackexchange",
"id": 16892,
"tags": "optimization, heuristics"
} |
Can choice or sequential execution be expressed with other basic operators of the pi calculus? | Question: There are several definitions of what basic operators constitute the pi calculus (see papers from R. Milner, B.Pierce, J.Wing).
The common basic operators ($C$) are:
the parallel composition
reading from a named channel
writing to a named channel
restriction to get a new named channel
replication to get an infinite number of parallel copies of a process
the inert process that does nothing.
Sometimes, one of the following two operators is additionally mentioned as a basic operator:
the sequential operator for sequential execution;
the summation operator to choose between processes.
Is $C$ together with the sequential operator equivalent to $C$ with the summation operator? Are they even equivalent to $C$ without any further operator? Why (not)?
Answer: This is a really interesting question and only partially understood.
$\newcommand{\OUT}[2]{\overline{#1} #2 }$The precise answer to such questions depends in subtle ways on exactly what the ambient $\pi$-calculus is and exactly what feature you are encoding.
For sums you need to realise that there are different kinds of sums for example input guarded sums like $x(v).P + x(v).Q$ or $x(v).P + y(v).Q$, or
mixed sums like $x(v).P + \OUT{y}{v}.Q$ or even $x(v).P + \OUT{y}{v}.Q + \tau.R$. What you can and cannot encode really depends on what sums you have. The groundbreaking study (1) gives hard limits to 'good' encodings. Simplifying greatly, what (1) shows is that adding mixed sums to a calculus strictly increases expressivity. Thus it cannot be encoded (in a nice way). The reason is that mixed sums allow a certain form of symmetry breaking that is not achievable in conventional $\pi$-calculi without mixed choice.
With sequential composition $P; Q$ the situation is less complicated, but you need to be clear exactly what sequential means for parallel processes and replication/recursion:
Should $R$ begin to execute when one or both of $P$ and $Q$ terminate in $(P|Q); R$?
What is the meaning of $(!x(v).P); R$?
If $(P|Q); R$ means that both $P$ and $Q$ must terminate before $R$ goes to work, and if we disallow sequential composition like $(!x(v).P); R$ with replication, then $P; Q$ is easily encodable, and by (1) it cannot simulate general forms of sums that include mixed sums.
The curse of Turing completeness. There is a further complication. Process calculi typically are Turing-complete, hence any feature can 'somehow' be encoded. So you need to be very precise about what you mean when you ask if some calculus is "equivalent" to some other. The usual approach to making this precise is to constrain admissible encodings, e.g. to require them to be compositional, or preserve termination, or be closed under renaming, or any number of other interesting structural properties. For different concepts of admissible encoding you tend to get different answers.
None of this is specific to process calculi; Turing's curse affects all programming languages. But process calculi are especially subtle, since they can do so much more than mere sequential computation.
For an overview of the state-of-the art in process calculus expressivity, see (2, 3).
(1) C. Palamidessi, Comparing the Expressive Power of the Synchronous and the Asynchronous π-Calculi.
(2) D. Gorla, Comparing Communication Primitives via their Relative Expressive Power.
(3) D. Gorla, A Taxonomy of Process Calculi for Distribution and Mobility. | {
"domain": "cstheory.stackexchange",
"id": 3329,
"tags": "formal-modeling, process-algebra, pi-calculus"
} |
Power of several focused laser beams on a small surface | Question: After viewing this: My Homemade 40W Laser Shotgun
Will the initial 40W power of the diode beams be roughly transferred to the targeted surface where the 8 beams are focused, or will there be a power loss due to some kind of interference where the beams overlap?
Answer: If energy is lost, it must go somewhere. So, there won't be any "interference" where the beams overlap which saps the power. Unless, and this does happen, the lasers are powerful enough to ionize the air, in which case the light is blocked at that point and converted to heat, which will never make it to the surface. See this article on laser-induced breakdown.
I doubt the contraption in question has the power-density to do this, though. | {
"domain": "physics.stackexchange",
"id": 22784,
"tags": "optics, laser, power, laser-interaction"
} |
How to directly calculate the infinitesimal generator of SU(2) | Question: We commonly investigate the properties of SU(2) on the basis of SO(3). However, I want to directly calculate the infinitesimal generators of SU(2) according to the definition $$X_{i}=\frac{\partial U}{\partial \alpha_{i}}$$ from Lie group theory. But where is the problem in the method I used below?
First, I parameterize the SU(2) with the $(\theta, \phi, \gamma) $ like this:
$$U=\begin{bmatrix} e^{i\theta}\sin\phi & e^{i\gamma}\cos\phi \\ -e^{-i\gamma}\cos\phi & e^{-i\theta}\sin\phi\end{bmatrix}$$
and the identity $E$ is obtained at $(\theta, \phi, \gamma) = (0,\frac{\pi}{2},0)$.
Second, I use the definition of infinitesimal generator like this:
$$ I_{1}=\frac{\partial U}{\partial \theta}|_{(0,\frac{\pi}{2},0)}=i\begin{bmatrix}1 & 0\\ 0 & -1 \end{bmatrix}$$
$$ I_{2}=\frac{\partial U}{\partial \phi}|_{(0,\frac{\pi}{2},0)}=i\begin{bmatrix}0 & i\\ -i & 0 \end{bmatrix}$$
$$ I_{3}=\frac{\partial U}{\partial \gamma}|_{(0,\frac{\pi}{2},0)}=i\begin{bmatrix}0 & 0\\ 0 & 0 \end{bmatrix}$$
Here is the question...
Why do I get the zero matrix? We should expect to get the Pauli matrices, shouldn't we?
Where is the problem from?
Answer: The problem is that your coordinates aren't well defined at $\theta=0$ and $\phi=\pi/2$. Note in particular that
$$
U|_{(0,\frac{\pi}{2},\gamma)} = \begin{pmatrix}1&0\\0&1\end{pmatrix}
$$
for any value of $\gamma$. A simpler choice is
$$
\tilde{U} =
\begin{pmatrix}
x+iy & z+iw \\
-z+iw & x-iy
\end{pmatrix},
$$
with
$$
x = \sqrt{1 - y^2 - z^2 - w^2}.
$$
Differentiating this you find
$$
d\tilde U =
i\begin{pmatrix}
dy & +i\,dz + dw \\
-i\,dz + dw & -dy
\end{pmatrix}
-
\frac{y\,dy+z\,dz+w\,dw}{\sqrt{1-y^2-z^2-w^2}}\begin{pmatrix}1&0\\0&1\end{pmatrix}
$$
from which you can read off the Pauli matrices at the point $(x,y,z,w)=(1,0,0,0)$. | {
"domain": "physics.stackexchange",
"id": 7868,
"tags": "homework-and-exercises, quantum-field-theory, particle-physics, group-theory, group-representations"
} |
Proof for boosting success probability of a random algorithm with binary output | Question: There is a theorem stating that, given a random algorithm with a binary output that has a success probability $\geq 2/3$, you can always create another algorithm that solves the same problem but with a success probability $\geq 1-\delta$ by just repeating the original algorithm $\mathcal{O}(1/\log\delta)$ times and taking a majority vote (stated in this YouTube video at 29:14).
Is there a (preferably simple) proof for this?
Edit: As remarked by Nathaniel there is a typo in my question and it should read $\mathcal{O}(\log(1/\delta))$ (I left the typo above so that the last part of his answer would still make sense in context)
Answer: The proof I know of uses a weak version of Chernoff bound:
If $X_1, X_2, …, X_n$ are independent Bernoulli random variables of same parameter $p$, and $X = \sum\limits_{i=1}^nX_i$, then:
$$\mathbb{P}(X \geqslant n/2)\leqslant \left(\frac{1+p}{\sqrt{2}}\right)^n$$
The idea is the following:
repeat the algorithm $n$ times;
denote $X_i = \left\{\begin{array}{ll}0 & \text{if the result is correct}\\1&\text{otherwise} \end{array}\right.$ for the $i$-th call of the algorithm.
Those $(X_i)$ form a sequence of independent Bernoulli random variables with the same parameter $p\leqslant \frac13$ (because the probability of failure is at most one third).
There is a failure if the majority vote fails, meaning that $\sum\limits_{i=1}^nX_i \geqslant \frac{n}2$. Using the Chernoff bound, the probability of failure is less than:
$$\left(\frac{1+1/3}{\sqrt{2}}\right)^n = \left(\frac{4}{3\sqrt2}\right)^n$$
Now for the probability of failure to be less than $\delta$, it suffices that:
$$\begin{align*}\left(\frac{4}{3\sqrt2}\right)^n\leqslant \delta &\Leftrightarrow n\log_2\left(\frac{4}{3\sqrt2}\right) \leqslant \log_2 \delta\\
&\Leftrightarrow n\geqslant \frac{\log_2\delta}{\log_2\left(\frac{4}{3\sqrt2}\right)}\end{align*}$$
Note that the last inequality is reversed, because $\log_2\left(\frac{4}{3\sqrt2}\right)$ is negative.
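A quick numerical sketch of the bound (my own, using $p = 1/3$ as in the proof): the closed form gives the number of repetitions needed for a target $\delta$, and a seeded Monte-Carlo run confirms that the majority vote fails far less often than $\delta$:

```python
import math
import random

def repeats_needed(delta, p=1/3):
    """Smallest n with ((1+p)/sqrt(2))**n <= delta, from the weak Chernoff bound."""
    return math.ceil(math.log(delta) / math.log((1 + p) / math.sqrt(2)))

n = repeats_needed(0.01)
assert n == 79  # log(0.01) / log(4/(3*sqrt(2))) is about 78.2

# Seeded Monte-Carlo sanity check: the majority vote over n runs of an
# algorithm failing with probability 1/3 should itself fail well below delta.
random.seed(1)
trials = 2000
failures = sum(
    sum(random.random() < 1 / 3 for _ in range(n)) >= n / 2
    for _ in range(trials)
)
assert failures / trials <= 0.01
```

Note that the empirical failure rate is usually far below the bound, since this form of the Chernoff bound is deliberately weak.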
In conclusion, one needs to repeat the algorithm $\mathcal{O}\left(\log \frac1{\delta}\right)$ times to get the probability of failure below $\delta$. Note that this is different from the $\frac{1}{\log \delta}$ you asked about, but since that quantity is negative if $\delta < 1$, I think it was a mistake. | {
"domain": "cs.stackexchange",
"id": 21916,
"tags": "complexity-theory, randomized-algorithms"
} |
What is meant with overdamped motion? | Question: I'm learning about Brownian motion. I use the approximation of overdamped motion. I read that the average acceleration is $0$ then, but I don't really understand the concept. So, what does overdamped exactly mean, especially in the context of Brownian motion, and why is the acceleration $0$? Thanks!
Answer: Overdamped means that viscous forces are much more "relevant" than inertia. When this is the case, essentially any movement will very quickly reach terminal velocity, so the acceleration will be $0$.
As a toy model, imagine trying to push an object through a viscous medium. If we apply some constant force $F$, then Newton's second law gives us the following differential equation:
$$m\ddot x=F-b\dot x$$
It is easy to see that terminal velocity is $v_T=F/b$, and the relevant time scale to relax to terminal velocity here is $\tau=m/b$. If the system is overdamped, then $b$ is very large, which makes $\tau$ very small. In other words, the more damping you have, the faster you reach terminal velocity where the acceleration is $0$.
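A quick numerical sketch of this (my own, with illustrative values $m = 1$, $F = 2$): whatever the damping $b$, the velocity settles to $v_T = F/b$ within a few relaxation times $\tau = m/b$, so the stronger the damping, the sooner the acceleration is effectively $0$:

```python
import math

def integrate(m, b, F, t_end, dt=1e-4):
    """Euler-integrate m*dv/dt = F - b*v from rest and return the final velocity."""
    v = 0.0
    for _ in range(int(t_end / dt)):
        v += dt * (F - b * v) / m
    return v

m, F = 1.0, 2.0
for b in (1.0, 10.0, 100.0):
    tau = m / b                       # relaxation time shrinks as damping grows
    v = integrate(m, b, F, t_end=5 * tau)
    # After ~5 relaxation times the velocity is within ~1% of F/b.
    assert abs(v - F / b) < 0.01 * (F / b)
```

With $b = 100$ the velocity is essentially at its terminal value after $0.05$ time units, while with $b = 1$ the same relative convergence takes $5$ time units.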
In this regime you can create a "trick Newton's second law" where the velocity is proportional to the applied force: $$F=b\dot x$$ Don't show introductory physics students this ;) | {
"domain": "physics.stackexchange",
"id": 67651,
"tags": "statistical-mechanics, acceleration, drag, brownian-motion"
} |
Find min of 3 numbers hardcoded | Question: public static int min(int a, int b, int c)
{
int result = 0 ;
if( a < b && a < c && b < c) result = a ;
else if( a < b && a < c && b > c) result = a ;
else if( a > b && a < c && b < c) result = b ;
else if( a < b && b > c && c < a) result = c ;
else if( a > b && b < c && a > c) result = b ;
else if( a > b && a > c && c < b) result = c ;
return result ;
}
Is it better than nested if statements? Is there a more readable solution than this? To me it looks pretty readable, but I'm not sure whether it can be improved.
Answer: For these things we have java.lang.Math:
public static int min(final int a, final int b, final int c){
return Math.min(a, Math.min(b, c));
}
Wow, look how short it is!
But it's 3 numbers today, it's 10 tomorrow.
As an alternative, how about an array?
public static int min(int... numbers){
if (numbers.length == 0){
throw new IllegalArgumentException("Can't determine smallest element in an empty set");
}
int smallest = numbers[0];
for (int i = 1; i < numbers.length; i++){
smallest = Math.min(smallest, numbers[i]);
}
return smallest;
}
I'd use the java.lang.Math solution, it's very short, and very readable. | {
"domain": "codereview.stackexchange",
"id": 8851,
"tags": "java"
} |
Could the observable universe be bigger than the universe? | Question: First of all, I'm a layman to cosmology. So please excuse the possibly oversimplified picture I have in mind.
I was wondering how we could know that the observable universe is only a fraction of the overall universe. If we imagine the universe like the surface of a balloon we could be seeing only a small part of the balloon
or we could be seeing around the whole balloon
so that one of the apparently distant galaxies is actually our own.
In the example with the balloon one could measure the curvature of spacetime to estimate the size of the overall universe, but one could also think about something like a cube with periodic boundary conditions.
Is it possible to tell the size of the overall universe?
Artistic image of the observable universe by Pablo Carlos Budassi.
Answer: Yes, it's possible in principle that we see the same galaxy more than once due to light circling the universe. It wouldn't necessarily be easy to tell because each image would be from a different time in the galaxy's evolution.
There is a way to test for this. The cosmic microwave background that we see is a 2D spherical part of the 3D plasma that filled the universe just before it became transparent. If there has been time for light to wrap around the universe since it became transparent, then that sphere intersects itself in one or more circles. Each circle appears in more than one place in the sky, and the images have the same pattern of light and dark patches. There have been searches for correlated circles in the CMB pattern (e.g. Cornish et al 2004), and none have been found. | {
"domain": "physics.stackexchange",
"id": 77814,
"tags": "cosmology, universe, observable-universe"
} |
Is edge state of topological insulator really robust? | Question: I am a little confused! Some people are arguing that the gapless edge state of a topological insulator is robust as long as time reversal symmetry is not broken, while other people say that it is not stable for lack of topological order. Please help me out!
Answer: I see how that can be confusing. Unfortunately understanding how to reconcile these statements will require a lot of background. I will try to answer this as concisely as I can (hopefully) without relying on concepts that are too advanced.
Well, topological insulators do not possess a so-called intrinsic topological order. It means that the bulk states of a topological insulator are not entangled quantum mechanically over a long range. Topological insulators are, in fact, short-range entangled just like trivial insulators. However, topological insulators and trivial insulators are clearly not the same phases. Therefore short-range entangled phases are further broken down into subcategories. Two such subcategories are: symmetry protected topological phases (topological insulators) and symmetry-breaking phases (trivial insulators).
The reason the word “topological” appears in the distinction between of topological insulators and trivial insulators is that they can be assigned a distinct “topological invariant.” The notion of a topological invariant comes from topology. For example a sphere and a torus have different topological invariants. Just as you cannot deform a torus into a sphere without cutting it, in the same way you cannot deform the band structure of a topological insulator into that of a trivial insulator without closing the bulk gap. As a consequence of this subtle difference in the two types of band structures the number of edge states will either be even (trivial insulators) or odd (topological insulators). Now this is where time reversal symmetry comes in. If any kind of perturbation, which itself obeys time reversal symmetry, acts on these edge states then it can destroy these edge states only in pairs. Therefore if you had odd number of edge states to begin with then you will end up with at least one edge state even if the perturbation destroys all the remaining edge states (in pairs). Hence time reversal symmetry is responsible for the protection of these edge states in topological insulators. You can find a more detailed explanation here:
What conductance is measured for the quantum spin Hall state when the Hall conductance vanishes?
Just scroll all the way down until you see the question in the block quote “Also: Why is there only a single helical edge state per edge? Why must we have at least one and why can't we have, let's say, two states per edge?” To give the above analogy with topology a firm footing I suggest you take a look at Berry curvature and the Chern number (if you haven't already). The topological invariants are closely connected to these.
So to summarize, gapped phases of matter can be divided into two categories: long-range entangled (with intrinsic topological order) and short-range entangled (without intrinsic topological order). Two subcategories of short-range entangled phases are: symmetry protected topological phases (topological insulators) and symmetry-breaking phases (trivial insulators).
In case you are wondering about long-range entangled phases and what it means to have (intrinsic) topological protection then I recommend a little more background reading on the principle of emergence, the fractional quantum Hall effect, string-net condensation (in that order). There are some excellent posts on physics stackexchange on the topic of string-net condensation. Some of them are even answered by Prof. Xiao-Gang Wen who, as a matter of fact, developed the theory of string-net condensation along with Michael Levin (I don’t know if he’s here). | {
"domain": "physics.stackexchange",
"id": 6886,
"tags": "quantum-mechanics, condensed-matter, topological-insulators"
} |
Intrusive smart pointer implementation | Question: I have homework in college to implement intrusive smart pointers.
#include <atomic>
#include <string_view>
#include <string>
#include <iostream>
struct RefCount
{
inline void IncrementRefCount() const noexcept
{
refCount.fetch_add(1, std::memory_order_relaxed);
}
inline void DecrementRefCount() const noexcept
{
refCount.fetch_sub(1, std::memory_order_acq_rel);
}
[[nodiscard]] inline uint32_t getRefCount() const noexcept { return refCount; }
private:
mutable std::atomic_uint32_t refCount = 0;
};
template<typename T>
struct Ref
{
Ref() = default;
Ref(std::nullptr_t) noexcept
: data(nullptr)
{
}
Ref(T* instance) noexcept
: data(instance)
{
IncrementRef();
}
template<typename T2>
Ref(const Ref<T2>& other) noexcept
{
data = (T*)other.data;
IncrementRef();
}
template<typename T2>
Ref(Ref<T2>&& other) noexcept
{
data = (T*)other.data;
other.data = nullptr;
}
~Ref() noexcept
{
DecrementRef();
}
Ref(const Ref<T>& other) noexcept
: data(other.data)
{
IncrementRef();
}
[[nodiscard]] inline Ref& operator=(std::nullptr_t) noexcept
{
DecrementRef();
data = nullptr;
return *this;
}
[[nodiscard]] inline Ref& operator=(const Ref<T>& other) noexcept
{
other.IncrementRef();
DecrementRef();
data = other.data;
return *this;
}
template<typename T2>
[[nodiscard]] inline Ref& operator=(const Ref<T2>& other) noexcept
{
other.IncrementRef();
DecrementRef();
data = other.data;
return *this;
}
template<typename T2>
[[nodiscard]] inline Ref& operator=(Ref<T2>&& other) noexcept
{
DecrementRef();
data = other.data;
other.data = nullptr;
return *this;
}
[[nodiscard]] inline operator bool() noexcept { return data != nullptr; }
[[nodiscard]] inline operator bool() const noexcept { return data != nullptr; }
[[nodiscard]] inline bool operator==(const Ref<T>& other) const noexcept
{
return data == other.data;
}
[[nodiscard]] inline bool operator!=(const Ref<T>& other) const noexcept
{
return !(*this == other);
}
[[nodiscard]] inline T* operator->() noexcept { return data; }
[[nodiscard]] inline const T* operator->() const noexcept { return data; }
[[nodiscard]] inline T& operator*() noexcept { return *data; }
[[nodiscard]] inline const T& operator*() const noexcept { return *data; }
[[nodiscard]] inline T* get() noexcept { return data; }
[[nodiscard]] inline const T* get() const noexcept { return data; }
inline void Reset(T* instance = nullptr) noexcept
{
DecrementRef();
data = instance;
}
template<typename T2>
[[nodiscard]] inline Ref<T2> as() const noexcept
{
return Ref<T2>(*this);
}
template<typename... Args>
[[nodiscard]] inline static Ref<T> Create(Args&&... args) noexcept
{
return Ref<T>(new T(std::forward<Args>(args)...));
}
private:
inline void IncrementRef() const noexcept
{
if (data) {
data->IncrementRefCount();
}
}
inline void DecrementRef() const noexcept
{
if (data) {
data->DecrementRefCount();
if (data->getRefCount() == 0) {
delete data;
}
}
}
template<typename T2>
friend struct Ref;
T* data = nullptr;
};
template<typename T>
class WeakRef
{
public:
WeakRef() noexcept = default;
WeakRef(Ref<T> ref) noexcept
{
data = ref.get();
}
WeakRef(T* instance) noexcept
: data(instance)
{
}
[[nodiscard]] inline Ref<T> lock() const noexcept { return Ref<T>(data); }
[[nodiscard]] inline bool isValid() const noexcept { return data; }
[[nodiscard]] inline operator bool() const noexcept { return isValid(); }
private:
T* data = nullptr;
};
struct Leaf;
struct Node : RefCount
{
virtual ~Node() noexcept = default;
void PrintLeafName() const noexcept;
Ref<Leaf> leaf = nullptr;
std::string name;
protected:
Node(const std::string_view name) noexcept
: name(name)
{
}
};
struct ConcreteNode : Node
{
ConcreteNode(const std::string_view name, const std::string_view leafName) noexcept;
~ConcreteNode() noexcept override = default;
};
struct Leaf : RefCount
{
virtual ~Leaf() noexcept = default;
virtual void PrintName() const noexcept = 0;
void PrintNodeName() const noexcept
{
std::cout << parent.lock()->name << "\n";
}
protected:
Leaf(WeakRef<Node> parent, const std::string_view name) noexcept
: parent(std::move(parent)), name(name)
{
}
WeakRef<Node> parent;
std::string name;
};
struct ConcreteLeaf : Leaf
{
ConcreteLeaf(WeakRef<Node> parent, const std::string_view name) noexcept
: Leaf(std::move(parent), name)
{
}
~ConcreteLeaf() noexcept override = default;
void PrintName() const noexcept
{
std::cout << name <<"\n";
}
};
ConcreteNode::ConcreteNode(std::string_view name, std::string_view leafName) noexcept
: Node(name)
{
leaf = Ref<ConcreteLeaf>::Create(this, leafName);
}
void Node::PrintLeafName() const noexcept
{
leaf->PrintName();
}
int main()
{
Ref<Node> node = Ref<ConcreteNode>::Create("Concrete node", "Concrete leaf");
node->PrintLeafName();
node->leaf->PrintNodeName();
return 0;
}
Could you review it please? I'm not sure if RefCount class needs a virtual destructor. Ref class deletes T, so I don't think it's needed, but I may be wrong.
I'm also not sure of mutable std::atomic_uint32_t refCount, but it's required for RefCount::as() const
https://godbolt.org/z/cvGzaqvqe
Answer: Bugs: WeakRef is a big fat bug. Say you have your smart pointer Ref, create one WeakRef from it. Then clear Ref. Then use lock() function on WeakRef... voila a UB as you address deleted memory.
std::shared_ptr and std::weak_ptr implement the whole thing by keeping the control block separate. Once all shared_ptrs are gone, the object is deleted; only once all weak_ptrs are gone as well is the control block deleted.
Not sure how you should implement it but the managed object needs to be destroyed once all references are gone while the mechanism behind weak references needs to be kept alive as long as some weak references still exist. Do you really need weak references with intrusive smart pointers? For me it feels that the two don't mesh well together.
The acquire_release memory order in DecrementRefCount() is overkill. Release ordering is enough for the regular decrement; apply an acquire fence only just before destroying the object. Also, the routine DecrementRef() in Ref is buggy: you first decrease the value and then read it. What if it is decreased simultaneously in two places and then read simultaneously in two places? You'll get a double-delete. DecrementRefCount() should return the value given by fetch_sub and use it to judge whether to delete or not.
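To make that concrete, here is a minimal sketch (my addition, not the reviewed class; all names are illustrative) of a decrement that judges deletion from the value fetch_sub returns:

```cpp
#include <atomic>
#include <cstdint>

// Illustrative sketch only. fetch_sub returns the PREVIOUS value, so the
// thread that observes 1 released the last reference and may safely delete;
// release ordering on the decrement plus an acquire fence before the
// destruction is sufficient.
struct CountedSketch
{
    mutable std::atomic<std::uint32_t> refCount{0};

    void Increment() const noexcept
    {
        refCount.fetch_add(1, std::memory_order_relaxed);
    }

    // Returns true for exactly one caller: the one that dropped the last ref.
    bool DecrementAndTestLast() const noexcept
    {
        if (refCount.fetch_sub(1, std::memory_order_release) == 1) {
            std::atomic_thread_fence(std::memory_order_acquire);
            return true;
        }
        return false;
    }
};
```

The owning smart pointer would then call delete only when DecrementAndTestLast() returns true, so no thread ever re-reads the counter after decrementing it.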
Assignment operators: make sure operations like self-assignment (x = x; and x = std::move(x);) are safe. Currently they might behave weirdly or do unnecessary operations. While it doesn't normally happen, people might accidentally trigger it in complicated calls.
Operators like ->, *, and get() ought to be const and return the object without unnecessary const. Here constness means that the pointer is unchanged, but the data is the user's responsibility, not the smart pointer's. It is probably overkill for your homework assignment, but the class should support T being a const object. std::shared_ptr<const double> is a thing.
Clarification: Say you have a function foo(const Ref<A>& a). And bar() is not a const function of A. Then writing a->bar() is a compilation error because -> returns a const pointer. However, one can write Ref<A> a2 = a; and then call a2->bar(); which is absurd, right? So returning a const pointer via -> is illogical. The situation with * and get() is the same.
RefCount's operations should be only accessible to Ref and WeakRef.
Writing const std::string_view in function calls is superfluous. Write simply std::string_view.
The class is inconvenient to use. Ref<int> and Ref<std::string> will not compile. It should be able to figure out on its own if the object does not have the control block mechanism and add it automatically while wrapping and unwrapping all the calls for the user's convenience.
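One way it could "figure this out on its own" (a sketch under my own naming, not part of the reviewed code) is a compile-time detection of the intrusive counter:

```cpp
#include <type_traits>
#include <utility>

// Illustrative sketch: detect whether T already carries an intrusive
// reference counter (here: a getRefCount() member). A Ref-like class could
// branch on this trait and wrap plain types like int in a separate control
// block otherwise.
template<typename T, typename = void>
struct has_intrusive_count : std::false_type {};

template<typename T>
struct has_intrusive_count<
    T, std::void_t<decltype(std::declval<const T&>().getRefCount())>>
    : std::true_type {};

// Hypothetical demo type with the member the trait looks for.
struct IntrusiveDemo { unsigned getRefCount() const noexcept { return 0; } };

static_assert(has_intrusive_count<IntrusiveDemo>::value);
static_assert(!has_intrusive_count<int>::value);
```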
The class does not work with polymorphism. It is frequent that one first declares an Interface and then creates a Derived class. Normally users want to hold smart pointer to Interface, right? With this implementation it is impossible.
You shouldn't put noexcept everywhere. There are places where it is much more reasonable not to have it. It is only important for the move constructor and move assignment, else STL algorithms won't work right with the classes.
Regarding your questions:
Self-Assignment:
Ref<A> x;
x = x;
x = std::move(x);
Well, nobody would ever write that, right? But imagine this situation:
bool DoStuff(Ref<A>& input, Ref<A>& output)
{
// lots of convoluted and branched code, some with early returns.
output = std::move(input);
return true;
}
And then someone decided to use it as DoStuff(x, x) for whatever reason. That would cause a bug if self-assignment were not safe. That being said, some STL classes do weird stuff on self-move-assignment, like std::vector, which clears itself.
The class does not work with polymorphism: Okay, I didn't check it fully; I just saw that you didn't implement any proper mechanisms.
First, your
template<typename T2>
Ref(const Ref<T2>& other) noexcept
{
data = (T*)other.data;
IncrementRef();
}
is extremely unsafe. With an explicit pointer cast you can cast completely unrelated classes. For implicit conversion you ought to require that T is a base of T2. When that is not the case, you ought to write external functions that perform dynamic casting and static casting, like std::dynamic_pointer_cast and std::static_pointer_cast do for std::shared_ptr. Otherwise, you can easily cast cats into bananas.
Also, as it is now, it is also buggy: calling delete on a pointer to an interface will cause UB, because the destructor is not virtual. RefCount should declare its destructor virtual. | {
"domain": "codereview.stackexchange",
"id": 44272,
"tags": "c++, homework, pointers, c++20"
} |
What is the unitary matrix for a full adder? | Question: I need the unitary matrix for a full adder.
I want to assess how close an automatically generated circuit is to the full adder. Therefore, I need a unitary matrix for the full adder. I believe it should be $2^5 \times 2^5$ (two inputs, two outputs and one carry out).
Does anyone have this unitary matrix for the full adder?
Answer: A full adder is usually depicted as a block with 1-bit inputs and 1-bit sum and carry outputs (figure omitted).
A simple quantum circuit for the corresponding truth table can be built from CNOT and doubly-controlled (Toffoli) gates (circuit diagram omitted).
In my own code base, this translates into this code:
psi = ops.Cnot(0, 3)(psi, 0)
psi = ops.Cnot(1, 3)(psi, 1)
psi = ops.ControlledU(0, 1, ops.Cnot(1, 4))(psi, 0)
psi = ops.ControlledU(0, 2, ops.Cnot(2, 4))(psi, 0)
psi = ops.ControlledU(1, 2, ops.Cnot(2, 4))(psi, 1)
psi = ops.Cnot(2, 3)(psi, 2)
return psi
This code applies the gates one after the other. So in order to obtain a single big matrix you have to multiply all the gates together in reverse order and properly pad them to 5 qubits, as in the following (assuming I didn't make a mistake here), where * denotes the Kronecker product and @ is matrix multiply:
M = ((ops.Identity(2) * ops.Cnot(2, 3) * ops.Identity(1)) @
(ops.Identity(1) * ops.ControlledU(1, 2, ops.Cnot(2, 4))) @
(ops.ControlledU(0, 2, ops.Cnot(2, 4))) @
(ops.ControlledU(0, 1, ops.Cnot(1, 4))) @
(ops.Identity(1) * ops.Cnot(1, 3) * ops.Identity(1)) @
(ops.Cnot(0, 3) * ops.Identity(1)) )
The resulting matrix is $2^5 \times 2^5$ and looks something like:
Operator for 5-qubit state space. Tensor:
[[1.+0.j 0.+0.j 0.+0.j ... 0.+0.j 0.+0.j 0.+0.j]
[0.+0.j 1.+0.j 0.+0.j ... 0.+0.j 0.+0.j 0.+0.j]
[0.+0.j 0.+0.j 1.+0.j ... 0.+0.j 0.+0.j 0.+0.j]
...
[0.+0.j 0.+0.j 0.+0.j ... 0.+0.j 1.+0.j 0.+0.j]
[0.+0.j 0.+0.j 0.+0.j ... 1.+0.j 0.+0.j 0.+0.j]
[0.+0.j 0.+0.j 0.+0.j ... 0.+0.j 0.+0.j 0.+0.j]]
This should be relatively simple to achieve in other infrastructures, such as Qiskit. Hope this helps. | {
"domain": "quantumcomputing.stackexchange",
"id": 4434,
"tags": "qiskit, circuit-construction"
} |
Do alternative start codons code for methionine after transcription? | Question: There is some literature which shows that all start codons code for methionine. However, in the standard genetic code, the alternative start codons clearly code for leucine. Does that mean these codons will code for leucine when they are encountered during translation (after translation has been initiated at the start codon)?
Answer: I commented that this was a duplicate, but reading the question more carefully you seem to be asking something slightly different.
In the context of a 'start' these codons will be recognised by fMet-tRNA and a formyl-methionine will be inserted as the first amino acid. Subsequent occurrences of the same codon within the open reading frame will be translated normally (e.g. GUG > valine).
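As a toy illustration of this positional rule (my addition; the codon table is truncated to just the codons needed here), translation treats the same codon differently only at the initiation position:

```python
# Illustrative sketch: an alternative start codon such as GUG is read by the
# initiator fMet-tRNA at position 0, but normally (as Val) anywhere else in
# the open reading frame.
CODON_TABLE = {'AUG': 'Met', 'GUG': 'Val', 'AAA': 'Lys', 'CCA': 'Pro'}
START_CODONS = {'AUG', 'GUG', 'UUG'}

def translate(orf):
    codons = [orf[i:i + 3] for i in range(0, len(orf), 3)]
    peptide = []
    for pos, codon in enumerate(codons):
        if pos == 0 and codon in START_CODONS:
            peptide.append('fMet')          # initiator tRNA inserts fMet
        else:
            peptide.append(CODON_TABLE[codon])
    return peptide

assert translate('GUGAAACCA') == ['fMet', 'Lys', 'Pro']
assert translate('AUGGUG') == ['fMet', 'Val']   # internal GUG reads as Val
```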
The use of GTG as an initiation codon in the E. coli lacI gene
In this paper
Frottini et al. (2006) The Proteomics of N-terminal Methionine Cleavage. Molec. & Cell. Proteomics 5: 2336-2349
the authors report assays of E. coli methionine aminopeptidase with model peptides showing that when Lys is the 2nd residue, Met removal is very inefficient. They also show that when Pro is the 3rd residue, Met removal is very inefficient.
This explains the fact that when the lacI repressor protein was sequenced:
Beyreuther et al. (1973) The amino-acid sequence of lac repressor. PNAS 70: 3576-3580
it was found to have a Met residue at its N terminus (sequence Met-Lys-Pro-).
However, when the lacI gene was sequenced:
Farabaugh (1978) Sequence of the lacI gene. Nature 274: 765-769
the corresponding DNA sequence was GTG AAA CCA, demonstrating that the N-terminal residue is encoded by a GTG(Val) codon. LacI residue Val23 is encoded by a GTG codon, demonstrating the normal use of that codon in the body of the mRNA. | {
"domain": "biology.stackexchange",
"id": 2680,
"tags": "translation, genetic-code"
} |
$SU(3)$ Clebsch-Gordan Coefficient | Question: I have a problem computing the ratio $$\frac{P(\pi^0 P\rightarrow\Delta^+)}{P(K^- P\rightarrow\Sigma^{*0})}.$$ The problem demands reducing the $S$-matrix first but I really don't see how to get this result. I tried looking for an example of this kind of decays but nothing. Can anybody tell me how to compute just one probability and I'll figure out the rest.
My guess is that $P(\pi^0 P\rightarrow\Delta^+) = \lambda_1\cdot\alpha$, where $\alpha$ is a Clebsch-Gordan Coefficient and $\lambda_1$ is completely determined by the physics and will get cancelled in the ratio.
But still my question, how to compute the Clebsch Gordon Coefficient in this case?
Answer: The tensor product
\begin{align}
(1,1)\otimes(1,1)=(2,2)+2(1,1)+(0,0)+(3,0)+(0,3)
\end{align}
and in fact $(3,1)$ has dimension $24$ whereas $(3,0)$ has dimension $10$, so it's likely $\Delta$ is in $(3,0)$, not $(3,1)$.
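A quick dimension count (my addition, using the standard $SU(3)$ formula $\dim(p,q) = \tfrac{1}{2}(p+1)(q+1)(p+q+2)$) confirms both the decomposition and the $24$ vs $10$ remark:

```python
def dim_su3(p, q):
    # Dimension of the SU(3) irrep with Dynkin labels (p, q).
    return (p + 1) * (q + 1) * (p + q + 2) // 2

assert dim_su3(1, 1) == 8     # the octet
assert dim_su3(3, 0) == 10    # the decuplet containing the Delta
assert dim_su3(3, 1) == 24    # so (3,1) is not the decuplet

# 8 x 8 = 27 + 8 + 8 + 1 + 10 + 10 = 64
irreps = [(2, 2), (1, 1), (1, 1), (0, 0), (3, 0), (0, 3)]
assert sum(dim_su3(p, q) for p, q in irreps) == dim_su3(1, 1) ** 2
```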
You need to work out the remaining quantum numbers for your particles - let's call them $\alpha,\beta$ and $\gamma$ respectively. Once you have this you need to get the CG
$C_{(11)\alpha;(11)\beta}^{(30)\gamma}$.
This CG is given in this table. You can also use Table 2 of the "canonical" de Swart paper | {
"domain": "physics.stackexchange",
"id": 63371,
"tags": "particle-physics, symmetry, group-theory, representation-theory, lie-algebra"
} |
Why are four-vectors needed in the Dirac equation, when there are 4 linearly independent 2D matrices? | Question: I was taught that for the Dirac-equation to "work", you need matrices of the following form:
$Tr(\alpha^i) = 0$.
Eigenvalues +1 or -1
2 previous points together: equal number of negative and positive eigenvalues, thus even dimension.
$\alpha^i$ and $\beta$ meet the anti-commutator relation.
My Advanced QM professor told me convincingly that there are only 3 linearly independent 2D matrices, the Pauli matrices, which can be used in 2D. That's one short, so you'll need at least 4D matrices.
Now my QFT professor, who excels at confusing me (and not only me, mind you), stated that it is perfectly possible to use only 2D matrices, by extending the Pauli matrix set with another one. I think he's wrong, but he seems too old (and "wise") to be wrong, and I didn't remember all the conditions above at the time, so I didn't want to argue without arguments.
Is it possible to 2D-ify the Dirac equation? (and not only in a 2D system like graphene, but in general)
Thanks
Answer: Dear rubenb, yes, what your professor says is surely based on solid maths. The reason is that the 4-component Dirac spinor is actually composed of two separate 2-component pieces.
The elementary "spinors" for 3+1 dimensions have two complex components. That results from the isomorphism between groups
$$SL(2,C) \sim Spin (3,1).$$
Note that both groups have 6 real generators. In particular, in a properly chosen basis, the $\alpha^i$ and $\beta$ matrices may be brought into a block-diagonal form with $2\times 2$ blocks. It follows that the $2\times 2$ blocks themselves satisfy the same algebra.
In particular, the four matrices may be written as
$$(\beta, \alpha^i) = (1_{2\times 2}, \sigma^i)\equiv \sigma^\mu$$
i.e. as the Pauli matrices supplemented with the identity matrix. Note that the $\alpha^i$ i.e. $\sigma^i$ matrices anticommute with each other while they commute with $\beta$ i.e. $\sigma^0$ and all the matrices square to the identity much like $\beta$, $\alpha^i$ do.
The isomorphism above may be viewed as a "noncompact extension" of the usual isomorphism
$$SU(2) \sim Spin(3).$$
Note that the group $SU(2)$ is a subgroup of $SL(2,C)$ - it's the same pair as $Spin(3)$ which is a subgroup of $Spin(3,1)$.
The two-component spinors are directly relevant for the description of the neutrinos. They only describe a left-handed massless particle (and a right-handed massless antiparticle). That's different from the 4-component Dirac spinor that describes a particle that can be either left-handed or right-handed. The neutrino is given by a Weyl spinor and the free equation is simply
$$\sigma^\mu \partial_\mu \chi = 0$$
which is Lorentz-covariant. However, one must realize that the 4-vector of $2\times 2$ matrices, $\sigma^\mu$, don't transform the 2-dimensional complex space (left-handed Weyl spinors) onto itself but onto another 2-dimensional complex space (of right-handed Weyl spinors) which is the complex conjugate of the first one.
Massive charged particles such as the electron require a 4-component spinor - i.e. a pair of two 2-component spinors - but for neutrinos, the minimum amount to describe a single particle is given by one 2-component spinor. | {
"domain": "physics.stackexchange",
"id": 632,
"tags": "quantum-mechanics, quantum-spin, relativity, dirac-equation"
} |
Do the 'fundamental circuit elements' have a correspondence in quantum technologies? | Question: Capacitors, Inductors and Resistors are well-known circuit components. Since the proposal of Leon Chua in 1971, the Memristor joined them as a fundamental component.
I am wondering whether these elements would be somehow imitated by the means of quantum technologies and what would be the requirements to achieve them.
Answer: When you ask,
I am wondering whether these elements would be somehow imitated by the means of quantum technologies
there are different levels on which you can interpret this question. You might mean to ask whether people will realise quantum capacitors, inductors, or resistors, or you might mean to ask whether people will realise components which, in quantum computers, fulfil the same functional roles as capacitors, inductors, or resistors in order to realise digital information processing — as opposed, for instance to analogue computers to model differential systems of equations.
It must be remembered that quantum technologies are at an early phase, where this is no single way which we can be confident will form the basis of a scalable quantum computer. But we can consider whether there are any cases where there may be interesting analogues.
Many quantum technologies do not represent anything like an electrical circuit, as such. Ion traps store bits of information on individual ions, which are moved in a limited and carefully controlled way. There is no natural notion of electrical conduction, resistors, or capacitors in this setting. Quantum dots are even less like electrical circuits, in that the locations of the physical systems storing the data are fixed.
Flux qubits, on the other hand, explicitly include circuits which carry a current (albeit a very small one). The resistance in such circuits is effectively zero, as they are superconducting; but they do involve Josephson junctions, which are often considered a non-linear type of inductor.
This is different from whether or not there is anything in a given platform which is doing the same job as a resistor, capacitor, or inductor: which may be substantially different on the level of physics, but which are somehow performing a similar role in mediating how a system performs information processing. However, there is a big difference between the way that classical semiconductor electronics realises information processing — with physical gates, which transform information-carrying input signals to produce output signals — and the way every current quantum technology realises information processing, which is to perform controlled changes of the dynamics of systems prepared in some input state, to realise an output state.
(The one possible exception is photonic quantum systems, in which the information is carried in light signals rather than in the states of more-or-less static pieces of matter. Perhaps you might argue that an optical memory is somehow analogous to a capacitor, or that a wave plate is analogous to an inductor, but these don't seem to be meaningful functional analogues for how an optical system might be used to perform quantum information processing.)
In summary: there is no single answer to your question, because of the different things you might mean by it and because there is no single platform to refer to in order to provide a definitive answer. But most of the platforms don't have anything which represents these basic electrical components, or which play the same role. Quantum technologies are simply expected to operate differently than classical computing technology. | {
"domain": "quantumcomputing.stackexchange",
"id": 100,
"tags": "technologies"
} |
What needs to be considered when configuring the clock system for a MSP430F5xxx micro-controller? | Question: I need to configure the clock system for a MSP430F5528 microcontroller. The current development board has a 24MHz crystal attached to the XT2 pins (XT2IN/XT2OUT). The current design will need to support the following peripherals.
UART
ADC
SPI
Timers
A few GPIOs will be needed for system power-up. The system will be part of a low-power application. The power budget has not yet been determined.
About the MSP430 clock system
The MSP430 for the most part has three internal clocks
MCLK - Master Clock - CPU
SMCLK - Sub Master Clock - fast peripherals
ACLK - Auxiliary Clock - Slow peripherals
Question: What needs to be considered when configuring the clock system based on the above criteria?
Answer: It depends a bit on the quality you expect from some of the peripherals and what their qualities are at given clock speeds.
Will you want a very high speed SPI interface, and can this specific chip support that on the fast auxiliary clock? Or are you going to do 1MHz SPI at most? The same goes for all the other devices.
It's of course a nice boasting thing to be able to say that everything runs on many-MHz, but if it's not needed it will create more power consumption, more heat and more programming effort for no result. It can be tempting to just do everything at maximum speed and highest accuracy, but in almost all cases a good systems design will start with indexing what are the minimum speed and accuracy for something to work and then taking the appropriate margins. (Caveat: If you may need a higher speed later, it may be advantageous to make the hardware such that scaling up is only a firmware issue).
As for activating the crystal: I don't actually know this particular chip, but it seems similar in build-up to the ones I know that have a configurable clock source selected at power-up, in which case setting the crystal as a source is a simple configuration, and possibly the configuration of all the other clocks is too. If that isn't the case and you need to actively switch while the core is already running, you need to make 100% sure the crystal is running before you switch over. The datasheet will mention the procedures needed for that very clearly if this is the case. Usually it's something along the lines of: activate clock --> wait for a bit in a register to be set --> switch over.
With respect to the ADC, that can become a very complicated case. If you need very low sampling jitter or highly accurate timing (for highly accurate control or very (very) high grade Audio) you will quickly get to the domain of using the Crystal as a clock source. Where you'll then also need to make some hardware considerations. Both in the Analogue domain and on the Crystal end, since crystal accuracy is influenced by the loading capacitance (which includes traces and pins).
But those applications are few and far between.
In most cases your timers and your UART will determine the clocks you need, since they are most sensitive to clock drift and/or offset. If you need a timer just for low-speed timing you are better off relying on a watch crystal, since both the crystal and the internal driving circuitry for such low frequency crystals are usually better suited to high accuracy and low drift over long time. | {
"domain": "engineering.stackexchange",
"id": 383,
"tags": "electrical-engineering, embedded-systems, power"
} |
How to calculate v min and v max for C51 DQN | Question: Background: In C51 DQNs you must specify a v-min/max to be used during training. The way this is generally done is you take the max score possible for the game and set that to v-max, then v-min is just negative v-max. For a game like Pong deciding the v-min/max is simple because the max score possible is 20, therefore, v_min=-20 and v_max=20.
Question: In a game like Space Invaders, there is no max score, so how would I calculate the v-min/max for a C51 DQN?
Answer: If you're using a discount factor less than 1, you should be able to compute a maximum return (likewise, a minimum return) based on the max (min) reward you can earn at each timestep. However, this issue you bring up is usually cited as a difficulty with C51. I think people tend to simply use fixed values for the min/max return (or just make rough estimates). If you want to avoid this, I recommend looking into the QR-DQN algorithm which circumvents this issue altogether and is more theoretically sound. | {
"domain": "ai.stackexchange",
"id": 2276,
"tags": "reinforcement-learning, dqn, probability-distribution"
} |
Calculator that does basic functions and area/volume in Python 3.6 | Question: This is my first big project that I've made in Python. I've noticed as I was writing it that it's extremely repetitive and I just don't know how to make it less so.
# This is a basic calculator that also has area/volume calculations for certain shapes.
import math
import sys
# Main menu
def main():
choice = input('Which operation would you like?\n'
'1) Area\n'
'2) Volume\n'
'3) Basic Math\n'
'4) Quit\n')
if choice == '1':
area()
elif choice == '2':
volume()
elif choice == '3':
basic_math()
elif choice == '4':
sys.exit('Choice 4 selected. Quitting...')
else:
print('I didn\'t understand your input!')
# Menu to choose which area you'd like to calculate
def area():
choice = input('Please enter what you\'d like to calculate\n'
'1) Area of a square\n'
'2) Area of a rectangle\n'
'3) Area of a circle\n'
'4) Area of a triangle\n'
'5) Area of a equilateral triangle\n'
'6) Area of a trapezoid\n'
'7) Area of a cube\n'
'8) Quit\n'
'9) Main Menu\n')
try:
if choice == '1':
area_of_square()
elif choice == '2':
area_of_rectangle()
elif choice == '3':
area_of_circle()
elif choice == '4':
area_of_triangle()
elif choice == '5':
area_of_equilateral_triangle()
elif choice == '6':
area_of_trapezoid()
elif choice == '7':
area_of_cube()
elif choice == '8':
sys.exit('Choice 8 selected. Quitting...')
elif choice == '9':
main()
else:
print('I didn\'t understand your input!')
main()
except NameError:
print('You forgot to define a function!')
area()
# Volume main menu
def volume():
choice = input('Please enter what you\'d like to calculate\n'
'1) Volume of a cube\n'
'2) Volume of a sphere\n'
'3) Volume of a cylinder\n'
'4) Volume of a cone\n'
'5) Volume of a rectangular prism\n'
'6) Volume of a triangular prism\n'
'7) Quit\n'
'8) Main menu\n')
try:
if choice == '1':
volume_of_cube()
elif choice == '2':
volume_of_sphere()
elif choice == '3':
volume_of_cylinder()
elif choice == '4':
volume_of_cone()
elif choice == '5':
volume_of_rect_prism()
elif choice == '6':
volume_of_tri_prism()
elif choice == '7':
sys.exit('Option 7 selected. Quitting...')
elif choice == '8':
main()
else:
print('I didn\'t understand your input!')
volume()
except NameError:
print('Programmer forgot to define a function!')
volume()
# Menu for basic math
def basic_math():
choice = input('Please enter which operation you would like\n'
'1) Addition\n'
'2) Subtraction\n'
'3) Multiplication\n'
'4) Division\n'
'5) Quit\n'
'6) Menu\n')
try:
if choice == '1':
addition()
elif choice == '2':
subtraction()
elif choice == '3':
multiplication()
elif choice == '4':
division()
elif choice == '5':
sys.exit('Option 5 selected. Quitting...')
elif choice == '6':
main()
else:
print('I\'m sorry, I didn\'t understand your input.')
basic_math()
except NameError:
print('Programmer forgot to define a function! How lazy of him/her')
basic_math()
#############################################
# Start Basic Math functions
#############################################
def addition():
try:
x = int(input('Please enter your first value '))
y = int(input('Please enter the next value '))
print('The sum is {0}'.format(x + y))
except ValueError:
print('You entered alphabetic characters! Please enter integers only.')
addition()
def subtraction():
try:
x = int(input('Please enter your first value '))
y = int(input('Please enter your next value '))
print('The value is {0}'.format(x - y))
except ValueError:
print('You entered alphabetic characters! Please enter integers only.')
subtraction()
def multiplication():
try:
x = int(input('Please enter your first value '))
y = int(input('Please enter your next value '))
print('The value is {0}'.format(x * y))
except ValueError:
print('You entered alphabetic characters! Please enter integers only.')
multiplication()
def division():
try:
x = int(input('Please enter your first value '))
y = int(input('Please enter your next value '))
print('The value is {0}'.format(x / y))
except ValueError:
print('You entered alphabetic characters! Please enter integers only.')
division()
#################################################
# End Basic Math Functions, start area
#################################################
def area_of_square():
try:
area = int(input('The area of your square is... '))
print(area**2)
except ValueError:
print('You entered alphabetic characters! Please enter integers only.')
area_of_square()
def area_of_circle():
try:
radius = int(input('The area of your circle is... '))
area = math.pi * radius**2
print(area)
except ValueError:
print('You entered alphabetic characters! Please enter integers only.')
area_of_circle()
def area_of_triangle():
try:
base = int(input('Please enter the base... '))
height = int(input('Please enter the height...'))
print(base * height / 2)
except ValueError:
print('You entered alphabetic characters! Please enter integers only.')
area_of_triangle()
def area_of_equilateral_triangle():
try:
area = int(input('The area of your equilateral triangle is... '))
x = math.sqrt(3) / 4 * area**2
print(x)
except ValueError:
print('You entered alphabetic characters! Please enter integers only.')
area_of_equilateral_triangle()
def area_of_trapezoid():
try:
a = int(input('Enter base 1: '))
b = int(input('Enter base 2: '))
height = int(input('Enter the height: '))
# For some reason, when this was on the same line, it messed up the
# order of operations, so I set the formula into 3 different variables.
area = a + b
area1 = area / 2
area2 = area1 * height
print('The area is {0}'.format(area2))
except ValueError:
print('You entered alphabetic characters! Please enter integers only.')
area_of_trapezoid()
def area_of_cube():
try:
a = int(input('Enter an edge of your cube... '))
area = 6 * a**2
print('The area of your cube is {0}'.format(area))
except ValueError:
print('You entered alphabetic characters! Please enter integers only.')
area_of_cube()
#######################################################################
# End Area functions here: Start Volume functions.
#######################################################################
def volume_of_cube():
try:
a = int(input('Enter an edge of your cube... '))
volume = a**3
print('The volume of your cube is {0}'.format(volume))
except ValueError:
print('You entered alphabetic characters! Please enter integers only.')
volume_of_cube()
def volume_of_cone():
try:
r = int(input('Enter the radius... '))
h = int(input('Enter the height... '))
volume = math.pi * r**2 * h / 3
print('The volume of the cone is {0}'.format(volume))
except ValueError:
print('You entered alphabetic characters! Please enter integers only.')
volume_of_cone()
def volume_of_sphere():
try:
r = int(input('Enter the radius of your sphere... '))
volume = 4/3 * math.pi * r**3
print('The volume of your sphere is {0}'.format(volume))
except ValueError:
print('You entered alphabetic characters! Please enter integers only.')
volume_of_sphere()
def volume_of_cylinder():
try:
r = int(input('Enter the radius of your cylinder... '))
h = int(input('Enter the height of your cylinder... '))
volume = math.pi * r**2 * h
print('The volume of yoru cylinder is {0}'.format(volume))
except ValueError:
print('You entered alphabetic characters! Please enter integers only.')
volume_of_cylinder()
def volume_of_rect_prism():
try:
w = int(input('Enter the width '))
h = int(input('Enter the height '))
l = int(input('Enter the length '))
volume = l * w * h
print('The volume of the rectangular prism is {0}'.format(volume))
except ValueError:
print('You entered alphabetic characters! Please enter integers only.')
volume_of_rect_prism()
#def volume_of_tri_prism():
# try:
# a = int(input('Enter 1st base side '))
# b = int(input('Enter 2nd base side '))
# c = int(input('Enter 3rd base side '))
# h = int(input('Enter the height '))
# volume = 1 / 4 * h
# volume1 = a**4 + 2(a + b)**2
# print('The volume of the triangular prism is {0}'.format(volume))
# except ValueError:
# print('You entered alphabetic characters! Please enter integers only.')
# volume_of_tri_prism()
if __name__ == "__main__":
main()
Answer: The main problem is the repetitiveness and nestedness when it comes to reading user inputs and mapping them to function calls. You are currently using multiple if/else branches, but what if you used a dictionary to map choices to function names:
COMMANDS = {
'1': addition,
'2': subtraction,
'3': multiplication,
'4': division,
'5': exit,
'6': main
}
if choice not in COMMANDS:
print('I\'m sorry, I didn\'t understand your input.')
basic_math()
else:
COMMANDS[choice]()
Note that exit here is a function you might define to exit the app.
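Putting the pieces together, here is a minimal runnable sketch of the dispatch pattern (the stub functions below are placeholders that return strings, not the original calculator operations):

```python
# Stub operations standing in for the real calculator functions.
def addition():
    return 'addition chosen'

def subtraction():
    return 'subtraction chosen'

# Map the user's menu choice (a string) to the function to call.
COMMANDS = {
    '1': addition,
    '2': subtraction,
}

def handle(choice):
    if choice not in COMMANDS:
        return "I'm sorry, I didn't understand your input."
    return COMMANDS[choice]()  # look up the function, then call it

print(handle('1'))  # addition chosen
print(handle('9'))  # fallback message for unknown choices
```

This replaces a chain of if/elif branches with a single lookup, so adding a new operation is just one more dictionary entry.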
Also, look through the third-party apps in the CLI space - there might be a tool that can ease creating this kind of question-choice style programs.
Here are some other notes:
the program is too long - split it into multiple logical parts to have a better separation of concerns and modularity. For example, the calculations of areas and volumes should be separated from the question-choice handling functions
you can use multi-line strings instead of having regular strings with newline characters
you can wrap the string inside the print() statement in double quotes so that you won't need to escape the single quote (credit to @Dex'ter):
"I'm sorry, I didn't understand your input." | {
"domain": "codereview.stackexchange",
"id": 24308,
"tags": "python, calculator"
} |
Spinor notation in general relativity | Question: I have a somewhat broad/big question, and I know that there are many references for it available out there. However, so far I couldn't find anything that I can really understand, so this is my last resort.
The question is about the spinor notation and its use in general relativity. Reading a paper by Penrose, I saw the following statement (non-verbatim):
Define $\kappa_{AB}=\psi^{-1/3}o_{(A}i_{B)}$, where the Weyl conformal spinor is $\Psi_{ABCD}=\psi o_{(A}o_{B}i_{C}i_{D)}$. Then $H_{ab}=i\kappa_{AB}\epsilon_{A'B'}-i\epsilon_{AB}\overline{\kappa}_{A'B'}$ is a skew tensor. <…>
Now, I don't understand the above objects at all. From what I read (the book "Introduction to 2-Spinors in General Relativity" by O'Donnell) we can think of $o$ and $i$ as vectors $o=(1,0)$ and $i=(0,1)$ (is that correct?). Then the argument is that the spinor indices (here, A B C D A' B') are purely notation and don't mean anything. Well, this is where I stopped understanding what kind of objects I'm working with. If it's purely symbolic, how can I actually work with it/calculate something?
In the example above, since on the left hand side he uses "normal notation" $H_{ab}$ I understand that we have a rank 2 tensor. By looking at the right hand side, I don't understand anything. What I know is that $\epsilon$ is like an equivalent of the metric in the spinor notation, in the sense that we can, for example, raise and lower indices with it. However, since everything is supposed to be just a "symbol", I don't understand the object on the right hand side.
Answer: Before going further, I would suggest you to read Chapter 13 ("Spinors") of R.Wald's book "General Relativity". In that chapter, you will see that 2-spinors are simply vectors living in a two-dimensional complex vector space. The capital letters in the indices are simply the abstract index notation for these vectors (see Section 2.4 in Chapter 2 of the same book).
You will also see that the real spinorial tensors, i.e., the spinorial tensors of type (1,0;1,0) such that $\overline{\varphi}^{A'A}=\varphi^{AA'}$, form a real four-dimensional vector space, $V$. Also, taking the $\epsilon_{AB}$, you can build the following spinorial tensor:
$$g_{AA'BB'}=\epsilon_{AB}\overline{\epsilon_{A'B'}}$$
which gives you a multilinear map of the form $V\times V\longrightarrow\mathbb{R}$. It can be verified that this multilinear map defines a Lorentz metric on $V$. This vector space $V$ can be identified with the usual tangent spaces $TM_{p}$ of flat spacetime (via identification between orthonormal bases of $V$ and $TM_{p}$), and that's why it's customary to make the notational identification $a\cong AA'$, where $a$ is the abstract index notation for a tangent vector. For the Minkowski metric of spacetime, $\eta_{ab}$, we, of course, get:
$$\eta_{ab}\cong g_{AA'BB'}=\epsilon_{AB}\overline{\epsilon_{A'B'}}$$
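As a numerical illustration of this claim (a sketch added here, not part of the original answer; the $1/\sqrt{2}$ normalization of the vector-to-matrix map is an assumed convention), one can check that the $\epsilon$-contraction of the Hermitian matrix associated with a real 4-vector reproduces its Minkowski norm:

```python
import math

def eta(v):
    """Minkowski norm t^2 - x^2 - y^2 - z^2 (signature +---)."""
    t, x, y, z = v
    return t*t - x*x - y*y - z*z

def spinor_matrix(v):
    """Map v^a = (t, x, y, z) to the Hermitian matrix V^{AA'}."""
    t, x, y, z = v
    s = 1.0 / math.sqrt(2.0)
    return [[s*(t + z), s*(x + 1j*y)],
            [s*(x - 1j*y), s*(t - z)]]

def eps_contract(V):
    """epsilon_{AB} epsilon_{A'B'} V^{AA'} V^{BB'}  (equals 2 det V)."""
    eps = [[0, 1], [-1, 0]]
    total = 0
    for A in range(2):
        for B in range(2):
            for Ap in range(2):
                for Bp in range(2):
                    total += eps[A][B] * eps[Ap][Bp] * V[A][Ap] * V[B][Bp]
    return total

v = (2.0, 0.3, -1.1, 0.7)
print(abs(eps_contract(spinor_matrix(v)) - eta(v)) < 1e-12)  # True
```

So the purely "symbolic" index expression $g_{AA'BB'}v^{AA'}v^{BB'}$ does compute a concrete number, and it agrees with $\eta_{ab}v^a v^b$.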
It was Penrose who invented this notation, and he (as well as most relativists today) uses the notation extensively. | {
"domain": "physics.stackexchange",
"id": 21477,
"tags": "general-relativity, differential-geometry, notation, tensor-calculus, spinors"
} |
Qiskit: QAOAAnsatz circuit with custom Hamiltonian | Question: I am trying to implement the Quantum Approximate Optimization Ansatz by creating a parametrized subcircuit
$$V (α) = e^{−iH_M α_1} e^{−iH_D b_1} ... e^{−iH_M α_n} e^{−iH_D b_n}$$
with the custom driver Hamiltonian $H_D = \mathbb{I} - \left|b \right> \left< b\right|$, where $\left| b \right>= U \left| 0 \right>$ is a random normalized state, and the default mixer Hamiltonian $H_M$ of the original paper.
I have a problem feeding my Hamiltonian to the QAOAAnsatz as it expects an OperatorBase instance as input.
How do I construct the Operator object for QAOAAnsatz or how do I create a custom QAOA circuit?
Answer: Assume that $U$ is given as a quantum circuit:
U = QuantumCircuit(num_qubits)
Then to get the state vector $\left| b \right>$ we can use Statevector.evolve() method[1]
from qiskit.quantum_info import Statevector
zero = Statevector.from_label('0'*num_qubits)
b = zero.evolve(U)
The method Statevector.to_operator()[2] converts a state to a rank-1 projector operator. So we can construct an OperatorBase instance for $H_D = \mathbb{I} - \left|b \right> \left< b\right|$ as follows:
from qiskit.opflow import I
from qiskit.opflow.primitive_ops import PrimitiveOp
projector_op = PrimitiveOp(b.to_operator())
cost_operator = (I^num_qubits) - projector_op
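As a quick sanity check of the underlying math (a plain-Python sketch independent of Qiskit; the single-qubit example state is made up), $H = \mathbb{I} - \left|b\right>\left<b\right|$ annihilates $\left|b\right>$ and acts as the identity on the orthogonal complement:

```python
def apply_H(b, v):
    """Apply H = I - |b><b| to v (components as complex numbers)."""
    overlap = sum(bj.conjugate() * vj for bj, vj in zip(b, v))  # <b|v>
    return tuple(vj - bj * overlap for bj, vj in zip(b, v))

b = (0.6, 0.8j)                                    # normalized: 0.36 + 0.64 = 1
b_perp = (-b[1].conjugate(), b[0].conjugate())     # orthogonal to b

print(apply_H(b, b))       # ~ (0, 0): H|b> = 0
print(apply_H(b, b_perp))  # ~ b_perp: eigenvalue 1 on the orthogonal subspace
```

This is exactly the structure QAOA exploits here: $e^{-i\alpha H}$ leaves $\left|b\right>$ invariant while rotating the phase of everything orthogonal to it.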
Finally,
ansatz = QAOAAnsatz(cost_operator) | {
"domain": "quantumcomputing.stackexchange",
"id": 3747,
"tags": "qiskit, circuit-construction, vqe, optimization"
} |
Derivative as a fraction in deriving the Lorentz transformation for velocity | Question: Consider a frame $S$ and $S'$ which is coincides at $t=0$ and then $S'$ starts moving with velocity $v$ in $+x$ direction.
By Lorentz transformation equation,
\begin{align}
x'&=\gamma(x-vt) \\
t'&=\gamma\left(t-\frac{vx}{c^2}\right) \\
x&=\gamma(x'+vt') \\
t&=\gamma\left(t'+\frac{vx'}{c^2}\right)\end{align}
where $\gamma=\frac{1}{\sqrt{1-\frac{v^2}{c^2}}}$.
Consider a body $A$ having coordinates $(x,y,z,t)$ in $S$ frame and $(x',y',z',t')$ in $S'$ frame.
In time $dt$, coordinates become $(x+dx,y+dy,z+dz,t+dt)$ in $S$ frame.
In $S'$ frame, in interval $dt'$, coordinate become $(x'+dx',y'+dy',z'+dz',t'+dt')$
In books the derivation for velocity transformation equations are given as
$u_x=\frac{dx}{dt}$
$u'_x=\frac{dx'}{dt'}$
$dx'=\gamma(dx-vdt)$
$dt'=\gamma(dt-\frac{vdx}{c^2})$
$u'_x=\frac{\gamma(dx-vdt)}{\gamma(dt-\frac{vdx}{c^2})}$
Dividing the numerator and denominator by $dt$, we get
$\displaystyle u'_x=\frac{\frac{dx}{dt}-v}{1-\frac{v(dx/dt)}{c^2}}$
$\displaystyle u'_x=\frac{u_x-v}{1-\frac{vu_x}{c^2}}$
I have tried the derivation using chain rule of differentiation,
As $x'=x'(x,t)$
$\frac{dx'}{dt'}=\frac{\partial x'}{\partial x}\Big|_t\frac{dx}{dt'}+\frac{\partial x'}{\partial t}\Big|_x\frac{dt}{dt'}\tag{1}$
As the transformation equations are invertible so,
$x=x(x',t')$
$t=t(x',t')$
So, $\frac{dx}{dt'}=\frac{\partial x}{\partial x'}\Big |_{t'}\frac{dx'}{dt'}+\frac{\partial x}{\partial t'}\Big |_{x'}\frac{dt'}{dt'}\tag{2}$
$\frac{dt}{dt'}=\frac{\partial t}{\partial x'}\Big|_{t'}\frac{dx'}{dt'}+\frac{\partial t}{\partial t'}\Big |_{x'}\frac{dt'}{dt'}\tag{3}$
Plugging $(2)$ and $(3)$ in $(1)$,
$\frac{dx'}{dt'}=\frac{\partial x'}{\partial x}\Big|_t\Big(\frac{\partial x}{\partial x'}\Big |_{t'}\frac{dx'}{dt'}+\frac{\partial x}{\partial t'}\Big |_{x'}\Big)+\frac{\partial x'}{\partial t}\Big|_{x}\Big(\frac{\partial t}{\partial x'}\Big|_{t'}\frac{dx'}{dt'}+\frac{\partial t}{\partial t'}\Big |_{x'}\Big)$
I have calculated all the above partial derivatives from Lorentz transformation. And I get
$\Big(\gamma^2(1-\frac{v^2}{c^2})-1\Big)u'_x=0$
$\implies 1-1=0$
$\implies 0=0$
I have two doubts,
i) If I use chain rule for differentiation then why I am not able to get transformation equation for velocity. Why I get $0=0$ instead of getting an expression for $u'_x$? Have I made some mistake or there is some other way to derive it using chain rule?
ii) The way used in books is that they treated derivative as fraction. They write expression for $dx'$ and $dt'$ and divide them. But isn't derivative a sort of operation. In elementary calculus courses, we have been told that $\frac{dx}{dt}$ is actually $\frac{d}{dt}(x)$. It is not like dividing $dx$ and $dt$. I get very confused how derivative as a fraction is justified.
Please help!
Answer: The particle is moving in the 1+1-dimensional Minkowski space-time on a (world-line) curve which could be represented as function of the parameter $\,\tau$, the proper time. More precisely $\,s=c\,\tau\,$ is the relativistic $''$arc length$''$, a scalar invariant under Lorentz transformations. It corresponds to the arc length (natural) parameter $\,s\,$ of Euclidean curves, a scalar invariant under space transformations.
So, for the parametric representation of the curve we have
\begin{align}
\texttt{in system } \mathrm S & : \mathbf X\left(\tau\right)\boldsymbol{=}\left[x\left(\tau\right),t\left(\tau\right)\right]
\tag{01a}\label{01a}\\
\texttt{in system } \mathrm S'\! & : \mathbf X'\!\left(\tau\right)\boldsymbol{=}\left[x'\!\left(\tau\right),t'\!\left(\tau\right)\right]
\tag{01b}\label{01b}
\end{align}
The space-time coordinates are related by a Lorentz boost transformation with velocity $\,\upsilon \in \left(-c,c\right)\,$ along the common $\,x,x'-$axis in differential form
\begin{align}
\mathrm dx' & \boldsymbol{=} \gamma_v\left(\mathrm dx\boldsymbol{-}\upsilon \mathrm dt\right)
\tag{02a}\label{02a}\\
\mathrm dt' & \boldsymbol{=}\gamma_v\left(\mathrm dt\boldsymbol{-}\dfrac{\upsilon}{c^2} \mathrm dx\right)
\tag{02b}\label{02b}\\
\gamma_v & \boldsymbol{=}\left(1\boldsymbol{-}\dfrac{\upsilon^2}{c^2}\right)^{\boldsymbol{-}\frac12}
\tag{02c}\label{02c}
\end{align}
Now by the chain rule and differentiation with respect to $\,\tau\,$ we have
\begin{align}
u'_x& \boldsymbol{=}\dfrac{\mathrm dx'}{\mathrm dt'} \boldsymbol{=}\dfrac{\dfrac{\mathrm dx'}{\mathrm d\tau}}{\dfrac{\mathrm dt'}{\mathrm d\tau}} \boldsymbol{=}\dfrac{\dfrac{\partial x'}{\partial x}\dfrac{\mathrm dx}{\mathrm d\tau}\boldsymbol{+}\dfrac{\partial x'}{\partial t}\dfrac{\mathrm dt}{\mathrm d\tau}}{\dfrac{\partial t'}{\partial x}\dfrac{\mathrm dx}{\mathrm d\tau}\boldsymbol{+}\dfrac{\partial t'}{\partial t}\dfrac{\mathrm dt}{\mathrm d\tau}}\boldsymbol{=}\dfrac{\gamma_v\dfrac{\mathrm dx}{\mathrm d\tau}\boldsymbol{+}\left(\boldsymbol{-}\gamma_v\upsilon\right)\dfrac{\mathrm dt}{\mathrm d\tau}}{\left(\boldsymbol{-}\gamma_v\dfrac{\upsilon}{c^2} \right)\dfrac{\mathrm dx}{\mathrm d\tau}\boldsymbol{+}\gamma_v\dfrac{\mathrm dt}{\mathrm d\tau}}
\nonumber\\
& \boldsymbol{=}\dfrac{\gamma_v\left(\dfrac{\mathrm dx}{\mathrm d\tau}/\dfrac{\mathrm dt}{\mathrm d\tau}\right)\boldsymbol{-}\gamma_v\upsilon}{\left(\boldsymbol{-}\gamma_v\dfrac{\upsilon}{c^2} \right)\left(\dfrac{\mathrm dx}{\mathrm d\tau}/\dfrac{\mathrm dt}{\mathrm d\tau}\right)\boldsymbol{+}\gamma_v} \boldsymbol{=}\dfrac{\gamma_v\dfrac{\mathrm dx}{\mathrm dt}\boldsymbol{-}\gamma_v\upsilon}{\left(\boldsymbol{-}\gamma_v\dfrac{\upsilon}{c^2} \right)\dfrac{\mathrm dx}{\mathrm dt}\boldsymbol{+}\gamma_v}
\nonumber\\
& \boldsymbol{=}\dfrac{u_x\boldsymbol{-}\upsilon}{\boldsymbol{-}\dfrac{\upsilon}{c^2} u_x\boldsymbol{+}1}
\tag{03}\label{03}
\end{align}
that is
\begin{equation}
u'_x \boldsymbol{=}\dfrac{u_x\boldsymbol{-}\upsilon}{1\boldsymbol{-}\dfrac{\upsilon u_x}{c^2} }
\tag{04}\label{04}
\end{equation}
$=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!=\!$
ADDENDUM
An other way :
\begin{align}
\require{cancel}
u'_x & \boldsymbol{=}\dfrac{\mathrm dx'}{\mathrm dt'} \boldsymbol{=}\dfrac{\dfrac{\partial x'}{\partial x}\mathrm dx\boldsymbol{+}\dfrac{\partial x'}{\partial t}\mathrm dt}{\dfrac{\partial t'}{\partial x}\mathrm dx\boldsymbol{+}\dfrac{\partial t'}{\partial t}\mathrm dt}\boldsymbol{=}\dfrac{\left(\dfrac{\partial x'}{\partial x}\dfrac{\mathrm dx}{\mathrm dt}\boldsymbol{+}\dfrac{\partial x'}{\partial t}\right)\cancel{\mathrm dt}}{\left(\dfrac{\partial t'}{\partial x}\dfrac{\mathrm dx}{\mathrm dt}\boldsymbol{+}\dfrac{\partial t'}{\partial t}\right)\cancel{\mathrm dt}}
\nonumber\\
& \boldsymbol{=}\dfrac{\gamma_v\dfrac{\mathrm dx}{\mathrm dt}\boldsymbol{-}\gamma_v\upsilon}{\left(\boldsymbol{-}\gamma_v\dfrac{\upsilon}{c^2} \right)\dfrac{\mathrm dx}{\mathrm dt}\boldsymbol{+}\gamma_v}
\boldsymbol{=}\dfrac{u_x\boldsymbol{-}\upsilon}{\boldsymbol{-}\dfrac{\upsilon}{c^2} u_x\boldsymbol{+}1}
\tag{A-01}\label{A-01}
\end{align}
that is
\begin{equation}
u'_x \boldsymbol{=}\dfrac{u_x\boldsymbol{-}\upsilon}{1\boldsymbol{-}\dfrac{\upsilon u_x}{c^2} }
\tag{A-02}\label{A-02}
\end{equation}
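A quick numerical sanity check of the velocity-addition result (not part of the original derivation; $c = 1$ units and the sample speeds are arbitrary): transform a small displacement along the worldline with a Lorentz boost and compare $dx'/dt'$ with the closed-form expression.

```python
c = 1.0
v = 0.5          # boost speed (units of c)
ux = 0.8         # particle speed in S
gamma = (1 - v**2 / c**2) ** -0.5

dt, dx = 1.0, ux * 1.0                # displacement along the worldline in S
dxp = gamma * (dx - v * dt)           # Lorentz-transformed displacements
dtp = gamma * (dt - v * dx / c**2)

formula = (ux - v) / (1 - v * ux / c**2)
print(dxp / dtp, formula)  # both ~ 0.5
```

Note that, as expected, the $\gamma$ factors cancel in the ratio, which is why dividing the differential Lorentz equations side by side gives the same answer as the chain-rule route.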
Any way we use is equivalent to the division of the Lorentz equations \eqref{02a},\eqref{02b} side by side so I don't think that there exists any sense of the chain rule use. It doesn't give to us a better way or something new. | {
"domain": "physics.stackexchange",
"id": 79141,
"tags": "special-relativity, velocity, inertial-frames, differentiation, calculus"
} |
Fixing a stepper motor by its shaft | Question: Suppose I have a small stepper motor, and for reasons (like wiring considerations) it is advantageous for me to not actually have the motor's body fixed as the shaft rotates, but rather have the shaft fixed in place while the entire mechanism of the motor rotate around the shaft. Are the two schemes equivalent if we assume that the shaft is perpendicular to the ground?
My ultimate case is to have a stepper motor rotate while it is upside down, without using additional gear wheels or any other component other than a coupler which secures the shaft to a solid base.
Answer:
Are the two schemes equivalent?
Mostly so. Usually the motor is affixed to a sturdy chassis resting on a solid surface, sinking the vast majority of vibrations and jerk coming from the motion. In your configuration, both the motor and anything attached to it will be fully exposed to these vibrations - which introduce all kinds of headaches like electrical connectors losing contact, wear of joints from material fatigue, screws coming loose, and so on and so forth - not a single thing critical, just a mountain of small headaches. You're also dealing with accelerations if the element turns fast (and especially if it stops rapidly, say, knocking against a brake), and you have to get power and control to the rotating part somehow. Control is fairly easy using short-range wireless communication like Bluetooth or IrDA. Power is worse - you'll either limit your angle so that an attached cable doesn't get too twisted, or supply power through batteries. Or you can use a slip-ring for both power and data, but slip-rings are another can of worms, something that works okay in 'big industrial' settings, but on a small scale is so fault-prone and quirky you're better off redesigning everything to have a fixed stepper. | {
"domain": "engineering.stackexchange",
"id": 3655,
"tags": "mechanical-engineering, motors"
} |
Space Elevator - Could We Do It? | Question: 'Elevator to Space' or 'Death-o-Swing'?
During one of my ever-increasing day-dreams,
I remembered something I heard with respect to a theoretical 'Elevator to Space', and I was wondering what you guys think about this.
Looking at the diagram added, the first thing I noticed was the counterweight - not to mention the gravitational barrier that we would need to overcome initially; with the weight of the cable alone I don't think it's plausible... However,
Assuming we could;
How far out would the end of the cable have to be to escape
the Earth's gravity, enough to then let centrifugal forces take over?
is it even possible?
Assuming that could be done;
Do you think it will be a matter of weight to keep the cable upright - or distance? (Obviously a mixture of the two - but how much of an increase in centrifugal force would be expected?)
This is my Main Quarrel;
Wouldn't even a small weight attached to a tether eventually off-centre the rotation of the Earth and potentially change its orbit and relationship with the Sun and our Moon... Killing us all :( Or could a secondary elevator on the opposite side counter the effects?
I can only imagine this has been dreamt-up already. but it has had me thinking ... which is always dangerous :)
and I thought why not start a discussion... any thoughts are welcome, thanks, Everybody :-P
Answer: Yes, a space elevator is perfectly possible. I have heard it said that the technology exists to put one on the Moon right now, at an estimated cost of around $10 billion or so.
1) An Earth-based space elevator would consist of a cable with one end attached to the surface near the equator and the other end in space beyond geostationary orbit (35,786 km altitude) https://en.wikipedia.org/wiki/Space_elevator#Physics
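As a rough numerical check of the geostationary altitude quoted above (a sketch added here with textbook constants, not from the original answer): setting gravitational attraction equal to the centripetal force for one sidereal day gives $r = (GMT^2/4\pi^2)^{1/3}$.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24         # Earth's mass, kg
T = 86164.1          # sidereal day, s
R_earth = 6.371e6    # Earth's mean radius, m

r = (G * M * T**2 / (4 * math.pi**2)) ** (1 / 3)   # orbital radius from centre
altitude_km = (r - R_earth) / 1000
print(round(altitude_km))  # ~ 35,786 km
```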
2) I think it is a matter of the tensile strength of the material the elevator is made out of: the cable would need to hold its own weight against the centrifugal force trying to throw it away from Earth, so the cable needs to be able to not snap over this long distance.
3) The idea is to attach solar lifts to the cable; these then travel up and down the cable transporting goods without having to worry about burning fuel to overcome the Earth's escape velocity. I think the forces are so small in comparison to things like asteroid/meteoroid impacts and the Moon's tidal influence on us that these cause more distortion to the Earth's orbit than the space elevator would. However I would be interested to know the impact; I'll see if I can find some more info. | {
"domain": "physics.stackexchange",
"id": 48367,
"tags": "newtonian-mechanics, newtonian-gravity, orbital-motion, material-science, satellites"
} |
Textbased User Interface for user and program taking turns | Question: I wrote this program to model interactions between a user and an artificial player, both playing by the same rules (not enforced here for simplicity).
The game played here is "your next word has to start with the last letter of mine".
module Main where
import Lib
import System.Random
import System.Exit
import Control.Monad
vocab = ["alpha","beta","gamma"]
blacklist = []
pick:: [a] -> IO a --picks random element. copy pasted, not understood
pick x = Control.Monad.liftM (x !!) (randomRIO (0, length x - 1))
main :: IO ()
main = do
userInput <- getLine
processUser userInput vocab blacklist
processUser :: String -> [String] -> [String] -> IO a
processUser input vocab blacklist = if input == "quit" then exitSuccess
else do
successor <- getNext input vocab (input:blacklist)
processPC successor vocab blacklist
processPC :: Maybe (IO String) -> [String] -> [String] -> IO a
processPC Nothing v b = do putStrLn "I give up"
exitSuccess
processPC (Just ioWord) v b = do word <- ioWord
putStrLn word
userInput <- getLine
processUser userInput v (word:b)
getNext:: String -> [String] -> [String] -> IO (Maybe (IO String))
getNext lastWord vocab blacklist = do let chooseFrom = filter (`notElem` blacklist) vocab
let matches = filter (\x -> head x == last lastWord) chooseFrom
if null matches then return Nothing
else return (Just (pick matches) )
I am especially interested in how to formulate this better structurally.
Answer: The main issue with this code is that basically everything “lives” in IO. One of the advantages of Haskell over other programming languages is the ability to cleanly separate effects, and you should strive to implement as much of your code as possible in a pure context, then only actually use IO at the top level of your program.
Breaking apart getNext
The most egregious example of this is getNext, which has a pretty convoluted return value, IO (Maybe (IO String)). This is IO nested inside another IO type, which is pretty convoluted. Ideally, that function shouldn’t need to be anything but completely pure, anyway, so it should really return something like Maybe String.
Taking a step back for a moment, though, what does getNext even do? The name getNext is pretty vague. More than that, though, it has a lot of responsibilities. Let’s list them:
It accepts the last word the player typed in, then finds what letter it ended with.
It filters blacklisted words out of a larger wordlist.
It finds the words that start with the letter found in step 1.
Finally, it picks a random word from that list.
This is a lot of different responsibilities for just one function! Haskell works especially well when you define small, atomic functions that only do one thing at a time, then compose them together. Distilling the core functionality, this function should really probably just find all the words in a list that start with a particular character, then return all of them. You can implement this in a single line using isPrefixOf from Data.List:
import Data.List (isPrefixOf)
startingWith :: Char -> [String] -> [String]
startingWith c = filter (isPrefixOf [c])
(As an aside, this actually fixes a problem with your original code, which used head. The head function is partial; that is, it will crash if given an empty list. The isPrefixOf function will handle empty lists properly.)
IO and strong typing
Now, what about handling the other responsibilities? Well, picking a random element is probably the trickiest thing to do because it is sort of side-effectful. You could manually thread a random number generator state around, but that would be a bit cumbersome. One good way to handle this is the MonadRandom typeclass from the package of the same name, which allows writing a pickRandom function without IO:
import Control.Monad.Random (MonadRandom(..))
pickRandom :: MonadRandom m => [a] -> m (Maybe a)
pickRandom [] = return Nothing
pickRandom xs = Just . (xs !!) <$> getRandomR (0, length xs - 1)
This has two improvements over the pick function you found:
It properly handles the case of empty lists by returning Maybe a. The version of pick you found would simply crash.
It does not depend on IO, just on MonadRandom, which is significantly less powerful.
Haskell has lots of ways to be very precise about the type of things. Rather than making everything nullable, it has Maybe. Rather than just passing strings around, Haskell favors using ADTs. Haskell also provides a way from distinguishing between pure and impure operations using the IO type, but in many ways, IO provides some very weak guarantees.
When a function returns IO, it can effectively do anything. It can spawn threads, it can interact with the filesystem, and it can even send data over the network. Ideally, it would be nicer to have more fine-grained typing, just like we have with Haskell’s domain-specific, fine-grained ADTs.
To accomplish this, it’s often possible to use typeclasses like MonadRandom, which encode a very specific set of capabilities. Functions that include a MonadRandom constraint can do one thing: generate random numbers. Now, because MonadRandom is a typeclass, not a datatype, it does not specify how those numbers are generated; it is up to the caller to decide that. The MonadRandom typeclass actually has an instance for IO, which allows generating numbers using the system random number generator, but it also has an instance for the Rand type, which is a purely functional random number generation monad.
We haven’t decided how we’re going to generate random numbers yet. We might even use IO, eventually. However, that’s not the point… the main point is that we’ve now written a function that can do nothing more than its type claims it can, and that’s a good thing.
Handling turns
Now that we have some extremely basic primitives, we can put them together to create handlers for user and computer turns. Each turn can result in one of two actions: giving up, or submitting a word. Therefore, let’s encode that into a datatype, then implement some extremely simple functions for each kind of turn:
data Turn = Word String | GiveUp
userTurn :: String -> Turn
userTurn "quit" = GiveUp
userTurn input = Word input
computerTurn :: MonadRandom m => String -> [String] -> m Turn
computerTurn lastWord wordList = maybeToTurn <$> maybeRandomWord
where lastChar = last lastWord
maybeRandomWord = pickRandom (startingWith lastChar wordList)
maybeToTurn = maybe GiveUp Word
The computerTurn function is a little more complex, but it’s not too bad. It uses the maybe function, which is a convenient helper for transforming a Maybe value into a value of another type (in this case, Turn), and it uses <$>, which is just an infix alias for fmap.
One nice thing about these implementations is, again, we are able to learn a lot about these functions just by looking at their types. The userTurn function is extremely pure, and the computerTurn function uses randomness, but nothing else.
Writing the main loop
Now that we have some primitives, we can write a top-level interpreter that will actually drive the game itself. This function will be a bit longer, since it will handle the actual imperative logic of the game, but it will also be extremely simple, since it’s basically just wiring things together.
One thing that we have pretty much eliminated is the concept of a word blacklist. After all, why not just remove words from the word list itself rather than maintaining two separate lists and threading them around everywhere? We can completely eliminate the need for a blacklist by just pulling words out of the computer’s vocabulary.
import Data.List (delete)
runGame :: [String] -> IO [String]
runGame wordList = do
userInput <- getLine
case userTurn userInput of
GiveUp -> exitSuccess
Word userWord -> do
let remainingWords = delete userWord wordList
computerResult <- computerTurn userWord remainingWords
case computerResult of
GiveUp -> putStrLn "I give up" >> exitSuccess
Word computerWord -> do
putStrLn computerWord
runGame (delete computerWord remainingWords)
This function may look a little complicated, but it’s really not so bad—by just following the types, the function effectively writes itself. We call userTurn, then handle both potential Turn cases. Next, we call computerResult and handle both of its possible cases. Once that’s done, we just loop, and we’re done! The result of runGame is just the number of words left in the computer’s vocabulary.
Now, all that’s left to do is invoke runGame from main:
main :: IO ()
main = void $ runGame vocab
The void function just ignores the result of runGame vocab, properly returning IO (), and it kicks off the main loop by passing in the initial vocab list.
The final result
Here’s the final code after all of my changes:
module Main where
import Data.List (delete, isPrefixOf)
import Control.Monad (void)
import Control.Monad.Random (MonadRandom(..))
import System.Exit (exitSuccess)
data Turn = Word String | GiveUp
vocab :: [String]
vocab = ["alpha","beta","gamma"]
main :: IO ()
main = void $ runGame vocab
runGame :: [String] -> IO [String]
runGame wordList = do
userInput <- getLine
case userTurn userInput of
GiveUp -> exitSuccess
Word userWord -> do
let remainingWords = delete userWord wordList
computerResult <- computerTurn userWord remainingWords
case computerResult of
GiveUp -> putStrLn "I give up" >> exitSuccess
Word computerWord -> do
putStrLn computerWord
runGame (delete computerWord remainingWords)
userTurn :: String -> Turn
userTurn "quit" = GiveUp
userTurn input = Word input
computerTurn :: MonadRandom m => String -> [String] -> m Turn
computerTurn lastWord wordList = maybeToTurn <$> maybeRandomWord
where lastChar = last lastWord
maybeRandomWord = pickRandom (startingWith lastChar wordList)
maybeToTurn = maybe GiveUp Word
pickRandom :: MonadRandom m => [a] -> m (Maybe a)
pickRandom [] = return Nothing
pickRandom xs = Just . (xs !!) <$> getRandomR (0, length xs - 1)
startingWith :: Char -> [String] -> [String]
startingWith c = filter (isPrefixOf [c])
Formatting changes aside, the main differences from your original code are strengthening the types and separating concerns as much as possible to isolate effects. I have also removed do from most of the function implementations, since do tends to force code into a pseudo-imperative style that eliminates a lot of the declarative benefits of Haskell.
Some of the things in this answer might be a bit more advanced than you’ve been exposed to yet, and that’s okay! In truth, there are probably even fancier ways to accomplish what you want—a free monad came to mind when writing this answer, for example. However, the point of this is not to be either so far above you that you can’t understand it, nor to be precisely at your level. It’s alright if not all of this makes sense to you just yet, but I hope that by reaching for some more complicated concepts, you’ll be encouraged, not discouraged, to challenge yourself some more. | {
"domain": "codereview.stackexchange",
"id": 21487,
"tags": "haskell, user-interface"
} |
Formula for mean free path in two dimensions | Question: I'm running some simulations of particle collisions in two dimensions with discretised time and space. In essence, particles only collide if they occupy the same location (cell) at the same time step. The particles are in a 2D box and collide with both each other and the walls, we also assume a closed system (so no gravity etc).
I want to use the mean free path $\lambda$ (the average distance travelled by a particle between collisions) to determine the best values for number of particles $N$, rms velocity $V_{rms}$, and box length $L$; as such I want to know how to calculate mean free path in two dimensions.
So far I have been able to find a formula on Wikipedia for three dimensions:
$$
\lambda=\frac{k_BT}{\sqrt{2}\pi d^2p}
$$
where d is the diameter of the particle and p is the pressure.
which I can easily turn into:
$$
\lambda=\frac{mv_{rms}^2}{2\sqrt{2}\pi d^2p}
$$
but again this is for three dimensions not the two dimensions that I need.
So the question is, can this be converted to two dimensions? or does a two dimensional form already exist?
Any help with this would be greatly appreciated.
Answer: Sure, you can generalize the mean free path to a different number of dimensions. But first, let's understand the derivation in 3D.
A particle will collide with any other particle that it comes within a distance $d$ of. So if it moves a length $\ell$, it will collide if there is another particle in a cylindrical volume $\pi d^2 \ell$. Call this the volume swept out by the particle.
Imagine a region of volume $V$ filled with gas, and let $\bar{v}$ be the mean speed of the particles of that gas in the rest frame of the entire volume. Then, in the reference frame of one of the gas particles, the other particles have an average speed of $\sqrt{2}\bar{v}$. (See this derivation of the factor of $\sqrt{2}$, and note the presence of $\bar{v}$ instead of $v_\text{rms}$ - see this question for details on that)
In the reference frame of that one particle, the total volume swept out by all the other particles in a time $\Delta t$ is $N\pi d^2 \sqrt{2}\bar{v}\Delta t$, where $N$ is the number of particles.
The probability that the chosen at-rest particle experiences an interaction during $\Delta t$ is equal to the fraction of the total volume ($V$) swept out, namely
$$P(\text{int.}) = \frac{N\pi d^2 \sqrt{2}\bar{v}\Delta t}{V}$$
To calculate the mean free time $\tau$, I technically should find the probability distribution of interaction times and compute its mean. But to make the calculation simple, I'll take advantage of a handy coincidence (which I am not justifying here): $\tau$ happens to be equal to the time after which this probability would reach 1 if it increased at a fixed rate over time. So I can replace $\Delta t$ with $\tau$ and $P(\text{int.})$ with $1$, and I get
$$\tau = \frac{V}{N\pi d^2 \sqrt{2}\bar{v}}$$
The mean free path is then given by
$$\lambda = \bar{v}\tau = \frac{V}{\sqrt{2}N\pi d^2}$$
If you assume the particles follow the ideal gas law, you can replace $\frac{V}{N} = \frac{k_B T}{p}$ and recover the formula from Wikipedia.
To modify this to 2D, we just need to change step 1 and follow the argument from there.
Instead of a particle sweeping out a volume $\pi d^2\ell$, it sweeps out an area $2d\ell$.
Step 2 (the factor of $\sqrt{2}$ from relative speeds) is unchanged.
The total area swept out by all $N$ particles is then $Nd\sqrt{8}\bar{v}\Delta t$
The probability is equal to the fraction of the total area,
$$P(\text{int.}) = \frac{Nd \sqrt{8}\bar{v}\Delta t}{A}$$
The mean free time is again the time after which this probability would reach 1,
$$\tau = \frac{A}{Nd\sqrt{8}\bar{v}}$$
and the mean free path is
$$\lambda = \bar{v}\tau = \frac{A}{\sqrt{8}Nd}$$
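For the kind of simulation described in the question, this formula can be inverted to choose $N$ for a target mean free path. A quick sketch (the values of $L$, $d$, and the target below are illustrative, not taken from the question):

```python
import math

# Sketch: invert lambda = A / (sqrt(8) * N * d) to choose the particle
# count N for a target mean free path. L, d, and the target are made-up
# example values, not taken from the question.
def mean_free_path(A, N, d):
    return A / (math.sqrt(8) * N * d)

L, d = 100.0, 1.0
A = L ** 2
target = L / 10.0                           # aim for lambda ~ L / 10
N = round(A / (math.sqrt(8) * target * d))  # invert the formula for N
print(N, mean_free_path(A, N, d))           # N = 354, lambda close to 10
```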
Bear in mind that in 2D, the ideal gas law would be modified; it would have area instead of volume, and you would have to use a 2D version of pressure, which would be force per unit length, not per unit area. It may be easier to just work with $\lambda = \frac{A}{\sqrt{8}Nd}$ directly, since you can just set $A = L^2$ if you know the box side length. | {
"domain": "physics.stackexchange",
"id": 6438,
"tags": "kinetic-theory, mean-free-path"
} |
If two sound waves of different frequencies create beats that occur several hundred times per second, can you hear this effect as its own tone? | Question: If you have multiple waves of different frequencies, the interference from the different waves causes "beats".
(Animation from https://en.wikipedia.org/wiki/Group_velocity)
Let's say that a green dot in the above animation reaches your ear a few hundred times per second.
Is it possible to hear this phenomenon (wave groups occurring at frequencies in the audible range) as its own tone?
Answer: No, one cannot hear the actual beat frequency. For example, if both waves are ultrasonic and the difference in frequency is 440 Hz, you won't hear the A (unless severe nonlinearities come into play; edit: such nonlinear effects are at least 60 dB lower in sound pressure level).
When two ultrasonic waves are close in frequency, the amplitude goes up and down with the beat frequency. A microphone can show this on an oscilloscope. But the human ear does not hear the ultrasonic frequency. It is just silence varying in amplitude :)
(I know a physics textbook where this is wrong.)
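The envelope behaviour is just the product-to-sum identity $\sin(2\pi f_1 t)+\sin(2\pi f_2 t) = 2\cos(\pi(f_1-f_2)t)\,\sin(\pi(f_1+f_2)t)$. A quick numerical check (the two frequencies are illustrative):

```python
import math

# Numerical check of the identity
#   sin(2*pi*f1*t) + sin(2*pi*f2*t)
#     = 2*cos(pi*(f1 - f2)*t) * sin(pi*(f1 + f2)*t)
# for two ultrasonic tones 440 Hz apart (frequencies are illustrative).
f1, f2 = 40_000.0, 40_440.0
for t in [0.0, 1.3e-5, 2.7e-4, 1.0e-3]:
    s = math.sin(2 * math.pi * f1 * t) + math.sin(2 * math.pi * f2 * t)
    envelope = 2 * math.cos(math.pi * (f1 - f2) * t)
    carrier = math.sin(math.pi * (f1 + f2) * t)
    assert abs(s - envelope * carrier) < 1e-9
# The signal is a carrier at (f1 + f2)/2 = 40 220 Hz whose amplitude
# varies at |f1 - f2| = 440 Hz; no 440 Hz component is actually present.
```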
Edit: in some cases the mind can perceive the pitch of a "missing fundamental". For example, when sine waves of 880 and 1320 Hz are played, the mind may perceive a tone of pitch A. This is a psychoacoustic phenomenon, exploited for example in the auditory illusion of an Escher's staircase. | {
"domain": "physics.stackexchange",
"id": 52484,
"tags": "waves, acoustics, interference, biophysics"
} |
An API consumer | Question: I built a minimal working model of a consumer of a RESTful API in Python 3.6. It only consumes one end point in this example. It is using the API from a video game (Destiny 2) to collect data about all the members of a clan (a clan is basically a team that likes to play together).
Some features
The core interface consists of a wrapper for requests (called request), and a class that summarizes the response (ResponseSummary).
URL-generating functions produce the URL for each end point.
Helper functions process the ResponseSummary.
Secret stuff (the API key) is stored as environment variables.
Questions
This is my first project interacting with an API, so any comments, or suggestions are welcome, as I may be making a stupid blunder here. Places of concern include, but are not limited to:
The ResponseSummary class seems a bit smelly to me. It includes all the data I want, but I'm not sure about defaulting all fields to None, the error handling, and the __repr__ may be too verbose.
Is my division of labor into core API code, URL-generators, and helper functions reasonable and Pythonic (is there a better way to do it that makes it more readable)?
Naming: are my variable names OK: in particular is the function request a bad name because it is too close to 'requests'?
Is the way I handled the headers argument for requests OK? I turned it into a constant (it is used in every API call).
The code
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import os
import requests
BASE_URL = 'https://bungie.net/Platform/GroupV2/'
"""
CORE CODE
For making requests, and summarizing the response
"""
def request(url, request_headers):
response = requests.get(url, headers = request_headers)
return ResponseSummary(response)
class ResponseSummary:
"""Important information about the response."""
def __init__(self, response):
self.status = response.status_code
self.url = response.url
self.data = None
self.message = None
self.error_code = None
self.error_status = None
self.exception = None
if self.status == 200:
result = response.json()
self.message = result['Message']
self.error_code = result['ErrorCode']
self.error_status = result['ErrorStatus']
if self.error_code == 1:
try:
self.data = result['Response']
except Exception as ex:
print("ResponseSummary: 200 status and error_code 1, but there was no result['Response']")
print("Exception: {0}.\nType: {1}".format(ex, ex.__class__.__name__))
self.exception = ex.__class__.__name__
else:
print('No data returned for url: {0}.\n {1} was the error code with status 200.'. \
format(self.url, self.error_code))
else:
print('Request failed for url: {0}.\nStatus: {1}'.format(self.url, self.status))
def __repr__(self):
"""What will be displayed/printed for the class instance."""
disp_header = "<" + self.__class__.__name__ + " instance>\n"
disp_data = ".data: " + str(self.data) + "\n\n"
disp_url = ".url: " + str(self.url) + "\n"
disp_message = ".message: " + str(self.message) + "\n"
disp_status = ".status: " + str(self.status) + "\n"
disp_error_code = ".error_code: " + str(self.error_code) + "\n"
disp_error_status = ".error_status: " + str(self.error_status) + "\n"
disp_exception = ".exception: " + str(self.exception)
return disp_header + disp_data + disp_url + disp_message + \
disp_status + disp_error_code + disp_error_status + disp_exception
"""
URL GENERATORS
"""
def get_members_of_group_url(group_id):
"""
Pull all members of a clan.
Documentation: https://bungie-net.github.io/multi/operation_get_GroupV2-GetMembersOfGroup.html
"""
return BASE_URL + group_id + '/Members/?currentPage=1'
"""
HELPER FUNCTIONS
"""
def generate_clan_list(member_data):
"""
Using GetMembersOfGroup end point, create list of member info for clan members.
Each elt is a dict with username, id, join date. Filters out people not on psn.
"""
member_data = member_data['results']
clan_members_data = []
for member in member_data:
clan_member = {}
clan_member['membership_type'] = member['destinyUserInfo']['membershipType']
if clan_member['membership_type'] == 2:
clan_member['name'] = member['destinyUserInfo']['displayName']
clan_member['id'] = member['destinyUserInfo']['membershipId']
clan_member['date_joined'] = member['joinDate']
clan_members_data.append(clan_member)
return clan_members_data
def print_clan_roster(clan_members_data):
"""Print name, membership type, id, and date joined."""
if clan_members_data:
name_list = [clanfolk['name'] for clanfolk in clan_members_data]
col_width = max(len(word) for word in name_list)
for clan_member in clan_members_data:
memb_name = clan_member['name']
length_name = len(memb_name)
num_spaces = col_width - length_name
memb_name_extended = memb_name + " "*num_spaces
print("{0}\tMembership type: {1}\t Id: {2}\tJoined: {3}".format(memb_name_extended, \
clan_member['membership_type'], clan_member['id'], clan_member['date_joined']))
else:
print("print_clan_roster: roster is empty")
def get_environment_variable(var_name):
"""get environmental variable, or return exception"""
try:
return os.environ.get(var_name)
except KeyError:
error_msg = 'KeyError in get_environment_variable: {}.'.format(var_name)
print(error_msg)
if __name__ == "__main__":
"""
Extract and print list of all clan members
"""
#Set constants
D2_KEY = get_environment_variable('D2_KEY')
D2_HEADERS = {"X-API-Key": D2_KEY}
CLAN_ID = '623172'
#Make request to api for clan members, and print list to stdout
get_members_url = get_members_of_group_url(CLAN_ID)
get_members_summary = request(get_members_url, D2_HEADERS)
member_data = get_members_summary.data
clan_members_data = generate_clan_list(member_data)
print_clan_roster(clan_members_data)
Answer: As pointed out by Alex Hall, there are many places the above code could be improved:
Exception handling is nonexistent for requests, which tends to create lots of errors.
The ResponseSummary class is a monster: it doesn't raise any exceptions, but simply prints messages. In Python, exceptions are much better than error codes. Replace that class with a simpler response-handler that simply returns data or raises an exception.
Instead of feeding requests.get the secret key every time, you should use a session and set the secret key to persist over the whole session.
Instead of displaying errors with print, use a logger. This gives you much more flexibility over what will be displayed, and under what conditions.
Below is a new and improved version that fixes all those problems. There is room for improvement, mainly in the exception handling. That turned out to be the most challenging part of this code, and I am still learning.
import os
import requests
import json
import logging
BASE_URL = 'https://bungie.net/Platform/Destiny2/'
BASE_URL_GROUP = 'https://bungie.net/Platform/GroupV2/'
"""
Set up logger: for now just print everything to stdout.
"""
logging.basicConfig(level = logging.INFO,
format = '%(asctime)s - %(levelname)s - %(message)s',
datefmt =' %m/%d/%y %H:%M:%S')
logger = logging.getLogger(__name__)
"""
CORE CODE
Make requests, and extract data from response
"""
class BungieError(Exception):
"""Raise when ErrorCode from Bungie is not 1"""
def make_request(url, session):
try:
response = session.get(url)
if not response.ok:
response.raise_for_status()
except requests.exceptions.RequestException as requestException:
raise
else:
return response
def process_bungie_response(response):
"""Examines response from d2 if you got status_code 200, throws
exception of type BungieException if bungie ErrorCode is not 1. For list of error
codes, see:
https://bungie-net.github.io/multi/schema_Exceptions-PlatformErrorCodes.html#schema_Exceptions-PlatformErrorCodes
"""
response_url = response.url #If you oops sent it something that can't be json'd
try:
response_json = response.json()
except json.JSONDecodeError as jsonError:
msg1 = f"JSONDecodeError in process_bungie_response().\n"
msg2 = "Response does not contain json data.\n"
msg3 = f"URL: {response_url}.\nError: '{jsonError}'"
msg = msg1 + msg2 + msg3
raise BungieError(msg) from jsonError
try:
data = response_json['Response']
except KeyError as keyError:
error_code = response_json['ErrorCode']
error_status = response_json['ErrorStatus']
error_message = response_json['Message']
msg1 = f"KeyError in process_bungie_response.\nURL: {response_url}.\n"
msg2 = f"Error code {error_code}: {error_status}.\nMessage: {error_message}.\n"
msg = msg1 + msg2
raise BungieError(msg) from keyError
else:
return data
def destiny2_api_handler(url, session):
response = make_request(url, session)
return process_bungie_response(response)
"""
URL GENERATORS
"""
def search_destiny_player_url(user_name):
"""Get user's info card:
https://bungie-net.github.io/multi/operation_get_Destiny2-SearchDestinyPlayer.html
Note for this example it's constrained to ps4 (platform = 2)
"""
return BASE_URL + 'SearchDestinyPlayer/2/' + user_name + '/'
def get_members_of_group_url(group_id):
"""
Pull all members of a clan.
https://bungie-net.github.io/multi/operation_get_GroupV2-GetMembersOfGroup.html
"""
return BASE_URL_GROUP + group_id + '/Members/?currentPage=1'
"""
HELPER FUNCTIONS
"""
def generate_clan_list(member_data):
"""
Using GetMembersOfGroup end point, create list of member info for clan members.
Each elt is a dict with username, id, join date. Filters out people not on psn.
"""
member_data = member_data['results']
clan_members_data = []
for member in member_data:
clan_member = {}
clan_member['membership_type'] = member['destinyUserInfo']['membershipType']
if clan_member['membership_type'] == 2:
clan_member['name'] = member['destinyUserInfo']['displayName']
clan_member['id'] = member['destinyUserInfo']['membershipId']
clan_member['date_joined'] = member['joinDate']
clan_members_data.append(clan_member)
return clan_members_data
def print_clan_roster(clan_members_data):
"""Print name, membership type, id, and date joined."""
if clan_members_data:
name_list = [clanfolk['name'] for clanfolk in clan_members_data]
col_width = max(len(word) for word in name_list)
for clan_member in clan_members_data:
memb_name = clan_member['name']
length_name = len(memb_name)
num_spaces = col_width - length_name
memb_name_extended = memb_name + " "*num_spaces
print("{0}\tMembership type: {1}\t Id: {2}\tJoined: {3}".format(memb_name_extended, \
clan_member['membership_type'], clan_member['id'], clan_member['date_joined']))
else:
print("print_clan_roster: roster is empty")
def get_environment_variable(var_name):
"""get environmental variable, or return exception"""
try:
return os.environ.get(var_name)
except KeyError:
error_msg = 'KeyError in get_environment_variable: {}.'.format(var_name)
logger.error(error_msg)
raise
if __name__ == "__main__":
#Set constants
D2_KEY = get_environment_variable('D2_KEY')
D2_HEADERS = {"X-API-Key": D2_KEY}
CLAN_ID = '623172'
USER = 'cortical_iv'
#Make the requests
with requests.Session() as session:
session.headers.update(D2_HEADERS)
logging.info(f"Retrieving info about {USER}")
search_player_url = search_destiny_player_url(USER)
try:
user_data = destiny2_api_handler(search_player_url, session)
except Exception as e:
logging.exception(f"Error getting user data for {USER}.\nException: {e}.")
logging.info(f"Retreiving info about all members of clan {CLAN_ID}")
get_members_url = get_members_of_group_url(CLAN_ID)
try:
members_data = destiny2_api_handler(get_members_url, session)
except Exception as e:
logging.exception(f"Error getting user data for {USER}.\nException: {e}.")
else:
clan_members_data = generate_clan_list(members_data)
print_clan_roster(clan_members_data) | {
"domain": "codereview.stackexchange",
"id": 28656,
"tags": "python"
} |
atmospheric phenomenon? What causes condensation trails to converge? | Question: This air plane just caught my eye. Two contrails apparently are flowing backward, slightly off-centered and then ultimately converge, giving the overall shape of a very narrow rhomboid parallelogram, with one diagonal being on the order of about four plane-width's.
Please explain what I am seeing here as detailed as possible.
Some of you may even know what class of plane this may be and thus infer the scales involved, to give some hints of the scale of the contrail under investigation?
I looked up other pictures, and if I am not mistaken, most contrails are slightly angled outward. Is this by design or due to expansion and convection of the gases involved? Can someone confirm this?
Edit:
I did observe the described contrail-shape, with quite some consistency, over varying angles of ascension and declination.
I have never heard of conspiracies regarding contrails (e.g. chemtrails) until after writing this post. It shows that good, hard explanations for atmospheric contrail phenomena are in demand.
Given the right circumstances (large distances) some apparent contrail convergence can be attributed to perspective alone.
Links:
Contrail Science (impressive resource, which I just peeked at so far)
Contrail gallery
Contrails - clouds (ppt)
http://www.es.lancs.ac.uk/hazelrigg/amy/Introduction/The%20Science.htm
http://www.epa.gov/oms/aviation.htm
http://www-pm.larc.nasa.gov/newcontrail.html
http://www.epa.gov/oar/oap.html
http://www.history.com/shows.do?episodeId=464914&action=detail
http://mynasadata.larc.nasa.gov/P4.html
Answer: The vapor pattern that we can see in these pictures is due to the instability of trailing-edge vortices. Here are some pictures showing such an instability (in French; for English see also this JFM article). Different planes will have different trailing-edge vortices, resulting in different observable vapor patterns. On Wikipedia the trailing-edge vortices are referred to as wingtip vortices. The geometry of the plane creates trailing-edge vortices, and this group of vortices is unstable, causing the rhomboid parallelogram. | {
"domain": "physics.stackexchange",
"id": 4817,
"tags": "fluid-dynamics, air, aircraft, evaporation"
} |
Stopwatch class | Question: I am learning C# and I have an exercise:
Design a class called Stopwatch. The job of this class is to simulate
a stopwatch. It should provide two methods: Start and Stop. We call
the start method first, and the stop method next. Then we ask the
stopwatch about the duration between start and stop. Duration should
be a value in TimeSpan. Display the duration on the console. We should
also be able to use a stopwatch multiple times. So we may start and
stop it and then start and stop it again. Make sure the duration value
each time is calculated properly. We should not be able to start a
stopwatch twice in a row (because that may overwrite the initial start
time). So the class should throw an InvalidOperationException if it's
started twice.
My code works fine, but please review if I could do something better. Especially I do not like this part with the switch case block but I dont have any idea how I could improve it.
Stopwatch class:
using System;
namespace Stopwatch
{
public class Stopwatch
{
private DateTime _startDate;
private DateTime _endDate;
private bool _isRunning;
public void Start()
{
if (_isRunning)
throw new InvalidOperationException("Stopwatch is already running");
_startDate = DateTime.Now;
_isRunning = true;
}
public void Stop()
{
if (!_isRunning)
throw new InvalidOperationException("Stopwatch is not running");
_endDate = DateTime.Now;
_isRunning = false;
}
public TimeSpan GetDuration()
{
return _endDate - _startDate;
}
}
}
Program class:
using System;
namespace Stopwatch
{
class Program
{
static void Main(string[] args)
{
var stopWatch = new Stopwatch();
while (true)
{
Console.WriteLine("Enter 'start' to start Stopwatch\nEnter 'stop' to end Stopwatch\nEnter any key to exit:\n");
var input = Console.ReadLine().ToLower();
if (input == "start" || input == "stop")
UseStopwatch(stopWatch, input);
else
return;
}
}
static void UseStopwatch(Stopwatch stopWatch, string command)
{
switch (command)
{
case "start":
try
{ stopWatch.Start(); }
catch (InvalidOperationException)
{ Console.WriteLine("stopWatch is already started\n"); }
break;
case "stop":
try
{
stopWatch.Stop();
Console.WriteLine("Duration: {0}\n", stopWatch.GetDuration());
}
catch (InvalidOperationException)
{ Console.WriteLine("stopWatch is not started\n"); }
break;
default:
break;
}
}
}
}
Answer: First off, the obligatory recommendation that you use System.Diagnostics.Stopwatch for this purpose, and not DateTime.Now (or even UtcNow, which won't go wrong if you happen to enter daylight savings while the program is running). Using Diagnostics.Stopwatch is more precise than the methods in DateTime, and it provides an Elapsed property which returns a TimeSpan.
GetDuration() is a bit odd, because if you call it before Stop(), then it will return nonsense. It should either throw, or perhaps compute the 'current' ellapsed time if the stopwatch is running. Either way, this should be documented (see below).
I'd also use a property for GetDuration (as Heslacher has suggested) unless it is going to throw (i.e. when Start() has been called but not Stop()), in which case that might ruffle a few feathers.
As usual, I'll recommend you add some inline-documentation (///) to these methods, which should explain when and why exceptions will be thrown (e.g. explain what calling Start() twice in a row does: from the name, I would be unsure whether it throws, does nothing, or restarts the timer).
/// <summary>
/// Starts the Stopwatch, resetting the elapsed time.
/// Throws an InvalidOperationException if the Stopwatch is already running.
/// </summary>
public void Start()
{
// snip
}
These don't take long to write, and can improve the API massively. | {
"domain": "codereview.stackexchange",
"id": 29824,
"tags": "c#, beginner, reinventing-the-wheel, timer"
} |
What is the difference between a magnon and a spinon? | Question: For a long time, I thought the terms "magnon" and "spinon" were equivalent, describing the collective spin excitation in a system. Lately, I have seen remarks in the literature that they indeed do differ, however I don't know in what sense. Can somebody, please, explain, how exactly do these technical terms differ?
Answer: What I can give you is the difference in spin chains. The two, magnons and spinons, are excitations around different background states. The magnons are excitations around the ferromagnetic vacuum and the spinons are excitations around the antiferromagnetic vacuum. Because of that, a lot of properties differ between these two: the magnons are bosons and the spinons are fermions; magnons have spin 1 and spinons 1/2. The dispersion relation of a magnon in a spin chain is:
$$
\varepsilon (p)=4J\sin^2(\frac{p}{2})
$$
and the dispersion relation of the spinon is:
$$
\varepsilon (p)= \frac {\pi}{2} \cos (p)
$$
The S-matrices are different as well. All this follows because excitations are a state-dependent concept. So yes, these two are collective excitations in a "sea" of spins, but they are excitations around different states. | {
"domain": "physics.stackexchange",
"id": 37448,
"tags": "electromagnetism, terminology, quasiparticles"
} |
Hodgkin-Huxley model for a single neuron | Question: I am viewing (through edX) an introductory course on computational neuroscience. In the second lecture, the Hodgkin-Huxley model is considered. I am going over some of the questions and have encountered a problem with one of them (a picture of the exercise is attached below). I have a strong background in mathematics, but my background in biology is still very poor. I am having a hard time connecting the biology to the math. Can anyone help with this question:
Thank you!!
Answer: Since you didn't get the right answer to #6, let's review the basis for this model.
The basis of the model is precisely the statement in #6: an individual channel has a defined conductance when it is conducting, and if all of them are conducting then the total conductance is the single-channel conductance times the number of channels (conductances in parallel add).
Confusion comes from the fact that, in the terminology used here, an "open" channel is not necessarily conducting. The probability that a single channel is conducting is the product $r^{n_1}s^{n_2}$.
$r^{n_1}$ describes the "activation" process, while $s^{n_2}$ describes the "inactivation" process. These are two separate processes, with the activation process now known to be driven by voltage-dependent changes in the configuration of transmembrane domains of the channel protein, while the inactivation process is a charged part of the protein inside the cell that more slowly moves to block the channel when the cell is depolarized. See this summary by Clay Armstrong, who contributed much to our understanding of these processes.
So in the terminology here, a channel can be "open" (in terms of the $r$ process) but still "inactivated" (through the $s$ process) and thus be non-conducting. It can also be "closed" (in terms of the $r$ process) and "inactivated", in which case it is also non-conducting.
The way this is modeled is that $s=1$ means no inactivation, while $s=0$ means fully inactivated. So that covers question 3. This is in contrast to the activation process, in which $r=1$ is full activation.
Question 4 might be a bit misleading as there may be a hidden assumption that you are starting from being at potential $u_0$ for a long-enough time and then increasing $u$ suddenly (which is how action potentials normally are generated). In that case your understanding of differential equations should make it clear that $\tau_r$ and $\tau_s$ are the time constants for the responses of the $r$ and $s$ processes to a change in $u$. The process with a shorter time constant responds more quickly, so activation precedes inactivation (on the average). The different powers associated with the $r$ and $s$ processes ($n_1,n_2$) might confuse things a bit, but try running this model and you should be convinced.
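Along the lines of "try running this model", here is a minimal sketch of the two first-order gate equations after a depolarizing voltage step. The time constants, steady-state values, and exponents are illustrative choices, not fitted Hodgkin-Huxley parameters:

```python
# Minimal sketch (illustrative constants, not fitted HH parameters) of the
# two gate equations after a depolarizing voltage step, using forward Euler:
#   dr/dt = (r_inf - r) / tau_r,   ds/dt = (s_inf - s) / tau_s
tau_r, tau_s = 1.0, 5.0   # activation responds faster than inactivation
r_inf, s_inf = 1.0, 0.0   # gate targets after the step
r, s = 0.0, 1.0           # resting values before the step
dt, t = 0.01, 0.0
conductance_peak, t_peak = 0.0, 0.0
while t < 30.0:
    r += dt * (r_inf - r) / tau_r
    s += dt * (s_inf - s) / tau_s
    g = (r ** 3) * s      # e.g. n1 = 3, n2 = 1, as for the HH Na+ channel
    if g > conductance_peak:
        conductance_peak, t_peak = g, t
    t += dt
# Because tau_r < tau_s, the conductance transiently rises (activation)
# and then decays (inactivation), peaking at an intermediate time.
print(t_peak, conductance_peak)
```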
This might seem convoluted at first, but Hodgkin and Huxley worked out this model based solely on their electrophysiological studies (with help from Bernard Katz). It's amazing how this model's postulates of multiple single channels and separate activation and inactivation processes have been so well verified at the molecular level over the past decades. | {
"domain": "biology.stackexchange",
"id": 4212,
"tags": "neuroscience, homework, neurophysiology, computational-model"
} |
Does moveIt monitor the planned path for validity? | Question:
Hi everyone,
I have a setup with a robot-arm and 3D-Sensors observing the workspace. I want the arm to execute some Pick & Place scenarios, while I move through the workspace.
Now, while planning, the octomap gets respected and the created plans avoid any collisions.
But my question is, would moveIt check if an obstacle moved into the planned path, therefore making it invalid?
Here I read about the StateValidity-Service, but I am wondering if I have to implement it myself or if it happens automatically?
Thanks in advance,
Rabe
Originally posted by Rabe on ROS Answers with karma: 683 on 2014-09-17
Post score: 0
Answer:
After setting everything up properly, MoveIt calls the necessary Actions to preempt any executions whenever an obstacle moves into the space.
If replanning is activated, it'll wait 2 seconds and then start planning again for the same target.
Originally posted by Rabe with karma: 683 on 2014-09-19
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 19426,
"tags": "moveit"
} |
How can we construct an algorithm to evaluate the logarithm of a real positive number | Question: How can we construct an algorithm to evaluate the logarithm of a real positive number bit by bit in the base 2 system?
I have first expressed any number as $x\cdot2^n$, where $x \in [1,2]$, by shifting the binary point. But after that, I am not sure what to do.
Answer: Consider $x$ with $1 \le x < 2$. We need to find $f = \log x$ (log base is 2). Denote the binary form of $f$ as $(0.b_1 b_2 b_3 \ldots)$. Each $b_i$ is a bit.
$f = \log x \quad\quad (1)$
$\Leftrightarrow$
$2f = \log x^2$
$\Leftrightarrow$
$(b_1. b_2 b_3 \ldots) = \log x^2$
So, $x^2 \ge 2$ implies $b_1$ must be $1$, and $x^2 < 2$ implies $b_1$ must be $0$. Thus, $b_1$ can be deduced just by comparing $x^2$ with $2$.
After finding $b_1$, subtract it from both sides of above to obtain equation similar to $(1)$. Repeat the process to deduce bit $b_2$. Repeat this to deduce as many bits as required.
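The procedure above translates directly into code. A sketch in Python (the function name and the default of 16 fractional bits are arbitrary):

```python
# Sketch of the bit-by-bit procedure above, assuming 1 <= x < 2;
# the function name and the default of 16 fractional bits are arbitrary.
def log2_bits(x, bits=16):
    out = 0.0
    weight = 0.5           # place value of the current bit b_i
    for _ in range(bits):
        x = x * x          # doubling f corresponds to squaring x
        if x >= 2.0:       # b_i = 1: subtract the bit, i.e. halve x^2
            out += weight
            x /= 2.0
        weight /= 2.0
    return out

print(log2_bits(1.5))      # ~0.58496 (true value log2(1.5) = 0.584962...)
```

Each iteration squares the remaining $x$ and compares against 2, exactly as in the derivation; the truncation error after `bits` iterations is below $2^{-\text{bits}}$.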
You can read further about this algorithm in this article (written by me). | {
"domain": "cs.stackexchange",
"id": 19189,
"tags": "algorithm-analysis, logic"
} |
dynamixel wheel mode | Question:
I am trying to put my dynamixel mx-64 servo motor into wheel mode like it says in step 1 of this tutorial http://wiki.ros.org/dynamixel_controllers/Tutorials/Creating%20a%20joint%20torque%20controller can anyone explain to me how to put the motor into wheel mode on ubuntu?
Originally posted by rmoncrief on ROS Answers with karma: 21 on 2015-06-16
Post score: 0
Original comments
Comment by zweistein on 2016-02-11:
Did u managed to apply torque control?
Answer:
Hi,
I had the same problem, and I realised that putting the dynamixel in wheel mode is the same as setting CW angle and CCW angle to 0:
rosrun dynamixel_driver set_servo_config.py 1 --cw-angle-limit=0 --ccw-angle-limit=0
"1" stands for your motor ID. If you get the following message, then your motor is set to wheel mode:
Configuring Dynamixel motor with ID 1
Setting CW angle limit to 0
Setting CCW angle limit to 0
done
However, I found that there are some issues in this mode for AX-12A: for the range of angle values that are not available in standard mode, the velocity and load values are all wrong. Here are some values I get:
current load: 0.03515625
current vel = 0.79042471074
current load: 0.03515625
current vel = 0.79042471074
current load: 0.10546875
current vel = 0.37196456976
current load: 0.10546875
current vel = 0.37196456976
current load: 0.10546875
current vel = 0.37196456976
current load: 0.16796875
current vel = 0.0
current load: 0.16796875
current vel = 0.0
current load: 0.82421875
current vel = -27.7229843399
current load: 0.82421875
current vel = -27.7229843399
current load: 0.82421875
current vel = -27.7229843399
current load: 0.50390625
current vel = -7.96236657142
current load: 0.50390625
current vel = -7.96236657142
current load: 0.50390625
current vel = -7.96236657142
current load: 0.16796875
current vel = 0.0
current load: 0.16796875
current vel = 0.0
current load: 0.14453125
current vel = 0.13948671366
current load: 0.14453125
current vel = 0.13948671366
current load: 0.14453125
current vel = 0.13948671366
current load: 0.05078125
current vel = 0.6974335683
current load: 0.05078125
current vel = 0.6974335683
The range [0.03 0.05] for the load and [0.69 0.79] for the velocity are correct, but then they take unexpected values, such as 0.82 for the load and -27 for the velocity. Did anyone encounter such an issue? I think the problem might be in the joint_position_controller.py file in the dynamixel_controller package, when state is defined with the function filter:
def process_motor_states(self, state_list):
if self.running:
state = filter(lambda state: state.id == self.motor_id, state_list.motor_states)
if state:
state = state[0]
self.joint_state.motor_temps = [state.temperature]
self.joint_state.goal_pos = self.raw_to_rad(state.goal, self.initial_position_raw, self.flipped, self.RADIANS_PER_ENCODER_TICK)
self.joint_state.current_pos = self.raw_to_rad(state.position, self.initial_position_raw, self.flipped, self.RADIANS_PER_ENCODER_TICK)
self.joint_state.error = state.error * self.RADIANS_PER_ENCODER_TICK
self.joint_state.velocity = state.speed * self.VELOCITY_PER_TICK
self.joint_state.load = state.load
self.joint_state.is_moving = state.moving
self.joint_state.header.stamp = rospy.Time.from_sec(state.timestamp)
self.joint_state_pub.publish(self.joint_state)
I just cannot find any description of this filter function to look into it.
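For what it's worth, `filter` here is the Python builtin, not something defined by the package: `filter(pred, seq)` keeps the elements for which `pred` returns true (in Python 2 it returns a list directly, which is why `state[0]` works). A toy illustration, with a stand-in class playing the role of the real motor-state message:

```python
# filter(pred, seq) keeps elements where pred is true. Python 2 returns a
# list directly; Python 3 returns an iterator, hence list() below.
# MotorState is a stand-in for the real dynamixel motor state message.
class MotorState:
    def __init__(self, id, position):
        self.id = id
        self.position = position

motor_states = [MotorState(1, 512), MotorState(2, 300), MotorState(3, 700)]
motor_id = 2
state = list(filter(lambda s: s.id == motor_id, motor_states))
if state:                        # non-empty list -> a matching state found
    state = state[0]             # same pattern as in process_motor_states()
print(state.id, state.position)  # 2 300
```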
If anyone has an answer for this, it would be very helpful.
Originally posted by roxane with karma: 26 on 2016-01-19
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by zweistein on 2016-02-11:
I have the motors mounted in a robot arm, how can I check the initial values of CW angle and CCW angle, so I can set them to (0,0) and then back to the original setup? | {
"domain": "robotics.stackexchange",
"id": 21939,
"tags": "ros"
} |
Expectation Value of Unitary Time Evolution Operator in Quantum Mechanics | Question: Does the expression $\langle \Psi_i|U(t)|\Psi_i\rangle$ have a specific meaning, where $U(T)$ is the unitary time evolution operator of $\Psi$, and $\Psi_i$ is the initial state of $\Psi$?
If so, could you please explain that meaning and also please provide any clear references to this that I could read?
Answer: To talk about $\langle \psi_i \vert U(t) \vert \psi_i \rangle$ as the "expectation value of the time evolution operator" is probably the least insightful way to talk about this quantity. Since $U(t) = \exp(-\mathrm{i}Ht)$ for time-independent Hamiltonians, if you want to look at expectation values, you could as well look at that of $H$ directly. Note that, in particular, the time evolution operator is not self-adjoint, since $U(t)^\dagger = U(-t)\neq U(t)$, so it is not an observable, and speaking of its "expectation value" is physically meaningless.
The physical meaning is clearer when you realize that $\lvert \langle \psi_i \vert U(t)\vert \psi_i\rangle\rvert^2$ is the probability (assuming the states are normalized) to find a system that was at time $t_0 = 0$ in the state $\lvert \psi_i \rangle$ again in the same state after the time $t$ has passed, hence $\langle \psi_i \vert U(t) \vert \psi_i \rangle$ is the "transition amplitude for a state into itself". This is probably most interesting when the $\lvert \psi_i\rangle$ are eigenstates of an operator that is being measured.
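As a numerical illustration (my own sketch, not part of the original question): for a two-level system with $H = \sigma_x$ and initial state $\lvert 0\rangle$, the survival probability $\lvert\langle\psi_i\vert U(t)\vert\psi_i\rangle\rvert^2$ works out to $\cos^2 t$ (with $\hbar = 1$):

```python
import numpy as np

def survival_amplitude(H, psi, t):
    # Build U(t) = exp(-i H t) by diagonalizing H = V diag(E) V^dagger.
    E, V = np.linalg.eigh(H)
    U = V @ np.diag(np.exp(-1j * E * t)) @ V.conj().T
    return psi.conj() @ U @ psi

sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])
psi0 = np.array([1.0, 0.0])  # not an eigenstate of H, so it evolves

# |<psi|U(t)|psi>|^2 = cos(t)^2 for this choice of H and psi
print(abs(survival_amplitude(sigma_x, psi0, 0.0)) ** 2)        # 1.0
print(abs(survival_amplitude(sigma_x, psi0, np.pi / 2)) ** 2)  # ~0.0
```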
"domain": "physics.stackexchange",
"id": 28302,
"tags": "quantum-mechanics, probability, time-evolution, unitarity, born-rule"
} |
Packages dependency and includes | Question:
Hi all,
I have a package1 which generates services via a Type1.srv file. These generated services are located in <catkin_ws>/devel/include/package1/Type1.h. This all works fine.
Now when I want a package2 to also use the services generated in package1 I follow the steps described in here.
Note that in my code I do not actually add package1 to find_package, because package1 is not compiled yet when find_package is called in package2.
Now when building I get the following error:
<catkin_ws>/src/package2/src/MyFile.cpp:2:32: fatal error: package1/Type1.h: No such file or directory
When I manually add include path <catkin_ws>/devel/include to include_directories, it compiles and links fine, but it is obviously very dirty...
Anyone knows what I am missing here? How can I let my system know about the include path to devel?
-------------------EDIT 1---------------
Note that when I add package1 to find_package(catkin REQUIRED COMPONENTS ... package1) I get the following error:
CMake Error at /opt/ros/indigo/share/catkin/cmake/catkinConfig.cmake:83 (find_package): Could not find a package configuration file provided by "package1" with any of the following names:
package1Config.cmake
package1-config.cmake
Add the installation prefix of "package1" to CMAKE_PREFIX_PATH or set "package1_DIR" to a directory containing one of the above files. If "package1" provides a separate development package or SDK, be sure it has been installed.
-------------------EDIT 2---------------
File package1/package.xml does exist and has the following name
<package format="2">
<name>package1</name>
<version>0.1.0</version>
package1/CMakeLists.txt contains
project(package1)
...
catkin_package(
LIBRARIES ${PROJECT_NAME}
CATKIN_DEPENDS rtt_ros message_runtime std_msgs
)
and package2/CMakeLists.txt contains
find_package(catkin REQUIRED COMPONENTS roscpp std_msgs package1)
This causes the error described above.
-------------------EDIT 3---------------
I have created a MWE which can be found in this link. Both packages are in meta-package le_painters_companion, package1 corresponds to lc_control and package2 to lc_toolkitLink.
Does this MWE shed some more light on the problem?
Kind regards,
Antoine.
Originally posted by arennuit on ROS Answers with karma: 955 on 2016-09-13
Post score: 0
Answer:
Note that in my code I do not actually add package1 to find_package, because package1 is not compiled yet when find_package is called in package2.
Now when building I get the following error:
<catkin_ws>/src/package2/src/MyFile.cpp:2:32: fatal error: package1/Type1.h: No such file or directory
I thought that the path to <catkin_ws>/devel/include/package1/Type1.h would be provided as an include by the dependency set in package2's package.xml file, no?
Indeed: no.
I thought this was actually explained in one of your earlier questions (CMakeLists.txt vs package.xml), but just to reiterate: package.xml (where you put the build_depend) and CMakeLists.txt (find_package(..) and friends) are two separate things.
The former is used by (meta-)package managers (such as rosdep, but ultimately apt, dnf and others) to understand how your package relates to all others (in terms of dependencies, ie: which pkgs should be on the system, separated out into two sets: 'when running' and 'when compiling'), while the latter is a codification of the actual process to follow when converting your sources into binary artefacts (ie: compiling into objects and executable binaries).
In other words: how would the compiler know about package1 if you don't add it to package2's INCLUDE_PATH etc? That is CMakeLists.txt's task (via find_package(..)), and you deliberately left that out of there.
Compilation order is (partially) determined by the contents of the find_package(..) calls, so:
Note that in my code I do not actually add package1 to find_package, because package1 is not compiled yet when find_package is called in package2.
is a bit of a strange statement, as I hope you understand now.
Edit:
I thought that the path to <catkin_ws>/devel/include/package1/Type1.h would be provided as an include by the dependency set in catkin_package, via the CATKIN_DEPENDS keyword, no?
Not if you don't find_package(catkin COMPONENTS .. package1 ..) in package2's CMakeLists.txt.
Though when I do use catkin_package / CATKIN_DEPENDS to indicate the dependency on package1, I have no more luck.
No, because listing package1 under CATKIN_DEPENDS only causes the catkin_package(..) call to configure the build system to generate a PkgConfig (.pc) file for package2 that states the dependency on package1. It does not update the compilation process with any new information regarding the location of package1.
(well, actually it will try to, as the PkgConfig file will need the same set of information, but since you don't find_package(.. package1 ..) anywhere, the variables needed will most likely be empty).
CMakeLists.txt in package2 does not explicitly import package1, so even if package1 exports something, package2 won't know about it.
Additionally, I would expect ${catkin_INCLUDE_DIRS} to include the path to <catkin_ws>/devel/include/package1/Type1.h, but ${catkin_INCLUDE_DIRS} only contains the path to the ROS core installation.
Which makes sense, as you don't find_package(catkin COMPONENTS .. package1 ..) in package2's CMakeLists.txt.
Without an explicit find_package(.. package1 ..) somewhere in the CMakeLists.txt of package2, nothing you do in package1 will be picked up by any of the build activities undertaken by the build tool that builds package2.
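To make that concrete, a minimal sketch of what package2's CMakeLists.txt would need (target and file names are illustrative):

```cmake
cmake_minimum_required(VERSION 2.8.3)
project(package2)

# find_package(..) is what actually imports package1's include dirs
# (including <catkin_ws>/devel/include, where Type1.h is generated)
find_package(catkin REQUIRED COMPONENTS roscpp std_msgs package1)

catkin_package(CATKIN_DEPENDS roscpp std_msgs package1)

include_directories(${catkin_INCLUDE_DIRS})

add_executable(my_node src/MyFile.cpp)
# make sure package1's service headers are generated before MyFile.cpp compiles
add_dependencies(my_node ${catkin_EXPORTED_TARGETS})
target_link_libraries(my_node ${catkin_LIBRARIES})
```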
Edit2:
Note that when I add package1 to find_package(catkin REQUIRED COMPONENTS ... package1) I get the following error:
CMake Error at /opt/ros/indigo/share/catkin/cmake/catkinConfig.cmake:83 (find_package): Could not find a package configuration file provided by "package1" with any of the following names:
package1Config.cmake
package1-config.cmake
Add the installation prefix of "package1" to CMAKE_PREFIX_PATH or set "package1_DIR" to a directory containing one of the above files. If "package1" provides a separate development package or SDK, be sure it has been installed.
(after all the edits, it's no longer visible, but I feel as if the original question was a good example of an xy-problem)
The first thing to check is that a file package1/package.xml actually exists. If that is the case, check the package name. Make sure it corresponds to the one you add to find_package(..) (in package2) and to catkin_package() (in package1).
Edit3:
I have created a MWE which can be found in this link. Both packages are in meta-package le_painters_companion, package1 corresponds to lc_control and package2 to lc_toolkitLink.
If I understand you correctly, you have both package1 and package2 in the directory of the lc_control metapackage? I'm not sure that is supposed to work. Could you please try flattening your workspace? So make package1 and package2 siblings of your metapackage.
No, this is not correct. I was confused by you using the name meta-package for a directory containing other packages. I typically reserve that name for actual metapackages.
Does this MWE shed some more light on the problem?
No, not really. It wasn't an actual MWE, as it is still full of project-specific CMake statements and paths.
It's hard for me to test, but by commenting out all the orocos_component(..) calls (and related lines), I can get a sample workspace containing your two packages to build just fine. With lc_control listed in both find_package(catkin .. COMPONENTS .. lc_control) and in catkin_package(.. CATKIN_DEPENDS .. lc_control).
The CMakeLists.txt of both packages are actually quite convoluted. Perhaps it would help if you stripped them completely (comment almost everything), have just the bare Catkin skeleton in there, then try to build your workspace. See if lc_toolkitLink can find lc_control. Then start re-enabling the orocos and other bits. I suspect that at some point things will break.
Additionally: I assume you've already removed your build and devel directories and rebuild everything from scratch?
Originally posted by gvdhoorn with karma: 86574 on 2016-09-13
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by arennuit on 2016-09-13:
You are right, my comment on package.xml is useless. I have updated the post accordingly and added an update to provide more details on why I did not add package1 to find_package. Although my problem is not solved yet, it looks like you are pointing towards the right direction.
Comment by arennuit on 2016-09-13:
I believe you are right about the xy problem (I have tried to narrow things down now, so the post is useful for later use to others). I have added a 2nd edit to give some more information.
Comment by gvdhoorn on 2016-09-13:
Something is not right here. If you can, try and create an MWE, or provide access to both packages somehow. We don't need the sources (they can be empty), but it would help to see the complete package.xml and CMakeLists.txt of both packages, and the directory layout as well.
Comment by arennuit on 2016-09-14:
Hello gvdhoorn, I believe your edit3 is wrong: package1 and package2 are actually dummy names used to make the post as generic as possible. Their real name (in the MWE) are lc_control (for package1 and lc_toolkitLink (for package2).
Comment by arennuit on 2016-09-14:
... and it turns out that lc_control and lc_toolkitLink are already both in a metapackage (called le_painters_companion).
Comment by arennuit on 2016-09-19:
Hello gvdhoorn, using the strip-down technique you suggested and hours of analysis I have been able to solve the problem described in my post. There are still problems with my build which are very closely linked, but I believe they deserve another post. Thanks for your help.
Comment by arennuit on 2016-09-19:
Note that the problem arouse because, in package1, the name of the produced library was not the name of the project package1, hence I was not allowed to use LIBRARY ${PROJECT_NAME} in catkin_package().
Comment by gvdhoorn on 2016-09-19:
[..] and hours of analysis [..]
I'm sorry to hear that.
I have been able to solve the problem described in my post
Glad to hear that you got to the bottom of it though. | {
"domain": "robotics.stackexchange",
"id": 25754,
"tags": "ros, catkin, build, cmake"
} |
Pandas datastructure | Question: I'm trying to analyze database performance over a period of time and detect anomalies. The database server consists of many threads that perform different actions. I run a query to determine the number of active threads and the action they are performing.
A sample dataset is below:
My Objective:
I need to analyze the data over a period of time, determine what is normal at a given timestamp, and detect any abnormalities. For example, on Monday at 10 am there are 10 active threads, of which 2 threads have the action 'Preparestatement' and 10 threads have the action 'Readtable'. Any other thread action is potentially an anomaly.
As you can see from the image above, the actions (executestatement, Fetchcursor, and so on) can differ at each timestamp. I want to understand if this pandas dataframe structure is the right choice to meet my objective.
Answer: Not sure if I understood your question correctly, but to do data analysis, you would need to have a proper DF.
You have something like:
time A B C
1 3 4 6
time B C D
2 9 4 6
You need one table header, so remove any other header text in the rows > 1.
Also (of course) the content of each row must match the column it belongs to.
So the DF above would change to:
time A B C D
1 3 4 6 0
2 0 9 4 6
Note the "zero" entries: time=1 -> D=0 and time=2 -> A=0. Here the "zero" is chosen to fill the gaps (zero activity in this case).
This type of data representation is what you usually use for any kind of modelling. | {
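In pandas this column alignment is essentially free. A sketch with the toy numbers above:

```python
import pandas as pd

# Two snapshots whose sets of thread actions differ, as in the example.
rows = [
    {"time": 1, "A": 3, "B": 4, "C": 6},
    {"time": 2, "B": 9, "C": 4, "D": 6},
]

# pandas aligns the differing columns automatically; actions missing at a
# given timestamp come out as NaN, which we fill with 0 (zero activity).
df = pd.DataFrame(rows).set_index("time").fillna(0).astype(int)
print(df)
```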
"domain": "datascience.stackexchange",
"id": 5663,
"tags": "pandas, anomaly-detection"
} |
Model the predictive relationship between images | Question: Hello fellow machine learners,
We have numerous pairs of 64 x 64 (or other dimensionality) images (maps). In each pair, the first image demonstrates a physical parameter, e.g. wind speed, at each pixel; the second shows another physical or financial parameter, e.g. temperature or insurance loss, at each pixel. We want to model the predictive relationship from the first image to the second. Our previous models typically consisted of characterizing the field using domain-specific knowledge for dimension reduction. Now my colleague wants to explore the possibility of using statistical learning methods alone.
I did a shallow research and found this could fit in a multi-output regression problem. Online forums suggested papers like Melki and Cano's "Multi-Target Support Vector Regression Via Correlation Regressor Chains", and many Python libraries. However I am a little concerned about the extreme dimensionality, i.e. 4096, and the potential failure of exploiting the spatial structure of our problem if we go down that road.
I know CNNs are suitable for image recognition tasks as such. Has anyone encountered a similar problem and attacked it with neural nets? Any suggestion would be much appreciated!
Thank you!
Answer: To answer my own question. A naïve approach is to train a convolutional neural net with an output layer of 4096 neurons $x_1,x_2,\ldots, x_{4096}$. Assume values in the response image are $y_1,y_2,\ldots,y_{4096}$. Let the loss function be $L=\sum_{i=1}^{4096}(x_i-y_i)^2$. | {
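A sketch of that objective and its gradient (the error signal that would be backpropagated through the CNN); the numbers are illustrative:

```python
import numpy as np

def sse_loss(x, y):
    # L = sum_i (x_i - y_i)^2 over the 4096 flattened output pixels
    return np.sum((x - y) ** 2)

def sse_grad(x, y):
    # dL/dx_i = 2 (x_i - y_i): fed back into the network during training
    return 2.0 * (x - y)

x = np.zeros(4096)  # stand-in for the 4096-neuron output layer
y = np.ones(4096)   # flattened 64 x 64 response image
print(sse_loss(x, y))  # 4096.0
```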
"domain": "datascience.stackexchange",
"id": 9003,
"tags": "machine-learning, neural-network"
} |
Extracting rotational energy from a black hole: why does the direction in which we drop stuff in matter? | Question: Imagine an advanced civilization living around a black hole, extracting energy from it for their needs. Two ways to do this:
Drop matter in and use the light it emits as it heats up.
Extract the rotational energy of the black hole.
For the second mechanism, if we drop in the matter in a way that it increases the angular momentum ("in the direction of rotation"), we'll have more rotational energy to extract. On the other hand, if we drop it in a way it decreases the angular momentum, we'll have less rotational energy to extract. It puzzles me that this decision of how we throw the matter in seems to make a difference in the energy we have available.
If I think of a regular massive object, the paradox is resolved by the fact that throwing matter in a way that it slows down the rotation of the massive object will generate more friction and heat. Is this then true for a black hole as well? Does throwing matter in such that it reduces the angular momentum create more heat than throwing it in such that it increases it?
Answer: The process you are referring to is called the Penrose Process, which is essentially a calculation on the Kerr Solution (a rotating black hole in the context of general relativity). There are more subtle technicalities to this problem than you think which I'm going to explain in detail! Actually, the short answer is that if you want to extract energy from such a solution, then the essential condition for the object falling inside the black hole would be to decrease the black hole's angular momentum! I am going to explain the argument now, but if you are not familiar with concepts such as Killing Vector, Metric, Asymptotic Behavior, etc. I suggest that you read useful books such as Spacetime and Geometry: An Introduction to General Relativity by Sean M. Carroll, or any other book on this matter.
In the case of the Kerr metric, the norm of $ \xi_{t}=\partial_{t} $, which is a Killing vector for this solution, is equal to zero on a surface that is different from the outer horizon! We call this surface the Ergosphere.
$$|\partial_{t}|^{2}=g_{tt}=-\frac{\Delta-a^{2}\sin^{2}\theta}{\Sigma}=0$$
As you can see from this equation, this surface lies somewhere different from the horizon. As a matter of fact, there is a region between the outer horizon and this surface where $$g_{tt}>0 , r>r_{+}. $$ We can find the exact formula for this surface $$r^{2}-2Mr+a^{2}\cos^{2}\theta=0$$ and $$r=M\pm\sqrt{M^{2}-a^{2}\cos^{2}\theta}$$ We call the region between the outer horizon and the Ergosphere the Ergoregion. In this region, $ \partial_{t} $ is spacelike as seen by the observer sitting at infinity. The asymptotic observer sees that this Killing vector is timelike outside this region, null on this surface and spacelike inside the Ergoregion. As a matter of fact, the Ergosphere is the infinite-redshift surface, which is not on the horizon. This region leads to amazing consequences. Since $ |\partial_{t}|^{2}>0 $ inside the ergo-region, the energy can be negative from a distant observer's point of view. Now consider an object with energy $ E>0 $ falling into the ergo-sphere. Due to the fact that the ergo-region is not a trapped surface, this object can still escape from it and come back outside. Now suppose that the object explodes (or decays, or undergoes any other process that can split it into two pieces) before coming back, and turns into two pieces: one falling toward the horizon and the other coming back. From the asymptotic observer's point of view, the first piece has energy $ E_{1}<0 $ because it is still on the other side of the ergo-sphere, and the second piece has energy $ E_{2}>0 $ because it came back from the region and is now outside. Now let's write the conservation law for this system $$E=E_{1}+E_{2}.$$ Since $ E_{1} $ is negative, $ E_{2}>E $, which means that we have extracted energy from the black hole. If we take a more careful look at this problem, we notice that we should make sure that the particle with energy $ E_{1} $ falls inside the black hole and does not come out.
These are the definitions of the energies $$E=-P.\xi_{t}$$ $$E_{1}=-P_{1}.\xi_{t}$$ $$E_{2}=-P_{2}.\xi_{t}$$ Where $ P=P_{1}+P_{2} $ and $ E,E_{2}>0 $. Now, we should make sure that object number 1 falls inside the horizon. The condition satisfying this is $$-P_{1}.\xi\geq0 ; \xi=\partial_{t}+\Omega_{+}\partial_{\phi}$$ Where $ \xi $ is the null Killing vector on the horizon. We also know that $ -P_{1}.\xi_{t}<0 $, which means that $ E_{1}-\Omega_{+}J_{1}\geq0 $ and because $ E_{1}<0$ then $ J_{1}<0 $. This means that if we start with a black hole with mass $ M $ and angular momentum $ J $, we would have a black hole with mass $ M+\delta M $ and angular momentum $ J+\delta J $ after the object falls inside it. Therefore $$\delta M-\Omega_{+}\delta J\geq0$$ where $$\Omega_{+}=\frac{a}{r_{+}^{2}+a^{2}}=\frac{\frac{J}{M}}{(M+\sqrt{M^{2}-\frac{J^{2}}{M^{2}}})^{2}+\frac{J^{2}}{M^{2}}}\Rightarrow \delta(M^{2}+\sqrt{M^{4}-J^{2}})\geq0$$ So as long as this last relation holds, we can extract energy from the black hole. In conclusion, we can extract energy from a Kerr black hole at the cost of decreasing its angular momentum. So, as long as its angular momentum is non-zero, we can extract energy from it through a process like this. Note that just because an observer sitting at infinity says that there is no time-like Killing vector in the ergo-region does not mean that the geometry lacks a time-like Killing vector there.
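A small numerical footnote (standard, though not spelled out above): the non-decreasing quantity corresponds to the irreducible mass, $M_{\mathrm{irr}}^{2}=\tfrac{1}{2}(M^{2}+\sqrt{M^{4}-J^{2}})$, so at most $M-M_{\mathrm{irr}}$ can ever be extracted:

```python
import numpy as np

# Irreducible mass of a Kerr black hole (units G = c = 1):
# M_irr^2 = (M^2 + sqrt(M^4 - J^2)) / 2
def irreducible_mass(M, J):
    return np.sqrt((M**2 + np.sqrt(M**4 - J**2)) / 2.0)

M = 1.0
print(irreducible_mass(M, 0.0))  # Schwarzschild: M_irr = M, nothing extractable

# Extremal Kerr, J = M^2: up to 1 - 1/sqrt(2), roughly 29% of M, is extractable
print(1.0 - irreducible_mass(M, M**2) / M)
```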
"domain": "physics.stackexchange",
"id": 84368,
"tags": "black-holes, angular-momentum"
} |
All possible ways of merging two lists while keeping their order | Question: I am writing a function that gives all possible ways of merging two ordered lists such that in the merged list, the elements are still ordered according to their respective starting lists.
For example, merging (a b) with (1 2) would result in:
((a b 1 2) (a 1 b 2)(a 1 2 b) (1 a b 2) (1 a 2 b) (1 2 a b))
My initial intuition was to map over an indexing list:
(define (ordered-merge l1 l2)
(if (null? l2)
(list l1)
(let ((num-list (enumerate 0 (length l1) inc identity)))
(flatten (lambda (pos)
(map (lambda (result) (append (take l1 pos)
(cons (car l2) result)))
(ordered-merge (drop l1 pos) (cdr l2))))
num-list))))
enumerate is just a list-builder, which here will return a list (0 1 ... (length l1)).
But I have a feeling this is probably forcing imperative style into functional style.
Here's my second attempt:
(define (ordered-merge2 l1 l2)
(cond ((null? l1) (list l2))
((null? l2) (list l1))
(else
(let ((insert-here (ordered-merge2 l1 (cdr l2)))
(not-here (ordered-merge2 (cdr l1) l2)))
(append (map (lambda (result) (cons (car l2) result))
insert-here)
(map (lambda (result) (cons (car l1) result))
not-here))))))
How could this be improved?
Answer: Since lists are built last to first and you want the output to read first to last, we need to reverse the inputs. We will also need two accumulators (one for the permutation being built and one for the results so far) and some sort of recursion. Given that, it's just a matter of massaging the logic into place.
(define (ordered-merge3 l1 l2)
(let loop ((lefts (reverse l1)) (rights (reverse l2))
(permu '()) (acc '()))
(cond ((or (null? lefts) (null? rights))
(let ((new-permu (if (null? lefts)
(append (reverse rights) permu)
(append (reverse lefts) permu))))
(cons new-permu acc)))
;;base case, if either lefts or rights is null there is only one in-order permutation that can be formed.
(else (loop lefts
(cdr rights)
(cons (car rights) permu)
(loop (cdr lefts)
rights
(cons (car lefts) permu)
acc))))))
The last bit is the hard part to explain. When asked for the in-order permutation of lefts and rights, you can start forming the next permutation with either the car of lefts or the car of rights. In the nested loops here the inner loop is evaluated first because the scheme interpreter does eager evaluation. The value returned by examining the permutations that involve picking the left side is going to be one or more permutations tacked onto the existing known permutation at that point in the calculation. This returned value is used as the accumulator when examining the permutations that involve picking the car of the right side instead.
As far as efficiency goes, this loop executes once for every element in every permutation. Nothing fancy there, but it will do it with a memory stack no deeper than the sum of the lengths of the inputs.
(ordered-merge (list 1 2) (list 'a 'b))
;Value 14: ((1 2 a b) (1 a 2 b) (a 1 2 b) (1 a b 2) (a 1 b 2) (a b 1 2))
Overall the shape of how I would approach it is very similar to your second attempt. Just a few critiques. Generally cons is the way to build lists. Secondly by mapping up from prior results you are keeping quite a bit extra of data on the stack.
1 ]=> (ordered-merge2 '(1 2 3 4) '(a b c))
(10 0 0)
1 ]=> (ordered-merge3 '(1 2 3 4) '(a b c))
(0 0 1)
1 ]=> (ordered-merge2 '(1 2 3 4 5 6 7 8 9) '(a b c d e f g h))
(170 0 173)
1 ]=> (ordered-merge3 '(1 2 3 4 5 6 7 8 9) '(a b c d e f g h))
(70 0 72)
1 ]=> (ordered-merge2 '(1 2 3 4 5 6 7 8 9) '(a b c d e f g h j k l m o p))
;Aborting!: out of memory
1 ]=> (ordered-merge2 '(1 2 3 4 5 6 7 8 9) '(a b c d e f g h j k l m ))
(2540 1090 3633)
1 ]=> (ordered-merge3 '(1 2 3 4 5 6 7 8 9) '(a b c d e f g h j k l m ))
(860 50 905)
]=> (ordered-merge3 '(1 2 3 4 5 6 7 8 9) '(a b c d e f g h j k l m o p q))
(3820 1080 4902)
1 ]=>(ordered-merge3 '(1 2 3 4 5 6 7 8 9) '(a b c d e f g h j k l m o p q r))
;Aborting!: out of memory
Results are timings (cpu-time garbage-collection-time real-time)
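The output size itself accounts for those blow-ups: two lists of lengths m and n have C(m+n, m) order-preserving interleavings, each of length m+n, so the result grows combinatorially no matter how the merge is written. A quick check of the arithmetic (Python here, just for the counting):

```python
from math import comb  # Python 3.8+

# Number of order-preserving merges of lists of lengths m and n
for m, n in [(2, 2), (9, 8), (9, 12), (9, 16)]:
    print(m, n, comb(m + n, m))
# (9, 8)  -> 24310 permutations of length 17
# (9, 16) -> 2042975 permutations of length 25 -- hence the out-of-memory
```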
Addendum:
I also figured out that my output list was sharing internal list structure where possible.
(define test2 (ordered-merge2 (list 1 2 3) (list 'a 'b 'c)))
(define test (ordered-merge3 (list 1 2 3) (list 'a 'b 'c)))
(for-each (lambda (x) (begin (display x) (newline))) test)
(for-each (lambda (x) (begin (display x) (newline))) test2)
Compare the sublists (1 a 2 3 b c) and (a 1 2 3 b c) which in test are elements 2 and 3, and in test2 are 4 and 6. eq? is only true for lists if they are the same object in memory
1 ]=>(eq? (cddr (list-ref test 2)) (cddr (list-ref test 3)))
;Value: #t
1 ]=> (eq? (cddr (list-ref test2 4)) (cddr (list-ref test2 10)))
;Value: #f
1 ]=> (equal? (cddr (list-ref test2 4)) (cddr (list-ref test2 10)))
;Value: #t | {
"domain": "codereview.stackexchange",
"id": 18590,
"tags": "functional-programming, combinatorics, scheme"
} |
choosing a good representative genome subset | Question: I'm trying to build a genomic database for DNA alignments.
I started with NCBI accessions, but the data contains many near-duplicates, so I want to use a subset of at most N different strains for each species.
My question is: how can I obtain the minimal N such that the samples will be diverse enough and will cover the species well?
Answer: Check out the progenomes database: http://progenomes.embl.de/ | {
"domain": "bioinformatics.stackexchange",
"id": 1302,
"tags": "sequence-alignment, clustering, subset"
} |
Using keywords async/await in database queries (Windows Phone 8) | Question: I have a local database in a Windows Phone 8 app. The app includes a lot of queries to the database, and I don't want them to hurt the responsiveness of the UI.
For example I have a table of users and method to get a user from database by id.
Current variant
public class CacheDataContext : DataContext
{
public static string DBConnectionString = "Data Source=isostore:/Cache.sdf";
public CacheDataContext(string connectionString)
: base(connectionString) { }
public static AutoResetEvent OperationOnDatabaseUser = new AutoResetEvent(true);
public Table<User> UserItems;
}
public class CacheDataContextUser : CacheDataContext
{
public CacheDataContextUser(string connectionString)
: base(connectionString) { }
public User GetUser(string id)
{
try
{
OperationOnDatabaseUser.WaitOne();
using (CacheDataContext context = new CacheDataContext(DBConnectionString))
{
//find user in the data base and return
}
}
finally
{
OperationOnDatabaseUser.Set();
}
}
}
I need to ensure the safety of the data if different requests to add, modify, or delete data hit the database at the same time. For this I use an AutoResetEvent. I'm not sure I'm doing it right, but so far there have been no problems.
I can get user from the database:
using (DataBaseUser = new CacheDataContextFriends(ConnectionString))
{
var user = DataBaseUser.GetUser(id);
}
Async/await
But I want work with the database using keywords async/await.
public class CacheDataContextUser : CacheDataContext
{
public CacheDataContextUser(string connectionString)
: base(connectionString) { }
private object threadLock = new object();
public async Task<User> GetUser(string id)  // async is required for await below
{
using (CacheDataContext context = new CacheDataContext(DBConnectionString))
{
var result = await Task<User>.Factory.StartNew(() =>
{
lock (threadLock)
{
//find user in the data base and return
}
});
return result;
}
}
}
I'm afraid to rewrite the method as described above, because I'm not sure it's right. Please tell me what the problem may be. My main goal is to improve the responsiveness of the app.
Answer: Before even addressing the actual question, have you actually proven that database queries are harming your responsiveness? Since the queries are to local storage it's possible that they are fast enough to not require using asynchrony, particularly if they are simple queries or the database is small.
Next, I notice you've assumed that the database itself is not threadsafe, i.e. that you must only allow access from one thread at a time. Are you sure that's actually true? Many (most?) databases handle concurrency themselves, so you may be adding an unnecessary layer of synchronization. I looked around a bit, but could not find anything specifically documenting concurrent access to isolated storage databases. I would start by researching that, or possibly asking a question on StackOverflow. If the database does allow concurrent access then you just need to worry about update conflicts, which you could hopefully avoid in a single-user phone application.
What I'm getting at here is that multi-threading and locking is hard. Don't do it unless you're sure you have a good reason to do it.
If you really must do multi-threading, then the C# lock keyword is a good place to start. Unfortunately, your example probably will not work properly because each CacheDataContextUser instance will have its own lock object - so if you create more than one instance they could conflict with each other.
Your "Current Variant" actually gets this more right, because your AutoResetEvent is a static variable, so there is only a single instance of it across the system. However, as I understand DataContext, it lets you use Linq statements against the database, which will not know anything about your lock and hence will not be synchronized.
I think you'd have to create a separate application layer to wrap the DataContext and expose just the certain operations that your application needs. This is generally called the "Repository Pattern". Inside the repository you could create a single lock object, wrap a lock around all accesses to a DataContext, and use Task.Factory.StartNew inside each of the repository methods to make them asynchronous. | {
"domain": "codereview.stackexchange",
"id": 7619,
"tags": "c#, database, thread-safety, async-await, windows-phone"
} |
Radiation of EM waves by revolving electron | Question: If accelerating charges produce electromagnetic waves, what makes Bohr's model of the atom so different from Rutherford's? Rutherford's model had a drawback that due to the constant acceleration of the electron, EM waves would be produced and the atom would become unstable. But even in Bohr's model, the fact that there are distinct energy levels does not change the fact that the electron is revolving around the nucleus. It's still accelerating, and therefore must radiate energy! And therefore, Bohr's model would be unstable as well, right?
It would be great if someone helped me out with this strange mix of thoughts.
Answer:
But even in Bohr's model, the fact that there are distinct energy levels does not change the fact that the electron is revolving around the nucleus. It's still accelerating, and therefore must radiate energy!
In classical mechanics, yes. But atoms exist, instead of the electrons falling onto the nucleus and neutralizing it. The data from atomic spectra, the photoelectric effect, and black-body radiation forced physicists to invent quantum mechanics, because the classical theories could not explain the data.
Bohr imposed quantization of angular momentum (second page) in order to explain the mathematical series followed by the spectra of atoms, starting with hydrogen.
Quantization of angular momentum (L) meant that only photons with energy equal to the difference between energy levels could be radiated from the atom, instead of the continuum that classical electrodynamics predicts.
Thus L is not only conserved, but constrained to discrete values by the quantum number n. This quantization of angular momentum is a crucial result and can be used in determining the Bohr orbit radii and Bohr energies.
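As a reference sketch, the quantization condition and the standard Bohr-model results it leads to can be written as (textbook formulas, not derived in the original answer; here $a_0$ is the Bohr radius and $k = 1/4\pi\varepsilon_0$):

```latex
L = m_e v r = n\hbar, \qquad n = 1, 2, 3, \ldots
r_n = \frac{n^2\hbar^2}{m_e k e^2} = n^2 a_0, \qquad
E_n = -\frac{13.6\ \text{eV}}{n^2}, \qquad
h\nu = E_{n_i} - E_{n_f}
```

So a photon is emitted only when the electron drops between two discrete levels, which is what removes the classical continuum of radiated frequencies.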
It was with Schrödinger's equation that the mathematical theory of quantum mechanics developed; in it the electrons are not in orbits but in orbitals, probability loci defined by the wavefunction solutions of the quantum mechanical equations. | {
"domain": "physics.stackexchange",
"id": 78296,
"tags": "electromagnetic-radiation, atoms, stability"
} |
Implementing histogram in python | Question: I was trying to implement my own version of matplotlib's hist function and I came up with this:
import numpy as np
import matplotlib.pyplot as plt
import numba as nb
from numba import prange
import time

N = 10_000_000
normal = np.random.standard_normal(N)

@nb.jit(parallel=True)
def histogram(a, bins=1000, density=False):
    min_data = np.min(a)
    max_data = np.max(a)
    dx = (max_data - min_data) / bins
    D = np.zeros([bins, 2])
    for i in prange(bins):
        dr = i * dx + min_data
        x1 = a[a > dr]
        x2 = x1[x1 < (dr + dx)]
        D[i] = [dr, len(x2)]
    D = D.T
    x, y = D
    if density == True:
        inte = sum((x[1] - x[0]) * y)
        y /= inte
    plt.bar(x, y, width=dx)
    return D

def main():
    start_time = time.perf_counter()
    histogram(normal, bins=1000, density=True)
    #plt.hist(normal, bins=1000, density=True)
    print("Execution time:", time.perf_counter() - start_time, "seconds")
    plt.tight_layout()
    plt.show()

if __name__ == '__main__':
    main()
This outputs the desired result, but with the execution time of 24.3555 seconds. However, when I use the original hist (just comment my function and uncomment plt.hist) it does the job in just 0.6283 seconds, which is ridiculously fast. My question is, what am I doing wrong? And how does hist achieve this performance?
P.S.: Using this bit of code, you can see the result of the hist function and my implementation:
fig, axs = plt.subplots(1,2)
axs[0].hist(normal, bins=100, density=True)
axs[1] = histogram(normal, bins=100, density=True)
plt.tight_layout()
plt.show()
Where the left one is hist's output and the one on the right is my implementation; both are exactly the same.
Answer: You are slicing your original dataset 1000 times, making 2000 comparisons against the bin borders for every value in it.
It is much faster to calculate the proper bin index for each value using simple math:
bin = int((value - min_data) / dx)
D[bin][1] += 1
And you can use the original loop to set all the D[i][0] bin borders.
The only problem is that at max_data the computed bin equals bins (not bins-1 as expected). You can either add an extra if, or allocate one extra bin with D = np.zeros([bins+1,2]), fold the count from bin bins into bin bins-1, and then drop bin bins.
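A minimal sketch of that direct bin-index arithmetic, including the max_data edge case (the data and bin count here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(1_000)
bins = 10

min_data, max_data = a.min(), a.max()
dx = (max_data - min_data) / bins

counts = np.zeros(bins, dtype=np.int64)
for v in a:
    b = int((v - min_data) / dx)
    if b == bins:        # v == max_data lands one past the last bin
        b = bins - 1
    counts[b] += 1

assert counts.sum() == a.size  # every value landed in exactly one bin
```

Each value is placed with one division instead of 2000 comparisons, which is where the speedup comes from.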
UPD: So here is my full code (on the base of original code with some refactoring)
def histogram(a, bins=1000, density=False):
    min_data = np.min(a)
    max_data = np.max(a)
    dx = (max_data - min_data) / bins
    x = np.zeros(bins)
    y = np.zeros(bins + 1)
    for i in range(bins):
        x[i] = i * dx + min_data
    for v in a:
        bin = int((v - min_data) / dx)
        y[bin] += 1
    y[bins - 1] += y[bins]
    y = y[:bins]
    if density == True:
        inte = sum((x[1] - x[0]) * y)
        y /= inte
    plt.bar(x, y, width=dx)
    return np.column_stack((x, y))
It gave me 15 seconds in comparison with 60 seconds for the original code. What else can we do? The most intensive calculations are in this loop:
for v in a:
    bin = int((v - min_data) / dx)
    y[bin] += 1
And they are done in pure Python, which is really slow (C++ is almost 10 times faster, and it is what NumPy uses to optimize its inner calculations). So we'd better do these calculations with NumPy (just change the previous 3 lines to these 3 lines):
a_to_bins = ((a - min_data) / dx).astype(int)
for bin in a_to_bins:
    y[bin] += 1
That gave me the final time about 7 seconds.
UPD 2: What is the slowest part now? As we know, computation-heavy pure Python is slow. Let's check it:
def histogram(a, bins=1000, density=False):
    start_time = time.perf_counter()
    min_data = np.min(a)
    max_data = np.max(a)
    dx = (max_data - min_data) / bins
    print(time.perf_counter() - start_time, 'to calc min/max')
    x = np.zeros(bins)
    y = np.zeros(bins + 1)
    print(time.perf_counter() - start_time, 'to create x, y')
    for i in range(bins):
        x[i] = i * dx + min_data
    print(time.perf_counter() - start_time, 'to calc bin borders')
    a_to_bins = ((a - min_data) / dx).astype(int)
    print(time.perf_counter() - start_time, 'to calc bins')
    for bin in a_to_bins:
        y[bin] += 1
    print(time.perf_counter() - start_time, 'to fill bins')
    y[bins - 1] += y[bins]
    y = y[:bins]
    if density == True:
        inte = sum((x[1] - x[0]) * y)
        y /= inte
    print(time.perf_counter() - start_time, 'before draw')
    plt.bar(x, y, width=dx)
    print(time.perf_counter() - start_time, 'after draw')
    return np.column_stack((x, y))
Gives me:
0.010483399993972853 to calc min/max
0.011489700002130121 to create x, y
0.012588899990078062 to calc bin borders
0.09252060001017526 to calc bins
7.7265202999988105 to fill bins
7.727168200013693 before draw
8.440735899988795 after draw
So almost all the time was consumed by the Python loop adding 1 to each bin (note that the preceding NumPy step computed all the bin indices in less than 0.1 seconds). So if we rewrote that loop in C++ we would probably get a 10x faster result, which is pretty close to the original .hist timing.
By the way, if I use @nb.jit(parallel=True) then the time goes down to:
0.014584699994884431 to calc min/max
0.014821599994320422 to create x, y
0.3439053999900352 to calc bin borders
0.5012229999992996 to calc bins
0.8304882999800611 to fill bins
0.8317092999932356 before draw
1.5190471999812871 after draw
While the original .hist on my PC takes 0.8466 seconds (I don't know whether it uses parallel computation or not).
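As a further step (not in the original answer), the remaining Python loop can be removed entirely with np.bincount, which does the accumulation in compiled code; this sketch cross-checks the result against np.histogram over the same uniform bin edges:

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.standard_normal(100_000)
bins = 1000

min_data, max_data = a.min(), a.max()
dx = (max_data - min_data) / bins

# Vectorized bin indices; np.minimum folds the max_data edge case into the last bin.
idx = np.minimum(((a - min_data) / dx).astype(np.int64), bins - 1)
counts = np.bincount(idx, minlength=bins)

# Reference result over the same uniform bin edges.
ref, _ = np.histogram(a, bins=bins, range=(min_data, max_data))
assert counts.sum() == a.size
# Any disagreement can only be a float rounding tie exactly at a bin edge,
# which shifts a count to the neighbouring bin.
assert np.abs(counts - ref).max() <= 1
```

This removes the last interpreted loop, which is why it approaches the speed of plt.hist.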
In total: pure Python calculations are terribly slow, and moving them into C++ has a huge impact on speed. That is why you'd better use NumPy calculations (which use C++ under the hood) instead of Python loops. That can give a speedup of around 10x or even more. | {
"domain": "codereview.stackexchange",
"id": 42281,
"tags": "python, python-3.x, numpy, matplotlib, numba"
} |
Finding the wavelength of maximum absorption of Tartrazine using Fieser-Kuhn | Question: How would you use Fieser-Kuhn rules to calculate the wavelength of maximum absorption for tartrazine?
Fieser-Kuhn rules states that:
$\lambda_{max}=114+5M+n(48-1.7n)-16.5\ R_{endo} - 10\ R_{exo}$
What would $M$ be in this example? There aren't any alkyl substituents, but what I'm not sure about is the ring residues. Would this refer to the $\ce{SO3Na}$ and $\ce{COONa}$?
Would $n$, the number of conjugated double bonds be equal to 12?
Does Pyrazole count as a ring for $R_{endo}$? So $R_{endo}=3$?
Or would Woodward-Fieser rules be more suitable to use in this scenario?
Answer: Fieser-Kuhn rules need to be used to calculate the wavelength in this case, since there are more than 4 conjugated double bonds. Woodward-Fieser rules are only applicable to conjugated dienes and polyenes with up to 4 double bonds. [1]
M = 6 (SO3Na, SO3Na, COONa, the 2 N atoms of the azo linkage, and the N connected to the benzenesulfonate group)
n=12 as illustrated in the question
$R_{endo} = 3$, $R_{exo} = 0$
$\lambda_{max} = 114 + (5 \times 6) + 12 \times (48 - 1.7 \times 12) - 16.5 \times 3 - 10 \times 0$
$= 425.7\ \text{nm}$
This calculation matches the experimental data seen in [2], [3] and [4], which report the $\lambda_{max}$ of tartrazine as 425 nm, 427 nm and 428 nm respectively.
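The substitution above can be checked by plugging the values into the Fieser-Kuhn formula from the question (a quick sketch; M=6, n=12, R_endo=3, R_exo=0 are the parameter assignments made in this answer):

```python
def fieser_kuhn_lambda_max(M, n, R_endo, R_exo):
    """Fieser-Kuhn estimate of lambda_max in nm for a conjugated polyene."""
    return 114 + 5 * M + n * (48 - 1.7 * n) - 16.5 * R_endo - 10 * R_exo

lam = fieser_kuhn_lambda_max(M=6, n=12, R_endo=3, R_exo=0)
print(round(lam, 1))  # 425.7, in line with the reported experimental 425-428 nm
```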
References:
[1] Mehta, A. (2015) Ultraviolet-visible (UV-vis) spectroscopy – Fieser-Kuhn rules to calculate wavelength of maximum absorption (lambda-max) of polyenes (with sample problems): Analytical Chemistry, PharmaXChange.info. Available at: https://pharmaxchange.info/2013/05/ultraviolet-visible-uv-vis-spectroscopy-%E2%80%93-fieser-kuhn-rules-to-calculate-wavelength-of-maximum-absorption-lambda-max-of-polyenes-with-sample-problems/ (Accessed: March 4, 2023).
[2]Gobara, Mohamed & Elsayed, Mohamed. (2016). Preparation and Characterization of PANI/TiO2 Composite for Photocatalytic Degradation of Tartrazine Dye. The International Conference on Chemical and Environmental Engineering. 8. 370-389. 10.21608/iccee.2016.35131.
[3] Ayman A. Ali et al. (2020). Spectrophotometric determination of Tartrazine in soft drink powder and pharmaceutical dosage form. Journal of Basic and Environmental Sciences, 7, https://jbesci.org/published/7.2.7.pdf
[4] Gobara, Mohamed & Baraka, Ahmad. (2014). Tartrazine Solution as Dosimeter for Gamma Radiation Measurement. International Letters of Chemistry, Physics and Astronomy. 33. 106-117. 10.18052/www.scipress.com/ILCPA.33.106. | {
"domain": "chemistry.stackexchange",
"id": 17298,
"tags": "organic-chemistry, spectroscopy, color, photochemistry, absorption"
} |
Classical Memory enough to store states up to 40 qubits quantum system? | Question: As part of a discussion with my 'classical' friend, he insisted that building a state machine for calculating the outcome of a quantum computer is possible: simply calculate the outcomes of (known) algorithms on supercomputers and store their results in a look-up table (something like storing the truth table).
So why do people work on quantum simulators (say, capable of up to 40 qubits), which calculate the result every time? Why not simply (hypothetically) use the supercomputers of the world (say, capable of up to 60 qubits), calculate the result for $2^{60}$ input cases, store the results, and use them as a reference? How can I convince him it's not possible?
Note: this is for known quantum algorithms and their known circuit implementations.
Answer: Suppose that you have a quantum algorithm with $2^{60}$ possible inputs. Suppose also that it would take 1 nanosecond to run this on a supercomputer (which is unrealistically optimistic!). The total time required to run through all possible inputs would be 36.5 years.
Clearly it would be much better to just run the instance that you care about, and get the answer in an instant, rather than waiting half a lifetime to pick it from a list. This gets ever more true as we raise the runtime from the unrealistic 1 nanosecond.
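The 36.5-year figure follows from simple arithmetic (one nanosecond per input, $2^{60}$ inputs):

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # Julian year

total_seconds = 2**60 * 1e-9           # 1 ns per run
years = total_seconds / SECONDS_PER_YEAR
print(f"{years:.1f} years")            # about 36.5
```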
why do people work on quantum simulators (say, capable of up to 40 qubits), which calculate the result every time?
Even if you wanted to create a lookup table, you'd still need a simulator like this to create it. | {
"domain": "quantumcomputing.stackexchange",
"id": 339,
"tags": "quantum-algorithms, simulation"
} |