| anchor | positive | source |
|---|---|---|
Do white dwarfs lose mass as they fade to black dwarfs? Is there a correlation between temperature, mass, and radius? | Question: I'm having a difficult time understanding certain behaviors of white dwarfs.
I understand how mass is lost in the red giant to white dwarf transition process. I understand that white dwarfs can accumulate mass from a partner binary or some other source and I'm aware of the 1.4 M☉ Chandrasekhar limit. I get that they're not losing all their mass; in general they're just cooling and in the end there will be a lone black chunk of crystallized material floating forever in the void. I know they are really insanely hot when formed and very cold after an unfathomable amount of time. But is it that same, initial mass, or does it somehow diminish?
But is that initial mass the mass the remnant has after it's cooled to a black dwarf? I guess what I'm asking is whether there is a correlation between temperature, mass, and radius. Is there some sort of equation?
Answer: The mass of a white dwarf continues to be that which it was "born" with. It will not change significantly unless it accreted material from a companion.
The radius of a white dwarf is, to first order, given by the "mass-radius relationship" and this relationship does not involve the temperature.
The mass-radius relationship is appropriate for a "cold" star. "Cold" in this context means that the pressure that supports the white dwarf depends only on density, which is the case for the degenerate electrons in the interior, whose kinetic energies are much greater than their thermal energies.
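As a hedged illustration of this zero-temperature relationship (the $R \propto M^{-1/3}$ scaling holds only in the non-relativistic limit, and the normalization below is illustrative rather than taken from the answer):

```python
# Zero-temperature, non-relativistic white dwarf mass-radius relation:
# R scales as M^(-1/3), and temperature never appears in the formula.
def wd_radius_rsun(m_msun):
    # ~0.0126 R_sun at 1 M_sun is an assumed, approximate normalization
    return 0.0126 * m_msun ** (-1.0 / 3.0)
```

Note the counter-intuitive trend this encodes: a more massive white dwarf is smaller.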
However, degenerate electrons also have excellent thermal conductivity, so white dwarf interiors are isothermal. Yet, they have a density gradient - denser in the middle and less dense as you move outwards. At some point close to the surface the electrons stop being degenerate and the gas pressure becomes temperature sensitive.
What this means is that the radius of a hot white dwarf is bigger than that of a cold white dwarf of the same mass. The effect depends on both the mass of the white dwarf (bigger for lower mass white dwarfs with lower surface gravities) and its age (since white dwarfs cool as they get older).
This effect is not negligible and needs to be properly modelled in order to understand the luminosities of white dwarfs.
The plot below is from Parsons et al. (2017) and shows (as points) measurements of masses and radii of white dwarfs in eclipsing binary systems. The lines are model curves for white dwarfs with surface temperatures ranging from zero (dashed line) to 60,000 K (appropriate for a very young white dwarf) in steps of 10,000 K. Clearly, there is not a unique mass-radius relationship, and the radii do apparently depend on temperature (and core composition), as the model curves suggest. | {
"domain": "astronomy.stackexchange",
"id": 3613,
"tags": "white-dwarf"
} |
Help understanding a result from Euler's laws of rotation | Question: So I'm trying to learn how to apply Euler's laws of rotation. I'm currently looking at an example in the book Engineering Mechanics 3 by Dietmar Gross et al. that can be found on page 190. Seems like a great book, btw.
A mill is constructed as in the picture below.
A key result is that the normal force $N$ increases with the rotational speed $\omega_0$, and the book states that this is due to a gyroscopic effect. I find this result absolutely absurd. Am I getting this right? A wheel that is rotated on a flat surface like this, will have a higher normal force the faster it is rotating? Really? If so, is there any intuitive explanation for this?
Edit:
So solving this problem gave me a crash course in how gyroscopes work. This problem is closely related to something called gyroscopic precession. I will post links below that I used to gather the knowledge I needed. It turns out that this problem is much easier solved using the laws directly related to the angular momentum around the center-point of the mill. I think they used Euler's equations of rotation in the book just to verify that these give the same result.
The approach I used was analyzing the change in angular momentum
$\dot{L} = M$
First I determined where the torque $M$ would act. There is a great answer to this question below where a user posted a picture of this too. Then I solved for the vector norm $\|\dot{L}\|$ to see how large the torque would be. This gave the same result as in the textbook. I also bought a toy gyroscope to really feel this effect, which is both weird, absurd, and amazing to me. I have learned tons over the past days; it has been great.
Answer:
I find this result absolutely absurd.
which shows that most of rotational dynamics is counterintuitive.
Let me try and explain by hand-waving that
A key result is that the normal force $N$ increases with the
rotational speed
is correct.
Looking from the top the angular momentum of the wheel changes from $\vec L_{\rm old}$ to $\vec L_{\rm new}$.
The change in angular momentum is $\Delta \vec L$ as is shown in the vector diagram on the right.
Now change position and look at the wheel from the side.
The change in angular momentum is out of the screen and this must be the direction of the torque $\vec \tau$ which causes that change in angular momentum.
So about the left-hand pivot point the torque has to try and rotate the wheel anticlockwise, which must mean that $N > W$, remembering that when the wheel was not moving, $N = W$.
So the normal reaction force on the wheel $N$ is greater than the weight of the wheel $W$.
If the wheel is made to go round faster the magnitudes of $\vec L_{\rm old}$ and $\vec L_{\rm new}$ are larger and so the magnitude of $\Delta \vec L$ must be larger.
In turn the torque must have a larger magnitude and so $N-W$ must be larger with $W$ constant.
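Putting rough numbers on this (a hedged sketch; the wheel inertia, rates, and arm length are invented for illustration and are not from the book):

```python
# Moment balance about the pivot: the gyroscopic torque I*omega_0*Omega
# (spin rate omega_0, arm precession rate Omega) must be supplied by the
# extra normal force acting at arm length d, so N = W + I*omega_0*Omega/d.
def normal_force(W, I, omega_0, Omega, d):
    return W + I * omega_0 * Omega / d

slow = normal_force(W=50.0, I=0.2, omega_0=10.0, Omega=2.0, d=0.5)  # 58 N
fast = normal_force(W=50.0, I=0.2, omega_0=20.0, Omega=2.0, d=0.5)  # 66 N
```

Doubling the spin rate doubles the gyroscopic contribution to $N - W$.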
So $N$ does increase as the speed of the wheel increases. | {
"domain": "physics.stackexchange",
"id": 37353,
"tags": "newtonian-mechanics, rotational-dynamics"
} |
Why is energy of emitted particles negative? | Question: Recently I had a problem here, and in the comments I got an interesting thing written by user Sofia. Unfortunately, the topic is on hold now and I can't ask her more.
Here is the comment:
Radio-activity would help? Your class heard - I suppose - of substances such as Uranium, Plutonium, and others, that emit electrons, alpha-particles, etc. But the nuclei of these atoms create an electric field around themselves, and the potential energy of this field is much higher than the total energy of the emitted particle. We know the equality: total energy = kinetic energy + potential energy. So, the non-Newtonian behavior is that when passing through this field, the kinetic energy of the emitted particles is NEGATIVE.
Why is it negative?
Thanks!
Answer: Kinetic energy is never negative; the answer you got was wrong.
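One way to make this quantitative (a hedged sketch with invented numbers, not part of the original exchange): inside a rectangular barrier with $V_0 > E$ the wavefunction decays exponentially instead of oscillating, which is the only formal sense in which the kinetic energy "goes negative" there.

```python
import math

# Under the barrier the wavefunction behaves like exp(-kappa*x) with
# kappa = sqrt(2*m*(V0 - E))/hbar; for a wide barrier the transmission
# probability is roughly T ~ exp(-2*kappa*L).  SI units throughout.
HBAR = 1.054571817e-34   # J*s
M_E = 9.1093837015e-31   # kg, electron mass
EV = 1.602176634e-19     # J per eV

def transmission(E_eV, V0_eV, L_m, m=M_E):
    kappa = math.sqrt(2 * m * (V0_eV - E_eV) * EV) / HBAR
    return math.exp(-2 * kappa * L_m)
```

The transmission is always a probability between 0 and 1; nothing in the calculation ever requires measuring a negative kinetic energy.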
Of course, you could say metaphorically that when the particle is tunneling the KE is negative, but that cannot be measured, and as far as I know, the particle that tunnels is either on one side or the other of the barrier, but never inside. In quantum theory the eigenvalues of kinetic energy cannot be negative. The apparent violation can be explained by the uncertainty principle, where the particle borrows energy from the vacuum for a short period of time so it can cross the barrier, but for the same reason the particle cannot be measured while in a state of borrowed energy. | {
"domain": "physics.stackexchange",
"id": 17981,
"tags": "newtonian-mechanics, energy"
} |
Role of dam methylase in bacteria | Question: In bacteria, an enzyme called dam methylase (Deoxyadenosine methylase) methylates adenines (A) in the sequence GATC in the new strand formed after replication.
What role does this methylation play?
I read that it has something to do with proofreading. If this is so, then how?
Answer: The dam methylase has three different functions:
Correction of replication errors: since the new DNA molecule
is only hemimethylated (the old strand is methylated, the newly
synthesized one is not), the repair machinery can tell the strands apart.
Because mismatch repair then acts only on the new strand, errors
introduced during replication can be corrected.
Regulation of replication: The ori of the chromosome is methylated to ensure that it is replicated only once.
Regulation of transcription: Methylation of GATC sequences promotes the transcription of genes.
For further details see the references.
References:
The great GATC: DNA methylation in E. coli
The dam and dcm strains of Escherichia coli--a review.
Dam methylation: coordinating cellular processes. | {
"domain": "biology.stackexchange",
"id": 5911,
"tags": "bacteriology, dna-replication"
} |
How to apply a bang-bang signal of amplitude 1 N and 1 s width as an input force to reproduce certain results in Matlab? | Question: I'm working on dynamic modeling and simulation of a mechanical system (an overhead crane). After I obtained the equation of motion, it has the form: $$ M(q)\ddot{q}+C(q,\dot{q})\dot{q}+G(q)=Q $$
All the matrices are known: the inertia matrix $M(q)$, the Coriolis-centrifugal matrix $C(q,\dot{q})$, and the gravity vector $G(q)$, all as functions of the generalized coordinates $q$ and their derivatives $\dot{q}$.
I want to solve for $q$ using a Matlab ODE solver (in an m-file). I got the response for some initial conditions and zero input, but I want to find the response for the aforementioned control signal (a bang-bang signal of amplitude 1 N and 1 s width). I'm trying to regenerate some results from the literature, and what the authors of that work said regarding the input signal is the following: "A bang-bang signal of amplitude 1 N and 1 s width is used as an input force, applied at the cart of the gantry crane. A bang-bang force has a positive (acceleration) and negative (deceleration) period allowing the cart to, initially, accelerate and then decelerate and eventually stop at a target location." I didn't grasp what they mean by a bang-bang signal. I know in Matlab we can have a step input, an impulse, etc., but I'm not familiar with a bang-bang signal. According to this site and this one, bang-bang is a controller rather than a signal.
Could anyone suggest to me how to figure out this issue and implement this input signal? preferably in m-file.
The code I'm using is given below, in two parts:
function xdot = AlFagera(t,x,spec)
% xdot = zeros(8,1);
xdot = zeros(12,1); % to include the input torque
% % Crane Specifications
mp = spec(1);
mc = spec(2);
mr = spec(3);
L = spec(4);
J = spec(5);
g = 9.80; % acceleration of gravity (m/s^2)
% % matrix equations
M11 = mr+mc+mp; M12 = 0; M13 = mp*L*cos(x(3))*sin(x(4)); M14 = mp*L*sin(x(3))*cos(x(4));
M21 = 0; M22 = mp+mc; M23 = mp*L*cos(x(3))*cos(x(4)); M24 = -mp*L*sin(x(3))*sin(x(4));
M31 = M13; M32 = M23; M33 = mp*L^2+J; M34 = 0;
M41 = M14; M42 = M24; M43 = 0; M44 = mp*L^2*(sin(x(3)))^2+J;
M = [M11 M12 M13 M14; M21 M22 M23 M24; M31 M32 M33 M34; M41 M42 M43 M44];
C11 = 0; C12 = 0; C13 = -mp*L*sin(x(3))*sin(x(4))*x(7)+mp*L*cos(x(3))*cos(x(4))*x(8);
C14 = mp*L*cos(x(3))*cos(x(4))*x(7)-mp*L*sin(x(3))*sin(x(4))*x(8);
C21 = 0; C22 = 0; C23 = -mp*L*sin(x(3))*cos(x(4))*x(7)-mp*L*cos(x(3))*sin(x(4))*x(8);
C24 = -mp*L*cos(x(3))*sin(x(4))*x(7)-mp*L*sin(x(3))*cos(x(4))*x(8);
C31 = 0; C32 = 0; C33 = 0; C34 = -mp*L^2*sin(x(3))*cos(x(3))*x(8);
C41 = 0; C42 = 0; C43 = -C34; C44 = mp*L^2*sin(x(3))*cos(x(4))*x(7);
C = [C11 C12 C13 C14; C21 C22 C23 C24; C31 C32 C33 C34; C41 C42 C43 C44];
Cf = C*[x(5); x(6); x(7); x(8)];
G = [0; 0; mp*g*L*sin(x(3)); 0];
fx = 0;
if t >=1 && t<=2
fy = 1.*square(t*pi*2);
else fy = 0;
end
F =[fx; fy; 0; 0]; % input torque vector,
xdot(1:4,1)= x(5:8);
xdot(5:8,1)= M\(F-G-Cf);
xdot(9:12,1) = F;
And:
clear all; close all; clc;
t0 = 0;tf = 20;
x0 = [0.12 0.5 0 0, 0 0 0 0,0 0 0 0]; % initial conditions
% % specifications
Mp = [0.1 0.5 1]; % variable mass for the payload
figure
plotStyle = {'b-','k','r'};
for i = 1:3
mp = Mp(i);
mc = 1.06; mr = 6.4; % each mass in kg
L = 0.7; J = 0.005; % m, kg-m^2 respe.
spec = [mp mc mr L J];
% % Call the function
[t,x] = ode45(@(t,x)AlFagera(t,x,spec),[t0 :0.001: tf],x0);
legendInfo{i} = ['mp=',num2str(Mp(i)),'kg'];
fx = diff(x(:,9))./diff(t);
fy = diff(x(:,10))./diff(t);
tt=0:(t(end)/(length(fx)-1)):t(end); % this time vector
% to plot the cart positions in x and y directions
subplot(1,2,1)
plot(t,x(:,1),plotStyle{i})
axis([0 20 0 0.18]);
grid
xlabel('time (s)');
ylabel('cart position in x direction (m)');
hold on
legend(legendInfo,'Location','northeast')
subplot(1,2,2)
plot(t,x(:,2),plotStyle{i})
axis([0 20 0 1.1]);
grid
xlabel('time (s)');
ylabel('cart position in y direction (m)');
hold on
legend(legendInfo,'Location','northeast')
end
% to plot the input torque (bang-bang signal), just one sample
figure
plot(tt,fy)
grid
set(gca,'XTick',[0:20])
xlabel('time (s)');
ylabel('input signal, f_y (N)');
Furthermore, the results I'm getting and what I supposed to get are shown:
Major difficulties: the initial conditions are not clearly stated in the paper, and it is unclear whether the input force acts only in the y direction (as it should) or in a different direction. I appreciate any help.
the paper I'm trying to recreate is:
R. M. T. Raja Ismail, M. A. Ahmad, M. S. Ramli, and F. R. M. Rashidi, “Nonlinear Dynamic Modelling and Analysis of a 3-D Overhead Gantry Crane System with System Parameters Variation.,” International Journal of Simulation–Systems, Science & Technology, vol. 11, no. 2, 2010.
http://ijssst.info/Vol-11/No-2/paper2.pdf
Answer: This is how I would go about simulating a nonlinear ODE in Matlab. As I mentioned in a (now-deleted) comment on your question, I typically work with linear ODEs in my line of work, which means that I usually use the (awesome) functions in the Control System Toolbox.
Now, I'll start by saying that you haven't given any definitions of what the matrices are in your equation, so I can't give you a concrete example of how to solve your specific example. If you update your question to give numbers for everything then I can modify this answer to solve your problem. That said, you give:
$$
M(q)\ddot{q}+C(q,\dot{q})\dot{q}+G(q)=Q
$$
First I would say that this can be re-written as:
$$
\ddot{q} = M(q)^{-1}\left(Q-C(q,\dot{q})\dot{q} -G(q)\right)
$$
So, for an example for you, I chose an RLC circuit, which takes the form:
$$
\begin{aligned}
\dot{I} &= \frac{1}{L}\left(V_{\mathrm{in}} - V_C - IR\right) \\
\dot{V}_C &= \frac{1}{C}I
\end{aligned}
$$
Typically your input signal would be a smooth function of time. Here you're looking for a bang-bang signal, which is akin to a light switch. At some time, the input signal goes from nothing immediately to some value, then later from that value immediately back to nothing.
So, where typically you would use an interpolation command to get values at defined sample-time increments, here you don't want those values interpolated. You don't want a ramp; you want the signal to shift immediately.
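Put differently, a bang-bang force is just a piecewise-constant function of time: full positive amplitude, then full negative amplitude, then zero. A hedged sketch (Python for neutrality; the 1 N amplitude and 1 s width follow the paper's description, but the switch-on time is an assumption):

```python
# Bang-bang force: +amplitude for one width (accelerate), -amplitude for
# the next width (decelerate), zero otherwise -- no ramps, no smoothing.
def bang_bang(t, t_on=1.0, width=1.0, amplitude=1.0):
    if t_on <= t < t_on + width:
        return amplitude
    if t_on + width <= t < t_on + 2 * width:
        return -amplitude
    return 0.0
```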
So this question really is two parts:
How do I pass parameters to an ODE45 (or 23 or whatever else) function in Matlab, and
How do I define a step change in an input signal for an ODE45 function?
The answers are (examples to follow)
Define your ODE function as a function in its own script, such that the first line of the script is something like function [dq] = AlFageraODE(t,x,parameter1,parameter2,...,parameterN). Then, when you want to solve the ODE, call Matlab's built-in ODE solver as follows: [t,y] = ode45(@AlFageraODE,[t0,t1],[y0,y1],[ ],parameter1,parameter2,...,parameterN);. The empty square brackets [ ] need to be included because that is where you can pass specific options to the ODE solver. If you don't want to pass anything specific and are okay with the default settings, you still need to put something, so put an empty array []. After that you can put in parameters that will be passed to your custom function.
To get a true step function, you need to split the simulation into three distinct segments - before the step, during the step, and after the step. Anything else will result in the need to interpolate the input command. The last entry in the outputs of one segment provides the initial conditions for the next segment of the simulation.
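The same segmenting idea can be sketched outside Matlab. Below is a hedged Python version on a toy first-order system $\dot{y} = u - y$ (fixed-step RK4; the system and step size are purely illustrative):

```python
# Integrate each segment separately so the step input is never
# interpolated; the end state of one segment seeds the next one.
def rk4_segment(f, y, t0, t1, h=1e-3):
    t = t0
    while t < t1 - 1e-12:
        step = min(h, t1 - t)
        k1 = f(t, y)
        k2 = f(t + step / 2, y + step * k1 / 2)
        k3 = f(t + step / 2, y + step * k2 / 2)
        k4 = f(t + step, y + step * k3)
        y += step * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += step
    return y

def simulate(segments, y0=0.0):
    # segments: list of (t0, t1, u) with the input u constant on [t0, t1)
    y = y0
    for t0, t1, u in segments:
        y = rk4_segment(lambda t, s, u=u: u - s, y, t0, t1)
    return y

# off for 1 s, bang on for 1 s, off again for 3 s
y_end = simulate([(0, 1, 0.0), (1, 2, 1.0), (2, 5, 0.0)])
```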
Below, I've written the custom ODE function, and then below that is the script used to solve the custom ODE. Note that the R, L, and C values are set in the calling script, not in the custom ODE. This is because those values are passed in to the custom ODE, along with what the applied voltage should be during that particular segment.
The examples:
The custom ODE function (again, this simulates an RLC circuit, but you can modify it for any custom ODE needing parameters passed in.)
function dx = RoboticsODE(t,x,Vin,R,L,C)
dx = zeros(2,1);
I = x(1);
V = x(2);
di_dt = (1/L)*(Vin - V - I*R);
dv_dt = (1/C)*I;
dx(1) = di_dt;
dx(2) = dv_dt;
The script that solves that ODE for a bang-bang signal.
%% Clear/close everything
clear;
close all;
clc;
%% Simulation Configuration
v0 = 0; % Initial *output* voltage
i0 = 0; % Initial circuit current
C = 47*(1/1000); % Capacitance, F
R = 0.1; % Resistance, Ohm
L = 22*(1/1000); % Inductance, H
appliedVoltages = [0,1,0]; % The applied voltages at t0, bangOn, bangOff
tStart = 0; % t0
bangStart = 1; % bangOn time
bangWidth = 1; % bangOff time = bangOn time + bangWindow
endWindow = 5; % endWindow is how long to "watch" the simulation after
% the bang-bang signal goes "off".
%% Output Initialization
outputTime = zeros(0,1);
outputVoltage = zeros(0,1);
outputCurrent = zeros(0,1);
inputVoltage = zeros(0,1);
%% Dependent Configuration
currentValues = [i0;v0];
samplePoints = cumsum([tStart,bangStart,bangWidth,endWindow]);
% A note on the above - I use the cumulative sum (cumsum) because I defined
% each point as the number of seconds *after* the previous event. If you
% defined absolute time points then you'd just use those time points
% directly with no cumulative sum.
nSegments = numel(samplePoints)-1;
%% Simulation
for currentSegment = 1:nSegments
% Setup the simulation by getting the current time window and "intial"
% conditions.
Vt = appliedVoltages(currentSegment);
t0 = samplePoints(currentSegment);
t1 = samplePoints(currentSegment+1);
sampleTime = [t0;t1];
% Run the simulation by solving the ODE for this particular segment.
[intermediateTime,intermediateOutput] = ...
ode45(@RoboticsODE,sampleTime,currentValues,[],Vt,R,L,C);
% Assign outputs
nOutputPoints = numel(intermediateTime);
outputTime(end+1:end+nOutputPoints) = intermediateTime;
outputCurrent(end+1:end+nOutputPoints) = intermediateOutput(:,1);
outputVoltage(end+1:end+nOutputPoints) = intermediateOutput(:,2);
inputVoltage(end+1:end+nOutputPoints) = Vt*ones(nOutputPoints,1);
% Setup the next simulation by setting the "initial" conditions for
% that simulation equal to the ending conditions for the current
% simulation.
currentValues(1) = outputCurrent(end);
currentValues(2) = outputVoltage(end);
end
%% Output Plot
plot(outputTime,inputVoltage);
hold on;
plot(outputTime,outputVoltage);
title('RLC Circuit with Step Input to ODE45');
xlabel('Time (s)');
ylabel('Voltage (V)');
legend('Input Voltage','Output Voltage');
The plot of the output.
Finally, as I mentioned, if you would be willing to give concrete numbers for your equation then I could tailor this to solve your particular problem. As it stands, I can't provide any solution to a symbolic nonlinear ODE so this example is the best I can give you.
:EDIT:
I've got the problem solved for you. The code is attached below. I'll say this, though: it's important, for a step input (bang-bang, etc.), to segment the ODE solving process as I described above. This is because Matlab tries to optimize solving the ODE and may not sample time in exactly the way you would expect.
The segmenting method is, again, as described above: you split the solving process at every discontinuity. The initial conditions for the following step are equal to the ending conditions of the current step.
The images below are the solutions I got with the segmented method and the "all-in-one" method. The all-in-one is the way you had it setup, where the force function was determined by an if statement in the ODE function. Because Matlab chooses the sample time increments, the positive and negative segments aren't guaranteed to have exactly the same number of samples. I think this is the reason for the drift in the output of the all-in-one solution.
I found several problems with your method. When I corrected them, I got a plot that looked (to me) to exactly duplicate the plots from the paper you linked.
The biggest problem - fx and fy should be the same.
Also a problem, the pulse width should be 1s. You used a square wave of period 1s, meaning that there was a 0.5s "positive" and 0.5s "negative" signal. I halved the frequency and got the proper 1s signal width.
Your initial conditions for x and y were not zero. They are zero in the paper, so I set them to zero in the simulation to replicate the figures in the paper.
Here's the code! First, the ODE script:
function varDot = AlFagera(t,var,spec,F)
% In general, I did a lot of cleanup with this function to make things
% easier for me to read.
%% Misc Declarations
% varDot = zeros(8,1);
varDot = zeros(12,1); % to include the input torque
g = 9.80; % Acceleration of gravity (m/s^2)
%% Define Forces if Undefined
% If the segmentSolution is being used, then the force is supplied to the
% function. If it's not being used (the "all-in-one" solution), then the
% force is determined by the simulation time.
if isempty(F)
if t >=1 && t<=3
fy = -1.*sign(sin(t*pi));
else
fy = 0;
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% I'll highlight this, because I think this was the key problem %
% you were having. The force for fx and fy should be the same. You %
% had fx = 0 for all cases. %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%fx = 0;
fx = fy;
F = [fx;fy;0;0];
end
%% Crane Specifications
mp = spec(1);
mc = spec(2);
mr = spec(3);
L = spec(4);
J = spec(5);
%% Breakout the Input Variable
x = var(1);
y = var(2);
theta = var(3);
phi = var(4);
xDot = var(5);
yDot = var(6);
thetaDot = var(7);
phiDot = var(8);
%% Repackage the Inputs into Useful Form
q = [...
x; ...
y; ...
theta; ...
phi];
qDot = [...
xDot; ...
yDot; ...
thetaDot; ...
phiDot];
%% Simplified expressions for (extensive) use later
mpL = mp*L;
cosT = cos(theta);
sinT = sin(theta);
cosP = cos(phi);
sinP = sin(phi);
%% Matrix Expressions
M = [...
mr+mc+mp , 0 , mpL*cosT*sinP , mpL*sinT*cosP; ...
0 , mp+mc , mpL*cosT*cosP , -mpL*sinT*sinP; ...
mpL*cosT*sinP , mpL*cosT*cosP , mpL*L+J , 0; ...
mpL*sinT*cosP , -mpL*sinT*sinP , 0 , mp*(L*sinT)^2+J];
C = [...
0 , 0 , -mpL*sinT*sinP*thetaDot+mpL*cosT*cosP*phiDot , mpL*cosT*cosP*thetaDot-mpL*sinT*sinP*phiDot; ...
0 , 0 , -mpL*sinT*cosP*thetaDot-mpL*cosT*sinP*phiDot , -mpL*cosT*sinP*thetaDot-mpL*sinT*cosP*phiDot; ...
0 , 0 , 0 , -mpL*L*sinT*cosT*phiDot; ...
0 , 0 , mpL*L*sinT*cosT*phiDot , mpL*L*sinT*cosT*thetaDot];
G = [...
0; ...
0; ...
mpL*g*sinT; ....
0];
%% Assign Outputs
qDdot = M\(-C*qDot-G+F);
varDot = [...
qDot; ...
qDdot; ...
F];
Then, the script that solves the ODE:
clear all;
close all;
clc;
% Compare the all-in-one method of solving the problem with the segmented
% method of solving the problem by setting the variable below equal to
% "true" or "false".
segmentSolution = true;
t0 = 0;tf = 20;
% Your initial conditions here for x- and y-positions were not zero, so I
% set them to zero to reproduce Figure 2 and Figure 3 in the paper you
% linked.
% Also - you don't ever use the last 4 values of this except as a way to
% output the force. This isn't used in the segmentSolution because there
% the input force is supplied to the function.
x0 = [0 0 0 0, 0 0 0 0,0 0 0 0]; % Initial Conditions
%% Specifications
Mp = [0.1 0.5 1]; % Variable mass for the payload
figure
plotStyle = {'b-','k','r'};
%% SegmentSolution Settings
fx = [0,1,-1,0];
fy = [0,1,-1,0];
tStart = 0;
tOn = 1;
bangWidth = 1;
tEndWindow = 17;
sampleTime = cumsum([tStart,tOn,bangWidth,bangWidth,tEndWindow]);
nSegments = numel(sampleTime)-1;
%% Simulation
for i = 1:3
mp = Mp(i);
mc = 1.06; mr = 6.4; % each mass in kg
L = 0.7; J = 0.005; % m, kg-m^2 respe.
spec = [mp mc mr L J];
%% Call the the function
initialConditions = x0;
if segmentSolution
t = zeros(0,1);
x = zeros(0,numel(x0));
outputFx = zeros(0,1);
outputFy = zeros(0,1);
for currentSegment = 1:nSegments
inputForce = [fx(currentSegment),fy(currentSegment),0,0].';
t0 = sampleTime(currentSegment);
t1 = sampleTime(currentSegment+1);
[intermediateT,intermediateX] = ode45(@AlFagera,[t0 :0.001: t1],initialConditions,[],spec,inputForce);
nOutputSamples = numel(intermediateT);
index1 = size(t,1)+1;
index2 = size(t,1)+nOutputSamples;
t(index1:index2) = intermediateT;
x(index1:index2,:) = intermediateX;
initialConditions = x(end,:).';
outputFx(index1:index2) = inputForce(1)*ones(nOutputSamples,1);
outputFy(index1:index2) = inputForce(2)*ones(nOutputSamples,1);
end
tt = t;
else
inputForce = []; % Leave this empty for the all-in-one solver.
% There's a check in the code to setup the force
% when it's not defined.
[t,x] = ode45(@AlFagera,[t0:0.001:tf],initialConditions,[],spec,inputForce);
outputFx = diff(x(:,9))./diff(t);
outputFy = diff(x(:,10))./diff(t);
tt=0:(t(end)/(length(outputFx)-1)):t(end);
end
legendInfo{i} = ['mp=',num2str(Mp(i)),'kg'];
%fx = diff(x(:,9))./diff(t);
%fy = diff(x(:,10))./diff(t);
%tt=0:(t(end)/(length(fx)-1)):t(end); % this time vector
% to plot the cart positions in x and y directions
subplot(1,2,1)
plot(t,x(:,1),plotStyle{i})
axis([0 20 0 0.18]);
grid
xlabel('time (s)');
ylabel('cart position in x direction (m)');
hold on
legend(legendInfo,'Location','northeast')
subplot(1,2,2)
plot(t,x(:,2),plotStyle{i})
axis([0 20 0 1.1]);
grid
xlabel('time (s)');
ylabel('cart position in y direction (m)');
hold on
legend(legendInfo,'Location','northeast')
end
% to plot the input torque (bang-bang signal), just one sample
figure
plot(tt,outputFy);
grid
set(gca,'XTick',[0:20])
xlabel('time (s)');
ylabel('input signal, f_y (N)');
I took the liberty of cleaning up the code in the ODE function, I hope you don't mind.
So, to summarize, the solution to your problem is:
Pass inputs to the ODE function (such as parameters and applied forces), and
Segment the ODE solution at each discontinuity. | {
"domain": "robotics.stackexchange",
"id": 1064,
"tags": "control, robotic-arm, dynamics, matlab, input"
} |
How to perform PCA in the validation/test set? | Question: I was using PCA on my whole dataset (and, after that, I would split it into training, validation, and test datasets). However, after a little bit of research, I found out that this is the wrong way to do it.
I have few questions:
Are there some articles/references that explain why this is the wrong way?
How can I transform the validation/test set?
Steps to do PCA (from https://www.sciencedirect.com/science/article/pii/S0022460X0093390X):
zero mean
$$\mu = \frac{1}{M}\sum_{i=1}^{M} x_{i}$$
where x is my training set
variance (scatter)
$$S^{2} = \frac{1}{M}\sum_{i=1}^{M} (x_{i}-\mu)^{T}(x_{i}-\mu)$$
use (1) and (2) to transform my original training dataset
$$x_{new} = \frac{1}{\sqrt{M}} \frac{(x_{i} - \mu)}{S}$$
calculate covariance matrix (actually correlation matrix)
$$C= x_{new}^T x_{new}$$
take the $k$ eigenvectors ($\phi$) of the covariance matrix and define the new space for my reduced-dimension training set (where $k$ is the number of principal components that I choose according to my variance)
$$ x_{new dim} = x_{new}\phi$$
Ok, then I have my new dimensional training dataset after PCA (till here it's right, according to other papers that I have read).
The question is: what do I have to do now for my validation/test set? Just the equation below?
$$y_{new dim} = y\phi $$
where $y$ is (for example) my original validation dataset.
Can someone explain the right thing to do?
Answer: For the first point, I'm very sorry that I cannot give you any literature on this, but I might be able to explain why you don't apply PCA to both datasets independently.
Principal components analysis is simply a transformation of your data into another (lower-dimensional) coordinate system. The axes of your new coordinate system are defined by the principal components (i.e. eigenvectors) of your covariance matrix.
Since you will train your machine learning algorithm in the domain that is generated by the PCA, your test data must be exactly in the same domain. So as you said, you use exactly the same transformation for the test data as for the training data, i.e.
$y_{newdim} = y \phi $.
Of course if you applied standardization to your training data, you have to apply the same standardization to your test data. So you need to store the mean $\mu_x$ and the standard deviation $S_x$ and also standardize your test data y to
\begin{equation}
y_{standardized} = \dfrac{y_i - \mu_x}{S_x}
\end{equation}
Note here that you have an error in your standardization formula (you do not need the $1/\sqrt{M}$ factor).
The point is that the principal components of your test data $\phi_{y}$ would not match the principal components of your training data $\phi_{x}$. Thus the transformations from original space into PCA-space $\Phi_x(u)$ and $\Phi_y(u)$ would diverge and similar data points in the original space might be far away in the PCA-representation and vice versa. This is why you generate the mapping $x_{newdim} = \Phi_x(x) = x\phi$ and apply it also on the test data.
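A hedged numpy sketch of exactly this recipe (random data and a top-2 projection; variable names are mine, not the question's):

```python
import numpy as np

# Fit the standardization and the principal directions on the TRAINING
# data only, then reuse mu, s, and phi unchanged on the test data.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 5))
X_test = rng.normal(size=(20, 5))

mu = X_train.mean(axis=0)
s = X_train.std(axis=0)
Xc = (X_train - mu) / s

C = Xc.T @ Xc / len(Xc)                 # correlation matrix of training data
eigvals, eigvecs = np.linalg.eigh(C)    # eigh returns ascending eigenvalues
phi = eigvecs[:, ::-1][:, :2]           # top-2 principal directions

X_train_new = Xc @ phi                  # training set in PCA space
X_test_new = ((X_test - mu) / s) @ phi  # SAME mu, s, phi for the test set
```

The only quantities carried over to the test set are $\mu$, $S$, and $\phi$, all estimated from the training data alone.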
I hope I could make it clear to you.
Best | {
"domain": "ai.stackexchange",
"id": 794,
"tags": "machine-learning, principal-component-analysis, test-datasets, validation-datasets"
} |
Formula complexity of arithmetic multiplication | Question: I'd need some bounds on the size of Boolean formulas (over $\land$, $\lor$ and $\neg$) computing the multiplication of two integers.
I'm not an expert in circuit complexity and I'm crawling through the literature. As far as I know, it's known that multiplication is not in $\mathsf{AC}^0$, i.e. it cannot be computed by polynomial-size circuits of constant depth, because it can be used to compute Parity.
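(That Parity reduction can be made concrete. A hedged Python sketch, not part of the original post: spread the input bits $k$ positions apart, multiply by a comb of 1s, and one block of the product equals the bit-count, whose low bit is the parity.)

```python
# Reduction of Parity to integer multiplication: with block width k
# large enough that block sums never carry, the block at position
# (n-1)*k of a*comb holds the popcount of the input bits.
def parity_via_multiplication(bits):
    n = len(bits)
    k = max(1, n.bit_length() + 1)      # 2^k > n, so no inter-block carries
    a = sum(b << (i * k) for i, b in enumerate(bits))
    comb = sum(1 << (j * k) for j in range(n))
    count = (a * comb >> ((n - 1) * k)) & ((1 << k) - 1)
    return count & 1
```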
But, first of all, what if I relax the restriction on depth? Can it be computed by polynomial-size circuits of, say, logarithmic depth? This would still be a polynomial-size circuit as a whole, right?
Then, this is for circuits, but does this tell me anything about the size of Boolean formulas? At worst, a formula can be exponentially larger than the circuit, but does that blow-up actually happen for multiplication?
In other words, are there polynomial-size formulas to compute integer multiplication?
Answer: Multiplication is well known to be computable in uniform $\mathrm{TC}^0$, i.e., by a DLOGTIME-uniform family of constant-depth polynomial-size circuits using (unbounded fan-in) $\land$, $\lor$, $\neg$, and Majority gates. In fact, multiplication is $\mathrm{TC}^0$-complete under $\mathrm{AC}^0$ Turing reductions.
Consequently, it is also computable in uniform $\mathrm{NC}^1$, i.e., by a DLOGTIME-uniform family of bounded fan-in Boolean formulas or circuits of depth $O(\log n)$ (thus of polynomial size). Here, uniformity is with respect to the usual infix or prefix representation for formulas, or with respect to the extended connection language for circuits.
Consequently, it is also computable by polynomial-size unbounded fan-in $\land$, $\lor$, $\neg$ circuits of depth $O(\log n/\log\log n)$. This is optimal by the usual circuit lower bounds for Parity or Majority. | {
"domain": "cstheory.stackexchange",
"id": 5616,
"tags": "circuit-complexity"
} |
Math quiz for teachers & students | Question: I have been programming a maths quiz that can be used by teachers, and I have been trying to make the code as short as possible so it is easier to understand.
If there is any way I could make it more concise, please tell me and explain the programming behind it.
import sys
import random

def get_bool_input(prompt=''):
    while True:
        val = input(prompt).lower()
        if val == 'yes':
            return True
        elif val == 'no':
            return False
        else:
            sys.exit("Not a valid input (yes/no is expected) please try again")

status = input("Are you a teacher or student? Press 1 if you are a student or 2 if you are a teacher")
if status == "1":
    score = 0
    name = input("What is your name?")
    print("Alright", name, "welcome to your maths quiz")
    level_of_difficulty = int(input(("What level of difficulty are you working at?\n"
                                     "Press 1 for low, 2 for intermediate "
                                     "or 3 for high\n")))
    if level_of_difficulty not in (1, 2, 3):
        sys.exit("That is not a valid level of difficulty, please try again")
    if level_of_difficulty == 3:
        ops = ['+', '-', '*', '/']
    else:
        ops = ['+', '-', '*']
    for question_num in range(1, 11):
        if level_of_difficulty == 1:
            number_1 = random.randrange(1, 10)
            number_2 = random.randrange(1, 10)
        else:
            number_1 = random.randrange(1, 20)
            number_2 = random.randrange(1, 20)
        operation = random.choice(ops)
        maths = round(eval(str(number_1) + operation + str(number_2)), 5)
        print('\nQuestion number: {}'.format(question_num))
        print("The question is", number_1, operation, number_2)
        answer = float(input("What is your answer: "))
        if answer == maths:
            print("Correct")
            score = score + 1
        else:
            print("Incorrect. The actual answer is", maths)
    if score > 5:
        print("Well done you scored", score, "out of 10")
    else:
        print("Unfortunately you only scored", score, "out of 10. Better luck next time")
    class_number = input("Before your score is saved, are you in class 1, 2 or 3? Press the matching number")
    if class_number not in ("1", "2", "3"):
        sys.exit("That is not a valid class, unfortunately your score cannot be saved, please try again")
    else:
        filename = (class_number + "txt")
        with open(filename, 'a') as f:
            f.write("\n" + str(name) + " scored " + str(score) + " on difficulty level " + str(level_of_difficulty))
        with open(filename, 'a') as f:
            f = open(filename, "r")
            lines = [line for line in f if line.strip()]
            f.close()
            lines.sort()
        if get_bool_input("Do you wish to view previous results for your class"):
            for line in lines:
                print(line)
        else:
            sys.exit("Thanks for taking part in the quiz, your teacher should discuss your score with you later")
if status == "2":
    class_number = input("Which classes scores would you like to see? Press 1 for class 1, 2 for class 2 or 3 for class 3")
    if class_number not in (1, 2, 3):
        sys.exit("That is not a valid class")
    filename = (class_number + "txt")
    with open(filename, 'a') as f:
        f = open(filename, "r")
        lines = [line for line in f if line.strip()]
        f.close()
        lines.sort()
    for line in lines:
        print(line)
Answer: First thoughts:
The user is prompted to enter the class number after taking the test, and if it's not a valid number the program exits. This means they lose the results of the test. You could ask that question up front.
There's a large gap between the check to see if status is '1' and to see if status is '2'. I'd put the code for the student and teacher in functions, so that the result looks more readable:
def student_questions():
    pass

def teacher_questions():
    pass

if status == "1":
    student_questions()
elif status == "2":
    teacher_questions()
There's a lot of prompting the user for input and then checking to see if it's valid - that could easily be a function.
def ask_question(question, valid_answers):
    while True:
        answer = input(question)
        if answer in valid_answers:
            return answer
        print("Not a valid input, please try again")
The only other comment I'd make straight away is that while choosing the operation at random works, it does mean that one student might get all divisions, others get a broad selection, and so on. Perhaps there needs to be a check to make sure that each student gets at least one of every type? Or a non-random selection? | {
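A hedged sketch of that last suggestion (my own illustration, not code from the question or answer): pre-seed the list of operations with one of each, pad with random choices, and shuffle, so every student sees every operation at least once.

```python
import random

def pick_operations(ops, num_questions):
    """Guarantee each operation appears at least once; fill the rest randomly."""
    chosen = list(ops) + [random.choice(ops) for _ in range(num_questions - len(ops))]
    random.shuffle(chosen)
    return chosen

selection = pick_operations(['+', '-', '*'], 10)
print(selection)  # 10 operations, each of +, -, * present at least once
```
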
"domain": "codereview.stackexchange",
"id": 14889,
"tags": "python, random, quiz"
} |
Understanding Cache Mapping and Access (Computer Architecture) | Question: Consider a 512-KByte cache with 64-word cachelines (a cacheline is also known as a cache block, each word is 4-Bytes). This cache uses write-back scheme, and the address is 32 bits wide.
Answer the next questions for Direct Mapped cache, Fully Associative and Way Set Associative
What is the size in bits of the cacheline offset, cachline index and tag?
What I know:
Direct mapped caching allows any given main memory block to be mapped into exactly one unique cache location.
Set-associative mapped cache allows any given main memory block to be mapped into two or more cache locations.
Fully-associative mapped caching allows any given main memory block to be mapped into any cache location.
I don't know how that would make a difference in computing byte offset, index and cache
$\text{Byte offset} = \log_2(\text{bytes in one cache block})$
$\text{Index} = \log_2(\text{number of cache blocks in the cache})$
tag: The rest of bits in the address
Attempt at a solution
I would do the mapped cache part by using the previous expressions. I don't want the answer, but I would like orientation.
Thanks!!
Answer: Any physical address consists of two parts: one part is the block offset and the other part is the block number.
Physical Address {Block Number, Block Offset}
Block Number further has 2 parts: the index part and the tag.
The memory is assumed to be logically divided into many blocks. A block may contain more than one word (the number of words is indicated by the block size). To reach a specific word, the block offset is used.
Block offset is the number of bits required to address each word within a block.
For instance, in a byte-addressable system (1 word = 1 byte), if the block size = 1 KB then Block Offset = 10 bits ($\text{Byte offset} = \log_2(\text{bytes in one cache block})$).
The block offset remains the same in each type of mapping, as the number of words in each block does not change.
The implementation of block number is different in different mappings.
In case of Direct mapping the block number has two parts:
block no { tag, line number}
Since each block is mapped to one line in the cache, the "line number" part of block number contains number of bits required to identify each line in the cache.
In this case since cache size = 512 KB and block size = (64 * 4)B = 256 B
The Number of lines in the cache = 512 KB / 256 B = 2 K = 2 ^ 11
Therefore, the number of bits in line number part will be 11.
The remaining bits are tag bits.
In Fully Associative Mapping, the tag is the same as the block number.
In Fully Associative Mapping any memory block can be mapped to any of the cache lines. So to check which line of the cache a particular block is mapped to every line number is "tagged".
And in Set Associative Mapping the block number is divided into two parts:
block no { tag, set number}
Here, the cache is divided into many sets, so the "set number" part consists of the number of bits required to identify each set uniquely.
Depending on the number of lines in each set ( a K-way set associative mapping will contain K lines in each set), the number of sets can be found out.
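To make the arithmetic concrete, here is a small Python sketch (my addition, not part of the original answer) computing the offset/index/tag widths for this exact cache under each mapping; the 4-way associativity is an arbitrary example value, not given in the question.

```python
import math

ADDRESS_BITS = 32
CACHE_BYTES = 512 * 1024        # 512 KB
BLOCK_BYTES = 64 * 4            # 64 words * 4 bytes/word = 256 B

def field_sizes(ways):
    """Return (offset, index, tag) bit widths for the given associativity.

    ways=1 is direct mapped; ways=None means fully associative."""
    lines = CACHE_BYTES // BLOCK_BYTES          # 2048 cache lines
    offset = int(math.log2(BLOCK_BYTES))        # 8 bits of block offset
    sets = 1 if ways is None else lines // ways
    index = int(math.log2(sets))
    tag = ADDRESS_BITS - offset - index
    return offset, index, tag

print(field_sizes(1))     # direct mapped: (8, 11, 13)
print(field_sizes(4))     # 4-way set associative: (8, 9, 15)
print(field_sizes(None))  # fully associative: (8, 0, 24)
```
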
Hope this will help you. | {
"domain": "cs.stackexchange",
"id": 7684,
"tags": "computer-architecture, cpu-cache"
} |
Can a neural network learn to avoid wrong decisions using backpropagation? | Question: I studied the articles on Neural Networks and Deep Learning from Michael Nielsen and developed a simple neural network based on his examples. I understand how backpropagation works and I already taught my neural network to not only play TicTacToe but also improve his own play by learning from his own successes using backpropagation.
Going forward with my experiments, I am facing the problem, that I won't always be able to show the network good moves to use for learning (maybe because I simply don't know what is correct in a certain situation), but I might be required to show it bad moves to avoid (because some of the bad moves are obvious). Teaching the network what to do using backpropagation is easy, but I haven't found a way to teach it what to avoid using similar techniques.
Is it possible to teach simple neural networks using negative examples like this or do I need other techniques? My gut feeling says, that it might be possible to "invert" gradient descent into gradient ascent to solve this problem. Or is it more complicated than this?
Answer: What you are describing is conceptually close to adversarial training. You should read more on adversarial examples and generative adversarial networks for more information.
The idea is that there is a discriminator network, whose job is to correctly discriminate between positive and negative examples. We also have a generative network that learns to produce "adversarial examples" that "confuse" the discriminator network. By training these two networks side by side, both networks get better at their task. But it's usually the generator network that people are more interested in.
Intuitively, the naive implementation of the method you've described (gradient ascent on incorrect examples from a network in a clean/randomly-initialized state) shouldn't work. This is because negative examples don't form a "natural class" (all triangles have 3 edges, all things that are not triangles however....) | {
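As a minimal illustration of the "inverted gradient" intuition (my own NumPy sketch, not from the answer; the 9-output softmax "move chooser", the seed, and the learning rate are made-up toy values): doing gradient descent on $\log p(\text{bad move})$ pushes a known-bad move's probability down, which is exactly gradient ascent on the usual cross-entropy loss for that move.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(9, 9))   # toy state -> move-score weights

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def step_negative(W, state, bad_move, lr=0.5):
    """One update that lowers the probability of a known-bad move.

    Minimizing log p[bad_move] by gradient descent is the same as
    gradient ascent on the cross-entropy loss for that move."""
    p = softmax(W.T @ state)
    one_hot = np.eye(9)[bad_move]
    grad = np.outer(state, one_hot - p)   # d log p[bad_move] / dW
    return W - lr * grad                  # descend, so p[bad_move] shrinks

state = rng.normal(size=9)
before = softmax(W.T @ state)[3]
for _ in range(20):
    W = step_negative(W, state, 3)
after = softmax(W.T @ state)[3]
print(before, after)   # the bad move's probability drops
```

Note this only discourages one move in one state; it says nothing about which move is good, which is why the answer points toward adversarial setups (or, more generally, reinforcement learning) for the full problem.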
"domain": "ai.stackexchange",
"id": 298,
"tags": "neural-networks, backpropagation, gradient-descent"
} |
Determining if three numbers are consecutive | Question: This code will determine if three numbers are consecutive for any order they're supplied to the method (any permutation of [n, n+1, n+2] should be accepted). Is there any way to make this code easier for me to write, or make the code more efficient? Any feedback would be most welcome!
I tried testing with 3! combinations of 2, 3, 4, and it seemed to work. I also tried 1, 2, 9, and 1, 0, 1. All cases seem to work.
public static boolean consecutive(int a, int b, int c) {
if ( a == b || b == c || a == c) {
return false;
} else if (((a == b + 1 || a == b - 1) || (a == c + 1 || a == c - 1)) && ((b == c + 1 || b == c - 1))
|| ((b == c + 1 || b == c - 1) || (a == b + 1 || a == b - 1)) && (a == c + 1 || a == c - 1)) {
return true;
}
return false;
}
Answer: That's a lot of cases to enumerate. You're basically doing all of the comparisons that a sorting algorithm would do, but you've "unrolled" the loop.
If you don't want to construct an array and sort it, then you could try this: check that the minimum and maximum differ by 2, and that all three numbers are distinct. Math.min() and Math.max() are just conditionals packaged in a more readable form.
public static boolean consecutive(int a, int b, int c) {
int min = Math.min(a, Math.min(b, c));
int max = Math.max(a, Math.max(b, c));
return max - min == 2 && a != b && a != c && b != c;
} | {
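As a quick sanity check of this logic (my own addition, written in Python rather than Java), the min/max formulation agrees with a sorting-based oracle on a small exhaustive range:

```python
def consecutive(a, b, c):
    # Transcription of the answer's Java into Python
    lo, hi = min(a, b, c), max(a, b, c)
    return hi - lo == 2 and a != b and a != c and b != c

def oracle(a, b, c):
    # Ground truth: the sorted triple must be (n, n+1, n+2)
    s = sorted((a, b, c))
    return s[1] == s[0] + 1 and s[2] == s[1] + 1

vals = range(-3, 4)
assert all(consecutive(a, b, c) == oracle(a, b, c)
           for a in vals for b in vals for c in vals)
print("all cases agree")
```
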
"domain": "codereview.stackexchange",
"id": 29306,
"tags": "java"
} |
Why doesn't the work function's sensitivity to the surface break conservation of energy? | Question: I understand that the work function is sensitive to the surface. But I don't understand how that doesn't violate energy conservation, given the following scenario:
Suppose there are two electrons at the Fermi level. A photon comes in with energy $\hbar\omega$ and hits one of these electrons. The photon had momentum such that the electron goes off in a certain direction and through surface A, thereby it has kinetic energy $\hbar\omega - \phi_A$ and has potential energy $E_{vac}$. Now another photon comes in, with equal energy, but momentum in a different direction such that the electron it hits is sent off in a different direction so it leaves the crystal through surface B. Now this electron has kinetic energy $\hbar\omega - \phi_B$ and has the same potential energy of $E_{vac}$. Does conservation of energy not require that the two work functions are therefore equal?
i.e. the two electrons start in the same state, are both excited by photons of equal energy, then both finish at the vacuum level but supposedly have different energies due to the particular surface that they were ejected through.
Answer: The resolution to this paradox is realizing that the electrons are originating from two different initial states, one photo-emitted from surface A, $\vert \psi_A\rangle$, and the other surface B, $\vert \psi_B\rangle$. Because the initial states for your two scenarios are different, they naturally have different binding energies (i.e. different starting potential energies) so you expect they will have different kinetic energies after photoemission as well. Thus, there is no violation of energy conservation. | {
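A sketch of the bookkeeping in symbols (my addition, identifying $\phi_A$ with the binding energy $E_{vac} - E_{\psi_A}$ of the state emitted through surface A):

```latex
\hbar\omega + E_{\psi_A} = E_{vac} + KE_A
\quad\Longrightarrow\quad
KE_A = \hbar\omega - (E_{vac} - E_{\psi_A}) = \hbar\omega - \phi_A
```

and likewise for surface B, so $KE_A \neq KE_B$ follows simply from $E_{\psi_A} \neq E_{\psi_B}$, with energy conserved in each photoemission event separately.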
"domain": "physics.stackexchange",
"id": 74895,
"tags": "solid-state-physics, energy-conservation, crystals, photoelectric-effect, electronic-band-theory"
} |
What is the geometrical interpretation of Ricci tensor? | Question: In differential geometry and general relativity space is said to be flat if the Riemann tensor $R=0$. If the Ricci tensor on manifold $M$ is zero, it doesn't mean that the manifold itself is flat. So what's the geometrical meaning of Ricci tensor since it's been defined with the Riemann tensor as
$$\mathrm{Ric}_{ij}=\sum_a R^a_{iaj}?$$
Answer: The local geometric structure of a pseudo-Riemannian manifiold $M$ is completely described by the Riemann tensor $R_{abcd}$. The local structure of a manifold is affected by two possible sources
Matter sources in $M$: The matter distribution on a manifold is described by the stress tensor $T_{ab}$. By Einstein's equations, this can be related to the Ricci tensor (the trace of the Riemann tensor, $R_{ab} = R^c{}_{acb}$):
$$
R_{ab} = 8 \pi G \left( T_{ab} + \frac{g_{ab} T}{2-d} \right)
$$
Gravitational waves on $M$. This is described by the Weyl tensor $C_{abcd}$ which is the trace-free part of the Riemann tensor.
Thus, the local structure of $M$ is completely described by two tensors
$R_{ab}$: This is related to the matter distribution. If one includes a cosmological constant, this tensor comprises the information of both matter and curvature due to the cosmological constant.
$C_{abcd}$: This describes gravitational waves in $M$. A study of Weyl tensor is required when describing quantum gravity theories. | {
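For reference (my addition; this is the standard Ricci decomposition in $d$ dimensions, with brackets denoting antisymmetrization), the split of the Riemann tensor into these two pieces is explicit:

```latex
R_{abcd} = C_{abcd}
  + \frac{2}{d-2}\left( g_{a[c} R_{d]b} - g_{b[c} R_{d]a} \right)
  - \frac{2}{(d-1)(d-2)}\, R\, g_{a[c} g_{d]b}
```

so specifying $R_{ab}$ (hence, via Einstein's equations, the matter content) and $C_{abcd}$ (the gravitational-wave part) reconstructs the full local curvature.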
"domain": "physics.stackexchange",
"id": 12038,
"tags": "general-relativity, differential-geometry, curvature"
} |
unable to use image_view with electric | Question:
Hi all,
I seem to have the same problem as described here
http://comments.gmane.org/gmane.science.robotics.ros.user/14394
I run uvc_cam and then try to view the image but end up with
rosrun image_view image_view image:=/camera/image_raw
init done
opengl support available
(<unknown>:2222): GLib-GObject-WARNING **: invalid uninstantiatable type `(null)' in cast to `GtkWidget'
(<unknown>:2222): GLib-GObject-WARNING **: instance of invalid non-instantiatable type `(null)'
(<unknown>:2222): GLib-GObject-CRITICAL **: g_signal_connect_data: assertion `G_TYPE_CHECK_INSTANCE (instance)' failed
It works fine in diamondback. Any ideas?
Cheers,
Andreas
Originally posted by andreas on ROS Answers with karma: 168 on 2011-11-04
Post score: 1
Original comments
Comment by joq on 2011-11-04:
Sounds like a bug to me. Maybe in opencv2, perhaps in cv_bridge or image_view.
Answer:
Hi Andreas,
This does look like a bug. Please open a ticket to track it.
In your ticket include:
What OS and version are you using?
What desktop environment (Gnome/KDE/other?). In particular do you have Qt installed?
Did you install image_pipeline with the Debian packages or from source?
Likewise for OpenCV - did it come from the debs or did you build it yourself?
It looks like your OpenCV is using its new Qt backend for HighGUI, while image_view is wrongly assuming the GTK backend. This is a bug in image_view, but I don't understand why it's showing up - in our debs, OpenCV links against GTK but not Qt.
Originally posted by Patrick Mihelich with karma: 4336 on 2011-11-04
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by andreas on 2011-11-08:
Hi Patrick, thanks for the hint. I built OpenCV myself because I needed the GPU version of the libraries. I now built them without Qt and it works. So it's no bug. Cheers | {
"domain": "robotics.stackexchange",
"id": 7183,
"tags": "image-view"
} |
What insect is in this photo? | Question: Found in Russia, in the kitchen. It is about 1–2 cm in length.
Answer: This is an immature "True Bug", order Hemiptera. You can tell this from the general shape of the wings, and from the big "beak", a feature of the Hemiptera.
Within Hemiptera, it's possible that with those enlarged front legs, it could be an Assassin Bug, family Reduviidae. | {
"domain": "biology.stackexchange",
"id": 1417,
"tags": "species-identification, zoology, entomology, hemiptera"
} |
Are there nice generalizations of SPQR trees to k-connected components for k>3? | Question: I'm curious how one should best understand the connections between the k-connected components when $G$ has minimum cuts of size $k>3$, or perhaps approximate minimum cuts produced by Karger's algorithm. SPQR trees answer this question for 2-connected graphs. And obviously 1-connected graphs are simply a tree of their 2-connected components.
Answer: For edge connectivity rather than vertex connectivity, there's always the Gomory–Hu tree. | {
"domain": "cstheory.stackexchange",
"id": 1142,
"tags": "graph-theory, co.combinatorics, graph-algorithms"
} |
Basic robot that responds to a specific sound? | Question: I'm envisioning a "robot" with a capability as simple as flinching a given part of itself upon hearing a particular sound. As someone with zero experience and no clue of anything regarding robotics (as of right now, am excited to learn), I suppose I am asking for somewhere to start. What are ways to have a robot recognize a sound and react to it? I also suppose it would be a very basic robotic-arm reacting to the sound.
I apologize if this is a very general question.
Answer: Trying to build and program a robot with zero experience isn't the exception; it's the normal case. Most robotics projects are started with the aim of learning something new and getting detailed knowledge about Artificial Intelligence. For the maximum learning experience, the project goal should be a little more complicated than what the user is already able to realize. This creates a demanding quest, which makes it likely that the user needs additional knowledge.
The primary question isn't how to solve an existing robot task, for example reacting to an audio signal, but how to formalize the challenge. The problem can be made static by providing environmental constraints on the robot hardware, for example a certain Arduino kit, and by defining the software task, for example that the robot arm should move 4 centimeters backwards (not more) in reaction to a handclap sound.
Carefully defined requirements for a robotic system will make the project much easier. It reduces the space of potential useful literature, reduces the effort to search for existing examples, and allows to identify missing skills. For example, if the task is that the robotarm should react to a handclap sound, but not to clicker sound produced by a dog training device, it's possible to determine if the project was a success or not. | {
"domain": "robotics.stackexchange",
"id": 2016,
"tags": "robotic-arm"
} |
Saving information from reaching definitions | Question: I am learning about compilers at the moment, and my textbook briefly mentions that UD/DU (Use-Def, Def-Use) chains are a way of saving the information computed by reaching-definitions analysis. It just says that, when a variable is used, it carries a list of the definitions that reach it.
The problem I am having is that I don't clearly understand how this information is used. How can it help the compiler optimize the code? Are there any other ways of saving (using) this information?
Answer: Here are just a few examples:
dead-code elimination,
instruction reordering, and
(implementation of) scoping/shadowing. | {
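As one concrete illustration of the first item (a toy Python sketch I added; real compilers work on a control-flow graph, while this handles only straight-line code): an assignment whose DU chain is empty, meaning its target is never used downstream and is not live at exit, can be deleted; the process iterates because each deletion may empty further chains.

```python
# Hypothetical three-address "program": one (target, sources) pair per statement.
prog = [
    ("a", []),       # a = const
    ("b", ["a"]),    # b = f(a)
    ("c", ["a"]),    # c = g(a)   <- c is never used: dead
    ("r", ["b"]),    # r = h(b)
]
live_out = {"r"}     # values observable after the program ends

def dead_code_eliminate(stmts, live_out):
    """Drop assignments with empty def-use chains (conservative, straight-line)."""
    changed = True
    while changed:
        used = set(live_out)
        for _, sources in stmts:
            used.update(sources)           # every variable with a use
        kept = [(t, s) for (t, s) in stmts if t in used]
        changed = len(kept) != len(stmts)  # a removal may expose more dead defs
        stmts = kept
    return stmts

optimized = dead_code_eliminate(prog, live_out)
print(optimized)   # c = g(a) is gone; a, b, r remain
```
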
"domain": "cs.stackexchange",
"id": 10986,
"tags": "compilers, data-flow-analysis"
} |
How to implement autonomous navigation with Google Cartographer? | Question:
I am trying to implement autonomous navigation using 2 lasers and a depth camera. IMU and odometry data will also be available. Basically, I'd like to have the simulated robot spawned in some gazebo world and autonomously drive around it covering all of the floor space.
So far, I've come to the conclusion that using cartographer is best for me because it allows 2 lasers to be directly used without additional tools. However, before trying this I'd played around with gmapping and AMCL, where I'd do the following as a starting point for autonomous exploration:
Generate a map by manually moving around
Save the map and serve it
Localise the robot on the map
Provide waypoints to move_base
My question is, what is the order of operations when using cartographer? In my launch file would it just be a cartographer and move_base node and then something else to for frontier exploration? I suppose I'm a bit confused whether I need to replicate the workflow used above with gmapping and AMCL or whether the single cartographer node handles both of these roles in one step.
I'm really getting stuck with some of the tutorials I see online for this as I'm unsure how to adapt them to my use case. I've looked at here, which seems handy but it doesn't work for me as it doesn't do anything when I graphically set a 2DNavGoal in RViz.
Originally posted by Py on ROS Answers with karma: 501 on 2020-05-27
Post score: 0
Answer:
Hi, it seems cartographer can work as a SLAM node as well as a pure localization node; see the Cartographer "Localization only" documentation.
Depending on what your waypoint generating node or usecase works better on, you can either SLAM-navigate, or do mapping and afterwards navigate/plan.
I don't know this tutorial, but when working in localization mode one would assume cartographer to demand an initial pose (it can be set in RViz or might be served by other means). Some localization nodes don't even start to work before they get an initial pose. In SLAM mode you just start at 0/0 in most cases.
As you run Gabzebo I assume you are running it all on the same computer, otherwise the firewall can hinder proper communication.
Anything else is up and running clean? All depending Jackal packages installed?
Originally posted by Dragonslayer with karma: 574 on 2020-05-27
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Py on 2020-05-27:
Thanks for the message! Everything running is on my computer as you suggested. The only bits I'm confident in are the simulated robot in a Gazebo world and a keyboard teleoperation node with successful laser data visualisation in RViz. I can also save .yaml and .pgm map files generated using cartographer and serve them but am unsure whether this is necessary and what the next steps should be without using the AMCL approach before move_base to explore the mapped area.
Comment by Dragonslayer on 2020-05-28:
Well, cartographer in pure localization mode as linked in my answer makes cartographer do what amcl does (computationally different but with "the same" outcome). If the map is served to cartographer in this mode (and scans) it should localize and correct odometry, giving move_base a clue where the robot is. In tf, map -> odom would be published by cartographer instead of amcl, and odom -> base_link by odometry as before. If you just dropped amcl and launched cartographer, it's likely that you have some remapping of topics to do, linking node outputs to node inputs and defining frames. By the way, don't forget not to launch the teleop node when move_base comes into play, as this would give you a publisher conflict on the cmd_vel topic; the likely result is that nothing happens.
"domain": "robotics.stackexchange",
"id": 35018,
"tags": "navigation, gazebo7, ros-kinetic, frontier-exploration, gmapping"
} |
What is the rationale behind IgM being the default antibody? | Question: I know that the $C_\mu$ gene appears first in line for class switching and hence IgM is the default antibody. But what is the rationale for it being so? There must be some advantage (evolutionary?) of having IgM first in line over IgG and so on.
IgM is different from IgG in being a cluster of five, having better agglutination, more avidity, and more cross-reactivity (correct me if any of these is wrong). How do these properties help in containing the infection only the very first time? Or, why is IgG not good for the initial response?
Answer: When the Naive B cell gets activated and begins to secrete antibodies the affinity of the antibodies is low, as it hasn't yet gone through several cycles of affinity maturation. The increased avidity of the IgM balances the decreased affinity of these initial antibodies. If IgG were released at this stage, the body would have a much lower immune reaction to the antigen.
However after affinity maturation, the IgG has sufficient avidity and affinity to mount a stronger immune reaction than the initial IgM antibodies. If the body still produced IgM after affinity maturation, there would likely be a significant amount of crosslinking. This could potentially cause aggregates to form in the blood/lymph which could cause occlusions.
Edits:
If the affinity of the antigen-binding sites in an IgG and an IgM molecule is the same, the IgM molecule (with 10 binding sites) will have a much greater avidity for a multivalent antigen than an IgG molecule (which has two binding sites). This difference in avidity, often 10^4-fold or more, is important because antibodies produced early in an immune response usually have much lower affinities than those produced later. Because of its high total avidity, IgM—the major Ig class produced early in immune responses—can function effectively even when each of its binding sites has only a low affinity.
Molecular Biology of the Cell
The claim on aggregation after affinity maturation is speculation based on the crosslinking that occurs with IgA antibodies (e.g. Phlegm) when there are only two high-affinity Ig molecules joined together rather than five. | {
"domain": "biology.stackexchange",
"id": 7192,
"tags": "immunology, antibody, immunoglobin"
} |
Why does this NFA have an empty symbol? | Question: I'm trying to understand how a DFA differs from an NFA.
I have this example in my coursebook
Are the empty symbols at the start necessary?
or can we do away with them to have something like this
Answer: Yes, in this example, your solution is fine.
Perhaps the example just wanted to show the use of epsilon transitions. Also, your approach would be more complicated in the case where one of the original initial states would have a loop. We cannot transfer that loop to the common initial state. | {
"domain": "cs.stackexchange",
"id": 15554,
"tags": "automata, finite-automata"
} |
Reducing Parameterized Problems (whose solution size can be "large") to W[i]-complete problems (for fixed i) | Question: Note: Originally, this question was asked via a comment in this question, but was asked to post a separate question. :)
I'm looking for any known reductions of the following:
Given a parameterized problem X (whose parameter is not the size of a solution, and where a solution can still be quite large), a proof showing that X is W[t]-complete for some fixed t >= 1 (e.g., W[1]-complete or W[2]-complete, but not in FPT).
I'm curious how one can show that X is in W[1] or W[2] when the size of a solution can still be "n" yet we can only choose "k" input gates, as it seems impossible (how could you encode a large solution using only a fixed number of bits?). Even though X is known to be W[1]-hard or W[2]-hard, it may actually require circuits of weft larger than 2, for instance, if completeness is not yet known.
(Or perhaps results showing that X cannot belong to W[t] for any fixed t, under some conditions where the solution size is clearly not bounded by the parameter, may be implied in some cases, but I couldn't find any such results either.)
Here are some problems that do not qualify:
The independent set problem parameterized by tree width would be in FPT (although the solution size is not necessarily bounded by the parameter), so it won't qualify.
The clique problem parameterized by maximum degree of a vertex would also be in FPT (and in this case, the solution size would be bounded by the parameter anyway), so it won't qualify.
The vertex coloring problem (k-coloring) also won't qualify because it's para-NP-hard (i.e., it's not W[t] complete for any fixed constant, t), although its solution size is not bounded by the parameter k.
Update with details (Nov 13):
I now have a concrete problem that (I think) is W[2]-hard and in W[P],
but:
(1) I can't prove that this is in W[2] (so as to prove that it's W[2]-complete) and
(2) I also can't prove that this is W[3]-hard.
We are given n items and m bags (and some constraints to be specified), and we want to assign every item to some bag (subject to constraints below)
but only using up to k bags (here, 'k' is the parameter).
Constraints are specified per item and bag pair:
For each item i and bag j, we are given two numbers L(i, j) and U(i, j) (lower-bound and upper-bound) in [1, n]
such that if we assign item i to bag j,
then the total number of items assigned to bag j must be between L(i, j) and U(i, j), inclusive.
This must be satisfied for all items i in a solution.
(L(i, j) > U(i, j) naturally implies that item i can't possibly be assigned to bag j.)
The input consists of O(nm) numbers (two numbers per pair), and a natural solution would be of size O(n): For each item, we describe an index of the bag to which it is assigned.
On the other hand, a shorter certificate of size k also makes sense:
We can describe which k bags we use in a solution and how many items we assign to each of the said k bags.
To show that this problem is in W[P] (using the shorter certificate):
We need 2k numbers as a certificate: k numbers for the bags used (their indices, log m bits each)
and another k numbers for how many items are assigned to each bag (log n bits each). We can non-deterministically guess these 2k numbers,
and then solve a max-flow problem (or a bipartite matching problem) in poly-time.
To show that this problem is W[2]-hard:
We can reduce from the dominating set problem in a straightforward manner.
For each vertex, we create one item and one bag (so n = m in this reduction).
For each vertex j and its neighbors i,
we set L(i, j) = 1 and U(i, j) = n (this means we can assign item i to bag j).
For all other (i, j) pairs (i.e., no edges), we set L(i, j) > U(i, j) (so we can't assign i to j).
Clearly, we have a dom-set of size k if and only if we can assign n items to k bags.
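To make the reduction concrete, here is a brute-force Python sketch (my addition, not from the question; I assume assignability over the closed neighborhood $N[j]$, i.e. a vertex also dominates itself, which the reduction needs):

```python
import itertools

def reduce_domset(adj, n):
    """Build L/U tables from a graph: item i may go in bag j iff i is in N[j]."""
    L = [[n + 1] * n for _ in range(n)]   # L > U encodes "not assignable"
    U = [[0] * n for _ in range(n)]
    for j in range(n):
        for i in adj[j] | {j}:            # closed neighborhood (assumption)
            L[i][j], U[i][j] = 1, n
    return L, U

def feasible(L, U, n, k):
    """Brute force: try every assignment of n items to bags, using <= k bags."""
    for assign in itertools.product(range(n), repeat=n):
        counts = [assign.count(j) for j in range(n)]
        if len(set(assign)) <= k and all(
            L[i][assign[i]] <= counts[assign[i]] <= U[i][assign[i]]
            for i in range(n)
        ):
            return True
    return False

# Path 0-1-2: vertex 1 alone dominates the graph, so one bag suffices.
adj = {0: {1}, 1: {0, 2}, 2: {1}}
L, U = reduce_domset(adj, 3)
print(feasible(L, U, 3, 1))   # True: dominating set of size 1 exists
print(feasible(L, U, 3, 0))   # False
```
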
The natural description of a solution (of size O(n)) is too large for me to reduce this problem to WCSAT where I can only set O(f(k)) input gates to true. On the other hand, a shorter certificate (that I used above) makes it too difficult to verify (the best I got is W[P] membership above). I realize that perhaps there are other, smarter "short" certificates of size O(f(k)),
and that is why I asked the question (to seek other problems/reductions for reference).
I haven't been lucky enough to find useful references yet...
Answer: The answer to this question depends very much on the definition of what a solution is. Take for example the Vertex Cover problem where we ask whether a graph $G$ has a vertex set $S$ of size at most $k$ such that every edge has an endpoint in $S$. The natural definition of solution size is $k$, the size of the vertex cover.
If you consider the dual parameter $\ell := n - k$ for Vertex Cover, then the problem is W[1]-complete since it is exactly the Independent Set problem. Using a strict definition of what a solution is, this gives an example of a problem that is W[1]-complete for a parameter that is not the solution size.
Now, we may define solution more loosely as some kind of certificate that can be verified efficiently. In that case, any parameterized problem that is in W[1] can be considered to be "parameterized by the solution size": Take for example the characterization of W[1] due to Chen and Flum [1]. This characterization states that a problem is in W[1] if it can be solved via a nondeterministic RAM that makes all its nondeterministic guesses in the last $h(k)$ computation steps, for some function $h$. It is clear from this definition that a problem in W[1] has a certificate, and thus also a solution in the broad sense, of size $h(k)$.
So in short: It depends on what one views as a solution. If one takes a very strict view, then it is easy to come up with examples that are W[1]-complete for non-solution size parameters. If one takes a broad view of what a solution is, then a problem that is in W[1] for some parameter $k$ has, by definition, solutions (certificates) of size bounded in $k$.
[1] Yijia Chen, Jörg Flum, Martin Grohe:
Machine-based methods in parameterized complexity theory. Theor. Comput. Sci. 339(2-3): 167-199 (2005) | {
"domain": "cstheory.stackexchange",
"id": 5136,
"tags": "parameterized-complexity, fixed-parameter-tractable"
} |
arm_navigation generation always in collision for Robonaut? | Question:
I tried to run the Robonaut2 URDF through the arm_navigation generation tutorial, but it looks like the end-effector is always in collision, and thus I cannot generate plans using the planning_components_visualizer.
This is my guess of what's happening based on the end-effector being red upon start up:
and complete inactivity when telling it to "plan" from the menu.
I am using the nasa_r2_common\r2_description\robots\r2c_full_body.urdf.xacro URDF, and choosing the left leg as the 'manipulator', though this seems to work regardless of which of the four limbs I choose.
Is there something i need to check/verify in the advanced wizard to make sure my description is valid?
FYI, I was hoping to push through with the warehouse and motion planning stacks, once I get this working....
Thanks,
Steve
Originally posted by shart115 on ROS Answers with karma: 86 on 2012-09-04
Post score: 0
Answer:
I started looking at just the R2 upper body, because I had this working with arm_navigation some months ago, but I have been running into issues similar to those I had with the legs.
Just to minimize the chances of collisions, I reduced all the collision models in the entire robot to about 80%, and unchecked all pairs in the advanced mode of the arm nav wizard.
Tabling for now the fact that if I chose more than one kinematic chain, the warehouse viewer wouldn't start properly for me, I'm still getting collisions for all possible movements. The following picture shows what happens as soon as I move the marker (with the robot visualization at a lower alpha). You can see the smaller collision models, none of which are in contact with each other, show up immediately as red (i.e. in collision). This is true for both arms, even when the (only) kinematic chain I have built the models for is between the chest and the left index finger base!
Originally posted by shart115 with karma: 86 on 2012-09-13
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by shart115 on 2012-09-13:
Interestingly, when I remove the collision models for the hands and the first shoulder link, RViz shows the visual models as the collision models when I start up the warehouse viewer. That can't be right... | {
"domain": "robotics.stackexchange",
"id": 10890,
"tags": "arm-navigation"
} |
Problem with the reference frame of grasps in the household_objects_database | Question:
I'm using the manipulation stack in ROS electric, in a simulated environment in Gazebo.
I have a simulated robot arm and hand, a simulated table and a coke can, and a simulated kinect.
Object recognition is working well with the coke can, from the model in the household_objects_database (using the image from the simulated kinect).
But the problem is that when I want to use the object_manipulator, the grasps for that object are read from the database and represented in a weird and unexpected way.
The grasps (and pregrasps) for the coke can are in the grasp table of the DB with different positions but all of them have the identity quaternion (0,0,0,1) for orientation, so no rotation should be applied.
But when trying to pickup the object with object_manipulator, the grasps are read from the database and the markers representing the grasps are more or less (but not exactly) in the expected position, but they have a completely unexpected orientation, as it can be seen in the following two images.
I thought that the parent frame of the grasps would be the frame of the recognised object (/object_frame in this case), but the orientation of the represented grasps is different from that.
Can anyone tell me what's the reference frame of the grasps, or what can I be doing wrong?
Originally posted by toniOliver on ROS Answers with karma: 159 on 2012-03-01
Post score: 0
Answer:
The pose of the grasps in the database should be saved in the reference frame of the object - that is, relative to the origin of the 3D model (the mesh) of the object.
At run-time however, the manipulation pipeline does not know if the reference frame of the object is actually available in TF. It seems like it is in your system, but that is not generally the case. Therefore, when you query the objects database node and ask for grasps for the object, it will also look at where the object is in relation to a TF frame (this information is contained in the GraspableObject you are passing to the GetGrasps callback). It will then give you back the grasps transformed to the same reference frame that the object is in.
For example: let's say that object recognition gives you the object at location T1 w.r.t. the "base_link" frame. Let's say that in the database you have a grasp stored at location G1 relative to the object. When you get the grasps from the database, the objects_database_node will give you back a grasp expressed in the "base_link" frame, and its pose will be T1 * G1.
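To make that composition concrete, here is a small sketch in plain Python (the numbers are hypothetical, not from the original post): `T1` is the object pose in the "base_link" frame and `G1` is the grasp pose stored in the database relative to the object; their product as 4x4 homogeneous transforms is the grasp pose the objects_database_node would return.

```python
def mat_mul(A, B):
    """Multiply two 4x4 homogeneous transform matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# T1: object pose w.r.t. "base_link" (translated +1 m in x, no rotation)
T1 = [[1, 0, 0, 1.0],
      [0, 1, 0, 0.0],
      [0, 0, 1, 0.0],
      [0, 0, 0, 1.0]]

# G1: grasp pose stored in the database, relative to the object origin
G1 = [[1, 0, 0, 0.0],
      [0, 1, 0, 0.2],
      [0, 0, 1, 0.0],
      [0, 0, 0, 1.0]]

# Grasp pose expressed in "base_link", as returned at run-time: T1 * G1
grasp_in_base = mat_mul(T1, G1)
```

With identity rotations the translations simply add, which is why the grasps appear "more or less" at the object, offset by whatever frame the object was reported in.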
There are a few ways in which you can get the desired behavior. Since there seems to be some node in your system publishing an "object_frame" to TF, you could simply ask for grasps for a GraspableObject with a DatabaseModel with an identity pose relative to the "object_frame".
Originally posted by Matei Ciocarlie with karma: 586 on 2012-03-01
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by toniOliver on 2012-03-06:
Thanks for your answer, Matei. I've finally found what was happening. | {
"domain": "robotics.stackexchange",
"id": 8446,
"tags": "ros, household-objects-database, grasp, manipulation"
} |
Differences between ros_control and arbotix driver package | Question:
1. I wonder what the differences or similarities are between ros_control and the arbotix driver.
2. Does the arbotix driver belong to firmware for hardware? In other words, does it belong to hardware_interface::RobotHW?
3. Does ros_control cover the controller_manager and hardware resource interface layers?
Originally posted by shawnysh on ROS Answers with karma: 339 on 2017-04-27
Post score: 0
Answer:
ros_control is a generic robot controller package, whereas arbotix_driver is a package designed for a specific hardware device.
Originally posted by gstavrinos with karma: 641 on 2017-04-28
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by shawnysh on 2017-04-29:
Thanks for your quick reply.
I wonder why the book ROS By Example, Volume 2 never mentions ros_control, but uses the arbotix package instead to complete motion planning and execution along with MoveIt. It seems like arbotix handles the jobs of both the controller manager and hardware_interface.
Comment by gstavrinos on 2017-05-02:
I have not read the book, but there are many reasons why a book would not include a specific ROS package (publication date, specific robot packages, etc.).
Comment by shawnysh on 2017-05-03:
I check arbotix package, I notice that
The arbotix_python package also offers several controllers which add higher-level interfaces to common hardware
Does it mean arbotix package cover the functionalities of ros_control?
Link to arbotix_python
Comment by gstavrinos on 2017-05-03:
From a quick look at the source code, arbotix_python seems to be independent of ros_control, and does not seem to implement the hardware_interface interface. Thus, it should be used only with very specific hardware. | {
"domain": "robotics.stackexchange",
"id": 27737,
"tags": "control, gazebo, simulation, ros-control, arbotix"
} |
Relative angular momentum in two-particle problem | Question: Consider the two-particle problem, i.e. two particles with equations of motion as follows:
$\ddot{x_i} = -\frac{1}{m_i}\frac{\partial}{\partial x_i}V(|x_1 - x_2|)$
Since the angular momentum is conserved (in relative coordinates), we know that the two particles have to move in a plane and hence can write the relative position vector as follows: $x = r e_r$ and $\dot{x} = \dot{r} e_r + r \dot{e_r} = \dot{r}e_r + r \dot{\phi} e_{\phi}$
Now if I want to compute the actual value of the angular momentum, I seem to run into some trouble. If I do it the following way I get the right value:
$|L| = |\mu x \wedge \dot{x}| = \mu |(r e_r \wedge ( \dot{r}e_r + r \dot{\phi} e_{\phi}))| = \mu | r e_r \wedge r \dot{\phi} e_{\phi} | = \mu r^2 \dot{\phi} $
But if I do it the following way it doesn't work out:
$|L | = |\mu x \wedge \dot{x}| = \mu |x||\dot{x}|\sin(\theta) = \mu |x||\dot{x}| = \mu|r||\sqrt{\dot{r}^2 + r^2 \dot{\phi}^2}| $ where I used the formula for $x$ and $\dot{x}$
Why doesn't that work? I feel like I'm missing something completely trivial here, but can't find it.
Answer: As @Qmechanic points out, you really need to think about the angle between $x$ and $\dot{x}$. In particular, since $x = r e_r$, you can take the inner product with $\dot{x}$ and get a nonzero answer: $x \cdot \dot{x} = r\, \dot{r}$. This contribution needs to be removed when taking the exterior product $x \wedge \dot{x}$, which is what you did in the first version when you eliminated $\lvert r e_r \wedge \dot{r} e_r \rvert$. But setting $\sin\theta = 1$ does not accomplish this. | {
"domain": "physics.stackexchange",
"id": 8484,
"tags": "homework-and-exercises, newtonian-mechanics, orbital-motion"
} |
Min path cover problem in Cormen et.al. question about notation | Question: In the book on algorithms by Cormen et.al, the problem 26-2 describes how to obtain a min-path cover for a DAG via max-flow. I have a question about the notation. First, let me quote the problem here:
A path cover of a directed graph $G = (V, E)$ is a set $P$ of vertex-disjoint paths such that every vertex in $V$ is included in exactly one path in $P$. Paths may start and end anywhere, and they may be of any length, including $0$. A minimum path cover of $G$ is a path cover containing the fewest possible paths.
a. Give an efficient algorithm to find a minimum path cover of a directed acyclic graph $G = (V, E)$. (Hint: Assuming that $V = \{1, 2, \ldots, n\}$, construct the graph $G' = (V',E')$, where:
$$V' = \{x_0,x_1,\dots x_n\} \cup \{y_0, y_1, \dots y_n\} $$
$$E'=\{(x_0,x_i):i \in V\} \cup \{(y_i,y_0) : i \in V\} \cup \{(x_i,y_j):(i,j) \in E\}$$
and run a maximum-flow algorithm.)
What are the $x_i$ and $y_i$ here? Am I missing something obvious?
Answer: The graph $G'$ has $2n+2$ vertices. We give them the names $x_0,\ldots,x_n,y_0,\ldots,y_n$ to make it easy to refer to them. So $x_i,y_i$ are just names of vertices. They have no value, and do not refer to anything. In that regard, they are akin to indeterminates.
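To make the names concrete: treating $x_1,\dots,x_n$ and $y_1,\dots,y_n$ as the two sides of a bipartite graph (with $x_0, y_0$ acting as source and sink), the hinted unit-capacity flow is just bipartite maximum matching, and the minimum path cover has size $n$ minus the maximum matching. A short sketch of this (Python; `min_path_cover` is a hypothetical helper, not from the book):

```python
def min_path_cover(n, edges):
    """Minimum path cover of a DAG with vertices 1..n, via the hint's
    construction: each edge (i, j) of G becomes (x_i, y_j) in G'; the
    max flow from x_0 to y_0 is a bipartite matching, and the answer
    is n - max_matching (each matched edge merges two paths into one)."""
    adj = {i: [] for i in range(1, n + 1)}
    for i, j in edges:
        adj[i].append(j)
    match = {}  # y_j -> the x_i it is currently matched to

    def augment(i, seen):
        # Standard augmenting-path search for bipartite matching.
        for j in adj[i]:
            if j in seen:
                continue
            seen.add(j)
            if j not in match or augment(match[j], seen):
                match[j] = i
                return True
        return False

    matching = sum(augment(i, set()) for i in range(1, n + 1))
    return n - matching

# Path 1 -> 2 -> 3 plus the isolated vertex 4: two paths cover the DAG.
assert min_path_cover(4, [(1, 2), (2, 3)]) == 2
```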
What might be confusing you is the existence of a different graph $G$, which has $n$ vertices. The vertices $x_i,y_i$ are just vertices – they are not variables referring to vertices of $G$. The "value" of $x_i$ is just $x_i$. | {
"domain": "cs.stackexchange",
"id": 17170,
"tags": "graphs, network-flow"
} |
Angular momentum squared and Hamiltonian? | Question: I'm trying to show, for the Hamiltonian $H = \vec{P}^2/2m + V(\vec{X})$, that $[\vec{L}^2,H]=0$ if $V(\vec{X}) = V(|\vec{X}|)$, and I pretty much almost have it, there's just one thing I'm getting messed up on.
So what I did was:
$$ [\vec{L}^2,H] = [L_x^2 + L_y^2 + L_z^2, \frac{\vec{P}^2}{2m} + V(\vec{X})] = [L_x^2 + L_y^2 + L_z^2, \frac{\vec{P}^2}{2m} + V(|\vec{X}|) ] $$
$$ = [L_x^2,\frac{\vec{P}^2}{2m} + V(|\vec{X}|)] + [L_y^2,\frac{\vec{P}^2}{2m} + V(|\vec{X}|)] + [L_z^2,\frac{\vec{P}^2}{2m} + V(|\vec{X}|)] $$
Looking at the $L_x^2$ component:
$$ \rightarrow [L_x^2,\frac{\vec{P}^2}{2m} + V(|\vec{X}|)] = \frac{1}{2m}[L_x^2,P^2] + [L_x^2,V(|\vec{X}|)] $$
$$ = \frac{1}{2m}[L_x^2,P_x^2+P_y^2+P_z^2] + [L_x^2,V(|\vec{X}|)] $$
$$ = \frac{1}{2m} \bigg( [L_x^2,P_x^2]+[L_x^2,P_y^2]+[L_x^2,P_z^2] \bigg) + [L_x^2,V(|\vec{X}|)] $$
Now for the momentum parts, that's a lot of tedious work that I won't type up, but at the end of the day, I get that
$$ [L_x^2,P_x^2]=0, \qquad [L_x^2,P_y^2+P_z^2]=0, \quad\text{hence}\quad [L_x^2,\vec{P}^2]=0 $$
and similarly $[L_y^2,\vec{P}^2]=0$ and $[L_z^2,\vec{P}^2]=0$. My problem is this: for this to work out, I need
$$ [L_x^2,V(|\vec{X}|)] = 0 $$
and similarly
$$ [L_y^2,V(|\vec{X}|)]=[L_z^2,V(|\vec{X}|)]=0 $$
but I don't really understand why that would be the case? Any insight would be appreciated.
Answer: Remember that $L_x$, $L_y$, and $L_z$ are generators of rotation about the $x$, $y$, and $z$ axes, respectively. But $V(\vec{X})=V(|\vec{X}|)$ says that your potential is invariant under rotations. So on those grounds, physically you would expect any of these angular-momentum operators to commute with such a potential operator $V$.
Mathematically, I would say that the easiest way to see this is to use the angular momentum operator in spherical coordinates. More specifically, I would say that you should probably back up and just prove $[L^2, V(|\vec{X}|)] = 0$ by expanding $L^2$ as $\frac{1}{2}(L_{+}L_{-} + L_{-}L_{+}) + L_{z}^2$. Note that those operators have no $\partial / \partial_r$ term, so they commute with $V(r)$.
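Concretely, in spherical coordinates one can write out the commutator with $L_z$ (the $L_{\pm}$ cases work the same way, since they contain only $\theta$- and $\phi$-derivatives):

$$ L_z = -i\hbar\frac{\partial}{\partial\phi}, \qquad [L_z, V(r)]\,\psi = -i\hbar\left(\frac{\partial}{\partial\phi}\bigl(V(r)\,\psi\bigr) - V(r)\frac{\partial\psi}{\partial\phi}\right) = -i\hbar\,\frac{\partial V(r)}{\partial\phi}\,\psi = 0, $$

because $V$ has no $\phi$-dependence. Combining these pieces gives $[L^2, V(r)] = 0$.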
I'm guessing you did all your work so far using a Cartesian basis. That's valid, but it often makes sense to look around for other ways of doing a problem. And changing coordinates is one of the first things you should think about. | {
"domain": "physics.stackexchange",
"id": 47195,
"tags": "quantum-mechanics, homework-and-exercises, operators, schroedinger-equation, hamiltonian"
} |
Why replay memory store old states and action rather than Q-value (Deep Q-learning) | Question: Here is the algorithm used in Google DeepMind's Atari paper.
The replay memory D stores transitions (old_state, action performed, reward, new_state).
The old_state and the performed action a are needed to compute the Q-value of this action in this state.
But since we already compute the Q-value of action a in state old_state in order to choose a as the best action, why don't we simply store the Q-value directly?
Answer:
But since we already compute the Q-value of action a in state old_state in order to choose a as the best action, why don't we simply store the Q-value directly?
That is because calculating a relevant TD Target e.g. $R + \gamma \text{max}_{a'}Q(S', a')$ requires the current target policy estimates for action value $Q$. The action value at the time of making the step can be out of date for two reasons:
The estimates have changed due to other updates, since the experience was stored
The target policy has changed due to other updates, since the experience was stored
Storing the $Q$ value at the time the experience was made should still work to some degree, provided you don't keep the experience for so long that the values are radically different. However, it will usually be a lot less efficient, as updates will be biased towards older less accurate values.
There is a similar, but less damaging, effect from the experience replay table even with $Q$ recalculations. That is because the distribution of experience may not match what the current policy generates - something that most function approximators (e.g. neural networks used in DQN) are sensitive to. However, there are other factors in play here too, and it can be beneficial to train on a deliberately different distribution of experience - for instance prioritising experiences with larger update steps can speed learning and keeping older experiences available can reduce instances of catastrophic forgetting.
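As a sketch of the difference (using a hypothetical toy Q-table rather than the DQN network), note how the TD target for a stored transition is recomputed from the *current* estimates at sample time, which would be impossible if only the stale Q-value had been stored:

```python
import random
from collections import deque

class ReplayBuffer:
    """Stores raw (state, action, reward, next_state) tuples - not
    Q-values - so TD targets can be recomputed with current estimates."""
    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)

    def push(self, s, a, r, s2):
        self.buffer.append((s, a, r, s2))

    def sample(self, k):
        return random.sample(self.buffer, k)

def td_target(q_table, transition, gamma=0.99):
    """R + gamma * max_a' Q(S', a'), using the *current* Q estimates."""
    s, a, r, s2 = transition
    return r + gamma * max(q_table[s2].values())

# Toy Q-table; the stored transition's target changes as the table
# is updated after the experience was collected.
q = {"s1": {"L": 0.0, "R": 1.0}, "s2": {"L": 0.0, "R": 0.0}}
buf = ReplayBuffer()
buf.push("s1", "R", 1.0, "s2")
t0 = td_target(q, buf.buffer[0])   # 1.0 + 0.99 * 0.0 = 1.0
q["s2"]["R"] = 2.0                 # later updates change the estimates
t1 = td_target(q, buf.buffer[0])   # 1.0 + 0.99 * 2.0 = 2.98
```

Had the original Q-value been stored instead, every later update on this transition would still be pulled toward the stale target `t0`.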
Note that if you were using an off-policy Monte Carlo method, you could store the Monte Carlo return in the experience replay table, since it does not bootstrap by using current value estimates. However, the early parts of older less relevant trajectories would stop contributing to updates in that case once the target policy had changed significantly during learning. | {
"domain": "datascience.stackexchange",
"id": 6676,
"tags": "reinforcement-learning"
} |
PATM Generator GUI | Question: I've been practising trying to write my code more neatly, so I decided to build a practice GUI. The code works; however, how could I tidy it up in terms of separating out the different parts of the GUI, such as separate defs for the labels, Combobox, etc., or by using a function for the 'with open' section?
from tkinter import *
from tkinter.ttk import *

root = Tk()

class GUI:
    def __init__(self, master):
        self.master = master
        master.title('PATM Generator')
        master.geometry('+600+300')
        #Label1
        master.label1 = Label(root, text = 'Select test bed:')
        master.label1.grid(row = 0, column = 0, padx = 10, pady = 5)
        #Combobox
        master.combo = Combobox(root)
        master.combo.grid(row = 1, column = 0)
        master.combo['values'] = (TB)
        #Label2
        master.label2 = Label(root, text = 'Enter TRD index:')
        master.label2.grid(row = 2, column = 0, padx = 10, pady = 5)
        #Entry
        master.entry = Entry(root)
        master.entry.grid(row = 3, column = 0, padx = 10, pady = 0)
        #Button
        master.button = Button(root, text = 'Append to txt')
        master.button.grid(row = 4, padx = 10, pady = 5)

with open('TB.txt') as inFile:
    TB = [line for line in inFile]

def main():
    GUI(root)
    root.mainloop()

main()
Answer: You didn't give an example of your text file; maybe that would shed some light on the use of a combo box instead of a text widget.
You started out with using self in your class but soon used master.
master.combo = Combobox(root)
Using self. prefix allows you to access the objects, widgets and any other data in other functions.
apple
orange
banana
grape
grapefruit
tangerine
combo_stuff.txt
So I'm guessing that you want to add whatever is typed into the entry widget to your text file. Here's one way to accomplish it:
from tkinter import *
from tkinter.ttk import *

class GUI:
    def __init__(self, master):
        self.master = master
        self.master.title('PATM Generator')
        self.master.geometry('+600+300')
        #Label1
        self.label1 = Label(root, text = 'Select test bed')
        self.label1.grid(row = 0, column = 0, padx = 10, pady = 5)
        #Combobox
        self.combo = Combobox(root)
        self.combo.grid(row = 1, column = 0)
        self.combo['values'] = (TB)
        #Label2
        self.label2 = Label(root, text = 'Enter TRD index:')
        self.label2.grid(row = 2, column = 0, padx = 10, pady = 5)
        #Entry
        self.entry = Entry(root)
        self.entry.grid(row = 3, column = 0, padx = 10, pady = 0)
        #Button
        self.button = Button(root, text = 'Append to txt',
                             command=self.append_text)
        self.button.grid(row = 4, padx = 10, pady = 5)

    def append_text(self):
        item = self.entry.get()
        if item:  # ensure there's text in the entry
            print(item)
            with open('combo_stuff.txt', mode='a') as write_file:
                write_file.write('\n' + item)
            self.entry.delete(0, 'end')

with open('combo_stuff.txt') as inFile:
    TB = [line for line in inFile]

if __name__ == '__main__':
    root = Tk()
    GUI(root)
    root.mainloop()
The easiest way to open a file is with the pre-made dialog.
Here's a link to some examples:
tkinter file dialog | {
"domain": "codereview.stackexchange",
"id": 41265,
"tags": "python, tkinter, gui, user-interface"
} |
Stopwatch class without using System.Diagnostics in C# | Question:
Design a class called Stopwatch. The job of this class is to simulate a stopwatch. It should
provide two methods: Start and Stop. We call the start method first, and the stop method next.
Then we ask the stopwatch about the duration between start and stop. Duration should be a
value in TimeSpan. Display the duration on the console.
We should also be able to use a stopwatch multiple times. So we may start and stop it and then
start and stop it again. Make sure the duration value each time is calculated properly.
We should not be able to start a stopwatch twice in a row (because that may overwrite the initial
start time). So the class should throw an InvalidOperationException if it's started twice.
The aim of this exercise is to make you understand that a class should be
always in a valid state. We use encapsulation and information hiding to achieve that. The class
should not reveal its implementation details. It only reveals a little bit, like a black box. From the
outside, you should not be able to misuse a class because you shouldn’t be able to see the
implementation detail.
That is the information given to me by the tutorial I am taking to teach myself C#. I have completed this and am looking for ways to improve my code and/or learn something new here. I have tested it and, to my knowledge, it is working as expected. Any help is appreciated.
using System;
namespace ExerciseOne
{
public static class Stopwatch
{
private static DateTime TimeStart { get; set; }
private static DateTime TimeStop { get; set; }
private static bool isStarted = false;
private static bool isStopped = false;
private static void StartTimer(DateTime start)
{
if (isStarted)
{
throw new InvalidOperationException("Unable to start a stopwatch twice in a row.");
}
else
{
isStarted = true;
isStopped = false;
TimeStart = start;
}
}
private static void StopTimer(DateTime stop)
{
if (isStopped)
{
throw new InvalidOperationException("Unable to stop a stopwatch twice in a row.");
}
else
{
isStarted = false;
isStopped = true;
TimeStop = stop;
}
}
private static string ElapsedTimer() => (TimeStop - TimeStart).ToString();
private static void Begin()
{
StartTimer(DateTime.Now);
System.Console.WriteLine(" - Stopwatch has begun.");
}
private static void End()
{
StopTimer(DateTime.Now);
Console.WriteLine($" - Stopwatch has stopped. Elapsed Time: {ElapsedTimer()}");
}
public static void RunProgram()
{
Console.WriteLine("Stopwatch program.");
Console.WriteLine("Type \"S\" to start the program. Type \"T\" to stop the program. Type \"E\" to end the program.");
while (true)
{
ConsoleKeyInfo cki = Console.ReadKey(false);
if (cki.Key == ConsoleKey.E)
{
Console.WriteLine(": \"E\" key was pressed. Program exited.");
return;
}
else if (cki.Key == ConsoleKey.S)
{
Begin();
}
else if (cki.Key == ConsoleKey.T)
{
End();
}
else
{
Console.WriteLine("\nPlease type either \"S\" to start the program, \"T\" to stop the program, \"E\" to end the program.");
}
}
}
}
}
using System;
namespace ExerciseOne
{
class Program
{
static void Main() => Stopwatch.RunProgram();
}
}
Answer:
public static void RunProgram()
{
....
}
Why do you run the application as a method on the Stopwatch class itself?
You have made all the methods (StartTimer(), StopTimer()) private. This means that they can only be run from your RunProgram method, which occupies the main thread of the program and measures nothing. This setup makes the entire effort rather useless. I would expect a stopwatch to be able to measure something in a way like:
Stopwatch stopwatch = Stopwatch.StartNew();
// TODO: execute something you want to measure
Thread.Sleep(5000);
stopwatch.Stop();
Console.WriteLine($"Duration: {stopwatch.Duration}");
private static bool isStarted = false;
private static bool isStopped = false;
These flags can be combined into one isRunning, used like:
private static void StartTimer(DateTime start)
{
if (isRunning)
{
throw new InvalidOperationException("Unable to start a stopwatch twice in a row.");
}
else
{
isRunning = true;
TimeStart = start;
}
}
and
private static void StopTimer(DateTime stop)
{
if (!isRunning)
{
throw new InvalidOperationException("Unable to stop the stopwatch because it is not running.");
}
else
{
isRunning = false;
TimeStop = stop;
}
}
private static void Begin()
{
StartTimer(DateTime.Now);
System.Console.WriteLine(" - Stopwatch has begun.");
}
You shouldn't write to the console in class methods of any object, unless it's explicitly meant to be a console application. The above message is useless in a WinForm application.
private static void StartTimer(DateTime start)
I don't see why you have a start time as an argument to StartTimer. Why not just call DateTime.Now inside it? And likewise in StopTimer.
You were told to provide the result as a TimeSpan, but you actually return a string:
private static string ElapsedTimer() => (TimeStop - TimeStart).ToString();
public static class Stopwatch
The major problem with your implementation, though, is that you make it static. That means that you can only run one "instance" at a time. In this way you aren't able to measure on two threads at the same time - or have nested measurements.
I would remove all the static stuff and only have one static method - starting a new Stopwatch instance, which can then be stopped by calling Stop() on the returned object, as I showed above. So the public interface of the object would be:
public class Stopwatch
{
public TimeSpan Duration { get; }
public void Start() { }
public void Stop() { }
public static Stopwatch StartNew() { }
}
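For illustration, here is a minimal instance-based sketch of that interface (written in Python for brevity, since the design point is language-neutral; the names mirror the C# interface above, and RuntimeError stands in for InvalidOperationException):

```python
import time

class Stopwatch:
    """Instance-based stopwatch: each instance carries its own state,
    so several can run concurrently or be nested."""
    def __init__(self):
        self._start = None      # None means "not running"
        self.duration = None    # result of the last Start/Stop cycle

    def start(self):
        if self._start is not None:
            raise RuntimeError("Stopwatch is already running.")
        self._start = time.monotonic()

    def stop(self):
        if self._start is None:
            raise RuntimeError("Stopwatch is not running.")
        self.duration = time.monotonic() - self._start
        self._start = None      # allow a fresh Start afterwards

    @classmethod
    def start_new(cls):
        """Create and immediately start a new instance."""
        sw = cls()
        sw.start()
        return sw

sw = Stopwatch.start_new()
# ... execute something you want to measure ...
sw.stop()
```

All state is private to the instance; clients only see start, stop, and the resulting duration, which is the encapsulation the exercise asks for.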
I think you have misunderstood the concepts of "encapsulation and information hiding" slightly. An object must have a public interface through which clients can communicate with it, but they shouldn't be allowed to manipulate the objects internal state (for instance the StartTime member of your watch) - only through the public interface. The client shouldn't know about how you measure the time internally, they are only interested in the final result, when they stop the watch by calling Stop(). | {
"domain": "codereview.stackexchange",
"id": 37723,
"tags": "c#, object-oriented, console"
} |
My EventBus system | Question: I decided to roll out my own EventBus system, which is intended to be thread-safe.
Hence the review should focus especially on thread safety, in addition to all the regular concerns.
The EventBus can work in two ways:
You can register events and listeners directly on the EventBus.
You can register, for a specific object, the single-argument void methods annotated with @Event.
First the code, then the unit tests below:
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface Event { }
public interface EventBus {
void registerListenersOfObject(final Object callbackObject);
<T> void registerListener(final Class<T> eventClass, final Consumer<? extends T> eventListener);
void executeEvent(final Object event);
void removeListenersOfObject(final Object callbackObject);
<T> void removeListener(final Class<T> eventClass, final Consumer<? extends T> eventListener);
void removeAllListenersOfEvent(final Class<?> eventClass);
void removeAllListeners();
}
public class SimpleEventBus implements EventBus {
private final static Set<EventHandler> EMPTY_SET = new HashSet<>();
private final ConcurrentMap<Class<?>, Set<EventHandler>> eventMapping = new ConcurrentHashMap<>();
private final Class<?> classConstraint;
public SimpleEventBus() {
this(Object.class);
}
public SimpleEventBus(final Class<?> eventClassConstraint) {
this.classConstraint = Objects.requireNonNull(eventClassConstraint);
}
@Override
public void registerListenersOfObject(final Object callbackObject) {
Arrays.stream(callbackObject.getClass().getMethods())
.filter(method -> (method.getAnnotation(Event.class) != null))
.filter(method -> method.getReturnType().equals(void.class))
.filter(method -> method.getParameterCount() == 1)
.forEach(method -> {
Class<?> clazz = method.getParameterTypes()[0];
if (!classConstraint.isAssignableFrom(clazz)) {
return;
}
synchronized (eventMapping) {
eventMapping.putIfAbsent(clazz, new HashSet<>());
eventMapping.get(clazz).add(new MethodEventHandler(method, callbackObject, clazz));
}
});
}
@Override
@SuppressWarnings("unchecked")
public <T> void registerListener(final Class<T> eventClass, final Consumer<? extends T> eventListener) {
Objects.requireNonNull(eventClass);
Objects.requireNonNull(eventListener);
if (!classConstraint.isAssignableFrom(eventClass)) {
return;
}
synchronized(eventMapping) {
eventMapping.putIfAbsent(eventClass, new HashSet<>());
eventMapping.get(eventClass).add(new ConsumerEventHandler((Consumer<Object>)eventListener));
}
}
@Override
public void executeEvent(final Object event) {
if (classConstraint.isAssignableFrom(event.getClass())) {
eventMapping.getOrDefault(event.getClass(), EMPTY_SET).forEach(eventHandler -> eventHandler.invoke(event));
}
}
@Override
public void removeListenersOfObject(final Object callbackObject) {
Arrays.stream(callbackObject.getClass().getMethods())
.filter(method -> (method.getAnnotation(Event.class) != null))
.filter(method -> method.getReturnType().equals(void.class))
.filter(method -> method.getParameterCount() == 1)
.forEach(method -> {
Class<?> clazz = method.getParameterTypes()[0];
if (classConstraint.isAssignableFrom(clazz)) {
eventMapping.getOrDefault(clazz, EMPTY_SET).remove(new MethodEventHandler(method, callbackObject, clazz));
}
});
}
@Override
@SuppressWarnings("unchecked")
public <T> void removeListener(final Class<T> eventClass, final Consumer<? extends T> eventListener) {
Objects.requireNonNull(eventClass);
Objects.requireNonNull(eventListener);
if (classConstraint.isAssignableFrom(eventClass)) {
eventMapping.getOrDefault(eventClass, EMPTY_SET).remove(new ConsumerEventHandler((Consumer<Object>)eventListener));
}
}
@Override
public void removeAllListenersOfEvent(final Class<?> eventClass) {
Objects.requireNonNull(eventClass);
eventMapping.remove(eventClass);
}
@Override
public void removeAllListeners() {
eventMapping.clear();
}
private static interface EventHandler {
void invoke(final Object event);
}
private static class MethodEventHandler implements EventHandler {
private final Method method;
private final Object callbackObject;
private final Class<?> eventClass;
public MethodEventHandler(final Method method, final Object object, final Class<?> eventClass) {
this.method = Objects.requireNonNull(method);
this.callbackObject = Objects.requireNonNull(object);
this.eventClass = Objects.requireNonNull(eventClass);
}
@Override
public void invoke(final Object event) {
try {
method.setAccessible(true);
method.invoke(callbackObject, Objects.requireNonNull(event));
} catch (IllegalAccessException | IllegalArgumentException | InvocationTargetException ex) {
throw new RuntimeException(ex);
}
}
@Override
public int hashCode() {
int hash = 7;
hash = 71 * hash + Objects.hashCode(this.method);
hash = 71 * hash + Objects.hashCode(this.callbackObject);
hash = 71 * hash + Objects.hashCode(this.eventClass);
return hash;
}
@Override
public boolean equals(final Object obj) {
if (obj == null) {
return false;
}
if (getClass() != obj.getClass()) {
return false;
}
final MethodEventHandler other = (MethodEventHandler)obj;
if (!Objects.equals(this.method, other.method)) {
return false;
}
if (!Objects.equals(this.callbackObject, other.callbackObject)) {
return false;
}
if (!Objects.equals(this.eventClass, other.eventClass)) {
return false;
}
return true;
}
}
private static class ConsumerEventHandler implements EventHandler {
private final Consumer<Object> eventListener;
public ConsumerEventHandler(final Consumer<Object> consumer) {
this.eventListener = Objects.requireNonNull(consumer);
}
@Override
public void invoke(final Object event) {
eventListener.accept(Objects.requireNonNull(event));
}
@Override
public int hashCode() {
int hash = 5;
hash = 19 * hash + Objects.hashCode(this.eventListener);
return hash;
}
@Override
public boolean equals(final Object obj) {
if (obj == null) {
return false;
}
if (getClass() != obj.getClass()) {
return false;
}
final ConsumerEventHandler other = (ConsumerEventHandler)obj;
if (!Objects.equals(this.eventListener, other.eventListener)) {
return false;
}
return true;
}
}
}
public class SimpleEventBusTest {
static {
assertTrue(true);
}
private AtomicInteger alphaCounter;
private AtomicInteger betaCounter;
private AtomicInteger gammaCounter;
@Before
public void before() {
alphaCounter = new AtomicInteger(0);
betaCounter = new AtomicInteger(0);
gammaCounter = new AtomicInteger(0);
}
private Stream<AtomicInteger> counters() {
return Stream.of(alphaCounter, betaCounter, gammaCounter);
}
@Test
public void testConstructor() {
EventBus eventBus = new SimpleEventBus();
eventBus.registerListenersOfObject(new Object() {
@Event
public void onAlphaEvent(final AlphaEvent alphaEvent) {
alphaCounter.incrementAndGet();
}
});
eventBus.executeEvent(new AlphaEvent());
assertEquals(1, alphaCounter.get());
}
@Test
public void testConstructorWithEventClassConstraint() {
EventBus eventBus = new SimpleEventBus(BetaEvent.class);
eventBus.registerListenersOfObject(new Object() {
@Event
public void onAlphaEvent(final AlphaEvent alphaEvent) {
alphaCounter.incrementAndGet();
}
});
eventBus.registerListener(AlphaEvent.class, alphaEvent -> alphaCounter.incrementAndGet());
eventBus.executeEvent(new AlphaEvent());
assertEquals(0, alphaCounter.get());
}
@Test
public void testRegisterListenersOfObject() {
EventBus eventBus = new SimpleEventBus();
eventBus.registerListenersOfObject(new Object() {
@Event
public void onAlphaEvent1(final AlphaEvent alphaEvent) {
alphaCounter.incrementAndGet();
}
@Event
public void onAlphaEvent2(final AlphaEvent alphaEvent) {
alphaCounter.incrementAndGet();
}
@Event
public void onAlphaEvent3(final AlphaEvent alphaEvent) {
alphaCounter.incrementAndGet();
}
@Event
public void onBetaEvent1(final BetaEvent betaEvent) {
betaCounter.incrementAndGet();
}
@Event
public void onBetaEvent2(final BetaEvent betaEvent) {
betaCounter.incrementAndGet();
}
@Event
public void onGammaEvent(final GammaEvent gammaEvent) {
gammaCounter.incrementAndGet();
}
});
eventBus.executeEvent(new AlphaEvent());
eventBus.executeEvent(new BetaEvent());
eventBus.executeEvent(new GammaEvent());
assertEquals(3, alphaCounter.get());
assertEquals(2, betaCounter.get());
assertEquals(1, gammaCounter.get());
eventBus.executeEvent(new AlphaEvent());
eventBus.executeEvent(new BetaEvent());
eventBus.executeEvent(new GammaEvent());
assertEquals(6, alphaCounter.get());
assertEquals(4, betaCounter.get());
assertEquals(2, gammaCounter.get());
}
@Test
public void testRegisterListener() {
EventBus eventBus = new SimpleEventBus();
eventBus.registerListener(AlphaEvent.class, alphaEvent -> alphaCounter.incrementAndGet());
eventBus.registerListener(AlphaEvent.class, alphaEvent -> alphaCounter.incrementAndGet());
eventBus.registerListener(AlphaEvent.class, alphaEvent -> alphaCounter.incrementAndGet());
eventBus.registerListener(BetaEvent.class, betaEvent -> betaCounter.incrementAndGet());
eventBus.registerListener(BetaEvent.class, betaEvent -> betaCounter.incrementAndGet());
eventBus.registerListener(GammaEvent.class, gammaEvent -> gammaCounter.incrementAndGet());
eventBus.executeEvent(new AlphaEvent());
eventBus.executeEvent(new BetaEvent());
eventBus.executeEvent(new GammaEvent());
assertEquals(3, alphaCounter.get());
assertEquals(2, betaCounter.get());
assertEquals(1, gammaCounter.get());
eventBus.executeEvent(new AlphaEvent());
eventBus.executeEvent(new BetaEvent());
eventBus.executeEvent(new GammaEvent());
assertEquals(6, alphaCounter.get());
assertEquals(4, betaCounter.get());
assertEquals(2, gammaCounter.get());
}
@Test
public void testExecuteEvent() {
EventBus eventBus = new SimpleEventBus();
eventBus.registerListener(AlphaEvent.class, alphaEvent -> alphaCounter.incrementAndGet());
eventBus.executeEvent(new AlphaEvent());
assertEquals(1, alphaCounter.get());
}
@Test
public void testExecuteEventSameInstance() {
AlphaEvent specificAlphaEvent = new AlphaEvent();
EventBus eventBus = new SimpleEventBus();
eventBus.registerListener(AlphaEvent.class, alphaEvent -> assertTrue(alphaEvent == specificAlphaEvent));
eventBus.executeEvent(specificAlphaEvent);
}
@Test
public void testRemoveListenersOfObject() {
EventBus eventBus = new SimpleEventBus();
Object object1 = new Object() {
@Event
public void onAlphaEvent(final AlphaEvent alphaEvent) {
alphaCounter.incrementAndGet();
}
@Event
public void onBetaEvent(final BetaEvent betaEvent) {
betaCounter.incrementAndGet();
}
@Event
public void onGammaEvent(final GammaEvent gammaEvent) {
gammaCounter.incrementAndGet();
}
};
Object object2 = new Object() {
@Event
public void onAlphaEvent(final AlphaEvent alphaEvent) {
alphaCounter.incrementAndGet();
}
@Event
public void onBetaEvent(final BetaEvent betaEvent) {
betaCounter.incrementAndGet();
}
@Event
public void onGammaEvent(final GammaEvent gammaEvent) {
gammaCounter.incrementAndGet();
}
};
eventBus.registerListenersOfObject(object1);
eventBus.executeEvent(new AlphaEvent());
eventBus.executeEvent(new BetaEvent());
eventBus.executeEvent(new GammaEvent());
assertTrue(counters().allMatch(counter -> counter.get() == 1));
eventBus.registerListenersOfObject(object2);
eventBus.executeEvent(new AlphaEvent());
eventBus.executeEvent(new BetaEvent());
eventBus.executeEvent(new GammaEvent());
assertTrue(counters().allMatch(counter -> counter.get() == 3));
eventBus.removeListenersOfObject(object2);
eventBus.executeEvent(new AlphaEvent());
eventBus.executeEvent(new BetaEvent());
eventBus.executeEvent(new GammaEvent());
assertTrue(counters().allMatch(counter -> counter.get() == 4));
eventBus.removeListenersOfObject(object1);
eventBus.executeEvent(new AlphaEvent());
eventBus.executeEvent(new BetaEvent());
eventBus.executeEvent(new GammaEvent());
assertTrue(counters().allMatch(counter -> counter.get() == 4));
}
@Test
public void testRemoveListener() {
EventBus eventBus = new SimpleEventBus();
Consumer<AlphaEvent> alphaEventListener = alphaEvent -> alphaCounter.incrementAndGet();
Consumer<BetaEvent> betaEventListener = betaEvent -> betaCounter.incrementAndGet();
Consumer<GammaEvent> gammaEventListener = gammaEvent -> gammaCounter.incrementAndGet();
eventBus.registerListener(AlphaEvent.class, alphaEventListener);
eventBus.registerListener(BetaEvent.class, betaEventListener);
eventBus.registerListener(GammaEvent.class, gammaEventListener);
eventBus.executeEvent(new AlphaEvent());
eventBus.executeEvent(new BetaEvent());
eventBus.executeEvent(new GammaEvent());
assertEquals(1, alphaCounter.get());
assertEquals(1, betaCounter.get());
assertEquals(1, gammaCounter.get());
eventBus.removeListener(GammaEvent.class, gammaEventListener);
eventBus.executeEvent(new AlphaEvent());
eventBus.executeEvent(new BetaEvent());
eventBus.executeEvent(new GammaEvent());
assertEquals(2, alphaCounter.get());
assertEquals(2, betaCounter.get());
assertEquals(1, gammaCounter.get());
eventBus.removeListener(BetaEvent.class, betaEventListener);
eventBus.executeEvent(new AlphaEvent());
eventBus.executeEvent(new BetaEvent());
eventBus.executeEvent(new GammaEvent());
assertEquals(3, alphaCounter.get());
assertEquals(2, betaCounter.get());
assertEquals(1, gammaCounter.get());
eventBus.removeListener(AlphaEvent.class, alphaEventListener);
eventBus.executeEvent(new AlphaEvent());
eventBus.executeEvent(new BetaEvent());
eventBus.executeEvent(new GammaEvent());
assertEquals(3, alphaCounter.get());
assertEquals(2, betaCounter.get());
assertEquals(1, gammaCounter.get());
}
@Test
public void testRemoveAllListenersOfEvent() {
EventBus eventBus = new SimpleEventBus();
eventBus.registerListener(AlphaEvent.class, alphaEvent -> alphaCounter.incrementAndGet());
eventBus.registerListener(AlphaEvent.class, alphaEvent -> alphaCounter.incrementAndGet());
eventBus.registerListener(AlphaEvent.class, alphaEvent -> alphaCounter.incrementAndGet());
eventBus.registerListener(BetaEvent.class, betaEvent -> betaCounter.incrementAndGet());
eventBus.registerListener(BetaEvent.class, betaEvent -> betaCounter.incrementAndGet());
eventBus.registerListener(GammaEvent.class, gammaEvent -> gammaCounter.incrementAndGet());
eventBus.executeEvent(new AlphaEvent());
eventBus.executeEvent(new BetaEvent());
eventBus.executeEvent(new GammaEvent());
assertEquals(3, alphaCounter.get());
assertEquals(2, betaCounter.get());
assertEquals(1, gammaCounter.get());
eventBus.removeAllListenersOfEvent(AlphaEvent.class);
eventBus.executeEvent(new AlphaEvent());
eventBus.executeEvent(new BetaEvent());
eventBus.executeEvent(new GammaEvent());
assertEquals(3, alphaCounter.get());
assertEquals(4, betaCounter.get());
assertEquals(2, gammaCounter.get());
eventBus.removeAllListenersOfEvent(BetaEvent.class);
eventBus.executeEvent(new AlphaEvent());
eventBus.executeEvent(new BetaEvent());
eventBus.executeEvent(new GammaEvent());
assertEquals(3, alphaCounter.get());
assertEquals(4, betaCounter.get());
assertEquals(3, gammaCounter.get());
eventBus.removeAllListenersOfEvent(GammaEvent.class);
eventBus.executeEvent(new AlphaEvent());
eventBus.executeEvent(new BetaEvent());
eventBus.executeEvent(new GammaEvent());
assertEquals(3, alphaCounter.get());
assertEquals(4, betaCounter.get());
assertEquals(3, gammaCounter.get());
}
@Test
public void testRemoveAllListeners() {
EventBus eventBus = new SimpleEventBus();
eventBus.registerListener(AlphaEvent.class, alphaEvent -> alphaCounter.incrementAndGet());
eventBus.registerListener(AlphaEvent.class, alphaEvent -> alphaCounter.incrementAndGet());
eventBus.registerListener(AlphaEvent.class, alphaEvent -> alphaCounter.incrementAndGet());
eventBus.registerListener(BetaEvent.class, betaEvent -> betaCounter.incrementAndGet());
eventBus.registerListener(BetaEvent.class, betaEvent -> betaCounter.incrementAndGet());
eventBus.registerListener(GammaEvent.class, gammaEvent -> gammaCounter.incrementAndGet());
eventBus.executeEvent(new AlphaEvent());
eventBus.executeEvent(new BetaEvent());
eventBus.executeEvent(new GammaEvent());
assertEquals(3, alphaCounter.get());
assertEquals(2, betaCounter.get());
assertEquals(1, gammaCounter.get());
eventBus.removeAllListeners();
eventBus.executeEvent(new AlphaEvent());
eventBus.executeEvent(new BetaEvent());
eventBus.executeEvent(new GammaEvent());
assertEquals(3, alphaCounter.get());
assertEquals(2, betaCounter.get());
assertEquals(1, gammaCounter.get());
}
private static class AlphaEvent { }
private static class BetaEvent { }
private static class GammaEvent { }
}
Answer: The synchronization in the code is in some places overly broad, and in others, it is absent where it is needed.
Synchronizing on eventMapping in your registerListenersOfObject method means that only one thread can be accessing the eventMapping instance at any one time. This defeats the purpose of using a ConcurrentHashMap entirely (where only a small portion of the map is locked and other portions remain available to other threads). The granularity of this lock is overly broad.
Inside that lock, you add data to (and potentially create) a HashSet<EventHandler> instance. This HashSet is then used in other methods, but without any synchronization. Those other methods may have issues with concurrency because they are not included in any synchronization at all.
@Override
public void executeEvent(final Object event) {
if (classConstraint.isAssignableFrom(event.getClass())) {
eventMapping.getOrDefault(event.getClass(), EMPTY_SET).forEach(eventHandler -> eventHandler.invoke(event));
}
}
In the above code, while performing the forEach, any of the following things are possible (and other things as well, I am sure):
data could be added to the Set you are streaming, and that data may or may not be included in the stream.
the stream could throw a ConcurrentModificationException.
the stream could end early (and some data may not be processed at all).
......
Consider the following code in the SimpleEventBus. This code handles adding and using event handlers (though removing handlers needs to be fixed as well)....
private final void includeEventHandler(final Class<?> clazz, final EventHandler handler) {
Set<EventHandler> existing = eventMapping.get(clazz);
if (existing == null) {
final Set<EventHandler> created = new HashSet<>();
// optimistically assume that we are the first thread for this particular class.
existing = eventMapping.putIfAbsent(clazz, created);
if (existing == null) {
// we are the first thread to add one for this clazz
existing = created;
}
}
synchronized (existing) {
existing.add(handler);
}
}
private final EventHandler[] getEventHandlers(final Class<?> clazz) {
Set<EventHandler> handlers = eventMapping.get(clazz);
if (handlers == null) {
return new EventHandler[0];
}
synchronized(handlers) {
return handlers.toArray(new EventHandler[handlers.size()]);
}
}
@Override
public void registerListenersOfObject(final Object callbackObject) {
Arrays.stream(callbackObject.getClass().getMethods())
.filter(method -> (method.getAnnotation(Event.class) != null))
.filter(method -> method.getReturnType().equals(void.class))
.filter(method -> method.getParameterCount() == 1)
.forEach(method -> {
Class<?> clazz = method.getParameterTypes()[0];
if (!classConstraint.isAssignableFrom(clazz)) {
return;
}
includeEventHandler(clazz, new MethodEventHandler(method, callbackObject, clazz));
});
}
@Override
@SuppressWarnings("unchecked")
public <T> void registerListener(final Class<T> eventClass, final Consumer<? extends T> eventListener) {
Objects.requireNonNull(eventClass);
Objects.requireNonNull(eventListener);
if (!classConstraint.isAssignableFrom(eventClass)) {
return;
}
includeEventHandler(eventClass, new ConsumerEventHandler((Consumer<Object>)eventListener));
}
@Override
public void executeEvent(final Object event) {
if (classConstraint.isAssignableFrom(event.getClass())) {
Arrays.stream(getEventHandlers(event.getClass())).forEach(eventHandler -> eventHandler.invoke(event));
}
}
The above code uses the ConcurrentHashMap in a way that is minimally locked. It uses an optimistic process that creates a new HashSet only when it is likely to be used (instead of creating one and throwing it away almost every time). It also makes sure that, if one is created in a different thread and our optimism was proven wrong, we use the one that the other threads are using.
Then, for the actual HashSet, it synchronizes on the whole set, and all operations are completely isolated from other threads.
This is OK because the only time there will be thread blocking is when two threads are accessing the event handlers for a single Class, which is likely to be uncommon.
Note that getEventHandlers creates a defensive copy of the Set, so that iteration has a consistent copy of the data and there does not need to be any locking during the iteration.
Edit: To remove unnecessary work in the code, I would actually recommend the following:
private final EventHandler[] getEventHandlers(final Class<?> clazz) {
Set<EventHandler> handlers = eventMapping.get(clazz);
if (handlers == null) {
return null;
}
synchronized(handlers) {
return handlers.toArray(new EventHandler[handlers.size()]);
}
}
@Override
public void executeEvent(final Object event) {
if (classConstraint.isAssignableFrom(event.getClass())) {
EventHandler[] handlers = getEventHandlers(event.getClass());
if (handlers != null) {
Arrays.stream(handlers).forEach(eventHandler -> eventHandler.invoke(event));
}
}
} | {
"domain": "codereview.stackexchange",
"id": 7404,
"tags": "java, thread-safety, reflection, event-handling"
} |
Finding the isotherm with given minima | Question:
Get the temperature of the isotherm for water for which the local minimum is at $\pu{100 Pa}.$ Use the values of $a$ and $b$ of water.
My approach
In the van der Waals equation, set
$$\frac{\mathrm dP}{\mathrm dV} = 0$$
and get the value of $T.$ Plug this value in the initial van der Waals equation and solve for $V$ from the biquadratic.
The calculations get very messy and the values come out to be unrealistic. (For example, temperature of the order of $10^6$)
My doubt is whether the $\pu{100 Pa}$ provided by the problem setter is a realistic value.
Answer: Using Mathematica, I obtained $T = \pu{546 K}$, as follows. One of the nice things about using Mathematica for physical calculations is that it has the ability to understand/keep track of/cancel out/convert units.
The volume is non-physically small (about 750,000 times smaller than what we'd expect for a mole of, say, an ideal gas at $P = \pu{100 Pa}$, $T = \pu{546 K}$). One does expect a non-physically small volume, since this is at a non-physically low point on the isotherm, but I would not have predicted it would be this much smaller.
If you were doing it by hand, you'd probably want to use a different approach: | {
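One such by-hand route: setting $\mathrm dP/\mathrm dV = 0$ gives $RT = 2a(V-b)^2/V^3$, and substituting back into the van der Waals equation eliminates $T$, leaving the locus of isotherm extrema $P_\text{ext}(V) = a(V-2b)/V^3$. A minimal numeric sketch (the water constants below are commonly quoted values, assumed here rather than taken from the post):

```python
# Sketch, not the original Mathematica solution. Assumed van der Waals
# constants for water (commonly quoted values):
a = 0.5536      # Pa m^6 mol^-2
b = 3.049e-5    # m^3 mol^-1
R = 8.314       # J mol^-1 K^-1
P_min = 100.0   # Pa, the pressure at the isotherm's local minimum

# dP/dV = 0 gives RT = 2a (V - b)^2 / V^3; substituting back into the
# van der Waals equation eliminates T:
#     P_ext(V) = a (V - 2b) / V^3
def p_ext(V):
    return a * (V - 2 * b) / V ** 3

# p_ext increases monotonically from 0 on (2b, 3b), so bisection on that
# bracket finds the minimum-branch root, which sits just above V = 2b.
lo, hi = 2 * b, 3 * b
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if p_ext(mid) < P_min:
        lo = mid
    else:
        hi = mid
V = 0.5 * (lo + hi)

T = 2 * a * (V - b) ** 2 / (R * V ** 3)
print(T)   # ~546 K, consistent with the Mathematica result quoted above
```

The root $V \approx 2b$ also matches the answer's remark that the volume is roughly 750,000 times smaller than the corresponding ideal-gas volume $RT/P$.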
"domain": "chemistry.stackexchange",
"id": 12463,
"tags": "physical-chemistry, thermodynamics"
} |
Lorentz transformation equations: an insight | Question: Let Bob be moving towards the positive $x$-axis and Alice be stationary at the origin. Then the Lorentz transformation gives:
$$t' = \gamma\left( t- \frac{v x}{c^2}\right)$$
where $t'$ is the time of Bob and $t$ is the time of Alice. If the time of Alice is held constant and $x'$ represents the spatial coordinate of Bob, this will hold true:
$\dfrac{\partial t'}{\partial x'} \Bigg|_t $ $\neq$ 0.
In this way, I am actually trying to express the fact that "all the clocks of Bob are not synchronised according to Alice" mathematically.
Answer:
Then the Lorentz transformation gives:
$$t' = \gamma\left( t- \frac{v x}{c^2}\right)$$
where $t'$ is the time of Bob, $t$ is the time of Alice. If time of Alice is held constant, and $x'$ represents the spatial coordinate of Bob, will this hold true:
$\dfrac{\partial t'}{\partial x'} \Bigg|_t = 0 $?
Your question is not totally clear to me. I will try to answer the question that I think you are asking. Long story short, the answer is no. Details below.
The only way I can make sense of your question is when the expression for t' is re-written to eliminate x (such that t' is now formally a function of x' and t). And in this case, you could ask whether or not the partial derivative of that function t'(x',t) with respect to x' is zero.
From the other Lorentz equation we have:
$$
x' = -\beta\gamma c t + \gamma x\;.
$$
Or, rearranging:
$$
x = \frac{1}{\gamma}\left(x' + v\gamma t\right)
$$
Plugging this in to eliminate x from your first equation gives:
$$
t' = \gamma\left( t - \frac{v}{\gamma c^2}\left(x' + v\gamma t\right) \right)
$$
$$
=\gamma t - \frac{v}{c^2}x' - \frac{v^2}{c^2}\gamma t
$$
$$
=\gamma t\left(1-\frac{v^2}{c^2}\right) - \frac{v}{c^2}x'
$$
$$
= \frac{t}{\gamma} - \frac{v}{c^2}x'
$$
So, we see that the partial derivative you are interested is not zero:
$$
\frac{\partial t'}{\partial x'} = -\frac{v}{c^2} \neq 0
$$ | {
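A quick numerical cross-check of the algebra above (a sketch with illustrative values; since $t'(x',t)$ is linear in $x'$, a single finite difference recovers the partial derivative essentially exactly):

```python
# Numerical cross-check of d t'/d x' |_t = -v/c^2 with illustrative values.
c = 3.0e8           # m/s
v = 0.6 * c
gamma = (1.0 - v ** 2 / c ** 2) ** -0.5

def t_prime(xp, t):
    # invert x' = gamma (x - v t) to get x = x'/gamma + v t,
    # then apply t' = gamma (t - v x / c^2)
    x = xp / gamma + v * t
    return gamma * (t - v * x / c ** 2)

t0 = 1.0
deriv = t_prime(1.0, t0) - t_prime(0.0, t0)   # slope over a 1 m step in x'
print(deriv, -v / c ** 2)   # both about -2.0e-9 s/m
```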
"domain": "physics.stackexchange",
"id": 86619,
"tags": "special-relativity, spacetime, speed-of-light, inertial-frames"
} |
Impact of Grothendieck's program on TCS | Question: Grothendieck has passed away. He had massive impact on 20th century mathematics continuing into the 21st century. This question is asked somewhat in the style/spirit, for example, of Alan Turing's Contributions to Computer Science.
What are Grothendieck's major influences on theoretical computer science?
Answer: Grothendieck's inequality, from his days in functional analysis, was initially proved to relate fundamental norms on tensor product spaces. Grothendieck called the inequality "the fundamental theorem of the metric theory of tensor product spaces", and published it in a now famous paper in 1958, in French, in a limited circulation Brazilian journal. The paper was largely ignored for 15 years, until it was rediscovered by Lindenstrauss and Pelczynski (after Grothendieck had left functional analysis). They gave many reformulations of the paper's main results, related it to research on absolutely summing operators and factorization norms, and observed that Grothendieck had solved "open" problems which had been raised after the paper was published. Pisier gives a very detailed account of the inequality, its variants, and its tremendous influence on functional analysis in his survey.
Grothendieck's inequality is very naturally expressed in the language of combinatorial optimization and approximation algorithms. It says that the non-convex, NP-hard optimization problem
$$
\max\{x^TAy: x \in \{-1, 1\}^m, y \in \{-1, 1\}^n\}
$$
is approximated up to a fixed constant by its semidefinite relaxation
$$
\max\{\sum_{i,j}{a_{ij}\langle u_i, v_j\rangle}: u_1, \ldots, u_m, v_1, \ldots, v_n \in \mathbb{S}^{n+m-1}\},
$$
where $\mathbb{S}^{n+m-1}$ is the unit sphere in $\mathbb{R}^{n+m}$. Proofs of the inequality give "rounding algorithms", and in fact the Goemans-Williamson random hyperplane rounding does the job (but gives a suboptimal constant). However, Grothendieck's inequality is interesting because the analysis of the rounding algorithm has to be "global", i.e. look at all terms of the objective function together.
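For intuition, the discrete side of the problem can be brute-forced on tiny instances (the $2\times2$ matrix below is a made-up example, not from the sources cited; the semidefinite relaxation would need an SDP solver and is not sketched here):

```python
# Brute force over the hypercube for a tiny, made-up matrix A:
# max { x^T A y : x in {-1,1}^m, y in {-1,1}^n }
from itertools import product

def bilinear_max(A):
    m, n = len(A), len(A[0])
    best = float('-inf')
    for x in product((-1, 1), repeat=m):
        for y in product((-1, 1), repeat=n):
            val = sum(A[i][j] * x[i] * y[j] for i in range(m) for j in range(n))
            best = max(best, val)
    return best

A = [[1, -1], [-1, 1]]
print(bilinear_max(A))   # 4, attained e.g. at x = (1, -1), y = (1, -1)
```

The exhaustive search is exponential in $m + n$, which is exactly why the constant-factor semidefinite relaxation guaranteed by Grothendieck's inequality matters.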
Having said this, it should not be surprising that Grothendiecks's inequality has found a second (third? fourth?) life in computer science. Khot and Naor survey its multiple applications and connections to combinatorial optimization.
The story does not end there. The inequality is related to Bell inequality violations in quantum mechanics (see Pisier's paper), has been used by Linial and Shraibman in work on communication complexity, and even turned out useful in work on private data analysis (shameless plug). | {
"domain": "cstheory.stackexchange",
"id": 2969,
"tags": "big-picture, ct.category-theory, ho.history-overview"
} |
Meaning of tilde ($\sim$) above vector (Context: particle physics) | Question: I have encountered a notation I am not familiar with, namely a tilde $\sim$ above a vector (i.e. a column vector), e.g. $\tilde{H}$. From the context, it is clear that it cannot mean transposition, complex conjugation or Hermitean conjugation. Is there any standard meaning of this notation? If it helps, I can mention that the notation occurs in the Lagrangian of two Higgs doublet models.
To give a specific example, consider e.g. equation 30 of Theory and phenomenology of two-Higgs-doublet models by Branco et al.
$$\mathcal{L}_\mathrm{Yukawa}=\eta_{ij}^U\bar{Q}_{iL}\tilde{H}_1U_{jR}+\eta_{ij}^D\bar{Q}_{iL}H_1D_{jR}+...$$
Here $...$ denotes further terms which include e.g. $H_2$. Thus it is clear that the tilde is not used to distinguish the two Higgs doublets.
Another example of where the notation occurs is in eq.80 of Building and testing models with extended Higgs sectors by Ivanov. For a complex triplet $X=(\chi^{++},\chi^+,\chi^0)^T$, $\tilde{X}$ is defined as $\tilde{X}=(\chi^{0*},-\chi^{+*},\chi^{++*})^T$
Answer: This is the conjugate representation of SU(2). It is the backbone of the fermion masses in the SM and is detailed in standard SM texts.
That is to say, for the doublet,
$$
\tilde H \equiv i\sigma_2 H^*,
$$
so it transforms identically to H under SU(2)! (It, of course, reverses the hypercharge).
Behold:
$$
\delta H = \frac{i}{2} \theta_a \sigma_a H \qquad \leadsto \\
\delta \tilde H = i\sigma_2 ( \frac{i}{2} \theta_a \sigma_a H )^* \qquad \\
= i\sigma_2 ( -\frac{i}{2} \theta_a \sigma_a^* H^* ) = \frac{i}{2} \theta_a \sigma_a ~ i\sigma_2 H ^* = \frac{i}{2} \theta_a \sigma_a \tilde H ,
$$
by virtue of $\sigma_2 \sigma_a^*+\sigma_a\sigma_2=0 $ for any a!
So, $\tilde H$ is a doublet just like H, except stood on its head, complex conjugated, with an extra - sign at the bottom.
As a result, the Yukawa coupling you exemplify, which is identical to the Yukawa of the SM for the standard Higgs doublet, provides masses for both the uplike quarks (the first term) and the downlike quarks.
That is so because
$$
\langle H\rangle_0= \begin{pmatrix} 0\\ v \end{pmatrix} , \qquad \implies \qquad \langle \tilde H\rangle_0=\begin{pmatrix} v\\ 0 \end{pmatrix} ,
$$
so the v.e.v. of $\tilde H$ provides the uplike quark masses, just like that of the H for the downlike masses.
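As a small numerical sanity check of the identity $\sigma_2\sigma_a^* + \sigma_a\sigma_2 = 0$ and of the v.e.v. flipping to the upper component (a sketch; the numeric value used for $v$ is illustrative and not from the post):

```python
# Verify sigma2 sigma_a^* + sigma_a sigma2 = 0 and that i sigma2 H^*
# sends the vev (0, v) to (v, 0), using plain 2x2 complex matrices.
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_conj(A):
    return [[A[i][j].conjugate() for j in range(2)] for i in range(2)]

def mat_add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

s1 = [[0, 1], [1, 0]]
s2 = [[0, -1j], [1j, 0]]
s3 = [[1, 0], [0, -1]]

for s in (s1, s2, s3):
    anti = mat_add(mat_mul(s2, mat_conj(s)), mat_mul(s, s2))
    assert all(abs(anti[i][j]) < 1e-12 for i in range(2) for j in range(2))

v = 246.0   # illustrative number only
H = [0.0, v]
# H~ = i sigma2 H^*
H_tilde = [1j * sum(s2[i][k] * H[k].conjugate() for k in range(2)) for i in range(2)]
print(H_tilde)   # first component v, second component 0
```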
As for the Higgs triplet, I am not very experienced with its standard conventions. You dot your triplet (which is in the spherical basis) properly transformed to a Cartesian vector, now, to the Pauli vector, and re-express it in the spherical basis again to monitor the properties of the individual components,
$$
\Delta=\begin{pmatrix} {\chi^{+}} &{\sqrt{2}} \chi^{++} \\
{\sqrt{2}}\chi ^0 & - \chi^{+} \end{pmatrix}= \chi^+ \sigma_3 +\chi^{++} \frac{\sigma_1+i\sigma_2}{\sqrt{2}} + \chi^0 \frac{\sigma_1-i\sigma_2}{\sqrt{2}} ,
$$
and read off the new (transformed) components in the adjoint object $$i\sigma_2 \Delta^* (-i\sigma_2) ,$$ noting the transposition effected. (Full disclosure: I seem to be getting a couple of errant - signs extra, contrasted to your expression. It may well be an artifact of the procedure. The v.e.v. is very much in the properly transposed position, as you should check!) | {
"domain": "physics.stackexchange",
"id": 65326,
"tags": "particle-physics, vectors, notation"
} |
Is there any effect on mechanical waves by electromagnetic waves (and vise versa)? | Question: Do electromagnetic waves like light and gravitational waves (due to moon for instance) affect on mechanical waves like sound?
Can sound change the path of light?
Answer: Any physical phenomenon is potentially capable of causing some change in any other phenomenon, more or less directly. If that were not the case, the physical world could be divided into completely independent realms; there would not be the one single world we call Nature.
Practically though, many if not most of the actually existing interactions between systems can be ignored, or just treated as perturbations in models taking into account only the most important ones. This is because interactions happen in a wide range of order of magnitudes. For example you would not usually include electromagnetic interactions between Moon and Earth when modelling their respective motion, although it certainly does play some part in the actual interplay of the two bodies (both having a magnetic field). If you do not ignore negligible effects, well even nocturnal urban lighting does play a part by sending photons to the Moon, pushing it away from Earth!
As a side note, the fact that some interactions are so much less intense than others is very useful: it allows us to use them as measuring devices. As shown in another answer, we can use Schlieren photography as a straightforward way to display air density because indeed the path of light is altered by compression waves, but only marginally so. If the dependence of electromagnetic waves on air density was more intense, it would be more complicated to decorrelate both effects. | {
"domain": "physics.stackexchange",
"id": 31863,
"tags": "waves, electromagnetic-radiation, acoustics"
} |
Find all the induced paths with a start vertex | Question: Let $G$ be a graph and let $v$ be a vertex. Is there a polynomial algorithm for the following operation?
Operation. Find all the induced paths in $G$ with first vertex $v$.
Background
This problem is the confusion I encountered while reading the article Induced paths in graphs without anticomplete cycles. The authors asserted that there is a polynomial algorithm to check whether a graph contains no two non-adjacent cycles (and more generally no k non-adjacent cycles). Their algorithm is as follows:
For each vertex $v$, find all the induced paths with first vertex $v$.
For each induced path $P$, find all induced cycles that consist of $P$ and one extra vertex.
Check whether any two of these cycles are disjoint and have no edges between them.
I am puzzled by the first step. If that's the case, then I think that finding the longest induced path in a graph would be possible with a polynomial algorithm. But as far as I know, it is NP-hard. See https://en.wikipedia.org/wiki/Induced_path
Answer: Consider the following graph $G = (V, E)$ where:
$V = \{v_0\}\cup \{u_1,…, u_n\}\cup \{v_1, …, v_n\}$;
$E = \{\{v_0,u_1\}, \{v_0, v_1\}\}\cup \bigcup\limits_{k=1}^{n-1}\{\{u_k, u_{k+1}\}, \{u_k, v_{k+1}\}, \{v_k, u_{k+1}\}, \{v_k, v_{k+1}\}\}$.
Then the graph has more than $2^n$ induced paths from $v_0$. So there is no way to find all of them in polynomial time.
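A brute-force enumeration over this construction for a small $n$ confirms the exponential growth (a sketch; the vertex encoding is my own, not from the answer):

```python
# Build the answer's graph for small n, with my own vertex encoding.
def build_graph(n):
    # vertices: 'v0', plus ('u', k) and ('v', k) for k = 1..n
    E = set()
    E.add(frozenset(['v0', ('u', 1)]))
    E.add(frozenset(['v0', ('v', 1)]))
    for k in range(1, n):
        for s in ('u', 'v'):
            for t in ('u', 'v'):
                E.add(frozenset([(s, k), (t, k + 1)]))
    return E

def count_induced_paths(E, start):
    verts = {w for e in E for w in e}
    adj = lambda p, q: frozenset([p, q]) in E
    count = 0
    def extend(path):
        nonlocal count
        for w in verts:
            if w in path or not adj(path[-1], w):
                continue
            # to stay induced, w may only be adjacent to its predecessor
            if any(adj(w, p) for p in path[:-1]):
                continue
            count += 1
            extend(path + [w])
    extend([start])
    return count

n = 4
num_paths = count_induced_paths(build_graph(n), 'v0')
print(num_paths)   # 30 = 2 + 4 + 8 + 16, already more than 2^4 = 16
```

Every induced path from v0 here must march monotonically outward, picking either the u- or v-vertex at each level, which is where the $2^n$ lower bound comes from.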
I think the paper states that there is a polynomial-time algorithm to find them if you make additional hypotheses on the graph (though I agree that many details are missing).
"domain": "cs.stackexchange",
"id": 21830,
"tags": "algorithms, graphs, algorithm-analysis"
} |
Is cubical type theory still consistent with univalent excluded middle and univalent choice? | Question: I want to formalize some undergraduate maths in cubical agda, and learning cubical type theory in the proccess. The problem is that I will need univalent excluded middle and univalent choice (and maybe propositional resizing). I know these are consistent with homotopy type theory (although computation is lost when axiom are used), but cubical that type theory is stronger (in the sense that univalence is a theorem). Are this axiom still consistent in the cubical setting? Is there a better way of doing classical theorems in cubical type theory?
Answer: I decided to rewrite my answer, because I thought I could explain the details
better. I'd still defer to someone more knowledgeable should they happen to come
along.
I think it is relatively safe to say that cubical type theory as an approach on
the whole is compatible with the classical axioms you want. The reasons for
this are:
The differences people usually point out, like the $\mathsf{Glue}$ type,
are just a way of making univalence hold for all types, independent of
their belonging to a universe. This is (in my view) merely fixing a
limitation of the HoTT approach of minimally extending Martin-Löf type
theory, where it's impossible to state the univalence axiom except by
reference to a universe. However, if you have enough universes with
univalence axioms, every type is in a universe, and so univalence holds for
all types. In fact, this is how Agda actually works directly; the universes
are primary, and all types are declared as part of a universe, similar to a
pure type system.
There is (I've heard) a cubical set model (discovered by Emily Riehl) that is
completely compatible with the standard simplicial set model, and I believe
it has been shown to be adequate for doing cubical type theory. I think what
this means is that if you add EM (and maybe choice), you get the same
homotopy theory as the classical simplicial set model. I've also heard that
there are constructive analogues of the simplicial set model, so it is not
inherently classical.
Now, what you have to worry about are other details. For instance, cubical Agda
uses the sort of cubes with inversions and connections. So, if i and j are
dimension variables, then ~ i is a dimension, and so is i ∨ j. There are
also Cartesian cubes that don't have ∨ but have other similar constructions.
These affect the details of how you are able to construct paths. Having more of
them is convenient for easily constructing compositions of paths and whatnot,
but my understanding is that the model that matches the classical simplicial
model lacks most of these things.
Now, more specifically, I've also heard that most of these other cubical set
models are anti-classical (I'd forgotten that earlier). I've not been able to
work out the details of how you demonstrate this inside implemented cubical
type theories, but my understanding is that, using ~ as an example, you try to
construct some contradictory scenario using a point i on the interval such
that i = ~i. There's no way given in the type theory to construct such a
point, and somehow it's inconsistent to presume that you can tell whether any
abstract point is or is not like that.
So, I guess the answer to your specific question is that there is probably a
way to refute excluded middle in cubical Agda (although I don't know how), and
every other computer implemented cubical type theory. But it's also likely that
a cubical type theory could be implemented that is consistent with classical
axioms (and homotopy theory) and supports all the reasoning desired (although
maybe it's more difficult to construct certain things). Unfortunately, you'll probably have to wait until the details are fleshed out more and some implementation decides to pick up the compatible version of the theory.
Another option, I suppose, is to not worry about the anti-classical part too much. If you use the cubical primitives to implement something similar to reasoning in the HoTT book, and avoid working directly with the cubical operations, you can probably be reasonably confident (I think) that you haven't relied on any paradoxes. I think there are efforts to provide a HoTT interface on top of cubical type theory in some of the libraries available. You could also dabble in learning the direct cubical parts, even though you don't use them directly, although I don't know exactly which parts are still part of the 'proper' model.
Edit: by the way, here is a presentation about cubical set models of homotopy type theory that mentions the anti-classical properties of many cubical set models. It describes, vaguely, what you have to do to refute excluded middle, but it's all in terms of looking at the models from the outside, hence some of the difficulty of translating that to a refutation within cubical type theory (for me).
Edit 2: here's a potentially more positive answer. It's been pointed out to me that the constructions in the above video may not be possible to carry out inside cubical type theory. They are constructions that can be carried out in the 'intended' cubical set models, but just like Martin-Löf type theory has other models than the intended one (which is how you get to HoTT), the cubical set models aren't necessarily the only model of the corresponding cubical type theory. So it's possible that cubical type theories are compatible with excluded middle and choice, even though their most obvious models are not. | {
"domain": "cs.stackexchange",
"id": 15575,
"tags": "homotopy-type-theory, cubical-type-theory"
} |
What are very short programs with unknown halting status? | Question: This 579-bit program in the Binary Lambda Calculus has unknown halting status:
01001001000100010001000101100111101111001110010101000001110011101000000111001110
10010000011100111010000001110011101000000111001110100000000111000011100111110100
00101011000000000010111011100101011111000000111001011111101101011010000000100000
10000001011100000000001110010101010101010111100000011100101010110000000001110000
00000111100000000011110000000001100001010101100000001110000000110000000100000001
00000000010010111110111100000010101111110000001100000011100111110000101101101110
00110000101100010111001011111011110000001110010111111000011110011110011110101000
0010110101000011010
That is, it is not known whether this program terminates or not. In order to determine that, you must solve the Collatz conjecture - or, at least, verify it for all numbers up to 2^256. On this repository there is a complete explanation of how this program was obtained.
Are there (much) shorter BLC programs that also have unknown halting status?
Answer: Yes. This page says there are 98 5-state Turing machines whose halting statuses are unknown. Annoyingly, it does not give any examples of such machines, but this 26-year-old page gives 2 5-state Turing machines whose halting statuses were apparently unknown at that time. (Searching for "simple counter" will take you right between those 2.) I copied them here in case that link goes down:
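For anyone who wants to experiment, the first of the two machines tabulated below can be simulated directly. This assumes the usual reading of an entry like B1L as "go to state B, write 1, move left" with H the halt state (the simulation is only as good as that reading):

```python
def simulate(tm, steps, start='A'):
    """Run a Turing machine given as {(state, bit): (next_state, write, move)}
    on an initially blank (all-zero) tape."""
    tape, pos, state = {}, 0, start
    for n in range(steps):
        if state == 'H':                  # reached the halt state
            return state, n
        next_state, write, move = tm[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += 1 if move == 'R' else -1
        state = next_state
    return state, steps

# The first ("chaotic") machine from the table, read as next-state/write/move:
chaotic = {
    ('A', 0): ('B', 1, 'L'), ('A', 1): ('B', 1, 'R'),
    ('B', 0): ('C', 1, 'R'), ('B', 1): ('E', 0, 'L'),
    ('C', 0): ('D', 0, 'R'), ('C', 1): ('A', 0, 'L'),
    ('D', 0): ('A', 1, 'L'), ('D', 1): ('D', 0, 'R'),
    ('E', 0): ('H', 1, 'L'), ('E', 1): ('C', 0, 'L'),
}

state, n = simulate(chaotic, 10_000)
```

Of course, running it only ever confirms that it is still going; per the table it was still running after more than 2×10^9 steps.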
Input Bit        Transition on State             Steps         Comment
                A     B     C     D     E
    0          B1L   C1R   D0R   A1L   H1L    > 2*(10^9)    ``chaotic''
    1          B1R   E0L   A0L   D0R   C0L
    0          B1L   A0R   C0R   E1L   B0L        ?         complex ``counter''
    1          A1R   C0L   D1L   A0R   H1L | {
"domain": "cs.stackexchange",
"id": 21469,
"tags": "algorithms, computability, kolmogorov-complexity"
} |
HC-05 connection to ROS using Arduino? | Question:
Hello, guys.
I've been trying to get my HC-05 Bluetooth module working with ROS through an Arduino, but I haven't had any success so far. In order to just establish the connection between the module and my computer, I have followed this tutorial (link text), which uses bluez and bluez-tools. Here are the exact steps I've taken:
$ hcitool scan
Scanning ...
20:15:07:27:76:81 HC-05
$ bluez-simple-agent hci0 20:15:07:27:76:81
RequestPinCode (/org/bluez/731/hci0/dev_20_15_07_27_76_81)
Enter PIN Code: 1234
Release
New device (/org/bluez/731/hci0/dev_20_15_07_27_76_81)
Then, I've made my /etc/bluetooth/rfcomm.conf file as follows:
rfcomm0 {
    bind no;
    device 20:15:07:27:76:81;
    channel 1;
    comment "Arduino";
}
Then, back to terminal:
$ sudo rfcomm connect 0
Connected /dev/rfcomm0 to 20:15:07:27:76:81 on channel 1
Press CTRL-C for hangup
Then I start ROS by using
$ roscore
And finally, I try and fail to establish the connection.
$ rosrun rosserial_python serial_node.py /dev/rfcomm0 _baud:=9600
[INFO] [WallTime: 1470448881.996747] ROS Serial Python Node
[INFO] [WallTime: 1470448882.001542] Connecting to /dev/rfcomm0 at 9600 baud
[ERROR] [WallTime: 1470448899.106085] Unable to sync with device; possible link problem or link software version mismatch such as hydro rosserial_python with groovy Arduino.
The problem isn't in my Arduino code, because when I use the USB cable, the topics are successfully created and I'm able to see what's going on inside them.
I'm using
Ubuntu 14.04
ROS Indigo
ros-indigo-rosserial-arduino
ros-indigo-rosserial
Can anyone help me, please?
Thank you.
Originally posted by Giovani Debiagi on ROS Answers with karma: 31 on 2016-08-05
Post score: 0
Answer:
I have already got the bluetooth connection to work with this module by setting some data to be printed on the Serial monitor of Arduino, but nothing related to ROS. The command lines I used were:
$ sudo rfcomm bind 0&
$ cat /dev/rfcomm0
By using this, I've got the data printed on my linux machine terminal. But I don't need just to see them, I need to get them with a node to make some operations.
And I have already tried that same procedure with a 57600 baud rate, and got the same error message.
EDITED
I have tried to change the serial port by using the class definition below, but it didn't recognize Serial1.
class NewHardware : public ArduinoHardware
{
  public:
    NewHardware() : ArduinoHardware(&Serial1, 57600) {};
};

ros::NodeHandle_<NewHardware> nh;
So, I added this line at the beginning of my sketch
#define USBCON
which would change the serial port, according to some page on the internet. And now, I get a new error message.
[INFO] [WallTime: 1470535945.212095] ROS Serial Python Node
[INFO] [WallTime: 1470535945.215136] Connecting to /dev/rfcomm0 at 57600 baud
Traceback (most recent call last):
  File "/opt/ros/indigo/lib/rosserial_python/serial_node.py", line 80, in <module>
    client = SerialClient(port_name, baud)
  File "/opt/ros/indigo/lib/python2.7/dist-packages/rosserial_python/SerialClient.py", line 385, in __init__
    self.requestTopics()
  File "/opt/ros/indigo/lib/python2.7/dist-packages/rosserial_python/SerialClient.py", line 392, in requestTopics
    self.port.flushInput()
  File "/usr/lib/python2.7/dist-packages/serial/serialposix.py", line 500, in flushInput
    termios.tcflush(self.fd, TERMIOS.TCIFLUSH)
termios.error: (5, 'Input/output error')
Any ideas? This new error message doesn't even say what the problem is.
"SOLVED"
Well, I ended up choosing another way to accomplish my goal. I've developed (not alone) a ROS node, on my laptop, that gets data directly from the serial port. This is the code:
#include "ros/ros.h"
#include <fcntl.h>
#include <termios.h>
#include <errno.h>
#include <sys/ioctl.h>
#include <unistd.h>   // for open/read/write/usleep
#include <cstdio>     // for sscanf

int main(int argc, char **argv)
{
  ros::init(argc, argv, "imu_node");
  ros::NodeHandle nh;
  ros::Rate loop_rate(50); // Hz
  std::string aux;

  // Begin serial communication with Arduino
  struct termios toptions;
  int fd, n;
  float roll, pitch, yaw;

  nh.getParam("imu_node/serial_port", aux);
  ROS_INFO_STREAM(aux);
  fd = open(aux.c_str(), O_RDWR | O_NOCTTY);

  /* wait for the Arduino to reboot */
  usleep(3500000);

  /* get current serial port settings */
  tcgetattr(fd, &toptions);
  /* set 115200 baud both ways */
  cfsetispeed(&toptions, B115200);
  cfsetospeed(&toptions, B115200);
  /* 8 bits, no parity, no stop bits */
  toptions.c_cflag &= ~PARENB;
  toptions.c_cflag &= ~CSTOPB;
  toptions.c_cflag &= ~CSIZE;
  toptions.c_cflag |= CS8;
  /* Canonical mode */
  toptions.c_lflag |= ICANON;
  /* commit the serial port settings */
  tcsetattr(fd, TCSANOW, &toptions);

  while (ros::ok())
  {
    char buf[64] = "temp text";
    write(fd, "I\n", 2);
    usleep(500);

    /* Receive string from Arduino */
    do
    {
      n = read(fd, buf, 32);
    } while (n < 10);
    /* insert terminating zero in the string */
    buf[n] = 0;
    sscanf(buf, "I|%f|%f|%f|*\r\n", &yaw, &pitch, &roll);

    ROS_INFO_STREAM(buf);
    ROS_INFO_STREAM("accel_x: " << roll);
    ROS_INFO_STREAM("accel_y: " << pitch);
    ROS_INFO_STREAM("yaw: " << yaw);

    ros::spinOnce();
    loop_rate.sleep();
  }
  return 0;
}
Then, I link the rfcomm0 port created by the bluetooth connection to another random serial port, because, apparently, the parameter setting doesn't work with the original bluetooth port (rfcomm0).
$ sudo ln -s /dev/rfcomm0 /dev/ttyBT
And finally, I set the node parameter and run the node.
$ rosparam set /imu_node/serial_port /dev/ttyBT
$ rosrun imu imu
Originally posted by Giovani Debiagi with karma: 31 on 2016-08-06
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by mcshicks on 2016-08-06:
It seems like your test only checks transmit, not receive, on the Arduino. If you haven't checked receive, I would do that. Also you have to change the serial port from the Arduino one to BT, something like this
class NewHardware : ....
NewHardware():ArduinoHardware(&Serial1, 57600){};... good luck!
Comment by ogruiz on 2016-10-09:
Any solutions to this problem? I am getting the exact same error.
Comment by Giovani Debiagi on 2016-10-09:
No, ogruiz, sorry. I ended up choosing another way to accomplish my goal. I've developed a ROS code, on my laptop, that gets data directly from serial port. Then, I make a linking between the bluetooth port (rfcomm0) and a new random serial port, and set this new port as a parameter of my ROS node.
Comment by ogruiz on 2016-10-09:
Would you be able to share this new ROS code? I am trying what you suggested but have not succeeded. I am using Virtual Machine VMware to run Ubuntu 14.04 and indigo and connecting my host's bluetooth to the serial port on the VM.
Comment by Giovani Debiagi on 2016-10-09:
Actually, this code is not entirely made by myself (I know very little about serial-linux), so I'd have to ask permission for some people before sharing it, I'm sorry. But I've found some answers around here with code examples that might help. Apparently, this one has the same structure as mine.
Comment by Giovani Debiagi on 2016-10-09:
http://answers.ros.org/question/10114/how-can-ros-communicate-with-my-microcontroller/
Comment by Giovani Debiagi on 2016-10-09:
This other one is really similar to mine:
http://stackoverflow.com/questions/18108932/linux-c-serial-port-reading-writing
Let me know if it helps.
Comment by Giovani Debiagi on 2016-10-10:
ogruiz, I've got permission to share the code now. It's up above, inside my answer. | {
"domain": "robotics.stackexchange",
"id": 25461,
"tags": "ros, arduino, bluetooth, rosserial-python"
} |
Roomba 500 Series - Raw encoder counts readings | Question: I have been trying to read the Roomba's raw encoder counts. I am receiving values, but there are sudden increases/decreases in the values, as can be seen here:
142 152
153 166
167 180
181 196
50101 33280
207 223
222 238
236 252
#include <SoftwareSerial.h>
#include <Wire.h>

// Roomba Create2 connection
int rxPin = 10;
int txPin = 11;
SoftwareSerial Roomba(rxPin, txPin);

void setup() {
  pinMode(10, INPUT);
  pinMode(11, OUTPUT);
  pinMode(5, OUTPUT);
  Serial.begin(115200);
  Roomba.begin(19200);
  delay(1000);
  Roomba.write(128); // Start
  Roomba.write(132); // Full mode
  delay(1000);
}

char command;

void loop() {
  if (Serial.available()) {
    command = Serial.read();
  }
  switch (command)
  {
    case '8':
      Roomba.write(byte(145));
      Roomba.write(byte(0));
      Roomba.write(byte(0x28));
      Roomba.write(byte(0));
      Roomba.write(byte(0x28));
      delay(50);
      encoder_counts();
      break;
    case '0':
      Roomba.write(byte(145));
      Roomba.write(byte(0));
      Roomba.write(byte(0));
      Roomba.write(byte(0));
      Roomba.write(byte(0));
      break;
  }
}

void encoder_counts()
{
  unsigned int right_encoder;
  unsigned int left_encoder;
  byte bytes[4];
  Roomba.write(byte(149));
  Roomba.write(byte(2));
  Roomba.write(byte(43));
  Roomba.write(byte(44));
  delay(50);
  int i = 0;
  while (Roomba.available()) {
    bytes[i++] = Roomba.read();
    left_encoder = (unsigned int)(bytes[0] << 8) | (unsigned int)(bytes[1] & 0xFF);
    right_encoder = (unsigned int)(bytes[2] << 8) | (unsigned int)(bytes[3] & 0xFF);
  }
  Serial.print(right_encoder);
  Serial.print(" ");
  Serial.println(left_encoder);
}
So please kindly advise. Thank you.
Answer: Welcome to Robotics, Ahmed. You have posted the code you use to build the encoder counts, but not the code you used to display the counts. I mention this because you have the encoder counts displayed in your post as:
13786.00 13938.00
as in, with a trailing decimal, but your encoder counts are defined in your code to be integers. This could be problematic if the print function is treating your two-byte integer as the first two bytes of a floating point number.
The second issue it looks like you've got is that you're defining your encoder variables as int instead of unsigned int. Per the manual,
encoder [count] is returned as an unsigned 16-bit number, high byte
first
Try redefining your right_encoder and left_encoder as unsigned int, and then also try changing the formatting in your print command to something like Serial.print(right_encoder) instead of Serial.print(right_encoder, 2).
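The "unsigned 16-bit, high byte first" decoding can also be sanity-checked off-board, e.g. in Python with the struct module (the byte values here are hypothetical, not captured from a real Roomba):

```python
import struct

def decode_counts(reply):
    """Split the 4-byte reply to packets 43 and 44 into two unsigned,
    big-endian ('high byte first') 16-bit counts."""
    left, right = struct.unpack('>HH', reply)   # '>' big-endian, 'H' unsigned 16-bit
    return left, right

print(decode_counts(bytes([0x00, 0xCF, 0x00, 0xDF])))   # (207, 223)

# The same byte pair read as *signed* ('h') flips to a large negative number
# once the top bit is set, which is one way bogus jumps appear:
(as_unsigned,) = struct.unpack('>H', bytes([0xC3, 0xB5]))
(as_signed,) = struct.unpack('>h', bytes([0xC3, 0xB5]))
print(as_unsigned, as_signed)   # 50101 -15435
```

Note that 50101 is one of the suspicious values in the question's output, so byte alignment within the serial stream is worth checking too.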
If that doesn't work, please edit your question to post the code you're using to display the values. | {
"domain": "robotics.stackexchange",
"id": 1758,
"tags": "roomba, quadrature-encoder"
} |
Parachute Model | Question: Can you tell me if this model is correct or not? [assuming drag is linearly proportional to the velocity] Considering the y-axis vertical and the x-axis horizontal, with y positive upwards:
1) $F_y=-mg-kv_y=ma_y$ and 2) $F_x=-kv_x=ma_x$.
Then, considering only 1) and integrating once, we find ($t=0$, $v=0$ and $y=h$ are the initial conditions):
\begin{equation}
\frac{dy}{dt}+\frac{k}{m}y=-gt+\frac{k}{m}h
\end{equation}
then we can integrate again (initial conditions, $t=0$ and $y=h$ again):
\begin{equation}
y=-g\frac{m}{k}t+h+g\frac{m^2}{k^2}-g\frac{m^2}{k^2}e^{-\frac{k}{m}t}
\end{equation}
Is this correct as long as we don't open the parachute?
and when we open the parachute and considering it takes a time $\tau$ to open and the drag coefficient increases linearly (i.e. $k(t)=k+k_p\frac{t}{\tau}$ )
what happens?
Answer: As regards modelling the interval during which the parachute deploys using a simple model for the drag coefficient as you (and Floris) suggest:
$$k(t)=k+k_p\frac{t}{\tau}$$
In my coordinate system the equation of motion becomes:
$$m\frac{dv}{dt}+(k+k_p\frac{t}{\tau})v=mg$$
This becomes difficult to integrate. By setting $a=k_p/\tau$, Wolfram Alpha provides the following solution.
Bearing in mind this still needs to be integrated to obtain displacement, that becomes unworkable.
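One can, however, integrate the same equation of motion numerically with the ramping $k(t)$ and no closed form at all. A minimal explicit-Euler sketch (downward positive, as in the equation above; all constants are invented for illustration):

```python
def drop(m=80.0, g=9.81, k=15.0, kp=300.0, tau=2.0, t_open=10.0,
         t_end=20.0, dt=1e-3):
    """Euler-integrate m dv/dt = m g - k(t) v, with the drag coefficient
    ramping linearly from k to k + kp over tau seconds starting at t_open."""
    v = y = t = 0.0
    while t < t_end:
        if t < t_open:
            kt = k                              # chute closed
        elif t < t_open + tau:
            kt = k + kp * (t - t_open) / tau    # linear ramp during deployment
        else:
            kt = k + kp                         # fully open
        v += (g - kt * v / m) * dt
        y += v * dt                             # distance fallen
        t += dt
    return v, y
```

Well after deployment the speed settles to the terminal value $mg/(k+k_p)$, which is a convenient check on the integration.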
I would suggest another way to model the chute deployment, see diagram below:
Split the expected deployment time into a number of intervals (5 shown), each with its average $k_i$ value. The derivation in the previous answer can then be applied to each value $k_i$ and its time interval. Adding all the obtained $y_i$ gives an approximate total displacement during deployment. This is of course a popular method of numerical integration. | {
"domain": "physics.stackexchange",
"id": 28929,
"tags": "homework-and-exercises, newtonian-gravity, drag, air, free-fall"
} |
Can the property of non-Newtonian fluid be controlled? | Question: I understand that a non-Newtonian fluid can become solid under a high shear rate. What I can't find online is whether the shear rate at which the NNF solidifies can be controlled. For example, when an electric field with field strength A is applied to the NNF, the NNF would only start solidifying at shear rate X, but when the field strength is B the NNF would only start solidifying at shear rate Y.
Answer: Yes, one such example is called electro-rheological fluid, in which the viscosity can be locally controlled by application of an electric field. This can be used in mechanical systems to clutch drive shafts in and out of engagement electrically, with no moving parts to wear out. Certain Honda cars use this to switch in and out of four-wheel drive in response to signals from slippage sensors in the wheels of the car. | {
"domain": "physics.stackexchange",
"id": 66653,
"tags": "non-newtonian-fluids"
} |
Is the enthalpy of a molecule determined by the energy of the electrons? | Question: I understand that the strength of the bond of a molecule is determined by the potential and kinetic energy of the electrons. I also understand that a reaction where the bond strength of the products is stronger than the bond strength of the reactant releases energy (is exothermic).
My textbook states that in a reaction where NaOH dissolves in water, the energy of the solid NaOH is greater than the energy of the dissolved ions in water. Is the "energy" of the reactants and products determined by the bond strength?
Also, my textbook states that enthalpy is "the amount of energy, in the form of either kinetic or potential energy, a substance has." I'm a bit confused about enthalpy. My textbook also lists several enthalpy changes.
Are these enthalpy changes determined by the bond strength also? And is the enthalpy of a certain molecule also from the bond strength?
Answer: I find that enthalpy is one of the concepts that students struggle with the most and I feel your pain in trying to understand it!
I understand that the strength of the bond of a molecule is determined by the potential and kinetic energy of the electrons
The strength of a bond is best understood in terms of potential energy. A strong bond involves a deeper potential energy "well" that electrons and bonded atoms can exist in. The atoms involved in the bond (and the electrons too) will have kinetic energy and that can explain how the bond behaves over time, but the overall intrinsic strength of the bond is a reflection of how low the potential energy of the system is at the equilibrium bond length. We call this the bond dissociation energy and it directly tells us how strong the bond is.
https://en.wikipedia.org/wiki/Morse_potential
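For reference, the Morse potential at that link has the form

$$V(r) = D_e\left(1 - e^{-a(r-r_e)}\right)^2$$

where $D_e$ is the depth of the potential well, $r_e$ is the equilibrium bond length, and $a$ sets the width of the well. A deeper well (larger $D_e$) corresponds to a stronger bond, which is exactly the dissociation-energy picture described above.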
My textbook states that in a reaction where NaOH dissolves in water, the energy of the solid NaOH is greater than the energy of the dissolved ions in water. Is the "energy" of the reactants and products determined by the bond strength?
Yes, this refers to the potential energy of the system, which decreases because the attractions formed in the solution are stronger than the ones broken in the undissolved ionic lattice and bulk water. Essentially, you have to break weaker bonds/IMFs in the reactants than you form in the products. Please note that this is a general statement and includes all "bonds" (covalent, ionic or metallic) plus other types of attraction (IMFs, ion-dipole forces etc.) that exist before or after a process or reaction occurs.
Also, my textbook states that enthalpy is "the amount of energy, in the form of either kinetic or potential energy, a substance has." I'm a bit confused about enthalpy. My textbook also lists several enthalpy changes.
The key part here is the potential energy. When this decreases (as in your example of NaOH dissolving), the potential energy turns into thermal energy. By itself, this doesn't change the enthalpy, as that is the sum of all internal energy at constant P. However, it will raise the temperature of the system and this makes it out of thermal equilibrium with the surroundings. Heat now flows from the system and this is when the enthalpy decreases as the system is losing energy. It is best to think of the enthalpy decreasing when the potential energy of the system decreases as the heat flow is what then tends to result. The reason this happens, as you are suggesting, is because stronger bonds are forming than existed before.
Are these enthalpy changes determined by the bond strength also? And is the enthalpy of a certain molecule also from the bond strength?
Hopefully this last bit makes sense now, and the relationship between bond strength and the enthalpy of the system is a bit less mysterious! | {
"domain": "chemistry.stackexchange",
"id": 11897,
"tags": "enthalpy"
} |
What is a successor function (in CSPs)? | Question: In Constraint Satisfaction Problems (CSPs), a state is any data structure that supports
a successor function,
a heuristic function, and
a goal test.
In this context, what is a successor function?
Answer: A successor function is a function that generates a next state from the current state, plus the choices that affect state changes. In e.g. the 8 queens problem, a state might be the location of 5 queens so far, the choice might be where to put the next queen, and the successor function would return the resulting state with 6 queens on the board.
Typically a solver will store/cache a current state, make a choice, and use the successor function to discover what the next state is. Then it may call the heuristic function on the next state and make a decision whether to continue that search deeper into the search tree (or recursion) or try another choice at the current state. | {
"domain": "ai.stackexchange",
"id": 830,
"tags": "terminology, definitions, search, constraint-satisfaction-problems"
} |
What is the Reference in Control Theory? | Question: I've started studying Control and I've come across some concepts that I'm finding a bit difficult to understand.
For example, in the system shown here, there is a signal called "reference" as well as a feedback block. As far as I know, the feedback block has the function of getting the output as close as possible to the reference. But here is what I don't understand: what if the reference is, for example, an impulse, and I want the system to show a step when an impulse is at its input?
I'm getting confused with the difference between reference signal and input signal. Why would someone connect a feedback block in order to get the output to follow the reference? What's the advantage of doing this?
Answer: Imagine that you're heating (or cooling) a home with a modern furnace (or air conditioner).
the reference or set point is the temperature that you set your thermostat to be. the feedback signal is the actual temperature that is measured with some kinda thermometer.
the actual value that you are trying to control, whether it's temperature or the position of a robot arm or the position of a pointer in an asynchronous sample rate converter, that actual value is compared to your reference value. the difference between those two signals is what drives your "controller".
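as a toy sketch of that loop (all numbers invented for illustration), in Python:

```python
def thermostat(setpoint=21.0, t_start=15.0, t_outside=10.0,
               kp=0.5, loss=0.02, steps=600):
    """Proportional control of a leaky room: heat in proportion to the
    error (reference minus measured temperature); heat leaks outside."""
    T = t_start
    for _ in range(steps):
        error = setpoint - T            # reference - feedback
        heat = max(0.0, kp * error)     # furnace can heat but not cool
        T += heat - loss * (T - t_outside)
    return T
```

with only proportional action the room settles a little below the set point (a steady-state offset), which is one reason real controllers also add integral action.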
in the case of heating your home, if the actual temperature is below the reference (what you want your temperature to be), the furnace heat is increased. if the actual temperature is above the reference, the furnace heat is decreased. | {
"domain": "dsp.stackexchange",
"id": 12257,
"tags": "control-systems, feedback"
} |
What's wrong with this equation for harmonic oscillation? | Question: The question:
A particle moving along the x axis in simple harmonic motion starts
from its equilibrium position, the origin, at t = 0 and moves to the
right. The amplitude of its motion is 1.70 cm, and the frequency is
1.10 Hz. Find an expression for the position of the particle as a function of time. (Use the following as necessary: t, and π.)
Using the equations:
$$
x(t) = A \cos(\omega t + \Phi)
$$
$$
\omega = 2\pi f
$$
I get A = 1.7cm or 0.017m, and
$$
\omega = 6.91
$$
I know that t = 0, x = 0. Thus,
$$
0 = 0.017 \cos(\Phi )
$$
And therefore,
$$
\Phi = \pi / 2
$$
From all of this, it seems to me that the equation for position with respect to time should be:
$$
x = 0.017 \cos(6.91t + \pi/2)
$$
Am I doing something wrong, because the above is not getting checked as the right answer (it's an online homework)
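A quick numerical check of the initial velocity for each candidate phase (a sketch added for illustration, not part of the original homework statement):

```python
import math

A, f = 0.017, 1.10
w = 2 * math.pi * f            # ~6.91 rad/s

def v(t, phi):
    """dx/dt for x = A cos(w t + phi)."""
    return -A * w * math.sin(w * t + phi)

print(v(0.0,  math.pi / 2))    # negative: particle starts moving left
print(v(0.0, -math.pi / 2))    # positive: particle starts moving right
```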
Answer: The cosine has more than one zero. And the text specifies that the particle goes to the right (I assume that the x axis also goes to the right). Now in which direction does the cosine go at $\pi/2$? And where's another zero? | {
"domain": "physics.stackexchange",
"id": 2795,
"tags": "homework-and-exercises, kinematics, harmonic-oscillator"
} |
Is teleop_twist_keyboard launchable in ROS2? | Question:
Hi,
fairly new to ROS2 I am currently trying to work with launch files.
I was trying to simply launch the teleop_twist_keyboard node, but it wouldn't work.
launch_ros.actions.Node(
    package="teleop_twist_keyboard",
    node_executable="teleop_twist_keyboard",
    output='screen',
    node_name='teleop')
I don't get an error message, and the topic node connections in rqt look right too, although I don't get the usual console output (telling me which keys I should use). Is this package launchable yet? What can I do?
I am working with dashing.
Thanks!
Originally posted by relffok on ROS Answers with karma: 169 on 2019-11-18
Post score: 1
Original comments
Comment by mlanting on 2019-11-27:
have you tried running the launch command with the -d option for more debug output?
Answer:
Update: after adding the prefix 'xterm -e' to launch the node in a new terminal, everything worked as it should.
launch_ros.actions.Node(
    package='teleop_twist_keyboard',
    node_executable="teleop_twist_keyboard",
    output='screen',
    prefix='xterm -e',
    node_name='teleop')
Originally posted by relffok with karma: 169 on 2020-01-16
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by pmuthu2s on 2021-02-10:
I feel this has a work around ! Does anyone have an idea on how to do it without xterm -e?
Comment by KimJensen on 2021-10-11:
I'd also like to know, how to run this teleop node without using 'xterm'?
Comment by KimJensen on 2021-10-11:
As is stated here, then I do not believe it is possible to do it without using 'xterm': ros2/teleop_twist_keyboard/issues | {
"domain": "robotics.stackexchange",
"id": 34030,
"tags": "ros2, roslaunch"
} |
best book for a comprehensive introduction to biology for a computer science graduate interested in undertaking research in bioinformatics | Question: I have a postgraduate degree in computer science, and I wish to undertake research in bioinformatics. I had chemistry but no biology in high school. I have read bioinformatics books like "biological sequence analysis" by durbin and "fundamental concepts of bioinformatics" by krane, etc. But I wish to read a comprehensive book on biology (genetics) that will help me undertake research in bioinformatics properly. I have been considering 2 books: "molecular biology of the cell" and "molecular biology of the gene". Are these books comprehensive enough? Please suggest some good books as I have almost no knowledge of biology at highschool or graduate levels.
Answer: Welcome to Biology and welcome to Biology.SE!
It is hard to answer such a question as the domains you describe are very vast.
If you want a book in an introductory book to biology, then you want to read this post and this post.
If you are more interested in population genetics and molecular evolution then this post will interest you. Still talking about evolution I'd like to attract your attention to this post for the second suggestion in the first answer: Yang, Computational molecular evolution
You'll probably want to have a look at the post A free book/resource for learning genetics?.
Book recommendations for algorithms used in evolutionary biology and Introductory book in genetics might be a post of interest as well.
Unfortunately I don't know either of the two books you cite, so I can't quite give you advice on them. Molecular Biology of the Cell is a very common book among first-year biology students. It sounds like a potentially good read for you. I know nothing about Molecular Biology of the Gene except what I just read about it on Amazon.
I am hoping someone will give you better criticism of these very general molecular biology books. | {
"domain": "biology.stackexchange",
"id": 4144,
"tags": "bioinformatics"
} |
What is the physical basis behind burnt calories estimates? | Question: We see on treadmills, bikes, fitness trackers, etc., estimations of calories burnt. These are clearly based on correlations to weight, age, heart rate. How were these values correlated?
Is there a physical way to measure the amount of calories being burnt in the body directly? Energy balance seems too primitive since calories can be expelled unburnt, can be stored, and metabolism is different between individuals. I tried to look it up but anything related to fitness on the Internet seems to be loaded with armchair experts' guesses.
Answer: I think this is off topic here and I'm not sure of a better site for it (maybe fitness.SE)...
I'll try to put as much physics as I can to keep it on topic here though. It's all about an energy balance and understanding where the energy can go when an animal exerts effort.
You can measure the basal metabolic rate (BMR) of an animal using respirometry. In particular, the volume of carbon dioxide produced in each breath can be measured directly with proper equipment and this can be related to the basal metabolic rate. This will tell you the amount of energy the animal expends just to keep itself warm, awake, thinking, etc. The relationship between carbon dioxide production and energy required is based on the chemical reactions that occur within the body, of which carbon dioxide is a product. So you measure how much was produced and you know how much of the reactions took place, and now you can determine the energy required for that to happen.
You can measure this on a whole bunch of different people of different age, weight, fitness, height, etc. and generate approximations based on the data. For BMR this can give a few different approximations.
Next you need to measure how energy output changes with whatever other metric you can measure. Ideally you would use power output (either on a treadmill, bike, etc.) along with the amount of oxygen consumed in each breath and quite possibly the change in body temperature.
At given power output levels (or given force levels depending on the testing), you can add up the heat generated by the body, the heat lost due to radiation from the skin, the heat lost due to sweat evaporation and the heat produced by using up the oxygen in the air to produce power, all of which when you sum them together will give you a number for the energy expended. Not all of these variables can be measured easily and may need to be estimated either from approximations generated by other experiments or from looking at simplified models. Based on collecting a whole bunch of data that way, you could go ahead and compute a set of approximations for height, weight, age based on speed (to replace power since most units don't measure power). Some base it on heart rate also.
However, if you are on a device that does measure power directly and also provides you with an energy estimate, it is most likely doing it a different way. By integrating the power it measures vs time, it is providing you with an energy output. This is technically not the energy used by you but the energy that made it into the exercise equipment! In other words, it doesn't count all the energy lost to body heat and only counts "useful energy." Some devices may apply a correction and add to this number an estimate for energy output based on estimates for BMR, heating, etc.
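That integration is straightforward: given a trace of power samples, summing power × time gives energy in joules, which converts to kilocalories. A sketch (the power trace below is hypothetical):

```python
def kcal_from_power(samples_w, dt_s=1.0):
    """Trapezoidal integration of a power trace (watts) into kilocalories."""
    joules = sum((a + b) / 2.0 * dt_s for a, b in zip(samples_w, samples_w[1:]))
    return joules / 4184.0    # 1 kcal (dietary Calorie) = 4184 J

# one hour at a steady 200 W:
print(round(kcal_from_power([200.0] * 3601), 1))   # 172.1
```

As noted above, this counts only the mechanical work delivered to the machine, not the (much larger) total metabolic cost.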
You could dig through physiological journals and probably pull out expressions for these curve fits. That doesn't mean that what you see on your treadmill is using them though. Manufacturers like to hide what they use for their computations and like to only report accuracy and precision of their measurements. And usually only one one variable, not on the rest. So for example, if you purchase a power meter for a bike and it says it has accurate power measurements to within X%+/-Y%, that only tells you how good it is at measuring power. If they don't also provide bounds on their energy estimate, or equations for how they determine it, there is no telling how accurate it really is. | {
"domain": "physics.stackexchange",
"id": 20734,
"tags": "energy"
} |
Improving execution time for list and sampling in Python | Question: Here is my code for generating an edge list based on the Price model algorithm. I think there are two points where execution time could be reduced:
use proper data type
use faster random sampling
Originally, I used a list instead of a NumPy array, and then changed from extending the array up to c*n to preallocating a fixed-size array (and changing the initial values).
This made execution faster, but it is still much slower than the same implementation written in Matlab.
import pickle
from random import random, sample
import numpy as np
def gen_edge_list(a, c, n):
    """
    Generate edge list based on 14.1.1
    """
    edge_list = np.zeros(c*n)
    p = float(c) / (float(c)+a)  # % for preferential attachment edges
    edge_list[0] = 1  # edge from vertex 2 to 1
    idx = 1  # point to the next index
    for t in range(3, n+1):
        if t <= c+1:
            """
            If network size is smaller than c+1,
            connect among all vertices.
            """
            edge_list[idx:idx+t-1] = [i for i in range(1, t)]
            idx = idx+t-1
        else:
            """
            decide preferential attachment or uniformly random attachment
            by given p
            """
            n_pref = len([True for i in range(c) if random() < p])
            edge_list[idx:idx+n_pref] = sample(edge_list[0:idx-1], n_pref)
            idx += n_pref
            edge_list[idx:idx+c-n_pref] = sample(range(1, t+1), c-n_pref)
            idx = idx + c - n_pref
    return edge_list

if __name__ == "__main__":
    a = [1.5]
    c = [3]
    n = 10**6
    edge_lists = []
    for i in range(len(a)):
        edge_lists.append(gen_edge_list(a[i], c[i], n))
    output = open('edge_list.pkl', 'wb')
    pickle.dump(edge_lists, output)
    output.close()
My biggest concern is especially the following:
"""
decide preferential attachment or uniformly random attachment
by given p
"""
n_pref = len([True for i in range(c) if random() < p])
edge_list[idx:idx+n_pref] = sample(edge_list[0:idx-1], n_pref)
idx += n_pref
edge_list[idx:idx+c-n_pref] = sample(range(1, t+1), c-n_pref)
idx = idx + c - n_pref
Here is my friends code written in Matlab:
a = [1.5];
c = [3];
n = 10^6;
edgelist = cell(numel(c),1);
for i = 1:numel(c)
    p = c(i)./(c(i)+a(i));
    edgelist{i} = zeros(1, c(i).*n);
    edgelist{i}(1) = 1;
    idx = 2;
    for t = 3:n
        if t <= c(i)+1
            edgelist{i}(idx:(idx+t-2)) = 1:(t-1);
            idx = idx+t-1;
        else
            pref_or_rand = rand(1,c(i)) < p;
            prefn = sum(pref_or_rand);
            edgelist{i}(idx:(idx+c(i)-1)) = [edgelist{i}(randi(idx-1,1,prefn)) randi(t,1,c(i)-prefn)];
            idx = idx+c(i);
        end
    end
end
I don't know what makes this huge difference in execution time between them (40 sec in Matlab on a MacBook Pro vs. 40 min with the Python code on a recent i5 machine running Debian).
If you have any idea, please let me know.
Answer: I have changed the sampling part and it now takes only 19 sec. I realized that NumPy's random sampling is way faster than Python's built-in random sampling.
import pickle
import numpy as np
from numpy.random import randint, rand, choice
def gen_edge_list(a, c, n):
    """
    Generate edge list based on 14.1.1
    """
    edge_list = np.zeros(c*n)
    p = float(c) / (float(c)+a)  # % for preferential attachment edges
    edge_list[0] = 1  # edge from vertex 2 to 1
    idx = 1  # point to the next index
    for t in range(3, n+1):
        if t <= c+1:
            """
            If network size is smaller than c+1,
            connect among all vertices.
            """
            edge_list[idx:idx+t-1] = [i for i in range(1, t)]
            idx = idx+t-1
        else:
            """
            decide preferential attachment or uniformly random attachment
            by given p
            """
            n_pref = np.sum(rand(c) < p)
            edge_list[idx:idx+n_pref] = choice(edge_list[0:idx-1], n_pref)
            idx += n_pref
            edge_list[idx:idx+c-n_pref] = randint(1, t+1, c-n_pref)
            idx = idx + c - n_pref
    return edge_list | {
"domain": "codereview.stackexchange",
"id": 19884,
"tags": "python, performance, numpy, matlab"
} |
Sprockets on a rotating body | Question: Sorry, I'm having a migraine and can't think right now, which is why I am posting such a basic question for confirmation...
The above diagram shows two sprockets on a beam which is attached to a driven gear.
The Red sprocket doesn't rotate (it is fixed). The orange sprocket is at the other end of the beam and can spin freely. There's a chain between them (not drawn).
Relative to the gear/beam, the red sprocket is rotating in reverse.
So would I be correct in saying that the orange sprocket will turn with the opposite angular velocity of the Driven Gear?
Answer: If the sprockets have the same number of teeth both the red and orange sprockets will remain in the same relative orientation. The writing on the orange will remain horizontal.
If red has N teeth then the chain will advance N links per rotation. If orange has n teeth then it will rotate N/n turns relative to the red.
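That tooth-count arithmetic can be written out explicitly (a trivial sketch with made-up tooth counts):

```python
def relative_turns(fixed_teeth, free_teeth):
    """Turns of the free sprocket per beam revolution, relative to the
    beam: the chain advances N links past a sprocket with n teeth."""
    return fixed_teeth / free_teeth

# Equal sprockets: exactly one opposite turn relative to the beam per
# revolution, so the free sprocket keeps its orientation in the world frame.
assert relative_turns(30, 30) == 1.0
assert relative_turns(40, 20) == 2.0
```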
So rotating the beam one revolution clockwise will result in the orange sprocket rotating anti-clockwise relative to the beam by N/n turns. | {
"domain": "engineering.stackexchange",
"id": 3889,
"tags": "gears, mechanisms"
} |
Predicting the outcome of sporting events with multiplicative scoring | Question: In the Olympic format for sport climbing, eight athletes compete in three rounds of climbing. Their final score is the multiplication of their rankings in each round. For example, an athlete who comes 1st in the first round, 5th in the second round, and 7th in the third will have a final score of $1\times5\times7=35$. The athletes with the lowest final score wins.
Assuming that the competition is already partly underway (possibly even mid-round), is there a computer algorithm to quickly compute the probabilities $P_{ar}$ of each athlete $a$ achieving a final ranking $r$, assuming the performance of the athletes is entirely random from here on? Even with 8 athletes the brute force method seems too computationally intensive.
If this isn't computationally possible in a reasonable time, is there an algorithm to get "close enough" to those probabilities?
Answer: Brute force
If you want something easy to implement, brute force might be fast enough, assuming at least one round has been completed.
There are $8! = 40320$ possible permutations of the athletes, so in any round, there are 40320 possible rankings. Assuming the first round has been completed, there are only $40320^2 \approx 1.6 \times 10^9$ possible rankings for the next two rounds, all of them equally likely. A program should be able to enumerate all of them in a few seconds or minutes and compute the probability distribution of the final rankings for each athlete.
There are faster algorithms, and I will describe them below, but I'm unsure whether the time it takes to understand and implement them will outweigh the amount of CPU time they save.
Convolution
Let the random variables $X_1,X_2,X_3$ denote the athlete's ranking in the first, second, and third rounds. Then their final score will be $S = X_1 \times X_2 \times X_3$. Our strategy will be to compute the probability distribution for each $X_i$, then use that to compute the probability distribution for $S$.
Note that if a round has been completed, then the corresponding probability distribution is easy to compute: it assigns probability 1 to the rank the athlete actually obtained, and 0 to all others.
We can compute the probability distribution for a round that hasn't begun by enumerating all $8! = 40320$ possible permutations, observing the athlete's rank in each, and summing up how often each occurs. Or, more simply, we can note that since all permutations are equally likely, the distribution on the athlete's rank in this round is uniform, i.e., all possibilities have probability $1/8$.
We can compute the probability distribution for a round that has been partly completed by enumerating all $8! = 40320$ possible permutations, filtering out all of them that are incompatible with the results observed so far, then observing the athlete's rank in each that remains, and summing up how often each occurs.
So in this way we can obtain the probability distribution for $X_1$, $X_2$, and $X_3$. Now we obtain the probability distribution for $S$ from these distributions. In particular, the probability distribution for $T = X_1 \times X_2$ can be obtained as
$$\Pr[T=t] = \sum_{i=1}^8 \Pr[X_1 = i] \Pr[X_2 = t/i],$$ where a term contributes zero unless $i$ divides $t$.
This requires 8 simple steps for each value of $t$, and there are $30$ possible values of $T$, so this can be done in $8 \times 30$ steps. Next, the probability distribution for $S = T \times X_3$ can be obtained in the same way. There are $80$ possible values of $S$, so this can be done in $8 \times 80$ steps.
The result is the probability distribution for $S$, the athlete's final score. This works with any number of unfinished rounds, even if one round is partly completed. The total running time is at most $40320 + 8 \times 30 + 8 \times 80$ simple steps, which is very fast; an implementation should complete the computation in milliseconds.
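The convolution step can be sketched in a few lines of Python (illustration only: it treats the athlete's per-round ranks as independent uniform draws and ignores the permutation-filtering refinement for partly-completed rounds described above):

```python
from collections import defaultdict
from fractions import Fraction

def round_distribution(known_rank=None, n=8):
    """Rank distribution for one athlete in one round: certain if the
    round is finished, uniform over 1..n if it has not started."""
    if known_rank is not None:
        return {known_rank: Fraction(1)}
    return {r: Fraction(1, n) for r in range(1, n + 1)}

def product_convolve(dist_a, dist_b):
    """Distribution of the product of two independent discrete variables."""
    out = defaultdict(Fraction)
    for a, pa in dist_a.items():
        for b, pb in dist_b.items():
            out[a * b] += pa * pb
    return dict(out)

# Athlete finished 2nd in round 1; rounds 2 and 3 are still fully open.
x1 = round_distribution(known_rank=2)
x2 = round_distribution()
x3 = round_distribution()
score = product_convolve(product_convolve(x1, x2), x3)
# e.g. the best possible score 2 = 2*1*1 occurs only when the athlete
# wins both remaining rounds, with probability 1/64
```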
Related:
Estimate distribution of a composite variable, largest samples of set of random variables, Polynomial Computation of the probability of a number of independent events. | {
"domain": "cs.stackexchange",
"id": 15108,
"tags": "combinatorics, probabilistic-algorithms"
} |
Improving LSTM Time-series Predictions | Question: I have been getting poor results on my time series predictions with an LSTM network. I'm looking for any ideas to improve the model.
The above graph shows the True Data vs. Predictions. The True Data is smooth zig zag shaped, from 0 to 1. However the predictions rarely reach 0 or 1.
The distribution in the prediction data-set rarely reaches 0 or 1 and it's centered around 0.5.
The distribution in the True Data set, however, is even.
Here is the LSTM model built in keras:
model = Sequential()
model.add(Dropout(0.4, input_shape=(train_input_data_NN.shape[1], train_input_data_NN.shape[2])))
model.add(Bidirectional(LSTM(30, dropout=0.4, return_sequences=False, recurrent_dropout=0.4), input_shape=(train_input_data_NN.shape[1], train_input_data_NN.shape[2])))
model.add(Dense(1))
model.compile(loss='mae', optimizer='adam')
How do I get the predictions to be more similar to the true data?
Answer: OK, it seems I calculated the outputs wrongly. They were not calculated fairly across the entire data-set.
I am getting better results after improving the output calculations:
It still can be improved, but it's a great start. | {
"domain": "datascience.stackexchange",
"id": 2822,
"tags": "keras, time-series, regression, lstm"
} |
convert pixel values to meters | Question:
hello guys ,
This is how I am getting x and y from the blob. Can anyone tell me how I can convert these posX and posY values to meters or centimeters?
//Calculate the moments of the thresholded image
cv::Moments oMoments = moments(img_mask);
double dM01 = oMoments.m01;
double dM10 = oMoments.m10;
double dArea = oMoments.m00;
int posX = dM10 / dArea;
int posY = dM01 / dArea;
std::cout<<"posx:"<<posX<<"posy:"<<posY<<std::endl;
please help !!
thanks all;
Originally posted by zubair on ROS Answers with karma: 178 on 2017-04-18
Post score: 0
Answer:
Without additional information, this is simply not possible. A pixel corresponds to a ray from an object to the camera center and the position of the object on this ray is not known. Do you have any additional information? Like the distance of the object to the camera? Do you have the intrinsic calibration values for the camera?
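If those extra pieces are available, the back-projection is just the pinhole camera model. A minimal sketch (all names and numbers are hypothetical; fx, fy, cx, cy come from the camera's intrinsic calibration and the depth from a separate measurement):

```python
def pixel_to_metric(u, v, depth_z, fx, fy, cx, cy):
    """Back-project pixel (u, v) to camera-frame X, Y in meters, given
    the object's distance depth_z along the optical axis (pinhole model)."""
    x = (u - cx) * depth_z / fx
    y = (v - cy) * depth_z / fy
    return x, y

# A blob at the principal point maps onto the optical axis (X = Y = 0);
# 250 px to the right at 2 m depth with fx = 500 is 1 m to the right.
assert pixel_to_metric(320, 240, 2.0, 500.0, 500.0, 320.0, 240.0) == (0.0, 0.0)
assert pixel_to_metric(570, 240, 2.0, 500.0, 500.0, 320.0, 240.0) == (1.0, 0.0)
```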
Originally posted by NEngelhard with karma: 3519 on 2017-04-18
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by zubair on 2017-04-18:
no i dont have it now,, i will post soon my camera calibration values though,,
Comment by Martin Peris on 2017-04-18:
The only way to get the world coordinates of an object with a monocular camera is if you know the intrinsic parameters of the camera, you know the geometry of the object and you can detect at least 3 known points of the object in the image.
Comment by zubair on 2017-04-19:
i will post my camera intrinsics today
Comment by zubair on 2017-04-19:
guys, soon after having my camera calibration values,, what next i need to do convert these values in meters or find the blob distance from the camera ???
thanks
Comment by zubair on 2017-04-21:
so, i got it working guys,, closing this now,, with remark answer has been excepted because i dont have any other relevant option to select there | {
"domain": "robotics.stackexchange",
"id": 27634,
"tags": "opencv"
} |
Combining Datasets with Different Features | Question: I have multiple datasets, with slightly differing features. What tools can I use to make this a homogeneous dataset?
Dataset1:
featureA,featureB,featureC
1,7,3
4,8,4
Dataset2:
featureA,featureC,featureD,featureE
3,4,5,6
9,8,4,6
Homogeneous Dataset
featureA,featureB,featureC,featureD,featureE
1,7,3,,
4,8,4,,
3,,4,5,6
9,,8,4,6
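For illustration, the union-of-columns stacking described above can be sketched in plain Python (rows as dicts; missing cells become None — `smartbind` here is a hypothetical helper name):

```python
def smartbind(*tables):
    """Stack row-dicts from several tables, taking the union of all
    columns; cells absent from a table become None."""
    columns = []
    for table in tables:
        for row in table:
            for key in row:
                if key not in columns:
                    columns.append(key)
    return [{c: row.get(c) for c in columns} for table in tables for row in table]

d1 = [{"featureA": 1, "featureB": 7, "featureC": 3},
      {"featureA": 4, "featureB": 8, "featureC": 4}]
d2 = [{"featureA": 3, "featureC": 4, "featureD": 5, "featureE": 6},
      {"featureA": 9, "featureC": 8, "featureD": 4, "featureE": 6}]
combined = smartbind(d1, d2)  # 4 rows over featureA..featureE
```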
Answer: You can use R to do that.
The smartbind function is the perfect way to combine datasets in the way you are asking for:
library(gtools)
d1<-as.data.frame(rbind(c(1,7,3),c(4,8,4)))
names(d1)<-c("featureA","featureB","featureC")
d2<-as.data.frame(rbind(c(3,4,5,6),c(9,8,4,6)))
names(d2)<-c("featureA","featureC","featureD","featureE")
d3<-smartbind(d1,d2) | {
"domain": "datascience.stackexchange",
"id": 378,
"tags": "machine-learning, dataset"
} |
Why can't the human eye focus blue light? | Question: I recently noticed that it is hard to focus on blue light sources, especially at night. When observing a blue light source, e.g. a neon sign, it looks somewhat blurry. A sign with a different colour right beside it looks sharp.
I already know about the three kinds of cone cells in the human eye (I'm not a biologist) with their spectral sensitivity peaks in in short (S, 420–440 nm), middle (M, 530–540 nm), and long (L, 560–580 nm) light wavelengths [1]. But does the spectral sensitivity correlate with focus? Or does our eye lens refract blue light in a different way?
When I screw up my eyes looking at a blue light, it becomes less blurry, but then all the other colours are blurred.
[1] http://en.wikipedia.org/wiki/Cone_cell#/media/File:Cones_SMJ2_E.svg
Answer: The same thing happens in photography when you see an image with colored fringes around objects; this is called chromatic aberration. Check the wiki page if you're not familiar with it. It happens because the lens in the objective has different refractive properties for different wavelengths, or, put the other way around, different wavelengths refract differently within the same material. This is why the famous triangular prism is capable of separating sunlight into a rainbow. So back to the question: in our eye it is the eye lens that is responsible for focusing. Let's assume that it is homogeneous enough. When light enters our lens it is refracted into separate colours, and our lens is not complex enough to fully compensate for this. The second picture is a good approximation of the human eye lens in shape.
Edit: so when you focus one color perfectly the others will be blurry a bit since those will be out of focus due to different refraction.
taken from: http://commons.wikimedia.org/wiki/File:Prism-rainbow-black.svg
taken from:http://en.wikipedia.org/wiki/Chromatic_aberration#/media/File:Chromatic_aberration_lens_diagram.svg | {
"domain": "biology.stackexchange",
"id": 7637,
"tags": "vision, human-eye"
} |
Property of the adjoint operator in the matrix element | Question: In Quantum Mechanics, how can I prove this property?
$$\langle\psi|A^{\dagger}|\phi\rangle=\langle\phi|A|\psi\rangle^{*}$$
Answer: "Sandwiching" an operator between a bra and a ket is standard notation, but I'm going to use somewhat clearer notation and then introduce the "sandwiching" at the end to make this more clear.
The inner product between a state $\psi$ and a second state $\phi$ is written like this:
$$\langle \psi | \phi\rangle$$
If I operate on $\phi$ with the operator $A$ before I take the inner product, then I would write it like this:
$$\langle \psi | A\phi\rangle$$
On the other hand, if I operate on $\psi$ with the operator $B$ before I take the inner product, then I would write that like this:
$$\langle B \psi | \phi\rangle$$
Lastly, because of the properties of the inner product, taking the complex conjugate is equivalent to flipping the order of the states, so
$$ \langle \psi | \phi \rangle^* = \langle \phi | \psi \rangle$$
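In a finite-dimensional Hilbert space this flip property (and the adjoint identity defined next) is easy to sanity-check numerically — an illustration only, using NumPy's `vdot`, which conjugates its first argument in the physicists' convention:

```python
import numpy as np

rng = np.random.default_rng(1)
psi = rng.normal(size=3) + 1j * rng.normal(size=3)
phi = rng.normal(size=3) + 1j * rng.normal(size=3)

# <psi|phi>* == <phi|psi>
assert np.allclose(np.conj(np.vdot(psi, phi)), np.vdot(phi, psi))

# and, for a random matrix A: <phi|A psi> == <A^dagger phi|psi>
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
assert np.allclose(np.vdot(phi, A @ psi), np.vdot(A.conj().T @ phi, psi))
```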
Now that that's been taken care of, the Hermitian conjugate of an operator $A$, which we denote by $A^\dagger$, is defined as follows: for any two states $\phi$ and $\psi$,
$$\langle\phi | A \psi\rangle = \langle A^\dagger \phi | \psi \rangle$$
Therefore,
$$\langle \phi | A | \psi \rangle^* \equiv \langle \phi | A\psi\rangle^* = \langle A^\dagger \phi | \psi \rangle^* = \langle \psi | A^\dagger \phi \rangle \equiv \langle \psi | A^\dagger | \phi \rangle$$
where, at the beginning and the end, I've used the "sandwich" notation. Such notation is quite common and the reasons why it's questionable are fairly technical, so I wouldn't worry about them for now. | {
"domain": "physics.stackexchange",
"id": 44034,
"tags": "quantum-mechanics, operators, hilbert-space, notation, linear-algebra"
} |
Is $\sf{P^{NP \cap coNP}} = \sf{NP \cap coNP}$? | Question: If it is unknown, are there reasons to believe that they might not be equal?
Answer: First, this result is listed in the complexity zoo: https://complexityzoo.uwaterloo.ca/Complexity_Zoo:N#npiconp. Alternatively, it's possible to prove without much trouble (which I do below).
We want to show that $P^{NP \cap coNP} = NP \cap coNP$. Clearly, one direction is obviously true: $NP \cap coNP \subseteq P^{NP \cap coNP}$. To prove the other direction, it is sufficient to show that $P^{NP \cap coNP} \subseteq NP$ since then $P^{NP \cap coNP} \subseteq coNP$ follows simply by complementing all the languages involved and as a result $P^{NP \cap coNP} \subseteq NP \cap coNP$.
So let's prove that $P^{NP \cap coNP} \subseteq NP$.
Suppose we have a deterministic polynomial time machine $M$ which makes oracle queries for languages in $NP \cap coNP$. We define a new, non-deterministic machine $M'$ based on $M$ as follows:
Simulate $M$ until $M$ needs to run an oracle query on some input $x$ for some language $L$. At that point, run a subroutine which branches $M'$ into some number of branches, each of which attempts to learn the answer to the oracle query. The subroutine is designed so that no branch learns the wrong answer, at least one branch learns the correct answer, and some branches get "inconclusive" as a result when trying to learn the answer to the query. In $M'$, the branches with inconclusive results then reject. At this point, the only branches left are ones which know the correct answer to the query. Thus we can continue the simulation of $M$ in these branches. At the end of the simulation, the only branches which haven't rejected are the ones which successfully got through all of the steps of the simulation. These branches accept iff $M$ accepts, so the overall nondeterministic machine $M'$ accepts iff $M$ does, and hence the language decided by $M$ is in $NP$.
Since $L \in NP \cap coNP$, there exists an NP machine $N_L$ with language $L$ and another machine $N_{\neg L}$ whose language is the complement of $L$. To simulate the oracle query (is $x$ in $L$), $M'$ first runs $N_L$ on $x$ and then runs $N_{\neg L}$ on $x$ in every branch. Every resulting branch will have one branch of $N_L$'s computation and one branch of $N_{\neg L}$'s computation. Within any single branch, both the $N_L$ computation and the $N_{\neg L}$ computation cannot be accepting since that would require $x$ to both be in $L$ and in $\neg L$. Thus each branch will have either the $N_L$ computation accepting, the $N_{\neg L}$ computation accepting, or neither computation accepting. Furthermore, there must be at least one accepting branch of either $N_L$ or $N_{\neg L}$. Next, $M'$ rejects in the branches in which the computations of both $N_L$ and $N_{\neg L}$ reject. In all other branches, $M'$ has learned the answer to the oracle query (since either $N_L$ or $N_{\neg L}$ accepts $x$ in the particular branch considered in this $M'$ branch). At that point, $M'$ can continue the simulation of $M$. Note that the simulation will actually continue in at least one branch since at least one accepting branch of either $N_L$ or $N_{\neg L}$ exists. | {
"domain": "cstheory.stackexchange",
"id": 3971,
"tags": "complexity-classes, np, oracles, polynomial-hierarchy"
} |
How to interpret silhouette coefficient? | Question: I'm trying to determine the number of clusters for k-means using sklearn.metrics.silhouette_score. I have computed it for range(2,50) clusters. How do I interpret this? What number of clusters should I choose?
Answer: They are all bad. A good Silhouette would be 0.7
Try other clustering algorithms instead. | {
"domain": "datascience.stackexchange",
"id": 1153,
"tags": "clustering, scikit-learn, k-means"
} |
Schwarzschild radius equals spatial stretch at surface? | Question: Am I right that the calculated Schwarzschild radius for the moon, Earth and Sun is also the calculated value for the "stretching" of space at the surface of the moon, Earth and Sun (.01cm, .88cm, and 2.96km, respectively)?
If so, I would have thought this would have been an interesting "factoid" to include in an explanation of what GR is, especially for beginners like me. If only to give a sense of how much time curvature contributes to weak-field gravity compared to space curvature.
Time curvature seems to be the whole ballgame for what we earthlings can perceive around us, and yet that also is not explained well in a lot of the introductory literature (an exception, Gravity from the Ground Up). It seems to me that saying that time passes slower at my feet than at my head, and that this gradient is what induces falling when I step off the diving board (I believe by conservation of energy in space-time?), would be accurate (in a weak gravitational field) and quite helpful to the novice.
I realize I may be way off, and so would be grateful to be corrected.
Thanks,
John Nolan
Answer: The geometry of spacetime around a spherically symmetric mass is described by an equation called the Schwarzschild metric:
$$ ds^2 = -\left(1-\frac{2GM}{c^2r}\right)c^2dt^2 + \frac{dr^2}{1-\frac{2GM}{c^2r}} + r^2d\Omega^2 \tag{1} $$
And in flat spacetime the geometry is described by the Minkowski metric:
$$ ds^2 = -c^2dt^2 + dr^2 + r^2d\Omega^2 \tag{2} $$
If we rewrite the curved spacetime equation (1) as:
$$ ds^2 = -A(r)c^2dt^2 + \frac{dr^2}{A(r)} + r^2d\Omega^2 \tag{3} $$
where:
$$ A(r) = 1-2GM/c^2r $$
A sidenote: you've used the phrase stretching of space. I realise this isn't meant literally, but for the record this factor is not simply the stretching of space because the phrase stretching of space is largely meaningless. The metric allows us to calculate how objects move in a curved spacetime but the metric is a complicated object and can't simply be thought of as how much space has stretched.
Anyhow, back to your question.
In equation (1) $G$ is Newton's constant, $c$ is the speed of light and $M$ is the mass of the object. To simplify the equations we often write:
$$ r_s = \frac{2GM}{c^2} $$
In which case our equation for $A(r)$ simplifies to:
$$ A(r) = 1 - \frac{r_s}{r} $$
And it turns out that $r_s$ is the black hole radius for a black hole with the same mass $M$ as our object. What the paper you cite says is that at the surface of the Sun the factor $r_s/r$ is about $4$ parts per million i.e.
$$ \frac{r_s}{r} \approx 4 \times 10^{-6} $$
What you are doing is taking this number and multiplying it by the radius of the Sun, so you are calculating:
$$\frac{r_s}{r} \times r = r_s $$
And unsurprisingly the result is $r_s$ i.e. the radius of a black hole with the mass of the Sun.
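The radii quoted in the question are easy to reproduce from $r_s = 2GM/c^2$ (a quick sketch; the masses are standard reference values):

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

def schwarzschild_radius(mass_kg):
    """r_s = 2GM/c^2, in meters."""
    return 2.0 * G * mass_kg / c**2

r_sun   = schwarzschild_radius(1.989e30)  # ~2.95 km
r_earth = schwarzschild_radius(5.972e24)  # ~0.89 cm
r_moon  = schwarzschild_radius(7.342e22)  # ~0.01 cm
```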
So the answer to your question is that the numbers you are calculating are indeed the black hole radii for black holes with the mass of the Sun, Earth and Moon. But this doesn't mean the stretching of space at the surface of an object is equal to its corresponding black hole radius. | {
"domain": "physics.stackexchange",
"id": 42316,
"tags": "gravity"
} |
Genotypes in diploid/haploid cells under mitotic/meiotic cell divisions | Question: I am new to genetics, and am stuck with the following question:
If $2$ cells with genotypes $(A/a)$ and $(A/a,B/b)$ undergo mitotic and meiotic cell divisions respectively, what will be the genotypes/gene compositions in the resultant diploid and haploid cells with respect to the above-mentioned alleles?
Can someone help me how to proceed?
Answer: Though broad, here I provide a summary in graphical way.
Mitosis:
MEIOSIS: 2 successive steps : Meiosis-1 and meiosis-2
Meiosis-1
Meiosis-2
So let's see what happens with the cells given in the question.
1. Cell with genotype Aa
(monohybrid or one-point cross experiments; that means we are looking at the gene pair at 1 locus, not at any other genes):
1. A. Mitosis:
Possible genotypes of their offspring: all Aa.
1. B. Meiosis:
Gametes will show 2 possibilities: A, a.
2. Cell with genotype AaBb
(dihybrid or two-point cross experiments; that means we are looking at the genes at 2 loci, not at any other genes):
2.A. Mitosis:
All offspring AaBb.
2.B. Meiosis:
If the given condition says the loci for A (or a) and B (or b) are on the same chromosome (linked genes) and there is no crossing-over, then we will get only 2 types of gametes: AB and ab.
If crossing-over takes place between the 2 loci, then we'll get 4 types of gametes: AB, Ab, aB, ab; but their frequencies will deviate from Mendel's independent assortment.
If the 2 loci are not linked, i.e. they are located on distinct chromosomes, then we'll also get 4 types of gametes (AB, Ab, aB, ab), and they will follow Mendel's independent assortment pattern.
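The gamete enumeration above can be sketched programmatically (an illustration; `gametes` is a hypothetical helper, and the `linked=True` branch models complete linkage with no crossing-over):

```python
from itertools import product

def gametes(genotype_pairs, linked=False):
    """Enumerate gamete types from a diploid genotype given as allele
    pairs, e.g. [('A','a'), ('B','b')].  With linked=True (complete
    linkage, no crossing-over) only parental combinations appear."""
    if linked:
        return [tuple(p[0] for p in genotype_pairs),
                tuple(p[1] for p in genotype_pairs)]
    return list(product(*genotype_pairs))

assert gametes([('A', 'a')]) == [('A',), ('a',)]
assert len(gametes([('A', 'a'), ('B', 'b')])) == 4          # AB, Ab, aB, ab
assert gametes([('A', 'a'), ('B', 'b')], linked=True) == [('A', 'B'), ('a', 'b')]
```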
Reference:
Concepts of Genetics, 8th Edition (EBook) By William Klug, Michael Cummings, Charlotte Spencer / Pearson; chapter 2 (mitosis and meiosis)
The science of Genetics, 6th edition, by George Burns and Paul Bottino, Macmillan.
Genetics/ P.K. Gupta/ Rastogi Publication Meerut | {
"domain": "biology.stackexchange",
"id": 6131,
"tags": "genetics, population-genetics"
} |
Quark Exchange Feynman Diagram | Question: Consider the reaction $p^{+} + \pi ^{-} \to n + \pi^{0}$, or in terms of quarks $(uud) + (du^{*}) \to (udd) + (uu^{*})$. The reaction is just a quark exchange $u \iff d$. Is this how I draw a Feynman diagram for a quark exchange:
Answer: In a Feynman diagram for the reaction
$$p^{+} + \pi ^{-} \to n + \pi^{0}$$
or in terms of quarks
$$(uud) + (d\bar{u}) \to (udd) + (u\bar{u})$$
you need to draw all 5 quark/antiquarks.
The reaction is essentially the exchange of a $u$ and a $d$ quark between the two composites.
Notice the following features, all arising from the Feynman rules of quantum chromodynamics (the theory of quarks and gluons):
Quarks ($u$, $d$) are represented by lines with an arrow pointing from past to future.
An antiquark ($\bar u$) is represented by a line with an arrow pointing from future to past.
When a quark leaves its composite or merges with another composite, this is accompanied by the emission/absorption of a gluon ($g$).
The emission/absorption of a gluon does not change the flavor of the quark, i.e. $u$ stays $u$, and $d$ stays $d$.
The last feature above is why the Feynman diagram looks different from the one shown in your question.
"domain": "physics.stackexchange",
"id": 73193,
"tags": "particle-physics, feynman-diagrams"
} |
Need to clear some concepts: AHRS - Attitude - Yaw, Pitch and Roll - MARG sensors - INS | Question: It's been a while since I started reading about INS and orientation for quadrotors.
I came across the following terms: AHRS - Attitude - Yaw, Pitch and Roll - MARG sensors.
I know, for example, how to calculate Yaw, Pitch and Roll, but how is that related to Attitude?
What is Attitude anyway, and how is it calculated?
AHRS ("Attitude and Heading Reference System"): is it formed from Yaw, Pitch and Roll?
MARG (Magnetic, Angular Rate, and Gravity): how is it related to the other terms?
What about INS (Inertial Navigation Systems)?
My questions are about these concepts: their meaning, how they work together, how they are calculated, and which sensors suit which purpose.
Answer: When talking about vehicles (such as aircraft), attitude is just a fancy word for "orientation": the combination of yaw, pitch, and roll. These would be easy to calculate if the plane was just standing still; you'd use a compass to get the yaw, and a plumb bob to measure the pitch and roll. However, the acceleration or deceleration of the aircraft would severely alter these measurements.
Gyroscopes measure angular velocity but can't measure position, velocity, or acceleration. (Fortunately, they aren't affected by it either.)
Accelerometers can measure the force on an object (either real forces like gravity or perceived forces like centrifugal force). This measurement is often integrated to estimate velocity, or double-integrated to estimate position (both of which become increasingly inaccurate with time).
Magnetometers measure the magnitude and direction of magnetic fields.
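As a concrete illustration of how these measurements are fused in a simple AHRS, here is a toy complementary filter (a sketch only, not what any particular flight controller does; `alpha` is a hypothetical tuning constant, and `accel_pitch` would come from the accelerometer's gravity direction, e.g. `atan2(ax, az)`):

```python
def complementary_filter(pitch_prev, gyro_rate, accel_pitch, dt, alpha=0.98):
    """One step of a complementary filter: trust the integrated gyro on
    short timescales, the accelerometer's gravity direction on long ones."""
    return alpha * (pitch_prev + gyro_rate * dt) + (1 - alpha) * accel_pitch

# Stationary IMU: gyro reads ~0, accelerometer pitch constant at 0.1 rad.
pitch = 0.0
for _ in range(500):
    pitch = complementary_filter(pitch, 0.0, 0.1, 0.01)
# the estimate converges toward the accelerometer's 0.1 rad
```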
Regarding the other terms: IMUs produce some subset of MARG data (some have more sensors than others). MARG data is used by an AHRS system to compute attitude (roll, pitch, yaw). MARG data (optionally combined with other sensor data) is used by an INS system to compute both attitude and position. | {
"domain": "robotics.stackexchange",
"id": 443,
"tags": "navigation"
} |
Secret Santa with Groups in Swift | Question: I am working on a special version of Secret Santa with internal sub-groups.
Typical Secret Santa (SS) requirements:
Cannot be assigned to themselves
Should be random
Every participant should be assigned to someone
Every participant should have someone assigned to
Special requirements; there are subgroups that should be taken into consideration:
Minimize the number of participants assigned to someone in their own group (ideally none)
Implementation
I am new to Swift and I am working in a playground. Let's first consider the problems to solve:
Problems to consider
For one group (normal SS)
Let's assume a 3 people group
1->2
2->1
3->?
In that case there is no possible person for 3 to give to.
For multiple groups
Assume these 3 groups group-participant:
[1-1,1-2]
[2-1]
[3-1]
This could be assigned like this
2-1 -> 3-1
3-1 -> 2-1
1-1 -> 1-2
1-2 -> 1-1
Which is not a desired outcome.
Algorithm
Pick random participant p from all the possible participants. Also save the first p apart.
if p is not from the largest group
r is a random element from the largest group
else if p is from the largest group and there are multiple non-empty groups
r is a random element from the any other group
else if p is not from the largest group and is from the only remaining group
r is a random element from its group that it is not p
p gives present to r
Now remove p from the groups and repeat the same with p=r, until there are no more participants. The last one gives to the first one and the shuffle is complete.
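Before the Swift code, here is the same chain idea condensed into a language-agnostic Python sketch (an approximation of the algorithm above, not the implementation under review; names are made up):

```python
import random

def secret_santa(groups, seed=None):
    """Chain-based sketch: walk giver -> receiver, preferring a receiver
    from the largest remaining group other than the giver's own, then
    close the cycle back to the first participant."""
    rnd = random.Random(seed)
    pool = {g: list(members) for g, members in groups.items()}
    giver_group = max(pool, key=lambda g: len(pool[g]))
    giver = rnd.choice(pool[giver_group])
    first = giver
    pool[giver_group].remove(giver)
    pairs = {}
    while any(pool.values()):
        # largest non-empty group other than the giver's, if there is one
        others = [g for g in pool if pool[g] and g != giver_group]
        recv_group = max(others, key=lambda g: len(pool[g])) if others else giver_group
        receiver = rnd.choice(pool[recv_group])
        pool[recv_group].remove(receiver)
        pairs[giver] = receiver
        giver_group, giver = recv_group, receiver
    pairs[giver] = first  # last participant gives to the first one
    return pairs

pairs = secret_santa({"A": ["a1", "a2", "a3", "a4", "a5"],
                      "B": ["b1"], "C": ["c1", "c2"]}, seed=7)
```

By construction every participant gives and receives exactly once, nobody is assigned to themselves, and cross-group assignments are preferred whenever a second group still has members.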
Code
class Participant: CustomStringConvertible {
var name = "noname"
var contact = "nocontact"
var giveTo:Participant?
init(name:String,contact:String){
self.name = name
self.contact = contact
}
var description: String {return name + ((giveTo==nil) ?"":"->\(giveTo!.name)")}
}
class Group: CustomDebugStringConvertible {
static var allGroups:[Group]=[]
static var counter = 0
var participants:[Participant]=[]
var name = "G-x"
var description:String {return name}
var debugDescription:String {return name}
init(participants:[Participant]){
self.participants=participants
name = "G-\(Group.counter++)"
Group.allGroups.append(self)
Group.allGroups = Group.sortGroups(Group.allGroups)
}
func size()->Int{return self.participants.count}
func getRandParticipant()->Participant{
let p = participants[Int(arc4random_uniform(UInt32(participants.count)))]
return p
}
func remove(p:Participant)->Bool{
let originalIndex = participants.count
participants = participants.filter() { $0.name != p.name }
return originalIndex != participants.count
}
static func sortGroups(g:[Group])->[Group]{
//remove groups with size = 0
let rg = g.filter() {return $0.size()>0}
//sort
return rg.sort() {return $0.0.size()>$0.1.size()}
}
static func getRandGroup(groups:[Group])->Group{
return groups[Int(arc4random_uniform(UInt32(groups.count)))]
}
static func removeFromGroups(groups:[Group],p:Participant)->[Group]{
for g in groups{
if(g.remove(p)){
break;
}
}
return sortGroups(groups)
}
static func getPairs(groups:[Group])->[Participant]{
var groups2consume = groups
var giverGroup = getRandGroup(groups2consume)
var searchIn:[Group]
var giver:Participant = giverGroup.getRandParticipant()
let first = giver
var returnArray:[Participant] = []
while(groups2consume.count>0){
let indexO = groups2consume.indexOf() {$0.name==giverGroup.name}
if let index = indexO{
if (index>0){
//index > 0 means p is not in the largest group, so search in group 0 (the largest)
searchIn = [groups2consume[0]]
}else if groups2consume.count>1{
//p is in the largest group(0) and there are other groups to search in
searchIn = groups2consume
searchIn.removeFirst()
}else{
//there is only one group left
searchIn = groups2consume
searchIn[0].remove(giver)
}
if searchIn[0].participants.count>0{
let receiverGroup = getRandGroup(searchIn)
let receiver = receiverGroup.getRandParticipant()
groups2consume = removeFromGroups(groups2consume, p: giver)
giver.giveTo=receiver
returnArray.append(giver)
giverGroup = receiverGroup
giver = receiver
}else{
groups2consume = removeFromGroups(groups2consume, p: giver)
}
}else{
break
}
}
giver.giveTo=first
returnArray.append(giver)
return returnArray
}
}
Test
Group(participants:[
Participant(name: "p1", contact: "n1"),
Participant(name: "p2", contact: "n2"),
Participant(name: "p3", contact: "n3"),
Participant(name: "p4", contact: "n4"),
Participant(name: "p5", contact: "n5")])
Group(participants:[
Participant(name: "p1", contact: "n1")])
Group(participants:[
Participant(name: "p1", contact: "n1"),
Participant(name: "p2", contact: "n2")])
Group.allGroups
let pairs = Group.getPairs(Group.allGroups);
print("pairs(\(pairs.count)) done \(pairs)")
Output
pairs(8) done [p1->p1, p1->p4, p4->p2, p2->p2, p2->p1, p1->p3, p3->p5, p5->p1]
Answer: Your code works – as far as I can see – correctly. I cannot judge
the fairness of the algorithm itself, but I have some suggestions
concerning the implementation.
My first point of criticism is the use of the static Group property
allGroups:
Each created Group instance is implicitly appended to allGroups which
is not obvious to the user of your code. The call Group(participants:[...])
creates a group and discards the result (which gives a compiler warning),
so the initializer is called purely for its side effect.
It makes the code less reusable. You cannot create an independent second
set of groups.
I would suggest building an array of groups explicitly:
let g1 = Group(participants:[...])
let g2 = Group(participants:[...])
let g3 = Group(participants:[...])
let allGroups = [g1, g2, g3]
Similarly, the static Group property counter is used to assign
consecutive names to each created group. This is acceptable, but why
not create each group with an explicit name (as you already do for the
participants)?
let g1 = Group(name: "G1", participants:[...])
let g2 = Group(name: "G2", participants:[...])
let g3 = Group(name: "G3", participants:[...])
There is too much logic in the Group class. In fact all the static methods
are used for the "Secret Santa" algorithm, and do not use any state of
the Group class. I suggest creating a separate
class for that purpose:
let g1 = Group(name: "G1", participants:[...])
let g2 = Group(name: "G2", participants:[...])
let g3 = Group(name: "G3", participants:[...])
let santa = SecretSanta(groups: [g1, g2, g3])
let pairs = santa.assignPairs()
print(pairs)
Now let's have a look at the Participant class. There is no need to assign
default initial values "noname", "nocontact" because both properties are
assigned to in the init method. Moreover, name and contact never change,
so they can be constant properties with let:
class Participant {
let name : String
let contact : String
var giveTo : Participant?
init(name: String, contact: String) {
self.name = name
self.contact = contact
}
}
For better structuring of the code, protocol implementations such as
CustomStringConvertible can be written as an extension.
Testing against nil and implicit unwrapping should be avoided in favor
of optional binding:
extension Participant: CustomStringConvertible {
var description: String {
var desc = name
if let receiver = giveTo {
desc += " -> " + receiver.name
}
return desc
}
}
or more compactly using the Optional.map() method:
extension Participant: CustomStringConvertible {
var description : String {
return name + (giveTo.map { " -> " + $0.name } ?? "")
}
}
In the remove() method of Group the name of the participant is used
to identify it in an array. This is error-prone because it relies on unique
names. A better solution is to implement the Equatable protocol, so that
participants can be compared with ==. Since Participant is a class,
i.e. a reference type, this can be done with the "identical-to"
operator ===, i.e. only identical instances are considered equal:
extension Participant : Equatable { }
func ==(lhs : Participant, rhs : Participant) -> Bool {
return lhs === rhs
}
Now a simple participants.indexOf(p) can be used to find a participant
in an array.
The same suggestions apply to the Group class. If we remove the
static properties and methods, this is what it could look like:
class Group {
let name : String
var participants: [Participant]
init(name: String, participants: [Participant]) {
self.name = name
self.participants = participants
}
var size : Int {
return participants.count
}
func randomParticipant() -> Participant {
precondition(participants.count > 0, "participants array is empty")
return participants[Int(arc4random_uniform(UInt32(size)))]
}
func removeParticipant(participant : Participant) {
guard let index = participants.indexOf(participant) else {
fatalError("participant not found in group")
}
participants.removeAtIndex(index)
}
}
extension Group : CustomStringConvertible {
var description : String { return name }
}
extension Group : Equatable { }
func ==(lhs : Group, rhs : Group) -> Bool {
return lhs === rhs
}
I have made size a computed property instead of a function, similar to
count for arrays or length for strings.
In getRandParticipant() I have removed the "get" prefix which should not
be used. In addition, a precondition is added to detect programming errors.
remove() is renamed to removeParticipant() and expects that the
participant is present in the array. Instead of filtering the array,
indexOf and removeAtIndex are used as a small optimization.
The remaining logic is moved to a SecretSanta class:
class SecretSanta {
var remainingGroups : [Group]
init(groups : [Group]) {
// Make a copy of the group list:
self.remainingGroups = groups.map { Group(name: $0.name, participants: $0.participants) }
}
// ....
func assignPairs() -> [Participant] { ... }
}
The init method makes a copy of all groups, so that the algorithm can freely remove
participants without modifying the original groups.
Your
static func sortGroups(g:[Group])->[Group]
static func getRandGroup(groups:[Group])->Group
methods become instance methods without parameters which operate directly
on the remainingGroups property:
func randomGroup() -> Group {
precondition(remainingGroups.count > 0, "groups array is empty")
return remainingGroups[Int(arc4random_uniform(UInt32(remainingGroups.count)))]
}
func sortGroups() {
// Remove groups without participants:
remainingGroups = remainingGroups.filter { $0.size > 0 }
// Sort by number of participants in decreasing order:
remainingGroups.sortInPlace { $1.size < $0.size }
}
Note that you can omit the return statement in a single-expression closure.
Your
static func removeFromGroups(groups:[Group],p:Participant)->[Group]
is not needed anymore. At each point in the algorithm, it is known to which
group a participant belongs, so there is no need to search it in a group
array. Also sorting the groups from within that method is confusing.
Using all that, the main Secret Santa method can be written as
func assignPairs() -> [Participant] {
var sortedParticipants : [Participant] = []
sortGroups()
var giverGroup = randomGroup()
var giver = giverGroup.randomParticipant()
let firstGiver = giver
while true {
sortedParticipants.append(giver)
giverGroup.removeParticipant(giver)
guard let groupIndex = remainingGroups.indexOf(giverGroup) else {
fatalError("group not found")
}
// Determine destination group:
var receiverGroup : Group
if groupIndex > 0 {
// giver is not from the largest group(0)
receiverGroup = remainingGroups[0]
} else if remainingGroups.count > 1 {
// giver is from the largest group(0), there is at least one other group
repeat {
receiverGroup = randomGroup()
} while receiverGroup == giverGroup
} else if remainingGroups[0].size > 0 {
// There is only one group, but at least one receiver left
receiverGroup = remainingGroups[0]
} else {
// This was the last giver
giver.giveTo = firstGiver
break
}
// Determine receiver in destination group and assign:
let receiver = receiverGroup.randomParticipant()
giver.giveTo = receiver
// Prepare for next round:
sortGroups()
giverGroup = receiverGroup
giver = receiver
}
return sortedParticipants
}
I have called it assignPairs because it modifies the participants.
A better name is surely possible.
I have tried to write the code in a way that I understand it :)
Removing the current "giver" from its group early in the loop makes things a little
bit easier later.
The extra groups array searchIn is not needed if we accept that in one
case, several calls to randomGroup() are necessary. In the worst case
(only 2 groups left) the average number of calls is 2.
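That average of 2 is just the mean of a geometric distribution: with two equally likely groups each draw hits the other group with probability 1/2, so 1/(1/2) = 2 calls are expected. A quick simulation confirming it (my addition, purely illustrative):

```python
import random

def average_draws(n_groups=2, trials=100_000, seed=42):
    """Average number of uniform group draws until a group other
    than the giver's own (index 0 here) comes up."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        draws = 1
        while rng.randrange(n_groups) == 0:  # drew our own group, try again
            draws += 1
        total += draws
    return total / trials
```

With n remaining groups the expectation is n/(n-1), so the retry loop stays cheap.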
In your code, at
let indexO = groups2consume.indexOf() {$0.name==giverGroup.name}
indexO is expected to be non-nil. It is better to abort with an error
if that is not satisfied (to detect programming errors early) instead of
"silently" breaking from the loop. Note also that
remainingGroups.indexOf(giverGroup)
takes advantage of the Equatable protocol which we implemented for Group,
and does not rely on unique names anymore.
More suggestions:
Remove the giveTo property from Participants to make the
instances immutable, and return a list of pairs (e.g. tuples)
instead.
Add a check that the same participant is not added to more
than one group (and throw an error if that happens). I did not check how that situation is handled in your
code or my modification, but I don't expect it to work correctly.
More whitespace – at least around operators and {} code blocks. | {
"domain": "codereview.stackexchange",
"id": 17518,
"tags": "algorithm, swift, shuffle"
} |
Degrees of Freedom of a Linear Triatomic Molecule | Question: I was introduced to a formula for finding the DOF of a molecule which was
$3N-k$ and I was told it was just for translational and rotational degrees of freedom. Here $N$ is the no. of atoms in that molecule, and $k$ is the no. of independent criteria (like bonds). For example, if it's a diatomic molecule, then $DOF=3(2)-1$ which is 5. But by this, when I find for a triatomic linear molecule $DOF=3(3)-2$ which is 7. And visualising the molecule I could find only 5 DOF.
So I raised the question, but I didn't find a satisfactory response.
If there are any considerations or something that's missing, please help.
Answer:
Source:https://en.wikipedia.org/wiki/Degrees_of_freedom_(physics_and_chemistry)#Counting_the_Minimum_Number_of_Coordinates_to_Specify_a_Position
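To make the counting explicit (this breakdown is standard, though it is my addition here): the $3N-k = 3(3)-2 = 7$ coordinates of a linear triatomic with two rigid bonds split as

$$7 ~=~ \underbrace{3}_{\text{translation}} ~+~ \underbrace{2}_{\text{rotation}} ~+~ \underbrace{2}_{\text{bending}}$$

The two bending modes are vibrational. At moderate temperatures their energy quanta are much larger than $k_B T$, so they are frozen out and do not contribute to equipartition, leaving the $3+2=5$ degrees of freedom you found by visualising the rigid molecule.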
Clearly you can see that all linear molecules at moderate temperature have 5 DOF | {
"domain": "physics.stackexchange",
"id": 74149,
"tags": "thermodynamics, molecules, degrees-of-freedom"
} |
Reduce Set problem to SAT | Question: So the problem is, given some set $M = \{x_1,x_2,\ldots,x_n\}$ and a set of subsets $S = \{S_1, S_2, \ldots, S_m\}$ where $S_i \subseteq M$. We want to find some set $X \subseteq M$ such that $|X| \le k$ and $X \cap S_i \neq \emptyset$ for all $S_i \in S$.
My solution, I would take some set $M = \{x_1,x_2,x_3,x_4\}$ and suppose $S_1 = \{x_1,x_2\}$, $S_2 = \{x_2,x_3\}$, $S_3 = \{x_4\}$. I would then transform this to a SAT instance to get:
$\phi = (x_1 \vee x_2) \wedge (x_2 \vee x_3) \wedge (x_4)$
Clearly if $\phi$ is satisfiable then there exists some $X \subseteq M$ hitting every $S_i$; however, this does not guarantee that $|X| \le k$.
So my question is, how can I reduce this problem further so that $|X| \le k$ in polynomial time?
EDIT
I realized there may be an easier way to reduce this to the set-cover problem but need confirmation that my idea is correct.
Will post a new question containing this.
Answer: The final constraint you need to encode is that $k$ or fewer variables in the set $\{x_1, x_2, \ldots, x_n\}$ are set to true. There is a good reference question that outlines several methods for encoding a 1-out-of-$n$ constraint. Those methods can be extended to fit your constraint. Using a chain of adder circuits plus a comparison circuit seems the most straightforward method to me. | {
"domain": "cs.stackexchange",
"id": 3118,
"tags": "complexity-theory, reductions, satisfiability"
} |
Temperature in the definition of entropy? | Question: In the definition of entropy
$$\mathrm d S=\left(\frac{ đQ}{T}\right)_\textrm{rev}$$
is $T$ the temperature of the system or of the environment (reservoirs)?
In Clausius' Theorem,
$$\oint \frac{đQ}{T}\leq 0$$
$T$ is the temperature of the reservoirs
But since to calculate $\mathrm dS$ we consider a reversible transformation, the temperature of the system and of the reservoirs should be always the same.
So can I say that $T$ is the temperature of the system in the definition of entropy?
Answer: As you said, in any reversible transformation the system and the reservoir have the same temperature. So, since the definition of entropy requires that you take the system through a reversible path, you can use the system's temperature or the reservoir's temperature interchangeably. | {
"domain": "physics.stackexchange",
"id": 33855,
"tags": "homework-and-exercises, thermodynamics, entropy, definition"
} |
Finger Exercise: Update book cipher by creating new book | Question: I'm working my way through the finger exercises in John Guttag's book Introduction to Python Programming, third edition. The following finger exercise on page 143 describes encryption/decryption with a book cipher. The first exercise was to implement decoder and decrypt in the same style as encryption is working.
The book mentions this bug and gives the task to implement a solution like they hinted at:
The bug:
When a character occurs in plain text but not in the book,
something bad happens. The code_keys dictionary assigns -1 to
each such character, and decode_keys assigns -1 to the last
character in the book, whatever that may be.
The solution:
Create a new book by appending something to the original book.
My solution new_book uses a lambda function to be in line with the rest of the code. But this defeats the purpose of encryption if the plain text is needed during both encryption and decryption.
# Mapping individual, unique letters of plaintext to index number
gen_code_keys = (lambda book, plain_text:
{c: str(book.find(c)) for c in plain_text})
# Slicing operator [1:] at the end removes the leading '*'
encoder = (lambda code_keys, plain_text:
"".join(["*" + code_keys[c] for c in plain_text])[1:])
# Encrypt plain_text by giving the encoder a book for indexing
encrypt = (lambda book, plain_text:
encoder(gen_code_keys(book, plain_text), plain_text))
# Decode key generator analogue to gen_code_keys
gen_decode_keys = (lambda book, cipher:
{c: book[int(c)] for c in cipher.split("*")})
# Implement decoder by looping through all numbers in cipher
decoder = (lambda decode_keys, cipher:
"".join((decode_keys[c] for c in cipher.split("*"))))
# Decrypting the cipher analog to encrypt function
decrypt = (lambda book, cipher:
decoder(gen_decode_keys(book, cipher), cipher))
Don_Quixote = "In a village of La Mancha, the name of which I have no " \
"desire to call to mind, there lived not long since one of those " \
"gentlemen that keep a lance in the lance-rack, an old buckler, a lean " \
"hack, and a greyhound for coursing."
# --------------- My Solution --------------- #
# Creating new book with missing letters in book to fix cipher bug
new_book = (lambda book, plain_text:
book + "".join([c for c in plain_text if c not in book]))
# plain text contains the letters: !ABENQjx
plain_text = "No joke, Abraham Boston had six beer. Everyone it Q!"
# Create new book by appending missing letter from plain_text
book = new_book(Don_Quixote, plain_text)
print(plain_text)
cipher = encrypt(book, plain_text)
print(cipher)
print(decrypt(book, cipher))
Ok, so I changed my code so far that I create the updated book with the missing characters appended before encrypting/decrypting. Thus I removed the function call new_book inside the encrypt and decrypt function. With that at least I don't need the plain_text in my decrypt function.
Are there further improvements I could make?
Answer: It is difficult to answer this question. There is so much about the code I hate, but I have to restrict my review to only your code: the code after the # —- My solution -- # comment marker.
Forget lambda
lambda functions have their uses. This is not one of them. Compare this:
new_book = (lambda book, plain_text:
book + "".join([c for c in plain_text if c not in book]))
with this:
def new_book(book, plain_text):
return book + "".join([c for c in plain_text if c not in book])
They both do exactly the same thing: they create a function accepting two parameters, and assign that function to the identifier new_book. In my opinion, the latter is clearer; the reader can immediately tell a function is being defined. The indentation is less, and a level of parentheses has been removed.
Additionally, several language features are available with the def version - type-hints and """docstrings""" - which can further improve understandability.
def new_book(book: str, plain_text: str) -> str:
"""
Create a new “book” to use for encryption and decryption,
will is a duplicate of the original book, plus any letters
in the `plain_text` message which are missing from the original
book.
"""
return book + "".join([c for c in plain_text if c not in book])
O(mn)
Your book has a certain number of characters. Let’s call that m. Your plain_text has a certain number of characters. Let’s call than n.
[c for c in plain_text if c not in book]
This loops n times, going through all of the letters of plain_text, and for each letter, it searches book for an occurrence of that letter. If a letter is not found in book, that is only determined after scanning through all m letters of the book. n outer loop iterations with up to m iterations in the inner loop means this has a worst case of \$O(m*n)\$ operations. Potentially quite slow!
This worst-case bound can be improved, but it requires a bit of preprocessing … which is easy with the def version of the function:
def new_book(book, plain_text):
book_letters = set(book)
return book + "".join([c for c in plain_text if c not in book_letters])
Turning book into a set of letters is an \$O(m)\$ operation. With that, c not in book_letters becomes an \$O(1)\$ operation. Repeated in the loop n times makes it \$O(n)\$, so the entire function becomes \$O(m + n)\$ … considerably faster.
But wait! While we’re at it, what is with the temporary list of characters [c …]? It is created just to be iterated over in the join(…). Why not just iterate without the temporary list creation?
def new_book(book, plain_text):
book_letters = set(book)
return book + "".join(c for c in plain_text if c not in book_letters)
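A quick sanity check of this version (the example strings are mine). One subtlety worth noticing: a missing letter that occurs several times in plain_text is appended several times, since book_letters is built once from the original book and never updated:

```python
def new_book(book, plain_text):
    # set-based version from above: O(m + n) instead of O(m * n)
    book_letters = set(book)
    return book + "".join(c for c in plain_text if c not in book_letters)

print(new_book("abcdef", "face Q!"))  # only ' ', 'Q' and '!' are new -> "abcdef Q!"
print(new_book("ab", "cc"))           # the missing 'c' is appended twice -> "abcc"
```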
Better solutions
Why add plain_text to the book? If you and your friend (or spy) have exchanged a book, and then you need to send a message with letters which were not in the book, you have to send a complete new_book to your friend (spy) … which could be intercepted, making the encryption useless.
Plus, if a second message is sent with still different letters, yet another book would be necessary. A third message would require a third book, and so on! Decoding the cipher would require knowing which book was used to encode it (book1, book2, book3, …) — information which is not given.
It is a simple matter to expand the book without relying on plain_text. Just unconditionally add string.printable to the original book. This easily takes care of your missing !ABENQjx characters.
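Here is a sketch of that fix (my compressed restatement of the question's cipher, using def instead of lambda; string.printable is the standard library's constant of all printable ASCII characters):

```python
import string

def encrypt(book, plain_text):
    # index of the first occurrence of each character; -1 when absent
    return "*".join(str(book.find(c)) for c in plain_text)

def decrypt(book, cipher):
    return "".join(book[int(i)] for i in cipher.split("*"))

small_book = "abc"
full_book = small_book + string.printable  # built once, without looking at any plain text
```

Any printable-ASCII message now round-trips through full_book, whereas with small_book a missing letter encodes as -1 and silently decodes as book[-1].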
Except, it is still not a complete solution. Résumé would still result in -1 encodings. Adding the entire Unicode character set to book would be required, if you stayed with this approach. Instead, why not record the maximum value your book encoder returns, and return that value plus ord(c) for characters not in the book? When an encoded value exceeds the maximum value the book can decode, subtract that maximum, and use chr(#) to convert it back to the required character. | {
"domain": "codereview.stackexchange",
"id": 44083,
"tags": "python, programming-challenge, hash-map, lambda"
} |
Is Zitterbewegung an artefact of single-particle theory? | Question: I have seen a number of articles on Zitterbewegung claiming searches for it such as this one: http://arxiv.org/abs/0810.2186. Others such as the so-called ZBW interpretation by Hestenes seemingly propose to explain electron spin as a consequence of ZBW.
According to Itzykson and Zuber p.62, Zitterbewegung is an artefact of considering a single-particle theory. It has been pointed out in this question's replies that it is not a physical phenomenon: What was missing in Dirac's argument to come up with the modern interpretation of the positron?.
How does the upgrade to many-particle theory solve the issue?
Answer: The Zitterbewegung is more of a relic of the early Dirac equation days. It does not exist in the standard position, velocity and acceleration operators of the single particle field, only in alternatively derived versions. These alternative versions were developed because people thought the standard operators were wrong. In fact they didn't understand the standard operators. The standard method is using:
$\frac{d\tilde{O}}{dt} ~=~ \frac{i}{\hbar}\left[~\tilde{H},\tilde{O}~\right]$
Where the misunderstanding comes from is easy to see in the modern Chiral representation. We will show that the standard operators are correct. If we define a position, a velocity and an acceleration operator for the Dirac field then the (averaged) position and velocity and acceleration are given by:
Position, Velocity and Acceleration operators applied on the Dirac field:
$\vec{x}_{avg} ~=~ \frac{1}{2mc}\int dx^3 ~~ \psi^* \vec{X}~\psi ~~~~~~~~~ (\vec{X}: \mbox{position operator}) $
$\vec{v}_{avg} ~=~ \frac{1}{2mc}\int dx^3 ~~ \psi^* \vec{V}~\psi \,~~~~~~~~~ (\vec{V}: \mbox{velocity operator}) $
$\vec{a}_{avg} ~=~ \frac{1}{2mc}\int dx^3 ~~ \psi^* \vec{A}~\psi \,~~~~~~~~~ (\vec{A}: \mbox{acceleration operator}) $
Velocity operator
Now $\vec{X}$ is simply the position $\vec{x}$ of each point of the wavefunction. The velocity operator can be derived by commutating with the Hamiltonian.
$\tilde{V}^i\psi\ =\ \frac{i}{\hbar}\left[~\tilde{H},\tilde{X}^i~\right]\psi\ =\ c \left( \begin{array}{cc} -\sigma^i & 0 \\ 0 & \sigma^i \end{array} \right)\psi$
This velocity operator is in fact totally correct but it was thought to be erroneous in the early days because people misunderstood it to mean that the electron can only move with $\pm\,c$, and therefore must be wrong.
What they were actually expecting was something like the $\vec{v}=\vec{p}/m$ as they got in non relativistic theories, but they found something which only contained $\pm\,c$. However, if we evaluate the expression for $\vec{v}_{avg}$ then we get.
$\vec{v}_{avg} ~=~ \frac{c}{2mc}\int dx^3 ~~ \psi^* \left( \begin{array}{cc} -\sigma^i & 0 \\ 0 & \sigma^i \end{array} \right)\psi ~~=~~ \frac{c}{2mc}\int dx^3 ~~ \bar{\psi} \gamma^i \psi ~~=~~ \frac{c}{2mc}\int dx^3 ~ j^i$
This is an integral over the current density, or the momentum with the appropriate units. Now the momentum $\vec{p}$ is a factor $\gamma$ larger as the velocity $\vec{v}$ but the integral over the Lorentz contracted field compensates this so we end up with the velocity of the particle as required! The velocity operator is perfectly fine.
The other big misunderstanding was that the x, y and z components of the velocity operator do not commute, while they do so in the non-relativistic theory, and therefore the operator must be wrong, they thought. You can still find this quoted in many textbooks.
But as you see the expression derives the velocity from the momentum, and as we know the boost components should not commute. So the components of the velocity operator should not commute either. Again the operator behaves exactly in the right way, and it doesn't show a zitterbewegung at all.
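Explicitly (this identity is my addition; it follows from standard Pauli-matrix algebra), the components of the velocity operator above satisfy

$\left[\tilde{V}^i,\tilde{V}^j\right] ~=~ 2ic^2\,\epsilon_{ijk} \left( \begin{array}{cc} \sigma^k & 0 \\ 0 & \sigma^k \end{array} \right)$

so their commutator is proportional to the spin operator, exactly the kind of non-commutativity expected of boost-like quantities, and nothing pathological.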
Acceleration operator
We'll briefly handle the standard acceleration operator as well and show that there is no zitterbewegung and that the result transforms in the right way under a Lorentz transform. It can actually be shown that it transforms like the Lorentz force:
$\psi^*\tilde{A}^i\psi ~~=~~ \frac{i}{m}\frac{d\vec{p}}{dt} ~~\mbox{transforms like:}~~ \frac{iq}{m}\left(\vec{v}\times\vec{B} ~+~ \vec{E}\right)$
Because $\psi^*\tilde{A}^i\psi $ gives rise to two terms which transform like the electron's magnetization and polarization. The construction which therefor transforms like the Lorentz force is thus actually.
$\psi^*\tilde{A}^i\psi ~~\mbox{transforms like:}~~ \frac{iq}{m}\left(-\vec{v}\times\mu_o\vec{M} ~+~ \frac{1}{\epsilon_o}\vec{P}\right)$
If you note that $\vec{v}\times\vec{M}~\propto~\vec{p}\times\vec{j}_A$ then you can recognize the two terms in the standard acceleration operator, which is:
$\psi^*\tilde{A}^i\psi ~~=~~ c~\bar{\psi}\left[\,\gamma^i\gamma^5\times(\partial_i-i\frac{e}{\hbar}\!A_i) ~\right]\psi~+~ \frac{imc^3}{\hbar}~\bar{\psi}\gamma^0\gamma^i\psi$
The acceleration is zero in a plane-wave in the absence of a B or E field. In this case the electron field has its own inherent M and P values and the two terms cancel each other. If the inherent M and P values change because of external B and E fields (by addition) then the electron accelerates.
Chiral representation
Now what about the c in the velocity operator? This behavior is easy to understand from the propagator of the field in the modern chiral representation. In principle all fields are massless and propagate with c. Due to coupling, however, propagators can have any speed between +c and -c. The electron has two such massless components.
$\psi~~=~~\left(\begin{array}{c}\psi_L\\ \psi_R \end{array}\right)$
So, these two components do move at the speed of light. In the rest frame they move exactly opposite to each other and the combined speed is zero. The big difference with the zitterbewegung is that they both happen at the same time. There is no overall alternating net velocity.
Now the time evolution in the restframe is.
$e^{-iHt}\left(\begin{array}{c}\psi_L\\ \psi_R \end{array}\right) ~~=~~
\left(\begin{array}{c}\psi_L\cos(mt)-i\psi_R\sin(mt)\\ \psi_R\cos(mt)-i\psi_L\sin(mt) \end{array}\right)$
So, you see the $\psi_L$ and $\psi_R$ alternating, but is there a zitterbewegung of $\psi$ or the individual components $\psi_L$ and $\psi_R$? The answer is: NO for electrons and NO for positrons. This is because these are exactly the only two solutions of the Dirac equation which do not show a zitterbewegung. The reason for this is:
electron at rest: $\psi_L=+\psi_R$
positron at rest: $\psi_L=-\psi_R$
The other "exotic" states where $\psi_L\neq\pm\psi_R$ at rest do show a zitterbewegung, for instance $\psi_L=i\psi_R$ or $\psi_L=\sigma_z\psi_R$. This is actually the reason why these states are not allowed. They would radiate away electromagnetic energy with the frequency corresponding to their mass.
Hans. | {
"domain": "physics.stackexchange",
"id": 51484,
"tags": "quantum-field-theory, dirac-equation"
} |
How to correctly reset time | Question:
I am playing a bag file in a loop because it is convenient to keep the bag running while testing different algorithms.
The problem is that every time the bag restarts from the beginning, my nodes using tf start printing out a lot of warnings about TF_OLD_DATA. To solve this, an empty message should be published to /reset_time.
I tried pushing the reset button in RViz but nothing happens, and for example amcl complains about old tf data. Is there any solution to that problem?
Originally posted by Mehdi. on ROS Answers with karma: 3339 on 2015-08-12
Post score: 3
Answer:
You can use the --clock parameter in rosbag (e.g. rosbag play --clock wg-cafe.bag). This way the TF buffer clears automatically when time jumps backwards.
See this as well:
http://answers.ros.org/question/206748/tf_old_data-ignoring-data-from-the-past-for-frame-openni_depth_optical_frame/
Originally posted by bergercookie with karma: 257 on 2016-04-04
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Dhagash Desai on 2016-10-29:
I am running only this command and nothing happens. I start slam_gmapping in one terminal and in the other I run rosbag play --clock file.bag, and nothing happens in the gmapping terminal; the error says 100% of messages were dropped. I am using a bag file of tf and scan recorded on a P3DX.
"domain": "robotics.stackexchange",
"id": 22434,
"tags": "transform"
} |
What is the spin-orbit force? | Question: I am trying to brush up on my physics knowledge, and I have run across a term used to justify some results in a few texts that I don't recognize, the spin-orbit force $\vec s \cdot \vec l$. Quickly googling around has not yielded much, if any information, and it's just brushed over as being trivial in texts (as are most concepts, I'm finding). Would anyone be able to explain this concept to me, either conceptually or mathematically? My apologies if I'm just missing a core concept I should know inherently, my education is not in physics.
Answer: Magnetic fields repel or attract each other.
An electric charge performing a rotation (angular momentum $\mathbf{L}$) generates a magnetic field $\mathbf{B}_1 \propto \mathbf{L}$.
An electron has an intrinsic magnetic field (as if it had a bar magnetc inside) due to its spin $\mathbf{S}$, $\mathbf{B}_2\propto \mathbf{S}$.
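For reference (my addition; this is the standard textbook result for an electron in a central potential $V(r)$, with the factor $\frac{1}{2}$ coming from Thomas precession):

$$H_{SO} ~=~ \frac{1}{2m^2c^2}\,\frac{1}{r}\frac{dV}{dr}\;\mathbf{L}\cdot\mathbf{S}$$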
These two magnetic fields have an interaction that goes as $\mathbf{B}_1 \cdot \mathbf{B}_2$, and hence $\mathbf{L} \cdot \mathbf{S}$. | {
"domain": "physics.stackexchange",
"id": 77864,
"tags": "nuclear-physics, atomic-physics, quantum-spin"
} |
AMCL: Extrapolation required for lookup from frame [base_link] to [odom]. Requested time 1531474550.521278314, latest data at 1531474550.516698594 | Question:
Hi,
I am working with a real kuka youbot robot. When I run AMCL I get constantly the following warning and error:
[WARN] [1531475915.405190988]: Failed to compute odom pose, skipping scan (Lookup would require extrapolation into the future. Requested time 1531475915.373467473 but the latest data is at time 1531475915.371227514, when looking up transform from frame [base_link] to frame [odom])
[ERROR] [1531475915.405592384]: Couldn't determine robot's pose associated with laser scan
When I launch Rviz and set the initial pose with "2D Pose Estimate", /initialpose topic echo's the following:
header:
seq: 0
stamp:
secs: 1531476085
nsecs: 404504998
frame_id: "map"
pose:
pose:
position:
x: -0.121083259583
y: -0.0682992935181
z: 0.0
orientation:
x: 0.0
y: 0.0
z: 0.00537968781778
w: 0.999985529375
covariance: [0.25, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.25, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.06853891945200942]
And in the terminal running AMCL appears:
[ INFO] [1531476085.547250607]: Setting pose (1531476085.547161): -0.121 -0.068 0.011
But after that it continues throwing same warning and error mentioned before.
I can not attach the TF tree, but it is: map -> odom -> base_footprint -> base_link -> ... And the TF transformations are being published correctly, at rates (for map -> odom, odom -> base_footprint and base_footprint -> base_link respectively) of: 21.609, 42.11 and 50.981 Hz.
Here are the results of tf_monitor:
RESULTS: for all Frames
Frames:
Frame: /camera_depth_frame published by unknown_publisher Average Delay: -0.2982 Max Delay: 0
Frame: /camera_depth_optical_frame published by unknown_publisher Average Delay: -0.297965 Max Delay: 0
Frame: /camera_rgb_frame published by unknown_publisher Average Delay: -0.297891 Max Delay: 0
Frame: /camera_rgb_optical_frame published by unknown_publisher Average Delay: -0.297604 Max Delay: 0
Frame: arm_link_0 published by unknown_publisher Average Delay: -0.696617 Max Delay: 0
Frame: arm_link_1 published by unknown_publisher Average Delay: -0.191182 Max Delay: 0
Frame: arm_link_2 published by unknown_publisher Average Delay: -0.19118 Max Delay: 0
Frame: arm_link_3 published by unknown_publisher Average Delay: -0.191179 Max Delay: 0
Frame: arm_link_4 published by unknown_publisher Average Delay: -0.191179 Max Delay: 0
Frame: arm_link_5 published by unknown_publisher Average Delay: -0.191178 Max Delay: 0
Frame: base_footprint published by unknown_publisher Average Delay: -0.190222 Max Delay: 0
Frame: base_laser_front_link published by unknown_publisher Average Delay: -0.696646 Max Delay: 0
Frame: base_link published by unknown_publisher Average Delay: -0.696637 Max Delay: 0
Frame: camera_link published by unknown_publisher Average Delay: -0.29782 Max Delay: 0
Frame: caster_link_bl published by unknown_publisher Average Delay: -0.189073 Max Delay: 0
Frame: caster_link_br published by unknown_publisher Average Delay: -0.189071 Max Delay: 0
Frame: caster_link_fl published by unknown_publisher Average Delay: -0.18907 Max Delay: 0
Frame: caster_link_fr published by unknown_publisher Average Delay: -0.189069 Max Delay: 0
Frame: gripper_finger_link_l published by unknown_publisher Average Delay: -0.191177 Max Delay: 0
Frame: gripper_finger_link_r published by unknown_publisher Average Delay: -0.191176 Max Delay: 0
Frame: gripper_palm_link published by unknown_publisher Average Delay: -0.696653 Max Delay: 0
Frame: odom published by unknown_publisher Average Delay: -0.269718 Max Delay: 0
Frame: plate_link published by unknown_publisher Average Delay: -0.696657 Max Delay: 0
Frame: wheel_link_bl published by unknown_publisher Average Delay: -0.189068 Max Delay: 0
Frame: wheel_link_br published by unknown_publisher Average Delay: -0.189066 Max Delay: 0
Frame: wheel_link_fl published by unknown_publisher Average Delay: -0.189065 Max Delay: 0
Frame: wheel_link_fr published by unknown_publisher Average Delay: -0.189064 Max Delay: 0
All Broadcasters:
Node: unknown_publisher 199.909 Hz, Average Delay: -0.354249 Max Delay: 0
I do not know if there is a problem with the difference in publishing rates between the TF transform /odom ->...-> /base_link and the time at which AMCL requests that transform... I am stuck and do not know how to solve this problem. I would really appreciate any help. Thanks in advance!
Originally posted by kuka_kuka on ROS Answers with karma: 23 on 2018-07-13
Post score: 0
Original comments
Comment by Humpelstilzchen on 2018-07-13:
First check that all computers in your ROS network share the same time. It's recommended to synchronize them with NTP.
Comment by kuka_kuka on 2018-07-16:
All nodes (except Rviz) are running on same machine
Comment by Humpelstilzchen on 2018-07-16:
The timestamps 1531475915.373467473 and 1531475915.371227514 are about 2ms apart, is it always that close? To what value have you set the amcl parameter transform_tolerance? Can you post a sample of your scan?
Comment by kuka_kuka on 2018-07-16:
Time difference varies between 0-5 ms. Transform tolerance is set to the default (0.1), and changing it to 0.5 does not make any difference. Do you mean the result of the 2D map scan? The .yaml and .pgm files? The scan was made using gmapping.
Comment by kuka_kuka on 2018-07-16:
.yaml file:
image: my_map.pgm
resolution: 0.050000
origin: [-5.000000, -5.000000, 0.000000]
negate: 0
occupied_thresh: 0.65
free_thresh: 0.196
I think I can't attach images so I can't show the .pgm 2D map, but I think it is quite good. By the way, thank you for your help :)
Comment by Humpelstilzchen on 2018-07-16:
The output of a single rostopic echo /scan (or whatever your scan topic is)
Comment by kuka_kuka on 2018-07-16:
header:
seq: 74
stamp:
secs: 1531743667
nsecs: 581998729
frame_id: odom
angle_min: -0.546698212624
angle_max: 0.546698212624
angle_increment: 0.00171110546216
time_increment: 0.0
scan_time: 0.0329999998212
range_min: 0.449999988079
range_max: 10.0
Comment by kuka_kuka on 2018-07-16:
ranges: [nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 3.259399652481079, 3.256758689880371, 3.281576156616211, 3.2762844562530518, 3.273649215698242, 3.2710208892822266, 3.2957406044006348, 3.293104410171509, 3.290475368499756 ...
Comment by Humpelstilzchen on 2018-07-16:
You have set frame_id of your scan to odom, is this correct for you? Usually the laser scanner is attached to the vehicle, which for you is probably camera_link.
Comment by kuka_kuka on 2018-07-16:
I changed it to /odom some weeks ago...I have set it to camera_link again and it seems that amcl works correctly now! I have lost so much time on this...Really thank you for your help!! :D
Comment by kuka_kuka on 2018-07-16:
Do I have to add an answer with the result? Or what am I supposed to do now to indicate the answer?
Comment by Humpelstilzchen on 2018-07-16:
You can post the answer if you like. That's probably good for someone else with the same problem.
Answer:
Thanks to "Humpelstilzchen" I was able to solve this issue. The laser frame was set to /odom instead of /camera_link. After setting the frame to /camera_link, the amcl warnings and errors disappeared.
Originally posted by kuka_kuka with karma: 23 on 2018-07-16
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by aarontan on 2018-07-27:
where do you configure this?
Comment by kuka_kuka on 2018-08-14:
As I am working with a Kinect and not with a real laser I use a package that simulates the usage of a laser from the depth cloud received by Kinect. That package is called "depthimage_to_laserscan" and one of the parameters is "output_frame_id". I set that parameter to "camera_link": | {
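For reference, a launch snippet along these lines sets that parameter (the node name and image topic below are illustrative; adjust them to your own camera driver's topics):

```xml
<node pkg="depthimage_to_laserscan" type="depthimage_to_laserscan" name="depthimage_to_laserscan">
  <!-- Depth image from the Kinect; the topic name depends on your camera driver -->
  <remap from="image" to="/camera/depth/image_raw"/>
  <!-- Frame the simulated scan is stamped with: a frame rigidly attached to the robot -->
  <param name="output_frame_id" value="camera_link"/>
</node>
```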
"domain": "robotics.stackexchange",
"id": 31266,
"tags": "navigation, pose, rviz, amcl, 2d-pose-estimate"
} |
Python Text-To-Speech program | Question: I have some code that asks for a user input, and then uses TTS to convert that into speech and play the file. It then asks the user whether they want to repeat it and convert another input into speech, or whether they want to exit the program. Are there any improvements that I could make to the code?
import webbrowser
import os
import time
import sys
import getpass
from gtts import gTTS
from mutagen.mp3 import MP3
my_file = "C:/Users/%USERNAME%/Desktop/TTS/bob.mp3" #Sets a variable for the file path.
username = getpass.getuser() #Gets the username of the current user.
def repeat():
while True:
if os.path.isfile(my_file): #Checks to see whether there is a file present and, if so, removes it.
os.remove(my_file)
else:
None
tts = gTTS(text = input("Hello there " + username + """. This program is
used to output the user's input as speech.
Please input something for the program to say: """)) #Takes the user's input and uses it for the Text-To-Speech
tts.save('bob.mp3') #Saves a .mp3 file of the user's input as speech.
webbrowser.open(my_file) #Opens the .mp3 file
audio = MP3(my_file) #Sets a variable so that the Mutagen module knows what file it's working with.
audio_length = audio.info.length #Sets a variable of the length of the .mp3 file.
time.sleep((audio_length) + 0.25) #Waits until the file has finished playing.
os.system('TASKKILL /F /IM wmplayer.exe') #Closes Windows Media Player.
time.sleep(0.5) #Waits until Windows Media Player has closed.
while True:
answer = input("Do you want to repeat? (Y/N) ")
if answer == "y" or answer == "Y":
return repeat() #Goes back to the beginning of the function if the user wants to try again.
elif answer == "n" or answer == "N":
if os.path.isfile(my_file): #Checks to see whether there is a file present and, if so, removes it.
os.remove(my_file)
else:
None
sys.exit() #Exits the program.
else:
print("Sorry, I didn't understand that. Please try again with either Y or N.")
continue #Goes back to the beginning of the while loop.
repeat() #Calls the function.
Answer: DRY
Apply the DRY (Don't Repeat Yourself) principle. For instance, the check to remove the file: since it's called in two places, we can make it a function. I've re-written your code in a more functional style, which also makes it easier to read what's occurring.
else
Your else: None statements are not doing anything.
continue
Same goes for this in the while loop. This happens anyway.
if
When you check the user's answer, assume its case will vary. Instead of checking for both an uppercase 'Y' and a lowercase 'y', just lowercase it, and strip it to remove stray spaces.
answer = input("Do you want to repeat? (Y/N) ").strip().lower()
if answer in ['yes', 'y']:
# instead of
answer = input("Do you want to repeat? (Y/N): ")
if answer == "y" or answer == "Y":
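If you end up needing this normalisation in more than one place, it can be pulled into a tiny helper of its own (a sketch, not part of the original code; the function name is mine):

```python
def normalize_answer(raw):
    """Map free-form yes/no input to True, False, or None if unrecognised."""
    answer = raw.strip().lower()
    if answer in ('yes', 'y'):
        return True
    if answer in ('no', 'n'):
        return False
    return None

print(normalize_answer('  Y '))  # True
```

This keeps the prompt loop itself free of string-munging details.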
Exceptions
Since you're interacting with the user from the interpreter, you can catch if the user exits the program and run any cleanup code before cleanly exiting. I've wrapped the loop of the interaction in a try/except statement so that this can be caught.
if __name__ == '__main__':
Check this SO post out.
Rewritten
import webbrowser
import os
import time
import getpass
from gtts import gTTS
from mutagen.mp3 import MP3
my_file = "C:/Users/%USERNAME%/Desktop/TTS/bob.mp3" #Sets a variable for the file path.
username = getpass.getuser() #Gets the username of the current user.
def remove_file():
"""Checks if myfile exists and if so, deletes."""
if os.path.isfile(my_file):
os.remove(my_file)
def play_tts():
webbrowser.open(my_file) # Opens the .mp3 file
audio = MP3(my_file) # Sets a variable so that the Mutagen module knows what file it's working with.
audio_length = audio.info.length # Sets a variable of the length of the .mp3 file.
time.sleep(audio_length + 0.25) # Waits until the file has finished playing.
os.system('TASKKILL /F /IM wmplayer.exe') # Closes Windows Media Player.
time.sleep(0.5) # Waits until Windows Media Player has closed.
def ask_and_play():
# Takes the user's input and uses it for the Text-To-Speech
tts = gTTS(text=input(
"Hello there " + username + ". This program isn\n"
"used to output the user's input as speech.\n"
"Please input something for the program to say: ")
)
tts.save('bob.mp3') # Saves a .mp3 file of the user's input as speech.
play_tts()
def check_continue():
"""Checks if the user wants to continue.
Returns a boolean value."""
while True:
answer = input("Do you want to repeat? (Y/N) ").strip().lower()
if answer in ['yes', 'y']:
return True
elif answer in ['no', 'n']:
return False
else:
print("Sorry, I didn't understand that. Please try again with either Y or N.")
def repeat():
"""Repeatedly ask the user to type text to play,
and check if the user wants to repeat or exit."""
while True:
remove_file()
ask_and_play()
if not check_continue():
raise KeyboardInterrupt
if __name__ == '__main__':
try:
repeat() #Calls the function.
except KeyboardInterrupt:
# clean up
remove_file()
print('Goodbye!') | {
"domain": "codereview.stackexchange",
"id": 25821,
"tags": "python, python-3.x, windows, audio"
} |
Pressure inside a soap bubble made in a vacuum | Question: A question I came across:
A soap bubble is made in vacuum by blowing an ideal diatomic gas in it. Assume the heat capacity of the soap film is much greater than that of the gas in the bubble. What will be the molar heat capacity of the gas in the bubble?
The correct answer is given as $4R$ (R being the ideal gas constant), and I was able to reach this answer using the ideal gas equation to find $dV$, substituting into the first law of thermodynamics and using the fact that $C_v$ for a diatomic gas is $5R/2$.
However, to reach this answer I had to take the pressure inside the bubble as $8\sigma/r$ ($\sigma$ being the surface tension of the soap solution) instead of $4\sigma/r$, which is what I would usually take for the excess pressure inside a soap bubble and, since it is in vacuum, can take as the absolute pressure inside the bubble.
Is there a reason for this which I'm missing, or an error in the question/answer?
Answer: Let's say you have blown the bubble to volume $V$, pressure $P$ and temperature $T$, with $n$ moles of that diatomic gas in it. Also, let the surface tension of the bubble be $\sigma$.
Now, to calculate the molar specific heat $C$, let's say you give some heat $Q$ to the bubble, due to which there will be some changes $dP$, $dV$, $dT$ in pressure, volume and temperature respectively; the number of moles $n$ remains constant.
As you stated by First Law of Thermodynamics,
$$ Q = dU + W $$
where all the terms have their usual meaning
$$ Q = nCdT$$
$$ dU = nC_v dT$$
Work can be calculated by two ways,
$$ W = PdV$$
$$or$$
$$ W = 2\sigma dA$$
$$W = 2\sigma\times 8\pi r dr$$
where $dA$ is elemental change in surface area
I will go with the second; you can verify the result with the first as well.
$$nCdT = 2\sigma \times 8\pi rdr + nC_v dT $$
$$C = \mathrm{\frac{2\sigma\ \times 8\pi rdr}{n dT}} + C_v \qquad(1)$$
From ideal gas equation,
$$ P V = nR T$$
differentiating,
$$PdV + VdP = nRdT$$
$$ndT = \mathrm{\frac{PdV+VdP}{R}} $$
So equation (1) becomes,
$$C = \mathrm{\frac{2\sigma\ \times 8\pi rdr\times R}{PdV+VdP}} + C_v$$
$$P = \mathrm{\frac{4\sigma}{r}},V=\mathrm{\frac{4}{3}}\pi r^3, C_v = 5R/2$$
Differentiate and substitute, and you will get the answer.
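Carrying the substitution through explicitly (just the arithmetic the last line leaves to the reader): with $P = \frac{4\sigma}{r}$ we have $dP = -\frac{4\sigma}{r^2}dr$ and $dV = 4\pi r^2\,dr$, so

$$PdV + VdP = 16\pi\sigma r\,dr - \frac{16}{3}\pi\sigma r\,dr = \frac{32}{3}\pi\sigma r\,dr$$

and therefore

$$C = \frac{2\sigma \times 8\pi r\,dr \times R}{\frac{32}{3}\pi\sigma r\,dr} + \frac{5R}{2} = \frac{3R}{2} + \frac{5R}{2} = 4R$$

which matches the $4R$ quoted in the question.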
"domain": "physics.stackexchange",
"id": 77464,
"tags": "classical-mechanics, pressure, fluid-statics"
} |
Confusion over assumptions made in the LSZ reduction formula | Question: I've been reading through a derivation of the LSZ reduction formula (http://www2.ph.ed.ac.uk/~egardi/MQFT_2013/, lecture 2, pages 2-3) and I'm slightly confused about the arguments made about the assumptions:
$$
\begin{aligned}
\langle\Omega\vert\phi(x)\vert\Omega\rangle &=0\\ \langle\mathbf{k}\vert\phi(x)\vert\Omega\rangle &=e^{ik\cdot x}
\end{aligned}
$$
For both assumptions the author first relates $\phi(x)$ to $\phi(0)$ by using the 4-momentum operator $P^{\mu}$, i.e.
$$
\phi(x)=e^{iP\cdot x}\phi(0)e^{-iP\cdot x}
$$
such that, in the case of the first assumption, one has $$
\langle\Omega\vert\phi(x)\vert\Omega\rangle =\langle\Omega\vert e^{iP\cdot x}\phi(0)e^{-iP\cdot x}\vert\Omega\rangle =\langle\Omega\vert\phi(0)\vert\Omega\rangle
$$
where we have used that the vacuum state satisfies $P^{\mu}\lvert\Omega\rangle =0$, such that $e^{-iP\cdot x}\vert\Omega\rangle = \vert\Omega\rangle$.
What I don't understand is, why do we need to relate $\langle\Omega\vert\phi(x)\vert\Omega\rangle$ to $\langle\Omega\vert\phi(0)\vert\Omega\rangle$ in the first place? Both $\langle\Omega\vert\phi(x)\vert\Omega\rangle$ and $\langle\Omega\vert\phi(0)\vert\Omega\rangle$ are Lorentz invariant.
Is it simply because, by showing that for any $x^{\mu}$, $\langle\Omega\vert\phi(x)\vert\Omega\rangle$ is equal to the Lorentz invariant number, $v\equiv\langle\Omega\vert\phi(0)\vert\Omega\rangle$ (in principle $\langle\Omega\vert\phi(x)\vert\Omega\rangle$ could have a different value for each spacetime point $x^{\mu}$), we can then simply shift the field $\phi(x)\rightarrow \phi(x)-v$, such that the condition $\langle\Omega\vert\phi(x)\vert\Omega\rangle=0$ is satisfied?
(If this is the case, then I'm guessing the argument is similar for the second condition.)
Answer: The LSZ formula is based on the following assumptions:
There exists a vector $|\Omega\rangle$ that satisfies $P^\mu|\Omega\rangle=J^{\mu\nu}|\Omega\rangle=0$.
The field transforms according to a certain representation of the Poincaré Group, that is, it satisfies
$$
U(a,\Lambda)\phi(x)U(a,\Lambda)^\dagger=D(\Lambda)\phi(\Lambda x+a)
$$
where $a\in\mathbb R^4$ and $\Lambda\in SO(1,d)^+$, and
$$
U(a,\Lambda)\equiv\mathrm e^{-iP_\mu a^\mu}\mathrm e^{-i\omega_{\mu\nu}J^{\mu\nu}}
$$
There exists a certain vector $|\boldsymbol p,\sigma\rangle$ that satisfies $P^\mu|\boldsymbol p,\sigma\rangle=p^\mu|\boldsymbol p,\sigma\rangle$ such that $m^2\equiv p^2$ is an isolated eigenvalue of $P^2$.
The field $\phi(x)$ satisfies $\langle \Omega|\phi(x)|\Omega\rangle=0$.
The field $\phi(x)$ satisfies $\langle \Omega|\phi(x)|\boldsymbol p,\sigma\rangle \neq 0$.
Some other assumptions that are irrelevant for this post (e.g., if the system has a well-defined notion of charge conjugation, then $\phi(x)$ has to commute with $\mathscr C$, and similarly for other internal symmetries).
If $(2)$ is satisfied, and $D(\Lambda)$ is a non-trivial representation of the Lorentz group, then $(4)$ is satisfied automatically; i.e., one need not impose this assumption as a separate condition. Therefore, in this answer we will restrict ourselves to trivial representations of the LG, that is, the scalar representation, where $\phi(x)$ is a scalar field.
In the case of scalar fields, $\langle \Omega|\phi(x)|\Omega\rangle$ is Lorentz invariant regardless of whether it vanishes or not. But we do need to make sure it vanishes, because $(4)$ is a necessary condition for the LSZ formula. Therefore, in order to make sure it vanishes, we note the following: as discussed in the OP, this number satisfies
$$
\langle \Omega|\phi(x)|\Omega\rangle=\langle \Omega|\phi(0)|\Omega\rangle
$$
Therefore, if for some reason $\langle \Omega|\phi(x)|\Omega\rangle$ is non-zero, we redefine the field $\phi(x)$ through
$$
\phi(x)\to\phi(x)-\langle \Omega|\phi(0)|\Omega\rangle
$$
which doesn't spoil any of the conditions $1,2,3,5,6$ provided they were already satisfied by the original field, but it ensures that $4$ is satisfied, by construction.
As for the second condition, the argument is as follows: if we use $\langle \Omega|U(a,\Lambda)=\langle \Omega|$ and $U(a,\Lambda)^\dagger|\boldsymbol p\rangle=\mathrm e^{ipa}|\Lambda\boldsymbol p\rangle$, then we can always write
$$
\begin{aligned}
\langle \Omega|\phi(x)|\boldsymbol p\rangle&=\langle \Omega|\overbrace{U(x,\Lambda)U(x,\Lambda)^\dagger}^1\phi(x)\overbrace{U(x,\Lambda)U(x,\Lambda)^\dagger}^1|\boldsymbol p\rangle\\
&=\overbrace{\langle \Omega|U(x,\Lambda)}^{\langle \Omega|}\overbrace{U^\dagger(x,\Lambda)\phi(x)U(x,\Lambda)}^{\phi(0)}\overbrace{U(x,\Lambda)^\dagger|\boldsymbol p\rangle}^{\mathrm e^{ipx}|\Lambda\boldsymbol p\rangle}\\
&=\langle\Omega|\phi(0)|\Lambda\boldsymbol p\rangle\mathrm e^{ipx}
\end{aligned}
$$
If we now set $x=0$, we see that this implies that
$$
\langle\Omega|\phi(0)|\boldsymbol p\rangle=\langle\Omega|\phi(0)|\Lambda\boldsymbol p\rangle
$$
i.e., the matrix element $\langle\Omega|\phi(0)|\boldsymbol p\rangle$ is a scalar; but the only scalar function of $\boldsymbol p$ is $p^2=m^2$, and therefore this matrix element is just a constant, independent of $\boldsymbol p$:
$$
\langle \Omega|\phi(x)|\boldsymbol p\rangle=c\, \mathrm e^{ipx}
$$
Finally, if, as in $(5)$, we assume that $\langle \Omega|\phi(x)|\boldsymbol p\rangle \neq 0$, then $c\neq 0$ and we can always redefine $\phi(x)$ so that $c=1$; and, as again, this doesn't spoil any of the conditions $1,2,3,4,6$. | {
"domain": "physics.stackexchange",
"id": 37881,
"tags": "quantum-field-theory, special-relativity, vacuum, lorentz-symmetry, s-matrix-theory"
} |
Calculating the Energy released in Fusion between Deuterium and Tritium | Question: I'm trying to calculate the Energy you would get in a fusion reactor from the fusion of deuterium and tritium:
${}^2H+{}^3H \rightarrow {}^4He + n$
Using this Equation:
$E = E_{rest} + E_{kin} = mc^2 + \frac12mv^2$
And these values i found online:
$m_{Deuterium} \approx 2.01410177811u$
$m_{Tritium} \approx 3.01604928u$
$m_{Helium4} \approx 4.002603254u$
$m_{Neutron} \approx 1.03352196257794u$
These velocities are at ~100 million Kelvin
$v_{Deuterium} \approx 1500\frac{km}s$
$v_{Tritium} \approx 1000\frac{km}s$
Plugging in the values i get this:
$E_{Deuterium} \approx 1882.3819988MeV$
$E_{Tritium} \approx 2819.97477352MeV$
$E_{Helium4} \approx 3728.40131MeV$
$E_{Neutron} \approx 962.719610361MeV$
Then the Energy before the reaction minus the energy after the reaction is:
$\Delta E \approx 10.84669MeV$
But on the Wikipedia about fusion it says that the reaction should release $17.59MeV$ in kinetic energy.
I assume the problem could be the inaccurate velocities, but I'm not sure the difference would be so big.
Answer: Kinetic energies do not have to be taken into account: they only serve to overcome Coulomb repulsion.
In the Sun, fusions occur at "low temperature".
Do the usual Q-value calculation with the masses and you'll easily find the correct answer.
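As a concrete check of that Q-value calculation: the neutron mass quoted in the question ($\approx 1.0335u$) looks like a transcription error; the accepted value is $\approx 1.00866u$, and using it the mass defect alone reproduces the Wikipedia figure (conversion factor and neutron mass below are the standard values):

```python
# Q-value of D + T -> He-4 + n from the mass defect alone.
u_to_MeV = 931.494  # 1 u in MeV/c^2
m_D   = 2.01410177811
m_T   = 3.01604928
m_He4 = 4.002603254
m_n   = 1.00866491588  # accepted neutron mass in u

Q = (m_D + m_T - m_He4 - m_n) * u_to_MeV
print(f"Q = {Q:.2f} MeV")  # ~17.59 MeV
```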
"domain": "physics.stackexchange",
"id": 88658,
"tags": "homework-and-exercises, nuclear-physics, mass-energy, fusion, binding-energy"
} |
What is the logic behind applying $X$ or $Z$ conditionally to the received bits? | Question: I implemented teleportation described on this page: Teleportation
Circuit is as follows:
Here, we are trying to teleport the quantum state from Alice (q0) to Bob (q2).
Bob, on receiving the bits from Alice, needs to apply appropriate transformations to his qubit, based on Alice's measurement, to reconstruct Alice's state. These are as follows:
Please help me understand the logic behind applying an $X$ gate when the bits received are 01, or a $Z$ gate when the bits received are 10.
Answer: The state to be teleported is $|\psi\rangle=\alpha|0\rangle+\beta|1\rangle$. So, the initial system is in state:
$$(\alpha|0\rangle+\beta|1\rangle)|00\rangle$$
After the first Hadamard + CNOT, the state is:
$$(\alpha|0\rangle+\beta|1\rangle)\left(\frac{|00\rangle+|11\rangle}{\sqrt{2}}\right)=\frac{1}{\sqrt{2}}\left(\alpha|000\rangle+\alpha|011\rangle+\beta|100\rangle+\beta|111\rangle\right)$$
We now apply the second CNOT:
$$\frac{1}{\sqrt{2}}\left(\alpha|000\rangle+\alpha|011\rangle+\beta|110\rangle+\beta|101\rangle\right)$$
And finally the second Hadamard:
$$\frac{1}{2}\left(\alpha|000\rangle+\alpha|100\rangle+\alpha|011\rangle+\alpha|111\rangle+\beta|010\rangle-\beta|110\rangle+\beta|001\rangle-\beta|101\rangle\right)$$
Now, we measure the first two qubits. Possible results are $00$, $01$, $10$ and $11$. Recall that this means that with appropriate normalisation, the state will collapse to a superposition of terms such that the first two qubits corresponds to what we've measured.
If we measured $|00\rangle$, then the state collapsed to:
$$\alpha|000\rangle+\beta|001\rangle=|00\rangle(\alpha|0\rangle+\beta|1\rangle)$$
Thus, the third qubit is in state $|\psi\rangle$, so nothing to do here.
If we measured $|01\rangle$, then the state collapsed to:
$$\alpha|011\rangle+\beta|010\rangle=|01\rangle(\alpha|1\rangle+\beta|0\rangle)$$
In order for the third qubit to be in state $|\psi\rangle$, we need to change $|0\rangle$ into $|1\rangle$ and vice versa, which is what the $X$ gate does:
$$X|0\rangle=|1\rangle, X|1\rangle=|0\rangle$$
If we measured $|10\rangle$, then the state collapsed to:
$$\alpha|100\rangle-\beta|101\rangle=|10\rangle(\alpha|0\rangle-\beta|1\rangle)$$
In order for the third qubit to be in state $|\psi\rangle$, we need to flip the phase of $|1\rangle$, which is what the $Z$ gate does:
$$Z|0\rangle=|0\rangle, Z|1\rangle=-|1\rangle$$
If we measured $|11\rangle$, then the state collapsed to:
$$\alpha|111\rangle-\beta|110\rangle=|11\rangle(\alpha|1\rangle-\beta|0\rangle)$$
In order for the third qubit to be in state $|\psi\rangle$, we need to first exchange $|0\rangle$ and $|1\rangle$ and then to flip the phase of $|1\rangle$, which corresponds to applying an $X$ gate followed by a $Z$ gate. | {
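The four cases above can be verified numerically. Below is a minimal sketch (plain Python, no quantum library): for each of Alice's measurement outcomes, Bob's collapsed single-qubit state is taken from the algebra above, the stated correction is applied, and $|\psi\rangle$ is recovered every time.

```python
def apply(gate, state):
    """Apply a 2x2 gate to a single-qubit state [c0, c1]."""
    (a, b), (c, d) = gate
    return [a * state[0] + b * state[1], c * state[0] + d * state[1]]

X = [[0, 1], [1, 0]]
Z = [[1, 0], [0, -1]]

alpha, beta = 0.6, 0.8          # an example |psi> = alpha|0> + beta|1>
psi = [alpha, beta]

# Bob's collapsed qubit for each of Alice's outcomes, read off from the answer:
collapsed = {
    "00": [alpha, beta],
    "01": [beta, alpha],
    "10": [alpha, -beta],
    "11": [-beta, alpha],
}

for bits, state in collapsed.items():
    if bits[1] == "1":          # second measured bit set -> apply X
        state = apply(X, state)
    if bits[0] == "1":          # first measured bit set -> apply Z
        state = apply(Z, state)
    assert state == psi, (bits, state)
print("all four outcomes recover |psi>")
```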
"domain": "quantumcomputing.stackexchange",
"id": 3842,
"tags": "textbook-and-exercises, teleportation"
} |
What should the brake force in this problem be? | Question: Alright, so I think I know how to do this, but I need help calculating what the acceleration would be in terms of some sort of friction coefficient.
So model a particle going down a hill. The slope is 25 degrees. The mass of the particle is 50kg and the coefficient of sliding friction between the particle and slope is 0.05. When the brake is applied, 260N acts in the opposite direction to the motion of the particle. g = 9.8ms^-2
I won't write out my entire workings as they're lengthy, but I'll give the line I am up to, which is:
let $\mu$ = the coefficient of sliding friction
let $b$ = the braking force
$ma_i = (mg \cos{65}-\mu bN)\hat{\mathbf{i}} + (N-mg \sin{65})\hat{\mathbf{j}}$
So I imagine that b is going to slow down the acceleration, so perhaps it's as simple as 0.05 x 240?
I'm not sure if it would be 240 or 2.4, though, for example. Any elucidation on this?
Answer: Let the angle of the slope be $\theta$, $m$ the mass, $a$ the acceleration, $\mu$ the friction coefficient and $g=9.8 ms^{-2}$.
On the mass acts a vertical gravitational force $mg$. Decompose this into a component along the line of the hill, so that is $mg\sin\theta$, and one perpendicular to that, so $mg\cos\theta$.
The latter provides a friction force $\mu mg\cos\theta$.
The balance of forces acting on the particle along the hill’s surface is thus:
$ma= mg\sin\theta - \mu mg\cos \theta$.
Using the OP’s numbers: $ma=185N$.
Applying a braking force of $260 N$ would result in a decelerating force of $185N-260N=-75N$.
Deceleration would be $a=-\frac{75N}{50kg}=-1.5ms^{-2}$. | {
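The numbers in the answer can be checked in a few lines (a sketch using the values from the question):

```python
import math

m, g = 50.0, 9.8                          # mass (kg), gravity (m/s^2)
theta = math.radians(25)                  # slope angle
mu, brake = 0.05, 260.0                   # friction coefficient, brake force (N)

downhill = m * g * math.sin(theta)        # gravity component along the slope
friction = mu * m * g * math.cos(theta)   # kinetic friction opposing motion
net = downhill - friction                 # net force without braking, ~185 N
decel = (net - brake) / m                 # acceleration with brake, ~-1.5 m/s^2
print(round(net), round(decel, 1))
```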
"domain": "physics.stackexchange",
"id": 24082,
"tags": "homework-and-exercises, newtonian-mechanics, forces, friction, vectors"
} |
PDO Insert function using PHP | Question: This code inserts data into the database successfully. I was concerned about whether it is considered good practice to go about inserting data into the database in this way. Is there something I am not considering?
function insertTest(){
global $db;
$field_names = array("id", "first_name", "last_name", "email");
$alias = array(":id", ":fn", ":ln", ":em");
$assignment = array(":id" => "2k39dk38dk2k39dk38dk2k39dk38dk2k39dk38dk",
":fn" => "Chris",
":ln" => "Moore",
":em" => "test@gmail.com");
$array = array();
$array['table'] = 'users';
$array['field_names'] = $field_names;
$array['alias'] = $alias;
$array['assignment'] = $assignment;
$q = "INSERT INTO `%s` (%s) VALUES (%s)";
$q = sprintf($q, $array['table'], implode(", ",$array['field_names']), implode(", ",$array['alias']));
$q = $db->prepare($q);
$r = $q->execute($assignment);
print_r($r);
return $r;
}
Answer: Is it good practice to insert data into a table this way?
If by that you mean: is it considered good practice to use prepared statements, and a bind array, then yes. It is. If you mean: "can I consider my code to be good practice", then the answer is no. It's not. Here's why:
function insertTest()
{//needs to go on the next line, PSR standards
global $db;
The use of functions + global keyword is bad practice. For starters: you have no real way of ensuring that $db will be an instance of PDO. And seeing as you are using a function, why not require the caller to pass a valid DB connection to your function:
function insertTest(PDO $db)
{
}
Now you know what $db will be, with the added benefit of the type-hint, any decent IDE will know, too. So your code is self-explanatory.
Next we have this:
$field_names = array("id", "first_name", "last_name", "email");
$alias = array(":id", ":fn", ":ln", ":em");
$assignment = array(":id" => "2k39dk38dk2k39dk38dk2k39dk38dk2k39dk38dk",
":fn" => "Chris",
":ln" => "Moore",
":em" => "test@gmail.com");
That looks a tad messy, doesn't it. For starters, it's pretty clear to me that $field_names and $alias belong together: for each field in your INSERT, there needs to be a placeholder. Having 2 separate arrays means that someone can add a value to the first, and simply forget about adding a placeholder in the other array. That would break your code. Just write:
$map = [
'id' => ':id',
'first_name' => ':fn',
'last_name' => ':ln',
'email' => ':email',
];
That addresses that issue. Now you can easily extract the fields you're using (array_keys($map)) and the parameters (array_values($map)). As for the $assignment variable: that really should be an argument that the user passes to your function, so change its signature to:
function inserTest(PDO $db, array $assignment)
{}
In the long run, you'll probably want to turn this function into a method, and define $map as a property, which you can then use to validate $assignment (check if the keys in the array exist in $this->map and throw exceptions if something doesn't quite add up)
Prepared statements are reusable
Another drawback of the way you're using prepared statements in your function is that you're missing out on one of their biggest advantages: prepared statements can be re-used several times:
$stmt = $pdo->prepare('INSERT INTO foo (field1, field2) VALUES (:foo, :bar)');
$stmt->execute([':foo' => 'value 1', ':bar' => 1]);
$stmt->execute([':foo' => 'value 2', ':bar' => 2]);
The code above will add 2 rows to the foo table, using the same prepared statement. Creating such a statement in a function, using it and then returning a boolean will GC (garbage collect) the prepared statement, only to create it a second time after when you call the function again. This isn't ideal, obviously.
It might be more useful to harness the power of the PDOStatement instance that PDO::prepare returns.
This can be especially useful if you're using SELECT queries: returning the PDOStatement instance allows the user to fetch the data in a format he sees fit:
$stmt = $obj->getSelectByIdStmt();
$data = [];
foreach ($ids as $id) {
$stmt->execute([':id' => $id]);
$objects = [];
//or PDO::FETCH_ASSOC, PDO::FETCH_CLASS, ...
while ($row = $stmt->fetch(PDO::FETCH_OBJ)) {
$objects[] = $row;
}
$data[$id] = $objects;
}
This is something that is quite hard to abstract into one, comprehensive function/method...
Other niggles:
Your usage of this (badly) named $array variable doesn't really make sense:
$array = array();
$array['table'] = 'users';
$array['field_names'] = $field_names;
$array['alias'] = $alias;
$array['assignment'] = $assignment;
For starters, initializing a variable to an empty array, and then adding keys one by one is a lot more work (to type) than a simple:
$array = [
'table' => 'users',
'field_names' => $field_names,
'alias' => $alias,
'assignment' => $assignment,
];
Now if you were to pre-format/stringify array values (like $field_names), you could use vsprintf to create your query, but that's something others who work on the same code will hate you for (they really will, trust me, I've been there).
I've already hinted at the possibility of creating a class, which could be used to store things that are, essentially, immutable (like the table name and the field map). But that's a different matter, for now: just start by leaving out the $array variable. There's also no need to assign the format to $q if you're reassigning $q the next statement. Just write:
$q = sprintf(
'INSERT INTO `%s` (%s) VALUES (%s)',
'users',
implode(', ', $field_names),
implode(', ', $alias)
);
The backticks around the table name imply you're using MySQL, you might want to add the same backticks around your field names (in case some mug decides to call fields "transaction" or "set" - again, I've been there, it happens):
$q = sprintf(
'INSERT INTO `%s` (`%s`) VALUES (%s)',
'users',
implode('`, `', $field_names),
implode(', ', $alias)
); | {
"domain": "codereview.stackexchange",
"id": 15663,
"tags": "php, beginner, pdo"
} |
Multithreaded RViz? | Question:
When running MoveIt on my desktop, RViz cannot show arm trajectories smoothly. It looks cool at the beginning, but after 2 or 3 extra plannings/executions, CPU load reaches 100% and the arm trajectories become jumpy.
The point is that my laptop, with a less powerful CPU (same RAM), does a much better job, with RViz taking ~80% of CPU, because on a per-thread basis it is much faster:
Desktop: AMD FX Series FX-8350 4.0Ghz 8 cores
Laptop: Intel i7-4600U 3.00 GHz dualcore
So... weird as it may sound, I would love a multi-threaded RViz, or to recompile it better fitted for my CPU.
Any idea of what I can do? (I tried overclocking but it always makes the PC unstable)
Hope this is useful for ROS users thinking of buying a new PC: carefully check per-thread performance!!!
UPDATE
I made some progress; compiling RViz for my platform improves things, so now, if I disconnect the 3D sensor, RViz never reaches 100% CPU and the arm moves smoothly:
set(PLATFORM_FAMILY "amd" CACHE STRING "Platform family, usually referring to intel/arm etc.")
set(PLATFORM_NAME "native" CACHE STRING "Platform name, usually referring to the cpu architecture.")
...
set(PLATFORM_CXX_FLAGS "-march=native" CACHE STRING "Compile flags specific to this platform.")
...
set(CMAKE_BUILD_TYPE Release CACHE STRING "Build mode type.")
However, performance still degrades when RViz shows the octomap (it just takes longer). Does someone know any additional compilation tweak to try? Btw, how can I override the predefined compiler flags? It uses -O2, while I want to try -Os instead (I saw a benchmark claiming it's better for my CPU).
Another solution could be to simplify the arm. Yes, I use the tiny turtlebot arm, but the URDF is surprisingly complex, as it adds a link for every frame element, totaling 25 links (with 23 meshes)! Does this make any sense? Or would it be a waste of time?
Thanks!!!
Originally posted by jorge on ROS Answers with karma: 2284 on 2014-07-27
Post score: 1
Original comments
Comment by ahendrix on 2014-07-27:
Are you sure this is related to CPU performance, and not due to poor graphic performance?
Comment by gvdhoorn on 2014-07-28:
I'm inclined to say the same: on my -- even slower -- machine (2.8Ghz, c2d) -- I can run RViz and MoveIt for hours on end, without it eating up my entire CPU. Could you add some more info on the rest of your hardware?
Comment by jorge on 2014-07-28:
Well, I'm pretty sure it's not the graphics card, because I have one that should be more than enough for RViz (AMD HD 7770), while I have none in the laptop. And even so, the laptop works better. Also, I changed to the proprietary AMD drivers without noticing any improvement in RViz.
Comment by kmhallen on 2014-07-28:
Any parameter starting with 'D' is passed to cmake.
catkin_make -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_FLAGS=-Os
catkin_make -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_FLAGS="-Os -march=native"
Comment by jorge on 2014-07-29:
Yes, I have no problem passing arguments to gcc. But I don't know how to remove the default ones, e.g. -O2.
Comment by Daniel Stonier on 2014-07-29:
When you build in ros, the -O's will be applied by cmake according to the CMAKE_BUILD_TYPE setting.
Comment by Daniel Stonier on 2014-07-29:
Run top on the rviz process, use H (threads) and f (fields, select the last used cpu). Are you seeing all the computation in one thread? Mine is, but I'm not testing on anything significant. Being a qt program, I wonder if all the hard yakka is getting done in the main thread.
Comment by Daniel Stonier on 2014-07-29:
Such a large program with a lot going on could farm out some jobs to threads or hang widgets on the main one from external threads, but that starts to require some thinking/planning.
Comment by Daniel Stonier on 2014-07-29:
Would starting two instances of rviz with different things showing in each help you multi-core it?
Comment by jorge on 2014-07-29:
One single big glutton thread takes all the CPU. As far as I can tell, the thread gets 100% of CPU because it processes the monitored_planning_scene topic from MoveIt, which contains really a lot of information. But... I'm new to MoveIt, so this is just my guess.
Comment by jorge on 2014-07-30:
My colleague in the office complains that he set CMAKE_BUILD_TYPE to Debug and still has the -O2 optimization option. Is there no way to change the pre-set options for a compilation type?
Comment by Daniel Stonier on 2014-07-31:
Those -O2 settings are defined as preset configurations by cmake modules that define the compiler you are using (I'm pretty sure). You can override these with: http://www.cmake.org/cmake/help/v2.8.10/cmake.html#variable:CMAKE_USER_MAKE_RULES_OVERRIDE.
Comment by Daniel Stonier on 2014-07-31:
Basically you create a file with something like: set(CMAKE_CXX_FLAGS_INIT "${CMAKE_CXX_FLAGS_INIT} ${MY_FLAGS_INIT}") and set CMAKE_USER_MAKE_RULES_OVERRIDE with the filename in your cmake pre-cache.
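As a concrete sketch of what such an override file could look like (the filename and flag choice are hypothetical; CMAKE_CXX_FLAGS_RELEASE_INIT is the per-build-type counterpart of the CMAKE_CXX_FLAGS_INIT variable mentioned above):

```cmake
# override.cmake -- referenced via -DCMAKE_USER_MAKE_RULES_OVERRIDE=/path/to/override.cmake
# Replaces the compiler's preset Release flags (normally -O2/-O3) before the cache is populated.
set(CMAKE_CXX_FLAGS_RELEASE_INIT "-Os -march=native")
```

It would then be activated with something like `catkin_make -DCMAKE_BUILD_TYPE=Release -DCMAKE_USER_MAKE_RULES_OVERRIDE=/path/to/override.cmake`.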
Answer:
After edit: you never mentioned that you were doing things with Octomaps, which is why I assumed (and I think @ahendrix as well) that the performance bottleneck was within any of the subsystems RViz uses, and the performance problems were always there, even when just doing simple path planning with MoveIt.
If performance degrades over time, perhaps you've run into a memory leak issue. RViz has had some of those in the past (see issues/556 and issues/695, for instance). You could use any of the standard tools to catch those.
To get a feeling for which nodes / processes are taking up all the CPU time, try to run rqt_top or just normal top and see what is going on.
Edit: just realised that it could very well be something else (i.e. not RViz) that is your performance bottleneck.
Originally posted by gvdhoorn with karma: 86574 on 2014-07-29
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by ahendrix on 2014-07-29:
If this really is a memory leak, it's entirely possible that rviz is bottlenecking on memory speed (and not CPU speed) on you desktop vs laptop.
Comment by gvdhoorn on 2014-07-29:
If something is constantly malloc-ing/new-ing, then yes, I agree. I can see that becoming a problem with pointclouds/octomap usage.
Comment by jorge on 2014-07-29:
OK, my mistake: I shouldn't have mentioned the octomap, as this can blur the discussion. The performance degradation happens without visualizing octomaps. It just gets worse if I show them, but this means that RViz gets more loaded showing them, not that the CPU struggles creating them.
Comment by jorge on 2014-07-29:
I'm pretty sure of this because I have monitored the CPU usage of all processes, and the only one making heavy use of the CPU is RViz. That said, I have also observed that RViz keeps taking more and more memory after executing a plan, so it looks like it has important memory leaks. I will try with valgrind | {
"domain": "robotics.stackexchange",
"id": 18784,
"tags": "rviz, turtlebot, cpu, multi-thread, performance"
} |
How to get pitch and yaw angle from the raw IMU data? | Question:
Hello
I'm using the IMU data for an orientation platform purpose. I'm able to use the IMU in a ROS message callback like this:
void imuCallback(const sensor_msgs::ImuConstPtr& msg)
{
    tf::quaternionMsgToTF(msg->orientation, quat);
    tf::Matrix3x3(quat).getRPY(r,p,y);
}
But I would like to check, when for example some turn of the platform occurs, whether it is exactly by that yaw and pitch.
For example, I wanted to record the start time when my platform starts going up/down the ramp, with a condition like this:
if(fabs(p-initial_pitch) >= ramp_ang)
{
ramp_pitch = fabs(p-initial_pitch)*57.3;
}
So I want to check whether the IMU data is OK.
Any help?
Thanks
Originally posted by Astronaut on ROS Answers with karma: 330 on 2013-07-25
Post score: 0
Answer:
I am not sure I completely understand your question, but I don't think this will work if you want to check if there was a change by any exact amount. If you look at the IMU data you are streaming right now you will almost certainly see some noise even when the IMU is motionless. As a consequence you are unlikely to ever find that the orientation is exactly some angle; you would have to check that the angle was between some range.
Originally posted by MD_MD with karma: 31 on 2013-07-25
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Astronaut on 2013-07-25:
Ok. And how to check that the angle is within some range? I want to check whether my platform going up the ramp (at some pitch angle, let's say 3 deg) really occurs at that pitch (more or less). Is it clearer now?
Comment by MD_MD on 2013-07-25:
I would do something like
if(pitch>2.5 && pitch<3.5){
do_something;
}
Comment by Astronaut on 2013-07-25:
No. I don't want to change the code. I just want to plot my pitch angle obtained from the IMU data.
Comment by MD_MD on 2013-07-25:
You wil have to change your code in some way if you want it to behave differently. If you are asking about how to get the data from the IMU I would use an IMU filter like imu_filter_madgwick which will calculate the orientation for you.
Comment by Astronaut on 2013-07-25:
Ok. But can I get the orientation without an IMU filter like imu_filter_madgwick? I mean, using orientation data like
x: 0.00555401947349
y: 0.00845728162676
z: 0.4551037848
w: 0.89038091898
Is it possible?
Comment by MD_MD on 2013-07-25:
Yes, but for the most part IMUs don't give you the orientation directly. They give information like acceleration and momentum which you then have to process into orientation. You would have to subscribe to the topic /imu_data or /imu_data_raw and use that.
Comment by Astronaut on 2013-07-25:
I already subscribe. So with tf::quaternionMsgToTF(msg->orientation, quat);
tf::Matrix3x3(quat).getRPY(r,p,y); I'm able to get the roll, pitch and yaw. But I don't know how to use the raw IMU orientation data. Understand? So how to get the pitch and yaw from the raw IMU orientation x, y, z and w?
Comment by MD_MD on 2013-07-25:
I see what you mean now. I don't know how to do those calculations myself, I use the IMU filter I previously mentioned to do that for me. If you insist on not using that package you could look at their source code and implement what they did in your own node.
Comment by Astronaut on 2013-07-25:
And if I use their package, how do I do that? How did you use the IMU filter? Because the documentation is not so good.
Comment by MD_MD on 2013-07-25:
Follow the instructions on the imu_tools page to install and once you have done that rosrun imu_tools imu_filter_madgwick after your have started your IMU. It will publish a topic called /imu/data and this will contain orientation.
Comment by Astronaut on 2013-07-25:
Strange. I installed and compiled it with rosmake imu_tools. When I tried to rosrun imu_tools imu_filter_madgwick I got the error message stack/package imu_tools not found. What's wrong??
Comment by Astronaut on 2013-07-25:
rosmake imu_tools works well and compiled without errors. I get this: Built 54 packages with 0 failures. So why is rosrun imu_tools imu_filter_madgwick not working?
Comment by MD_MD on 2013-07-25:
I misremembered the name of the node, although it is in their documentation if you look. It should be rosrun imu_filter_madgwick imu_filter_node. Let us know if that worked.
Comment by Astronaut on 2013-07-25:
Ok. Now it works. But strange. I got this message:
[ INFO] [1374818078.308309568]: Starting ImuFilter
[ INFO] [1374818078.313142482]: Using dt computed from message headers
[ INFO] [1374818078.313186901]: Imu filter gain set to 0.100000
[ INFO] [1374818078.313201876]: Gyro drift bias set to 0.000000
Comment by Astronaut on 2013-07-25:
Whats wrong?
Comment by MD_MD on 2013-07-25:
I don't see anything wrong, those messages are normal. Did you try rostopic echo /imu/data? That should show you calculated orientations from the /imu/data_raw channel. You can use the orientations there to get pitch/yaw/roll.
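For reference, the quaternion-to-Euler conversion that tf's getRPY performs can be sketched in plain Python from the raw orientation components x, y, z, w. This is a sketch of the standard ZYX (roll-pitch-yaw) formulas, not the tf source itself:

```python
import math

def quaternion_to_euler(x, y, z, w):
    """Convert a quaternion (x, y, z, w) to roll, pitch, yaw in radians (ZYX convention)."""
    # roll: rotation about the x-axis
    roll = math.atan2(2.0 * (w * x + y * z), 1.0 - 2.0 * (x * x + y * y))
    # pitch: rotation about the y-axis, clamped to avoid asin domain errors at the poles
    t = max(-1.0, min(1.0, 2.0 * (w * y - z * x)))
    pitch = math.asin(t)
    # yaw: rotation about the z-axis
    yaw = math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))
    return roll, pitch, yaw

# identity quaternion corresponds to zero rotation
print(quaternion_to_euler(0.0, 0.0, 0.0, 1.0))  # -> (0.0, 0.0, 0.0)
```

With the orientation values the asker posted, this returns the same roll/pitch/yaw that tf::Matrix3x3(quat).getRPY(r,p,y) would give, up to quaternion normalization.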
Comment by Astronaut on 2013-07-25:
I tried, but it's not working. Running rostopic echo /imu/data gives:
WARNING: no messages received and simulated time is active.
Is /clock being published?
Comment by Astronaut on 2013-07-25:
Do I need to remap?? Because my IMU topic in the bag file is called /raw_imu_throttle
Comment by MD_MD on 2013-07-25:
I am not familiar with the /raw_imu_throttle topic or its message, so I can't say if it will work for you to remap it. I thought you were working from a "regular" imu that published /imu/data_raw. It might work if you remap the topic you have to /imu/data_raw, but I'm not sure
Comment by Astronaut on 2013-07-25:
I tried to remap but it's not working
Comment by MD_MD on 2013-07-26:
Straight remapping may not work if the message formats are different. I am not sure how to help you beyond this because I am not familiar with /raw_imu_throttle messages. | {
"domain": "robotics.stackexchange",
"id": 15057,
"tags": "ros, imu, data"
} |
Problems with Computing the Friedmann Equation | Question: I am trying to write a program which uses the Friedmann equation to graph the expansion of a universe given the starting parameters $K$ (geometry of spacetime), $R$ (curvature of spacetime) and $\rho$ (average mass density). So far, I have rearranged the equation to the form:
$$\frac{dt}{da} = \frac{1}{\sqrt{\frac{8}{3}\pi G \rho a^2 - K\frac{c^2}{R^2}}}$$
I then input this equation into Wolfram Alpha and integrated it, yielding the result:
$$t = \frac{\sqrt{\frac{3}{2\pi}}\log{\left(\sqrt{2\pi\rho G}\sqrt{8\pi a^2 \rho G - \frac{3c^2K}{R^2}}+4\pi a \rho G\right)}}{2\sqrt{\rho G}} + C$$
The program then plots values of $t$ for values of $a$ ranging from $0.5$ to $1.5$, inclusive.
The problem I have is that, when I run the program, it doesn't work. When $K = 1$, all of the values come out as NaN. When $K = -1$, all of the values of $t$ come out the same and, when $K = 0$, given the values of our universe, it doesn't look as expected.
I have a feeling that there is a problem in my rearrangement and subsequent integration of the Friedmann equation, but I don't know enough calculus to know where I have gone wrong.
Can someone guide me on what I am doing wrong?
I have been testing this with $R = 1$ and $\rho = 9 \times 10^{-27}\,\mathrm{kg\,m^{-3}}$.
Edit:
Following Frederich Thomas' advice, I have now adjusted the equation to:
$$t = \frac{\sqrt{\frac{3}{2\pi}}\log{\left(\sqrt{2\pi\rho G}\sqrt{|8\pi a^2 \rho G - \frac{3c^2K}{R^2}|}+4\pi a \rho G\right)}}{2\sqrt{\rho G}} + C$$
At $K = 1$, the values no longer come out as NaN but they are now like $K = -1$ where all of the values of $t$ are the same.
Answer: The equation you want to solve is the first equation of Friedmann-Lemaitre :
\begin{equation}\tag{1}
\dot{a}^2 \equiv \Big( \frac{d a}{d t} \Big)^2 = \frac{8 \pi G}{3} \, \rho \, a^2 - k,
\end{equation}
where $k = -1, 0, 1$ for open, euclidian and closed space, respectively. You first need to assume some state of matter : $\rho \equiv f(a)$. For example $\rho \propto a^{- 3}$ for dustlike matter or $\rho \propto a^{- 4}$ for pure radiation. If you impose $\rho = \mathrm{constant}$, then this is like imposing an empty space with a cosmological constant only. It is then preferable to write
\begin{equation}\tag{2}
\rho = \frac{\Lambda}{8 \pi G}.
\end{equation}
The solution to the following differential equation is well known :
\begin{equation}\tag{3}
\Big( \frac{d a}{d t} \Big)^2 = \frac{\Lambda}{3} \, a^2 - k,
\end{equation}
with solution
\begin{equation}\tag{4}
a(t) = a_0 \cosh{\omega t} \pm \sqrt{a_0^2 - \frac{3 k}{\Lambda}} \, \sinh{\omega t},
\end{equation}
where
\begin{equation}\tag{5}
\omega \equiv \sqrt{\frac{\Lambda}{3}}.
\end{equation}
Since the scale factor is defined up to an arbitrary constant factor, you can set $a_0 = \sqrt{\frac{3}{\Lambda}} \equiv \omega^{-1}$ if $k = 1$ or $k = 0$, and $a_0 = 0$ if $k = -1$, so that (4) becomes
\begin{align}
a(t) = \omega^{-1} \cosh{\omega \, t}, \qquad \text{if $k = 1$}. \tag{6}\\[12pt]
a(t) = \omega^{-1} e^{\pm \, \omega \, t}, \qquad \text{if $k = 0$}. \tag{7}\\[12pt]
a(t) = \omega^{-1} \sinh{\omega \, t}, \qquad \text{if $k = -1$}. \tag{8}
\end{align}
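As a quick numerical sanity check (useful for the asker's program), the $k = -1$ solution (8) can be verified against equation (3). The value of $\Lambda$ below is an arbitrary test value chosen so that $\omega = 1$:

```python
import math

LAM = 3.0                      # hypothetical test value, so omega = sqrt(LAM/3) = 1
omega = math.sqrt(LAM / 3.0)

def a_open(t):
    """Scale factor for k = -1, eq. (8): a(t) = sinh(omega*t) / omega."""
    return math.sinh(omega * t) / omega

for t in (0.3, 0.7, 1.5):
    lhs = math.cosh(omega * t) ** 2            # analytic (da/dt)^2 of eq. (8)
    rhs = (LAM / 3.0) * a_open(t) ** 2 + 1.0   # right-hand side of eq. (3) with k = -1
    print(abs(lhs - rhs) < 1e-9)               # -> True
```

The check reduces to the identity $\cosh^2 - \sinh^2 = 1$, so it holds exactly up to floating-point rounding.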
These solutions describe an empty deSitter universe, with a positive cosmological constant, which can be viewed as a vacuum dominated universe. | {
"domain": "physics.stackexchange",
"id": 42505,
"tags": "general-relativity, cosmology, simulations"
} |
Efficiently counting rooms from a floorplan | Question: Note this was the wrong version of the code. The updated version is here: Efficiently counting rooms from a floorplan (version 2) My apologies!
Update
Final version 3 with test harness here: Multithreaded testing for counting rooms from a floor plan solution
I was inspired by Calculating the number of rooms in a 2D house and decided to see if I could come up with efficient code to solve the same problem.
To recap, the problem (from here) is this:
You are given a map of a building, and your task is to count the number of rooms. The size of the map is \$n \times m\$ squares, and each square is either floor or wall. You can walk left, right, up, and down through the floor squares.
Input
The first input line has two integers \$n\$ and \$m\$: the height and width of the map.
Then there are \$n\$ lines of \$m\$ characters that describe the map. Each character is . (floor) or # (wall).
Output
Print one integer: the number of rooms.
Constraints
\$1\le n,m \le 2500\$
Example
Input:
5 8
########
#..#...#
####.#.#
#..#...#
########
Output:
3
Strategy
It seemed to me to be possible to solve the problem by processing line at a time, so that's what my code does. Specifically, it keeps a tracking std::vector<std::size_t> named tracker that corresponds to the rooms from the previous row and starts with all zeros.
As it reads each line of input, it processes the line a character at a time. If a square is non-empty (that is, if it's a wall), it sets the corresponding tracker entry to 0.
Otherwise, if the previous row (that is, the matching value from the tracker vector) was a room, then this is part of the same room.
If the previous character in the same row was a room, this is the same room.
The code also has provisions for recognizing that what it "thought" was two rooms turns out to be one room, and adjusts both the tracker vector and the overall roomcount.
Because I wanted to be able to test it with many different inputs, my version of the code keeps reading and processing each floor plan until it gets to the end of the file.
The code is time efficient because it makes only a single pass through the input, and it's memory efficient because it only allocates a single \$1 \times m\$ vector.
Questions
Correctness - The code works correctly on every input I've tried, but if there is any error in either the code or the algorithm, I'd like to know.
Efficiency - Could the code be made even more efficient?
Reusability - This works for a 2D map, but I'd like to adapt it to 3 or more dimensions. Are there things I could do in this code to make such adaptation simpler?
Any hints on style or any other aspect of the code would be welcome as well.
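One way to build confidence in correctness is differential testing against an obviously correct reference counter. Here is a sketch of a BFS flood fill in Python (slower and more memory-hungry than the single-pass approach under review, but simple to trust):

```python
from collections import deque

def count_rooms(grid):
    """Count 4-connected regions of '.' cells via BFS flood fill."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    rooms = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == '.' and not seen[r][c]:
                rooms += 1
                seen[r][c] = True
                q = deque([(r, c)])
                while q:
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] == '.' and not seen[ny][nx]):
                            seen[ny][nx] = True
                            q.append((ny, nx))
    return rooms

plan = ["########",
        "#..#...#",
        "####.#.#",
        "#..#...#",
        "########"]
print(count_rooms(plan))  # -> 3
```

Feeding both implementations the same randomly generated grids and comparing outputs will quickly surface disagreements like the one shown in the answer below.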
rooms.cpp
#include <iostream>
#include <vector>
#include <string>
#include <algorithm>
std::size_t rooms(std::istream &in) {
    std::size_t height;
    std::size_t width;
    std::size_t roomcount{0};
    static constexpr char empty{'.'};
    in >> height >> width;
    if (!in)
        return roomcount;
    std::vector<std::size_t> tracker(width, 0);
    for (auto i{height}; i; --i) {
        std::string row;
        in >> row;
        if (row.size() != width) {
            in.setstate(std::ios::failbit);
            return roomcount;
        }
        for (std::size_t j{0}; j < width; ++j) {
            if (row[j] == empty) {
                // continuation from line above?
                if (tracker[j]) {
                    // also from left?
                    if (j && tracker[j-1] && (tracker[j-1] != tracker[j])) {
                        tracker[j] = tracker[j-1] = std::min(tracker[j], tracker[j-1]);
                        --roomcount;
                    }
                } else {
                    // continuation from left?
                    if (j && tracker[j-1]) {
                        tracker[j] = tracker[j-1];
                    } else {
                        tracker[j] = ++roomcount;
                    }
                }
            } else {
                tracker[j] = 0;
            }
        }
    }
    return roomcount;
}
int main() {
    auto r = rooms(std::cin);
    while (std::cin) {
        std::cout << r << '\n';
        r = rooms(std::cin);
    }
}
test.in
5 8
########
#..#...#
####.#.#
#..#...#
########
9 25
#########################
#.#.#.#.#.#.#.#.#.#.#.#.#
#########################
#.#.#.#.#.#.#.#.#.#.#.#.#
#########################
#.#.#.#.#.#.#.#.#.#.#.#.#
#########################
#.#.#.#.#.#.#.#.#.#.#...#
#########################
3 3
...
...
...
3 3
###
...
###
3 3
###
###
###
7 9
#########
#.#.#.#.#
#.#.#.#.#
#.#...#.#
#.#####.#
#.......#
#########
5 8
########
#..#.#.#
##.#.#.#
#..#...#
########
7 8
########
#..#.#.#
##.#.#.#
#..#...#
########
#..#...#
########
7 9
#########
#.#.#.#.#
#.#.#.#.#
#.#.#.#.#
#.#.#.#.#
#.......#
#########
7 9
#########
#.#.##..#
#.#.##.##
#.#.##..#
#.#...#.#
#...#...#
#########
7 9
#########
#.#.....#
#.#.###.#
#.#...#.#
#.#####.#
#.......#
#########
7 9
#########
#.......#
#.#####.#
#.#.#.#.#
#.#.#.#.#
#.......#
#########
Results
Running the program as rooms <test.in produces this expected result:
3
47
1
1
0
2
2
4
1
1
1
1
Answer: The program does not work correctly for all input. As an example,
the output is 0 (zero rooms) for the input
5 8
########
######.#
#......#
##.#...#
######## | {
"domain": "codereview.stackexchange",
"id": 31396,
"tags": "c++, algorithm, programming-challenge, c++14"
} |
Catch the turtle - Python | Question: I made a game in python, where the objective is to catch a 'turtle'. The controls are the arrow keys.
#Catch the turtle
import turtle
import math
import random
score = 0
print ("\n" * 40)
print("Your score is:\n0")
#Title
t=turtle.Pen()
t.pencolor("Blue")
t.hideturtle()
t.penup()
t.setposition(-70,350)
t.write("Catch the turtle", font=("Verdana", 18))
#Tip
text=turtle.Pen()
t.pencolor("Red")
t.hideturtle()
t.penup()
t.setposition(-70, -350)
t.write("DON'T TOUCH THE EDGES!", font=("Verdana", 18))
#Set up screen
wn = turtle.Screen()
wn.bgcolor("lightblue")
wn.title("Catch the Turtle")
#Draw border
mypen = turtle.Turtle()
mypen.penup()
mypen.speed(10)
mypen.hideturtle()
mypen.setposition(-300,-300)
mypen.pendown()
mypen.pensize(3)
for side in range(4):
    mypen.color("yellow")
    mypen.forward(300)
    mypen.color("black")
    mypen.forward(300)
    mypen.left(90)
mypen.hideturtle()
#Create player turtle
player = turtle.Turtle()
player.color("blue")
player.shape("arrow")
player.penup()
player.speed(0)
#Create goal
goal = turtle.Turtle()
goal.color("red")
goal.shape("turtle")
goal.penup()
goal.speed(0)
goal.setposition(-100, -100)
#Set speed
speed = 1
#Define functions
def turnleft():
    player.left(30)
def turnright():
    player.right(30)
def increasespeed():
    global speed
    speed +=0.5
def decreasespeed():
    global speed
    speed -= 1
#Set keyboard binding
turtle.listen()
turtle.onkey(turnleft, "Left")
turtle.onkey(turnright, "Right")
turtle.onkey(increasespeed, "Up")
turtle.onkey(decreasespeed, "Down")
while True:
    player.forward(speed)
    #Boundary check
    if player.xcor() > 300 or player.xcor() < -300:
        print("GAME OVER")
        quit()
    if player.ycor() > 300 or player.ycor() < -300:
        print("Game OVER")
        quit()
    #Collision checking
    d= math.sqrt(math.pow(player.xcor()-goal.xcor(),2) + math.pow(player.ycor()-goal.ycor(),2))
    if d < 20 :
        goal.setposition(random.randint(-300,300), random.randint(-300, 300))
        score = score + 1
        print ("\n" * 40)
        print("Your score is")
        print (score)
Answer: Using Variables
score = 0
print ("\n" * 40)
print("Your score is:\n0")
You are setting the score to 0, then printing the value 0 as the score. What if you decided, that the player starts with 100 score points? You would have to change the value in two places. Since you already have the value in a variable, it is better to print the value of the variable. Like this: print('Your score is {0}'.format(score)) Note that I also removed the newline from the output, because Your score is 0 reads much more naturally than
Your score is:
0.
Using Functions
#Set up screen
wn = turtle.Screen()
wn.bgcolor("lightblue")
wn.title("Catch the Turtle")
This comment already reads like a function name, so pack the code inside of a function. This way you don't even need the comment, because the function name tells you all you have to know. It could look like this:
def setup_screen():
    wn = turtle.Screen()
    wn.bgcolor("lightblue")
    wn.title("Catch the Turtle")
Even better you can add parameters, to be able to easily change how the screen is setup without having to read and change the setup code:
def setup_screen(background_color, title):
    wn = turtle.Screen()
    wn.bgcolor(background_color)
    wn.title(title)
That way you achieve a good level of abstraction, where you don't have to read all of the details every time you want to change something.
The same applies to many more blocks of code that begin with such a comment that sounds like "do this and that".
Logic and Bugs
#Title
t=turtle.Pen()
t.pencolor("Blue")
t.hideturtle()
t.penup()
t.setposition(-70,350)
t.write("Catch the turtle", font=("Verdana", 18))
#Tip
text=turtle.Pen()
t.pencolor("Red")
t.hideturtle()
t.penup()
t.setposition(-70, -350)
t.write("DON'T TOUCH THE EDGES!", font=("Verdana", 18))
In the second block, you are first initializing a variable text, but then you continue using the variable t that you used in the block before. This way you are changing values that you already set, and not changing others that you expect to be changed. I think all occurrences of t in the second block should be text instead. However, if you were using functions as I explained above, this problem would not occur and you would be less likely to get this bug, which is probably caused by copy-pasting code and forgetting to change it.
#Boundary check
if player.xcor() > 300 or player.xcor() < -300:
    print("GAME OVER")
    quit()
if player.ycor() > 300 or player.ycor() < -300:
    print("Game OVER")
    quit()
This is redundant. You have the same code twice, with just different conditions. You can already see why this is bad: You typed the output strings differently without realising, so in some cases when the player is out of bounds the output will be GAME OVER, in some cases it will be Game OVER. This is a bug, and a very simple example of bugs being introduced or at least made much harder to find by redundant code.
To fix it you can either combine the logical expressions into one condition:
#Boundary check
if (player.xcor() > 300 or
        player.xcor() < -300 or
        player.ycor() > 300 or
        player.ycor() < -300):
    print("GAME OVER")
    quit()
Or, even better, define a function to check the bounds (this is way more readable and maintainable):
def is_player_in_bounds():
    return (player.xcor() < 300 and
            player.xcor() > -300 and
            player.ycor() < 300 and
            player.ycor() > -300)
Note that I inverted the logical expression, because I check for in bounds, not for out of bounds. Then you use it like this:
if not is_player_in_bounds():
    print("GAME OVER")
    quit()
#Collision checking
d= math.sqrt(math.pow(player.xcor()-goal.xcor(),2) + math.pow(player.ycor()-goal.ycor(),2))
if d < 20 :
    goal.setposition(random.randint(-300,300), random.randint(-300, 300))
    score = score + 1
    print ("\n" * 40)
    print("Your score is")
    print (score)
d is a very bad name for a variable. Since this looks like a calculation of Euclidean distance, I assume that it is supposed to mean distance, so better call your variable distance or at least dist.
You can write score += 1 here to make it shorter.
You might want to put the output code into a function (e. g. print_score()), to separate logic from I/O.
math.sqrt(math.pow(player.xcor()-goal.xcor(),2) + math.pow(player.ycor()-goal.ycor(),2)) is hard to read, because of the formatting, and because it is a long formula with variable names, namespaces and method/function calls. Better put it in a function:
def euclidean_distance(player, goal):
    distance_x = player.xcor() - goal.xcor()
    distance_y = player.ycor() - goal.ycor()
    return math.sqrt(distance_x ** 2 + distance_y ** 2)
Using a ** b for exponentiation is faster and looks cleaner than using math.pow(). Note that player and goal are local variables here, even if they have the same name as the global ones.
Use the function like this:
if euclidean_distance(player, goal) < 20:
    goal.setposition(random.randint(-300, 300), random.randint(-300, 300))
    score += 1
    print_score()
Even better if you would define the bounds somewhere as variables and access them that way.
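For illustration, that could look like the following (the names BOUND and is_in_bounds are made up for this sketch):

```python
BOUND = 300  # hypothetical constant replacing the hard-coded 300s

def is_in_bounds(x, y, bound=BOUND):
    """True while (x, y) is strictly inside the square play area."""
    return -bound < x < bound and -bound < y < bound

print(is_in_bounds(0, 0))    # -> True
print(is_in_bounds(301, 5))  # -> False
```

Changing the arena size then means editing a single constant instead of hunting down every literal 300.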
Redundancy
#Set speed
speed = 1
This comment tells you nothing about the code that the code does not already tell you. It just bloats the code and requires you to read more redundant information. When you read speed = 1, it is perfectly clear that the speed is being set. Comments should tell you why the code does something (if needed), while the code tells you what it does.
def turnleft():
    player.left(30)
def turnright():
    player.right(30)
These functions do not add any value. (Hint: Just after reviewing these functions I saw why they are defined, which is using them for event handling. In this case they do add value, but I will leave the info for understanding.) player.left(30) gives you more information than turnleft(). If you read turnleft() somewhere in your code, you do not know what is turned, you do not know how much it is turned. Whenever you want to write a function that only contains one or two lines, think about whether the one line or two lines would be more readable and give you more valuable information.
def increasespeed():
global speed
speed +=0.5
def decreasespeed():
global speed
speed -= 1
Here we have the same, but even worse: The functions change global state, which can make it much harder to find bugs. Try to avoid that whenever possible. Furthermore, speed += 0.5 and speed -= 1 already tell you that the speed is increased/decreased, and they tell you how much they are changed, which is valuable information. If your only reason for defining these functions is that you can easily change how much the speed is increased/decreased, define variables instead and do speed += SPEED_INCREASE. Writing it in uppercase later reminds you not to change it.
#Set keyboard binding
turtle.listen()
turtle.onkey(turnleft, "Left")
turtle.onkey(turnright, "Right")
turtle.onkey(increasespeed, "Up")
turtle.onkey(decreasespeed, "Down")
Since we said above that not defining these functions might be better, you can use lambdas as callbacks. Lambdas are anonymous functions, that means you do not have to define them with def, but you can just use them where you need them. In the cases where you just change a variable however, you will still need the function. This is what it could look like:
#Set keyboard binding
turtle.listen()
turtle.onkey(lambda: player.left(30), "Left")
turtle.onkey(lambda: player.right(30), "Right")
turtle.onkey(increasespeed, "Up")
turtle.onkey(decreasespeed, "Down") | {
"domain": "codereview.stackexchange",
"id": 29085,
"tags": "python, python-3.x, game, turtle-graphics"
} |
About Feynman rules and symmetry factor | Question: I have two simple questions about Feynman rules for Lagrangian that have a derivative of fields.
For example for Lagrangian in this link part a with derivative couplings. Is the vertex depend on direction of momentum? If we have a vertex and then change the momentum direction does it change? (depend on incoming and outgoing direction.)
And other simple question is that the symmetry factor of diagram is invariant under the change of interaction term and only depends on geometry of the diagram, is it right?for example if we change $1/4!$ of $\phi^4$ interaction to $1$, the same diagram have a same symmetry factor?
Answer:
Yes, the Feynman rule (in momentum space) for interaction vertices with derivatives may in general depend on the orientation of the $4$-momentum of its legs. (If there is an even number of derivatives, the orientation doesn't matter.) Be aware that different consistent conventions exist in the literature.
Yes, the symmetry factor of a Feynman diagram does by definition not depend on the normalization of the coupling constant. On the other hand, (the value of) a Feynman diagram will in general obviously depend on the coupling constant and its normalization. Concerning vertex factors, see also this related Phys.SE post. | {
"domain": "physics.stackexchange",
"id": 67446,
"tags": "quantum-field-theory, symmetry, differentiation, feynman-diagrams, interactions"
} |
roslibpy : Publishing and subscribing using custom messages | Question:
Are there some examples that could guide me in publishing and subscribing using custom messages with roslibpy?
Originally posted by aks on ROS Answers with karma: 667 on 2018-10-01
Post score: 1
Original comments
Comment by aks on 2018-10-01:
@gramaziokohler @gonzalocasas
Answer:
For completeness, the question is answered in detail in this GitHub issue, but in short: you don't need to add any additional type information; simply supply a message - which is basically a Python dictionary - that matches (key- and value-type-wise) the expected message format, and that's it.
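A minimal sketch of that idea (the message type my_msgs/Person and its fields are invented for illustration; the roslibpy calls are shown as comments because they need a running rosbridge server):

```python
# A custom message is published as a plain dict whose keys and value
# types match the message definition. For a hypothetical my_msgs/Person:
#
#   string name
#   int32 age
#
def make_person(name, age):
    return {'name': str(name), 'age': int(age)}

msg = make_person('Alice', 42)

# With rosbridge running, publishing would look like:
#   import roslibpy
#   ros = roslibpy.Ros(host='localhost', port=9090)
#   ros.run()
#   topic = roslibpy.Topic(ros, '/person', 'my_msgs/Person')
#   topic.publish(roslibpy.Message(msg))
```

The only contract is the dict shape; roslibpy serializes it as-is for rosbridge.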
Originally posted by gonzalocasas with karma: 180 on 2019-02-19
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by dwd394 on 2022-07-12:
excellent answer my guy | {
"domain": "robotics.stackexchange",
"id": 31843,
"tags": "ros, ros-melodic, rosbridge"
} |
Spinlock for C++ kernel (with x86 ASM) | Question: This is some prototype code for a spin-lock for my toy operating system. Bear in mind this means there are no standard library routines to fall back on. It's also for C++11 compilation and will only ever be compiled by the GNU G++ compiler.
The primary use for this is in the scheduler and other code that accesses the process table.
I think there are lots of ways of getting this wrong, and some of them aren't obvious. So it'd be great to have a second pair of eyes on it.
Firstly the locking code itself in NASM x86 asm:
; *******************************
; void spin_lock(uint32 * p);
;
global __kspin_lock
__kspin_lock:
push ebp
mov ebp, esp ; we might as well set up the stack frame, right?
mov eax, [ebp + 8] ; eax contains address of uint32 used for the lock
mov ecx, 1
.tryacquire ; try and get the lock.
xchg ecx, [eax]
test ecx, ecx
jz .acquired ; old value 0 means we got the lock; otherwise fall through and retry
.pauseloop
pause ; see 'About PAUSE' below
test dword [eax], 1
jne .pauseloop
jmp .tryacquire
.acquired
pop ebp
ret
And secondly a class that wraps it a little. (I will eventually then wrap this with an RAII class to release the lock automatically when it goes out of scope).
#pragma once
#include <std_types.h>
extern "C" {
/** These routines probably shouldn't be used directly. */
void __kspin_lock(uint32 *p);
};
/**
* SpinLock provides a crude locking mechanism based around an atomic
* exchange of a variable in memory.
*
* The SpinLock loops, swapping the value 1 for the value in lock.
* Each time it checks whether the value retrieved from lock is 0;
* if it is, it knows that lock now contains 1, and it holds the lock.
*
* To ensure all spinlocks reside in a unique cache line the spinlock
* is aligned to 64 bytes and is 64 bytes in size. This avoids two
* processes competing for ownership of the same 64 byte cache line,
* which could be bad for performance.
*/
class __attribute__((aligned(64))) SpinLock
{
public:
SpinLock()
: lock_value(0)
{}
void lock()
{
__kspin_lock(&lock_value);
}
void release()
{
__kspin_lock_release(&lock_value);
}
private:
static void __kspin_lock_release(uint32 *p)
{
*p = 0;
}
/* Difficult to know whether it makes sense to copy a spinlock.
* It might be practical to copy it but always 0 the lock for
* the copy. */
SpinLock(const SpinLock &) = delete;
SpinLock & operator=(const SpinLock &) = delete;
/* padded to ensure it's on a unique cache line. */
uint32 lock_value;
char reserved[64 - sizeof(uint32)];
};
I also think this code could use some thorough tests, especially on performance, but that might rather be a case of profiling in place in the kernel. Any suggestions welcome.
Incidentally - I'm also thinking that it's going to be useful to add some additional debugging and profiling features to the code. For example, and probably only while running a debug build:
How frequently was the lock contended.
Which thread owns the lock.
Is the lock held by this thread (this would allow functions to check that the lock is held by a calling function properly).
Answer: This is largely a duplicate of https://stackoverflow.com/questions/6935442/x86-spinlock-using-cmpxchg, https://stackoverflow.com/questions/11959374/fastest-inline-assembly-spinlock, etc.
(1) Your code works only on 32-bit x86. If you're using 64-bit (x86-64), the calling convention is totally wrong (you need to save and restore %rbp rather than %ebp, and you're looking for the argument in the wrong place). If you're implementing your own OS, I suppose you know what you're doing... but in general I think it's weird to see 32-bit x86 code these days.
(2) Useless trivia of the day: The two instructions test ecx, ecx; jz .acquired could be replaced with the single instruction jecxz .acquired. I don't know whether this is an optimization or a pessimization on your particular hardware. It would be a size optimization, at least.
(2.5) Really, consider using cmpxchg instead of xchg; test. The cmpxchg mnemonic maps more directly onto the "Compare And Swap" idiom that you're using. It will also help you when it comes time to implement "is the lock held by this thread"; you can set the word to current-thread-id instead of just 1.
(3) See 'About PAUSE' below: Unfortunately there is no "About PAUSE" below. However, it seems like you're doing the right thing there. You could definitely tighten up the code; for example I don't know why you're wasting time with that test dword [eax], 1. You should just retry the xchg as soon as the pause is over.
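As an aside to (2.5): if portability ever matters more than the hand-written NASM, the same acquire/release discipline can be expressed with C++11's std::atomic_flag. This is only a sketch - in a freestanding kernel you would have to verify your toolchain's <atomic> support, and the pause hint is a compiler builtin rather than standard C++:

```cpp
#include <atomic>

// Portable C++11 spinlock with the same semantics as the NASM version.
class AtomicSpinLock {
public:
    void lock() {
        // test_and_set returns the previous value: loop until it was clear.
        while (flag_.test_and_set(std::memory_order_acquire)) {
            // busy-wait; a pause hint could go here,
            // e.g. __builtin_ia32_pause() on x86 with GCC/Clang
        }
    }
    bool try_lock() {
        // true if we acquired the lock on this attempt
        return !flag_.test_and_set(std::memory_order_acquire);
    }
    void unlock() {
        flag_.clear(std::memory_order_release);
    }
private:
    std::atomic_flag flag_ = ATOMIC_FLAG_INIT;
};
```

std::atomic_flag is guaranteed lock-free, so on x86 the compiler emits a lock-prefixed exchange much like the hand-written xchg.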
(4) Consider removing the stack frame; it's not doing anything useful (unless it's necessary for your debugger of choice), just slowing things down. | {
"domain": "codereview.stackexchange",
"id": 12767,
"tags": "c++, c++11, locking, assembly"
} |
What are Quantum Field Theories? | Question: Every time I read about quantum field theories, I wrongly assume and associate the theory to the Standard Model, that is, our current theory of particles and interactions.
However, it seems that the Standard Model is just one type of Quantum Field Theory among many. Unfortunately, everywhere I search about QFT, the discussion is about the Standard Model itself (QED, Weak, ...) (for example, Wikipedia says "In theoretical physics, quantum field theory (QFT) is a theoretical framework that combines classical field theory, special relativity, and quantum mechanics"). So, my question is:
What exactly is a Quantum Field Theory? For example, a CFT is a QFT; why? Is that because we impose commutator/anti-commutator algebras while requiring the whole theory to be invariant under the Poincaré group?
Answer: A field theory is a mathematical model where the "basic ingredients" are fields. Maxwell's theory of electromagnetic fields and continuum mechanics are prominent examples of classical field theories.
A quantum field theory is a quantum theory, where you promote fields to operators, roughly speaking. The standard model is a specific instance of a relativistic quantum field theory.
There are also non-relativistic quantum field theories, used e.g. in (non-relativistic) condensed matter physics, with a plethora of applications, such as the description of phonons in solids, superfluids and superconductors as well as the fractional quantum Hall effect, to name a few.
Two books which discuss quantum field theories in both mentioned domains are:
Quantum Field Theory for the Gifted Amateur. Tom Lancaster, Stephen Blundell. OUP Oxford, 2014.
Quantum Field Theory. An Integrated Approach. Eduardo Fradkin. Princeton University Press, 2021. | {
"domain": "physics.stackexchange",
"id": 97109,
"tags": "quantum-field-theory, field-theory, definition, models"
} |
What is the time complexity problem I need to manage on a Hackerank "Ransom Note" - my code runs perfectly | Question: I am trying to solve the "Ransom Note" problem on hackerank.
Why isn't my code getting submitted?
What can I do differently?
Is my total time complexity in this case O(n^2)?
#!/bin/python3
import math
import os
import random
import re
import sys
# Complete the checkMagazine function below.
def checkMagazine(magazine, note):
for word in note: # O(n)
if word not in magazine: # O(n)
return 'No'
else:
magazine.remove(word) # O(n)
return 'Yes'
if __name__ == '__main__':
mn = input().split()
m = int(mn[0])
n = int(mn[1])
magazine = input().rstrip().split()
note = input().rstrip().split()
print(checkMagazine(magazine, note))
Answer: You can find some ideas in the discussions tab of the original question:
https://www.hackerrank.com/challenges/ctci-ransom-note/forum?h_l=interview&playlist_slugs%5B%5D=interview-preparation-kit&playlist_slugs%5B%5D=dictionaries-hashmaps.
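In short, the counting approach described below could be sketched with collections.Counter (an illustration, not the official editorial solution):

```python
from collections import Counter

def check_magazine(magazine, note):
    # Count words in both lists in O(n) each.
    magazine_counts = Counter(magazine)
    note_counts = Counter(note)
    # Every word in the note must appear at least as often in the magazine;
    # Counter returns 0 for missing words.
    for word, needed in note_counts.items():
        if magazine_counts[word] < needed:
            return 'No'
    return 'Yes'

print(check_magazine(['give', 'me', 'one', 'grand', 'today', 'night'],
                     ['give', 'one', 'grand', 'today']))  # → Yes
```

Each pass is a single loop over its input, so the whole check is linear instead of quadratic.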
To answer your questions: yes, your solution has time complexity of O(n^2).
The optimal solution has time complexity of O(n):
Go through magazine in O(n), get a map from words to counts.
Go through note in O(n), also get a map from words to counts.
Go through the map from step 2 in O(n); for each word, check whether its count exceeds the corresponding count in the map from step 1. Return True only when no note count exceeds the magazine count. | {
"domain": "codereview.stackexchange",
"id": 39377,
"tags": "python, performance, python-3.x, complexity"
} |
Seeking improved Objective-C permutation algorithm | Question: This algorithm seeks to find all of the possible permutations of a word:
-(void)permute:(NSMutableArray*)word position:(int)p length:(int)l{
if(p==l){
[self.allPermutations addObject:[word componentsJoinedByString:@""]];
}
else {
for(int i = p;i<l;i++){
NSString* t;
t= word[p];
word[p] = word[i];
word[i] = t;
[self permute:word position:p+1 length:l];
t= word[p];
word[p] = word[i];
word[i] = t;
}
}
}
After that, I clean out the duplicates with this:
NSArray *cleanedArray = [[NSSet setWithArray:self.allPermutations] allObjects];
This starts to show a lag when the words get to 9 letters or greater. 9 letters is 362,880 loops. Any tips to get this down, or speed up this algorithm in a different way?
Answer: Before I comment on performance, I want to comment on style and naming and such.
First and foremost, let's not use single letters for variables. They're meaningless. It's hard to follow what's happening in your code when it's just a handful of letters. Longer variable names have ZERO impact on the runtime performance of your code, but make a massive difference in its readability and maintainability.
But the thing is, we don't even need the p or l variables. There's an existing Foundation struct designed for EXACTLY this purpose. It's called NSRange, which consists of two components, a location (what you're calling p) and a length (what you're calling l).
So, let's change the method name to:
- (void)permute:(NSMutableArray *)word range:(NSRange)range;
Now let's fix the internals of the method. As a note, the following doesn't change any of the logic. It just fixes up the spacing and naming to make the method more readable:
- (void)permute:(NSMutableArray *)word range:(NSRange)range {
if (range.location == range.length) {
[self.allPermutations addObject:[word componentsJoinedByString:@""]];
} else {
for (int i = range.location; i < range.length; ++i) {
NSString *currentWord = word[range.location];
word[range.location] = word[i];
word[i] = currentWord;
[self permute:word range:NSMakeRange(range.location + 1, range.length)];
currentWord = word[range.location];
word[range.location] = word[i];
word[i] = currentWord;
}
}
}
Now the code is a bit more readable without having sacrificed any performance.
But there are still problems.
First of all, why are we using NSString within the loop? It doesn't make any sense. We're not calling any methods on these objects, we're just swapping them around. Instead of NSString, why not just id?
Second, what happens if p is greater than word.count? What happens if l + p is greater than word.count? What happens if p is negative? What happens if l is negative? A crash happens--that's what. If you want this method to find all permutations, then start with an outer method that takes only an NSMutableArray argument and calls a method like this one, passing for the first call an NSRange with a location of 0 and a length of word.count.
Here's another issue:
[self permute:word position:p+1 length:l];
t= word[p];
word[p] = word[i];
word[i] = t;
Are these last 3 lines ever executed in a meaningful way? If you comment every non-brace line after the recursive call out, do the results change? By the time you get to these three lines, you've already executed
[self.allPermutations addObject:[word componentsJoinedByString:@""]];
As far as I can tell, all those last three lines might be doing is straightening the array back out into its original form... is that the intent here? That's crazy to have this many executions of code on the way out of the recursive call just to straighten a mutable array out back to its original form, and there's a much better way to do that!
Perhaps I'm misunderstanding the purpose of these three lines, but if I'm understanding this correctly and the only purpose truly is to rewind the array back to its original position, then let's try this:
- (void)permute:(NSMutableArray *)word range:(NSRange)range {
static NSMutableArray *copyOfWord;
if (!copyOfWord) {
copyOfWord = [word mutableCopy];
}
if (range.location == range.length) {
[self.allPermutations addObject:[copyOfWord componentsJoinedByString:@""]];
copyOfWord = nil; // We're done now. Set back to `nil` so next call can use it
} else {
for (int i = range.location; i < range.length; ++i) {
NSString *currentWord = copyOfWord[range.location];
copyOfWord[range.location] = copyOfWord[i];
copyOfWord[i] = currentWord;
[self permute:copyOfWord range:NSMakeRange(range.location + 1, range.length)];
}
}
} | {
"domain": "codereview.stackexchange",
"id": 8583,
"tags": "optimization, algorithm, recursion, objective-c, combinatorics"
} |
Heap Implementation in C# | Question: I am learning fundamental data structures, and I want to know if this is a valid implementation of a heap. Can any C# features be used to improve or extend the implementation? In addition to heapsort, what are some applications of a heap?
class BinaryHeap
{
/// <summary>
/// A binary heap implementation used to sort an integer array.
///
/// 1. The Max-Heapify and Min-Heapify methods maintain the heap properties.
///
/// 2. The Build-Heap methods produce the heap from an unordered array.
///
/// 3. The HeapSort methods sort the array in place and run in O(n log n) time.
/// </summary>
private int heapSize;
public BinaryHeap()
{
heapSize = 0;
}
private int ParentIndex(int currentIndex)
{
return currentIndex / 2;
}
private int LeftIndex(int currentIndex)
{
return currentIndex * 2;
}
private int RightIndex(int currentIndex)
{
return currentIndex * 2 + 1;
}
// Building the heap
#region
public void BuildMaxHeap(int[] A)
{
heapSize = A.Length - 1;
for(int i = A.Length/2; i >= 0; i--)
{
MaxHeapify(A, i);
}
}
public void BuildMinHeap(int[] A)
{
heapSize = A.Length - 1;
for (int i = A.Length / 2; i >= 0; i--)
{
MinHeapify(A, i);
}
}
#endregion
// Maintaining heap properties
// MaxHeapify: Ensure that parents are larger than children
// MinHeapify: Ensure that children are larger than parents
#region
public void MaxHeapify(int[] A, int i)
{
int leftIndex = LeftIndex(i);
int rightIndex = RightIndex(i);
int largestIndex = 0;
// Check to see which node in the tree subset has the largest value
if (leftIndex <= heapSize && A[leftIndex] > A[i])
{
largestIndex = leftIndex;
}
else
{
largestIndex = i;
}
if (rightIndex <= heapSize && A[rightIndex] > A[largestIndex])
{
largestIndex = rightIndex;
}
// Do not make any switches if the largest node is the parent
if(largestIndex != i)
{
int temp = A[largestIndex];
A[largestIndex] = A[i];
A[i] = temp;
MaxHeapify(A, largestIndex);
}
}
public void MinHeapify(int[] A, int i)
{
int leftIndex = LeftIndex(i);
int rightIndex = RightIndex(i);
int smallest;
if (leftIndex <= heapSize && A[leftIndex] < A[i])
{
smallest = leftIndex;
}
else
{
smallest = i;
}
if (rightIndex <= heapSize && A[rightIndex] < A[smallest])
{
smallest = rightIndex;
}
if (smallest != i)
{
int temp = A[i];
A[i] = A[smallest];
A[smallest] = temp;
MinHeapify(A, smallest);
}
}
#endregion
// Heapsort
#region
public int[] AscendingHeapSort(int[] A)
{
// Ensure all parents are greater than their children
BuildMaxHeap(A);
for (int i = A.Length - 1; i >= 0; i--)
{
int temp = A[0];
A[0] = A[i];
A[i] = temp;
heapSize--;
MaxHeapify(A, 0);
}
return A;
}
public int[] DescendingHeapSort(int[] A)
{
// Ensure all parents are less than their children
BuildMinHeap(A);
for (int i = A.Length - 1; i >= 0; i--)
{
int temp = A[0];
A[0] = A[i];
A[i] = temp;
heapSize--;
MinHeapify(A, 0);
}
return A;
}
#endregion
}
Answer: C#/Design related comments
Use lowercase names for method parameters.
Put your comments (The <summary> ) about the class on top of the class definition, not the field.
Using #region the way you do isn't typical C# style: the description of the region should go right after #region on the same line.
Use C# comments styling when commenting classes, methods, their parameters.
There is not much sense in designing a Heap class that stores just a single integer (the heap size). You could simply have public static methods (utility methods) for that purpose, because it seems none of your methods really needs to store state.
Depending on the use case, you might want to design Push/Pop methods and store the underlying array in the class; then having a class would make sense.
Try to use C# Generics instead of strongly defined int. This way you can make your code re-usable for other types (e.g. short) and/or for any object that implements IComparable (check out MSDN for more info on these).
Where to use Heap.
In addition to sorting you can use Heap as PriorityQueue.
If you add Push/Pop methods and slightly modify the implementation, then it can serve as a PriorityQueue, which allows you to add elements (any object that implements IComparable) in random order, and the higher/lower ones will get to the top/bottom of the queue. You can go even further by designing another method, Update(...), which can modify the priority of an element (technically you could implement it by removing the old value from the queue and then adding the updated value back).
Didn't validate your code correctness, but you can/should validate it by testing on some regular and corner cases etc. | {
"domain": "codereview.stackexchange",
"id": 20366,
"tags": "c#, heap"
} |
How electric flux is analog to water? | Question: The definition of electric flux is very often understood through the analogy of water. In the water example, water flux is easy to understand. Water flux is how much water flows per second in a given area. If you know the speed and area, it is all good.
The analogy doesn't work too well for me. $E$ must then play the role of the speed in the water analogy; but since $E$ is not a speed, nothing actually flows through a given area per second. $E$ field lines are just there (I wouldn't call this "flow").
If so, then I have difficulty understanding what exactly $E$ times $A$ actually yields. It cannot give us the number of field lines per second, since nothing is flowing and there is no "per second" involved. Perhaps it gives us how many of the total $E$ field lines pass through the given area - but if so, I don't get why $E$ times $A$ gives us this.
I am looking for a logical explanation to this. I understand Gauss's Law, but it doesn't help me understand this.
Answer:
The analogy doesn't work for me too well.
Well, it is only an analogy but not an equality,
because $\mathbf{E}$ and $\mathbf{v}$ are different things.
The analogy is just motivated by some mathematical similarities:
Both are vector fields.
Both satisfy Gauss's law.
The velocity field satisfies $\nabla\cdot\mathbf{v}=0$
because water is incompressible and cannot be created or destroyed.
The electric field satisfies $\nabla\cdot\mathbf{E}=0$, if there are no charges.
$E$ field lines are just there (wouldn't call this "flow").
Correct.
I'm having a difficulty what exactly E times A actually gives ?
$\mathbf{E}$ gives the density of the field lines (i.e., the number of lines per area).
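As a concrete example (numbers invented for illustration): for a uniform field $\mathbf{E}$ crossing a flat area $\mathbf{A}$ whose normal is tilted by an angle $\theta$ from the field,

$$\Phi_E=\mathbf{E}\cdot\mathbf{A}=EA\cos\theta,$$

so $E=100\ \mathrm{N/C}$ through $A=2\ \mathrm{m^2}$ at $\theta=60^\circ$ gives $\Phi_E=100\cdot 2\cdot\tfrac12=100\ \mathrm{N\,m^2/C}$: only the component of the line density perpendicular to the surface counts.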
Hence $\mathbf{E}\cdot\mathbf{A}$ gives the number of field lines passing through the area $\mathbf{A}$. | {
"domain": "physics.stackexchange",
"id": 95893,
"tags": "electrostatics, fluid-dynamics, electric-fields, flow"
} |
Einstein equivalence principle conflict | Question:
The weak principle of equivalence says that freely falling towards the Earth is the same as being in space far away from any stars.
However, imagine that you are freely falling on a planet with $g=9\cdot 10^{99} m/s^2$ (due to the extremely large mass). Then how can it possibly be the same as being in space far from any stars? Because you would die due to the insanely large g-forces acting on your body.
Answer: No matter what the actual value of g is, if g is constant, you won't feel it. In a real life situation you would eventually know you are falling towards the ground because g will not take the same value at all times during your fall. | {
"domain": "physics.stackexchange",
"id": 99136,
"tags": "general-relativity, equivalence-principle, free-fall, tidal-effect"
} |
How can we know the state of a quantum system? | Question: One of the postulates of QM states that given a system in a state $|\psi\rangle$ and given an observable $A$ whose eigenstates are $|\phi_i\rangle$, then the state of the system can be expressed as a linear combination of them such that
$$|\psi\rangle=\sum_ic_i|\phi_i\rangle$$
and the probability of the eigenvalue $a_i$ associated to the eigenstate $|\phi_i\rangle$ of coming out when $A$ is measured is determined by $|c_i|^2$.
So far so good. My question is how the $c_i$ coefficients are determined. I mean, if one can only get eigenvalues when doing measurements, and on top of that the system is left in an eigenstate right after that, how can one know the state the system is in before performing the measurement (and, with that, the probability of getting the different eigenvalues)?
Answer: Experimental determination of $c_i$ values starts with preparing multiple identical systems, then making measurements. From all the measurements, one determines the probabilities, which are the $|c_i|^2$. The square root of the probabilities will tell you the $c_i$ to within a phase factor of the form $e^{i\beta}$, where $\beta$ is real, and may or may not be determinable.
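The "multiple identical systems" procedure can be mimicked numerically. A toy sketch in pure Python (the two-state system and its probabilities $|c_1|^2=0.36$, $|c_2|^2=0.64$ are invented for illustration): simulate many measurements and recover the probabilities from the outcome frequencies.

```python
import random

random.seed(0)

# Hypothetical two-state system with |c1|^2 = 0.36 and |c2|^2 = 0.64.
true_probs = {'a1': 0.36, 'a2': 0.64}

# Each "measurement" on a freshly prepared copy yields one eigenvalue.
N = 100000
outcomes = random.choices(list(true_probs), weights=list(true_probs.values()), k=N)

# Estimated |c_i|^2 = relative frequency of eigenvalue a_i.
estimates = {a: outcomes.count(a) / N for a in true_probs}
print(estimates)  # each estimate lands close to the true probability
```

As the answer notes, this recovers only $|c_i|^2$; the relative phases need different (interference-type) experiments.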
A simple example of multiple identically prepared systems would be a sample of a radioactive mineral with a single parent nuclide. | {
"domain": "physics.stackexchange",
"id": 30511,
"tags": "quantum-mechanics, measurement-problem"
} |
Showing linear-phase frequency response when only real part is known | Question: I'm studying for an exam, and this particular practice problem is missing the solution:
Given a causal LTI system where only the real part of the frequency response is known,
$$\Re\left\{H(j\omega)\right\}= 1 + \cos(\omega)
$$
show that the system has a linear-phase frequency response.
I know that systems with constant group delay have linear phase, but I can't figure out how to get the group delay without the imaginary part, since group delay is
$$
\tau(\omega)=\Re\left\{\frac{\mathcal F[th(t)]}{\mathcal F[h(t)]}\right\}
$$
and
\begin{align}
\Re\left\{\mathcal F[th(t)]\right\} &= \Re\left\{j\frac{d}{d\omega}H(j\omega)\right\}\\
& = \Re\left\{\frac{d}{d\omega}[jH_{\rm real}(j\omega)+j^2H_{\rm imag}(j\omega)]\right\}\\
& = -\frac{d}{d\omega}H_{\rm imag}(j\omega)
\end{align}
Is my approach right but I made a mistake or can't see some trick, or am I going down the wrong path?
Answer: I agree with Maximilian Matthé's answer, but I'd like to show you another route to the solution, which might be a bit more straightforward, and which avoids the explicit application of the Hilbert transform.
First of all, note that the inverse Fourier transform of the real part of the frequency response corresponds to the even part of the impulse response:
$$H_R(j\omega)=\text{Re}\{H(j\omega)\}=\frac12[H(j\omega)+H^*(j\omega)]\Longleftrightarrow\frac12[h(t)+h^*(-t)]=h_e(t)\tag{1}$$
From $H_R(j\omega)=1+\cos(\omega)=1+\frac12e^{j\omega}+\frac12e^{-j\omega}$ we obtain
$$h_e(t)=\delta(t)+\frac12\delta(t+1)+\frac12\delta(t-1)\tag{2}$$
Since the system is known to be causal, we have $h(t)=0$ for $t<0$, and, consequently, $h^*(-t)=0$ for $t>0$. So from the right-hand part of $(1)$ we obtain $h(t)=2h_e(t)$, $t>0$. For $t=0$ we have $h(t)=h_e(t)$ because the odd part of $h(t)$ must always equal zero at $t=0$ (if that value even exists). So at $t=0$, $h(t)$ is simply equal to its even part.
In sum, the causal impulse response can be obtained from its even part $h_e(t)$ as
$$h(t)=\begin{cases}2h_e(t),&t>0\\h_e(t),&t=0\\0,&t<0\end{cases}\tag{3}$$
which gives
$$h(t)=\delta(t)+\delta(t-1)\tag{4}$$
The corresponding frequency response is
$$H(j\omega)=1+e^{-j\omega}=1+\cos(\omega)-j\sin(\omega)\tag{5}$$
The fact that $H(j\omega)$ is a linear-phase frequency response can most easily be seen by rewriting $(5)$ as
$$H(j\omega)=e^{-j\omega/2}(e^{j\omega/2}+e^{-j\omega/2})=e^{-j\omega/2}2\cos(\omega/2)\tag{6}$$
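Before interpreting $(6)$, a quick numerical sanity check of $(5)$ with nothing but the standard library (sample frequencies chosen arbitrarily in $(0,\pi)$, where $2\cos(\omega/2)>0$):

```python
import cmath
import math

def H(w):
    # frequency response of h(t) = delta(t) + delta(t - 1)
    return 1 + cmath.exp(-1j * w)

for w in (0.3, 1.0, 2.5):
    assert abs(H(w).real - (1 + math.cos(w))) < 1e-12  # Re{H} = 1 + cos(w)
    assert abs(cmath.phase(H(w)) + w / 2) < 1e-12      # phase(H) = -w/2: linear
```

Both the given real part and the linear phase check out exactly on this interval.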
Apart from jumps of $\pi$ at frequencies $\omega_k=(2k+1)\pi$, the phase is linear: $\phi(\omega)=-\omega/2$. | {
"domain": "dsp.stackexchange",
"id": 4706,
"tags": "phase, homework, frequency-domain, hilbert-transform, group-delay"
} |
How fast do you have to be traveling in order to travel one light year in one year due to relativistic effects? | Question: My apologies if my understanding is incorrect, but I believe that as you approach relativistic speeds you experience time dilation as compared to an outside observer.
So taking into account this effect how fast do you have to be travelling in order to reach an object one light year away in one year (subjective time of the traveller). How much normal time would have passed?
Answer: You need to travel at $v=\sqrt{0.5}c=0.707c$. To see this, notice that because of length contraction the traveler will see a shorter distance:
$L=L_0 \sqrt{1-(v/c)^2}$ (1)
where $L_0$ is one light year. If you travel for a year $t_y$, then $L=v t_y=(v/c)ct_y=L_0(v/c)$, since $ct_y=L_0$.
Replacing that in (1) results in $v/c=\sqrt{0.5}$
UPDATE: the elapsed time on Earth is $t$, where the traveler's proper time $t'$ is one year:
$t=t'/\sqrt{1-(v/c)^2}=\sqrt{2}\approx 1.41$ years | {
"domain": "physics.stackexchange",
"id": 22577,
"tags": "homework-and-exercises, special-relativity, observers, inertial-frames"
} |
Loop between two Arrays and change values based on condition | Question: Instead of using 45 IF conditions , I put my two ranges into variant arrays.
Then I used the code below to loop between them and change the values in the second column of the first array arr1 based on a condition.
The first range is only 10K rows and the second range is just 45 rows, yet the code takes about 0.7 seconds to finish.
I tried Application optimizations (Calculation, ScreenUpdating), but they make no difference to the speed.
Thanks in advance for all your help.
Option Explicit
Option Compare Text
Sub LoopTwoArrays2()
Dim ws1 As Worksheet: Set ws1 = ThisWorkbook.Sheets(1)
Dim ws2 As Worksheet: Set ws2 = ThisWorkbook.Sheets(2)
Dim arg As Range, brg As Range
Set arg = ws1.Range("P2:Q" & ws1.Cells(Rows.Count, "P").End(xlUp).Row)
Set brg = ws2.Range("A2:B" & ws2.Cells(Rows.Count, "A").End(xlUp).Row)
Dim arr1 As Variant, arr2 As Variant
arr1 = arg.Value2
arr2 = brg.Value2
Dim i As Long, k As Long
For i = LBound(arr1) To UBound(arr1)
For k = LBound(arr2) To UBound(arr2)
If arr1(i, 1) = arr2(k, 1) Then
arr1(i, 2) = arr2(k, 2)
End If
Next k
Next i
arg.Value = arr1
End Sub
Answer: As @Greedo mentioned, it is much simpler to write a formula, be that XLOOKUP or a combination of INDEX and MATCH with fast results and easier maintenance.
However, if you still need to do VBA for whatever reason, then add a reference to Microsoft Scripting Runtime and use something like this:
Option Explicit
Option Compare Text
Sub LoopTwoArrays2()
Dim ws1 As Worksheet: Set ws1 = ThisWorkbook.Sheets(1)
Dim ws2 As Worksheet: Set ws2 = ThisWorkbook.Sheets(2)
Dim arg As Range, brg As Range
Set arg = ws1.Range("P2:Q" & ws1.Cells(Rows.Count, "P").End(xlUp).Row)
Set brg = ws2.Range("A2:B" & ws2.Cells(Rows.Count, "A").End(xlUp).Row)
Dim arr1() As Variant, arr2() As Variant
RangeToArray arg, arr1
RangeToArray brg, arr2
Dim i As Long, k As Long
Dim dict As New Dictionary
For k = UBound(arr2, 1) To LBound(arr2, 1) Step -1
dict(arr2(k, 1)) = arr2(k, 2)
Next k
On Error Resume Next
For i = LBound(arr1, 1) To UBound(arr1, 1)
arr1(i, 2) = dict(arr1(i, 1))
Next i
On Error GoTo 0
arg.Value2 = arr1
End Sub
Private Sub RangeToArray(ByRef rng As Range, ByRef arr() As Variant)
If rng.Areas(1).Count = 1 Then
ReDim arr(1 To 1, 1 To 1)
arr(1, 1) = rng.Value2
Else
arr = rng.Value2
End If
End Sub
Note that when reading the values from a range you are not guaranteed to get an array, hence the need for the RangeToArray method.
Also, instead of using:
On Error Resume Next
For i = LBound(arr1, 1) To UBound(arr1, 1)
arr1(i, 2) = dict(arr1(i, 1))
Next i
On Error GoTo 0
you might want something like:
For i = LBound(arr1, 1) To UBound(arr1, 1)
If dict.Exists(arr1(i, 1)) Then
arr1(i, 2) = dict(arr1(i, 1))
Else
arr1(i, 2) = Empty 'Or whatever
End If
Next i
which gives you more control on the return value, if the lookup fails. | {
"domain": "codereview.stackexchange",
"id": 43765,
"tags": "array, vba, excel"
} |
Long initialization in nodelet | Question:
On page http://wiki.ros.org/nodelet#Threading_Model the following is written:
1.6.1 onInit
This method is called on init and should not block or do significant work.
If the nodelet initialization process takes longer (e.g. reading data from a database and processing it), what is the best practice for executing the long initialization without blocking the onInit() call? One possible solution is to create a worker thread in the onInit() method which does the long initialization without blocking the call, but I am wondering: is there any "ROS pattern" for this job?
Originally posted by Tibor Takacs on ROS Answers with karma: 13 on 2018-02-19
Post score: 1
Answer:
onInit() is called on the nodelet manager's callback thread, so it will block all other nodelets if you are using a single-threaded nodelet manager.
If you need to do a long-running initialization you could use a separate loader thread, or if you have C++11 you could use some of the async or futures primitives that were introduced in C++11.
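A sketch of the "separate loader thread" idea using std::async (no real nodelet/ROS dependency here; the class only mimics the onInit() shape, and the sleep stands in for the slow database read):

```cpp
#include <atomic>
#include <chrono>
#include <future>
#include <thread>

class MyNodelet {
public:
    // onInit() returns quickly; the heavy initialization runs on a
    // background task so the manager's callback thread is not blocked.
    void onInit() {
        init_future_ = std::async(std::launch::async, [this] {
            std::this_thread::sleep_for(std::chrono::milliseconds(50));  // slow init stand-in
            ready_.store(true);
        });
    }
    bool isReady() const { return ready_.load(); }
    void waitForInit() { init_future_.wait(); }  // e.g. for tests or shutdown

private:
    std::future<void> init_future_;
    std::atomic<bool> ready_{false};
};
```

Callbacks that fire before isReady() returns true would need to be queued or dropped.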
Originally posted by ahendrix with karma: 47576 on 2018-02-19
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 30087,
"tags": "ros-kinetic, nodelet"
} |