anchor stringlengths 0 150 | positive stringlengths 0 96k | source dict |
|---|---|---|
If the human body consists of 60% water why can't we put out fire with our body? | Question: I have often heard of people being burned at the stake, but if the body is 60% water shouldn't the fire just be put out?
Answer: We need to consider something. Are the people being truly burnt at the stake or are they being killed by the effects of fire? If the fire is very prolonged then their bodies will dry out and then finally be calcined to ashes.
It is an interesting (but a rather morbid) fact that cremation furnaces typically run at about 800 degrees Celsius for many hours, yet they tend to be unable to burn bodies to dust. Typically after cremation, some large fragments remain. These have to be crushed.
Tales of spontaneous human combustion have been circulating for many years. I recall that the Victorians thought it was associated with extreme alcohol abuse: they believed a build-up of alcohol in the body could make the body catch fire. A modern-day explanation is that humans wear clothing, and the clothing can act as a wick.
A person dies as a result of something, and then their clothing comes into contact with an ignition source. The clothing starts to burn, which heats up the fat in their body. The fat melts and runs into the clothing. The clothing then acts like a candle wick: the burning fat makes more heat, which melts more fat, thus turning the person into a candle.
I think that this type of human candle effect is very rare, it is likely to require a combination of several things at once. Many people who are killed in serious fires (who suffer burns as well as being exposed to hot and toxic smoke) leave behind bodies which have not been burnt to ashes. | {
"domain": "chemistry.stackexchange",
"id": 10190,
"tags": "thermodynamics"
} |
Apparent frequency as function of distance | Question: So the Doppler effect says that the frequency of sound changes due to relative motion of source and observer. My question is whether there is any expression that tells how the apparent frequency changes in terms of the distance between the observer and the source.
I know we have
$$f'=f\,\frac{v\pm v_O}{v\mp v_s},$$
but this gives no expression for how the apparent frequency depends on distance.
Answer: To a pretty good approximation, the frequency of a sound wave does not change as it travels. In a plane wave solution, all points along the wave are oscillating up and down with the same period; and so the number of cycles per second is the same at the source as at the observer, no matter where the observer is located.
There are some minor effects (due to non-linearities in the medium) which can cause frequencies to change. However, these are usually negligible, and are usually glossed over in introductory physics classes.
EDIT: You commented that there should be a relation between distance and frequency because the pitch changes when it comes closer. But that's not quite true; the pitch changes while it is moving, which is not quite the same thing.
Suppose you have a speaker that is 10 m away and at rest, and it is emitting a 440 Hz tone. While the speaker is at rest, you hear a 440 Hz tone. If the speaker then moves towards you, you will hear a higher frequency while the speaker is moving. But if it stops 5 m away from you, the frequency you hear will return to 440 Hz.
In principle, if you know what the "shifted frequency" was, you could figure out how fast the speaker was moving; and if you measured the amount of time that you heard the shifted frequency, you could multiply the velocity by the amount of time to find the distance the speaker travelled. But that only tells you the displacement of the speaker during its motion, not its initial or final distance from you. | {
"domain": "physics.stackexchange",
"id": 55337,
"tags": "waves, acoustics, doppler-effect, distance"
} |
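The displacement calculation in the answer's last paragraph can be sketched in Python. This is an illustration under stated assumptions (a source approaching a stationary observer, speed of sound 343 m/s); the function names are my own.

```python
# Sketch of the answer's last paragraph: recover the speed of an approaching
# source from the shifted frequency, then multiply by the time the shift was
# heard to get the displacement (not the absolute distance).
# Assumed: stationary observer, speed of sound v = 343 m/s.

def source_speed(f_emitted, f_observed, v=343.0):
    """For a source approaching a stationary observer: f' = f * v / (v - v_s)."""
    return v * (1.0 - f_emitted / f_observed)

def displacement(f_emitted, f_observed, duration, v=343.0):
    """Distance the source travelled while the shifted tone was heard."""
    return source_speed(f_emitted, f_observed, v) * duration

# A 440 Hz speaker heard at 450 Hz for 2 seconds:
v_s = source_speed(440.0, 450.0)
d = displacement(440.0, 450.0, 2.0)
```

As the answer notes, `d` is only the displacement during the motion, not the initial or final distance to the speaker.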
Moments and Deviation (Motwani and Raghavan, 3.7) | Question:
Let $a$ and $b$ be chosen independently and uniformly at random from
$\mathbb{Z}_n=\{0,1,2,\ldots,n-1\}$, where $n$ is a prime. Suppose we
generate $t$ pseudo-random numbers from $\mathbb{Z}_n$ by choosing
$r_i=ai+b \mod{n}$, for $1\le i\le t$. For any $\epsilon\in[0,1]$,
show that there is a choice of the witness set $W\subset\mathbb{Z}_n$
such that $|W|\ge\epsilon n$ and the probability that none of the
$r_i$'s lie in the set $W$ is at least $(1-\epsilon)^2/4t$.
The above is Problem 3.7 from "Randomized Algorithms" by Motwani and Raghavan, Chapter 3 (Moments and Deviations). Some relevant facts from earlier in the chapter (Exercises 3.7 and 3.8) are that (1) each $r_i$ is distributed uniformly on $\mathbb{Z}_n$, (2) $r_i$ and $r_j$ are pairwise independent for $i\neq j$, and (3) for $X=\sum_{i=1}^m X_i$ with pairwise independent random variables $X_i$, the variance of $X$ is the sum of the variances of the $X_i$.
For a fixed set $W$, a simple lower bound on the probability that none of the $r_i$'s lie in $W$ is $1-t\epsilon$, by a union bound. Moreover, since the expected number of $r_i$'s in $W$ is $\epsilon t$ and the variance is $\epsilon(1-\epsilon)t$, I also see how to use Chebyshev's inequality to upper bound the probability that none of the $r_i$'s is contained in $W$, which would give $\frac{(1-\epsilon)}{\epsilon t}$.
Am I missing something obvious? Thanks in advance!
Answer: I think you're overthinking this. Construct the witness set $W=\{0, 1, ..., \epsilon n\}$ (or really any contiguous sequence of size $\epsilon n$).
Consider the events where $b$ was selected from $\{\epsilon n + 1, ..., \epsilon n + \frac{(1-\epsilon)n}{2}\}$, and $a$ was selected from $\{0, ..., \frac{(1-\epsilon)n}{2t}\}$. Then the range of $r_i$s is from $\epsilon n +1$ to $n-1$ (you might need a few floors here). So all the $\{r_i\}$ miss $W$.
This event happens with probability $\frac{1-\epsilon}{2}\cdot \frac{1-\epsilon}{2t}$, so the probability the $\{r_i\}$ miss $W$ is at least that (there could be other disjoint events where you miss $W$). You were instead calculating an upper bound with Chebyshev's inequality. | {
"domain": "cstheory.stackexchange",
"id": 5833,
"tags": "pr.probability, randomized-algorithms"
} |
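The construction in the answer is easy to check numerically. The following Monte Carlo sketch (my own, with small illustrative parameters) estimates the probability that all $r_i$ miss $W=\{0,\ldots,\lfloor\epsilon n\rfloor-1\}$ and compares it to the claimed bound $(1-\epsilon)^2/4t$:

```python
# Monte Carlo sanity check of the answer's construction (small prime n assumed).
# W = {0, ..., floor(eps*n) - 1}; claimed: P(all r_i miss W) >= (1-eps)^2 / (4t).
import random

def miss_probability(n, t, eps, trials=20000, seed=0):
    rng = random.Random(seed)
    w_size = int(eps * n)
    hits = 0
    for _ in range(trials):
        a = rng.randrange(n)
        b = rng.randrange(n)
        # r_i = a*i + b mod n for i = 1..t; "miss" means r_i >= |W| for all i
        if all((a * i + b) % n >= w_size for i in range(1, t + 1)):
            hits += 1
    return hits / trials

n, t, eps = 101, 2, 0.5
p = miss_probability(n, t, eps)
bound = (1 - eps) ** 2 / (4 * t)
```

The empirical probability comfortably exceeds the bound, as expected, since the bound counts only one family of $(a, b)$ choices that miss $W$.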
Classical Limit in Quantum Mechanics | Question: Suppose I have a wave function $\Psi$ (which is not an eigenfunction) and a time independent Hamiltonian $\hat{\mathcal{H}}$. Now, if I take the classical limit by letting $\hbar \to 0$, what will happen to the expectation value $\langle\Psi |\hat{\mathcal{H}}|\Psi\rangle$? Will it remain the same (as for $\hbar = 1.0$) or will it be different as $\hbar\to 0$? According to the correspondence principle this should equal the classical energy in the classical limit.
What do you think about this? Your answers will be highly appreciated.
Answer: The above posters seem to have missed the fact that $\Psi$ is not an eigenfunction, but an arbitrary wavefunction. The types of wavefunctions we normally see when we calculate things are usually expressed in terms of eigenfunctions of things like energy or momentum operators, and have little to do, if anything, with classical behaviour (e.g. look at the probability density of the energy eigenstates for the quantum harmonic oscillator and try to imagine it as describing a mass connected to a spring).
What you might want to do is construct coherent states which are states where position and momentum are treated democratically (uncertainty is shared equally between position and momentum).
Then, the quantum number that labels your state might be thought of as the level of excitation of the state. For the harmonic oscillator, this is roughly the magnitude of the amount of energy in the state, in that $E = \langle n \rangle \hbar\omega = |\alpha|^2 \hbar\omega$. If you naively take $\hbar \to 0$ then everything vanishes. But if you keep, say, the energy finite while taking $\hbar \to 0$, then you can recover meaningful, classical answers (that don't depend on $\alpha$ or $\hbar$). | {
"domain": "physics.stackexchange",
"id": 10188,
"tags": "quantum-mechanics, classical-mechanics, semiclassical"
} |
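The last paragraph of the answer can be made concrete for the harmonic oscillator. In a coherent state the oscillation amplitude is $\sqrt{2\hbar/(m\omega)}\,|\alpha|$; holding $E=|\alpha|^2\hbar\omega$ fixed while $\hbar\to 0$ leaves this amplitude at the classical value $\sqrt{2E/(m\omega^2)}$, independent of $\hbar$. A small numerical sketch (units and names are illustrative):

```python
import math

def coherent_amplitude(E, m, omega, hbar):
    """Oscillation amplitude of a coherent state with mean energy E = |alpha|^2 * hbar * omega."""
    alpha = math.sqrt(E / (hbar * omega))          # |alpha| grows as hbar shrinks
    return math.sqrt(2.0 * hbar / (m * omega)) * alpha

E, m, omega = 1.0, 1.0, 1.0
a1 = coherent_amplitude(E, m, omega, hbar=1.0)
a2 = coherent_amplitude(E, m, omega, hbar=1e-6)
classical = math.sqrt(2.0 * E / (m * omega**2))    # amplitude of a classical oscillator with energy E
```

Both amplitudes coincide with the classical one: the $\hbar$ factors cancel exactly once the energy is held fixed.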
Are SU($n$) operations enough for quantum computation? | Question: Usually we want a quantum computer that can perform all foreseeable unitary operations U($n$).
A quantum processor that can natively perform at least two rotation operators $R_k(\theta)=\exp(-i\theta\sigma_k/2)$, where $\sigma_k$ are the Pauli matrices, can generate any SU(2) rotation of the Bloch sphere. The usual Pauli operations (up to a global phase) can be generated by just choosing the right angle: $R_k(-\pi)=i\sigma_k$. By adding some controlled-$i\sigma_x$, I guess we could generate all of SU($n$). Would a quantum processor restricted to SU($n$) operations be limited in its power to simulate quantum systems and run other algorithms of interest?
Edit: note that controlled-SU(2) gates are still in SU(4).
Answer: If you want to be more precise about it, quantum (pure, ket) states are elements of complex projective spaces, $\mathbb{CP}^n$. This is the set of equivalence classes of elements of $\mathbb C^{n+1}$ modulo multiplication by complex scalars.
So "gates" should really be described as maps between such equivalence classes. This is the projective unitary group, ${\rm PU}(n)\simeq{\rm PSU}(n)$. You can think of this as the set of equivalence classes of unitaries modulo multiplication by a scalar phase. Note that when you consider transformations on elements of $\mathbb{CP}^n$, the difference between special and "regular" unitary matrices disappears, as reflected by the fact that ${\rm PU}(n)={\rm PSU}(n)$.
Of course, it is generally easier to work on regular linear spaces rather than on their projective counterparts, and simply remember to interpret the results of the calculations appropriately at the end. So in the more standard language, yes ${\rm SU}(n)$ operations are sufficient, though you don't need to only use gates in there, as all gates differing by only a complex phase factor represent the same physical operation.
Case in point, the Pauli $X$ gate is a common choice, despite it not being special, as $i X\in\mathrm{SU}(2)$. | {
"domain": "quantumcomputing.stackexchange",
"id": 2799,
"tags": "quantum-gate, unitarity, quantum-circuit, universal-gates"
} |
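The point about global phases in the answer above can be verified directly: the Pauli $X$ gate has determinant $-1$ (so it is not in SU(2)), while $iX$ has determinant $+1$, yet both give identical measurement probabilities. A quick sketch in plain Python:

```python
# X is not in SU(2) (det X = -1), but iX is (det iX = +1), and both act
# identically on measurement probabilities -- they differ only by a phase.
def matmulvec(M, v):
    return [M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1]]

def det2(M):
    return M[0][0]*M[1][1] - M[0][1]*M[1][0]

X  = [[0, 1], [1, 0]]
iX = [[0, 1j], [1j, 0]]

psi = [0.6, 0.8]                      # an arbitrary normalized state
p_X  = [abs(c)**2 for c in matmulvec(X, psi)]
p_iX = [abs(c)**2 for c in matmulvec(iX, psi)]
```

The two probability vectors agree element by element, which is exactly the ${\rm PU}(2)={\rm PSU}(2)$ statement in miniature.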
Difference between alfven waves and magnetic fluctuations in solar corona | Question: I have read in this paper https://arxiv.org/abs/1911.07973 about particles being accelerated in the solar corona either by Alfven waves (to my understanding when the turbulence is weak) or by magnetic fluctuations (strong turbulence). Could someone explain the difference, please?
Answer:
...particles being accelerated in the solar corona either by Alfven waves (to my understanding when the turbulence is weak) or by magnetic fluctuations (strong turbulence)...
I do not think you realize just how complicated this question is, but it is extremely complicated (not a criticism, just a clarification). The simplest explanation is that a wave has a well-defined relationship between frequency and wave number, called a dispersion relation, while the generic idea of turbulence does not. Note this is not entirely correct either, because nuances cause overlap between the two ideas (i.e., there is something called Alfvenic turbulence in plasmas, discussed below). Typically, waves are radiated by instabilities, while turbulence is driven by a cascade of energy from large to small scales. Again, this is a gross oversimplification, but it's about all one can say without being too misleading or overly verbose.
Could someone explain the difference, please?
Not without belaboring several detailed points, but here's a superficial explanation. There is what is called Alfvenic turbulence (a specific type of MHD turbulence) and there are also Alfven waves. The former is generically driven by large-scale motions of the plasma that cascade to smaller scales through various processes, eventually dissipating at small enough scales to couple directly to the plasma (called kinetic scales or the dissipation range). The latter is driven by some sort of kinetic instability due to a free-energy source (e.g., an ion/ion two-stream instability). The waves and turbulence can be similar and share properties, but the fundamental difference is their source. Further, the turbulent version tends to be characterized by a broad spectrum in wave number, whereas the wave form tends to be focused in wave number and frequency (again, a gross oversimplification). | {
"domain": "physics.stackexchange",
"id": 66107,
"tags": "waves, plasma-physics, sun, turbulence"
} |
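To make the "dispersion relation" point above concrete: the simplest MHD Alfvén wave obeys $\omega = k\,v_A$ with $v_A = B/\sqrt{\mu_0\rho}$. The sketch below uses illustrative round numbers for coronal conditions ($B \sim 10$ G, proton number density $\sim 10^{15}\,\mathrm{m^{-3}}$), which are assumptions of mine, not values from the linked paper:

```python
import math

MU0 = 4 * math.pi * 1e-7       # vacuum permeability, SI
M_P = 1.6726e-27               # proton mass, kg

def alfven_speed(B, n):
    """v_A = B / sqrt(mu0 * rho) for a proton plasma of number density n (SI units)."""
    rho = n * M_P
    return B / math.sqrt(MU0 * rho)

def alfven_frequency(k, B, n):
    """Dispersion relation of the (parallel-propagating) Alfven wave: omega = k * v_A."""
    return k * alfven_speed(B, n)

v_a = alfven_speed(1e-3, 1e15)     # ~ several hundred km/s for these assumed values
```

A broadband turbulent spectrum has no such one-to-one $\omega(k)$ mapping, which is the distinction the answer draws.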
Symbol table usage | Question: In which phases of the compiler is the symbol table used and updated?
According to me,
In lexical analysis, a new entry for each variable is created in the symbol table, so the symbol table is only updated.
In syntax and semantic analysis, the symbol table is just used to check information about attributes during parse-tree creation and type checking. So in both of these phases the symbol table is only used, not updated.
In intermediate code generation and code optimization, entries are added to the symbol table when temporary variables are created, and the symbol table is used to check the types of those temporaries. So the symbol table is both used and updated here.
I am not sure about Target code generation.
Can someone please tell me if I am correct?
Answer:
In which phases of the compiler is the symbol table used and updated?
The symbol table is used in all phases of a compiler.
Basically, a compiler operates in phases, which are grouped into a front end and a back end. The front end deals with the parts of compilation that are independent of the target platform and depend mainly on the language structure, while the back end does not depend on the structure of the language but does depend on the target platform.
Essentially, a symbol table is a data structure (usually a hash table) containing information about identifiers. Identifiers are detected and stored in the symbol table by the lexical analyzer. During the analysis phase (front end) the compiler collects various information about identifiers (scope, type, size, etc.) and stores it in the symbol table; later this information is used in various ways in the remaining phases, for example by the code generator and the optimizer. The symbol table is used and updated as necessary, and the details depend largely on what kind of compiler you are writing (target platform, architecture, structure of the language, etc.).
For example, initially the symbol table may store keywords. The syntax analyzer may store information about the types of identifiers, and the semantic analyzer may use this information to check the semantics of expressions. The code generator uses the types of identifiers and stores information about the storage assigned to them.
The use of symbol tables is discussed in Chapters 2 and 7 of the Dragon book. | {
"domain": "cs.stackexchange",
"id": 10271,
"tags": "compilers"
} |
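A minimal sketch of the kind of structure the answer describes: a stack of hash tables, one per scope, with lookup walking from the innermost scope outward. The class and field names are illustrative, not taken from any particular compiler:

```python
# A minimal scoped symbol table: one hash table per scope, pushed/popped as
# the compiler enters/leaves blocks. Attribute names (type, size) are
# illustrative examples of what an analyzer might record.
class SymbolTable:
    def __init__(self):
        self.scopes = [{}]            # stack of hash tables; global scope first

    def enter_scope(self):
        self.scopes.append({})

    def exit_scope(self):
        self.scopes.pop()

    def define(self, name, **attrs):  # e.g. type=..., size=...
        self.scopes[-1][name] = attrs

    def lookup(self, name):
        for scope in reversed(self.scopes):   # innermost scope wins
            if name in scope:
                return scope[name]
        return None

st = SymbolTable()
st.define("x", type="int", size=4)
st.enter_scope()
st.define("x", type="float", size=8)  # shadows the outer x
inner = st.lookup("x")
st.exit_scope()
outer = st.lookup("x")
```

The lexer would call `define`, later phases would call `lookup` and attach more attributes, matching the division of labor described above.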
What does differentiation of B-cell mean? | Question: Does differentiation of B lymphocytes mean the formation of plasma cells and memory cells by matured B lymphocytes?
Answer: Immature B lymphocytes mature into either plasma cells or memory cells. If a B cell is mature, that means it's already a memory or plasma cell. | {
"domain": "biology.stackexchange",
"id": 6079,
"tags": "immunology"
} |
Compatibility of .pcd files in ROS and standalone PCL | Question:
Hi,
This is not a question but I don't know where to post it, so please move the topic to the appropriate forum.
For the last few days I've had trouble displaying .pcd files in the pcl_visualization pcd_viewer that work properly with the standalone PCL's pcd_viewer, and vice-versa (mainly I wanted to process the .pcd files generated by RGBDSLAM).
I was unable to find a solution on the Web (just suggestions to install the unstable version of ROS) but now I've found one, so I post it here in case someone has the same problem.
SOLUTION: Create a ros package and add a source file with the following code (the output cloud is XYZ with no color):
#include <pcl/ros/conversions.h>
#include <pcl/io/pcd_io.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

int
main (int argc, char** argv)
{
  pcl::PointCloud<pcl::PointXYZ>::Ptr cloud (new pcl::PointCloud<pcl::PointXYZ>);

  if (pcl::io::loadPCDFile<pcl::PointXYZ> (argv[1], *cloud) == -1)
  {
    std::cout << "ERROR: couldn't read the pcd file!" << std::endl;
    return (-1);
  }

  pcl::io::savePCDFileASCII ("output_cloud.pcd", *cloud);
  std::cout << "Converted!" << std::endl;
  return (0);
}
rosmake it and execute it, passing the name of your .pcd file. You'll be able to open the output file correctly with both ROS and standalone PCL.
Originally posted by Bogdan Harasymowicz-Boggio on ROS Answers with karma: 21 on 2011-08-29
Post score: 2
Answer:
Solution posted in question update.
Originally posted by tfoote with karma: 58457 on 2011-12-11
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 6546,
"tags": "ros, pcl, pcd, pointcloud"
} |
Where does the radiation in space come from and can we observe it? | Question: I have recently been reading that space travel is strongly influenced by "space radiation" and how it poses a threat to human space exploration.
Does this radiation originate from stars like our Sun, or is it an omnipresent — let's just call it — "force" in space (such as cosmic noise) that doesn't have any specific source?
Also, can an amateur astronomer visualize this radiation in some way so as to be able to observe it?
Answer: Cosmic rays consist of both electromagnetic radiation (i.e. photons) of different frequencies (radio waves, IR, light, UV light, x-rays, gamma rays), as well as charged particles (protons, electrons, maybe even ions of light elements), and other stuff like neutrinos.
The vast majority of the radiation we encounter around Earth will be from the Sun, because it is so very close and basically a large radiating blob. Usually, with isotropic (equally in all directions) radiating sources, the radiation intensity falls off with the square of the distance. That means radiation diminishes very, very fast: go twice as far from the Sun, and you only get a fourth of the radiation.
The EM radiation from UV and up (X-rays and gamma rays) is probably the most harmful. The Earth's atmosphere shields us from these rays, but interplanetary travel will not have this benefit. X-rays and gamma rays may also come from supernovae and other stellar objects, which are far away, but these will probably be much too faint to have an effect on astronauts. They can, however, be picked up by sensitive specialized telescopes and satellites.
The charged particles may be a problem for spacecraft and the electronics on board, but they can probably be dampened by shielding in the spacecraft to protect the astronauts.
Neutrinos are, I think, of no concern, since they hardly interact with other matter.
As an amateur, you will have problems detecting UV and above, mainly because we are mostly shielded from this kind of radiation by the magnetosphere and the atmosphere.
You could detect particle radiation, by taking photos of northern lights, though... :) | {
"domain": "astronomy.stackexchange",
"id": 69,
"tags": "space, radiation"
} |
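The inverse-square fall-off mentioned in the answer is easy to quantify. A small sketch (the 1.52 AU figure for Mars's orbital distance is an assumed round number):

```python
# Inverse-square law: intensity relative to a reference distance r0.
# Doubling the distance quarters the intensity. Distances in AU.
def relative_intensity(r, r0=1.0):
    return (r0 / r) ** 2

at_mars = relative_intensity(1.52)    # fraction of Earth's solar intensity at ~Mars distance
at_2au  = relative_intensity(2.0)     # exactly one quarter
```

So a spacecraft near Mars's orbit would receive a bit over 40% of the solar radiation intensity we get at Earth, under these assumed figures.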
Running teleop in Safety Mode on Turtlebot | Question:
How can I use teleop in Safety Mode? Right now teleop switches the Create automatically to Full Mode.
(I would like to use Turtlebot for telepresence. It would be nice when the teleoperator couldn't drive the robot down the stairs, even when the connection drops or lags)
Originally posted by Robert Buzink on ROS Answers with karma: 41 on 2012-01-11
Post score: 1
Answer:
I have filed a ticket here: https://kforge.ros.org/turtlebot/trac/ticket/122 to add a parameter to change the default
Originally posted by mmwise with karma: 8372 on 2012-01-23
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Robert Buzink on 2012-02-12:
Thanks! Could you give me a rough estimate on where in the code I should look to change the default myself in the meantime? (Is it in turtlebot_key.cpp or somewhere in cmd_vel?)
Comment by mmwise on 2012-02-16:
you'll need to edit the turtlebot_node and turtlebot_driver code to get it to run in safe mode
"domain": "robotics.stackexchange",
"id": 7854,
"tags": "ros, turtlebot, teleop, keyboard-teleop, teleoperation"
} |
Pytorch doing a cross entropy loss when the predictions already have probabilities | Question: So, normally categorical cross-entropy could be applied using a cross-entropy loss function in PyTorch or by combining a log-softmax with the negative log-likelihood function, as follows:
m = nn.LogSoftmax(dim=1)
loss = nn.NLLLoss()
pred = torch.tensor([[-1,0,3,0,9,0,-7,0,5]], requires_grad=True, dtype=torch.float)
target = torch.tensor([4])
output = loss(m(pred), target)
print(output)
The thing is: what if the data at the output is already in a state with probabilities, i.e. the variable pred already holds the probabilities, with the data presented like the following:
pred = torch.tensor([[.25,0,0,0,.5,0,0,.25,0]], requires_grad=True, dtype=torch.float)
How could the cross-entropy then be completed in PyTorch?
Answer: You can implement categorical cross entropy pretty easily yourself. It is calculated as
$$
\text{cross-entropy} = -\frac{1}{n} \sum_{i=1}^{n} \sum_{j=1}^{m} \mathbf{y}_{ij} \log \hat{\mathbf{y}}_{ij}
$$
where $n$ is the number of samples in your batch, $m$ is the number of classes, $\mathbf{y}_i$ is the one-hot target for example $i$, $\mathbf{\hat{y}}_i$ is the predicted probability distribution, and $\mathbf{y}_{ij}$ refers to the $j$-th element of this array.
In PyTorch:
def categorical_cross_entropy(y_pred, y_true):
    y_pred = torch.clamp(y_pred, 1e-9, 1 - 1e-9)
    return -(y_true * torch.log(y_pred)).sum(dim=1).mean()
You can then use categorical_cross_entropy just as you would NLLLoss in the training of a model. The reason that we have the torch.clamp line is to ensure that we have no zero elements, which will cause torch.log to produce nan or inf.
One difference you'll have to make in your code is that this version expects a one-hot target rather than an integer target. You can easily convert your current target list like so:
one_hot_targets = torch.eye(NUM_CLASSES)[targets]
where targets is a torch.tensor with integer values and NUM_CLASSES is the number of output classes that you have. | {
"domain": "datascience.stackexchange",
"id": 5650,
"tags": "neural-network, loss-function, probability, pytorch, softmax"
} |
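The equivalence between the two routes in this Q&A can be checked without PyTorch: cross-entropy computed from a probability vector (the answer's clamped-log formulation) agrees with log-softmax plus negative log-likelihood applied to the logits that produced it. A pure-Python sketch with my own helper names:

```python
import math

def softmax(logits):
    m = max(logits)                                    # shift for numerical stability
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def ce_from_probs(probs, target, eps=1e-9):
    # the answer's clamped-log formulation, one sample, integer target
    p = min(max(probs[target], eps), 1 - eps)
    return -math.log(p)

def ce_from_logits(logits, target):
    # log-softmax + negative log-likelihood, as in the question's first snippet
    m = max(logits)
    log_z = m + math.log(sum(math.exp(x - m) for x in logits))
    return -(logits[target] - log_z)

logits = [-1, 0, 3, 0, 9, 0, -7, 0, 5]
target = 4
loss_a = ce_from_logits(logits, target)
loss_b = ce_from_probs(softmax(logits), target)
```

The two losses match, confirming that when the model already outputs probabilities you simply skip the softmax and take the (clamped) log directly.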
Frequency response of numerical derivative | Question: The analytical derivative of a function is equivalent to multiplying its transform by $s$ in the Laplace domain. Numerical derivatives are limited in bandwidth by the finite sampling rate, so they are not equivalent to multiplying the signal's transform by $s$. At higher frequencies one would expect attenuation of the numerically differentiated signal relative to one computed analytically. Recently, I found that there are some differences at the low-frequency limit as well which I cannot explain.
Attached is a plot of a signal sampled from a normal distribution (blue) and its first derivative in time (red). As expected, at high frequencies the derivative signal begins to attenuate. But why does it not cross $\omega = 1$ rad/s (0.16 Hz) as would be the case if the solution were obtained analytically?
Here's the code I am running in MATLAB
sr = 100000;
y = randn(1,sr);
dydt = y;
for i = 2:length(y)-1
dydt(i) = (y(i+1)-y(i-1))*sr*2;
end
hold on, plot(abs(fft(y)));
plot(abs(fft(dydt)));
set(gca, 'YScale', 'log')
set(gca, 'XScale', 'log')
Answer: Even with @MattL.'s fix you are discarding typically non-zero parts of the discrete-time derivative by not including its first and last sample, which destroys its autocorrelation properties near the end points, typically resulting in the low-frequency plateau in the frequency spectrum as you have observed. We can add a bit of a zero-valued safety buffer at the start and at the end of the signal to ensure that what we are discarding will be zero-valued:
sr = 100000;
y = randn(1,sr);
y(1) = 0;
y(2) = 0;
y(end-1) = 0;
y(end) = 0;
dydt = zeros(1,sr);
for i = 2:length(y)-1
dydt(i) = (y(i+1)-y(i-1))*sr*2;
end
hold on, plot(abs(fft(y)));
plot(abs(fft(dydt)));
set(gca, 'YScale', 'log')
set(gca, 'XScale', 'log')
The result is as desired:
Figure 1. Result using safety buffers at the start and at the end of the signal.
Another way is to treat the signal as periodic and to wrap around the subscripts:
sr = 100000;
y = randn(1,sr);
dydt = zeros(1,sr);
dydt(1) = (y(1+1)-y(end))*sr*2;
for i = 2:length(y)-1
dydt(i) = (y(i+1)-y(i-1))*sr*2;
end
dydt(end) = (y(1)-y(end-1))*sr*2;
hold on, plot(abs(fft(y)));
plot(abs(fft(dydt)));
set(gca, 'YScale', 'log')
set(gca, 'XScale', 'log')
This will also give the desired result:
Figure 2. Results when treating the signal as periodic. | {
"domain": "dsp.stackexchange",
"id": 7913,
"tags": "frequency-spectrum, derivative"
} |
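The high-frequency attenuation discussed in the question has a closed form: a central difference $(y[n{+}1]-y[n{-}1])\cdot \mathrm{sr}/2$ has magnitude response $\mathrm{sr}\cdot|\sin(\omega/\mathrm{sr})|$, which matches the ideal $|\omega|$ at low frequencies and falls to zero at Nyquist. (Note the code above scales by $\mathrm{sr}\cdot 2$ rather than the conventional $\mathrm{sr}/2$; on a log plot that only shifts the whole derivative curve vertically.) A small numerical check of the closed form:

```python
import math

def central_diff_gain(omega, sr):
    """|H(j*omega)| of dydt[n] = (y[n+1] - y[n-1]) * sr / 2.
    H(e^{jW}) = (sr/2)(e^{jW} - e^{-jW}) = j*sr*sin(W), with W = omega/sr."""
    return sr * abs(math.sin(omega / sr))

sr = 100000.0
low = central_diff_gain(1.0, sr)                  # ~ ideal gain |omega| = 1 at low frequency
mid = central_diff_gain(math.pi * sr / 2, sr)     # well below the ideal pi*sr/2 near Nyquist
```

At $\omega = 1$ rad/s the gain is essentially 1, so the analytic crossing survives discretization; the deviation the question observed comes from the scaling factor and the end-point effects the answer fixes.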
Trying to understand the proof of the halting problem presented in Sipser textbook | Question: I'm having some problems to understand the classic proof of the halting problem.
The Proof:
$A_{tm} = \{<M,w> \mid M$ is a $TM$ and $M$ accepts $w\}$.
We assume that $A_{tm}$ is decidable and obtain a contradiction.
I have no problem imagining that. It's a machine that accepts some string $w$. But it must not accept other strings. And if it's decidable, it always halts.
Suppose that $H$ is a decider for $A_{tm}$. On input $<M, w>$, where $M$ is a $TM$ and $w$ is a string, $H$ halts and accepts if $M$ accepts $w$. Furthermore, $H$ halts and rejects if $M$ fails to accept $w$. In other words, we assume that $H$ is a $TM$, where
\begin{equation}
H(<M, w>)=\begin{cases}
accept &\text{if $ M$ accepts $w$}.\\
reject & \text{if $ M$ does not accept $w$}.
\end{cases}
\end{equation}
Here I have the impression that $H$ is doing the same thing that $A_{tm}$ does.
But if it's said that $H$ is a decider, I can assume that $H$ has some magic power to discover whether $A_{tm}$ will halt on input $w$.
Now we construct a new Turing machine $D$ with $H$ as a subroutine. This new TM calls $H$ to determine what $M$ does when the input to $M$ is its own description $<M>$. Once $D$ has determined this information, it does the opposite. That is, it rejects if $M$ accepts and accepts if $M$ does not accept. The following is a description of $D$:
No problem here.
\begin{equation}
D=\begin{cases}
1. &\text{Run $H$ on input <M, <M>>}.\\
2. & \text{Output the opposite of what $H$ outputs; that is, if $H$ accepts, reject, and if $H$ rejects, accept}.
\end{cases}
\end{equation}
Here is where I find it hard to understand. The input string of $H$ is $<M, w>$; how could it run something like $<M, <M>>$?
If it were only $<M>$, I could imagine that $w$ is the empty string.
I understand the halting problem with the following code:
function halts(func) {
  // Insert code here that returns "true" if "func" halts and "false" otherwise.
}

function deceiver() {
  if (halts(deceiver))
    while (true) { }
}
Answer: The first misconception is that $A_{TM}$ is a Turing Machine. $A_{TM}$ isn't a machine, it's a language; $H$ is the machine that decides the language $A_{TM}$. So you give $H$ a string that consists of two parts: a string $M$ that describes a Turing machine, and another string $w$, which can be anything.
This leads us to the second part, $w$ can be any string - it's the theoretical input to $M$ that you want to know whether $M$ would accept or not. So $H$ should accept if $M$ would accept $w$, and reject otherwise (i.e. when $M$ doesn't accept, which could be by rejecting or never halting). $\langle M \rangle$ is just a string, it happens to coincidentally be a description of a Turing Machine, but it is still just a string. So there's no reason we can't ask "does $M$ accept the string which is its own description?" | {
"domain": "cs.stackexchange",
"id": 21340,
"tags": "computability, halting-problem"
} |
Do hot objects moving at relativistic speeds slow down as they emit radiation? | Question: In an astrophysics class I learned about the Poynting-Robertson effect, by which grains of dust orbiting a star slow down and eventually fall into the star. Every source that I have been able to find on this subject explains it by saying that in the star's reference frame the dust mote emits more light in the direction it is moving due to relativistic beaming. In the dust mote's reference frame it emits radiation isotropically, but due to relativistic aberration it absorbs slightly more radiation on the front than the back. I understand this explanation, but it seems a bit unsatisfying.
What about a hot object that is not orbiting a star? If I heated up a cannon ball and launched it out into deep space at a significant fraction of the speed of light, wouldn't it slow down over time due to relativistic beaming? How does this look in the ball's frame of reference if it isn't absorbing radiation from a star?
If there were nothing else but myself and the cannonball in space and I launch the cannonball directly away from myself, what would this look like?
In the ball's frame, I would appear to accelerate away as I absorb radiation emitted by the ball, but otherwise it emits isotropically. The ball and I appear to be accelerating away from each other in this frame.
In my frame, I would expect to see redshifted blackbody radiation from the cannonball. If it is slowing down, the radiation should become bluer over time. I am also absorbing the momentum of that radiation so maybe it would just stay the same color. If I measured its temperature before launching it, I can infer that it must be emitting blueshifted blackbody radiation on the other side, slowing it down. It is not clear to me whether the cannonball is slowing down or accelerating away in this frame.
I must be missing something here, can anyone help me out?
Answer: In the case of your cannonball in deep space, you need a reference frame to work with as well as one for the cannonball. So, if the cannonball is all by itself, just radiating heat, then in its own frame it's radiating isotropically and will travel in a straight line, and not decelerate. (Remember that even if the radiation had a substantial momentum it's radiating in all directions so the net force on it is zero).
If the cannonball is absorbing radiation from a source the situation is different. You have a source -- say a star -- and radiation coming in and hitting it which is absorbed and re-emitted according to blackbody/ thermodynamics laws. Now you can think about the frame of the star and see that the cannonball would slow down, in accordance with the Poynting Robertson effect, though it would be very small as anything bigger than a dust grain doesn't experience it. | {
"domain": "physics.stackexchange",
"id": 44975,
"tags": "special-relativity"
} |
Can I use Sunflower oil to make soap? | Question: When reading about making glycerine soap, the recipes suggested to melt animal fat and mix it with $\ce{NaOH}$.
Could I use sunflower oil instead? Or olive oil for a nice flavor? Is heating necessary in that case?
Answer: You can use any triglyceride, but the feel, the firmness, and the cleaning effectiveness of the resulting soap will be different. Soaps can also go rancid, just like their corresponding fat/oil. DIY soapmakers generally mix oils or use specifically selected ones. Coconut oil, castor oil, olive oil, and animal fats are generally popular.
Without the intent to advertise any specific site:
http://www.lovinsoap.com/oils-chart/
Olive oil is actually a pretty common component, and you can buy olive oil based soaps even in shops. Flavor? If you do it well, most probably not much remains.
Heating is generally necessary: saponification is a slow reaction, and most home-made soap is not fully reacted (containing a lot of NaOH).
Note on safety:
Many people make soap at home because it is "more natural/organic", just following some YouTube videos without knowing much about lab work or chemistry. Using your own blender, working in your kitchen, or using your own kitchenware is a very bad idea. NaOH can make you blind, so do not ignore the "wear goggles!" warnings. Also, just mixing everything up into a highly corrosive mixture that may or may not contain remains of NaOH, or that may produce all kinds of unknown byproducts, and then putting it on your skin may not be the healthiest choice.
"domain": "chemistry.stackexchange",
"id": 1736,
"tags": "home-experiment, fats"
} |
Conway's Game of Life without objects | Question: I was wondering if anyone could provide some feedback on my solution to Conway's game of life. I tried to solve it without using objects. It returns a dictionary of living points on a grid and points that were alive at some point. I'm wondering if there is a better way to check the surrounding points or if storing the active points in a dictionary is a bad idea.
def list_to_check(x,y):
return [(x + 1, y),
(x - 1, y),
(x +1, y +1 ),
(x -1, y -1 ),
(x +1, y -1 ),
(x -1, y +1 ),
(x, y+1),
(x, y-1)]
def neighbors_count(x,y,cells):
count = 0
for item in list_to_check(x,y):
if item in cells:
count += cells[item]
return count
def next_state(cells):
next_state = {}
for point in cells.keys():
alive = cells[point] == 1
neighbors = neighbors_count(point[0],point[1],cells)
if not alive and neighbors == 3:
next_state[point] = 1
elif neighbors > 3 or neighbors < 2:
next_state[point] = 0
return next_state
def add_to_grid_alive(cells):
to_check = []
to_add = {}
for item in cells.keys():
checking = [ x for x in list_to_check(item[0],item[1]) if x not in to_check and x not in cells ]
to_check += checking
for item in to_check:
if neighbors_count(item[0],item[1],cells) == 3:
to_add[(item[0],item[1])] = 1
return to_add
def tick(cells):
changes = next_state(cells)
expansions = add_to_grid_alive(cells)
cells.update(changes)
cells.update(expansions)
[ cells.pop(point,None) for point in cells.keys() if cells[point] == 0 ]
def print_grid( size, cells):
grid = []
for y in range(size):
grid.append([])
for x in range(size):
grid[y].append(0)
for point in cells.keys():
if 0 <= point[0] < size and 0 <= point[1] < size:
grid[point[1]][point[0]] = cells[point]
return grid
#testing
def load_list(inputs):
point_dict = {}
for point in inputs:
point_dict[point] = 1
return point_dict
inputs = [(1,2),(2,2),(3,2),(2,3),(3,3),(4,3),(13,2),(13,3),(12,3),(13,4)]
test_array_dict = load_list(inputs)
for x in range(20):
print '>>>>>>>>>>>>>>>>>>>>', x + 1
for item in print_grid(20,test_array_dict)[::-1]:
print item
tick(test_array_dict)
Update
I added [ cells.pop(point,None) for point in cells.keys() if cells[point] == 0 ] to the tick function and it really sped up the code on higher iterations.
Answer: You're representing the cells mainly using a dictionary of coordinates, converting to a two-dimensional array only for the purposes of printing the output. That is a good representation for sparse boards, but not so much for crowded boards.
The algorithm that you use, though, is cumbersome, particularly illustrated by these few lines:
for item in cells.keys():
checking = [ x for x in list_to_check(item[0],item[1]) if x not in to_check and x not in cells ]
to_check += checking
In other words, make a list of the neighbors of each cell, but make sure that each coordinate is listed just once. Then, once you have that list to_check, what do you do with it?
for item in to_check:
if neighbors_count(item[0],item[1],cells) == 3:
to_add[(item[0],item[1])] = 1
You count each cell's neighbors! But the number of neighbors is precisely the amount of overlapped processing that would have occurred had you not bothered to deduplicate the to_check list to begin with! So, instead of drawing a list of interesting cells and counting their neighbors, why not just have each existing cell increment the neighbor count of each neighboring coordinate?
In addition to the change in algorithm, I also recommend:
Take advantage of list/set/dict comprehensions more.
Represent the board as a set of live cells, rather than a dict.
Rename list_to_check() to neighbors().
Rename print_grid() to grid(), since it actually doesn't print anything.
Construct the next state from scratch rather than by mutation: test_array_dict = tick(test_array_dict)
I'd rewrite three functions as follows (and eliminate neighbors_count(), next_state(), and add_to_grid_alive()):
def tick(live_cells):
""" Takes a set of coordinates of live cells, and returns a set of
coordinates of the live cells in the next generation. """
neighbor_count = {}
for xy in live_cells:
for neighbor in neighbors(*xy):
neighbor_count[neighbor] = 1 + neighbor_count.get(neighbor, 0)
return set(xy for xy in neighbor_count
if neighbor_count[xy] == 3
or xy in live_cells and neighbor_count[xy] == 2)
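For comparison, the same counting idea can be sketched with collections.Counter (a neighbors() definition is included here so the snippet stands alone):

```python
from collections import Counter

def neighbors(x, y):
    return [(x + dx, y + dy)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)]

def tick(live_cells):
    """Tally each coordinate's live neighbors, then apply the rules."""
    counts = Counter(n for xy in live_cells for n in neighbors(*xy))
    return {xy for xy, c in counts.items()
            if c == 3 or (c == 2 and xy in live_cells)}

blinker = {(0, 1), (1, 1), (2, 1)}
print(sorted(tick(blinker)))  # [(1, 0), (1, 1), (1, 2)] -- the blinker flips
```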
def grid(size, cells):
return [[int((x, y) in cells) for x in range(size)] for y in range(size)]
def load_list(inputs):
return set(inputs)
Actually, tick() would be even better written using a collections.Counter. I don't know if you consider that "using an object". | {
"domain": "codereview.stackexchange",
"id": 10653,
"tags": "python, beginner, game-of-life"
} |
iRobot Create 2: Angle Measurement | Question: I have been working on trying to get the angle of the Create 2. I am trying to use this angle as a heading, which I will eventually use to control the robot. I will explain my procedure to highlight my problem.
I have the Create tethered to my computer.
I reset the Create by sending Op code [7] using RealTerm.
The output is:
bl-start
STR730
bootloader id: #x47175347 4C636FFF
bootloader info rev: #xF000
bootloader rev: #x0001
2007-05-14-1715-L
Roomba by iRobot!
str730
2012-03-22-1549-L
battery-current-zero 252
(The firmware version is somewhere in here, but I have no clue what to look for--let me know if you see it!)
I mark the robot so that I will know what the true angle change has been.
I then send the following codes [128 131 145 0x00 0x0B 0xFF 0xF5 142 6]. This code starts the robot spinning slowly in a circle and request the sensor data from the sensors in the group with Packet ID 2. The output from the Create seen in RealTerm is 0x000000000000, which makes sense.
I wait until the robot has rotated a known 360 degrees, then I send [142 2] to request the angle difference. The output is now 0x00000000005B.
The OI specs say that the angle measurement is in degrees turned since the last time the angle was sent; converting 0x5B to decimal is 91, which is certainly not 360 as expected.
What am I doing wrong here? Is the iRobot Create 2 angle measurement that atrocious, or is there some scaling factor that I am unaware of? Are there any better ways to get an angle measurement?
Answer: There is a known bug in the angle command. We are still working on a workaround. In the meantime, please extract the angle yourself by using the left and right encoder counts. See this question for detailed equations. | {
"domain": "robotics.stackexchange",
"id": 1032,
"tags": "irobot-create, roomba"
} |
Complex Coordinate change | Question: I have a simple question where I must change the coordinates of a system however I am unsure whether I am correct. I am changing from Cartisian to complex coordinates. Let's say I only have $x$ and $y$ coordinates. Would that mean $$ x =iy - z $$ and $$ y = \frac{z - x}{i} $$
With the time derivatives being $$\dot{x} = i\dot{y} - \dot{z} $$ and likewise for y.
I know this may be a stupid question but it is bugging me and I cannot seem to find any documentation to help.
Note: I cannot use polar complex coordinates.
Answer: Writing $x=iy-z, y=(z-x)/i$ doesn't help you very much because your goal is to introduce new coordinates and then write $(x,y)$ in terms of these. It's nice to start by writing the map in the other direction, i.e.
$$z=x+iy$$
The complex conjugate is then $\bar{z}=x-iy$. These can be inverted to write $(x,y)$ in terms of $(z,\bar{z})$ as
$$ x = \frac{1}{2}(z+\bar{z}), \qquad y = -\frac{i}{2}(z - \bar{z}) $$ | {
"domain": "physics.stackexchange",
"id": 28012,
"tags": "homework-and-exercises, classical-mechanics, lagrangian-formalism, complex-numbers"
} |
Two Independent Harmonic Oscillators is NOT Ergodic! | Question: I read on a book that the system of two or more independent harmonic oscillators in classical mechanics is not ergodic. I want to know why a harmonic oscillator is actually ergodic but two or more ones is not. Is it related to this fact that the phase space of two independent harmonic oscillators is the product $M \times M$ and each one's ergodicity does not force the whole ergodicity?
Answer: The orbits of the harmonic oscillator in 1D are closed curves in the phase space - and the key here is that these curves coincide with the (1D) energy surfaces $S$ of the system, which means that the energy surface is trivially fully explored by a trajectory (so the system may be ergodic).
In more dimensions, i.e., for two or more independent harmonic oscillators, the conserved quantities are the energies of the individual normal modes, defining a surface that doesn't coincide with $S$, which is the surface of total constant energy of the system. In other words, the system's conserved quantities are not constant in $S$, which therefore cannot be fully explored, which means the system cannot be ergodic.
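To make this concrete for two identical oscillators (unit mass and frequency, chosen for simplicity), the two conserved quantities are
$$E_1 = \tfrac{1}{2}\left(p_1^2 + q_1^2\right), \qquad E_2 = \tfrac{1}{2}\left(p_2^2 + q_2^2\right),$$
so a trajectory is confined to the 2-torus $\{E_1 = c,\ E_2 = E - c\}$, a proper subset of the 3-dimensional energy surface $S = \{E_1 + E_2 = E\}$, which therefore can never be fully explored.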
This is explained in detail in a Physics Today article by Lebowitz & Penrose: Modern ergodic theory (e-print).
Also relevant is the question Are there necessary and sufficient conditions for ergodicity?. | {
"domain": "physics.stackexchange",
"id": 47331,
"tags": "statistical-mechanics, harmonic-oscillator, phase-space, ergodicity"
} |
Making bigram features from a particular dataset | Question: I have a folder which has a number of files which have a format like these
madvise
write
write
write
write
read
read
madvise
ioctl
ioctl
getuid
epoll_pwait
read
recvfrom
sendto
getuid
epoll_pwait
That is, it is a set of words which repeat. This is what all the files look like.
Now I have created a feature vector table using unigrams; that is, each word becomes a feature and each file becomes a row where I put the frequency of that word occurring in the respective columns.
Now I want to create a similar FVT using bigrams. I was wondering how to do that in this case.
Answer: Bigrams are better used with sentences. In your case, as I understand it, the files contain a list of words. Therefore using bigrams in your project might not yield the expected results. However, if you are still willing to do that, this is how you calculate bigrams:
Take the list of words and count the frequencies of adjacent words.
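In Python, this pairing-and-counting can be sketched with zip and collections.Counter (using the word list from the question):

```python
from collections import Counter

words = ["madvise", "write", "write", "write", "write", "read", "read",
         "madvise", "ioctl", "ioctl", "getuid", "epoll_pwait", "read",
         "recvfrom", "sendto", "getuid", "epoll_pwait"]

# Pair each word with its successor, then count the pairs.
bigrams = Counter(zip(words, words[1:]))
print(bigrams[("write", "write")])  # 3
```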
Ex:
(madvise, write) - 1
(write, write) - 3
(write, read) - 1
(read, read) - 1
(read, madvise) - 1
.
.
.
.
(sendto, epoll_pwait) - 1 | {
"domain": "datascience.stackexchange",
"id": 3203,
"tags": "machine-learning, dataset, data, feature-construction"
} |
caching decorator | Question: I came up with a caching decorator for pure functions. Is it ok? Could it be better/simpler/faster?
def cached(f):
def _(*args):
if args in _._cache:
return _._cache[args]
else:
result = f(*args)
_._cache[args] = result
return result
_._cache = {}
return _
Answer: What you are trying to achieve is called memoization and has already a number of recipe available on the Python wiki.
Your implementation matches the one using nested functions:
import functools

# note that this decorator ignores **kwargs
def memoize(obj):
cache = obj.cache = {}
@functools.wraps(obj)
def memoizer(*args, **kwargs):
if args not in cache:
cache[args] = obj(*args, **kwargs)
return cache[args]
return memoizer
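As an aside, on recent Python versions the standard library already ships a memoizing decorator, functools.lru_cache, so you may not need to roll your own at all; a quick sketch:

```python
import functools

@functools.lru_cache(maxsize=None)  # unbounded cache, like the recipe above
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(30))  # 832040 -- each fib(k) is computed only once
```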
A few things to note here on top of yours:
usage of a local variable which resolves faster than an attribute lookup
usage of functools.wraps which keeps some properties of the original function intact (name and docstring, mainly)
explicit names for variables | {
"domain": "codereview.stackexchange",
"id": 18443,
"tags": "python, meta-programming, memoization"
} |
Wedding Photography Cost Calculator | Question: This is my first code. I would really appreciate a critique for it, if anyone can spare the time.
def wedcost(hours):
print " "
try:
hours = int(hours)
except ValueError:
print "Invalid input"
new_hours = raw_input("Please use only numeric characters. ")
hours = int(new_hours)
print " "
tcost = hours * 75 # $75/hr for two shooters
if hours < 4:
tcost = 250 # price starts at a fixed $250 from 1 to 3 hours
if hours >= 6:
tcost = hours * 100 # I don't like long shifts
print "The total cost of our services will be $" + str(tcost) + "."
print " "
def recalc(answer): # this second function is where I had the most trouble
outcome = 0 # only method I could get to work properly
if answer.startswith('y' or 'Y'):
print " "
outcome += 1
wedcost(raw_input("How long will we be on location? "))
if answer.startswith('n' or 'N'):
print " "
print "Have a nice day :)"
outcome += 1
if outcome == 0:
print "Invalid input"
recalc(raw_input("Would you like to recalculate? ('Yes' or 'No') "))
# I had previously tried checking 'answer' to items in lists
recalc(raw_input("Would you like to recalculate? ('Yes' or 'No') "))
wedcost(raw_input("How long will we be on location? "))
I'm sure that there's a better way to go about this task. Any suggestions or hints?
Answer: Well, this code has a few problems:
Typing non-numbers twice in a row causes a ValueError.
It is not quite PEP-8 compliant (the official python style guide)
No docstrings
The final bit of code should be wrapped in an if __name__ == "__main__": block to add reusability.
It is not very reusable
Now let's fix all that:
Wrap your input section in its own functions like this:
def get_num(prompt, errorMsg = "I'm sorry, that's not a valid number"):
"""Returns a number from the user"""
while True:
try:
input = int(raw_input(prompt))
except ValueError:
print errorMsg
else:
return input
def get_yes_no(prompt, errorMsg = "I'm sorry, I didn't catch that. Could you try again?"):
"""Returns True for yes, False for no"""
while True:
input = raw_input(prompt)
        if input.startswith(('y', 'Y')):
            return True
        elif input.startswith(('n', 'N')):
            return False
print errorMsg
This allows you to have more robust and more readable code
Naming: function names should be lowercase and separated by _. So wedcost(hours) becomes wed_cost(hours).
Also your function doesn't do what is advertised, since it keeps on recalculating.
Let's wrap that into a main function like this:
def main():
wed_cost(get_num("How long will we be on location? "))
while get_yes_no("Would you like to recalculate? ('Yes' or 'No') "):
wed_cost(get_num("How long will we be on location? "))
print "Have a nice day :)"
Then we can add this little bit of code to the bottom:
if __name__ == "__main__":
main()
Which will run your code if it is directly called, but also allows you to reuse this code elsewhere with an import without causing any code to be run.
Finally let's put all your magic numbers (such as 75, 100 etc) somewhere more recyclable by defining some constants at the top of the code:
MIN_FEE = 250
HOURLY_FEE = 75
OVERTIME_HOURS = 6
OVERTIME_FEE = 100
Then the wed_cost function becomes:
def wed_cost(hours):
    """Prints a message with the cost of a wedding shoot given a number of hours"""
    if hours >= OVERTIME_HOURS:
        totalCost = hours * OVERTIME_FEE
    else:
        totalCost = max(hours * HOURLY_FEE, MIN_FEE)
    print "The total cost of our services will be ${}.".format(totalCost)
This makes your code more understandable, and more debuggable :) | {
"domain": "codereview.stackexchange",
"id": 8855,
"tags": "python, beginner, python-2.x"
} |
MsgSrvDoc links 404 on wiki: how to correct? | Question:
Hi,
I've noticed in the wiki page of move_base that the links to the message documentation are broken (404).
They lead to http://docs.ros.org/en/move_base_msgs/html/action/MoveBase.html
They should lead to http://docs.ros.org/en/noetic/api/move_base_msgs/html/action/MoveBase.html
I tried to correct it on the wiki but it seems they are auto-generated.
<<MsgSrvDoc(move_base_msgs)>>
## AUTOGENERATED DON'T DELETE
## CategoryPackage
## CategoryPackageROSPKG
Does anyone know how to correct it, or how to notify the right person who can? It seems this issue is global, affecting every package and every distro.
Originally posted by simchanu on ROS Answers with karma: 20 on 2020-10-19
Post score: 0
Original comments
Comment by gvdhoorn on 2020-10-19:
Looks like this is still fall-out from Migration of docs.ros.org content into /en prefix.
This is not limited to move_base.
I noticed the same when posting on Discourse, but assumed it would clear up after everything had been rebuilt or sorted out. Perhaps that's not the case.
Perhaps @tfoote can comment here.
Answer:
Thanks for noticing this; it was a missed URL element in the migration.
It's fixed in https://github.com/ros-infrastructure/roswiki/pull/321 and should be live on the site.
Originally posted by tfoote with karma: 58457 on 2020-10-20
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by simchanu on 2020-10-21:
To answer the question completely, I assume the best way to handle this kind of issue is contacting you?
Comment by tfoote on 2020-10-21:
If there's a systematic issue like that on the wiki please file a ticket at: https://github.com/ros-infrastructure/roswiki/issues | {
"domain": "robotics.stackexchange",
"id": 35653,
"tags": "ros, navigation, move-base, documentation"
} |
Are relative phase Toffoli gates universal for reversible circuits? | Question: Let us define a new three qubit gate as:
$$\begin{bmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0
\end{bmatrix}. $$
This gate almost coincides with the Toffoli gate, except that it has a minus sign for the state $|101\rangle$. This gate has the advantage that it can be built using just 3 CNOTs (instead of 6) plus single qubit rotations. I cannot figure out whether this gate is also universal for reversible circuits, like the Toffoli. Could I get a Toffoli gate from this gate?
Answer: You can decompose the Toffoli into this gate if you have an ancilla qubit.
First, note that the gate is equivalent to $CCX \cdot \overline{C}CZ$:
Note that the order of the $CCX$ and the $\overline{C}CZ$ doesn't matter, since they disagree on a control.
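This decomposition is easy to check numerically; a small pure-Python sketch (basis index is $b_0 b_1 b_2$ with the target qubit last, so $|101\rangle$ is index 5):

```python
def eye():
    return [[float(i == j) for j in range(8)] for i in range(8)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(8)) for j in range(8)]
            for i in range(8)]

G = eye()                        # the gate from the question:
G[5][5] = -1.0                   # -1 phase on |101>
G[6], G[7] = G[7], G[6]          # Toffoli-style swap of |110> and |111>

CCX = eye()
CCX[6], CCX[7] = CCX[7], CCX[6]

CbarCZ = eye()
CbarCZ[5][5] = -1.0              # phase fires only on the pattern 1,0,1

assert matmul(CCX, CbarCZ) == G
assert matmul(CCX, CbarCZ) == matmul(CbarCZ, CCX)  # order doesn't matter
print("decomposition verified")
```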
Knowing this decomposition, and using a $|0\rangle$ ancilla, it's not too hard to find a way to ensure the phase operations cancel out or have their controls unsatisfied:
This ancilla may bother you. However, note that in Aaronson et al.'s classification of reversible gates, they did allow ancilla qubits in $|0\rangle$ and $|1\rangle$ as long as they were restored to their original state by the end of the circuit.
It is possible to fix the fact that the ancilla needs to be in a specific state. You can modify the construction so that it works with any ancilla, no matter its value:
Impossibility without ancilla
There are two relevant parities here: permutation parity and phase parity. The permutation parity of an operation is the number of state swaps needed to implement the operation ignoring phase. The phase parity of an operation is whether an even or odd number of states have their amplitude negated by the operation.
When there are no ancilla qubits, the gate you described has odd permutation parity and odd phase parity. Therefore the gate you described can only be used to implement operations whose permutation and phase parity are the same.
When there are no ancilla qubits, the Toffoli operation has odd permutation parity but even phase parity. It has disagreeing permutation and phase parity. Therefore the Toffoli cannot be decomposed into your gate when there are no ancilla qubits.
This proof breaks when there are ancilla qubits because all the parities become even, due to everything happening in the subspace where the ancilla is 0 and also in the subspace where the ancilla is 1. | {
"domain": "quantumcomputing.stackexchange",
"id": 4174,
"tags": "quantum-gate, universal-gates"
} |
Constructing automata with the same traces, but where a CTL-formula is not equally satisfied | Question: Hard to put this question in a short title. As part of a self-exercise, I'm trying to solve 6.15b of Principles of Model Checking by Baier and Katoen. You're supposed to prove that there does not exist an equivalent LTL formula for the CTL formula $\phi = A\Diamond E\bigcirc A\Diamond \neg a$, without the theorem that says that you can remove all A's and E's.
It is hinted to me that if I can construct two automata $A$ and $A'$ such that $\textrm{Traces}(A) = \textrm{Traces}(A')$, but where $A \models \phi$ and $A' \not\models \phi$, I'm practically done. (Assume $\psi$ is an LTL formula with $\psi \equiv \phi$, then $A\models\psi\iff A'\models\psi$. This is a contradiction, which proves there is no LTL equivalent.)
Now, how to construct such automata? Currently I'm basically constructing simple automata that satisfy $\phi$, but all the variants with similar traces also appear to satisfy $\phi$.
Kind regards.
Answer: SPOILER ALERT: Since you ask this in TCS, I assume you want the answer and not hints. If you don't want the full answer, don't continue reading...
One example of such automata is as follows.
The first, $A_1$, consists of three states $q_0,q,s$, where $L(q_0)=L(q)=a$ and $L(s)=\neg a$ ($L$ being the labeling function), with $q_0$ as the initial state. $q_0$ has a transition to $q$, $q$ has a transition to $q$ and to $s$, and $s$ has only a self loop.
The second, $A_2$, consists of 4 states $q_1,...,q_4$ (with $q_1$ initial) with the following transitions:
$$q_1\to q_2\vee q_3$$
$$q_2\to q_2$$
$$q_3\to q_3\vee q_4$$
$$q_4\to q_4$$
And the labels are: $L(q_1)=L(q_2)=L(q_3)=a$ and $L(q_4)=\neg a$.
It is easy to see that the traces of both automata are exactly $a^\omega \cup a\cdot a^+\cdot (\neg a)^\omega$ (in both, every trace shows at least two $a$'s before $\neg a$ can appear), but the path $q_1,q_2^\omega$ makes it so that $A_2\not\models \phi$, whereas $A_1\models \phi$.
By the way, you can remove $q_3$ from $A_2$, but I find it clearer this way. | {
"domain": "cstheory.stackexchange",
"id": 2068,
"tags": "lo.logic, automata-theory, model-checking, linear-temporal-logic"
} |
Quicksort to find median? | Question: Why is the worst scenario $\mathcal{O}\left(n^2\right)$ when using quicksort to find the median of a set of numbers?
If your algorithm continually picks a number larger than or smaller than all numbers in the list wouldn't your algorithm fail? For example if the list of numbers are:
$S = (12,75,82,34,55,15,51)$
and you keep picking numbers greater than $82$ or less than $12$ to create sublists with, wouldn't your set always remain the same size?
If your algorithm continually picks a number that creates sublists of $1$ why is the worst case scenario $\mathcal{O}\left(n^2\right)$? Wouldn't efficiency be linear considering that according to the Master Theorem, $d>\log_b a$?* (and therefore be $\mathcal{O}\left(n^d\right)$ or specifically in this case $\mathcal{O}\left(n\right)$)
*Where $d$ is the efficiency exponent (i.e. linear, exponential etc.), $b$ is the factor the size of problem is reduced by at each iteration, $a$ is the number of subproblems and $k$ is the level. Full ratio: $T(n) = \mathcal{O}\left(n^d\right) * (\frac{a}{b^d})^k$
Answer: Quicksort is for sorting; the algorithm you refer to is a selection algorithm known as Quick Select.
Since you can only pick as pivot a number that is in the list, case #1 never happens.
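For reference, Quick Select can be sketched as follows (here the pivot is always the first element, so an already-sorted input triggers exactly the worst case; this naive version copies sublists for clarity):

```python
def quickselect(a, k):
    """Return the k-th smallest element (0-indexed) of the list a."""
    pivot = a[0]
    smaller = [x for x in a[1:] if x < pivot]
    larger = [x for x in a[1:] if x >= pivot]
    if k < len(smaller):
        return quickselect(smaller, k)
    if k == len(smaller):
        return pivot
    return quickselect(larger, k - len(smaller) - 1)

S = [12, 75, 82, 34, 55, 15, 51]
print(quickselect(S, len(S) // 2))  # 51, the median
```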
Worst case is that you continually pick a number that partitions the list into 2 lists: a list with just one element and a list with $n-1$ elements; if this is the case, in each iteration you only rule out a single element.
So first iteration you do $n$ operations, second iteration $n-1$ operations, third one
$n-2$ operations, ... last iteration 1 operation.
Which is the sum of the first $n$ natural numbers:
$1+2+3+... +(n-2)+(n-1)+ n = n(n+1)/2$ operations = $O(n^2)$ | {
"domain": "cs.stackexchange",
"id": 213,
"tags": "algorithms, algorithm-analysis, search-algorithms"
} |
How to perform multi-class classification with qiskit's VQC? | Question: I am following the tutorial given in qiskit's website Neural Network Classifier and Regressor. In the first part, classification, the third section refers to qiskit's VQC library. Everything works fine with the given X and y where there are only two classes. I modified the X and y slightly to include four classes instead of two using the following lines:
num_inputs = 2
num_samples = 100
X = 2*np.random.rand(num_samples, num_inputs) - 1
y = np.random.choice([0,1,2,3], 100)
y_one_hot = np.zeros(( num_samples, 4 ))
for i in range(len(y)):
y_one_hot[i, y[i]]=1
The rest of the code is untouched. VQC with ZZFeatureMap, RealAmplitudes ansatz, cross_entropy loss function and COBYLA() optimizer.
However, when I try to fit with this new data, the classifier only runs for 5 iterations and the weights are not being changed at all. The loss or objective function's value is always calculated as "nan".
There is a similar question I had posted about weights not being optimized with VQC, but back then I thought it was because of my data or VQC's configuration. After trying this example, I realised it clearly has something to do with multiple classes and not just the classifier's configuration.
Please shine light on how to do multi-class classification using the qiskit's VQC library.
Answer: This has been resolved now. The above-mentioned problem was an issue with version 0.3.0 of the qiskit_machine_learning library. You can get more details about the issue and how it has been resolved here.
TLDR: Install the 0.4.0 developer's version of the qiskit_machine_learning library. Clone this repository and, from within the folder, run pip install . | {
"domain": "quantumcomputing.stackexchange",
"id": 3483,
"tags": "qiskit, programming, quantum-enhanced-machine-learning"
} |
/rtabmap/rtabmap: Did not receive data since 5 seconds | Question: trying to run RtabMap on my realsense-435 device. I've set up all the topics and made sure they are published and subscribed, nevertheless, RtabMap gives me this error:
[ WARN] [1649509757.946524434]: /rtabmap/rgbd_odometry: Did not receive data since 5 seconds! Make sure the input topics are published ("$ rostopic hz my_topic") and the timestamps in their header are set.
/rtabmap/rgbd_odometry subscribed to (approx sync):
/rtabmap/odom \
/dynamic_image \
/dynamic_masked_image_raw \
/sync_camera_info
P.S.: I've synced all these (the last three topics), therefore there aren't any delays among them.
/rtabmap/odom topic doesn't get published. I'm guessing the issue might have been raised from a bad tf tree??!!
any help is appreciated
the codes I ran:
roslaunch realsense2_camera rs_camera.launch align_depth:=true
then
roslaunch rtabmap_ros rtabmap.launch rtabmap_args:="--delete_db_on_start" depth_topic:=/dynamic_masked_image_raw rgb_topic:=/dynamic_image camera_info_topic:=/sync_camera_info approx_sync:=false
the three topics (i.e. /dynamic_masked_image_raw, /dynamic_image, /sync_camera_info) are published with the same timestamp and contain the frames and camera info.
each terminal is sourced with the specific workspace.
out put of rostopic hz (the topics):
/dynamic_image 16.09 0.04809 0.09111 0.008274 354
/dynamic_masked_image_raw 16.09 0.04802 0.09153 0.008369 354
/sync_camera_info 16.09 0.048 0.09235 0.00834 353
full output error:
[ WARN] [1649758359.612186938]: /rtabmap/rtabmap: Did not receive data since 5 seconds! Make sure the input topics are published ("$ rostopic hz my_topic") and the timestamps in their header are set. If topics are coming from different computers, make sure the clocks of the computers are synchronized ("ntpdate"). Parameter "approx_sync" is false, which means that input topics should have all the exact timestamp for the callback to be called.
/rtabmap/rtabmap subscribed to (exact sync):
/rtabmap/odom \
/dynamic_image \
/dynamic_masked_image_raw \
/sync_camera_info \
/rtabmap/odom_info
Answer: Turns out the culprit was the type of encoding I used to convert the depth topic from a ROS message to OpenCV. For others facing the same issue: the depth image should have 16UC1 encoding. I'm still trying to fix the problem, but for now I've found the source of the issue | {
"domain": "robotics.stackexchange",
"id": 2529,
"tags": "ros, slam, visual-odometry"
} |
Sensor logger for Raspberry Pi in a stratospheric probe | Question: I'm writing a Python Script for a Raspberry Pi to measure different sensors. We are planning to send the Pi with that Script running to the stratosphere, so the power usage for the Pi is limited.
I apologize in advance for the code; I had no prior experience with Python.
Are there any ways I can make this code more battery friendly? Would it be beneficial to write 10 rows at once instead of writing one row at a time?
#!/usr/bin/env python3
from sense_hat import SenseHat
import time
import csv
import datetime
sense = SenseHat()
sense.clear()
sense.set_imu_config(True, True, True)
sense.low_light = True
with open('data.csv', mode='w') as file:
writer = csv.writer(file, delimiter=';', quotechar='"', quoting=csv.QUOTE_MINIMAL)
writer.writerow(['Zeit','Temperatur1', 'Temperatur2', 'Temperatur3', 'Luftdruck', 'Luftfeuchtigkeit', 'Yaw', 'Pitch', 'Roll', 'Compass X', 'Compass Y', 'Compass Z', 'Gyro X', 'Gyro Y', 'Gyro Z'])
with open('acc.csv', mode='w') as file:
writer = csv.writer(file, delimiter=';', quotechar='"', quoting=csv.QUOTE_MINIMAL)
writer.writerow(['Zeit','Acc_X','Acc_Y','Acc_Z'])
with open('log.csv', mode='w') as file:
writer = csv.writer(file, delimiter=';', quotechar='"', quoting=csv.QUOTE_MINIMAL)
writer.writerow(['Zeit','Fehler'])
# Farben definieren
red = (255, 0, 0)
green = (0, 255, 0)
black = (0,0,0)
def writeDataToCsv(temperature, temperature2, temperature3, pressure, humidty, yaw, pitch, roll, mag_x, mag_y, mag_z, gyro_x, gyro_y, gyro_z):
with open('data.csv', mode='a') as file:
writer = csv.writer(file, delimiter=';', quotechar='"', quoting=csv.QUOTE_MINIMAL)
writer.writerow([datetime.datetime.now(),temperature, temperature2, temperature3, pressure, humidty, yaw, pitch, roll, mag_x, mag_y, mag_z, gyro_x, gyro_y, gyro_z])
def writeAccelerationToCsv(x,y,z):
with open('acc.csv', mode='a') as file:
writer = csv.writer(file, delimiter=';', quotechar='"', quoting=csv.QUOTE_MINIMAL)
writer.writerow([datetime.datetime.now(),x,y,z])
sense.set_pixel(0, 0, green)
time.sleep(.05)
sense.set_pixel(0, 0, black)
def main():
sense.set_pixel(0, 0, black)
counter = 0
try:
while True:
#Region Acceleration
acceleration = sense.get_accelerometer_raw()
acc_x = acceleration['x']
acc_y = acceleration['y']
acc_z = acceleration['z']
writeAccelerationToCsv(acc_x,acc_y,acc_z)
time.sleep(.250)
counter+=1
#Region Data
if(counter == 4):
temperature = sense.get_temperature()
temperature2 = sense.get_temperature_from_humidity()
temperature3 = sense.get_temperature_from_pressure()
pressure = sense.get_pressure()
humidty = sense.get_humidity()
orientation = sense.get_orientation()
yaw = orientation["yaw"]
pitch = orientation["pitch"]
roll = orientation["roll"]
mag = sense.get_compass_raw()
mag_x = mag["x"]
mag_y = mag["y"]
mag_z = mag["z"]
gyro = sense.get_gyroscope_raw()
gyro_x = gyro["x"]
gyro_y = gyro["y"]
gyro_z = gyro["z"]
writeDataToCsv(temperature, temperature2, temperature3, pressure, humidty, yaw, pitch, roll, mag_x, mag_y, mag_z, gyro_x, gyro_y, gyro_z)
counter = 0;
except Exception as e:
with open('log.csv', mode='a') as file:
writer = csv.writer(file, delimiter=';', quotechar='"', quoting=csv.QUOTE_MINIMAL)
writer.writerow([datetime.datetime.now(),str(e)])
sense.set_pixel(1, 0, red)
finally:
pass
main()
if __name__ == '__main__':
main()
Answer: Have you already executed the code to see how it performs and if the battery will last? There is that famous Donald Knuth quote saying premature optimization is the root of all evil (or at least most of it) in programming.
I never had to think about the energy consumption of a program, so I cannot tell you about the power efficiency. But as vnp already did, I can also share my opinion about the code structure to help you identify bottlenecks more easily. Also, a different structure should help you to still log some data even in case of exceptions.
Here is what struck me on first read:
most of the code is defined in the main method
you overwrite the complete data files at the beginning of the program
very broad exception clause
repetition of the CSV write (violates the DRY principle - don't repeat yourself)
I tried to resolve some of the issues and refactored the structure of the code:
#!/usr/bin/env python3
from sense_hat import SenseHat
import time
import csv
import datetime
# defined constants on module level and capitalized the names (pep8: https://www.python.org/dev/peps/pep-0008/#constants)
RED = (255, 0, 0)
GREEN = (0, 255, 0)
BLACK = (0,0,0)
class DataLogger(object):
def __init__(self, init_csv_files=False):
# initialize the commonly used sensor
self.sense = SenseHat()
self.sense.clear()
self.sense.set_imu_config(True, True, True)
self.sense.low_light = True
# only initialize the csv files, if intended
# I would suggest not to init them in the same program though.
# If - for some reasons - the python interpreter crashes and the script is restarted,
# the init of the csv_files will overwrite all the data which was logged so far.
if init_csv_files:
self.init_csv_files()
def write_data_to_file(self, data, file_name, mode='a', delimiter=';', quotechar='"', quoting=csv.QUOTE_MINIMAL):
"""
Helper method to write the given data to a csv file. Using 'append' as default mode to avoid accidental overwrites.
"""
with open(file_name, mode=mode) as file:
writer = csv.writer(file, delimiter=delimiter, quotechar=quotechar, quoting=quoting)
writer.writerow(data)
def init_csv_files(self):
# see comment in init method
data_headings = ['Zeit','Temperatur1', 'Temperatur2', 'Temperatur3', 'Luftdruck', 'Luftfeuchtigkeit', 'Yaw', 'Pitch', 'Roll', 'Compass X', 'Compass Y', 'Compass Z', 'Gyro X', 'Gyro Y', 'Gyro Z']
self.write_data_to_file(data_headings, 'data.csv', 'w')
acc_headings = ['Zeit','Acc_X','Acc_Y','Acc_Z']
self.write_data_to_file(acc_headings, 'acc.csv', 'w')
log_headings = ['Zeit','Fehler']
self.write_data_to_file(log_headings, 'log.csv', 'w')
def start_logging(self):
# actual execution
self.sense.set_pixel(0, 0, BLACK)
counter = 0
while True:
# moved the acceleration logging to a different method
# and caught possible exceptions there, so the counter will still be increased
# and the rest of the data may still be logged even if the acceleration data
# always raises exceptions
self.log_accelleration()
time.sleep(.250)
counter += 1
# using counter % 4 == 0 instead of counter == 4
# this will evaluate to true for every number divisible by 4
# If you do the strict comparison, you could find yourself in the scenario
# where the data logging is never executed, if the counter is larger than 4
# (in this case this is very unlikely, but in threaded scenarios it would be possible,
# so doing modulo 4 is more defensive)
if(counter % 4 == 0):
self.log_data()
counter = 0
def log_accelleration(self):
acceleration_data = self.get_accelleration()
if acceleration_data:
try:
self.write_data_to_file(acceleration_data, 'acc.csv')
except Exception as e:
self.log_exception(e)
pass
else:
# no exception occurred
self.sense.set_pixel(0, 0, GREEN)
time.sleep(.05)
finally:
self.sense.set_pixel(0, 0, BLACK)
def log_data(self):
# saving datetime first, before reading all the sensor data
data = [datetime.datetime.now()]
# moved each of the calls to sense in a separate method
# exceptions will lead to empty entries being logged but
# if e.g. get_pressure raises an exceptions, the other data may still get logged
data += self.get_temperature()
data += self.get_pressure()
data += self.get_humidity()
data += self.get_orientation()
data += self.get_mag()
data += self.get_gyro()
self.write_data_to_file(data, 'data.csv')
def log_exception(self, exception):
self.sense.set_pixel(1, 0, RED)
self.write_data_to_file([datetime.datetime.now(), str(exception)], 'log.csv')
self.sense.set_pixel(0, 0, BLACK)
def get_accelleration(self):
try:
acceleration = self.sense.get_accelerometer_raw()
except Exception as e:
self.log_exception(e)
return
acc_x = acceleration['x']
acc_y = acceleration['y']
acc_z = acceleration['z']
return [datetime.datetime.now(), acc_x, acc_y, acc_z]
def get_temperature(self):
try:
temperature1 = self.sense.get_temperature()
temperature2 = self.sense.get_temperature_from_humidity()
temperature3 = self.sense.get_temperature_from_pressure()
except Exception as e:
return [None, None, None]
return [temperature1, temperature2, temperature3]
def get_pressure(self):
try:
pressure = self.sense.get_pressure()
except Exception as e:
return [None]
return [pressure]
def get_humidity(self):
try:
humidty = self.sense.get_humidity()
except Exception as e:
return [None]
return [humidty]
def get_orientation(self):
try:
orientation = self.sense.get_orientation()
except Exception as e:
return [None, None, None]
return [orientation["yaw"], orientation["pitch"], orientation["roll"]]
def get_mag(self):
try:
mag = self.sense.get_compass_raw()
except Exception as e:
return [None, None, None]
return [mag["x"], mag["y"], mag["z"]]
def get_gyro(self):
try:
gyro = self.sense.get_gyroscope_raw()
except Exception as e:
return [None, None, None]
return [gyro["x"], gyro["y"], gyro["z"]]
if __name__ == '__main__':
data_logger = DataLogger(init_csv_files=True)
try:
data_logger.start_logging()
except Exception as e:
data_logger.log_exception(e)
Further steps for improvements:
Catch specific exceptions (e.g. IOErrors in the write csv, or SenseHat specific exceptions
Log exceptions (where needed) and return different defaults in cases of error
Refactor the write to - as you suggested - log the data in memory and only write every 10th entry to the csv. Attention: If you only log every 10th or even every 100th data entry and the python interpreter crashes, the recently logged data will be lost
Don't write the csv headers in code, but manually prepare the csv files and put them next to the script
Use a sqlite database and log the data here instead of in CSVs
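As a sketch of the in-memory batching idea (a hypothetical helper class of my own, not part of the reviewed code - and with the caveat about data loss mentioned above):

```python
import csv

class BufferedCsvWriter:
    """Collect rows in memory and flush them to disk in batches.
    Caveat: rows still in the buffer are lost if the process crashes."""

    def __init__(self, file_name, batch_size=10):
        self.file_name = file_name
        self.batch_size = batch_size
        self.buffer = []

    def add_row(self, row):
        self.buffer.append(row)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        # one file open per batch instead of one per row
        if not self.buffer:
            return
        with open(self.file_name, mode='a', newline='') as file:
            writer = csv.writer(file, delimiter=';')
            writer.writerows(self.buffer)
        self.buffer = []
```

You would call `flush()` once more on shutdown so the tail of the buffer is not lost.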
In order to figure out where to start with the optimizations, you can now profile the helper methods (write_data_to_file, get_temperature and the other get_... methods) and derive appropriate measurements to take.
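For that profiling step, a small stdlib helper (my own suggestion, not part of the code above) can wrap any of those methods and return a report:

```python
import cProfile
import io
import pstats

def profile_call(func, *args, **kwargs):
    """Run func under cProfile and return (result, stats report string)."""
    profiler = cProfile.Profile()
    profiler.enable()
    result = func(*args, **kwargs)
    profiler.disable()
    stream = io.StringIO()
    # show the ten most expensive calls by cumulative time
    pstats.Stats(profiler, stream=stream).sort_stats('cumulative').print_stats(10)
    return result, stream.getvalue()
```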
PS. Fair warning: I never executed the code in a python shell, so it may not be free from bugs :see_no_evil:. | {
"domain": "codereview.stackexchange",
"id": 33668,
"tags": "python, csv, logging, embedded, raspberry-pi"
} |
How to start creating a package to read a signal and publish it in a ROS network | Question:
Hi,
I am new to ROS and programming and I would like some advice on developing a ROS package/node that reads a radio signal. So far I can only access it from its IP address in a browser, like an internet wifi modem, and I would like to publish it so I could use it in a ROS network.
Any advice, papers, or links to start this journey?
Thanks a lot in advance.
Originally posted by Paulo on ROS Answers with karma: 13 on 2019-06-13
Post score: 0
Original comments
Comment by ChuiV on 2019-06-13:
I think further clarification is needed here. What "radio signal" are you talking about? Radar? Are you asking about how to write a node that communicates with some sensor and publishes the data in ROS?
Comment by Paulo on 2019-06-13:
Hi. It's basically a rocket m900, one access point and one client connect via wifi. I would like to monitor it and send the signal strength through a ROS network. Although I'm not familiar with getting the signal data, that is not in any ROS package, and publish it in my ROS network. I can initially access it like a common wifi router and see some statistics. I would like some tips and material to start developing a way to publish this that in my ROS network. Thank you!
Answer:
Your first task will be to write a C++ or Python program that can access the signal strength information you need. If you can monitor it over HTTP, can you send a cURL request and receive a text response that contains the signal strength? Your program will then need to parse this text and extract the raw strength reading.
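For illustration, the parsing step might look like this in Python (the response format and the field name `signal` here are hypothetical - adapt the pattern to whatever the device actually returns):

```python
import re

def parse_signal_strength(response_text):
    """Extract a signal strength reading (in dBm) from the device's
    text response. The field name 'signal' is a made-up example."""
    match = re.search(r'"signal"\s*:\s*(-?\d+)', response_text)
    if match is None:
        raise ValueError("no signal strength found in response")
    return int(match.group(1))
```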
When you have this working you can adapt the simple publisher example to publish the data to a topic.
Hope this gives you enough to get started.
Originally posted by PeteBlackerThe3rd with karma: 9529 on 2019-06-13
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by Paulo on 2019-06-13:
Nice . I'll check that as well. Thank you
Comment by Paulo on 2019-06-18:
Hello, it helped me indeed. Thanks. I am able to get my data through a PHP script now.
Is it possible to use it with ROS in a python package?
Thank you a lot
Comment by PeteBlackerThe3rd on 2019-06-18:
Yes it is certainly possible. I recommend using one of the Python cURL libraries such as pycURL. That way you will be able to get the data in the same way from a Python ROS node. Then it should be fairly straightforward to publish the data over a topic. | {
"domain": "robotics.stackexchange",
"id": 33173,
"tags": "ros, ros-kinetic, publisher, network"
} |
keras flow_from_directory returns 0 images | Question: When I try to use the following snippet of code to try to predict on a batch of images, I get a message saying that no image were found:
test_datagen = ImageDataGenerator()
test_generator = test_datagen.flow_from_directory(
directory='test/',
target_size=(300, 300),
color_mode="rgb",
shuffle = False,
class_mode='binary',
batch_size=1)
filenames = test_generator.filenames
My directory structure is as follows.
I first have a main directory named dogs_vs_cats in which I have two sub directories train and test containing the respective images and also the notebook which contains this code.
Answer: The Keras generator always looks for subfolders (representing the classes). Images inside the subfolders are associated with a class.
So when you work on C:\images\ and you have two classes, say C1, C2, you need to create subfolders C:\images\C1\ and C:\images\C2\. The directory inside the generator function should point to C:\images\.
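For a prediction-only folder, a small helper (my own sketch, not part of Keras) can create that structure by nesting all images under a single dummy class:

```python
import os
import shutil

def nest_images_for_prediction(src_dir, class_name="unknown"):
    """Move every file in src_dir into src_dir/class_name so that
    flow_from_directory(src_dir, ...) discovers them as one class."""
    target = os.path.join(src_dir, class_name)
    os.makedirs(target, exist_ok=True)
    for name in os.listdir(src_dir):
        path = os.path.join(src_dir, name)
        if os.path.isfile(path):
            shutil.move(path, os.path.join(target, name))
    return target
```

For prediction you would then pass `class_mode=None` to `flow_from_directory` and read `test_generator.filenames` as before.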
See this post for the case of image prediction: https://stackoverflow.com/a/55991598/9524424 | {
"domain": "datascience.stackexchange",
"id": 9420,
"tags": "keras, image-classification"
} |
Classically, if the magnetic moment of a particle is aligned with a time-varying magnetic field, can its spin flip? | Question: Consider the time-varying magnetic field:
$$
\mathbf{B}=B \tanh{\Big(\frac{t}{\tau}\Big)}\hat{\mathbf{z}}.
$$
If the magnetic moment (which is proportional to the angular momentum) of a particle at $t=-\infty$ is in the $\hat{\mathbf{z}}$-direction, will it change as soon as the direction of $\mathbf{B}$ flips from $-\hat{\mathbf{z}}$ to $\hat{\mathbf{z}}$ at $t=0$?
With the potential energy of the particle being $U=-\vec{\mu}\cdot\mathbf{B}$, it seems to me as if the moment, so as to minimize $U$, would change its direction with time to align with $\mathbf{B}$, but this seems to violate the principle of angular momentum conservation.
How can these two ways of thinking about angular momentum in this context be made consistent with one another?
Answer: Note that the potential energy $U=-\vec{\mu}\cdot\mathbf{B}=-\mu B\cos\theta$ has two equilibria, for $\theta=0$ (dipole and field aligned) and for $\theta=\pi$ (dipole and field antialigned). The equilibrium at $\theta=0$ is stable, while the equilibrium at $\theta=\pi$ is unstable. If the situation happens exactly as you describe (the external field is always precisely in the $\hat{z}$-direction, and the dipole is precisely aligned with the $\hat{z}$-direction), then there will never be any torque on the dipole, since the system jumped from one equilibrium to the other (from a stable equilibrium to an unstable equilibrium), so the dipole will not flip. But for $t>0$ (i.e. when the dipole and field are anti-aligned), if there is any perturbation of either the external field or the dipole, then the system will deviate from this unstable equilibrium and settle into the stable one. In other words, the dipole will flip if anything is perturbed even slightly, meaning that in any realistic situation, the dipole will flip for $t>0$.
Angular momentum of the dipole is not conserved when it flips because there is a torque being exerted on the dipole, namely from the external magnetic field. But of course, for the system consisting of the dipole and the external field, angular momentum is conserved. This means that when the dipole flips, there is a transfer of angular momentum between the dipole and the external field. This transfer is accomplished by the electromagnetic radiation that results from the dipole's rotation. | {
"domain": "physics.stackexchange",
"id": 53150,
"tags": "angular-momentum, classical-electrodynamics, magnetic-moment"
} |
Assigning values to array elements based on a lookup table | Question: I am writing a C# program wherein I need to populate an array based on a lookup table and set of string arrays with metadata. My lookup table looks like this (Table with key: transmitter, value: Array of receiver):
{
LED1: ["px1","px2","px3"],
LED2: ["px4","px5","px6"]
}
and my meta arrays looks like this (it is dynamic (just an example) and comes as a response from a DB query):
var transmitters = new string[] { "LED1", "LED2" };
var receivers = new string[] { "px1", "px2", "px3", "px4", "px5", "px6" };
My requirement is:
If the transmitter LED1 or LED2 (or any other transmitter) is present in the lookup table, the value of the transmitter (i.e. ["px1","px2","px3"]) has to be compared with the receivers array; each matched transmitter and receiver has to be marked yellow.
Orphan transmitters or receivers have to be marked red.
Example
Lookup
{
LED1: ["px1", "px2", "px3"],
LED2: ["px5", "px8"]
}
Transmitters and receivers
var transmitters = new string[] { "led1", "led2" };
var receivers = new string[] { "px1", "px2", "px3", "px4", "px5", "px6" };
The result should be a list as:
led1-yellow
px1-yellow
px2-yellow
px3-yellow
led2-yellow
px5-yellow
px4-red
px6-red
I have written code that works:
using System;
using System.Collections.Generic;
using System.Linq;
public class Program
{
public static void Main()
{
var transmitters = new string[] { "led1", "led2", "led3" };
var receivers = new string[] { "px1", "px2", "px3", "px4", "px5", "px6" };
var lookup = new Dictionary<string, string[]>() {
{ "led1", new string[] { "px1", "px2", "px3" } },
{ "led2", new string[] { "px5", "px8"} }
};
var blocks = new List<Block>();
var blocksTracker = new List<string>();
foreach (var transmitter in transmitters)
{
if (lookup.ContainsKey(transmitter))
{
var receiverLookup = lookup[transmitter];
var intersection = receivers.Intersect(receiverLookup).ToArray();
if (intersection.Length > 0)
{
blocks.Add(new Block() { Id = transmitter, status = "yellow"});
blocksTracker.Add(transmitter);
foreach (var receiver in intersection)
{
blocks.Add(new Block(){Id = receiver, status = "yellow"});
blocksTracker.Add(receiver);
}
} else
{
blocks.Add(new Block(){Id = transmitter, status = "red"});
blocksTracker.Add(transmitter);
}
}
}
var ungrouped = receivers.Except(blocksTracker).ToArray();
foreach (var receiver in ungrouped)
{
blocks.Add(new Block(){Id = receiver, status = "red"});
blocksTracker.Add(receiver);
}
foreach (var i in blocks)
{
Console.WriteLine(i.Id + "-"+i.status);
}
}
public class Block
{
public string Id { get; set; }
public string status { get; set; }
}
}
I am new to C# and I wanted to know if there is a better way of doing this. You can see the working Fiddle here.
Answer:
if (lookup.ContainsKey(transmitter))
{
var receiverLookup = lookup[transmitter];
This searches for the KeyValuePair twice. There's a more efficient approach:
if (lookup.TryGetValue(transmitter, out var receiverLookup))
var ungrouped = receivers.Except(blocksTracker).ToArray();
foreach (var receiver in ungrouped)
{
blocks.Add(new Block(){Id = receiver, status = "red"});
blocksTracker.Add(receiver);
}
The ToArray() there is unnecessary: the enumerable can be left as a lazy enumerable because the only use is to iterate over it once.
var intersection = receivers.Intersect(receiverLookup).ToArray();
if (intersection.Length > 0)
{
blocks.Add(new Block() { Id = transmitter, status = "yellow"});
blocksTracker.Add(transmitter);
foreach (var receiver in intersection)
{
blocks.Add(new Block(){Id = receiver, status = "yellow"});
blocksTracker.Add(receiver);
}
} else
{
blocks.Add(new Block(){Id = transmitter, status = "red"});
blocksTracker.Add(transmitter);
}
This seems rather complicated. I think the whole thing could be simplified:
var transmittersPaired = new HashSet<string>();
var receiversPaired = new HashSet<string>();
foreach (var transmitter in transmitters)
{
if (lookup.TryGetValue(transmitter, out var receiverLookup) && receiverLookup.Any())
{
transmittersPaired.Add(transmitter);
foreach (var receiver in receiverLookup)
{
receiversPaired.Add(receiver);
}
}
}
var blocks = new List<Block>();
foreach (var transmitter in transmitters)
{
blocks.Add(new Block { Id = transmitter, status = transmittersPaired.Contains(transmitter) ? "yellow" : "red" });
}
foreach (var receiver in receivers)
{
blocks.Add(new Block { Id = receiver, status = receiversPaired.Contains(receiver) ? "yellow" : "red" });
}
There's still some repeated code, which might be simplified in one of two ways. If there's a guarantee that the transmitters and receivers will never share IDs then transmittersPaired and receiversPaired could be merged into one set, and the foreach loops at the end could be merged into one loop over transmitters.Concat(receivers). Alternatively, a method could be factored out. | {
"domain": "codereview.stackexchange",
"id": 34347,
"tags": "c#, .net"
} |
How do separable states equate to energy eigenstates? | Question: Let's say I have some state vector $|\Psi(t)\rangle$, and I express it as a linear combination of eigenstates of some operator, $\hat{Q}$, with a discrete spectrum, which we will call $|q_n\rangle$. Let's say that $\hat{Q}$ is not the Hamiltonian, so these are not energy eigenstates. Then, I have:
$$|\Psi(t)\rangle \ = \ \displaystyle\sum_{n} |q_n\rangle \langle q_n |\Psi(t)\rangle \ = \ \displaystyle\sum_{n} r_n(t) |q_n\rangle$$
Now, if I express this in the position basis, I get something like:
$$\Psi(x,\,t) \ = \ \displaystyle\sum_{n} r_n(t) \psi_n(x)$$
Now each term of this sum is orthogonal, so they obey the Schrödinger Equation. But isn't it true that the wavefunctions that are separable are equivalent to the set of energy eigenfunctions with time dependence? How can this be reconciled with the fact that $r_n(t) \psi_n(x)$ is a separable wavefunction that is not an eigenfunction of the Hamiltonian?
I'm probably making a stupid assumption/mistake somewhere in my reasoning, so any clarification is much appreciated.
Answer: Any separable solution to the Schrodinger equation is stationary. The mistaken step in your reasoning is "Each term of this sum is orthogonal, so they obey the Schrödinger Equation." That's not true.
If $Q$ commutes with the Hamiltonian (and there's no degeneracy), then $r_n(t) \psi_n(x)$ is a stationary state. If $Q$ does not commute with the Hamiltonian, then $r_n(t) \psi_n(x)$ does not solve the Schrodinger equation. In no situation is $r_n(t) \psi_n(x)$ a non-stationary separable solution to the Schrodinger equation. | {
"domain": "physics.stackexchange",
"id": 61587,
"tags": "quantum-mechanics"
} |
A one-pass heavy hitter algorithm | Question: I was shown this problem from a class last year and I am still not sure what the right answer is.
Items that occur with high frequency in a dataset are sometimes called heavy hitters. Accordingly, let us define the HEAVY-HITTERS problem, with real parameter $\varepsilon>0$, as follows. The input is a stream $\sigma$. Let $m, n, f$ have their usual meanings. Let
$$
\mathrm{HH}_{\varepsilon}(\sigma)=\left\{j \in[n]: f_{j} \geq \varepsilon m\right\}
$$
be the set of $\varepsilon$-heavy hitters in $\sigma$. Modify Misra-Gries to obtain a one-pass streaming algorithm that outputs this set "approximately" in the following sense: the set $H$ it outputs should satisfy
$$
\mathrm{HH}_{\varepsilon}(\sigma) \subseteq H \subseteq \mathrm{HH}_{\varepsilon / 2}(\sigma)
$$
Your algorithm should use $O\left(\varepsilon^{-1}(\log m+\log n)\right)$ bits of space.
How can you do this?
I found a paper by Manku and Motwani but it isn't a modification of Misra-Gries as far as I can see. From the stated complexity it looks like you should set $k=\frac{1}{\epsilon}$ and run a constant number of copies of Misra-Gries but I can't get that to work.
Answer: Choose $k = 2/\epsilon$, and output all items satisfying $\hat{f}_j \geq \frac{\epsilon}{2}m$.
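A one-pass sketch of this in Python (my own illustration; standard Misra-Gries with $k = \lceil 2/\epsilon \rceil$, which keeps at most $k-1$ counters):

```python
import math

def epsilon_heavy_hitters(stream, eps):
    """One-pass Misra-Gries with k = ceil(2/eps) counters.
    Returns a set H with HH_eps(stream) <= H <= HH_{eps/2}(stream)."""
    k = math.ceil(2 / eps)
    counters = {}
    m = 0
    for j in stream:
        m += 1
        if j in counters:
            counters[j] += 1
        elif len(counters) < k - 1:
            counters[j] = 1
        else:
            # decrement every counter; drop the ones that reach zero
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    # keep exactly the items whose estimate clears the eps/2 threshold
    return {j for j, f_hat in counters.items() if f_hat >= eps * m / 2}
```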
The final estimates satisfy
$$
f_j - \frac{\epsilon}{2} m \leq \hat{f}_j \leq f_j.
$$
If $j \in \mathrm{HH}_\epsilon(\sigma)$ then $\hat{f}_j \geq \epsilon m - \frac{\epsilon}{2} m = \frac{\epsilon}{2} m$, and so we output $j$. Conversely, if we output $j$ then $f_j \geq \hat{f}_j \geq \frac{\epsilon}{2} m$, and so $j \in \mathrm{HH}_{\epsilon/2}(\sigma)$. | {
"domain": "cs.stackexchange",
"id": 18974,
"tags": "streaming-algorithm"
} |
2 approximation algorithm for the single machine scheduling problem | Question: We are given one machine and $n$ jobs that we want to process.
For the $n$ jobs we have the following data:
$r_{1}, ... , r_{n}$ are the release times
$p_{1}, ... , p_{n}$ are the processing times
$d_{1}, ... , d_{n}$ are the deadlines/due dates
$c_{1}, ... , c_{n}$ are the completion times(when the jobs are finished executing)
$l_{1}, ... , l_{n}$ are the latenesses achieved where $l_{j} = c_{j} - d_{j}$
Our goal is to find a schedule that minimizes the maximum lateness achieved ($L^{*}_{max}$)
Since even deciding whether there is a schedule such that $L^{*}_{max} \leq 0$ is NP-hard, we cannot come up with an approximation algorithm of any ratio $\rho > 0$ that runs in polynomial time and produces a schedule with maximum lateness at most $\rho L^{*}_{max}$. This is why we make the assumption that the due dates are negative, i.e. $d_{i} < 0$.
Before trying to come up with an algorithm, we can prove that for any subset $S$ of jobs, $L^{*}_{max} \geq r(S) + p(S) - d(S)$, where $r(S) = \min_{j\in S}r_{j}$, $p(S) = \sum_{j\in S}p_{j}$ and $d(S) = \max_{j \in S} d_{j}$.
To prove this what we have to do is look at the last job of the optimal schedule and try to find its lateness, which will also be a lower bound for the maximum lateness achieved.
Now, having this as a fact for any schedule, assume our algorithm produces a schedule as follows. At each moment the machine is idle, start processing next the available job with the earliest due date.
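A sketch of this greedy rule in Python (my own illustration, assuming jobs are given as `(release, processing, due)` triples):

```python
import heapq

def edd_schedule(jobs):
    """Whenever the machine falls idle, run the released job with the
    earliest due date. jobs: list of (release, processing, due) triples.
    Returns the maximum lateness of the produced schedule."""
    order = sorted(range(len(jobs)), key=lambda j: jobs[j][0])
    available = []          # min-heap of (due date, job index)
    t = 0                   # current time
    i = 0                   # next job in release order
    max_lateness = float('-inf')
    n = len(jobs)
    while i < n or available:
        if not available:   # machine idle: jump to the next release
            t = max(t, jobs[order[i]][0])
        while i < n and jobs[order[i]][0] <= t:
            heapq.heappush(available, (jobs[order[i]][2], order[i]))
            i += 1
        due, j = heapq.heappop(available)
        t += jobs[j][1]     # process job j non-preemptively
        max_lateness = max(max_lateness, t - due)
    return max_lateness
```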
The analysis I have in my book (the design and analysis of algorithms by Williamson and Shmoys) is as follows:
We consider the schedule produced by our algorithm and let $j$ be the job of maximum lateness in the schedule. That is, $L_{max} = c_{j} - d_{j}$. We focus on the time $c_{j}$ in this schedule and find the earliest point in time $t \leq c_{j}$ such that the machine was processing without any idle time for the entire period $[t, c_{j})$. Let $S$ be the set of jobs that are processed in that interval. By our choice of $t$, we have $r(S)=t$. From the bound on the maximum lateness we found before, and since $d(S) < 0$, we have that $L^{*}_{max} \geq r(S) + p(S) - d(S) \geq r(S) + p(S) = t+p(S) = c_{j}$
Focusing only on the last job $j$, we also get $L^{*}_{max} \geq r_{j} + p_{j} - d_{j} \geq -d_{j}$.
Combining both inequalities we get $c_{j} - d_{j} \leq 2L^{*}_{max}$.
What I don't understand is: where do we actually need the earliest due date rule? Where do we use it in the analysis? What would happen if we used the latest due date rule? Could we then apply the same analysis to come up with the same approximation ratio?
Answer: This is an error in the book. The earliest deadline condition is not used anywhere in the analysis. This means that we can achieve the 2-approximation guarantee even if we drop this condition. | {
"domain": "cs.stackexchange",
"id": 5748,
"tags": "algorithms, approximation, greedy-algorithms"
} |
ROS orocos tutorial | Question:
Hi all,
I am currently reviewing Orocos and would like to use it as my control framework. I would like to use ROS as my transport layer so I have found rtt_ros_integration to be to the point.
I have read all the Orocos docs I could find, and would like to practise on the examples, so I have now installed rtt-ros-integration in indigo (under Ubuntu 14.04). But it seems that the Orocos component manual is no longer up to date, as it states to use the command line rosrun ocl orocreate-pkg HelloWorld in order to create an Orocos component package. But this command outputs an error saying orocreate-pkg does not exist in package ocl, which is true (I have checked in said package). The same happens when trying to start the deployer with rosrun ocl deployer-gnulinux.
I guess the instructions of the orocos component manual correspond to an old version of the ROS / orocos integration. Hence my question: anyone knows more than I do on the new ways to start orocreate-pkg, the deployer, or the task browser with rtt-ros-integration?
I have run the rtt_ros_examples (the ops version, not the LUA version which looks bugged), but I could not figure out the answer so far...
Thanks,
Antoine.
Originally posted by arennuit on ROS Answers with karma: 955 on 2015-08-17
Post score: 2
Answer:
ok, I think I have some elements for the answer.
Basically:
Do not follow the steps given in the orocos doc, they are deprecated and should be updated (as for the integration into ROS - the other parts of the doc are still relevant)
-> the last doc corresponds to version 2.6 of orocos and we are now at version 2.8
Follow the steps given in the rtt_ros_integration doc
In addition to this:
create a folder in your workspace (as if you were doing a normal ROS package) and put all your code in there (CMakeLists.txt, package.xml, HelloWorld.cpp, helloworld.ops...)
for the content of these files (CMakeLists.txt, package.xml, HelloWorld.cpp, helloworld.ops...), you can get inspiration from the examples in rtt_ros_integration_examples
all instructions in helloworld.ops should end with a ";", no idea why...
no longer use OCL_CREATE_COMPONENT in your .cpp, but ORO_CREATE_COMPONENT
-> this is the new way of doing things
compile your component the normal ROS way (catkin_make in your workspace)
dynamically load your component: rosrun rtt_ros deployer -s helloworld.ops -linfo
Cheers,
Antoine.
Originally posted by arennuit with karma: 955 on 2015-08-18
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by Balaji on 2016-05-09:
When i run this code, rosrun rtt_ros deployer -s helloworld.ops -linfo
I am getting this error, /home/coe/ws/underlay/src/rtt_ros_integration/rtt_ros/scripts/deployer: line 5: deployer: command not found
Comment by Balaji on 2016-05-09:
I cannot see the Helloworld.ops and helloworld.cpp file. I am getting error as failed to open ocl when i run catkin_make_isolated --install
and im getting error as cmake failed when i run catkin_make | {
"domain": "robotics.stackexchange",
"id": 22465,
"tags": "ros, orocos"
} |
Controlling Gazebo Cessna Plane with ROS | Question:
Hi.
I am new to Gazebo and ROS and i want to control the example Cessna plane in Gazebo using ROS.
I am starting the Gazebo using following command:
gazebo --verbose worlds/cessna_demo.world
It allows me control the plane with keyboard input.
But i want to control the plane with ROS (rospy and python).
In the world file there is a plugin:
<!-- Plugins for controlling the thrust and control surfaces -->
<plugin name="cessna_control" filename="libCessnaPlugin.so">
<propeller>cessna_c172::propeller_joint</propeller>
<propeller_max_rpm>2500</propeller_max_rpm>
<left_aileron>cessna_c172::left_aileron_joint</left_aileron>
<left_flap>cessna_c172::left_flap_joint</left_flap>
<right_aileron>cessna_c172::right_aileron_joint</right_aileron>
<right_flap>cessna_c172::right_flap_joint</right_flap>
<elevators>cessna_c172::elevators_joint</elevators>
<rudder>cessna_c172::rudder_joint</rudder>
<propeller_p_gain>10000</propeller_p_gain>
<propeller_i_gain>0</propeller_i_gain>
<propeller_d_gain>0</propeller_d_gain>
<surfaces_p_gain>2000</surfaces_p_gain>
<surfaces_i_gain>0</surfaces_i_gain>
<surfaces_d_gain>0</surfaces_d_gain>
</plugin>
How can i edit this so i can control the plane with ROS?
Thanks...
Originally posted by thegreek on ROS Answers with karma: 11 on 2019-03-16
Post score: 1
Original comments
Comment by zuygar on 2020-06-03:
Hi, if you have solved this issue could you please share your solution ? Thanks...
Answer:
Hi there, I was having the same challenge and I figured out a solution for this problem. I know the post is old but it may give some clues to someone having the same troubles.
So first, the C++ file that libCessnaPlugin.so comes from does not use any instance of ROS. It does mention that it publishes the pose and subscribes to the motor speed, but not through a ROS network, since this file doesn't call anything from there. So I found this repo (https://github.com/AurelienRoy/ardupilot_sitl_gazebo_plugin), where there is a plugin called aircraft_plugin, which in essence is the same as the cessna_plugin, but this one does make use of a ROS node and subscribes to a topic called motor_message. This topic uses the message CommandMotorSpeed that comes from the mav_com_mav_msgs package (I got it from here https://github.com/PX4/mav_comm).
Now, I tried to catkin_make the aircraft_plugin and found several errors due to deprecated functions of Gazebo, since I was using Gazebo 9 and this plugin is old, so I had to replace the old functions with their replacements based on this post (https://github.com/osrf/gazebo/blob/gazebo11/Migration.md), and then I was able to compile the package.
Now, in order to attach the plugin to the Cessna model I added these lines in the URDF:
<gazebo>
<plugin name="aircraft_plugin" filename="libaircraft_plugin.so">
<commandSubTopic>/command/motor_speed</commandSubTopic>
<left_aileron>left_aileron_joint</left_aileron>
<left_flap>left_flap_joint</left_flap>
<right_aileron>right_aileron_joint</right_aileron>
<right_flap>right_flap_joint</right_flap>
<elevators>elevators_joint</elevators>
<rudder>rudder_joint</rudder>
<propeller>propeller_joint</propeller>
<propeller_link>propeller</propeller_link>
<propeller_max_rpm>37500</propeller_max_rpm>
</plugin>
</gazebo>
Finally, it took me a while to figure out how to publish to the topic so I could move the aircraft control surfaces, and this format worked for me (for testing purposes):
rostopic pub -r 10 /command/motor_speed mav_msgs/CommandMotorSpeed '{motor_speed: [100, 100, 100, 100, 100]}'
I am still new to Gazebo/ROS development, so I don't know if there is an easier solution, but at least this worked for me.
Hope it helps.
Originally posted by cuenca2524 with karma: 16 on 2022-02-07
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 32659,
"tags": "ros-melodic"
} |
Platformer in C | Question:
GitHub repo.
main.c
#include <stdio.h>
#include "level.h"
static void flush_stdin(void);
int main()
{
int level;
puts("Enter a negative integer to exit.");
for (;;)
{
printf("Enter level: ");
if (scanf("%d", &level) != 1)
{
puts("Please try again.");
flush_stdin();
}
else if (level < 0)
{
puts("Goodbye!");
break;
}
else
{
run_level(level);
flush_stdin();
}
}
return 0;
}
static void flush_stdin(void)
{
while (getchar() != '\n');
}
level.h
#ifndef LEVEL_H
#define LEVEL_H
void run_level(int level);
#endif
level.c
#include "level.h"
#include <stdio.h>
#include "level_t.h"
#include "level_renderer.h"
#include "bass.h"
static float last_frame_time = 0;
static float elapsed_time = 0;
static char *game_over_string = NULL;
static bool game_over(void);
static bool hero_reached_destination(void);
static void resolve_collisions(void);
static bool hero_is_outside(void);
static void do_empty(void);
static void do_full(void);
static void do_south(void);
static void do_north(void);
static void do_west(void);
static void do_east(void);
static void do_south_west_corner(void);
static void do_south_east_corner(void);
static void do_north_west_corner(void);
static void do_north_east_corner(void);
static void do_south_west(void);
static void do_south_east(void);
static void do_north_west(void);
static void do_north_east(void);
static void do_north_west_diag(void);
static void do_north_east_diag(void);
static void bounce_south(void);
static void bounce_north(void);
static void bounce_west(void);
static void bounce_east(void);
static float get_south_penetration(void);
static float get_north_penetration(void);
static float get_west_penetration(void);
static float get_east_penetration(void);
static void react_to_input(void);
static void push_west(void);
static void push_east(void);
static void apply_velocity(void);
void run_level(int level)
{
if (!init_level_t(level)) return;
init_renderer();
init_bass(lvl.bass_frequency, lvl.bass_peak_volume, lvl.bass_duration);
reset_timer();
while (!game_over())
{
render_level();
react_to_input();
apply_velocity();
resolve_collisions();
}
exit_bass();
exit_renderer();
exit_level_t();
}
static bool game_over(void)
{
if (window_should_close()) game_over_string = "Window closed.";
if (hero_reached_destination()) game_over_string = "You won!";
if (game_over_string != NULL)
{
printf("%s Time: %fs.\n", game_over_string, get_time());
game_over_string = NULL;
elapsed_time = last_frame_time = 0;
return true;
}
return false;
}
static bool hero_reached_destination(void)
{
return lvl.hero_pos.x > lvl.dest_pos.x
&& lvl.hero_pos.y > lvl.dest_pos.y
&& lvl.hero_pos.x + 1 < lvl.dest_pos.x + lvl.dest_sz.x
&& lvl.hero_pos.y + 1 < lvl.dest_pos.y + lvl.dest_sz.y;
}
static void resolve_collisions(void)
{
if (hero_is_outside())
{
lvl.hero_vel.x = lvl.hero_vel.y = 0;
game_over_string = "You ventured outside the level bounds.";
return;
}
else
{
elapsed_time = get_time() - last_frame_time;
lvl.hero_vel.x += lvl.grav_vel.x * elapsed_time;
lvl.hero_vel.y += lvl.grav_vel.y * elapsed_time;
last_frame_time = get_time();
}
int south_west = (int) lvl.hero_pos.y * lvl.bmp.w + (int) lvl.hero_pos.x;
int south_east = south_west + 1;
int north_west = south_west + lvl.bmp.w;
int north_east = north_west + 1;
if ( lvl.bmp.i[south_west] || lvl.bmp.i[south_east]
|| lvl.bmp.i[north_west] || lvl.bmp.i[north_east]) drop_bass();
if (lvl.bmp.i[south_west] == 0)
if (lvl.bmp.i[south_east] == 0)
if (lvl.bmp.i[north_west] == 0)
if (lvl.bmp.i[north_east] == 0)
do_empty();
else
do_north_east();
else
if (lvl.bmp.i[north_east] == 0)
do_north_west();
else
do_north();
else
if (lvl.bmp.i[north_west] == 0)
if (lvl.bmp.i[north_east] == 0)
do_south_east();
else
do_east();
else
if (lvl.bmp.i[north_east] == 0)
do_north_west_diag();
else
do_north_east_corner();
else
if (lvl.bmp.i[south_east] == 0)
if (lvl.bmp.i[north_west] == 0)
if (lvl.bmp.i[north_east] == 0)
do_south_west();
else
do_north_east_diag();
else
if (lvl.bmp.i[north_east] == 0)
do_west();
else
do_north_west_corner();
else
if (lvl.bmp.i[north_west] == 0)
if (lvl.bmp.i[north_east] == 0)
do_south();
else
do_south_east_corner();
else
if (lvl.bmp.i[north_east] == 0)
do_south_west_corner();
else
do_full();
}
static bool hero_is_outside(void)
{
return lvl.hero_pos.x < 0 || lvl.hero_pos.y < 0 ||
lvl.hero_pos.x >= lvl.bmp.w - 1 || lvl.hero_pos.y >= lvl.bmp.h - 1;
}
static void do_empty(void) {}
static void do_full(void)
{
lvl.hero_vel.x = lvl.hero_vel.y = 0;
game_over_string = "You were crushed.";
}
static void do_south(void)
{
bounce_north();
}
static void do_north(void)
{
bounce_south();
}
static void do_west(void)
{
bounce_east();
}
static void do_east(void)
{
bounce_west();
}
static void do_south_west_corner(void)
{
do_south();
do_west();
}
static void do_south_east_corner(void)
{
do_south();
do_east();
}
static void do_north_west_corner(void)
{
do_north();
do_west();
}
static void do_north_east_corner(void)
{
do_north();
do_east();
}
static void do_south_west(void)
{
if (get_south_penetration() < get_west_penetration()) bounce_north();
else bounce_east();
}
static void do_south_east(void)
{
if (get_south_penetration() < get_east_penetration()) bounce_north();
else bounce_west();
}
static void do_north_west(void)
{
if (get_north_penetration() < get_west_penetration()) bounce_south();
else bounce_east();
}
static void do_north_east(void)
{
if (get_north_penetration() < get_east_penetration()) bounce_south();
else bounce_west();
}
static void do_north_west_diag(void)
{
if (get_south_penetration() <= 0.5) do_south_west_corner();
else do_north_east_corner();
}
static void do_north_east_diag(void)
{
if (get_south_penetration() <= 0.5) do_south_east_corner();
else do_north_west_corner();
}
static void bounce_south(void)
{
lvl.hero_pos.y = (int) lvl.hero_pos.y;
lvl.hero_vel.y = -lvl.bounce_vel.y;
}
static void bounce_north(void)
{
lvl.hero_pos.y = (int) lvl.hero_pos.y + 1;
lvl.hero_vel.y = lvl.bounce_vel.y;
}
static void bounce_west(void)
{
lvl.hero_pos.x = (int) lvl.hero_pos.x ;
lvl.hero_vel.x = -lvl.bounce_vel.x;
}
static void bounce_east(void)
{
lvl.hero_pos.x = (int) lvl.hero_pos.x + 1;
lvl.hero_vel.x = lvl.bounce_vel.x;
}
static float get_south_penetration(void)
{
return ((int) lvl.hero_pos.y + 1) - lvl.hero_pos.y;
}
static float get_north_penetration(void)
{
return lvl.hero_pos.y - (int) lvl.hero_pos.y;
}
static float get_west_penetration(void)
{
return ((int) lvl.hero_pos.x + 1) - lvl.hero_pos.x;
}
static float get_east_penetration(void)
{
return lvl.hero_pos.x - (int) lvl.hero_pos.x;
}
static void react_to_input(void)
{
if (key_pressed(lvl.key_west)) push_west();
if (key_pressed(lvl.key_east)) push_east();
if (key_pressed(lvl.key_exit)) game_over_string = "Aborted by user.";
}
static void push_west(void)
{
lvl.hero_vel.x += lvl.key_west_vel.x;
    lvl.hero_vel.y += lvl.key_west_vel.y;
}
static void push_east(void)
{
lvl.hero_vel.x += lvl.key_east_vel.x;
lvl.hero_vel.y += lvl.key_east_vel.y;
}
static void apply_velocity(void)
{
if (lvl.hero_vel.x > lvl.term_vel.x) lvl.hero_vel.x = lvl.term_vel.x;
if (lvl.hero_vel.x <-lvl.term_vel.x) lvl.hero_vel.x =-lvl.term_vel.x;
if (lvl.hero_vel.y > lvl.term_vel.y) lvl.hero_vel.y = lvl.term_vel.y;
if (lvl.hero_vel.y <-lvl.term_vel.y) lvl.hero_vel.y =-lvl.term_vel.y;
lvl.hero_pos.x += lvl.hero_vel.x * elapsed_time;
lvl.hero_pos.y += lvl.hero_vel.y * elapsed_time;
}
level_t.h
#ifndef LEVEL_T_H
#define LEVEL_T_H
#include <stdbool.h>
typedef struct colour_t
{
float r, g, b;
} colour_t;
typedef struct vector_t
{
float x, y;
} vector_t;
typedef struct intmap_t
{
int w, h, *i;
} intmap_t;
typedef struct level_t
{
int win_w, win_h, win_fscr, key_west, key_east, key_exit;
float bass_frequency, bass_peak_volume, bass_duration;
colour_t bg_clr, fg_clr, hero_clr, dest_clr;
vector_t hero_pos, hero_vel, dest_pos, dest_sz;
vector_t grav_vel, term_vel, bounce_vel, key_west_vel, key_east_vel;
intmap_t bmp; // Bit map
} level_t;
extern level_t lvl;
bool init_level_t(int level);
void exit_level_t(void);
#endif
level_t.c
#include "level_t.h"
#include <stdlib.h>
#include <stdio.h>
#define LEVEL_ADDR_HEAD "levels/level"
#define LEVEL_ADDR_TAIL ".txt"
level_t lvl;
static char *mk_addr(int level)
{
static char buffer[100];
sprintf(buffer, LEVEL_ADDR_HEAD "%d" LEVEL_ADDR_TAIL, level);
return buffer;
}
bool init_level_t(int level)
{
FILE *f = fopen(mk_addr(level), "r");
if (f == NULL)
{
puts("Level does not exist.");
return false;
}
fscanf(f, "%d%d%d%d%d%d%f%f%f%f%f%f%f%f%f%f%f%f%f%f%f"
"%f%f%f%f%f%f%f%f%f%f%f%f%f%f%f%f%f%f%d%d",
&lvl.win_w, &lvl.win_h, &lvl.win_fscr,
&lvl.key_west, &lvl.key_east, &lvl.key_exit,
&lvl.bass_frequency, &lvl.bass_peak_volume, &lvl.bass_duration,
&lvl.bg_clr.r, &lvl.bg_clr.g, &lvl.bg_clr.b,
&lvl.fg_clr.r, &lvl.fg_clr.g, &lvl.fg_clr.b,
&lvl.hero_clr.r, &lvl.hero_clr.g, &lvl.hero_clr.b,
&lvl.dest_clr.r, &lvl.dest_clr.g, &lvl.dest_clr.b,
&lvl.hero_pos.x, &lvl.hero_pos.y, &lvl.hero_vel.x, &lvl.hero_vel.y,
&lvl.dest_pos.x, &lvl.dest_pos.y, &lvl.dest_sz.x, &lvl.dest_sz.y,
&lvl.grav_vel.x, &lvl.grav_vel.y, &lvl.term_vel.x, &lvl.term_vel.y,
&lvl.bounce_vel.x, &lvl.bounce_vel.y,
&lvl.key_west_vel.x, &lvl.key_west_vel.y,
&lvl.key_east_vel.x, &lvl.key_east_vel.y,
&lvl.bmp.w, &lvl.bmp.h);
lvl.bmp.i = malloc(lvl.bmp.w * lvl.bmp.h * sizeof(*lvl.bmp.i));
for (int y = lvl.bmp.h - 1; y >= 0; --y)
{
for (int x = 0; x < lvl.bmp.w; ++x)
{
fscanf(f, "%d", &lvl.bmp.i[y * lvl.bmp.w + x]);
}
}
fclose(f);
return true;
}
void exit_level_t(void)
{
free(lvl.bmp.i);
}
level_renderer.h
#ifndef LEVEL_RENDERER_H
#define LEVEL_RENDERER_H
#include <stdbool.h>
void init_renderer(void);
void exit_renderer(void);
void render_level(void);
bool window_should_close(void);
void reset_timer(void);
float get_time(void);
bool key_pressed(int key);
#endif
level_renderer.c
#include "level_renderer.h"
#include "level_t.h"
#include "gutil.h"
#include <string.h>
#include <stdlib.h>
#define WIN_TITLE "Bounce"
#define INDEX_T GLubyte
#define INDEX_MACRO GL_UNSIGNED_BYTE
const char *VERT =
"#version 330 core\n"
"layout (location = 0) in vec2 pos;\n"
"void main(void) { gl_Position = vec4(pos, 0, 1); }\n";
const char *FRAG =
"#version 330 core\n"
"out vec4 colour;\n"
"uniform float r, g, b;\n"
"void main(void) { colour = vec4(r, g, b, 1); }\n";
typedef enum PRG_VARS
{
VAR_R,
VAR_G,
VAR_B,
N_VARS
} PRG_VARS;
typedef struct object_t
{
int n_indices, n_points;
GLuint indices, points;
} object_t;
typedef struct renderer_t
{
GLFWwindow *win;
GLuint vao, prg;
GLint locs[N_VARS];
object_t fg, hero, dest;
} renderer_t;
typedef struct rect_t
{
int x, y, w, h;
} rect_t;
typedef struct point_t
{
int x, y;
} point_t;
static renderer_t ren;
static void init_fg(void);
static rect_t expand_rect(int x, int y);
static bool is_row(rect_t rect);
static void delete_rect(rect_t rect, intmap_t dmp);
static void attach_rect(rect_t rect, intmap_t imp, INDEX_T**is, vector_t**pts);
static void attach_point(point_t p, intmap_t imp, vector_t **points);
static void attach_index(point_t p, intmap_t imp, INDEX_T **indices);
static void mk_vbos(object_t *obj, INDEX_T *is, vector_t *pts, GLenum usage);
static void init_hero(void);
static void init_dest(void);
static void exit_obj(object_t *obj);
static void draw_obj(object_t obj, GLenum mode, colour_t rgb);
static void adjust_hero_position(void);
void init_renderer(void)
{
const char *SRCS[] = {VERT, FRAG};
const GLenum TYPES[] = {GL_VERTEX_SHADER, GL_FRAGMENT_SHADER};
const char *NAMES[] = {"r", "g", "b"};
init_glfw();
ren.win = mk_win(lvl.win_w, lvl.win_h, WIN_TITLE, false, lvl.win_fscr);
ren.vao = mk_vao();
glEnableVertexAttribArray(0);
ren.prg = mk_prg(2, SRCS, TYPES, N_VARS, NAMES, ren.locs);
glClearColor(lvl.bg_clr.r, lvl.bg_clr.g, lvl.bg_clr.b, 1.0f);
init_fg();
init_hero();
init_dest();
}
static void init_fg(void)
{
size_t sz = lvl.bmp.w * lvl.bmp.h * sizeof(*lvl.bmp.i); // for deletion map
int imp_w = lvl.bmp.w + 1, imp_h = lvl.bmp.h + 1; // for indices map
intmap_t dmp = {lvl.bmp.w, lvl.bmp.h, memcpy(malloc(sz), lvl.bmp.i, sz)};
intmap_t imp = {imp_w, imp_h, calloc(imp_w * imp_h, sizeof(*imp.i))};
INDEX_T *indices = NULL;
vector_t *points = NULL;
for (int y = 0; y < lvl.bmp.h; ++y)
for (int x = 0; x < lvl.bmp.w; ++x)
if (dmp.i[y * lvl.bmp.w + x] == 1)
{
rect_t rect = expand_rect(x, y);
delete_rect(rect, dmp);
attach_rect(rect, imp, &indices, &points);
}
mk_vbos(&ren.fg, indices, points, GL_STATIC_DRAW);
free(indices);
free(points);
free(dmp.i);
free(imp.i);
}
static rect_t expand_rect(int x, int y)
{
rect_t rect = {x, y, 1, 1};
while (x + rect.w < lvl.bmp.w&&lvl.bmp.i[y*lvl.bmp.w+x+rect.w]==1)++rect.w;
while (y + rect.h < lvl.bmp.h && is_row(rect)) ++rect.h;
return rect;
}
static bool is_row(rect_t rect)
{
int base = (rect.y + rect.h) * lvl.bmp.w + rect.x;
for (int i = base; i < base + rect.w; ++i)if(lvl.bmp.i[i]!=1) return false;
return true;
}
static void delete_rect(rect_t rect, intmap_t dmp)
{
for (int y = rect.y; y < rect.y + rect.h; ++y)
for (int x = rect.x; x < rect.x + rect.w; ++x)
dmp.i[y * dmp.w + x] = 0;
}
static void attach_rect(rect_t rect, intmap_t imp, INDEX_T**is, vector_t**pts)
{
point_t p1 = {rect.x, rect.y};
point_t p2 = {rect.x + rect.w, rect.y};
point_t p3 = {rect.x + rect.w, rect.y + rect.h};
point_t p4 = {rect.x, rect.y + rect.h};
point_t points[] = {p1, p2, p3, p4};
point_t indices[] = {p1, p2, p3, p3, p4, p1};
for (int i = 0; i < 4; ++i) attach_point(points[i], imp, pts);
for (int i = 0; i < 6; ++i) attach_index(indices[i],imp, is);
}
static void attach_point(point_t p, intmap_t imp, vector_t **points)
{
if (imp.i[p.y * imp.w + p.x] == 0)
{
*points = realloc(*points, ++ren.fg.n_points * sizeof(**points));
(*points)[ren.fg.n_points - 1] = (vector_t) {p.x, p.y};
imp.i[p.y * imp.w + p.x] = ren.fg.n_points;
}
}
static void attach_index(point_t p, intmap_t imp, INDEX_T **indices)
{
*indices = realloc(*indices, ++ren.fg.n_indices * sizeof(**indices));
(*indices)[ren.fg.n_indices - 1] = imp.i[p.y * imp.w + p.x] - 1;
}
static void init_hero(void)
{
INDEX_T indices[] = {0, 1, 2, 2, 3, 0};
vector_t points[]={{0,0}, {1,0}, {1,1}, {0,1}};
ren.hero.n_indices = 6;
ren.hero.n_points = 4;
mk_vbos(&ren.hero, indices, points, GL_DYNAMIC_DRAW);
}
static void init_dest(void)
{
INDEX_T indices[] = {0, 1, 1, 2, 2, 3, 3, 0};
vector_t points[] = {
{lvl.dest_pos.x, lvl.dest_pos.y},
{lvl.dest_pos.x + lvl.dest_sz.x, lvl.dest_pos.y},
{lvl.dest_pos.x + lvl.dest_sz.x, lvl.dest_pos.y + lvl.dest_sz.y},
{lvl.dest_pos.x, lvl.dest_pos.y + lvl.dest_sz.y}};
ren.dest.n_indices = 8;
ren.dest.n_points = 4;
mk_vbos(&ren.dest, indices, points, GL_STATIC_DRAW);
}
static void mk_vbos(object_t *obj, INDEX_T *is, vector_t *pts, GLenum usage)
{
for (int i = 0; i < obj->n_points; ++i)
{
pts[i].x = (pts[i].x / lvl.bmp.w) * 2 - 1;
pts[i].y = (pts[i].y / lvl.bmp.h) * 2 - 1;
}
size_t sz_i = obj->n_indices * sizeof(*is);
size_t sz_p = obj->n_points * sizeof(*pts);
obj->indices = mk_vbo(GL_ELEMENT_ARRAY_BUFFER,sz_i,is,GL_STATIC_DRAW);
obj->points = mk_vbo(GL_ARRAY_BUFFER, sz_p, pts, usage);
}
void exit_renderer(void)
{
exit_obj(&ren.dest);
exit_obj(&ren.hero);
exit_obj(&ren.fg);
ren.prg = rm_prg(ren.prg);
glDisableVertexAttribArray(0);
ren.vao = rm_vao(ren.vao);
ren.win = rm_win(ren.win);
exit_glfw();
}
static void exit_obj(object_t *obj)
{
obj->n_indices = obj->n_points = 0;
obj->indices = rm_vbo(obj->indices);
obj->points = rm_vbo(obj->points);
}
void render_level(void)
{
glClear(GL_COLOR_BUFFER_BIT);
draw_obj(ren.fg, GL_TRIANGLES, lvl.fg_clr);
draw_obj(ren.dest, GL_LINES, lvl.dest_clr);
adjust_hero_position();
draw_obj(ren.hero,GL_TRIANGLES, lvl.hero_clr);
glfwSwapBuffers(ren.win);
}
static void draw_obj(object_t obj, GLenum mode, colour_t rgb)
{
glUniform1f(ren.locs[0], rgb.r);
glUniform1f(ren.locs[1], rgb.g);
glUniform1f(ren.locs[2], rgb.b);
glBindBuffer(GL_ARRAY_BUFFER, obj.points);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, obj.indices);
glDrawElements(mode, obj.n_indices, INDEX_MACRO, 0);
}
static void adjust_hero_position(void)
{
vector_t points[] = {{lvl.hero_pos.x, lvl.hero_pos.y},
{lvl.hero_pos.x + 1, lvl.hero_pos.y},
{lvl.hero_pos.x + 1, lvl.hero_pos.y + 1},
{lvl.hero_pos.x, lvl.hero_pos.y + 1}};
for (int i = 0; i < 4; ++i)
{
points[i].x = (points[i].x / lvl.bmp.w) * 2 - 1;
points[i].y = (points[i].y / lvl.bmp.h) * 2 - 1;
}
glBindBuffer(GL_ARRAY_BUFFER, ren.hero.points);
glBufferData(GL_ARRAY_BUFFER, sizeof(points), points, GL_DYNAMIC_DRAW);
}
bool window_should_close(void)
{
glfwPollEvents();
return glfwWindowShouldClose(ren.win);
}
void reset_timer(void)
{
glfwSetTime(0);
}
float get_time(void)
{
return glfwGetTime();
}
bool key_pressed(int key)
{
return glfwGetKey(ren.win, key);
}
gutil.h
#ifndef GUTIL_H
#define GUTIL_H
#include <GL/glew.h>
#include <GLFW/glfw3.h>
#include <stdbool.h>
#define GUTIL_CONTEXT_VERSION_MAJOR 3
#define GUTIL_CONTEXT_VERSION_MINOR 3
#define GUTIL_OPENGL_FORWARD_COMPAT GL_TRUE
#define GUTIL_OPENGL_PROFILE GLFW_OPENGL_CORE_PROFILE
#define GUTIL_GLEW_EXPERIMENTAL GL_TRUE
void init_glfw(void);
void exit_glfw(void);
GLFWwindow *mk_win(int w, int h, const char *title, bool rsz, bool fscr);
GLFWwindow *rm_win(GLFWwindow *win);
GLuint mk_vao(void);
GLuint rm_vao(GLuint vao);
GLuint mk_prg(int n_shds, const char **srcs, const GLenum *types,
int n_vars, const char **names, GLint *locs);
GLuint rm_prg(GLuint prg);
GLuint mk_vbo(GLenum type, size_t sz, void *data, GLenum usage);
GLuint rm_vbo(GLuint vbo);
#endif // GUTIL_H
gutil.c
#include "gutil.h"
#include <stdlib.h>
static GLuint mk_shd(const char *src, GLenum type);
static GLuint rm_shd(GLuint shd);
void init_glfw(void)
{
glfwInit();
}
void exit_glfw(void)
{
glfwTerminate();
}
GLFWwindow *mk_win(int w, int h, const char *title, bool rsz, bool fscr)
{
glfwWindowHint(GLFW_RESIZABLE, rsz);
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, GUTIL_CONTEXT_VERSION_MAJOR);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, GUTIL_CONTEXT_VERSION_MINOR);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GUTIL_OPENGL_FORWARD_COMPAT);
glfwWindowHint(GLFW_OPENGL_PROFILE, GUTIL_OPENGL_PROFILE);
glewExperimental = GUTIL_GLEW_EXPERIMENTAL;
GLFWmonitor *mon = fscr ? glfwGetPrimaryMonitor() : NULL;
GLFWwindow *win = glfwCreateWindow(w, h, title, mon, NULL);
glfwMakeContextCurrent(win);
glewInit();
return win;
}
GLFWwindow *rm_win(GLFWwindow *win)
{
glfwDestroyWindow(win);
return NULL;
}
GLuint mk_vao(void)
{
GLuint vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
return vao;
}
GLuint rm_vao(GLuint vao)
{
glDeleteVertexArrays(1, &vao);
return 0;
}
GLuint mk_prg(int n_shds, const char **srcs, const GLenum *types,
int n_vars, const char **names, GLint *locs)
{
GLuint prg = glCreateProgram();
GLuint *shds = malloc(n_shds * sizeof(*shds));
for (int i = 0; i < n_shds; ++i) shds[i] = mk_shd(srcs[i], types[i]);
for (int i = 0; i < n_shds; ++i) glAttachShader(prg, shds[i]);
glLinkProgram(prg);
for (int i = 0; i < n_shds; ++i) glDetachShader(prg, shds[i]);
glValidateProgram(prg);
glUseProgram(prg);
for (int i = 0; i < n_vars; ++i)locs[i]=glGetUniformLocation(prg,names[i]);
for (int i = 0; i < n_shds; ++i) shds[i] = rm_shd(shds[i]);
free(shds);
return prg;
}
GLuint rm_prg(GLuint prg)
{
glDeleteProgram(prg);
return 0;
}
static GLuint mk_shd(const char *src, GLenum type)
{
GLuint shd = glCreateShader(type);
glShaderSource(shd, 1, &src, NULL);
glCompileShader(shd);
return shd;
}
static GLuint rm_shd(GLuint shd)
{
glDeleteShader(shd);
return 0;
}
GLuint mk_vbo(GLenum type, size_t sz, void *data, GLenum usage)
{
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(type, vbo);
glBufferData(type, sz, data, usage);
return vbo;
}
GLuint rm_vbo(GLuint vbo)
{
glDeleteBuffers(1, &vbo);
return 0;
}
bass.h
#ifndef BASS_H
#define BASS_H
#define BASS_SAMPLE_RATE 44100
#define BASS_BUFFER_SIZE 256
void init_bass(float frequency, float volume, float duration);
void exit_bass(void);
void drop_bass(void);
#endif // BASS_H
bass.c
#include "bass.h"
#include <portaudio/portaudio.h>
#include <stdbool.h>
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
PaStream *stream = NULL;
float *table = NULL;
bool playing = false;
int n_buffers = 0;
int buffer_index = 0;
static void init_pa(void);
static void init_table(float frequency, float volume, float duration);
static int cb(const void *in, void *out, unsigned long fpb,
const PaStreamCallbackTimeInfo *time_info,
PaStreamCallbackFlags status, void *p);
void init_bass(float frequency, float volume, float duration)
{
init_pa();
init_table(frequency, volume, duration);
Pa_OpenDefaultStream(&stream, 0, 2, paFloat32,
BASS_SAMPLE_RATE, BASS_BUFFER_SIZE, cb, NULL);
Pa_StartStream(stream);
}
static void init_pa(void)
{
    FILE *orig_stderr; // ALSA prints garbage to stderr. I don't know how to silence it portably.
orig_stderr = stderr;
stderr = tmpfile();
Pa_Initialize();
fflush(stderr);
fclose(stderr);
stderr = orig_stderr;
}
static void init_table(float frequency, float volume, float duration)
{
int actual_size = duration * BASS_SAMPLE_RATE;
n_buffers = actual_size / BASS_BUFFER_SIZE + 1;
int n_samples = n_buffers * BASS_BUFFER_SIZE;
table = malloc(n_samples * sizeof(*table));
float radians = (M_PI * 2 * frequency * duration) / actual_size;
for (int i = 0; i < actual_size; ++i)
{
float one_to_zero = 1 - (float) i / actual_size;
table[i] = sin(i * radians) * volume * one_to_zero;
}
for (int i = actual_size; i < n_samples; ++i) table[i] = 0;
}
void exit_bass(void)
{
Pa_StopStream(stream);
Pa_CloseStream(stream);
Pa_Terminate();
free(table);
table = NULL;
playing = false;
}
void drop_bass(void)
{
playing = true;
buffer_index = 0;
}
static int cb(const void *in, void *out, unsigned long fpb,
const PaStreamCallbackTimeInfo *time_info,
PaStreamCallbackFlags status, void *p)
{
(void) in;
(void) fpb;
(void) time_info;
(void) status;
(void) p;
float *output = out;
if (playing)
{
int base_index = buffer_index * BASS_BUFFER_SIZE;
for (int i = 0; i < BASS_BUFFER_SIZE; ++i)
{
output[i * 2 + 0] = table[base_index + i];
output[i * 2 + 1] = table[base_index + i];
}
++buffer_index;
if (buffer_index >= n_buffers) playing = false;
}
else for (int i = 0; i < BASS_BUFFER_SIZE * 2; ++i) output[i] = 0;
return paContinue;
}
Makefile (first attempt; I usually build it with the Code::Blocks IDE)
install:
gcc -c *.c
gcc *.o -lm -lGL -lGLEW -lglfw -lportaudio -o bounce
rm -f *.o
uninstall:
rm -f bounce
Answer: I'd like to make some suggestions about the Makefile.
Make knows about .c and .o files
It even has an implicit rule to compile any .c files for which the corresponding .o file is needed. So a common thing to do is make a variable of all the objects that need to be in the final build line. You can list them individually or use make's special powers (I used the GNU manual, so some of these may be GNU-specific.)
SOURCES= $(notdir $(wildcard ./*.c))
OBJECTS= $(patsubst %.c,%.o,$(SOURCES))
Then you can use the variable in a dependencies line or a command. Probably both.
all:bounce
bounce:$(OBJECTS)
$(CC) $(CFLAGS) -o $@ $^ $(LDLIBS)
The special variable $@ expands to the target of the production, here bounce. And the special variable $^ expands to the dependencies from the production, which usually are on the line directly above, hence the caret as an up-arrow.
make is for making, make install is for installing
Notice I used all as the master (first) target which triggers all others. I believe this is a common thing to do. This way, make all == make.
Installing is usually considered a distinct thing from compiling. When you install a program, you make a home directory for it in some appropriate place in the file-system; and copy the program to its installed location; and copy any other related files to their appropriate places (libraries, .pc files (which declare what other external libraries are needed), manpages, fonts?, other data files).
So I would suggest you not use the word install at all as a target unless it's doing something permanent to make the program accessible. And adding an uninstall target is a very worthwhile companion. (Which you did, but again not implementing the expected behavior of removing the program from some installed location.)
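Putting these suggestions together, the whole Makefile might look something like the following sketch (untested; the CFLAGS, PREFIX default, and the use of the GNU install command are my own illustrative choices, and recipe lines must begin with a literal tab):

```makefile
CC      = gcc
CFLAGS  = -Wall -Wextra -O2
LDLIBS  = -lm -lGL -lGLEW -lglfw -lportaudio
PREFIX ?= /usr/local

SOURCES = $(notdir $(wildcard ./*.c))
OBJECTS = $(patsubst %.c,%.o,$(SOURCES))

all: bounce

# Implicit rules compile each .c into its .o; only the link step is stated.
bounce: $(OBJECTS)
	$(CC) $(CFLAGS) -o $@ $^ $(LDLIBS)

# Installing copies the built program to a permanent location...
install: bounce
	install -d $(DESTDIR)$(PREFIX)/bin
	install bounce $(DESTDIR)$(PREFIX)/bin/

# ...and uninstalling removes it from that same location.
uninstall:
	rm -f $(DESTDIR)$(PREFIX)/bin/bounce

clean:
	rm -f $(OBJECTS) bounce

.PHONY: all install uninstall clean
```

Note that `wildcard` and `patsubst` are GNU Make features, as mentioned above.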
The rest of the program is very impressive! | {
"domain": "codereview.stackexchange",
"id": 20220,
"tags": "c, game, opengl"
} |
Energy transfer through damping | Question: When a damper is attached to a bridge, it's tuned to the bridge's natural frequency. I don't get how that allows for maximum transfer of energy. Also, how do heavily damped systems prevent energy from going back into the system? I thought energy is just lost with damping.
Answer: I'm not a structural engineer, but as I understand it, catastrophic collapses of bridges were often due to loading that caused the bridge to oscillate at its natural frequency. Dampers tuned to the natural frequency allow the oscillation energy to be safely dissipated as heat. The dissipated heat will not naturally flow back to the structure causing it to again oscillate (in violation of the second law of thermodynamics).
ADDENDUM
This responds to your follow up questions.
but I don't get how energy is transferred to the damper in the first place?
In a similar way as a shock absorber in the suspension of a car works. Without the shock absorber, the car would bounce up and down on its suspension springs at the natural frequency of the springs. The shock absorber is basically a cylinder with hydraulic fluid and the fluid absorbs the oscillation energy.
plus, what does the natural frequency have to do with the amount of energy that's transferred?
Let’s say the bridge is subjected to very strong, but intermittent, cross winds. The bridge may begin to oscillate if the frequency of the winds matches the natural oscillation frequency of the bridge structure. That’s the only frequency that will cause the bridge to fail. So it only makes sense to design the damper so that it absorbs energy at that natural oscillating frequency. If the damper is too “stiff”, it will not give at all and simply be another rigid part of the bridge structure. If the damper is too “soft”, it will simply oscillate along with the bridge and not dissipate any energy.
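To put one equation behind this, the textbook model is the damped, driven harmonic oscillator (a deliberate simplification on my part; a bridge with a tuned mass damper is really a coupled two-mass system):

$$m\ddot{x} + c\dot{x} + kx = F_0\cos(\omega t), \qquad \omega_0 = \sqrt{\frac{k}{m}}.$$

The steady-state amplitude peaks when the driving frequency $\omega$ is near the natural frequency $\omega_0$, and at resonance it is approximately $F_0/(c\,\omega_0)$, so a lightly damped structure (small $c$) swings with dangerously large amplitude. The damper dissipates power $P = c\,\dot{x}^2$ as heat, and tuning it to $\omega_0$ means it absorbs energy exactly at the frequency where the structure would otherwise build up the largest oscillations.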
Hope this helps. | {
"domain": "physics.stackexchange",
"id": 56299,
"tags": "oscillators"
} |
How can i interface ros with FPGA kit? | Question:
I want to control my multi-link arm (it is based on an FPGA) using ROS. Is it possible? How?
Thanks in advance...
Originally posted by pavanpatel on ROS Answers with karma: 61 on 2013-08-13
Post score: 1
Answer:
How do you currently communicate with the FPGA? Do you have a separate computer running ROS that you want to have communicate with the FPGA? Or do you want to run ROS itself on the FPGA?
One idea would be to have a PC running ROS, that communicates with the FPGA. Then you write a ROS node on the PC (either in python or C++) that subscribes to some topic to receive commands for the robot, and then sends those commands to the FPGA using whatever protocol it has. And then gets feedback data back from the FPGA and publishes it on another topic.
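As a sketch of that idea, a minimal bridge node might look like this (hypothetical: I'm assuming the FPGA speaks a simple framed protocol over a serial port via pyserial; the topic name, the packet framing, and the `encode_command` helper are all made up for illustration):

```python
import struct

def encode_command(joint_angles):
    """Pack joint angles into a hypothetical framed packet for the
    FPGA: a 0xAA header byte, a count byte, then one little-endian
    32-bit float per joint (this framing is invented for illustration)."""
    payload = struct.pack("<%df" % len(joint_angles), *joint_angles)
    return bytes([0xAA, len(joint_angles)]) + payload

def main():
    # ROS imports are deferred so encode_command() can be exercised
    # without a ROS installation.
    import rospy
    import serial  # pyserial, assumed transport to the FPGA board
    from std_msgs.msg import Float64MultiArray

    rospy.init_node("fpga_arm_bridge")
    port = serial.Serial("/dev/ttyUSB0", 115200)  # hypothetical device

    def on_command(msg):
        port.write(encode_command(list(msg.data)))  # PC -> FPGA

    rospy.Subscriber("arm/command", Float64MultiArray, on_command)
    rospy.spin()  # feedback publishing would go in a read loop or timer

# Run main() on the PC that is physically connected to the FPGA.
```

The reverse direction works the same way: read feedback bytes from the port, decode them, and publish on another topic.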
Originally posted by davr with karma: 46 on 2017-10-04
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 15250,
"tags": "ros"
} |
ardrone goes through the mobile robot after changing the stl file | Question:
I have two robots, an AR.Drone and a mobile robot. The aim is for the AR.Drone to see the mobile robot as an obstacle in its laser data. All of the work is in the Gazebo simulator.
Since the AR.Drone is flying, I changed the STL file for the mobile robot: I made the robot very tall so it acts as a moving wall that the AR.Drone can sense. Now the problem is that the AR.Drone goes through the mobile robot and can't sense it.
How can I fix that? Also, can anyone suggest a package for dynamic obstacles? I want something like a moving wall.
Originally posted by RSA_kustar on ROS Answers with karma: 275 on 2014-09-24
Post score: 0
Answer:
I solved this problem.
When changing the shape of the robot to make it tall in the STL file, I also had to change both the origin of the base link and the origin of the collision tag, and make the collision geometry as tall as the new shape.
This is changed in the robot's URDF description, which is referenced from the launch file.
I didn't find any existing package that does this, so I had to create my own.
What I did is something like a moving wall.
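For illustration, the fix amounts to keeping the collision geometry and origin in step with the stretched visual mesh, something like this hypothetical URDF fragment (the mesh path and dimensions are made up):

```xml
<link name="base_link">
  <visual>
    <origin xyz="0 0 1.0" rpy="0 0 0"/>
    <geometry>
      <mesh filename="package://my_robot/meshes/tall_robot.stl"/>
    </geometry>
  </visual>
  <collision>
    <!-- Must match the stretched visual, or the simulated laser and
         physics will still see the original short robot. -->
    <origin xyz="0 0 1.0" rpy="0 0 0"/>
    <geometry>
      <box size="0.5 0.5 2.0"/>
    </geometry>
  </collision>
</link>
```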
Originally posted by RSA_kustar with karma: 275 on 2014-10-09
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 19505,
"tags": "ros, gazebo, mobile-base, stl, ardrone"
} |
Image Processing - find the centre of the estimated circle of eye | Question: I have recently been working on detecting the direction the eye is pointing, and I have done some basic operations on the image of the eye. Below is the image of the boundary of the eye.
Now the problem is: how do I detect the centre of the rough circle I have obtained? Please help me out by providing MATLAB code.
Answer: Unfortunately, I don't have access to the MATLAB Image Processing Toolbox. I've coded a solution in Python, showing the idea of how to solve it:
import numpy as np
import matplotlib.pyplot as plt
import skimage
import skimage.io
import scipy.ndimage as ndi
import skimage.morphology as m
import skimage.measure as measure
plt.figure(figsize=(8, 12))
I = skimage.io.imread("https://i.stack.imgur.com/eCVzl.jpg")
plt.imshow(I[:,:,2])
bw = I[50:250,100:300,1] < 100 # Binarize and cut to the region of interest (e.g. remove the white border)
plt.subplot(321)
plt.imshow(bw, interpolation='none')
plt.subplot(322)
plt.title('Distance transform of input image')
R, D = m.medial_axis(bw, return_distance=True) # Calculate the distance transform (Matlab: bwdist)
plt.imshow(D, interpolation='none')
plt.subplot(323)
plt.title('Convex hull of input image')
C = m.convex_hull_image(bw == 0) # Calculate the convex hull of the pixels (i.e. the area where the maxima can be)
plt.imshow(C)
masked = C * D # zero-out all distances that are not within the convex hull
plt.subplot(324)
plt.imshow(masked)
plt.title('Masked distances according to convex hull')
M = measure.moments(C.astype(np.uint8), order=2);
cy, cx = M[0, 1] / M[0, 0], M[1, 0] / M[0, 0] # Calculate the convex hull center of mass
x, y = np.meshgrid(np.arange(C.shape[0])-cx, np.arange(C.shape[1])-cy);
weights = np.exp(-0.003*(x*x+y*y)) # calculate a weighting mask
plt.subplot(325)
plt.imshow(weights)
plt.title('Distance weights')
masked_weighted = masked * weights
plt.subplot(326)
max_pos = np.unravel_index(masked_weighted.argmax(), masked.shape) # detect maximum pixel and its 2d-coordinates
plt.imshow(masked_weighted)
plt.plot(max_pos[1], max_pos[0], 'kx', mew=2)
plt.title('Weigthed distances with detected center')
plt.subplot(321)
plt.plot(max_pos[1], max_pos[0], 'kx', mew=2)
plt.title('input image with detected center')
plt.tight_layout()
The idea is based on an operation called the "Distance Transform" and the fact that the center of a circle is the point with the largest distance to its boundary. The program assumes that the center of the circle is roughly in the middle of the area spanned by the edges. Furthermore, it assumes that the maximum-distance pixel corresponds to the circle center (i.e. other edges are closer together). So, in rough steps:
calculate distance transform of Image
calculate convex hull of the edge pixels
calculate the center of the convex hull to detect the rough center
weight the obtained distances from 1) by the distance from the estimated center
use the strongest pixel as the center of the circle.
The code should be easy to translate into MATLAB, especially as I have plotted all intermediate images.
"domain": "dsp.stackexchange",
"id": 4705,
"tags": "image-processing, matlab"
} |
Equation for Hubble Value as a function of time | Question: I am trying to write the equation for the situation where the Hubble parameter $H$ would be changing over time. In other words, it would represent an accelerated expansion of the Universe. That is, $H$ can no longer be the simple $H=1/t$. In the new equation, I should be able to plug a future time and see what the Hubble Value will be in that future.
I think I got most of the concepts right. First of all, I understand that the key to the problem is $H=\dot{a}(t)/a(t)$, where $a(t)$ is the scale factor from the Friedmann equations. I also understand that if $H$ is changing then $\ddot{a}(t)>0$ and also $H'(a)>0$. But I'm still very uncomfortable when my pencil meets the paper. The Friedmann equations are not stated as a function of $t$, but as a function of the scale factor $a$, and frankly, I don't know how to work with the scale factor.
In any event, here is my poor attempt to do it. According to Wikipedia, one of the solutions of the Friedmann equations is (assuming flat space, $k=0$):
$a(t) = a_0\, t^{2/(3(w+1))}$
Therefore:
$a'(t) = \frac{\mathrm{d}}{\mathrm{d}t}\left(a_0\, t^{2/(3(w+1))}\right)$
$a'(t) = \frac{2a_0}{3(w+1)}\, t^{-(1+3w)/(3(w+1))}$
And I suppose that we could now substitute $H=\dot{a}(t)/a(t)$ with the above:
$H = \frac{2a_0}{3(w+1)}\, t^{-(1+3w)/(3(w+1))} \Big/ \left(a_0\, t^{2/(3(w+1))}\right)$
Simplified:
$H = \frac{2}{3(w+1)}\, t^{-1}$
And $w$ is typically known from observation.
I will appreciate it if someone can let me know if I am on the right path or totally derailed. I have a feeling that $a(t) = a_0\, t^{2/(3(w+1))}$ was not the right place to start, because if $w=-1$, then everything goes down the drain. But then again, in an accelerated expansion, $w$ would not equal $-1$. It would always be less than $-1$. Also, in the final equation, if $w<-1$ then $H<0$, which could not be right. So I'm not sure what to think.
Many thanks in advance,
Luis
Answer: The general solution works as follows:
We start with the Friedmann equation
$$
\dot{a}^2 - \frac{8\pi G}{3}\rho a^2 = -kc^2,
$$
with $k=0,\ 1,\ $or $-1$, and $\rho$ the total density. Since the right-hand side is constant, we can write
$$
\dot{a}^2 - \frac{8\pi G}{3}\rho a^2 = \dot{a}_0^2 - \frac{8\pi G}{3}\rho_0 a_0^2,
$$
where the subscript 0 denotes the present-day values. If we introduce the Hubble constant
$$
H_0 = \frac{\dot{a}_0}{a_0}
$$
and the present-day critical density
$$
\rho_{c,0} = \frac{3H_0^2}{8\pi G},
$$
we get
$$
\frac{\dot{a}^2}{a_0^2} - H_0^2\frac{\rho}{\rho_{c,0}} \frac{a^2}{a_0^2} = H_0^2 - H_0^2\frac{\rho_0}{\rho_{c,0}}
$$
or
$$
H^2 = \frac{\dot{a}^2}{a^2} = H_0^2\left[\frac{\rho}{\rho_{c,0}} + \frac{a_0^2}{a^2}\left(1 - \frac{\rho_0}{\rho_{c,0}}\right)\right].
$$
Now, there are three contributions to the total density: radiation, matter (normal and dark) and dark energy:
$$
\rho = \rho_R + \rho_M + \rho_{\Lambda}.
$$
These densities change over time as follows: the matter density decreases as the volume of the universe increases, so $\rho_M\sim a^{-3}$, as you'd expect. The radiation falls off as $\rho_R\sim a^{-4}$ (the extra factor is due to redshift). And in the Standard Model, the dark energy remains constant: $\rho_{\Lambda} = \text{const}$. In other words,
$$
\begin{align}
\rho_R a^4 &= \rho_{R,0}\, a_0^4,\\
\rho_M a^3 &= \rho_{M,0}\, a_0^3,\\
\rho_\Lambda &= \rho_{\Lambda,0},
\end{align}
$$
and finally, with the notations
$$
\Omega_{R,0} = \frac{\rho_{R,0}}{\rho_{c,0}},\quad
\Omega_{M,0} = \frac{\rho_{M,0}}{\rho_{c,0}},\quad
\Omega_{\Lambda,0} = \frac{\rho_{\Lambda,0}}{\rho_{c,0}},\\
\Omega_{K,0} = 1 - \Omega_{R,0} - \Omega_{M,0} - \Omega_{\Lambda,0},
$$
we find
$$
H(a) = H_0\sqrt{\Omega_{R,0}\,a^{-4} + \Omega_{M,0}\,a^{-3} + \Omega_{K,0}\,a^{-2} + \Omega_{\Lambda,0}},
$$
where we used the convention $a_0=1$. Also note that
$$
\dot{a} = H_0\sqrt{\Omega_{R,0}\,a^{-2} + \Omega_{M,0}\,a^{-1} + \Omega_{K,0} + \Omega_{\Lambda,0}\,a^2},\\
\ddot{a} = -\frac{1}{2}H_0^2\left(2\,\Omega_{R,0}\,a^{-3}+\Omega_{M,0}\,a^{-2}
-2\,\Omega_{\Lambda,0}\,a\right).
$$
The latest values of the parameters, obtained from the Planck mission, are
$$
H_0 = 67.3\;\text{km}\,\text{s}^{-1}\text{Mpc}^{-1},\\
\Omega_{R,0} = 9.24\times 10^{-5},\qquad\Omega_{M,0} = 0.315,\\
\Omega_{\Lambda,0} = 0.685,\qquad\Omega_{K,0} = 0.
$$
So now we have the Hubble parameter as a function of the scale radius $a$. How can we convert this into a function of time? From
$$
\dot{a} = \frac{\text{d}a}{\text{d}t}
$$
we get
$$
\text{d}t = \frac{\text{d}a}{\dot{a}} = \frac{\text{d}a}{aH(a)} = \frac{a\,\text{d}a}{a^2H(a)},
$$
so that
$$
\begin{align}
t(a) &= \int_0^a \frac{a'\,\text{d}a'}{a'^2H(a')}\\
&= \frac{1}{H_0}\int_0^a
\frac{a'\,\text{d}a'}{\sqrt{\Omega_{R,0} + \Omega_{M,0}\,a' + \Omega_{K,0}\,a'^2 + \Omega_{\Lambda,0}\,a'^4}}.
\end{align}
$$
Inverting this relation, we get $a(t)$. Unfortunately, this inversion has to be done numerically. And finally,
$$
H(t) = H(a(t)).
$$
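To make the recipe above concrete, here is a rough numerical sketch (plain NumPy, trapezoidal integration; the grid sizes and the unit conversion are my own choices) that computes $t(a)$, inverts it on a grid, and evaluates $H(t)=H(a(t))$ with the quoted Planck parameters:

```python
import numpy as np

# Present-day parameters quoted in the answer (Planck)
H0_km_s_Mpc = 67.3
Omega_R, Omega_M, Omega_L = 9.24e-5, 0.315, 0.685
Omega_K = 1.0 - Omega_R - Omega_M - Omega_L   # ~0 (flat universe)

# H0 converted to 1/Gyr (1 Mpc = 3.0857e19 km, 1 Gyr = 3.1557e16 s)
H0 = H0_km_s_Mpc * 3.1557e16 / 3.0857e19

def trapezoid(y, x):
    """Simple trapezoidal rule (avoids NumPy-version differences)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def t_of_a(a, n=20_000):
    """Cosmic time in Gyr at scale factor a (convention a0 = 1 today)."""
    ap = np.linspace(0.0, a, n)
    # integrand a' / sqrt(...) from the t(a) integral; it vanishes at a' = 0
    y = ap / np.sqrt(Omega_R + Omega_M*ap + Omega_K*ap**2 + Omega_L*ap**4)
    return trapezoid(y, ap) / H0

def H_of_a(a):
    """Hubble parameter in 1/Gyr as a function of the scale factor."""
    return H0 * np.sqrt(Omega_R*a**-4 + Omega_M*a**-3 + Omega_K*a**-2 + Omega_L)

# Invert t(a) numerically on a grid, giving a(t) and hence H(t) = H(a(t))
a_grid = np.linspace(1e-3, 2.0, 500)
t_grid = np.array([t_of_a(a) for a in a_grid])

def H_of_t(t):
    return H_of_a(np.interp(t, t_grid, a_grid))

age = t_of_a(1.0)   # roughly 13.8 Gyr with these parameters
```

With these numbers the integral gives an age of about 13.8 Gyr, and `H_of_t` lets you read off the Hubble parameter at any past or future time covered by the grid.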
P.S. The solution that you mentioned is the case where radiation and matter are negligible, and dark energy has a more general form (called quintessence):
$$
\rho_R=\rho_M=0,\quad \rho_\Lambda = \rho_{\Lambda,0}\,a^{-3(1+w)},
$$
where $w=-1$ corresponds with the normal case of a cosmological constant. In this case, for a universe with no curvature,
$$
H^2 = H_0^2\,a^{-3(1+w)},\qquad
t(a) = \frac{1}{H_0}\int_0^a a'^{(1+3w)/2}\,\text{d}a',
$$
with solution $a\sim t^{2/(3+3w)}$, for $w>-1$. Solutions with $w\leqslant-1$ have no big bang, i.e. the lower bound in the integral $t(a)$ cannot be zero.
In any case, these are not accurate descriptions of our universe, since they ignore the contributions of matter and radiation. | {
"domain": "physics.stackexchange",
"id": 11177,
"tags": "general-relativity, cosmology, universe, space-expansion, big-bang"
} |
Temperature conversion table | Question: This is another solution for this challenge.
Problem statement:
In this challenge, write a program that takes in three arguments, a
start temperature (in Celsius), an end temperature (in Celsius) and a
step size. Print out a table that goes from the start temperature to
the end temperature, in steps of the step size; you do not actually
need to print the final end temperature if the step size does not
exactly match. You should perform input validation: do not accept
start temperatures less than a lower limit (which your code should
specify as a constant) or higher than an upper limit (which your code
should also specify). You should not allow a step size greater than
the difference in temperatures.
I want to learn more about C++, so if there is some cool C++ feature I could or should have used, please comment (or include it in your answer).
#import <iostream>
#import <cmath>
#define COLUMN_SEPARATOR "\t| "
#define MAX_TEMP 500
#define MIN_TEMP -500
inline bool between(double x, double max, double min) {
return max >= x && min <= x;
}
void getInput(double &lower, double &upper, double &step) {
double temp1, temp2;
std::cout << "Please enter consecutively the upper and lower limits, both between " << MIN_TEMP << " and " << MAX_TEMP << "." << std::endl;
std::cin >> temp1;
std::cin >> temp2;
while (!between(temp1, MAX_TEMP, MIN_TEMP) || !between(temp2, MAX_TEMP, MIN_TEMP)) {
std::cout << "At least one of the temperatures is out of bounds. Please reenter:" << std::endl;
std::cin >> temp1;
std::cin >> temp2;
}
upper = std::max(temp1, temp2);
lower = std::min(temp1, temp2);
std::cout << "Please enter a positive stepsize, smaller than the difference between the limits." << std::endl;
std::cin >> step;
while (step < 0 || step > upper - lower) {
std::cout << "The stepsize is out of bounds. Please reenter:" << std::endl;
std::cin >> step;
}
}
double toFahrenheit(double celsius) {
return celsius*(9/5) + 32;
}
void printTable(double start, double end, double step) {
std::cout << "Celsius" << COLUMN_SEPARATOR << "Fahrenheit" << std::endl;
std::cout << "=======" << COLUMN_SEPARATOR << "==========" << std::endl;
for (double i = start; i < end; i += step) {
std::cout << i << COLUMN_SEPARATOR << toFahrenheit(i) << std::endl;
}
}
int main() {
double start, end, step;
getInput(start, end, step);
printTable(start, end, step);
return 0;
}
Sample run:
192:Challenges 11684$ ./a.out
Please enter consecutively the upper and lower limits, both between -500 and 500.
3.692
65.937
Please enter a positive stepsize, smaller than the difference between the upper and lower limit.
5.3729
Celsius | Fahrenheit
======= | ==========
3.692 | 35.692
9.0649 | 41.0649
14.4378 | 46.4378
19.8107 | 51.8107
25.1836 | 57.1836
30.5565 | 62.5565
35.9294 | 67.9294
41.3023 | 73.3023
46.6752 | 78.6752
52.0481 | 84.0481
57.421 | 89.421
62.7939 | 94.7939
Answer: You don't check for invalid input.
std::cin >> temp1;
std::cin >> temp2;
What if I type BLA BLA<enter> on the input?
As user input is line-based, most programmers decide to get a single value at a time.
std::cout << "Please enter consecutively the upper and lower limits, both between " << MIN_TEMP << " and " << MAX_TEMP << "." << std::endl;
Now, your technique is not wrong. But you definitely make it harder for yourself to validate the input, and the user interaction is not that great, as users are used to typing one value and pressing return (and getting feedback on that value).
I would change that getInput() so that each value is queried for separately (and use a function to get the value).
lower = getUserInput("Please Enter the lower limit of the table", [](int x){return x >= MIN_TEMP;});
upper = getUserInput("Please Enter the upper limit of the table", [](int x){return x <= MAX_TEMP;});
step = getUserInput("Please Enter the step size", [](int){return true;});
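A sketch of what such a helper might look like (the name `getUserInput` and its signature are my own assumption inferred from the calls above, not a fixed API; taking the streams as parameters keeps it testable):

```cpp
#include <functional>
#include <iostream>
#include <sstream>
#include <stdexcept>
#include <string>

// Hypothetical helper: prompt until a whole line parses as a double and
// satisfies the supplied predicate. Reading whole lines means junk input
// like "BLA BLA" is rejected cleanly instead of wedging the stream.
double getUserInput(std::istream& in, std::ostream& out,
                    const std::string& prompt,
                    const std::function<bool(double)>& valid)
{
    std::string line;
    for (;;) {
        out << prompt << ": ";
        if (!std::getline(in, line)) {
            throw std::runtime_error("input stream closed");
        }
        std::istringstream parser(line);
        double value;
        // line must start with a valid number and pass the predicate
        if (parser >> value && valid(value)) {
            return value;
        }
        out << "Invalid value, please try again.\n";
    }
}
```

In the program it would be called with the real streams, e.g. `getUserInput(std::cin, std::cout, "Please enter the lower limit", [](double x){ return x >= minTemp && x <= maxTemp; })`.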
Don't use macros for constants.
#define MAX_TEMP 500
#define MIN_TEMP -500
That's really old school C. Macros have no concept of scope or type. As such they can potentially clash with other people's macros. Prefer to use const values.
static const int maxTemp = 500;
static const int minTemp = -500;
Or if you are using C++11 and above.
static constexpr int maxTemp = 500;
static constexpr int minTemp = -500;
As a physics person, you may find that -500 is too low a value: you cannot cool things to that temperature, since 0 Kelvin (-273.15 °C) is the lowest temperature theoretically possible (there have been experiments that show a temperature a few fractions below this, but that has more to do with how we measure temperature, and people are still arguing about it). | {
"domain": "codereview.stackexchange",
"id": 8786,
"tags": "c++, programming-challenge, converting"
} |
Dynamically create Javascript Object | Question: Given a string like 'param1.param2.param3' and a value that should be set on the last parameter, is there a better way than the following to dynamically create an Object and reuse the function to create more parameters on the Object, some of which may share a parent parameter? This function also assumes every parameter on the parent Object is an Object with parameters. The purpose is to construct a JSON Object dynamically for a PUT/PATCH request.
function(o,prop,val) {
prop = prop.split('.');
prop.forEach(function(property,i){
if(i===0 && typeof(o[property]) === 'undefined'){
o[property] = {};
if(prop.length === 2){
o[prop[0]][prop[1]] = val;
}
}
else if(i===1 && typeof(o[prop[0]][property]) === 'undefined'){
o[prop[0]][property] = {};
if(prop.length === 3){
o[prop[0]][prop[1]][prop[2]] = val;
}
} else if(i===1 && typeof(o[prop[0]][property]) === 'object'){
if(prop.length === 3){
o[prop[0]][prop[1]][prop[2]] = val;
}
}
else if(i===2 && typeof(o[prop[0]][prop[1]][property]) === 'undefined'){
o[prop[0]][prop[1]][property] = {};
if(prop.length === 4){
o[prop[0]][prop[1]][prop[2]][prop[3]] = val;
}
} else if(i===2 && typeof(o[prop[0]][prop[1]][property]) === 'object'){
if(prop.length === 4){
o[prop[0]][prop[1]][prop[2]][prop[3]] = val;
}
}
});
return o;
};
Here is a Fiddle https://jsfiddle.net/cn25o1vf/
Answer: It seems odd to special-case the string "object" when preparing an object for serialization, so I came up with the following code:
var createObjectFromParam = function (o, prop, val) {
var parts = prop.split('.');
var last = o;
while (parts.length) {
var part = parts.shift();
if (parts.length > 0) {
last[part] = last[part] || {};
last = last[part];
} else {
last[part] = val; // Add conditional expressions here
}
}
return last;
}
The code does the following:
Creates a temporary variable (last) to contain the current "level" of the object hierarchy.
Loops through the array. Note that while (parts.length) will stop the loop when there are no longer parts in the array.
Take the next part of the prop. shift pulls the first element of the array out.
If we're not at the last element
If the element with the specified name exists at the level, use that
If not, create an object at that level
Otherwise, add a property with the specified value to the object.
Finally, return the created object.
Given the example in the jsFiddle (updated here), this would produce the following structure:
{
"ready": {
"set": "go"
},
"another": "object",
"happy": {
"happy": {
"joy": "joy",
"foo": "bar",
"happy": "happy"
}
}
}
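For reference, that structure can be reproduced with calls like the following (a standalone snippet repeating the function, with the property paths inferred from the printed output):

```javascript
var createObjectFromParam = function (o, prop, val) {
    var parts = prop.split('.');
    var last = o;
    while (parts.length) {
        var part = parts.shift();
        if (parts.length > 0) {
            // intermediate level: reuse existing object or create one
            last[part] = last[part] || {};
            last = last[part];
        } else {
            // final level: set the value
            last[part] = val;
        }
    }
    return last;
};

var o = {};
createObjectFromParam(o, 'ready.set', 'go');
createObjectFromParam(o, 'another', 'object');
createObjectFromParam(o, 'happy.happy.joy', 'joy');
createObjectFromParam(o, 'happy.happy.foo', 'bar');
createObjectFromParam(o, 'happy.happy.happy', 'happy');
console.log(JSON.stringify(o, null, 2)); // prints the structure shown above
```

Note that repeated calls sharing a parent path (the three 'happy.happy.*' calls) all land in the same nested object, which is the reuse the question asks about.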
To keep that original special-casing, just replace the commented line with something like:
last[part] = val === 'object' ? {} : val === 'undefined' ? undefined : val; | {
"domain": "codereview.stackexchange",
"id": 14867,
"tags": "javascript"
} |
Is the mass defect in Einstein's $E=mc^2$ the mass of the force-carrying particles within the nucleus? | Question: Basically, what the title says. Is the difference in mass between the sum of the masses of individual nucleons and the nucleus itself the mass of all the force carrying particles I.e. $W$ and $Z$ bosons?
Answer: You are mentioning mass defect, and as it says, it is a defect, meaning the mass of the nucleus is less than the mass of the separate constituents.
Nuclear binding energy is the minimum energy that would be required to disassemble the nucleus of an atom into its component parts.
The mass of an atomic nucleus is less than the sum of the individual masses of the free constituent protons and neutrons
This 'missing mass' is known as the mass defect, and represents the energy that was released when the nucleus was formed.
https://en.wikipedia.org/wiki/Nuclear_binding_energy
As you can see it is related to the nuclear binding energy, that is the residual strong force, that binds the protons and neutrons together to form a nucleus. The force carrying particles you mention are related to another force, the weak force.
The force between the protons and neutrons (residual strong force) is modeled in mathematics with virtual particles. But even if you would like to relate this binding energy to a force carrying particle, these would be the force carriers of the residual strong force.
The nuclear force (or nucleon–nucleon interaction or residual strong force) is a force that acts between the protons and neutrons of atoms.
The nuclear force occurs by the exchange of virtual light mesons, such as the virtual pions, as well as two types of virtual mesons with spin (vector mesons), the rho mesons and the omega mesons.
https://en.wikipedia.org/wiki/Nuclear_force | {
"domain": "physics.stackexchange",
"id": 69446,
"tags": "nuclear-physics, mass-energy, binding-energy, carrier-particles"
} |
Cannot get the data from ROS Services, only entering the server but the data does not come out. Why? | Question:
Hi
I need to read data (let's say pressure) from the serial port of a microcontroller on a client request. I checked the tutorial for ROS Services in Python, but my code is still not giving the data value to the client. Here, first, is the Service Server Python node
#!/usr/bin/env python3
from __future__ import print_function
import rospy
import numpy as np
from os import system
import time
import Microcontroller_Manager_Serial as Serial
import Pressure_Functions as Pressure
import Modem_Functions as Modem
import threading
import time
import serial
import serial.tools.list_ports
from time import sleep
from std_msgs.msg import Float32
from std_msgs.msg import String
from demo_teleop.srv import ImuValue
Communication_Mode_ = 0
def handle_ros_services():
global P0
data_received = Pressure.Pressure_Get_Final_Values(1,1)
print("Server Read Data:")
P0 = (np.int16((data_received[6]<<24) | (data_received[7]<<16) | (data_received[8]<<8) | (data_received[9])))/10000
P=P0
pressure = P/9.81
current_x_orientation_s=pressure
print("Returning ", current_x_orientation_s)
#return ImuValue(current_x_orientation_s)
def ros_serice_server():
#rospy.init_node('ros_serice_server')
s = rospy.Service('imu_value', ImuValue, handle_ros_services)
print("Ready to get_value")
rospy.spin()
if __name__ == '__main__':
rospy.init_node('server_node_f')
Serial.Serial_Port_Standard()
while not rospy.is_shutdown():
try:
print("entering service")
ros_serice_server()
except:
print("pass")
When I call the server I got this output
entering service
Ready to get_value
And here the client node
#!/usr/bin/env python3
from __future__ import print_function
import rospy
import sys
import numpy as np
from os import system
import time
import threading
import Microcontroller_Manager_Serial as Serial
import IMU_Functions as IMU
import Pressure_Functions as Pressure
import time
import serial
import serial.tools.list_ports
from time import sleep
from std_msgs.msg import Float32
from std_msgs.msg import String
from demo_teleop.srv import ImuValue
Communication_Mode_ = 0
def imu_client():
rospy.wait_for_service('handle_ros_services')
print("Request call send")
imu_value = rospy.ServiceProxy('imu_value', ImuValue)
#resp1 = imu_value
#return imu_value.current_x_orientation_s
if __name__ == "__main__":
rospy.init_node('client_node_f')
while not rospy.is_shutdown():
try:
print("entering client")
imu_client()
except:
print("pass")
When i call the client only got
entering client
So it means the server never enters the
handle_ros_services()
function, and the client never enters the
imu_client()
function. What is wrong with the code?
Here is my ImuValue.srv file
float64 current_x_orientation_c
---
float64 current_x_orientation_s
bool success
Originally posted by Astronaut on ROS Answers with karma: 330 on 2022-03-18
Post score: 0
Original comments
Comment by abhishek47 on 2022-03-20:
Follow-up to (or duplicate of) #q397753
Answer:
Client node
You wait for a service named 'handle_ros_services' which is not advertised in the provided code. As explained in the documentation, rospy.wait_for_service() is blocking, and without a specified timeout it's gonna block indefinitely.
The server advertises a service with the name "imu_value", which you're actually correctly using with rospy.ServiceProxy()
Server node
while not rospy.is_shutdown() is not needed, simply call ros_serice_server(). It helps to understand the difference between while not rospy.is_shutdown() and rospy.spin() so you know when to use which.
The service server must return a response.
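A minimal sketch of what that could look like (this assumes the auto-generated ImuValueResponse class from the ImuValue.srv shown in the question, uses a placeholder for the serial-reading logic, fixes the spelling of the server function name, and needs a ROS workspace to actually run):

```python
import rospy
from demo_teleop.srv import ImuValue, ImuValueResponse

def handle_ros_services(req):            # a rospy handler receives the request
    orientation = read_orientation()     # placeholder for the Pressure/serial code
    return ImuValueResponse(current_x_orientation_s=orientation, success=True)

def ros_service_server():
    rospy.init_node('server_node_f')
    rospy.Service('imu_value', ImuValue, handle_ros_services)
    rospy.spin()                         # no surrounding while-loop needed
```

On the client side, wait for the service by its advertised name and then call the proxy: `rospy.wait_for_service('imu_value')`, then `resp = rospy.ServiceProxy('imu_value', ImuValue)(0.0)` and read `resp.current_x_orientation_s`.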
Originally posted by abhishek47 with karma: 228 on 2022-03-19
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by Astronaut on 2022-03-21:
So when do the changes i can enter the client but in server node never enter the handle_ros_services() function so Im not able to get the data. Is that because server didn't return a response? How to make server return response?
Comment by abhishek47 on 2022-03-21:\
never enter the handle_ros_services() function
How did you verify this? Is everything alright with Pressure.Pressure_Get_Final_Values(1,1)?
so Im not able to get the data
Might have to do with the commented out return statement (btw ImuValue isn't the correct return type, it should be ImuValueResponse)
How to make server return response?
See Writing a Service Node, note the line return AddTwoIntsResponse(req.a + req.b)
Comment by Astronaut on 2022-03-21:
I verify with print("Server Read Data:") in the function. It never print it. Yes all good with Pressure.Pressure_Get_Final_Values(1,1) . I only have one srv file and that is the ImuValue. Understand? I don't have ImuValueResponse
Comment by Astronaut on 2022-03-21:
I edit in the question the ImuValue srv file. I dont know what ImuValueResponse srv file should contain. Any help?
Comment by abhishek47 on 2022-03-21:\
only have one srv file and that is the ImuValue. Understand? I don't have ImuValueResponse
ImuValueResponse is auto-generated :) See here
Comment by Astronaut on 2022-03-21:
ok when do that with from demo_teleop.srv import ImuValue,ImuResponse i got error ImportError: cannot import name 'ImuResponse' from
Comment by Astronaut on 2022-03-21:
Still not able to enter the handle_ros_services(). I generated ImuValueResponse but still no data returned and not entering handle_ros_services()
Comment by Astronaut on 2022-03-21:
The problem remain Im not able to enter the function handle_ros_services. Why? | {
"domain": "robotics.stackexchange",
"id": 37514,
"tags": "ros, python, ros-service"
} |
Momentum state of a particle | Question: Why is the momentum state of a particle in quantum mechanics given by the Fourier transform of its position state? For instance, in one dimension given by
$$\varphi(p)=\frac{1}{\sqrt{2\pi\hbar}}\int \mathrm dx \, e^{-i p x/\hbar} \psi(x).$$
Answer: Let's start from scratch. Take the position eigenvectors, $|x\rangle$. They are such that $X|x\rangle = x|x\rangle$. Now, take a general ket for a wavefunction, $|\psi\rangle$. If we want to know $\psi(x)$, that is, the wavefunction in the position representation, then we take the following scalar product: $\langle x|\psi\rangle = \psi(x)$. Indeed, this is true since the position representation of $|x\rangle$ is $\delta(x)$ (I can show this if need be). From this it also follows that $\int|x\rangle\langle x|\,\mathrm{d}x = I$, where $I$ is the identity (called the completeness relation).
So, let's get back to the question. Analogously, we have that $\psi(p) = \langle p|\psi\rangle = \int \langle p|x\rangle\langle x|\psi\rangle\,\mathrm{d}x$, using the completeness relation. All we have to do now is determine $\langle p|x\rangle$. This is done by the defining equation of $|p\rangle$, which simply is $P|p\rangle = p|p\rangle$.
Taking the scalar product with $\langle x|$ and using the position representation $P = -i\hbar\nabla$, we get the following equation:
$$ -i\hbar\frac{\mathrm{d}p(x)}{\mathrm{d}x} = p\,p(x)$$
where $p(x) = \langle x|p\rangle$.
Solving this equation, you find $p(x) = Ae^{ipx/\hbar}$.
Finally, using the hermiticity properties of the scalar product and plugging back into our initial integral, we get:
$$\psi(p) = \int A\,e^{-ipx/\hbar}\,\psi(x)\,\mathrm{d}x$$
The constant $A$ is taken to be $\frac{1}{\sqrt{2\pi\hbar}}$ arbitrarily to get the usual form of the Fourier transform. This is because, since the position representation of the $p$ eigenvectors cannot be normalised, this constant $A$ is arbitrary. | {
"domain": "physics.stackexchange",
"id": 36446,
"tags": "quantum-mechanics, momentum, wavefunction, fourier-transform"
} |
Number of atomic and molecular orbitals in different Hartree-Fock references | Question: For a given molecule and electronic configuration,
Is the number of contracted atomic orbitals (AOs) different in a restricted open-shell Hartree-Fock (ROHF) calculation compared to an RHF calculation? What about unrestricted Hartree-Fock (UHF)?
Is the number of molecular orbitals (MOs) different in an ROHF calculation compared to an RHF calculation? What about UHF?
For example, if we are using the STO-3G basis set for a water molecule, we get 1 + 1 + 5 = 7 contracted functions for AOs in an RHF calculation, leading to 7 MOs. Are any of these numbers different in an ROHF calculation? What about UHF?
Essentially, what are the core differences between AOs and MOs within each approach?
For the water molecule using the STO-3G basis, I obtained the orbitals in the following order.
For RHF: Occupied (A1) (A1) (B2) (A1) (B1); Virtual (A1) (B2).
For ROHF: Occupied (A1) (A1) (B2) (A1) (B1); Virtual (A1) (B2).
But if we take alcohols, for example, the number of such orbitals is not the same. I want to know how they are calculated, so that at least I could do them by hand. For the RHF case, it is nicely explained in "Modern Quantum Chemistry" by Szabo and Ostlund. I want some similar information on ROHF and UHF (if possible).
Answer: Straight answer to your question: whatever method you use, RHF, ROHF or UHF, your calculations will use the same number of basis functions (7 contracted functions for your water example). How will they be used and combined, however, is different. In your comment on the question you talk about molecular orbitals - it's important to keep these conceptually separate from the basis set you are using to construct the orbitals.
How many functions do you use for an orbital in ROHF?
In general, UHF/ROHF approaches use a separate molecular orbital for each electron, so you would use basis functions twice to describe an unrestricted pair compared to a restricted pair.
In other words, basis functions aren't "split" between the electrons when treated as an unrestricted pair; rather, two orbitals are calculated using the same basis functions, one corresponding to the alpha electron and another for the beta electron.
So, if you compare UHF with RHF, you'll be using the same basis functions to construct twice the number of molecular orbitals.
Let's take an example to illustrate it: $\mathrm{O_2}$.
For singlet $\mathrm{O_2}$ ($\mathrm{S=0}$), there are 8 alpha and 8 beta electrons. Using STO-3G, we'll have 10 contracted functions, those corresponding to $\mathrm{1s}$, $\mathrm{2s}$ and 3 x $\mathrm{2p}$ for each oxygen atom, each of them composed of 3 primitive gaussians.
In RHF, we can combine these 10 basis functions in a single Slater determinant, producing 10 molecular orbitals. The first 8 of these will be doubly occupied, the last 2 will be vacant.
In ROHF, we have exactly the same as for RHF, because ROHF is exactly the same as RHF for molecules with $\mathrm{S=0}$.
In UHF, we use the same 10 basis functions, but we use them to build two sets of 10 molecular orbitals - 10 for the alpha electrons and 10 for the beta electrons - in a single Slater determinant. We occupy them with 8 alpha electrons and 8 beta electrons, 1 electron per orbital, with 2 alpha and 2 beta orbitals left vacant.
For triplet $\mathrm{O_2}$ ($\mathrm{S=1}$), there are 9 alpha and 7 beta electrons. We have the same basis functions as before.
In UHF, we do the same as before, and we also get 10 alpha and 10 beta molecular orbitals in a single Slater determinant. Occupation changes, though, so now we have 9 occupied and 1 vacant alpha orbitals and 7 occupied and 3 vacant beta orbitals.
In ROHF, and since now we have an unequal number of alpha and beta electrons, we have a more complex construction of the Fock operator - one that is not unique. Essentially, we'll calculate alpha and beta electrons separately under the hood, and then combine them; and we'll be treating full, partially full and empty orbitals as three coupled sets. We'll produce 10 orbitals, of which we'll doubly fill the first 7 and partially fill the following 2, leaving 1 unoccupied.
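As a toy sketch of this bookkeeping (the per-atom STO-3G counts are the ones used above: 1 contracted function for H or He, 5 for first-row atoms like O; the helper names are my own):

```python
# Contracted STO-3G functions per atom: 1s for H/He; 1s, 2s and 3 x 2p for Li-Ne
STO3G_FUNCS = {'H': 1, 'He': 1,
               'Li': 5, 'Be': 5, 'B': 5, 'C': 5, 'N': 5, 'O': 5, 'F': 5, 'Ne': 5}

def n_basis(atoms):
    """Number of contracted basis functions -- identical for RHF, ROHF and UHF."""
    return sum(STO3G_FUNCS[a] for a in atoms)

def n_orbital_sets(method):
    """How many sets of molecular orbitals are built from those functions."""
    return 2 if method == 'UHF' else 1   # UHF: separate alpha and beta sets

water = n_basis(['O', 'H', 'H'])     # 7, as in the question
dioxygen = n_basis(['O', 'O'])       # 10, as in the O2 example above
```

The point the sketch makes is the one from the text: the basis-function count depends only on the molecule and basis set, while UHF doubles the number of orbital sets built from them.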
In conclusion: the number of basis functions in all cases is the same: 10 contracted functions comprising 30 primitive gaussians. However, these basis functions are used a different number of times for RHF, UHF and ROHF; RHF uses them once in a simple Slater determinant, UHF uses them twice to form a Slater determinant with a two sets of orbitals, one for alpha and the other for beta electrons, and ROHF uses them twice but ends up producing a single set of full, partially or non-occupied orbitals that share their spatial part. | {
"domain": "chemistry.stackexchange",
"id": 9796,
"tags": "computational-chemistry, theoretical-chemistry"
} |
What does $\mathcal{N}=2$ mean? | Question: I have seen in some places (especially in the context of theoretical physics) the notation $\mathcal{N}=2$, but I'm not that capable of reading and understanding these materials, so I'm now wondering: what does this notation mean? How important is it in physics? And where can I learn more about it?
For my background: I'm an undergrad (sophomore), I have learnt some QFT this semester, and some GR. Thanks for your help!
Answer: The notation $\mathcal{N}=2$ in the context of theoretical physics, particularly in supersymmetry and supergravity, refers to a specific kind of extended supersymmetry. In simple terms, supersymmetry is a theoretical symmetry between fermions (particles that follow the Fermi-Dirac statistics, like electrons) and bosons (particles that follow the Bose-Einstein statistics, like photons).
The "$\mathcal{N}$" in $\mathcal{N}=2$ supersymmetry denotes the number of independent supersymmetry generators in the theory. Each generator is responsible for transforming fermions into bosons and vice versa. In $\mathcal{N}=2$ supersymmetry, there are two such generators, typically denoted as $\mathbf{Q}_1$ and $\mathbf{Q}_2$. This implies a richer structure compared to $\mathcal{N}=1$ supersymmetry, which has only one supersymmetry generator.
The algebra these generators follow is known as the super-Poincaré algebra, which extends the Poincaré algebra of special relativity to include supersymmetry transformations. The super-Poincaré algebra includes the usual Poincaré algebra (commutation relations of momentum and angular momentum operators) plus additional anticommutation relations involving the supersymmetry generators.
For $\mathcal{N}=2$ supersymmetry, the anticommutation relations for the supersymmetry generators are of the form:
\begin{equation}
\{ \mathbf{Q}_{\alpha}^i, \bar{\mathbf{Q}}_{\dot{\beta}j} \} = 2\sigma^\mu_{\alpha \dot{\beta}} \mathbf{P}_\mu \delta^i_j
\end{equation}
Here, $\mathbf{Q}_{\alpha}^i$ and $\bar{\mathbf{Q}}_{\dot{\beta}j}$ are the supersymmetry generators (with spinor indices $\alpha, \dot{\beta}$ and $\mathcal{N}=2$ indices $i, j$), $\sigma^\mu$ are the Pauli matrices (incorporating spacetime structure into the algebra), and $\mathbf{P}_\mu$ is the four-momentum operator. The delta symbol $\delta^i_j$ ensures that the algebra closes within each supersymmetry.
$\mathcal{N}=2$ supersymmetry has significant implications in theoretical physics, particularly in string theory, where it helps in constructing more stable and less divergent models of particle physics. It also plays a crucial role in the study of supergravity and has implications in the mathematical field of topology through its connections to topological quantum field theories.
To learn more about $\mathcal{N}=2$ supersymmetry and super-Poincaré algebra, one typically needs a strong background in quantum field theory, general relativity, and group theory. Standard textbooks on quantum field theory and supersymmetry, such as "Supersymmetry and Supergravity" by Julius Wess and Jonathan Bagger, can be excellent starting points. Another great resource is Weinberg's "Quantum Theory of Fields" vol.3. | {
"domain": "physics.stackexchange",
"id": 98476,
"tags": "field-theory, notation, supersymmetry"
} |
Java Tic Tac Toe console | Question: Hi everyone! I have created a bit of an ugly-looking Tic Tac Toe console game using Java. I'm learning how to code and I would love to hear some comments about my code. Because it's quite straightforward, I'm not using classes and objects (my assignment was not to use any). Thank you in advance!
package com.company;
import java.util.ArrayList;
import java.util.Scanner;
public class Main {
public static Scanner scan = new Scanner(System.in);
public static char[][] table = {
{'1', '2', '3'},
{'4', '5', '6'},
{'7', '8', '9'}
};
public static ArrayList<Character> picksTillNow = new ArrayList<Character>();
public static char playerOne = ' ';
public static char playerTwo = ' ';
public static void main(String[] args) {
for(char i = '1'; i <= '9'; i++){
picksTillNow.add(i);
}
printTable();
while (true){
playerOneChoice();
endGame();
playerTwoChoice();
endGame();
}
}
/**
* Prints the table with no players choices made ( 1 - 9 )
*/
public static void printTable() {
for (int i = 0; i < table.length; i++) {
for (int j = 0; j < table[i].length; j++) {
System.out.print(table[i][j] + " ");
}
if (table[i][2] != 9) {
System.out.println();
}
}
}
/**
* Prints the table with the players choice included
* @param player - player 1 or player 2
* @param choice - char from 1 - 9
*/
public static void printTable(int player, char choice){
for (int i = 0; i < table.length ; i++) {
for (int j = 0; j < table[i].length; j++) {
if(choice == table[i][j] && player == 1){
//X
table[i][j] = 'X';
}else if(choice == table[i][j] && player == 2){
//O
table[i][j] = 'O';
}
System.out.print(table[i][j] + " ");
}
if(table[i][2] != 9){
System.out.println();
}
}
}
/**
* Asks for player one choice method and prints the table, checks if that position has been chosen
*/
public static void playerOneChoice(){
System.out.print("Играч 1: ");
playerOne = scan.next().charAt(0);
if(picksTillNow.contains(playerOne)){
picksTillNow.remove(picksTillNow.indexOf(playerOne));
printTable(1, playerOne);
}else{
playerOneChoice();
}
}
/**
* Asks for player one choice method and prints the table, checks if that position has been chosen
*/
public static void playerTwoChoice(){
System.out.print("Player 2: ");
playerTwo = scan.next().charAt(0);
if(picksTillNow.contains(playerTwo)){
picksTillNow.remove(picksTillNow.indexOf(playerTwo));
printTable(2, playerTwo);
}else{
playerTwoChoice();
}
}
/**
* Checks for a winner. If yes - exits program
*/
public static void endGame(){
for(int i = 0; i < table.length; i++){
if(table[i][0] == table[i][1] && table[i][1] == table[i][2] && table[0][i] != 'О'){
System.out.println("Victory!");
System.exit(0);
}
}
for(int j = 0; j < table.length; j++){
if(table[0][j] == table[1][j] && table[1][j] == table[2][j] && table[j][0] != 'O'){
System.out.println("Victory!");
System.exit(0);
}
}
if(table[0][0] == table[1][1] && table[1][1] == table[2][2] && table[0][0] != 'O'){
System.out.println("Victory!");
System.exit(0);
}
if(table[0][2] == table[1][1] && table[1][1] == table[2][0] && table[0][2] != 'O'){
System.out.println("Victory!");
System.exit(0);
}
if(picksTillNow.isEmpty()) System.exit(0);
}
}
Answer: Your formatting seems to be inconsistent, use an automatic code formatter (your IDE most likely has one).
public static Scanner scan = new Scanner(System.in);
The usage of the scanner (or streams in general) should most likely be scoped. Streams are rather easily associated with native resources which must be destroyed explicitly to be freed.
public static char[][] table = {
{'1', '2', '3'},
{'4', '5', '6'},
{'7', '8', '9'}
};
Why is the table prefilled with values?
public static ArrayList<Character> picksTillNow = new ArrayList<Character>();
Try to always use the most general type you can get away with; that makes it easier to ensure you're coupling classes through the interface instead of the actual implementation. In this case, declare the variable as List.
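As a small illustrative snippet (the class and method names are mine, not from the original post), declaring against the interface keeps the implementation swappable:

```java
import java.util.ArrayList;
import java.util.List;

class ListDeclaration {
    // The field type is the List interface, not ArrayList, so callers
    // only depend on List's methods and the implementation can change
    // (e.g. to LinkedList) without touching any caller.
    static List<Character> picksTillNow = new ArrayList<>();

    static int remaining() {
        return picksTillNow.size();
    }
}
```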
public static char playerOne = ' ';
public static char playerTwo = ' ';
You're preparing these, but you only use them in a very limited scope; you should declare them there.
for(char i = '1'; i <= '9'; i++){
picksTillNow.add(i);
}
Autoboxing.
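"Autoboxing" here refers to the fact that the loop adds primitive char values to a List&lt;Character&gt;, so each one is implicitly wrapped in a Character object. A self-contained sketch of the same initialization (class and method names are mine):

```java
import java.util.ArrayList;
import java.util.List;

class AutoboxingDemo {
    // Builds the initial list of free fields, '1' through '9'.
    static List<Character> initialPicks() {
        List<Character> picks = new ArrayList<>();
        for (char i = '1'; i <= '9'; i++) {
            picks.add(i); // autoboxing: primitive char -> Character object
        }
        return picks;
    }
}
```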
while (true){
You could also have a boolean like running or playing or gameRunning and set it depending on the return value of endGame().
Or, as the number of turns is fixed, you might do a mixture of both:
int turn = 0;
while (turn++ < 9 && nobodyHasWonYet) {
// Logic.
}
endGame();
That's a bad name for the method, as it does not always end the game.
for (int i = 0; i < table.length; i++) {
for (int j = 0; j < table[i].length; j++) {
I'm a persistent advocate that you're only allowed to use single-letter variable names when dealing with dimensions ("x", "y", "z"), and that excludes using i and j. In this case, using x and y as variable names would actually improve the readability of the code. Even better would be using row and column.
if (table[i][2] != 9) {
System.out.println();
}
But the default value of this field is overwritten at some point, isn't it?
}else{
playerOneChoice();
}
You're recursing here, so repeatedly hitting Return without entering a valid choice will at some point crash your game with a StackOverflowError.
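An iterative sketch of the same validation (readValidChoice is a hypothetical helper of mine, and the input source is injected as a Supplier so no Scanner is needed); a loop cannot overflow the stack no matter how often invalid input is entered:

```java
import java.util.List;
import java.util.function.Supplier;

class InputLoop {
    // Keeps asking until a still-free field is entered, replacing the
    // recursive retry in playerOneChoice()/playerTwoChoice().
    static char readValidChoice(Supplier<Character> input, List<Character> freeFields) {
        char choice = input.get();
        while (!freeFields.contains(choice)) {
            choice = input.get();
        }
        return choice;
    }
}
```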
public static void endGame(){
for(int i = 0; i < table.length; i++){
if(table[i][0] == table[i][1] && table[i][1] == table[i][2] && table[0][i] != 'О'){
System.out.println("Victory!");
System.exit(0);
}
}
for(int j = 0; j < table.length; j++){
if(table[0][j] == table[1][j] && table[1][j] == table[2][j] && table[j][0] != 'O'){
System.out.println("Victory!");
System.exit(0);
}
}
if(table[0][0] == table[1][1] && table[1][1] == table[2][2] && table[0][0] != 'O'){
System.out.println("Victory!");
System.exit(0);
}
if(table[0][2] == table[1][1] && table[1][1] == table[2][0] && table[0][2] != 'O'){
System.out.println("Victory!");
System.exit(0);
}
if(picksTillNow.isEmpty()) System.exit(0);
}
If I don't completely suck at math (and Tic Tac Toe), there should only be 8 winning positions, and even then there are only 4 and the other 4 are mirrored or rotated. So it might be more interesting and easier to maintain if these are hardcoded.
System.exit(0);
Something to keep in mind is that System.exit is not "exit the application" but "kill the JVM process". When invoking it, not even finally blocks may run. In this case it does not matter, but it is something to keep in mind.
If I read this right, you have the playing field with the numbers in it, which then get replaced with the player choices. Whether a choice is valid is checked through the remaining fields in your list. Instead, I suggest you keep the state limited to only the playing field. Assuming your field definition, you can check whether a field is still free by calculating the position of the input and then checking whether an O or X has been placed there, like this:
int fieldChoice = getChoice();
int row = (fieldChoice - 1) / 3; // field choices 1-9 map row-wise
int column = (fieldChoice - 1) % 3; // onto the 3x3 grid
char selectedField = playingField[row][column];
if (selectedField != 'X' && selectedField != 'O') {
// Set it.
} else {
// No dice.
}
playerOneChoice and playerTwoChoice can be rolled into one by passing the wanted player character as a parameter. Using my above example:
public static void playerChoice(char playerCharacter) {
int fieldChoice = getChoice();
int row = (fieldChoice - 1) / 3; // field choices 1-9 map row-wise
int column = (fieldChoice - 1) % 3; // onto the 3x3 grid
char selectedField = playingField[row][column];
if (selectedField != 'X' && selectedField != 'O') {
playingField[row][column] = playerCharacter;
} else {
// No dice.
}
}
Having said that, your print function would then boil down to this:
for (int row = 0; row < 3; row++) {
for (int column = 0; column < 3; column++) {
System.out.print(playingField[row][column] + " ");
}
System.out.println();
}
Hardcoding the size here is not a bad thing, as long as you assume a Tic Tac Toe game with a 3x3 playing field. If you want to support larger playing fields, you'll have to adjust the rest of your logic anyway.
Coming around to checking whether somebody has won, the winning conditions would boil down to:
// Rows
playingField[0][0] == playingField[0][1] && playingField[0][1] == playingField[0][2];
playingField[1][0] == playingField[1][1] && playingField[1][1] == playingField[1][2];
playingField[2][0] == playingField[2][1] && playingField[2][1] == playingField[2][2];
// Columns
playingField[0][0] == playingField[1][0] && playingField[1][0] == playingField[2][0];
playingField[0][1] == playingField[1][1] && playingField[1][1] == playingField[2][1];
playingField[0][2] == playingField[1][2] && playingField[1][2] == playingField[2][2];
// Diagonal
playingField[0][0] == playingField[1][1] && playingField[1][1] == playingField[2][2];
playingField[0][2] == playingField[1][1] && playingField[1][1] == playingField[2][0];
You will notice that this is just as short as using your loops, and we can chain them with || directly for a return. However, that will not tell us who won. For that we need some additional logic, namely an if on every line. But we could also assume that the winner is the player that last played, which is a fair assumption, actually. To make it more readable we can add four helper functions:
public static boolean isWinningRow(int row) {
return playingField[row][0] == playingField[row][1] && playingField[row][1] == playingField[row][2];
}
public static boolean isWinningColumn(int column) {
return playingField[0][column] == playingField[1][column] && playingField[1][column] == playingField[2][column];
}
public static boolean isLeftRightDiagonalWin() {
return playingField[0][0] == playingField[1][1] && playingField[1][1] == playingField[2][2];
}
public static boolean isRightLeftDiagonalWin() {
return playingField[0][2] == playingField[1][1] && playingField[1][1] == playingField[2][0];
}
This does not simplify the overall complexity, but does improve the readability:
return isWinningRow(0)
|| isWinningRow(1)
|| isWinningRow(2)
|| isWinningColumn(0)
|| isWinningColumn(1)
|| isWinningColumn(2)
|| isLeftRightDiagonalWin()
|| isRightLeftDiagonalWin();
So you could rewrite your logic to something like this:
int turn = 0;
boolean running = true;
while (turn < 9 && running) {
char currentPlayer = (turn % 2 == 0) ? 'X' : 'O';
playerChoice(currentPlayer);
printPlayingField();
if (hasWon()) {
System.out.println("Winner: " + currentPlayer);
running = false;
}
turn++;
}
I'm a little bit unhappy with the code to choose the player, but it should work, and overall this should give you a good idea of where to start.
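Pulling the suggestions together, here is a minimal self-contained sketch (names such as place and hasWon are mine; input handling is left out so the logic stays testable):

```java
class TicTacToeSketch {
    // Free fields keep their distinct digit placeholders '1'-'9', so
    // three equal cells can only mean a player's line.
    static boolean isFree(char[][] field, int row, int col) {
        char c = field[row][col];
        return c != 'X' && c != 'O';
    }

    // Places playerChar on a 1-9 field choice; returns false if taken.
    static boolean place(char[][] field, int choice, char playerChar) {
        int row = (choice - 1) / 3;
        int col = (choice - 1) % 3;
        if (!isFree(field, row, col)) {
            return false;
        }
        field[row][col] = playerChar;
        return true;
    }

    static boolean isWinningRow(char[][] f, int row) {
        return f[row][0] == f[row][1] && f[row][1] == f[row][2];
    }

    static boolean isWinningColumn(char[][] f, int col) {
        return f[0][col] == f[1][col] && f[1][col] == f[2][col];
    }

    // Checks all 8 winning lines: 3 rows, 3 columns, 2 diagonals.
    static boolean hasWon(char[][] f) {
        for (int i = 0; i < 3; i++) {
            if (isWinningRow(f, i) || isWinningColumn(f, i)) {
                return true;
            }
        }
        return (f[0][0] == f[1][1] && f[1][1] == f[2][2])
            || (f[0][2] == f[1][1] && f[1][1] == f[2][0]);
    }
}
```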
"domain": "codereview.stackexchange",
"id": 40646,
"tags": "java, tic-tac-toe"
} |
Comparing XGBR with CatBoost performance | Question: I saw on a CatBoost site that it is supposed to outperform any other boosted training model and decided to try it myself on Kaggle's https://www.kaggle.com/c/house-prices-advanced-regression-techniques.
I created some basic kernel without any complex preprocessing, feature selection, GridSearch, stacking, etc... just to compare the XGBR and CatBoost performance. But as far as I see the XGBR always outperforms the CatBoost.
https://www.kaggle.com/markbquant/compare-catboostregressor-vs-xgbregressor
The CatBoost parameters: iterations=100, depth=3, learning_rate=0.1
The XGBR parameters: subsample=0.7, colsample_bytree=0.7, n_estimators=500, learning_rate=0.03, max_depth=5, min_child_weight=3
For example, on Kaggle the XGBR received a score of 0.134 while the CatBoost only reached 0.197 (I tried both one_hot_max_size and cat_features). I'll be thankful if someone could point out what is wrong with the CatBoost model; maybe some optimization is missing.
Answer: Performing such a benchmark is not that easy, meaning one cannot just pick a few data sets and run these models, as there is a data dependency. In such cases, one needs to simulate data through various processes - the simulation helps to design various data under various conditions. For example, perhaps one model is doing a better job at binning, so data with various binning conditions must be in place beforehand; the same goes for the depth of the tree. So just picking the house-prices data is not enough.
Bear in mind that such outperformance can be really, really small. Don't expect a 10% difference! It often lies within 1%.
What distinguishes CatBoost from XGBoost is thread safety in the production environment. XGBoost is not thread safe and therefore cannot be used in any serious deployment environment.
"domain": "datascience.stackexchange",
"id": 9637,
"tags": "python, regression, xgboost, kaggle, boosting"
} |
Does the line integral definition of Work involve distance or displacement? | Question: My textbook reports the following definition of work:
$$W = \int_C \vec{F} \cdot d\vec{s}$$
where ds is the infinitesimal displacement.
I know that an infinitesimal displacement is usually denoted by dr, and I also know that the magnitude of dr is given by ds (an infinitesimal distance). Now, if we are talking about displacement (in the definition of work), why should we use ds instead of dr?
I ask this because my textbook always refers to the infinitesimal displacement as dr. I have always associated 's' with distance, so I see ds as an infinitesimal "distance vector", but I am quite sure that distance is only a scalar quantity, not a vector.
Answer: It's just a matter of what you choose to call displacement and what you call distance.
I have seen all of the following used for the displacement:
dx
ds
dr
Wikipedia says:
The work done by a constant force of magnitude F on a point that moves a displacement (not distance) s in the direction of the force is the product,
W = Fs.
Note the usage of s as displacement.
All in all, it is the displacement that is used in calculating work, and one may refer to it in many ways (probably your textbook used different notations in different chapters).
And distance is a scalar quantity.
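As an illustration (a standard special case, not from the original answer): for a constant force along a straight path, the line-integral definition collapses to the product form quoted from Wikipedia.

```latex
W = \int_C \vec{F}\cdot d\vec{s}
  = \vec{F}\cdot\int_C d\vec{s}
  = \vec{F}\cdot\vec{s}
  = F\,s\cos\theta
```

where $\theta$ is the angle between the force and the displacement; for $\theta = 0$ this reduces to $W = Fs$.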
"domain": "physics.stackexchange",
"id": 20522,
"tags": "newtonian-mechanics, work, distance, displacement"
} |
building ROS on macOS 10.12 at `qt_gui_cpp` | Question:
Hi, there
I'm following the tutorial Installation Instructions for Kinetic in OS X to get ROS up and running on my Mac. However, I failed (and have tried many ways to fix it, still without success) at building qt_gui_cpp when executing:
./src/catkin/bin/catkin_make_isolated --install -DCMAKE_BUILD_TYPE=Release
I suspect the problem lies in the sip package (installed via brew) or in catkin_ws/src/python_qt_binding, because the failing code is generated by sip. The related error output looks like this (with VERBOSE=ON):
[ 5%] Linking CXX shared library /Users/victor/Repo/ros/ros_catkin_ws/devel_isolated/qt_gui_cpp/lib/libqt_gui_cpp.dylib
cd /Users/victor/Repo/ros/ros_catkin_ws/build_isolated/qt_gui_cpp/src/qt_gui_cpp && /usr/local/Cellar/cmake/3.7.1/bin/cmake -E cmake_link_script CMakeFiles/qt_gui_cpp.dir/link.txt --verbose=ON
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ -std=c++11 -fPIC -O3 -DNDEBUG -dynamiclib -Wl,-headerpad_max_install_names -o /Users/victor/Repo/ros/ros_catkin_ws/devel_isolated/qt_gui_cpp/lib/libqt_gui_cpp.dylib -install_name /Users/victor/Repo/ros/ros_catkin_ws/devel_isolated/qt_gui_cpp/lib/libqt_gui_cpp.dylib CMakeFiles/qt_gui_cpp.dir/composite_plugin_provider.cpp.o CMakeFiles/qt_gui_cpp.dir/generic_proxy.cpp.o CMakeFiles/qt_gui_cpp.dir/plugin_bridge.cpp.o CMakeFiles/qt_gui_cpp.dir/plugin_context.cpp.o CMakeFiles/qt_gui_cpp.dir/plugin_descriptor.cpp.o CMakeFiles/qt_gui_cpp.dir/plugin_provider.cpp.o CMakeFiles/qt_gui_cpp.dir/recursive_plugin_provider.cpp.o CMakeFiles/qt_gui_cpp.dir/settings.cpp.o CMakeFiles/qt_gui_cpp.dir/__/__/include/qt_gui_cpp/moc_plugin.cpp.o CMakeFiles/qt_gui_cpp.dir/__/__/include/qt_gui_cpp/moc_plugin_bridge.cpp.o CMakeFiles/qt_gui_cpp.dir/__/__/include/qt_gui_cpp/moc_plugin_context.cpp.o /usr/local/lib/libboost_filesystem-mt.dylib /usr/local/lib/libtinyxml.dylib /Users/victor/Repo/ros/ros_catkin_ws/install_isolated/lib/libclass_loader.dylib /usr/local/lib/libPocoFoundation.dylib /usr/lib/libdl.dylib /Users/victor/Repo/ros/ros_catkin_ws/install_isolated/lib/librosconsole.dylib /Users/victor/Repo/ros/ros_catkin_ws/install_isolated/lib/librosconsole_log4cxx.dylib /Users/victor/Repo/ros/ros_catkin_ws/install_isolated/lib/librosconsole_backend_interface.dylib /usr/local/lib/liblog4cxx.dylib /usr/local/lib/libboost_regex-mt.dylib /Users/victor/Repo/ros/ros_catkin_ws/install_isolated/lib/librostime.dylib /Users/victor/Repo/ros/ros_catkin_ws/install_isolated/lib/libcpp_common.dylib /usr/local/lib/libboost_system-mt.dylib /usr/local/lib/libboost_thread-mt.dylib /usr/local/lib/libboost_chrono-mt.dylib /usr/local/lib/libboost_date_time-mt.dylib /usr/local/lib/libboost_atomic-mt.dylib /usr/local/lib/libconsole_bridge.dylib /Users/victor/Repo/ros/ros_catkin_ws/install_isolated/lib/libroslib.dylib 
/usr/local/lib/libboost_filesystem-mt.dylib /usr/local/lib/libboost_system-mt.dylib /usr/local/lib/QtWidgets.framework/QtWidgets /usr/local/lib/libtinyxml.dylib /Users/victor/Repo/ros/ros_catkin_ws/install_isolated/lib/libclass_loader.dylib /usr/local/lib/libPocoFoundation.dylib /usr/lib/libdl.dylib /Users/victor/Repo/ros/ros_catkin_ws/install_isolated/lib/librosconsole.dylib /Users/victor/Repo/ros/ros_catkin_ws/install_isolated/lib/librosconsole_log4cxx.dylib /Users/victor/Repo/ros/ros_catkin_ws/install_isolated/lib/librosconsole_backend_interface.dylib /usr/local/lib/liblog4cxx.dylib /usr/local/lib/libboost_regex-mt.dylib /Users/victor/Repo/ros/ros_catkin_ws/install_isolated/lib/librostime.dylib /Users/victor/Repo/ros/ros_catkin_ws/install_isolated/lib/libcpp_common.dylib /usr/local/lib/libboost_thread-mt.dylib /usr/local/lib/libboost_chrono-mt.dylib /usr/local/lib/libboost_date_time-mt.dylib /usr/local/lib/libboost_atomic-mt.dylib /usr/local/lib/libconsole_bridge.dylib /Users/victor/Repo/ros/ros_catkin_ws/install_isolated/lib/libroslib.dylib /usr/local/lib/QtGui.framework/QtGui /usr/local/lib/QtCore.framework/QtCore
[ 83%] Built target qt_gui_cpp
/Applications/Xcode.app/Contents/Developer/usr/bin/make -f src/qt_gui_cpp_sip/CMakeFiles/libqt_gui_cpp_sip.dir/build.make src/qt_gui_cpp_sip/CMakeFiles/libqt_gui_cpp_sip.dir/depend
cd /Users/victor/Repo/ros/ros_catkin_ws/build_isolated/qt_gui_cpp && /usr/local/Cellar/cmake/3.7.1/bin/cmake -E cmake_depends "Unix Makefiles" /Users/victor/Repo/ros/ros_catkin_ws/src/qt_gui_core/qt_gui_cpp /Users/victor/Repo/ros/ros_catkin_ws/src/qt_gui_core/qt_gui_cpp/src/qt_gui_cpp_sip /Users/victor/Repo/ros/ros_catkin_ws/build_isolated/qt_gui_cpp /Users/victor/Repo/ros/ros_catkin_ws/build_isolated/qt_gui_cpp/src/qt_gui_cpp_sip /Users/victor/Repo/ros/ros_catkin_ws/build_isolated/qt_gui_cpp/src/qt_gui_cpp_sip/CMakeFiles/libqt_gui_cpp_sip.dir/DependInfo.cmake --color=
/Applications/Xcode.app/Contents/Developer/usr/bin/make -f src/qt_gui_cpp_sip/CMakeFiles/libqt_gui_cpp_sip.dir/build.make src/qt_gui_cpp_sip/CMakeFiles/libqt_gui_cpp_sip.dir/build
[ 88%] Compiling generated code for qt_gui_cpp_sip Python bindings...
cd /Users/victor/Repo/ros/ros_catkin_ws/build_isolated/qt_gui_cpp/sip/qt_gui_cpp_sip && make
c++ -c -pipe -fPIC -Os -Wall -W -DNDEBUG -DQT_NO_DEBUG -DQT_CORE_LIB -DQT_GUI_LIB -I. -I/Users/victor/Repo/ros/ros_catkin_ws/src/qt_gui_core/qt_gui_cpp/src/qt_gui_cpp_sip/../../include -I/Users/victor/Repo/ros/ros_catkin_ws/install_isolated/include -I/usr/local/include -I/usr/local/Cellar/console_bridge/0.2.5/include -I/usr/include/python2.7 -I/usr/local/Cellar/sip/4.18.1/include -I/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.12.sdk/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -I/usr/local/Cellar/qt5/5.7.0/mkspecs/macx-g++ -I/usr/local/Cellar/qt5/5.7.0/include/QtCore -I/usr/local/Cellar/qt5/5.7.0/include/QtGui -I/usr/local/Cellar/qt5/5.7.0/include/QtWidgets -I/usr/local/Cellar/qt5/5.7.0/include/QtPrintSupport -I/usr/local/Cellar/qt5/5.7.0/include -o siplibqt_gui_cpp_sipcmodule.o siplibqt_gui_cpp_sipcmodule.cpp
In file included from siplibqt_gui_cpp_sipcmodule.cpp:7:
In file included from ./sipAPIlibqt_gui_cpp_sip.h:13:
In file included from /usr/local/Cellar/qt5/5.7.0/include/QtCore/QMetaType:1:
In file included from /usr/local/Cellar/qt5/5.7.0/include/QtCore/qmetatype.h:44:
/usr/local/Cellar/qt5/5.7.0/include/QtCore/qglobal.h:1133:23: warning: rvalue references are a C++11 extension [-Wc++11-extensions]
void qAsConst(const T &&) Q_DECL_EQ_DELETE;
^
In file included from siplibqt_gui_cpp_sipcmodule.cpp:7:
In file included from ./sipAPIlibqt_gui_cpp_sip.h:13:
In file included from /usr/local/Cellar/qt5/5.7.0/include/QtCore/QMetaType:1:
In file included from /usr/local/Cellar/qt5/5.7.0/include/QtCore/qmetatype.h:44:
In file included from /usr/local/Cellar/qt5/5.7.0/include/QtCore/qglobal.h:1145:
In file included from /usr/local/Cellar/qt5/5.7.0/include/QtCore/qatomic.h:46:
/usr/local/Cellar/qt5/5.7.0/include/QtCore/qbasicatomic.h:61:4: error: "Qt requires C++11 support"
# error "Qt requires C++11 support"
^
/usr/local/Cellar/qt5/5.7.0/include/QtCore/qbasicatomic.h:90:13: error: unknown type name 'QAtomicOps'
typedef QAtomicOps<T> Ops;
^
/usr/local/Cellar/qt5/5.7.0/include/QtCore/qbasicatomic.h:90:23: error: expected member name or ';' after declaration specifiers
typedef QAtomicOps<T> Ops;
~~~~~~~~~~~~~~~~~~^
/usr/local/Cellar/qt5/5.7.0/include/QtCore/qbasicatomic.h:93:23: error: use of undeclared identifier 'QAtomicOpsSupport'
Q_STATIC_ASSERT_X(QAtomicOpsSupport<sizeof(T)>::IsSupported, "template parameter is an integral of a size not supported on this platform");
^
/usr/local/Cellar/qt5/5.7.0/include/QtCore/qbasicatomic.h:93:53: error: no member named 'IsSupported' in the global namespace
Q_STATIC_ASSERT_X(QAtomicOpsSupport<sizeof(T)>::IsSupported, "template parameter is an integral of a size not supported on this platform");
~~^
/usr/local/Cellar/qt5/5.7.0/include/QtCore/qglobal.h:761:63: note: expanded from macro 'Q_STATIC_ASSERT_X'
#define Q_STATIC_ASSERT_X(Condition, Message) Q_STATIC_ASSERT(Condition)
^~~~~~~~~
/usr/local/Cellar/qt5/5.7.0/include/QtCore/qglobal.h:756:110: note: expanded from macro 'Q_STATIC_ASSERT'
enum {Q_STATIC_ASSERT_PRIVATE_JOIN(q_static_assert_result, __COUNTER__) = sizeof(QStaticAssertFailure<!!(Condition)>)}
^~~~~~~~~
In file included from siplibqt_gui_cpp_sipcmodule.cpp:7:
In file included from ./sipAPIlibqt_gui_cpp_sip.h:13:
In file included from /usr/local/Cellar/qt5/5.7.0/include/QtCore/QMetaType:1:
In file included from /usr/local/Cellar/qt5/5.7.0/include/QtCore/qmetatype.h:44:
In file included from /usr/local/Cellar/qt5/5.7.0/include/QtCore/qglobal.h:1145:
In file included from /usr/local/Cellar/qt5/5.7.0/include/QtCore/qatomic.h:46:
/usr/local/Cellar/qt5/5.7.0/include/QtCore/qbasicatomic.h:93:5: error: type name requires a specifier or qualifier
Q_STATIC_ASSERT_X(QAtomicOpsSupport<sizeof(T)>::IsSupported, "template parameter is an integral of a size not supported on this platform");
^
/usr/local/Cellar/qt5/5.7.0/include/QtCore/qglobal.h:761:47: note: expanded from macro 'Q_STATIC_ASSERT_X'
#define Q_STATIC_ASSERT_X(Condition, Message) Q_STATIC_ASSERT(Condition)
^
/usr/local/Cellar/qt5/5.7.0/include/QtCore/qglobal.h:756:121: note: expanded from macro 'Q_STATIC_ASSERT'
enum {Q_STATIC_ASSERT_PRIVATE_JOIN(q_static_assert_result, __COUNTER__) = sizeof(QStaticAssertFailure<!!(Condition)>)}
^
In file included from siplibqt_gui_cpp_sipcmodule.cpp:7:
In file included from ./sipAPIlibqt_gui_cpp_sip.h:13:
In file included from /usr/local/Cellar/qt5/5.7.0/include/QtCore/QMetaType:1:
In file included from /usr/local/Cellar/qt5/5.7.0/include/QtCore/qmetatype.h:44:
In file included from /usr/local/Cellar/qt5/5.7.0/include/QtCore/qglobal.h:1145:
In file included from /usr/local/Cellar/qt5/5.7.0/include/QtCore/qatomic.h:46:
/usr/local/Cellar/qt5/5.7.0/include/QtCore/qbasicatomic.h:95:14: error: use of undeclared identifier 'Ops'
typename Ops::Type _q_value;
^
/usr/local/Cellar/qt5/5.7.0/include/QtCore/qbasicatomic.h:95:19: error: expected a qualified name after 'typename'
typename Ops::Type _q_value;
^
/usr/local/Cellar/qt5/5.7.0/include/QtCore/qbasicatomic.h:95:23: error: expected ';' at end of declaration list
typename Ops::Type _q_value;
^
;
/usr/local/Cellar/qt5/5.7.0/include/QtCore/qbasicatomic.h:99:44: error: use of undeclared identifier 'Ops'
T load() const Q_DECL_NOTHROW { return Ops::load(_q_value); }
^
/usr/local/Cellar/qt5/5.7.0/include/QtCore/qbasicatomic.h:99:54: error: use of undeclared identifier '_q_value'
T load() const Q_DECL_NOTHROW { return Ops::load(_q_value); }
^
/usr/local/Cellar/qt5/5.7.0/include/QtCore/qbasicatomic.h:100:45: error: use of undeclared identifier 'Ops'
void store(T newValue) Q_DECL_NOTHROW { Ops::store(_q_value, newValue); }
^
/usr/local/Cellar/qt5/5.7.0/include/QtCore/qbasicatomic.h:100:56: error: use of undeclared identifier '_q_value'
void store(T newValue) Q_DECL_NOTHROW { Ops::store(_q_value, newValue); }
^
/usr/local/Cellar/qt5/5.7.0/include/QtCore/qbasicatomic.h:102:51: error: use of undeclared identifier 'Ops'
T loadAcquire() const Q_DECL_NOTHROW { return Ops::loadAcquire(_q_value); }
^
/usr/local/Cellar/qt5/5.7.0/include/QtCore/qbasicatomic.h:102:68: error: use of undeclared identifier '_q_value'
T loadAcquire() const Q_DECL_NOTHROW { return Ops::loadAcquire(_q_value); }
^
/usr/local/Cellar/qt5/5.7.0/include/QtCore/qbasicatomic.h:103:52: error: use of undeclared identifier 'Ops'
void storeRelease(T newValue) Q_DECL_NOTHROW { Ops::storeRelease(_q_value, newValue); }
^
/usr/local/Cellar/qt5/5.7.0/include/QtCore/qbasicatomic.h:103:70: error: use of undeclared identifier '_q_value'
void storeRelease(T newValue) Q_DECL_NOTHROW { Ops::storeRelease(_q_value, newValue); }
^
/usr/local/Cellar/qt5/5.7.0/include/QtCore/qbasicatomic.h:107:86: error: use of undeclared identifier 'Ops'
static Q_DECL_CONSTEXPR bool isReferenceCountingNative() Q_DECL_NOTHROW { return Ops::isReferenceCountingNative(); }
^
/usr/local/Cellar/qt5/5.7.0/include/QtCore/qbasicatomic.h:108:88: error: use of undeclared identifier 'Ops'
static Q_DECL_CONSTEXPR bool isReferenceCountingWaitFree() Q_DECL_NOTHROW { return Ops::isReferenceCountingWaitFree(); }
^
fatal error: too many errors emitted, stopping now [-ferror-limit=]
1 warning and 20 errors generated.
make[3]: *** [siplibqt_gui_cpp_sipcmodule.o] Error 1
make[2]: *** [/Users/victor/Repo/ros/ros_catkin_ws/devel_isolated/qt_gui_cpp/lib/python2.7/site-packages/qt_gui_cpp/libqt_gui_cpp_sip.dylib] Error 2
make[1]: *** [src/qt_gui_cpp_sip/CMakeFiles/libqt_gui_cpp_sip.dir/all] Error 2
make: *** [all] Error 2
make: INTERNAL: Exiting with 5 jobserver tokens available; should be 4!
<== Failed to process package 'qt_gui_cpp':
Command '['/Users/victor/Repo/ros/ros_catkin_ws/install_isolated/env.sh', 'make', '-j4', '-l4']' returned non-zero exit status 2
Reproduce this error by running:
==> cd /Users/victor/Repo/ros/ros_catkin_ws/build_isolated/qt_gui_cpp && /Users/victor/Repo/ros/ros_catkin_ws/install_isolated/env.sh make -j4 -l4
Command failed, exiting.
There are 2 problems:
qt_gui_cpp is compiled without -std=c++11, which causes the errors. However, I failed to add this compiler flag, no matter whether I put add_definitions(-std=c++11) or
set(CMAKE_CXX_FLAGS "-std=c++11 ${CMAKE_CXX_FLAGS}")
# note the catkin documentation explicitly says that CMAKE_CXX_FLAGS is a "forbidden variable" - why is that?
into the CMakeLists.txt file, or set the shell environment variable CXX_FLAGS to '-std=c++11'. The compiler simply doesn't use this flag!
I've tried to manually run the failing command with -std=c++11 added, like this:
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ -std=c++11 -fPIC -O3 -DNDEBUG -dynamiclib -Wl,-headerpad_max_install_names -o /Users/victor/Repo/ros/ros_catkin_ws/devel_isolated/qt_gui_cpp/lib/libqt_gui_cpp.dylib -install_name /Users/victor/Repo/ros/ros_catkin_ws/devel_isolated/qt_gui_cpp/lib/libqt_gui_cpp.dylib CMakeFiles/qt_gui_cpp.dir/composite_plugin_provider.cpp.o CMakeFiles/qt_gui_cpp.dir/generic_proxy.cpp.o CMakeFiles/qt_gui_cpp.dir/plugin_bridge.cpp.o CMakeFiles/qt_gui_cpp.dir/plugin_context.cpp.o CMakeFiles/qt_gui_cpp.dir/plugin_descriptor.cpp.o CMakeFiles/qt_gui_cpp.dir/plugin_provider.cpp.o CMakeFiles/qt_gui_cpp.dir/recursive_plugin_provider.cpp.o CMakeFiles/qt_gui_cpp.dir/settings.cpp.o CMakeFiles/qt_gui_cpp.dir/__/__/include/qt_gui_cpp/moc_plugin.cpp.o CMakeFiles/qt_gui_cpp.dir/__/__/include/qt_gui_cpp/moc_plugin_bridge.cpp.o CMakeFiles/qt_gui_cpp.dir/__/__/include/qt_gui_cpp/moc_plugin_context.cpp.o /usr/local/lib/libboost_filesystem-mt.dylib /usr/local/lib/libtinyxml.dylib /Users/victor/Repo/ros/ros_catkin_ws/install_isolated/lib/libclass_loader.dylib /usr/local/lib/libPocoFoundation.dylib /usr/lib/libdl.dylib /Users/victor/Repo/ros/ros_catkin_ws/install_isolated/lib/librosconsole.dylib /Users/victor/Repo/ros/ros_catkin_ws/install_isolated/lib/librosconsole_log4cxx.dylib /Users/victor/Repo/ros/ros_catkin_ws/install_isolated/lib/librosconsole_backend_interface.dylib /usr/local/lib/liblog4cxx.dylib /usr/local/lib/libboost_regex-mt.dylib /Users/victor/Repo/ros/ros_catkin_ws/install_isolated/lib/librostime.dylib /Users/victor/Repo/ros/ros_catkin_ws/install_isolated/lib/libcpp_common.dylib /usr/local/lib/libboost_system-mt.dylib /usr/local/lib/libboost_thread-mt.dylib /usr/local/lib/libboost_chrono-mt.dylib /usr/local/lib/libboost_date_time-mt.dylib /usr/local/lib/libboost_atomic-mt.dylib /usr/local/lib/libconsole_bridge.dylib /Users/victor/Repo/ros/ros_catkin_ws/install_isolated/lib/libroslib.dylib 
/usr/local/lib/libboost_filesystem-mt.dylib /usr/local/lib/libboost_system-mt.dylib /usr/local/lib/QtWidgets.framework/QtWidgets /usr/local/lib/libtinyxml.dylib /Users/victor/Repo/ros/ros_catkin_ws/install_isolated/lib/libclass_loader.dylib /usr/local/lib/libPocoFoundation.dylib /usr/lib/libdl.dylib /Users/victor/Repo/ros/ros_catkin_ws/install_isolated/lib/librosconsole.dylib /Users/victor/Repo/ros/ros_catkin_ws/install_isolated/lib/librosconsole_log4cxx.dylib /Users/victor/Repo/ros/ros_catkin_ws/install_isolated/lib/librosconsole_backend_interface.dylib /usr/local/lib/liblog4cxx.dylib /usr/local/lib/libboost_regex-mt.dylib /Users/victor/Repo/ros/ros_catkin_ws/install_isolated/lib/librostime.dylib /Users/victor/Repo/ros/ros_catkin_ws/install_isolated/lib/libcpp_common.dylib /usr/local/lib/libboost_thread-mt.dylib /usr/local/lib/libboost_chrono-mt.dylib /usr/local/lib/libboost_date_time-mt.dylib /usr/local/lib/libboost_atomic-mt.dylib /usr/local/lib/libconsole_bridge.dylib /Users/victor/Repo/ros/ros_catkin_ws/install_isolated/lib/libroslib.dylib /usr/local/lib/QtGui.framework/QtGui /usr/local/lib/QtCore.framework/QtCore
And the error is gone!
So, my first question is: how do I enable -std=c++11 in catkin?
I actually have already fixed one problem, so its error is not shown in the logs: Qt5 components are frameworks on macOS, but the generated compiler command had something like -lQtCore -lQtGui ..., which caused "library not found" errors.
I've modified the catkin_ws/src/python_qt_binding/cmake/sip_configure.py, related codes:
def custom_platform_lib_function(self, clib, framework=0):
if os.path.isabs(clib):
return clib
return default_platform_lib_function(self, clib, 1)#framework)
# call with parameter framework = 1
Any help is appreciated, thank you all!
I've spent a whole night on this issue, it's a pure nightmare :(
Originally posted by ZOU Lu on ROS Answers with karma: 143 on 2016-12-14
Post score: 0
Original comments
Comment by gvdhoorn on 2016-12-15:
Please note: ROS Answers does not use (Github flavoured) Markdown, at least not for formatting code blocks (ie: three backticks won't work). Please use the Preformatted Text button (the one with 101010 on it) next time. Just select the code or console copy/paste and click the button.
Thanks.
Answer:
I ran into the same problem. sipconfig generates the Makefile; therefore, tweaking cmake flags does not work. It seems homebrew's current (bottled) version of sip (4.18.1) is not fully compatible with homebrew's current version of qt5 (5.7.1_1). I finally ended up editing the configuration in sipconfig.py. The changes I made to
/usr/local/Cellar/sip/4.18.1/lib/python2.7/site-packages/sipconfig.py
are: Inside _pkg_config
'platform': 'macx-clang++',
and
'qt_framework': 1,
Inside _default_macros
'CXXFLAGS': '-pipe -std=c++11',
With these changes, the generated Makefiles worked.
Originally posted by dischu with karma: 56 on 2017-01-15
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 26487,
"tags": "ros"
} |
Repository for source install? | Question:
Using the source (http://www.ros.org/wiki/diamondback/Installation/Ubuntu/Source) I could install Diamondback (1.4.6) on Ubuntu 9.10, Karmic Koala. However, I am not sure which repository to use. Would it be advisable to use the Ubuntu 10.04 (Lucid) or Ubuntu 10.10 (Maverick) source lists?
Any help is appreciated
Originally posted by Arkapravo on ROS Answers with karma: 1108 on 2011-05-11
Post score: 0
Answer:
If you are doing a source based installation you do not need to add anything to your sources.list files.
Originally posted by tfoote with karma: 58457 on 2011-05-18
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 5551,
"tags": "ubuntu, ros-diamondback"
} |
Time evolution in quantum mechanics of states not contained in the Hilbert space | Question: Eigenstates of, for example, $\hat p$, are not elements of the standard quantum mechanical Hilbert space, i.e. $\psi(x)=e^{ipx}\notin\mathcal L^2(\Bbb R)$. This prompts the question: given that after measurement the state of the system becomes one of these seemingly problematic states, how can the time evolution be defined such that we are able to "re-enter" the space $\mathcal L^2(\Bbb R)$, in such a way that the time evolution is a continuous operation?
Answer: Generalized eigenfunctions are most naturally formalized as tempered distributions - linear maps from $\mathcal S\subset L^2(\mathbb R)$ to $\mathbb C$, where $\mathcal S$ is the Schwartz space of rapidly decreasing functions. For example, we can define the distribution
$$\mathcal F_k: \varphi \mapsto \frac{1}{\sqrt{2\pi\hbar}}\int \mathrm dx \ e^{-ikx} \varphi(x)$$
This looks exactly like the inner product $\langle f_k,\varphi\rangle$ with $f_k(x) = e^{ikx}/\sqrt{2\pi\hbar}$, except for the fact that $f_k\notin L^2(\mathbb R)$, as you say. However, this will provide a guiding intuition.
If an operator $\hat A$ is defined on the Schwartz space $\mathcal S$, we can extend its action$^\ddagger$ to a tempered distribution $D$ via
$$(\hat A D)[\varphi] = D[\hat A^\dagger\varphi]$$
This definition is motivated by the fact that if $D = \langle \psi,\cdot \rangle$ for some $\psi\in L^2(\mathbb R)$, then we should have $\hat A D = \langle \hat A \psi,\cdot \rangle = \langle \psi, \hat A^\dagger \cdot \rangle$.
This extension allows us to define a notion of a generalized eigenvector. Note that for $\hat P := -i\hbar \frac{d}{dx}$,
$$(\hat P \mathcal F_k)[\varphi] =\frac{1}{\sqrt{2\pi\hbar}} \int \mathrm dx \ e^{-ikx} \big(-i\hbar \varphi'(x)\big) = \frac{\hbar k}{\sqrt{2\pi\hbar}}\int\mathrm dx\ e^{-ikx}\varphi(x) = \hbar k \mathcal F_k[\varphi]$$
Therefore, $\mathcal F_k$ is a generalized eigenvector of $\hat P$ with eigenvalue $\hbar k$.
In developing this technology, we have also answered your question. If $\hat U_t = e^{-it\hat H/\hbar}$ is the time evolution operator, then the time evolution of $\mathcal F_k$ is given by $\hat U_t \mathcal F_k$. In the case of a free particle, this yields
$$\mathcal F_k(t) [\varphi] = \frac{1}{\sqrt{2\pi\hbar}}\int\mathrm dx\ e^{-ikx} e^{i\frac{\hbar k^2}{2m}t} \varphi(x)$$
which leads us to say somewhat less formally that the time evolution of $e^{ikx}$ yields $e^{ikx}e^{-i \frac{\hbar k^2}{2m} t}$.
$^\ddagger$Strictly speaking we should also specify that $\mathrm{range}(\hat A^\dagger)\subseteq \mathcal S$ as well. | {
"domain": "physics.stackexchange",
"id": 80354,
"tags": "quantum-mechanics, hilbert-space, momentum, hamiltonian, time-evolution"
} |
Universal quantum computation by Clifford gates plus magic state | Question: In the paper Universal quantum computation with ideal Clifford gates and noisy ancillas, it is claimed that a circuit composed of Clifford gates, plus a so-called "magic state", can perform any quantum computation.
This paper is based on the formalism developed for fault tolerance. However, the claim
above does not have to do with fault tolerance, nor with "noise".
Indeed, the section III, which discusses the universal quantum computation, does not even mention the "noise" and the magic state is simply a well-defined pure state. It looks like it is possible to build any circuit using Clifford gates and simulating the T gate by using the so-called magic state, which is simply $\alpha\left|0\right>+\beta\left|1\right>$ with well defined $\alpha$ and $\beta$ (together with a particular procedure and additional Clifford gates).
So I would like to understand what I miss, if I do not understand the discussion about the fault tolerance and noisy qubits. Looking in literature, I feel that I actually miss something. For example, here I read this sentence:
quantum computers with magic states will most likely have the vast majority of its usable qubits be used for the distillation of magic states
Thus it seems that this "distillation" is needed: is it simply the preparation of $\alpha\left|0\right>+\beta\left|1\right>$ with known $\alpha$ and $\beta$?
So I would like to have a sketch of the idea of the relation between the discussion on universal computation and fault tolerance, or a reference discussing this idea in more depth but without relying too much on the language of fault tolerance.
Answer: From Sec. V on in the Bravyi-Kitaev paper, it is all about noise - in particular magic state distillation, which is the process of distilling the magic state from many copies of some noisy state $\rho$.
Let us zoom out a bit. The magic state gadget replacing the $T$ gate looks like the following (the circuit figure is not reproduced here): the data qubit is coupled to the magic state by a Clifford operation and a measurement, with a possible Clifford correction applied afterwards.
If we can do the Clifford gates and the measurement fault-tolerantly, which we can for a suitable stabilizer code, this circuit is fault-tolerant. However, the problem is that the preparation of the magic state $|T\rangle = \frac{1}{\sqrt{2}}(|0\rangle + e^{i\pi/4}|1\rangle)$ might not be.
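One common variant of this gadget (CNOT from the data qubit onto the magic state, measure the ancilla in Z, apply the Clifford $S$ correction on the bad outcome) can be checked directly in a small state-vector simulation. This sketch is my own illustration, not code from the paper:

```python
import numpy as np

# Clifford resources: S gate, CNOT, Z-basis measurement, plus one copy
# of the magic state |T> = (|0> + e^{i*pi/4}|1>)/sqrt(2).
S = np.diag([1, 1j])
T = np.diag([1, np.exp(1j * np.pi / 4)])   # the gate we emulate (never applied in the circuit)
magic = np.array([1, np.exp(1j * np.pi / 4)]) / np.sqrt(2)

psi = np.array([0.6, 0.8], dtype=complex)  # arbitrary data state a|0> + b|1>

# |psi> (x) |magic>, then CNOT with the data qubit as control.
state = np.kron(psi, magic)
cnot = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
state = cnot @ state

def data_after_measuring_ancilla(state, outcome):
    """Normalized data-qubit state after the ancilla is measured in Z."""
    amps = state.reshape(2, 2)[:, outcome]   # rows index the data qubit, columns the ancilla
    return amps / np.linalg.norm(amps)

out0 = data_after_measuring_ancilla(state, 0)        # outcome 0: no correction needed
out1 = S @ data_after_measuring_ancilla(state, 1)    # outcome 1: Clifford S correction

target = T @ psi
fid0 = abs(np.vdot(out0, target))
fid1 = abs(np.vdot(out1, target))
print(fid0, fid1)   # both ~1.0
```

Both measurement branches reproduce $T|\psi\rangle$ up to a global phase, using only Clifford operations plus the consumed magic state.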
So how would you prepare such a state?
First, one can observe that $|T\rangle$ is the +1 eigenstate of a suitable Clifford unitary $U$. Hence, by preparing e.g. a logical $|0\rangle$ state and then measuring in its eigenbasis (applying $U^\dagger$ to $|0\rangle$, then measuring in the computational basis), we get either the +1 or -1 eigenstate as post-measurement state. Post-selecting on the +1 outcome results in $|T\rangle$. This is actually fault-tolerant, but the physical error rate can be quite high, so you would have to repeat very often (see e.g. the discussion on p. 3 in "Roads towards fault-tolerant universal quantum computation" by Campbell, Terhal, and Vuillot).
Alternatively, you prepare a noisy approximation $\rho$ to $|T\rangle$, somehow. Then you use a magic state distillation protocol to distill a better approximation to $|T\rangle$ from $\rho^{\otimes n}$. In fact, it is enough to use stabilizer protocols only, i.e. protocols which only use stabilizer circuits. This is also interesting from a foundational perspective: Which states can you actually distill and which not? This is discussed in the Bravyi-Kitaev paper. Unfortunately, magic state distillation is also very costly.
I think the discussion of fault-tolerance in Nielsen & Chuang is not such a bad reference to begin with. In particular, Chapter 10.6 and the section "Fault-tolerant $\pi/8$ gate" on page 485. Unfortunately, magic state distillation is not discussed there. | {
"domain": "quantumcomputing.stackexchange",
"id": 3450,
"tags": "resource-request, clifford-group, magic-states"
} |
Can a diamond pipe prevent water from freezing and bursting the pipe? | Question: I know a diamond vacuum blimp is not possible because it will implode under air pressure.
Is it possible for a diamond pipe to keep from bursting with frozen water?
I found nothing online except Black Diamond, WA, Blue Diamond Plumbing, Diamond Glass Co, etc.
Answer: Diamond isn't the strongest material (and certainly not the cheapest for a given strength). It is possible to construct vessels that can withstand the pressure created by the freezing ice (at least small ones). The vessel needs to withstand about 300 MPa. Looking online, you can find pipe manufacturers that make pipes with maximum pressures in that region.
It's possible you would have failures at other locations (like faucets that are not so strongly constructed) as the water in the pipe is pressurized. And you would probably be paying more for the pipes than it would cost to repair several failures in regular pipes. | {
"domain": "physics.stackexchange",
"id": 53673,
"tags": "pressure, water, material-science, stress-strain, states-of-matter"
} |
Time in 0 gravity points | Question: If being close to a supermassive body like a black hole makes time pass more slowly for us than for an observer at a point with a weaker gravitational field, then if we could be at a point in space with no gravitational influence, would a given time X on Earth correspond to a long time at that point?
I'm not talking about a Lagrangian point, where as I've studied, the field strengths cancel each other out. (I am a high school student from Spain).
If we could spend our lives at a point with this characteristic, would we be longer lived or would we still live ~85 earth years?
Has it been calculated what would be the time equivalence in such a place?
Answer:
makes time pass more slowly for us
This is a fundamental misunderstanding of time dilation, which only says anything about the relative rates that clocks run compared to a clock that is in your own frame of reference.
All observers "experience time" in the same way.
The second problem with your post is that "gravitational time dilation" is not directly connected to the gravitational field strength. What matters is the gravitational potential. The difference in the relative rates at which clocks run (according to observers with those clocks) depends on the difference in gravitational potential.
A region with "zero gravitational field" is just a place where the gradient of the potential is zero. It says nothing about the gravitational potential itself. For example, the gravitational field at the centre of the Earth is zero, but clocks there would appear to run slow compared with clocks on the Earth's surface.
An observer on (the surface of the) Earth will judge that a clock in orbit is running faster (ignoring any time dilation due to relative motion); a clock a long way from the Sun will run a bit faster still and one beyond our Galaxy a little bit faster again.
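To put a number on the galactic case, here is a rough sketch (my own, using assumed round values of $10^{11}$ solar masses interior to the Sun's orbit at 8 kpc) of the fractional rate difference $GM/rc^2$:

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg
pc = 3.086e16        # parsec, m
year = 3.156e7       # year, s

M = 1e11 * M_sun     # assumed mass interior to the Sun's Galactic orbit
r = 8000 * pc        # assumed Galactocentric radius of the Sun

frac = G * M / (r * c**2)     # fractional clock-rate difference
print(frac, frac * year)      # ~6e-7, i.e. roughly 20 seconds per year
```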
Edit: For example, the size of the effect can be approximated by saying a clock far from the Galaxy would run at $\sim 1 + GM/rc^2$ times the rate of a clock orbiting the Galaxy at the distance of the Sun (there is also time dilation due to relative motion, which I'm ignoring), where $M$ is the mass interior to the Sun's Galactic orbit and $r$ is the radius of that orbit. $M \sim 10^{11}$ solar masses and $r \sim 8000$ pc gives an increased rate of $1 + 6\times 10^{-7}$ (or about 20 seconds per year). | {
"domain": "astronomy.stackexchange",
"id": 6211,
"tags": "gravity, time, time-dilation, lagrange-point"
} |
What is a spectrometer, and why are they so useful in science? | Question: I've heard reference to many telescopes and spacecraft that have a device known as a spectrometer, and I'm curious: what is the purpose of these devices? What's the working principle behind them, and what do we use them for?
Answer: EM radiation, including light, is a spectrum of different wavelengths. Spectroscopy is the detailed analysis of a light signal by wavelength. Ordinary color images break up light into 3 channels (red, green, and blue), but spectroscopy is generally concerned with breaking up light into a higher number of bands (e.g. 10, 100, or more), and a spectrometer is the instrument that does just that.
The basic principle of spectrometry is simple: various methods (the most ordinary being the use of a prism) can be used to cause the different wavelengths of light to follow different paths, which can be used in combination with a monochromatic imaging sensor to record the spectrum. Alternately, multiple images of the same scene can be recorded while using different narrow band filters (either separate filters or a device which can be adjusted to pass through different wavelengths, such as a Fabry-Perot filter).
Spectrometry has multiple uses:
Composition
Ions of different elements have different emission spectra due to the differences in electron energy levels. This makes it possible to determine the elemental composition of objects that are significantly ionized such as stars (which are composed of high temperature plasma). Additionally, at lower temperatures molecules have characteristic absorption and emission spectra which can be used to determine the composition of lower temperature objects such as planets and asteroids.
Temperature
The large scale structure of a light spectrum will be dominated by the characteristics of the black body spectrum, making it possible to determine an object's temperature.
Motion
As mentioned above the composition of an object will result in a very characteristic spectrum. However, this spectrum will be shifted a certain amount one way or another depending on whether the object is moving away or towards us, due to doppler shifting. This makes it possible to measure the relative velocity of an object along the line of sight. By studying changes in an object's motion we can infer certain information about the object such as whether or not it is orbited by another otherwise unseen object. To date this is one of the most prolific methods for detecting extrasolar planets.
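As a concrete illustration of the radial-velocity idea: for $v \ll c$, the shift of a known spectral line gives $v \approx c\,\Delta\lambda/\lambda$. In this sketch the H-alpha rest wavelength is real, but the observed wavelength is made up:

```python
c = 2.998e8                # speed of light, m/s
lam_rest = 656.281e-9      # H-alpha rest wavelength, m
lam_obs = 656.303e-9       # hypothetical observed wavelength (invented shift)

# Non-relativistic Doppler formula: positive v means the source is receding.
v = c * (lam_obs - lam_rest) / lam_rest
print(v)                   # about 10 km/s away from us
```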
Since spectroscopy splits up a light signal into many tiny buckets it's very helpful to have as much light to work with as possible, which is why most of the largest telescopes in the world (such as the Keck or VLT telescopes) spend a lot of their time collecting spectra and have very sophisticated spectrometers.
The invention of CCDs and other electronic imagers has been a gigantic boon to spectrometry, since such devices have very high quantum efficiency (meaning the vast majority of photons from the source light are converted into usable signals) and can have fairly flat spectral response curves. Most importantly, they are already finely divided into different bins spatially and they contain a huge number of individual detectors (pixels).
One of the most interesting advances in modern spectroscopy is the increasing predominance of "imaging spectrometers" in interplanetary spacecraft and observatories. Instead of merely collecting multiple color channel data for each pixel in an image these instruments collect entire spectra for every pixel. This dramatically increases the amount of data collected and the speed of data collection by a spacecraft many fold, making it possible to extract a lot more information from a single view of a planet, moon, rock or what-have-you than was possible before. A few examples of imaging spectrometers would be the Mars Reconnaissance Orbiter's CRISM, the JWST's NIRSpec, and Dawn's VIR instrument. | {
"domain": "physics.stackexchange",
"id": 3000,
"tags": "spectroscopy, instrument"
} |
Simple RabbitMQ client wrapper | Question: After reading this question, I've realized that I can do a lot to improve the quality of my question, so I've edited this question quite a bit.
I've been teaching myself F# in my spare time off and on for the last 6 months. I've finally started getting comfortable enough with the language to feel that a lot of my code could be much better. The problem is, I don't know what changes to make.
Here's what I'm interested in:
I'm using higher order functions to return functions for interacting with a specific message queue. Is this a good design? Would another F# developer feel comfortable with this?
Does this fit the idiomatic style of F#?
If you know RabbitMQ, are there any bugs which I may be creating here.
Here's the context of the little block of code:
I'm doing a lot of experiments with messaging systems and I've been using RabbitMQ as a messaging framework. There's a .NET library for RabbitMQ but it's written in and for C#. I can use it in F# but it feels clunky. I wanted a small wrapper around the RabbitMQ library which will convert it into a more functional interface. Also, this will hopefully make it very easy to use RabbitMQ in an F# program.
My wrapper handles the following for RabbitMQ:
Connect to a RabbitMQ server
Create a function which will let you read one message from a queue
Create a function which will write a message to a queue
For both 2 and 3, if the queue doesn't exist, the queue will be created (that's the declareQueue)
module Client =
    let connectToRabbitMqServerAt address =
        let factory = new ConnectionFactory(HostName = address)
        factory.CreateConnection()

    let openChannelOn (connection:IConnection) = connection.CreateModel()

    let private declareQueue (channel:IModel) queueName =
        channel.QueueDeclare( queueName, false, false, false, null )

    let private publishToQueue (channel:IModel) queueName (message:string) =
        let body = Encoding.UTF8.GetBytes(message)
        channel.BasicPublish("", queueName, null, body)

    let createQueueReader channel queue =
        declareQueue channel queue |> ignore
        fun () ->
            let ea = channel.BasicGet(queue, true)
            if ea <> null then
                let body = ea.Body
                let message = Encoding.UTF8.GetString(body)
                Some message
            else
                None

    let createQueueWriter channel queue =
        declareQueue channel queue |> ignore
        publishToQueue channel queue
An example use case would be:
// open a connection to a RabbitMQ broker
let connection = connectToRabbitMqServerAt "localhost"
let myChannel = openChannelOn connection
// Connect to a queue for writing
let writeToHelloQueue = createQueueWriter myChannel "hello"
// write the message "Hello, World" to the queue "hello"
"Hello, World" |> writeToHelloQueue
Answer: Overall, this code looks very good. Just two nitpicks here:
Give your function definitions lines to themselves:
let openChannelOn (connection:IConnection) = connection.CreateModel()
Use match statements unless you are checking a boolean value directly:
if ea <> null then
    let body = ea.Body
    let message = Encoding.UTF8.GetString(body)
    Some message
else
    None
Becomes:
match ea with
| null -> None
| _ ->
    let body = ea.Body
    let message = Encoding.UTF8.GetString(body)
    Some message
"domain": "codereview.stackexchange",
"id": 23672,
"tags": "beginner, f#"
} |
Not understanding how to eval a VAE model? | Question: As I understand it, the VAE is a model that learns P(x) of the data x (for a final job like image generation).
When I train it, it inputs x from the dataset to get mu and log_var from the encoder, draws a sample z from them via z = mu + standard_normal * torch.exp(0.5 * log_var), and gets x' from z via the decoder.
So when I want to use/eval the model I shouldn't have the original x, right? If I already have x, this job is meaningless. Then how can I get z if I don't have x (and so can't get mu and var from the encoder)?
I googled and looked at some code on GitHub; normally people eval this model by putting x into it, but I don't get it.
Answer: Usually, VAEs are trained as image generators. In that use case, at inference time you only use the decoder part: you just sample a random vector $z$ and give it to the decoder to obtain the image. | {
"domain": "datascience.stackexchange",
"id": 12115,
"tags": "autoencoder, vae"
} |
Priority based categorization using pandas/python | Question: I have invoice and code data in the below Dataframes
Invoices
df = pd.DataFrame({
'invoice':[1,1,2,2,2,3,3,3,4,4,4,5,5,6,6,6,7],
'code':[101,104,105,101,106,106,104,101,104,105,111,109,111,110,101,114,112],
'qty':[2,1,1,3,2,4,7,1,1,1,1,4,2,1,2,2,1]
})
+---------+------+-----+
| invoice | code | qty |
+---------+------+-----+
| 1 | 101 | 2 |
+---------+------+-----+
| 1 | 104 | 1 |
+---------+------+-----+
| 2 | 105 | 1 |
+---------+------+-----+
| 2 | 101 | 3 |
+---------+------+-----+
| 2 | 106 | 2 |
+---------+------+-----+
| 3 | 106 | 4 |
+---------+------+-----+
| 3 | 104 | 7 |
+---------+------+-----+
| 3 | 101 | 1 |
+---------+------+-----+
| 4 | 104 | 1 |
+---------+------+-----+
| 4 | 105 | 1 |
+---------+------+-----+
| 4 | 111 | 1 |
+---------+------+-----+
| 5 | 109 | 4 |
+---------+------+-----+
| 5 | 111 | 2 |
+---------+------+-----+
| 6 | 110 | 1 |
+---------+------+-----+
| 6 | 101 | 2 |
+---------+------+-----+
| 6 | 114 | 2 |
+---------+------+-----+
| 7 | 112 | 1 |
+---------+------+-----+
Codes
Hot = [103,109]
Juice = [104,105]
Milk = [106,107,108]
Dessert = [110,111]
My task is to add a new column, category, based on the following priorities:
If any invoice has more than \$10\$ qty it should be categorized as "Mega".
E.g. The total qty of invoice 3 is \$12\$ (\$4 + 7 + 1\$).
If any of the invoice's codes are in the milk list; the category should be "Healthy".
E.g. Invoice 2 contains the code 106 which is in the milk list. So the entire invoice is categorized as Healthy regardless of other items.
If any of the invoice's codes are in the juice list;
If the total qty of juices is equal to 1; the category should be "OneJuice".
E.g. Invoice 1 has code 104 and qty 1.
Otherwise; the category should be "ManyJuice".
E.g. Invoice 4 has codes 104 and 105 with a total qty of \$2\$ (\$1 + 1\$).
If any of the invoice's codes are in the hot list; the category should be "HotLovers".
If any of the invoice's codes are in the dessert list; the category should be "DessertLovers".
All other invoices should be categorized as "Others".
My desired output is as below.
+---------+------+-----+---------------+
| invoice | code | qty | category |
+---------+------+-----+---------------+
| 1 | 101 | 2 | OneJuice |
+---------+------+-----+---------------+
| 1 | 104 | 1 | OneJuice |
+---------+------+-----+---------------+
| 2 | 105 | 1 | Healthy |
+---------+------+-----+---------------+
| 2 | 101 | 3 | Healthy |
+---------+------+-----+---------------+
| 2 | 106 | 2 | Healthy |
+---------+------+-----+---------------+
| 3 | 106 | 4 | Mega |
+---------+------+-----+---------------+
| 3 | 104 | 7 | Mega |
+---------+------+-----+---------------+
| 3 | 101 | 1 | Mega |
+---------+------+-----+---------------+
| 4 | 104 | 1 | ManyJuice |
+---------+------+-----+---------------+
| 4 | 105 | 1 | ManyJuice |
+---------+------+-----+---------------+
| 4 | 111 | 1 | ManyJuice |
+---------+------+-----+---------------+
| 5 | 109 | 4 | HotLovers |
+---------+------+-----+---------------+
| 5 | 111 | 2 | HotLovers |
+---------+------+-----+---------------+
| 6 | 110 | 1 | DessertLovers |
+---------+------+-----+---------------+
| 6 | 101 | 2 | DessertLovers |
+---------+------+-----+---------------+
| 6 | 114 | 2 | DessertLovers |
+---------+------+-----+---------------+
| 7 | 112 | 1 | Others |
+---------+------+-----+---------------+
I have got the following. It works but it seems pretty naive and not at all Pythonic.
When I apply it to the original dataset the code is also very slow.
# Calculating Priority No.1
L = df.groupby(['invoice'])['qty'].transform('sum') >= 10
df_Large = df[L]['invoice'].to_frame()
df_Large['category'] = 'Mega'
df_Large.drop_duplicates(['invoice'], inplace=True)
# Calculating Priority No.2
df_1 = df[~L] # removing Priority No.1 calculated above
M = (df_1['code'].isin(Milk)
.groupby(df_1['invoice'])
.transform('any'))
df_Milk = df_1[M]['invoice'].to_frame()
df_Milk['category'] = 'Healthy'
df_Milk.drop_duplicates(['invoice'], inplace=True)
# Calculating Priority No.3
# 3.a Part -1
df_2 = df[~L & ~M] # removing Priority No.1 & 2 calculated above
J_1 = (df_2['code'].isin(Juice)
.groupby(df_2['invoice'])
.transform('sum') == 1)
df_SM = df_2[J_1]['invoice'].to_frame()
df_SM['category'] = 'OneJuice'
df_SM.drop_duplicates(['invoice'], inplace=True)
# 3.b Part -2
J_2 = (df_2['code'].isin(Juice)
.groupby(df_2['invoice'])
.transform('sum') > 1)
df_MM = df_2[J_2]['invoice'].to_frame()
df_MM['category'] = 'ManyJuice'
df_MM.drop_duplicates(['invoice'], inplace=True)
# Calculating Priority No.4
df_3 = df[~L & ~M & ~J_1 & ~J_2] # removing Priority No.1, 2 & 3 (a & b) calculated above
H = (df_3['code'].isin(Hot)
.groupby(df_3['invoice'])
.transform('any'))
df_Hot = df_3[H]['invoice'].to_frame()
df_Hot['category'] = 'HotLovers'
df_Hot.drop_duplicates(['invoice'], inplace=True)
# Calculating Priority No.5
df_4 = df[~L & ~M & ~J_1 & ~J_2 & ~H ] # removing Priority No.1, 2, 3 (a & b) and 4 calculated above
D = (df_4['code'].isin(Dessert)
.groupby(df_4['invoice'])
.transform('any'))
df_Dessert = df_4[D]['invoice'].to_frame()
df_Dessert['category'] = 'DessertLovers'
df_Dessert.drop_duplicates(['invoice'], inplace=True)
# merge all dfs
category = pd.concat([df_Large,df_Milk,df_SM,df_MM,df_Hot,df_Dessert], axis=0,sort=False, ignore_index=True)
# Final merge to the original dataset
df = df.merge(category,on='invoice', how='left').fillna(value='Others')
Answer: Your code is pretty impressive. Many Python programmers don't know how to use pandas as well as you. Your code might not look very "Pythonic", but you did a great job utilizing vectorized methods with indexing. In this answer, I include one section on Python code conventions and a second attempting to optimize your code.
Python Code Conventions
Many companies have standardized style guides that make code easier to read. This is invaluable when many people write to the same code base. Without consistency, the repo would degrade to a mess of idiosyncrasies.
You should consider adopting the following code conventions to make your code easier to read:
Follow standard variable naming conventions: Google Python Style Guide On Naming
Include a space after commas: Google Python Style Guide On Spaces
# most python programmers use CaseLikeThis (pascal case) for class names
# constants are often written in CASE_LIKE_THIS (snake case)
SODA = [101, 102]
HOT = [103, 109]
JUICE = [104, 105] # remember spaces after commas
MILK = [106, 107, 108]
DESSERT = [110, 111]
Attempt to Optimize
To optimize your code, you should time how long each step takes. This can be done by checking the clock before and after a segment of code.
import time
t0 = time.time() # check clock before (milliseconds elapsed since jan 1, 1970)
# segment you want to measure; something like your group by or merge...
t1 = time.time() # check clock after
time_to_run_step = t1 - t0
By measuring how long each step takes to run, you can focus your energy on optimizing the slowest steps. For example, optimizing a 0.1 second operation to be 100x faster is less valuable than optimizing a 10 second operation to be 2x faster.
When thinking how to optimize your code, two questions came to mind:
Can we apply the priorities in backward order to avoid filtering already categorized priorities?
Can we perform all the group by work at the same time?
Group by and merge are expensive operations since they generally scale quadratically (# of invoices X # of codes). I bet these are the slowest steps in your code, but you should time it to check.
# Act 1: set up everything for the big group by
# priority 1
# will be setup at the end of Act 2
# priority 2
df['milk'] = df['code'].isin(MILK)
# priority 3.a
# priority 3.b
juice = df['code'].isin(JUICE)
df['juice_qty'] = df['qty']
df.loc[~juice, 'juice_qty'] = 0 # I thought df['juice_qty'][~juice] was intuitive, but it gave a warning https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
# distinguish single from many juice in Act 2
# priority 4
df['hot'] = df['code'].isin(HOT)
# priority 5
df['dessert'] = df['code'].isin(DESSERT)
# Act 2: the big group by and merge
invoices = df.groupby(['invoice']).agg({
'qty': 'sum',
'milk': 'any',
'juice_qty': 'sum',
'hot': 'any',
'dessert': 'any',
}).rename(columns={
'qty': 'total', # this is renamed because joining with duplicate names leads to qty_x and qty_y
'juice_qty': 'juice_total',
})
# priority 1
invoices['mega'] = invoices['total'] >= 10
# priority 3.a
# priority 3.b
invoices['one_juice'] = invoices['juice_total'] == 1
invoices['many_juice'] = invoices['juice_total'] > 1
df = df.merge(invoices, on='invoice', how='left')
# Act 3: apply the categories
# apply the categories in reverse order to overwrite less important with the more important
df['category'] = 'Others'
df.loc[df['dessert_y'], 'category'] = 'DessertLovers'
df.loc[df['hot_y'], 'category'] = 'HotLovers'
df.loc[df['many_juice'], 'category'] = 'ManyJuice'
df.loc[df['one_juice'], 'category'] = 'OneJuice'
df.loc[df['milk_y'], 'category'] = 'Healthy'
df.loc[df['mega'], 'category'] = 'Mega'
df = df[['invoice', 'code', 'qty', 'category']] # get the columns you care about
@Tommy and @MaartenFabré noticed a bug with how single and many juice was categorized. I edited this answer with a correction.
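As an alternative to the chained `df.loc` assignments in Act 3, `numpy.select` applies the first matching condition per row, so listing the conditions in priority order expresses the precedence directly. This is my own sketch on the sample data, not code from the linked answers:

```python
import numpy as np
import pandas as pd

HOT, JUICE, MILK, DESSERT = [103, 109], [104, 105], [106, 107, 108], [110, 111]

df = pd.DataFrame({
    'invoice': [1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 6, 6, 6, 7],
    'code': [101, 104, 105, 101, 106, 106, 104, 101, 104, 105, 111, 109, 111, 110, 101, 114, 112],
    'qty': [2, 1, 1, 3, 2, 4, 7, 1, 1, 1, 1, 4, 2, 1, 2, 2, 1],
})

# Per-invoice aggregates, broadcast back to every row with transform.
total = df.groupby('invoice')['qty'].transform('sum')
milk = df['code'].isin(MILK).groupby(df['invoice']).transform('any')
juice_total = df['qty'].where(df['code'].isin(JUICE), 0).groupby(df['invoice']).transform('sum')
hot = df['code'].isin(HOT).groupby(df['invoice']).transform('any')
dessert = df['code'].isin(DESSERT).groupby(df['invoice']).transform('any')

# np.select picks the FIRST condition that matches, so listing the
# conditions in priority order encodes the required precedence.
df['category'] = np.select(
    [total >= 10, milk, juice_total == 1, juice_total > 1, hot, dessert],
    ['Mega', 'Healthy', 'OneJuice', 'ManyJuice', 'HotLovers', 'DessertLovers'],
    default='Others',
)
print(df.drop_duplicates('invoice')[['invoice', 'category']].to_string(index=False))
```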
Edit: There are quite a few answers for this question, spanning into Stack Overflow as well. Below is a summary as of 09/20/2020.
original Priority based categorization using pandas/python
one_group_by https://codereview.stackexchange.com/a/249481/230673
np_select https://stackoverflow.com/a/63947686/14308614
np_select_where https://codereview.stackexchange.com/a/249586/230673
https://codereview.stackexchange.com/a/249486/230673 was not plotted because the time complexity was different
Performance was plotted using the code from https://stackoverflow.com/a/63947686/14308614 | {
"domain": "codereview.stackexchange",
"id": 39454,
"tags": "python, performance, beginner, python-3.x, pandas"
} |
Why is the phase of the FFT not exactly 0 degrees for a cosine and 90 degrees for sine wave? | Question: Let's say I have two signals. The first is a cosine wave and the second is a sine wave. Each oscillates at 0.01 Hz. The sample rate is 1 Hz and the length of time series is 1000 seconds. Each has an amplitude of 1.
My understanding is that an FFT of these two signals should recover a spike in the amplitude spectrum of 0.5 at 0.01 Hz (and similarly at the corresponding negative frequency). This all makes sense and I can get this to work as expected.
But the phase of the FFT is a bit perplexing. My expectation is that the cosine wave will give a phase of 0° at 0.01 Hz, and the sine wave will give a phase of 90° at 0.01 Hz. However, the result I get gives a phase of 3.6° for cosine, and 86.4° for sine. (The negative frequencies are complex conjugates).
Why can the FFT recover the precise amplitudes, but can't do so with the phases? Is there some reason for this? Is it just some sort of numerical or indexing issue or is there some deeper reason? Is my Fourier frequency list incorrect and I'm not actually sampling the spectrum at exactly 0.01 Hz?
MATLAB code to replicate is below.
Any help is appreciated.
f = 0.01; %signal frequency
fs = 1; %sample rate
dt = 1./fs; %length of sample
t = (dt:dt:1000)'; %time vector
df = fs/length(t); %frequency spacing
fAxis = (0:df:(fs-df)) - (fs-mod(length(t),2)*df)/2; %frequency axis with negative freqs
b1 = cos(2*pi*f*t); %first time series signal
b2 = sin(2*pi*f*t); %second time series signal
%FFT each signal and scale
B1 = fftshift((fft(b1)./length(fAxis)));
B2 = fftshift((fft(b2)./length(fAxis)));
%Find index of 0.01 frequency
indf = find(abs(fAxis-0.01)<10^-9);
%Magnitudes look okay. Each returns 0.5
abs(B1(indf))
abs(B2(indf))
%But phases???
angle(B1(indf))*180/pi
angle(B2(indf))*180/pi
Answer: Your time vector is slightly off: it should start at t=0 and end at t=999 (this latter modification keeps the length of the FFT at 1000 to keep a resolution $d_f = 1/1000$, allowing the frequency component at $f = 0.01\texttt{Hz}$ to fall in one bin). Starting one sample late multiplies the bin by $e^{i 2\pi f \, dt}$, i.e. shifts the phase by $360° \times 0.01 \times 1 = 3.6°$, which is exactly the discrepancy you observed:
t = (0:dt:1000-1/fs)'; %time vector
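The same check translated to NumPy (my own sketch, not part of the original answer), with the time vector starting at $t = 0$:

```python
import numpy as np

fs, f, N = 1.0, 0.01, 1000
t = np.arange(N) / fs                 # starts at t = 0, like the corrected MATLAB vector
b1 = np.cos(2 * np.pi * f * t)
b2 = np.sin(2 * np.pi * f * t)

k = int(round(f * N / fs))            # FFT bin holding 0.01 Hz (k = 10)
B1 = np.fft.fft(b1) / N
B2 = np.fft.fft(b2) / N

ang1 = np.degrees(np.angle(B1[k]))    # cosine phase, ~0 degrees
ang2 = np.degrees(np.angle(B2[k]))    # sine phase, ~-90 degrees
print(ang1, ang2)
```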
Result:
ans =
8.8369e-14
ans =
-90.0000 | {
"domain": "dsp.stackexchange",
"id": 12273,
"tags": "fourier-transform, frequency-spectrum, phase"
} |
Is the complexity of this algorithm O($\sqrt{n}$) or linear? | Question: Let's say I have two identical jars and I want to find the height at which the jars will break when dropped from various heights. I can drop the jars from height increments using steps on a staircase. I want to solve this problem sublinearly.
I decided to increment using square numbers (1, 4, 9, 16..etc) for a complexity of O($\sqrt{n}$), where n is the number of steps on the stairs, so I know that I have an upper and lower bound once the first jar breaks. If the jar breaks at 16, I know that the height that it breaks at is between 9 + 1 and 16. If I were to run the algorithm linearly (on the second jar) from 10 to 16, would this make my algorithm linear?
Answer: No, it would not make the algorithm linear. The worst-case scenario would be $n = m^2 + 1$, where $m$ is some positive integer. In this case, you have an additional $(m+1)^2 - m^2 = 2m + 1$ many tries (since the next square after $m^2$ is $(m+1)^2$ and $n < (m+1)^2$), which is roughly equal to $2\sqrt{n} \in \Theta(\sqrt{n})$. | {
"domain": "cs.stackexchange",
"id": 13067,
"tags": "algorithms"
} |
X-Ray crystallography using Bragg's Law | Question: I was looking up X-Ray crystallography using Bragg's Law:
$2d\sin\theta = n\lambda$
and I can understand the values of everything except this integer value $n$.
As far as my research goes, $n$ is used to describe the atom spacing in the crystal lattice, but I don't understand how you'd express $n$ or how it would describe it.
Could someone please explain this to me please?
Note: diagrams tend to be very useful in developing my understanding and if anyone has any reference to a video that might help as well. Thanks.
Answer: EDIT:
I was wrong. The problem is as follows:
You can get several peaks for the same plane ($n=1$ peak, $n=2$ peak etc.). So if you, after measuring angles and making calculations, get plane distances $d$ and $\frac{d}{2}$, $\frac{d}{2}$ is just $n=2$ peak and $d$ is $n=1$ peak of the same set of diffraction planes.
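To make the peak-order point concrete, here is a small calculation solving Bragg's law for $\theta$; the wavelength and spacing are assumed example values (roughly Cu K-alpha and an arbitrary lattice plane), not taken from the question:

```python
import math

lam = 1.54  # assumed X-ray wavelength in angstroms (roughly Cu K-alpha)

def bragg_angle(d, n=1):
    """Bragg angle theta in degrees for plane spacing d (angstroms) and order n."""
    return math.degrees(math.asin(n * lam / (2 * d)))

d = 3.0  # assumed plane spacing in angstroms
print(bragg_angle(d, n=2))      # second-order (n=2) peak of spacing d
print(bragg_angle(d / 2, n=1))  # first-order (n=1) peak of spacing d/2: same angle
```

Both calls give the same angle, which is exactly the ambiguity described above: an $n=2$ reflection from spacing $d$ lands where an $n=1$ reflection from spacing $d/2$ would.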
Also, here is a good explanation:
http://www.bruker-axs.de/fileadmin/user_upload/xrfintro/sec1_8.html | {
"domain": "physics.stackexchange",
"id": 3514,
"tags": "x-ray-crystallography, braggs-law"
} |
Calcium sulfate soluble in water | Question: In the volumetric estimation of calcium in a given solution as calcium oxalate, we convert the calcium oxalate to oxalic acid by dissolving the former in hot (~70 °C) 2 N sulfuric acid solution. This is titrated against $\ce{KMnO4}$.
The reaction would be
$\ce{H2SO4 + CaC2O4 -> H2C2O4 + CaSO4}$
The calcium sulfate formed is normally not soluble in water. But here we get a clear solution which we titrate against $\ce{KMnO4}$.
Why does the calcium sulfate not precipitate in the solution?
Answer: The overall reaction is really better given as: $$\ce{2H2SO4 + CaC2O4 -> H2C2O4 + Ca^{2+} + 2HSO4^{-}}$$ since at such a low pH the predominant species of sulfuric acid is $\ce{HSO4^{-}}$.
Better yet, the reaction could be written as $$\ce{2H+ + CaC2O4 ->[{2 N H2SO4, 70 °C}] H2C2O4 + Ca^{2+}}$$
As was already pointed out in the comments, calcium sulfate has a fair solubility of about 0.2 g per 100 ml of water at 20 °C, and $K_\text{sp} = 4\times10^{-5}$.
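As a rough cross-check of those numbers, the ideal-solution estimate $s = \sqrt{K_\mathrm{sp}}$ gives the right order of magnitude; it comes out below the measured value because it ignores ion pairing (dissolved neutral $\ce{CaSO4}$) and activity effects:

```python
import math

ksp = 4e-5           # Ksp of CaSO4 quoted above
molar_mass = 136.14  # g/mol for anhydrous CaSO4

s = math.sqrt(ksp)             # ideal molar solubility of free ions, mol/L
g_per_100ml = s * molar_mass / 10
print(round(g_per_100ml, 3))   # ~0.086 g per 100 mL, same order as the measured 0.2 g
```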
The solubility varies with temperature. The solubility of calcium sulfate dihydrate increases from 0.223 g/100 ml at 0 °C to about 0.265 g/100 ml at 40 °C, then decreases to 0.205 g/100 ml at 100 °C. | {
"domain": "chemistry.stackexchange",
"id": 10458,
"tags": "physical-chemistry, aqueous-solution, analytical-chemistry, titration"
} |
Nabla Operator in Kinetic Energy Hamiltonian in 2nd Quantization | Question: Why can I, in the 2nd quantisation representation of a kinetic energy Hamiltonian
$$
H=\frac { -\hbar ^ { 2 } } { 2 m } \nabla^2
$$
write the Laplace (=Nabla$^2$) operator out like this?
$$
\hat { T } = \sum _ { i j } t _ { i j } \hat { a } _ { i } ^ { \dagger } \hat { a } _ { j } = \sum _ { i j } \hat { a } _ { i } ^ { \dagger } \hat { a } _ { j } \int d \mathbf { r } \phi _ { i } ^ { * } ( \mathbf { r } ) \left[ - \frac { \hbar ^ { 2 } \nabla ^ { 2 } } { 2 m } \right] \phi _ { j } ( \mathbf { r } ) = \underline{\frac { \hbar ^ { 2 } } { 2 m } \int d \mathbf { r } \nabla \hat { \psi } ^ { \dagger } ( \mathbf { r } ) \nabla \hat { \psi } ( \mathbf { r } )}
$$
Is $\nabla^2$ not only acting on the right wave function?
Answer: This is just multivariate integration by parts:
\begin{align}
\int_V f\, \nabla^2 g \,d^3\mathbf{x} = &\int_{V}\boldsymbol\nabla\cdot\left(f\boldsymbol{\nabla} g\right) d^3\mathbf{x} -\int_V\left(\boldsymbol{\nabla} f\right)\cdot\left(\boldsymbol{\nabla} g\right)d^3\mathbf{x} \\= &\int_{\partial V}f\left(\boldsymbol{\nabla} g\right) \cdot d^3\mathbf{A} -\int_V\left(\boldsymbol{\nabla} f\right)\cdot\left(\boldsymbol{\nabla} g\right)d^3\mathbf{x},
\end{align}
Since the wavefunction goes to zero at infinity, the boundary term vanishes, and you're left with
$$
\int_V f\, \nabla^2 g \,d^3\mathbf{x} = -\int_V\left(\boldsymbol{\nabla} f\right)\cdot\left(\boldsymbol{\nabla} g\right)d^3\mathbf{x}
$$ | {
"domain": "physics.stackexchange",
"id": 60077,
"tags": "quantum-mechanics, statistical-mechanics, operators, second-quantization"
} |
is_subset Python implementation | Question: I wanted to ask for a code review for my implementation of Set:
#!python
from linkedlist import Node, LinkedList
from hashtable import HashTable
class Set(object):
def __init__(self, elements=None):
# initialize the size of the set; starts with an initial size of 10
if elements is None:
initial_size = 10
elements = []  # avoid iterating over None below
else:
initial_size = len(elements)
#
self.data = HashTable(initial_size)
for item in elements:
if self.data.contains(item):
continue
else:
self.data.set(item, None)
def __str__(self):
return str(self.data.keys())
def set_contents(self):
"""Get the contents of the set [key inside a HashTable]"""
return self.data.keys()
def size(self):
"""Find size of the set"""
return self.data.size
def contains(self, element):
"""return a boolean contained inside of the set [key inside a HashTable]"""
"""Best case running time for contains is O(1) near beginning of set
Worst case running time for contains O(n) near end of set """
return self.data.contains(element)
def add(self, element):
"""Add the element of the set"""
# O (1)
"""Best case running time: O(1) near beginning of list of keys
Worst case running time: O(n) near end of list of keys """
if self.contains(element):
return
else:
self.data.set(element, None)
def remove(self, element):
# Raise value error if not available
if self.contains(element):
self.data.delete(element)
else:
raise ValueError("Element not in set")
def union(self, second_set):
"""Return a new set, that is a union of first_set and second_set"""
# O(n) since it goes through every item and has contains"""
# create a new set that has the set contents
result_set = self.set_contents()
for item in second_set.set_contents():
if self.contains(item):
continue
else:
result_set.append(item)
return Set(result_set)
def intersection(self, second_set):
"""Return a new set, that is intersection of this set and second_set."""
"""O(n) since it goes through every item and has contains"""
# create an empty set
result_set = []
for item in second_set.set_contents():
# check if the set contains the item
if self.contains(item):
result_set.append(item)
# else:
# return ValueError("Set is empty")
return Set(result_set)
Is_Subset
def is_subset(self, second_set):
"""Return True if second set is a subset of this set,else False"""
# O(n); goes through every item and has contains
# Comparing the size of the 2 sets
# to make sure if set is in the second set
# for bucket in self.buckets:
# for element in bucket.iterate():
# if not other.contains(element)
if self.size() <= second_set.size():
for item in self.set_contents():
if second_set.contains(item):
continue
else:
return False
return True
else:
return False
# set_set_test = Set()
set_test = Set([1, 2, 3, 4, 5,6,7,8,9,10])
set_test2 = Set([ 11,12])
test_intersection = set_test.intersection(set_test2)
print(test_intersection)
set_test2 = Set([6, 7, 8,9,10])
print(set_test)
print(set_test2)
set_test.add(1)
print(set_test)
print(set_test.intersection(set_test2))
print(set_test.union(set_test2))
print(set_test.is_subset(set_test2))
set_test = Set([1, 2, 3])
set_test2 = Set([1, 2, 3, 4, 5, 6, 7, 8])
print(set_test.is_subset(set_test2))
set_test = Set([1, 2, 3])
set_test2 = Set([1, 2, 3])
print(set_test.is_subset(set_test2))
set_test = Set([1, 2, 3, 4])
set_test2 = Set([1, 2, 3])
print(set_test.is_subset(set_test2))
set_test = Set([1, 2, 3, 4, 5])
set_test2 = Set([4, 5, 6, 7, 8])
print(set_test.is_subset(set_test2))
Answer: Since you explicitly ask about your is_subset implementation in the title, I will review this first, and the rest of the code afterwards.
Bug
As I already commented on the question, the method works the opposite of what its documentation says, and also of what the method name implies. It returns True if this set is a subset of the second_set. If the code is correct and the comment wrong (which I don't think is the case), then I suggest fixing the comment and changing the method name to is_subset_of(self, superset_candidate). If the method name and comment are correct, then after fixing the code I recommend to rename the parameter second_set to subset_candidate.
Comments
You have code commented out inside your method. That can make the reader wonder why that is. Is it commented out because it can be deleted? Will it be needed later? If a reader looks for a bug in the method, was the bug caused by the commented code that is actually needed?
Logic
This looks complicated as it is nested quite deeply:
if self.size() <= second_set.size():
for item in self.set_contents():
if second_set.contains(item):
continue
else:
return False
return True
else:
return False
You can improve it by returning early if the first condition is False, and you can omit the continue by inverting the if-condition:
if not self.size() <= second_set.size():
return False
for item in self.set_contents():
if not second_set.contains(item):
return False
return True
The Rest of the Code
from linkedlist import Node, LinkedList
It seems like you are not using that import.
In your __init__ method you use a # to separate two blocks. Rather just use a blank line instead.
You use data as the name of your variable that holds the elements. elements might be a better name, as data could be anything and you expect a set to contain elements, not data.
for item in elements:
if self.data.contains(item):
continue
else:
self.data.set(item, None)
Checking whether the item is already a key in the HashTable should not be needed. Just overwriting it with the value None if it is already there is basically the same effect as skipping it. So the 5 lines could be shortened to 2 lines.
def set_contents(self):
"""Get the contents of the set [key inside a HashTable]"""
return self.data.keys()
This looks like it should be get_contents. I see that it is supposed to mean "the contents of the set", but since getters and setters are common in many programming languages the reader would expect a method with this name to change a value named contents, i. e. to set the value. You could name it get_contents() instead, or just contents(). But: Accessing the contents like this is not a common operation for a set. A set should already represent the elements itself, giving access to them via iteration. If you do want to implement this method to access the contents directly, I suggest the name to_list. Generally I recommend having a look at the Python documentation for sets to then reimplement the same interface.
As a side note, I don't quite understand what your comment is trying to say, especially that part in brackets. If the intention is to tell the reader how the method is implemented, then that does not belong in the method's doc comment. How the interface is realized by the implementation should not be important to the user.
def size(self):
"""Find size of the set"""
return self.data.size
Python supports using the len(x) function for types that implement the __len__ method. It is also part of the actual interface of Set in Python. Being consistent with how you check for a collection's size or length can be important, because a user of your type might not expect it to have a size() method and could get an error when calling len() for your type.
Apart from that, the comment is misleading, as "find" is typically used for operations that search through many items, like in a database, and simply returning a value is referred to as "getting" the value.
def contains(self, element):
"""return a boolean contained inside of the set [key inside a HashTable]"""
"""Best case running time for contains is O(1) near beginning of set
Worst case running time for contains O(n) near end of set """
return self.data.contains(element)
Here your comment is quite confusing again. A better comment would be something like """returns True if the element is in the Set, otherwise returns False""".
Also, have a look at the __contains__ method in Python, which enables you to check if a collection contains an element by using a in b.
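As a minimal standalone sketch of those two hooks (the class name is mine, and a plain dict stands in for the custom HashTable used in the question):

```python
class MiniSet:
    """Tiny set demonstrating the __len__ and __contains__ special methods."""

    def __init__(self, elements=None):
        self._data = dict.fromkeys(elements or [])

    def __len__(self):                # enables len(s)
        return len(self._data)

    def __contains__(self, element):  # enables `x in s`
        return element in self._data

s = MiniSet([1, 2, 3])
print(len(s))  # 3
print(2 in s)  # True
print(4 in s)  # False
```

With these in place, callers use the same `len(...)` and `in` syntax they would use with Python's built-in set.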
def add(self, element):
"""Add the element of the set"""
# O (1)
"""Best case running time: O(1) near beginning of list of keys
Worst case running time: O(n) near end of list of keys """
if self.contains(element):
return
else:
self.data.set(element, None)
Like in your __init__ method, you skip or return if the value is already contained. However, if it is contained, overwriting it with None would make your method shorter and more readable, and would probably even be more efficient, since you wouldn't need to look through all of your keys to check if it is already contained. Your first comment should say """Add element to the set""", probably just a typo again that it says "of the set". Then your next comment line suggests that the operation is of complexity O(1), contradicting the following line that says that it is O(1) only near the beginning of the list of keys. Again, that is an implementation detail that should not be in your doc string, and I recommend not mixing # and """ """ comments. The triple quotes version for the documentation comment that describes what the method does, and # to clarify parts of your code. And since """ """ is a multiline string, you don't need to put two of those in following lines. Just put the whole comment in one multiline string, perhaps using single blank lines to separate logical blocks of the comment.
def remove(self, element):
# Raise value error if not available
if self.contains(element):
self.data.delete(element)
else:
raise ValueError("Element not in set")
That comment is obsolete, as it just describes what the code says 4 lines later without adding any information.
def union(self, second_set):
"""Return a new set, that is a union of first_set and second_set"""
# O(n) since it goes through every item and has contains"""
# create a new set that has the set contents
result_set = self.set_contents()
for item in second_set.set_contents():
if self.contains(item):
continue
else:
result_set.append(item)
return Set(result_set)
If I'm not mistaken, this is not O(n) but O(n²), because for every item in the second set, you go through the whole first set to see if it is already in it. That is if the sets are approximately of the same size, which can be assumed in the average case. Considering that you do basically the same in your constructor, it is even less efficient. But you don't even need to care about duplicates first if you check for duplicates in the constructor anyway. Since you don't even need to check if the element is already contained, and can just overwrite it, you can actually do it in O(n):
def union(self, second_set):
return Set(self.set_contents() + second_set.set_contents())
def intersection(self, second_set):
"""Return a new set, that is intersection of this set and second_set."""
"""O(n) since it goes through every item and has contains"""
# create an empty set
result_set = []
for item in second_set.set_contents():
# check if the set contains the item
if self.contains(item):
result_set.append(item)
# else:
# return ValueError("Set is empty")
return Set(result_set)
For similar reasons as above with union, this is not O(n). Also you have comments again that just describe what the code does, which is redundant; in the case of the first one it is even wrong, because you don't create an empty set but an empty list. And you have a comment that is actually code, which is confusing. Is it not needed? Can it be deleted?
The method implementation can be done more efficiently and more readably by using a list comprehension:
def intersection(self, second_set):
items = [i for i in self.set_contents() if second_set.contains(i)]
return Set(items)
If you have the operations union and intersection, you usually also want an operation difference which gives you the subset of elements which are not contained in the other set. An example implementation would be this:
def difference(self, second_set):
items = [i for i in self.set_contents() if not second_set.contains(i)]
return Set(items) | {
"domain": "codereview.stackexchange",
"id": 28753,
"tags": "python, reinventing-the-wheel, set"
} |
How do models of hypercomputation overcome the Halting Problem? | Question: Hypercomputation refers to models of computation that are not possible to simulate using Turing machines. (Hypercomputers are not necessarily physically realisable!) Some hypercomputers have access to a resource that allows the Halting Problem for standard Turing machines to be solved. Call this a "superpower": a hypercomputer with a superpower can decide whether any standard Turing machine terminates.
What kinds of "superpowers" do hypercomputers use?
Ed Blakey's thesis sets up a formal framework to classify some of the major kinds of resources used in hypercomputing, but it does not try to provide a comprehensive survey of superpowers. I am not interested in a list of hypercomputers (there is a nice list in the Wikipedia article), but in understanding what "special sauce" each model uses, perhaps thought of as a unique kind of resource.
This question is inspired by How fundamental is undecidability?. Also related is What would it mean to disprove Church-Turing thesis? which generated lots of interesting discussion, and Are there any models of computation currently being studied with the possibility of being more powerful than Turing Machines?.
Answer: In the paper On the power of multiplication in random access machines
it has been proven by Hartmanis and Simon that if we add a unit-cost multiplication instruction to a RAM (the resulting model is called MRAM), then for this model P = NP. In addition, the languages decided in polynomial time in the MRAM model are exactly the languages in PSPACE.
As stated in the paper, this result shows that multiplication has the same complexity as addition iff P = PSPACE.
A more closely related result I have heard of is that if we add a division instruction with infinite precision to a RAM, then we can solve undecidable problems. However, I could not find the paper that proves this result. If anyone is familiar with it, please comment and I will update the answer. | {
"domain": "cstheory.stackexchange",
"id": 2996,
"tags": "computability, hypercomputation"
} |
Practical limits of big-O performance scaling | Question: Let's imagine we have an algorithm made up of a set of operations. Let's assume that it has three kinds of operations and the time complexity is $t(n) = An + Bn\log n + Cn^2$. This algorithm has asymptotic performance $O(n^2)$.
Now, on a real computer (or a more sophisticated abstract machine that doesn't assume infinite capacity, I guess), I eventually hit issues with the complexity of these operations (e.g. a garbage collector, memory paging, cache, ...) which cause $C$, which is a constant in the abstract model, to become itself a function of $n$.
Is there a name for this? Is there a framework in which these kinds of issues can be placed?
Obviously from the question and the lack of jargon, I'm a practitioner, not a computer scientist (my PhD was in AI, not CompSci). But the issue is critically important for studying actual algorithm behavior, and so I'd appreciate any help in finding resources that put it into a less ad-hoc context.
Answer: While I'm not entirely sure what you're getting at, there are a number of frameworks that explicitly capture the cost changes that show up when we hit real resource limits. Three examples:
the external memory framework that emphasizes the expense of going to disk (idealized by setting main memory reference costs to 0, and disk reference costs to some fixed constant)
The cache-oblivious model that assumes an unknown cache size and performance hits for going beyond.
The streaming computational model, in which you're only allowed sublinear working storage.
Are these along the lines of what you're thinking? | {
"domain": "cstheory.stackexchange",
"id": 819,
"tags": "ds.algorithms, reference-request, application-of-theory"
} |
Can anyone verify my NN diagram if it is properly drawn? | Question: I am working on a Neural Network that can estimate a building's carbon footprint based on a set of features and an image of the urban surroundings (via CNN).
I have used Netron to visualize the network (top), but this image is not readable in a publication, so I drew one myself (bottom).
Can anyone comment on whether this is a proper representation, especially the last part after the concatenation? I am not sure if there should be 2 or 3 dense layers.
Answer: I do not think there is a special format that needs to be followed as long as the image is clear and readable, which (imho) it is in your case. Regarding the last 2/3 layers, the final layer is the output with 1 unit, so you pictured it correctly, as long as the article mentions the output shape (and that it is not a multi-output situation).
Good luck with publication! | {
"domain": "datascience.stackexchange",
"id": 7951,
"tags": "neural-network, keras, cnn, convolution"
} |
Clean code attempt at ATM problem on codechef.com | Question: The problem asks you to take an integer (debit amount) and a double (credit or initial balance amount) and process the requested debit, verifying (1) that it's a multiple of the minimum denomination amount of $5 and (2) that it's smaller than the credit/balance. If either is untrue, it is supposed to return the initial deposit amount; otherwise it will return the new balance.
Full problem description
I have created 3 objects for this problem:
Transaction - This object reads in the two initial values given and then is used in ATM
ATM - Takes the transaction and applies them to the account and then displays the new balance.
Account - This object keeps track of the current account balance and updates the balance if the ATM passes it a value.
Limitations:
I understand that it can only process a single account, but that is more a limitation set by the problem description than it is me not accounting for multiple accounts. Also no error is returned if the balance cannot be updated, but it is not a requirement. I also understand I made a mountain out of a molehill with this problem as it can be solved by much less code.
In what ways can I improve this code other than the limitations mentioned?
#include <istream>
#include <iostream>
#include <iomanip>
class Account {
public:
Account()
: mBalance(0.0)
{}
void updateBalance(double transaction) {
mBalance += transaction;
}
double getBalance() {
return mBalance;
}
private:
double mBalance;
};
class Transaction {
public:
Transaction()
: mDebit(0)
, mCredit(0.0)
{}
int getDebit() {
return mDebit;
}
double getCredit() {
return mCredit;
}
friend std::istream& operator>>(std::istream& input, Transaction& transaction) {
input >> transaction.mDebit;
input >> transaction.mCredit;
return input;
}
private:
int mDebit;
double mCredit;
};
class ATM {
public:
ATM()
: mAccount()
, mMinDenomination(5)
, kWithdrawal_fee(0.50)
{}
void processTransaction(Transaction& transaction) {
credit(transaction);
debit(transaction);
}
void displayBalance() {
std::cout << mAccount.getBalance() << '\n';
}
private:
Account mAccount;
int mMinDenomination;
const double kWithdrawal_fee;
bool debit(Transaction& transaction) {
if(isWithdrawable(transaction.getDebit())){
mAccount.updateBalance(-1*(transaction.getDebit() + kWithdrawal_fee));
return true;
}
return false;
}
void credit(Transaction& transaction) {
if(transaction.getCredit() > 0) {
mAccount.updateBalance(transaction.getCredit());
}
}
bool isWithdrawable(int transaction) {
if(transaction % mMinDenomination == 0) {
return mAccount.getBalance() >= transaction + kWithdrawal_fee;
}
return false;
}
};
int main() {
std::iostream::sync_with_stdio(false);
std::cout << std::setprecision(2) << std::fixed;
Transaction transaction;
ATM atm;
std::cin >> transaction;
atm.processTransaction(transaction);
atm.displayBalance();
return 0;
}
Answer: Design.
You use a mixture of int and double to represent monetary units. This is not a good idea: double (like all binary floating-point representations) cannot hold all decimal values exactly. You should use an integer-like type (where all values are represented exactly). If you are in America and using dollars and cents then I would use an integer, with the balance of the account held in cents. When you print it out you can then place the decimal point in the correct place.
Code Review
In:
class Account {
I always think getters are wrong. They break encapsulation. Looking forward in your code you use them for two reasons. 1) Printing. 2) To test if the account has enough funds for withdraw. In both cases you should add explicit methods.
double getBalance() {
return mBalance;
}
I would replace the above with:
friend std::ostream& operator<<(std::ostream& s, Account const& data)
{
// Assuming you changed (as suggested above) to hold the account balance in cents.
// Requires <iomanip> for std::setw/std::setfill so single-digit cents print as "05".
s << "$" << data.mBalance / 100 << "." << std::setw(2) << std::setfill('0') << data.mBalance % 100;
return s;
}
virtual bool canWithdraw(double amount)
{
return mBalance > amount;
}
This logic protects you against future improvements to the system. What happens if you add the ability for some accounts to go overdrawn (for a fee)? Then in your code you have to find all the locations where the balance is being checked and modify them. In the method I propose you only need to modify one place (the Account class). You have localized the test for whether the account can withdraw money.
In:
A debit is an integer and a credit is a double.
I don't understand the logic here.
int mDebit;
double mCredit;
They should be the same. If you have some compelling reason for the difference then I need a big comment about why they are different (you may have a good reason, but you will need to explain it in the code).
Personally I would just have an amount. A negative amount is a debt and positive amount a credit.
Getters. Ahhh. horrible.
int getDebit() {
return mDebit;
}
double getCredit() {
return mCredit;
}
Again, the only use is to do tests and fiddling that should be part of the Account's responsibility. You should send the transaction to the account, which may reject the transaction if it fails any of the account-specific validations (e.g. that you cannot have a negative balance).
Like this.
friend std::istream& operator>>(std::istream& input, Transaction& transaction) {
input >> transaction.mDebit;
input >> transaction.mCredit;
return input;
}
But usually when you have an input stream reader you also have an output stream writer that mirrors the reader. So when you persist to a stream the class can also read the value in.
In ATM:
Interesting. You have a debit action and credit action applied for every transaction. Does this mean that a transaction can perform both operations?
void processTransaction(Transaction& transaction) {
credit(transaction);
debit(transaction);
}
It's OK to have a print method.
void displayBalance() {
std::cout << mAccount.getBalance() << '\n';
}
But usually it is best for this to just call the stream operator.
void displayBalance() {
std::cout << mAccount; // The account should know how to serialize itself.
}
This shows how bad an idea it is to have functions that have success state.
bool debit(Transaction& transaction) {
if(isWithdrawable(transaction.getDebit())){
mAccount.updateBalance(-1*(transaction.getDebit() + kWithdrawal_fee));
return true;
}
return false;
}
You do it all correctly yet it is still broken. Because the calling code does not check the return value. Yes internally within a class it is absolutely fine to return status codes (because you do not expose the interface publicly). But you must also make sure you do actually test the result codes.
Note: It is never (well, very rarely) OK to expose status codes that need checking publicly. As we can see in the C world (where this practice is the norm), it is so easy to not check the error codes and thus invalidate any following code. You should write code so it cannot be used incorrectly, which means forcing your users to do the correct thing (or the program exits, via exceptions). | {
"domain": "codereview.stackexchange",
"id": 9204,
"tags": "c++, beginner, c++11, programming-challenge, finance"
} |
Is there any source which tabulates quantum computing algorithms for simulating physical systems? | Question: I was wondering if there is a source (online or review article) which tabulates recent algorithms, and their complexities, used in simulating various physical systems. Something along the lines of:
Physical System 1: Quantum Field Theory (scattering)
Complexity: Polynomial in number of particles, energy, and precision
Source: Quantum Algorithms for Quantum Field Theories (Jordan, Lee & Preskill, 2011)
Physical System 2: Atomic Energy levels
And so on.
Answer: I believe what you're after is NIST's Quantum Zoo, a comprehensive catalog of quantum algorithms maintained by Stephen Jordan. Its sections include:
Algebraic and Number Theoretic Algorithms (14 items)
Oracular Algorithms (34 items)
Approximation and Simulation Algorithms (12 items)
and for each algorithm it includes its speedup, a description and relevant references. The third category would be the answer to the present question. | {
"domain": "quantumcomputing.stackexchange",
"id": 48,
"tags": "quantum-algorithms, simulation, resource-request"
} |
Plasma phase transtition from a Landau symmetry-breaking perspective | Question: My question is a follow-up to Is there a phase transition between a gas and plasma?, with an emphasis on the symmetry-breaking aspect. These questions only refer to electromagnetic plasmas - not quark-gluon plasmas, which are very different.
1) Is the plasma phase transition (in increasing order of "severity") (a) a mere crossover, with no true non-analyticities in any thermodynamic quantities, (b) a second-order transition, or (c) a first-order transition?
2) If it's a crossover, which quantities cross at the transition?
3) If it's a true phase transition, what is the order parameter that changes non-analytically?
4) If it's a second-order transition, what's its universality class?
5) If it's a true phase transition, is it possible to adiabatically connect the gas and plasma states without passing through a phase transition - like it is for the liquid and gaseous states of water - so that from a Landau symmetry-breaking perspective they're actually the same phase?
6) If not, then what symmetry of the Hamiltonian is broken in the gas phase and unbroken in the plasma phase?
(I can't ask these questions in separate posts, because their well-posedness depends on the answers to the previous questions.)
Answer: 1) The plasma-gas phase transition is a smooth crossover. There are no local order parameters that distinguish the plasma and the gas, and no change of symmetry.
2) In the crossover the ionization fraction changes from 0 to 100% (asymptotically). Other quantities that change are observables that relate to plasma properties, like the Debye radius or the plasma frequency. | {
"domain": "physics.stackexchange",
"id": 42134,
"tags": "phase-transition, symmetry-breaking, plasma-physics"
} |
Importance of normal Distribution | Question: I have been reading about probability distributions lately and saw that the Normal Distribution is of great importance. A couple of the articles stated that it is advised for the data to follow normal distribution. Why is that so? What upper hand do I have if my data follows normal distribution and not any other distribution.
Answer: This is an interesting question, so sorry for a long-winded answer. The tl;dr is that it is a mix of some real applicability, theoretical basis, historical baggage (due to limited compute power) and an obsession with analytically tractable models (instead of simulation/computational models). We should be very careful and discerning while using it in real problems.
Details
The importance of normal distribution comes from the following facts/observations,
Many naturally occurring phenomena seem to follow a normal distribution when the sample size is large (more on this below).
In Bayesian statistics, if you assume a normal prior on the mean of a normal likelihood, then the posterior distribution is also normal (the normal distribution is self-conjugate). This makes computations easier.
Somewhat related, the central limit theorem tells us that the average of samples from any distribution with finite variance (no fat tails) approaches a normal distribution as the sample size grows. So the normal distribution is useful and provides a theoretical basis for making population-level parameter estimates from samples (think of election predictions). But again, this assumes the underlying data come from a distribution which is well behaved and whose extreme values are very unlikely.
In short, the normal distribution can be thought of as a good base case: analytically tractable, easy to code up, and seemingly applicable to many models of nature. A somewhat broken analogy: in physics we use linear second-order differential equations to study many systems. Not all systems actually are linear second-order, but it is a reasonable approximation under some constraints that is easier to analyze and code up.
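The central-limit behavior mentioned above is easy to see in a quick simulation (standard library only; the sample sizes below are arbitrary choices):

```python
import random
import statistics

random.seed(0)
n, reps = 100, 10_000
# Each entry is the mean of n draws from a (very non-normal) uniform(0, 1).
means = [statistics.fmean(random.random() for _ in range(n)) for _ in range(reps)]

# CLT prediction: means ~ Normal(0.5, sqrt(1/12)/sqrt(n)) = Normal(0.5, ~0.0289)
print(statistics.fmean(means))  # close to 0.5
print(statistics.stdev(means))  # close to 0.0289
```

Even though each individual draw is uniform, the distribution of the averages matches the normal predicted by the theorem.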
And over-usage of normal distribution everywhere is actually controversial.
As we have access to more computing power and Monte Carlo simulation-based methods, we are no longer limited to using only analytically tractable distributions. We can use distributions which more accurately fit reality.
Normal distributions are useful for natural phenomena (heights of students in a class) but can be wildly inaccurate for modeling mostly man-made systems (income of people in a town, potential swings of stock indices during a panic).
For example, many critics of probabilistic financial models observe that the underlying models use the normal distribution. But real market swings are mostly fat tailed (distributions where extreme outcomes are more likely than under normal distributions). If you want to go deeper into this, start with Statistical Consequences of Fat Tails by Nassim Nicholas Taleb. Fun fact: if you look at the wild swings in the price of GameStop stock from the r/wallstreetbets saga, Taleb pointed out that the swings are not actually wild if you consider a fat-tailed distribution. | {
"domain": "datascience.stackexchange",
"id": 9048,
"tags": "machine-learning, statistics, mathematics"
} |
How to get predictions with predict_generator on streaming test data in Keras? | Question: In the Keras blog on training convnets from scratch, the code shows only the network running on training and validation data. What about test data? Is the validation data the same as test data (I think not). If there was a separate test folder on similar lines as the train and validation folders, how do we get a confusion matrix for the test data. I know that we have to use scikit learn or some other package to do this, but how do I get something along the lines of class wise probabilities for test data? I am hoping to use this for the confusion matrix.
Answer: To get a confusion matrix from the test data you should go through two steps:
Make predictions for the test data
For example, use model.predict_generator to predict the first 2000 probabilities from the test generator.
generator = datagen.flow_from_directory(
'data/test',
target_size=(150, 150),
batch_size=16,
class_mode=None, # only data, no labels
shuffle=False) # keep data in same order as labels
probabilities = model.predict_generator(generator, 2000)
Compute the confusion matrix based on the label predictions
For example, compare the probabilities with the case that there are 1000 cats and 1000 dogs respectively.
from sklearn.metrics import confusion_matrix
import numpy as np
y_true = np.array([0] * 1000 + [1] * 1000)
y_pred = probabilities > 0.5
confusion_matrix(y_true, y_pred)
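To sanity-check the thresholding step without running a generator, here is a tiny self-contained sketch; the probabilities are made-up stand-ins for `predict_generator` output, and the loop reproduces what `confusion_matrix` computes:

```python
import numpy as np

# Made-up probabilities for 4 cats (label 0) followed by 4 dogs (label 1).
probabilities = np.array([0.1, 0.4, 0.35, 0.8, 0.7, 0.2, 0.9, 0.6])
y_true = np.array([0] * 4 + [1] * 4)
y_pred = (probabilities > 0.5).astype(int)

# Same layout as sklearn's confusion_matrix: rows = true, columns = predicted.
cm = np.zeros((2, 2), dtype=int)
for t, p in zip(y_true, y_pred):
    cm[t, p] += 1
print(cm)  # [[3 1]
           #  [1 3]]
```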
Additional note on test and validation data
The Keras documentation uses three different sets of data: training data, validation data and test data. Training data is used to optimize the model parameters. The validation data is used to make choices about the meta-parameters, e.g. the number of epochs. After optimizing a model with optimal meta-parameters the test data is used to get a fair estimate of the model performance. | {
"domain": "datascience.stackexchange",
"id": 5525,
"tags": "machine-learning, python, deep-learning, keras, confusion-matrix"
} |
Quantum mechanics position/momentum state operator proof | Question: Is there any way to prove
$$
e^{-i\beta p}|q\rangle = |q+\beta\rangle
$$
just by using these identities $$
[q,\mathcal{F}(p)]=i\hbar \mathcal{F}'(p) \;\;\;\;[q,p]=i\hbar
$$
in quantum mechanics?
Answer: Yes, you just have to check that $e^{-i\beta P/\hbar}|q\rangle$ is an eigenket of $Q$ (uppercase represents operators) with eigenvalue $q+\beta$. We evaluate
$$
Q \big ( e^{-i\beta P/\hbar}|q\rangle \big )
$$
by using the first identity (which is actually a consequence of the canonical commutation relation):
$$
Q e^{-i\beta P/\hbar} = e^{-i\beta P/\hbar}Q + i\hbar \left ( \frac{ -i \beta}{\hbar} e^{-i\beta P/\hbar} \right ),
$$
which upon substitution gives
$$
Q \big ( e^{-i\beta P/\hbar}|q\rangle \big ) = (q + \beta ) \big ( e^{-i\beta P/\hbar}|q\rangle \big ).
$$
This means that $e^{-i\beta P/\hbar}|q\rangle$ is proportional to an eigenket of $Q$ with eigenvalue $q+\beta$, namely $c|q+\beta\rangle$. Fortunately, since the displacement operator $e^{-i\beta P/\hbar}$ is unitary, $c$ must satisfy $|c|^2 = 1$ to preserve normalization of the position eigenkets. This means $c$ is just an arbitrary phase factor which can promptly be chosen to be unity and then we have our final result:
$$
e^{-i\beta P/\hbar}|q\rangle = |q + \beta\rangle
$$ | {
"domain": "physics.stackexchange",
"id": 64314,
"tags": "quantum-mechanics, homework-and-exercises, hilbert-space, operators, commutator"
} |
Confusing notation in Wikipedia's quantum channel article | Question: In the Wikipedia's Quantum channel article, it is said that a purely quantum channel $\phi$ (it's not exactly the same phi calligraphy but it's close), in the Schrodinger picture, is a linear map between density matrices acting on two Hilbert spaces. Then properties of this mapping are given and we find the following notation :
$$I_n \otimes \Phi$$
We find again this difference of notation in the Heisenberg picture paragraph.
However I believe $\phi$ and $\Phi$ are the same and I don't understand the change of notation between the two. Is there a difference justifying this change, or is it an inconsistency in the notation?
Answer: I think it's just notational inconsistency. If you look at the page code, the symbols are generated in two different ways: in the text, someone has just inserted the greek letter symbol (presumably a unicode character) whereas in the equation, they've used LaTeX. They're clearly both supposed to be capital phi. | {
"domain": "quantumcomputing.stackexchange",
"id": 1364,
"tags": "quantum-operation, terminology-and-notation"
} |
Periodic acid oxidation of carbonyls? | Question: The following reaction seems to be valid:
$\ce{CH2OH-CHOH-CH2OH ->[HIO4]CH2O + HCOOH + CH2O + 2H2O}$
Apparently, there are similar reactions by 2,3-dihydroxypropanal, and also by the straight chain form of fructose.
The first two reactions involve doing the Malaprade reaction on molecules with adjacent alcohol and aldehyde groups, and the last one involves doing it on a ketone. Both times, it seems sufficient to just cleave the carbon-carbon bonds and oxidise each resulting molecule one step up the ladder.
As far as I know, the Malaprade reaction is for vicinal diols, and I can't stretch the mechanism to work for carbonyls.
Can anyone help me with how these reactions work?
Answer: In 1928, Malaprade demonstrated that periodic acid reacted with ethylene glycol to produce iodic acid and formaldehyde (Ref.1). Hence, the oxidation of adjacent diols with periodic acid or its salt in aqueous solution is now generally known as the Malaprade reaction, the mechanism of which is depicted below:
The reaction proceeds faster under acidic conditions and can be applied to higher polyhydric alcohols, which behave similarly to ethylene glycol. The requirement for the reaction is to have 1,2-dihydroxy functionality $\left(\ce{>C^1(OH)-C^2(OH)<}\right)$, and upon exposure to periodic acid, the middle $\ce{C^1\!-C^2}$ bond is oxidatively cleaved to give two compounds with gem-diol functionality $\left(\ce{>C^1(OH)2 \ and \ (OH)2C^2<}\right)$, which are essentially carbonyl groups in aqueous solutions. The reaction has since been further extended to the cleavage of α‐hydroxy carbonyl compounds, 1,2‐dicarbonyl compounds, α‐amino alcohols, α‐amino acids, and polyhydroxy alcohols, and has been successfully applied to structural analysis, specifically of sugars:
(source, Ref.2)
As shown in the figure, each oxidative cleavage of a $\ce{C-C}$ bond gives an $\ce{OH}$ group to each carbon. For example, 1,2-bond cleavage between $\ce{-CH(OH)-CHO}$ gives the aldehyde carbon another $\ce{OH}$ group, making it formic acid as shown in the diagram. Meanwhile, 1,2- and 2,3-bond cleavages give the $\ce{C2}$ carbon two extra $\ce{OH}$ groups, so the resultant molecule is again formic acid. Thus, complete oxidation of $\pu{1 mol}$ of D-glucose gives $\pu{5 mol}$ of formic acid and $\pu{1 mol}$ of formaldehyde.
The requirement is to have 1,2-diol functionality. So, how do ketones and aldehydes with $\alpha$-hydroxy groups get oxidized? In aqueous solutions, ketones and aldehydes are in equilibrium with their corresponding gem-diols, thus providing the 1,2-diol feature.
The 1,2-diol does not have to be in a cis-orientation. A trans-diol will also get oxidized, but at a slower rate (see here and Ref.3).
References:
Zerong Wang, "Malaprade Reaction (Malaprade Oxidation)," In Comprehensive Organic Name Reactions and Reagents; John Wiley & Sons, Inc.: New York, NY, 2010 (https://doi.org/10.1002/9780470638859.conrr406). ISBN: 9780471704508.
Fathia Mohammed Ibrahim, Mubark Elsayed Osman, "Elucidation of Sugars Structure through Periodic Acid Oxidation Cleavage," International Journal of Science and Research (IJSR) 2018, 7(1), 1152-1155 (https://www.ijsr.net/search_index_results_paperid.php?id=6121702)(PDF)
G. J. Buist, C. A. Bunton, J. H. Miles, “149. The mechanism of oxidation of α-glycols by periodic acid. Part V. Cyclohexane-1 : 2-diols,” J. Chem. Soc. 1959, 743-748 (https://doi.org/10.1039/JR9590000743). | {
"domain": "chemistry.stackexchange",
"id": 15459,
"tags": "organic-chemistry, reaction-mechanism, carbonyl-compounds, alcohols, organic-oxidation"
} |
How is excess grid power dissipated? | Question: I know that power/electricity generated (from conventional power plants or renewables) is generally instantaneously consumed, with grid operators constantly ramping generation to equal demand. My question is, what happens to the excess power, assuming insufficient storage? Regardless of whether it is a minor excess from an imperfect generation/demand alignment or excess from intermittent solar/wind power that doesn't have sufficient temporary storage, what happens to power that has no home? Is it just dissipated in some waste resistor to be drawn off as heat? Where does the excess power go?
Answer: First off, energy storage doesn't really come into play in grid control. (Grid-scale energy storage is basically an experimental future technology that doesn't have a practical impact yet.) So the grid is basically about managing things such that supply=demand within very close tolerances. So:
This is mostly done by throttling natural gas plants up and down, because they can change power quickly and most of their cost is fuel, so it makes sense to throttle them. (By contrast, for nuclear and hydroelectric most of the cost is initial construction, so they're always run at 100% power rather than throttled up and down. They're known as "base load" plants, as opposed to "peaking" plants.)
Smaller supply-demand mismatches can be handled in a couple ways:
A lot of loads will naturally draw more power if line voltage is higher (motors will run slightly faster, heaters will be slightly hotter). So, if generation is slightly too high, the voltage of the power lines will increase slightly, which causes more energy to be dissipated. The U.S. power grid is designed to deliver 120V +/- 5% to your house, and devices are designed to handle that slop.
Some larger industrial loads need a lot of power but are not time-critical. Utilities will make deals with these industries to give them cheaper power in exchange for being able to turn their power on/off to help balance the load. This is called "virtual generation" or "virtual demand". (For example, some oil fields turn off their pumps during times of the day when consumers are running their air conditioners.) | {
"domain": "physics.stackexchange",
"id": 72315,
"tags": "thermodynamics, electricity"
} |
Why can I use conservation of momentum to understand a bag getting dropped on a moving truck, if the ground provides an external force? | Question: Please help me with this SAT question from a released test.
A toy truck with a mass of $0.6$ kg initially coasts horizontally at a speed of $2$ meters per second. A child drops a beanbag with a mass of $0.2$ kg straight down onto the truck. What is the speed of the truck afterward?
Attempt:
My teacher told me to use conservation of momentum in the horizontal direction. However, when the beanbag is dropped, isn’t the ground pushing the truck up, so that an external force is acting on the system? If not, what does an external force mean in momentum conservation, as momentum is conserved if and only if no external forces act?
Also, I tried to use the fact that energy is conserved, so that $(0.5)(0.6)2^2 = (0.5)(0.6+0.2)v^2$, but this doesn’t work, as the correct answer is $1.5$. Why is the kinetic energy not conserved during this collision?
Thank you in advance
Answer:
isn’t the ground pushing the truck up so that an external force is acting on the system?
Yes, but it is exclusively acting in the vertical direction. (And, indeed, vertical momentum is not conserved, since the vertical momentum of the bag is lost during the collision.) However, there are no external forces with any horizontal components acting on the system, so the horizontal component of momentum is conserved.
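Numerically, conserving just the horizontal component reproduces the expected answer (a quick check with the values from the problem):

```python
# Horizontal momentum conservation: m_t * v_t = (m_t + m_b) * v_f
m_truck, v_truck = 0.6, 2.0   # kg, m/s
m_bag = 0.2                   # kg, zero horizontal velocity

v_final = m_truck * v_truck / (m_truck + m_bag)
print(round(v_final, 6))  # 1.5 m/s

# Kinetic energy is NOT conserved -- the collision is inelastic:
ke_before = 0.5 * m_truck * v_truck ** 2           # 1.2 J
ke_after = 0.5 * (m_truck + m_bag) * v_final ** 2  # 0.9 J
print(round(ke_before - ke_after, 6))  # 0.3 J lost in the landing
```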
Why is the kinetic energy not conserved during this collision?
The collision is quite clearly inelastic, so there is no requirement for kinetic energy conservation. | {
"domain": "physics.stackexchange",
"id": 60541,
"tags": "homework-and-exercises, newtonian-mechanics, momentum"
} |
Gazebo crashes using skid_steer, but works fine with diff_drive | Question:
I have a skid steer robot which I'm trying to simulate in Gazebo. In my wheel macro, I have:
<transmission name="${wheel_prefix}_transmission">
<type>transmission_interface/SimpleTransmission</type>
<joint name="${wheel_prefix}_wheel_joint">
<hardwareInterface>EffortJointInterface</hardwareInterface>
</joint>
<actuator name="${wheel_prefix}_motor">
<hardwareInterface>EffortJointInterface</hardwareInterface>
<mechanicalReduction>1</mechanicalReduction>
</actuator>
</transmission>
and in my main urdf I have:
<plugin name="skid_steer_drive_controller" filename="libgazebo_ros_skid_steer_drive.so">
<updateRate>100.0</updateRate>
<robotNamespace>/</robotNamespace>
<leftFrontJoint>front_left_wheel_joint</leftFrontJoint>
<rightFrontJoint>front_right_wheel_joint</rightFrontJoint>
<leftRearJoint>back_left_wheel_joint</leftRearJoint>
<rightRearJoint>back_right_wheel_joint</rightRearJoint>
<wheelSeparation>${wheelbase}</wheelSeparation>
<wheelDiameter>${2*wheel_radius}</wheelDiameter>
<robotBaseFrame>base_link</robotBaseFrame>
<torque>20</torque>
<topicName>cmd_vel</topicName>
<broadcastTF>false</broadcastTF>
</plugin>
If I use a diff drive controller, like this:
<plugin name="differential_drive_controller" filename="libgazebo_ros_diff_drive.so">
<alwaysOn>true</alwaysOn>
<updateRate>20</updateRate>
<leftJoint>front_left_wheel_joint</leftJoint>
<rightJoint>front_right_wheel_joint</rightJoint>
<wheelSeparation>${wheelbase}</wheelSeparation>
<wheelDiameter>${2*wheel_radius}</wheelDiameter>
<robotBaseFrame>base_link</robotBaseFrame>
<torque>20</torque>
<commandTopic>cmd_vel</commandTopic>
<odometryTopic>odom</odometryTopic>
<odometryFrame>odom</odometryFrame>
</plugin>
Then things work and I can at least use teleop to move the robot backwards and forwards in Gazebo. Any attempt to use the skid steer plugin makes Gazebo crash immediately. It doesn't ever produce a log file, which is unhelpful, so I have no idea what's causing it to break. If I comment out the transmission elements in my xacro file, then Gazebo loads, but obviously I don't get a cmd_vel topic etc. So I guess it's something in the transmission blocks that needs twiddling.
Is there a way to enable verbose output from Gazebo via the launch file?
<include file="$(find gazebo_ros)/launch/empty_world.launch" />
It doesn't help that the main documentation is out of date, e.g. it suggests <topicName>, which is a deprecated tag, so I don't know if I'm setting things up correctly.
Here's the stdout:
started roslaunch server http://control:33695/
SUMMARY
========
PARAMETERS
* /robot_description: <?xml version="1....
* /rosdistro: kinetic
* /rosversion: 1.12.12
* /use_sim_time: True
NODES
/
gazebo (gazebo_ros/gzserver)
gazebo_gui (gazebo_ros/gzclient)
spawn_urdf (gazebo_ros/spawn_model)
auto-starting new master
process[master]: started with pid [4819]
ROS_MASTER_URI=http://localhost:11311
setting /run_id to 2ed86bbe-18a4-11e8-99e9-78d00420dd75
process[rosout-1]: started with pid [4832]
started core service [/rosout]
process[gazebo-2]: started with pid [4842]
process[gazebo_gui-3]: started with pid [4858]
process[spawn_urdf-4]: started with pid [4866]
[ INFO] [1519395408.516996288]: Finished loading Gazebo ROS API Plugin.
[ INFO] [1519395408.518270471]: waitForService: Service [/gazebo/set_physics_properties] has not been advertised, waiting...
[ INFO] [1519395408.906177783, 0.022000000]: waitForService: Service [/gazebo/set_physics_properties] is now available.
[ INFO] [1519395408.941460375, 0.056000000]: Physics dynamic reconfigure ready.
[ WARN] [1519395409.123089568, 0.149000000]: GazeboRosSkidSteerDrive Plugin (ns = //) missing <commandTopic>, defaults to "cmd_vel"
[ WARN] [1519395409.123128894, 0.149000000]: GazeboRosSkidSteerDrive Plugin (ns = //) missing <odometryTopic>, defaults to "odom"
[ WARN] [1519395409.123145859, 0.149000000]: GazeboRosSkidSteerDrive Plugin (ns = //) missing <odometryFrame>, defaults to "odom"
[ WARN] [1519395409.123172229, 0.149000000]: GazeboRosSkidSteerDrive Plugin (ns = //) missing <covariance_x>, defaults to 0.000100
[ WARN] [1519395409.123191756, 0.149000000]: GazeboRosSkidSteerDrive Plugin (ns = //) missing <covariance_y>, defaults to 0.000100
[ WARN] [1519395409.123209074, 0.149000000]: GazeboRosSkidSteerDrive Plugin (ns = //) missing <covariance_yaw>, defaults to 0.010000
Segmentation fault (core dumped)
[gazebo-2] process has died [pid 4842, exit code 139, cmd /opt/ros/kinetic/lib/gazebo_ros/gzserver -e ode worlds/empty.world __name:=gazebo __log:=/home/control/.ros/log/2ed86bbe-18a4-11e8-99e9-78d00420dd75/gazebo-2.log].
log file: /home/control/.ros/log/2ed86bbe-18a4-11e8-99e9-78d00420dd75/gazebo-2*.log
[spawn_urdf-4] process has finished cleanly
log file: /home/control/.ros/log/2ed86bbe-18a4-11e8-99e9-78d00420dd75/spawn_urdf-4*.log
Originally posted by josh on ROS Answers with karma: 41 on 2018-02-23
Post score: 2
Original comments
Comment by Martin Günther on 2018-02-23:\
So I guess it's something in the transmission blocks that needs twiddling.
Probably not. It's more likely that the plugin just doesn't reach the point where it's crashing because it aborts before when it cannot find the transmissions.
Answer:
Turned out to be an embarrassingly simple problem with joint names, but without Gazebo running in verbose mode, there's no feedback.
I changed my launch file to include:
<include file="$(find gazebo_ros)/launch/empty_world.launch">
<arg name="verbose" value="true"/>
<!--arg name="debug" value="true"/-->
</include>
Which then gave me the more useful output:
[Err] [gazebo_ros_skid_steer_drive.cpp:273] EXCEPTION: GazeboRosSkidSteerDrive Plugin (ns = //) couldn't get left rear hinge joint named "back_left_wheel_joint"
[Err] [Model.cc:1010] Exception occured in the Load function of plugin with name[skid_steer_drive_controller] and filename[libgazebo_ros_skid_steer_drive.so]. This plugin will not run.
Although I guess this is a bug in Gazebo, it should fail to load the plugin, not crash out entirely.
This error wasn't caught with the diff_drive controller because I was using the front wheel joints. My back wheels were prefixed with rear, not back, hence the failure.
Originally posted by josh with karma: 41 on 2018-02-23
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by Max Kiva on 2018-02-27:
Exactly the same problem ! Thanks !
Comment by marcoarruda on 2021-10-29:
Same here! Thanks
Comment by ymzkp on 2022-04-19:
Thanks! This Helped | {
"domain": "robotics.stackexchange",
"id": 30136,
"tags": "ros, gazebo, diff-drive-controller, ros-kinetic"
} |
Does the human body have a resonant frequency? If so, how strong is it? | Question: Inspired by this question on Music beta SE, I'm wondering if the human body has a strong resonant frequency. I guess the fact that it's largely a bag of jelly would add a lot of damping to the system, but is that enough to dampen it entirely?
What models for resonance might be used to model the human body? (E.g. weight-on-a-spring, with legs as springs?) What about individual, semi-independent body parts, like legs, or lung cavity (acoustic resonance?).
Answer: There seem to be a lot of human body mechanical models, such as this one:
As for applications, I have heard that sub-audio frequency vibrations have been considered as nonlethal weapons for riot control.
Addendum:
Guys, stop upvoting this. The image was not composed by me. I found it so long ago there's no chance to find the original source. Google reverse image search says it might be newbedev.com. In the "related images" section there are other similar interesting sketches on human resonant frequency. | {
"domain": "physics.stackexchange",
"id": 84587,
"tags": "acoustics, frequency, biophysics, oscillators, resonance"
} |
How to change angle_min- angle_max and time_increment on laserscan message | Question:
Hi,
I use a urg-04lx ug01 Hokuyo lidar and its range is 240 degrees, but the LaserScan msg shows the values below:
angle_min: -2.35619449615
angle_max:2.09234976768
angle_increment: 0.00613592332229
time_increment: 9.76562732831e-05
Are these parameters normal? -2.35619449615 radians is about -135 degrees and
2.09234976768 radians is about 120 degrees. Is there a mistake?
And how do I change these values? For example, I want angle_min = 0 and angle_max = 240 degrees.
Originally posted by Mekateng on ROS Answers with karma: 85 on 2018-03-07
Post score: 0
Answer:
You have some tutorials available to dynamically reconfigure the parameters.
Originally posted by Delb with karma: 3907 on 2018-03-07
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Mekateng on 2018-03-07:
thank you very much:) | {
"domain": "robotics.stackexchange",
"id": 30228,
"tags": "ros, 2dlaserscan, sensor-msgs, ros-indigo"
} |
Would ABS plastic degrade in a dishwasher? | Question: I printed an object with a 3D printer and I used a roll of light green ABS plastic filament.
I would like to use the dishwasher to clean it. The temperature would be kept below 70°C, well below the glass transition temperature of about 110°C.
The dishwasher uses standard tabs, not one of the phosphate-free tabs.
What would be the effect of said cleaning on objects made of ABS plastic?
Answer: According to several application guides/compatibility charts for ABS plastic (like this one), ABS is fairly resistant to the conditions present in a dishwasher (mild to strong alkaline, salts, no organic solvents, mild temperatures).
Please note that while the ABS structural backbone may be attacked by nucleophilic agents at the nitrile carbon, and the nitrile group may be hydrolysed in acidic or basic environments, the conditions required are much harsher than those in a common dishwasher.
For example (p.8), glacial acetic acid was found to cause significant swelling, while 25% w/w sodium hydroxide and 25% w/v HCl barely caused changes.
So, while the concern for release of toxic additives present in the formulation of the original ABS filament may or may not be significant, from the chemical point of view it is almost certain it won't degrade noticeably.
Also, anecdotally, I used to wash old LEGO pieces in a dishwasher (kids are dirty), and no degradation was observed at all.
"domain": "chemistry.stackexchange",
"id": 13152,
"tags": "plastics"
} |
Prove longest path shares two points | Question: If there are two longest paths in a connected graph, each with path length 79 (going through 80 vertices), how can I prove that they share more than 1 vertex?
Answer: If they only cross at a single vertex $v$, consider the distances from $v$ to the four path ends. Since the lengths are odd, $v$ can not be in the exact middle of the paths, so the longer half of each path contains at least 40 edges; joining those two halves at $v$ gives a path of length at least 80. But then, are the original paths really the longest ones? | {
"domain": "cs.stackexchange",
"id": 10249,
"tags": "graphs"
} |
Are there massless bosons at scales above electroweak scale? | Question: Spontaneous electroweak symmetry breaking (i.e. $SU(2)\times U(1)\to U(1)_{em}$) is at a scale of about 100 GeV. So, via the Higgs mechanism, the gauge bosons $Z$ & $W$ have masses of about 100 GeV. But before this spontaneous symmetry breaking (i.e. energy > 100 GeV) the symmetry $SU(2)\times U(1)$ is not broken, and therefore the gauge bosons are massless.
The same thing happens when we go around energy about $10^{16}$ GeV, where we have the Grand Unification between electroweak and strong interactions, in some bigger group ($SU(5)$, $SO(10)$ or others). So theoretically we should find gauge bosons $X$ and $Y$ with masses about $10^{16}$ GeV after GUT symmetry breaks into the Standard Model gauge group $SU(3)\times SU(2)\times U(1)$, and we should find massless X and Y bosons at bigger energies (where GUT isn't broken).
So this is what happened in the early universe: as the temperature decreased, spontaneous symmetry breaking happened; first the $X$ & $Y$ gauge bosons obtained mass and finally the $Z$ & $W$ bosons obtained mass.
Now, I ask: have I understood this correctly? In other words, if we make experiments at energies above the electroweak scale (100 GeV) we are where $SU(2)\times U(1)$ isn't broken, and then we should (experimentally) find $SU(2)$ and $U(1)$ massless gauge bosons, i.e. $W^1$, $W^2$, $W^3$ and $B$ with zero mass? But this is strange, because if I remember correctly, at the LHC we have made experiments at energies of about 1 TeV, but we haven't discovered any massless gauge bosons.
Answer: I think you have understood it mostly correctly.
The masses do not change; they are what they are, at least at colliders. At high energy, it is true that the impact of masses and, more generally, of any soft term becomes negligible. The theory for $E\gg v$ becomes very well described by a theory that respects the whole symmetry group.
Notice that to do so consistently in a theory of massive spin-$1$ particles, you have to introduce the Higgs field as well at energies above the symmetry breaking scale. For the early universe, the story is slightly different because you are not in the Fock-like vacuum, and there are actual phase transitions (controlled by temperature and pressure) back to the symmetric phase where in fact the gauge bosons are massless (except perhaps for a thermal mass, I am not sure about it).
EDIT
I'd like to expand a little further on the common misconception that above the symmetry breaking scale gauge bosons become massless. I am going to give you an explicit calculation for a simple toy model: a $U(1)$ broken spontaneously by a charged Higgs field $\phi$ that picks up the vev $\langle\phi\rangle=v$. In this theory we also add two Dirac fields $\psi$ and $\Psi$ with $m_\psi\ll m_\Psi$. In fact, I will take the limit $m_\psi\rightarrow 0$ in the following just for simplicity of the formulae. Let's now imagine having a $\psi^{+}$ $\psi^-$ machine and increasing the energy in the center of mass so that we can produce on-shell $\Psi^{+}$ $\Psi^{-}$ pairs via s-channel exchange of the massive gauge boson $A_\mu$. In the limit of $m_\psi\rightarrow 0$ the total cross-section for $\psi^-\psi^+\rightarrow \Psi^-\Psi^+$ is given (at tree-level) by
$$
\sigma_{tot}(E)=\frac{16\alpha^2 \pi}{3(4E^2-M^2)^2}\sqrt{1-\frac{m_\Psi^2}{E^2}}\left(E^2+\frac{1}{2}m_\Psi^2\right)
$$
where $M=gv$, the $A_\mu$-mass, is given in terms of the $U(1)$ charge $g$ of the Higgs field. In this formula $\alpha=q^2/(4\pi)$ where $\pm q$ are the charges of $\psi$ and $\Psi$.
Let's increase the energy of the scattering $E$, well past all the mass scales in the problem, including $M$
$$
\sigma_{tot}(E\gg m_{i})=\frac{\pi\alpha^2}{3E^2}\left(1+\frac{M^2}{2E^2}+O(m_i^4/E^4)\right)
$$
Now, the leading term in this formula is what you would get for a massless gauge boson, and as you can see it gets corrections from the masses which become more and more irrelevant as $m_i/E$ is made smaller and smaller by increasing the energy of the scattering.
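As a quick numerical sanity check of this limit (a sketch; the coupling and masses below are arbitrary illustrative values, and only the scaling with $E$ matters):

```python
import math

ALPHA = 1 / 137.0        # illustrative coupling
M, M_PSI = 100.0, 50.0   # illustrative masses, same units as E

def sigma_full(E):
    # exact tree-level cross-section of the toy model
    return (16 * ALPHA**2 * math.pi / (3 * (4 * E**2 - M**2)**2)
            * math.sqrt(1 - M_PSI**2 / E**2) * (E**2 + 0.5 * M_PSI**2))

def sigma_asymptotic(E):
    # massless-gauge-boson result plus the first M^2/E^2 correction
    return math.pi * ALPHA**2 / (3 * E**2) * (1 + M**2 / (2 * E**2))

# The relative gap shrinks rapidly once E is past all the mass scales.
gaps = [abs(sigma_full(E) - sigma_asymptotic(E)) / sigma_full(E)
        for E in (200.0, 1000.0, 10000.0)]
print(gaps)
```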
Now, this is a toy model, but it shows the point: even for a realistic situation, say with a GUT group like $SU(5)$, if you scatter multiplets of $SU(5)$ at energies well above the unification scale, the masses of the gauge bosons will correct the result obtained with massless gauge bosons only by $M/E$ to some power. | {
"domain": "physics.stackexchange",
"id": 74798,
"tags": "particle-physics, standard-model, symmetry-breaking, gauge-symmetry, grand-unification"
} |
Does the component of vector depend on the orientation of the axes? | Question: the question was:
A situation may be described by using different sets of co-ordinate axes having different orientations. Which of the following do not depend on the orientation of the axes?
(a) the value of a scalar
(b) component of a vector
(c) a vector
(d) the magnitude of a vector
and the answer is option -(a) (c) (d)
I did not understand option b although my teacher explained it to me as follows:
Sets of co-ordinate axes are simply lines of reference to describe the position and orientation of vectors or similar things. Their orientations cannot change (a) the value of a scalar, (c) a vector or (d) the magnitude of a vector. But when a vector is resolved along axes, the component depends on the angle between the vector and the axis along which it is being resolved. This angle will vary if the orientation of the axes is changed. So the components of a vector depend upon the orientation of the axes.
please explain to me in simple terms with a little bit of illustration.
Answer: The components of the vector with the xy-axes, $(x_2-x_1, y_2-y_1)$, are not the same as with the x'y'-axes, $(x'_2-x'_1, y'_2-y'_1)$.
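A quick numerical illustration (a sketch: rotating the axes by 45° changes the components of the same vector, but not its magnitude):

```python
import numpy as np

v = np.array([3.0, 4.0])                 # components in the xy-axes

theta = np.deg2rad(45)                   # rotate the axes by 45 degrees
R = np.array([[ np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])
v_prime = R @ v                          # components in the x'y'-axes

print(v_prime)                                     # different components
print(np.linalg.norm(v), np.linalg.norm(v_prime))  # both ~5.0
```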
$(x_2-x_1)^2 + (y_2-y_1)^2 = (x'_2-x'_1)^2 + (y'_2-y'_1)^2 = (\text{magnitude of vector})^2$ | {
"domain": "physics.stackexchange",
"id": 61164,
"tags": "homework-and-exercises, reference-frames, vectors, coordinate-systems"
} |
Batch norm: why the initial normalization? | Question: I'm a beginner in NNs and the first thing I don't understand about batch norm is the following two steps:
First we normalize the batch data to a variable $z$ with $\mu = 0$, $\sigma^2 = 1$.
Then we rescale and shift $z$ via coefficients (usually called $\gamma$, $\beta$), updating them as learnable parameters.
I don't understand why the first step is necessary if we change the distribution in the second step anyway. Could someone explain please?
Answer: The first step helps to reduce something called “internal covariate shift” of the network. Normalizing the layer inputs before applying the shift and scaling in step two speeds up the training process (see the BN paper).
This normalization comes with a cost, namely, it can reduce the number of possible representations a layer can provide. E.g. normalized inputs to a sigmoid are constrained to the linear regime of the function (see the BN paper, page three).
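Both steps in a minimal numpy sketch (a single forward pass; running statistics and gradient updates are omitted):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    # Step 1: normalize each feature over the batch to mean 0, variance 1.
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    # Step 2: learnable scale (gamma) and shift (beta) can move the values
    # back out of the purely linear regime of the nonlinearity if useful.
    return gamma * x_hat + beta

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=3.0, size=(64, 4))  # batch of 64, 4 features

y = batch_norm(x, gamma=np.ones(4), beta=np.zeros(4))
print(y.mean(axis=0))  # ~0 for every feature
print(y.std(axis=0))   # ~1 for every feature
```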
The second step is there to address this problem. Scaling and shifting the values to “not just” linear domains of the nonlinearities solves the representional issue while keeping the internal covariate shift at a minimum. | {
"domain": "datascience.stackexchange",
"id": 2855,
"tags": "neural-network, gradient-descent, batch-normalization"
} |
Collection of XmlNode control | Question: I have implemented the structure below at least 5 times, and it is getting cumbersome. Is there a better approach?
Current code-segment implemented
public XmlNodeList Templates {
get {
return this.GetElementsByTagName( "Templates" );
}
set {
foreach( UITemplateAssoc nd in value ) {
this.AddTemplate( nd );
}
}
}
public UITemplateAssoc GetTemplate (string name) {
foreach( UITemplateAssoc nd in this.Templates ) {
if( nd.Name == name ) {
return nd;
}
}
return null;
}
public void AddTemplate ( UITemplateAssoc val ) {
for( int i = this.Templates.Count - 1 ; i >= 0 ; i-- ) {
if( this.Templates[i].Name == val.Name ) {
this.ReplaceChild( val , this.Templates[i] );
return;
}
}
this.AppendChild( val );
}
Answer: There's a couple of nice alternatives for you.
Extensions
If you don't need to encapsulate the XmlNodeList instances, you could make your methods extensions for XmlDocument or whatever you've extended:
public static class TemplateExtensions
{
public static XmlNodeList GetTemplates(this XmlDocument document)
{
return document.GetElementsByTagName("Templates");
}
public static UITemplateAssoc GetTemplate(this XmlNodeList list, string name)
{
    foreach (XmlNode nd in list)
    {
        if (nd.Name == name) return nd as UITemplateAssoc;
    }
    return null;
}
// ...
}
// ...
var templates = yourDocument.GetTemplates();
var template = templates.GetTemplate("abc");
More on extensions methods here:
http://msdn.microsoft.com/en-us/library/bb383977.aspx
Inheritance
Put these methods in a base class and have the behavior that differs in inherited classes.
[edit]Updated to show how the base class can be generic[/edit]
public abstract class TemplateListBase<T>
    where T : XmlNode
{
    // XmlNodeList cannot append or replace children, so the base class keeps
    // the owning element and derives the node list from it.
    private readonly XmlElement templates;
    public XmlNodeList Templates
    {
        get
        {
            return templates.GetElementsByTagName( "Templates" );
        }
    }
    protected TemplateListBase(XmlElement templateContainer)
    {
        templates = templateContainer;
    }
    public void AddRange(XmlNodeList list)
    {
        foreach( T nd in list )
        {
            AddTemplate( nd );
        }
    }
    public T GetTemplate (string name)
    {
        foreach( T nd in templates.ChildNodes )
        {
            if( nd.Name == name ) return nd;
        }
        return null;
    }
    public void AddTemplate ( T val )
    {
        XmlNodeList children = templates.ChildNodes;
        for( int i = children.Count - 1 ; i >= 0 ; i-- )
        {
            if( children[i].Name == val.Name )
            {
                templates.ReplaceChild( val , children[i] );
                return;
            }
        }
        templates.AppendChild( val );
    }
}
public class UITemplateAssocList : TemplateListBase<UITemplateAssoc>
{
    public UITemplateAssocList(XmlElement templateContainer)
        : base(templateContainer)
    {
    }
    public void FancyLogicA(string data)
    {
        foreach(var node in Templates)
        {
            // differing logic
        }
    }
}
public class ConcreteTemplateB : TemplateListBase<SomeOtherKindOfNode>
{
    // ..
    public void FancyLogicB(XmlNodeList list)
    {
        foreach (SomeOtherKindOfNode node in list)
        {
            if (LikeThisNode(node)) AddTemplate(node);
        }
    }
}
Notice that I replaced your setter with an AddRange method.
It's usually good practice not to replace an entire encapsulated collection using a setter.
Although you can validate the entries in the setter, a property should not do much logic other than returning or setting privates.
If you have a look at the collections in the BCL, they all expose an AddRange method and a method Clear to empty the collection. The AddRange method also gives you the option of adding multiple sets in turn.
If you need different logic when stuff is added etc, have a look at the template method pattern:
http://en.wikipedia.org/wiki/Template_method_pattern
If you need multiple base classes with differing logic, you could combine the inheritance with the extension methods, or another base class. :) | {
"domain": "codereview.stackexchange",
"id": 1349,
"tags": "c#, xml"
} |
Change line type in ggplot in R | Question: Context: I have two variables under emotion_dict that I am graphing in the same line graph.
Problem: However when I change the linetype in geom_line, it changes the appearance of both variables.
Question: Does anyone know how to alter the code below to keep the line types separate and display a key? I have done this before only with the ggline function, but that is not appropriate here.
See attached file and code below for more context:
posneg_plot2 <- d_posneg %>%
ggplot(mapping = aes(x=year, y=rel_freq, group=emotion_dict, colour=emotion_dict)) +
geom_line(alpha = 1, size=0.7, linetype=2, colour="black") +
theme_light() +
labs(x="Year", y="Positive and Negative Sentiment (%)") +
scale_x_continuous(breaks=seq(1970,2017,2)) +
theme(axis.text.x=element_text(angle=45, hjust=1)) +
scale_color_hue(labels = c("Negative Sentiment", "Positive Sentiment")) +
labs(colour = "LIWC Dictionaries") + theme(legend.position = "bottom") +
theme(text=element_text(family="Times New Roman", size=17))
#stat_cor(, method = "pearson", p.accuracy = 0.001, r.accuracy = 00.01, size = 4.5, colour = "black")
EDIT: Answer below and now attempting to change the labels within the group accordingly:
Answer: You should be able to simply specify the field to be used for the linetype for the linetype argument within an aes mapping as follows:
posneg_plot2 <- d_posneg %>%
ggplot(mapping = aes(x=year, y=rel_freq, group=emotion_dict, colour=emotion_dict)) +
geom_line(aes(linetype=emotion_dict), alpha = 1, size=0.7, colour="black") | {
"domain": "datascience.stackexchange",
"id": 10398,
"tags": "r, visualization"
} |
Why can a plane wave function be viewed a beam and not as a single particle? | Question: My professor tells me that the following wave function can not be normalized, therefore it does not represent a particle.
$\psi(x) = Ae^{ikx}$
However, he goes on to say that the wave function can be thought of as being a beam of particles by using fourier series, however I don't understand how this is even possible and wondered if anyone could provide perhaps some proof of sorts?
Answer: There's a fairly good discussion of the free particle case here. I'm assuming you have shown that $\psi$ is not normalizable.
Why does $\psi$ not being normalizable mean that it cannot represent a particle? Well, this represents a plane wave with constant amplitude everywhere (only the phase changes). As the amplitude of the wave function tells you about the probability to find a particle at a given location, you can think of this as implying that the particle has the same probability to be anywhere in the universe, which does not really correspond to our idea of a particle. In purely mathematical terms, we usually restrict our attention to solutions of the Schrödinger equation which are square-integrable, which this solution is not. This means that this solution is not part of our Hilbert space. Why does this happen? Note that $\psi$ has a single $k$-component, which corresponds to the momentum. Thus, it has an infinitely sharp momentum (its momentum is exactly $k$), and therefore an infinitely smeared out position by the Heisenberg uncertainty principle.
However, that does not mean that this solution is not important! The Schrödinger equation is a linear differential equation, so the principle of superposition holds. This means that if $\psi_1$ satisfies the Schrödinger equation and $\psi_2$ satisfies the Schrödinger equation, then $\psi_1 + \psi_2$ will also satisfy the Schrödinger equation!
Why is this important? Well, we can build normalizable solutions by taking linear combinations of free particles, the so called wave-packets. Let us take some function $f(k)$ which describes how our amplitudes fluctuates in $k$. This corresponds to taking multiple components of momentum, e.g. introducing some "spread" in momentum value. Then form a linear superposition as:
$$
\psi_3(x) = A\int_{-\infty}^{\infty} dk\ f(k) e^{ikx}
$$
This is called a wavepacket solution. Note that this is a solution of the Schrödinger equation, by linearity (think of the integral as an infinite sum). However, for appropriate choices of $f(k)$, $\psi_3$ can be made normalizable! Essentially, you are summing multiple particles of different momentum $k$, which is what your professor means by a "beam" of particles. If you want to think about it in physical terms, then we're making the wavefunction less localized in momentum, and thereby achieve more localization in position.
So how is this related to Fourier series? Well, note that $\psi_3(x)$ above is (up to constant factors) nothing but the Fourier transform of the function $f(k)$. This is useful once you solve the time-dependent Schrödinger equation, as the free particle solution has the simplest possible time-evolution, and any reasonable initial state of the system can be Fourier transformed. Thus, one way to get time-evolution in quantum mechanics is to Fourier transform the initial wavefunction and add the time-evolution. This is equivalent to the separation of variables method for solving PDEs.
There are other options for fixing the free-particle solution of course. One is to require that the system lives in some (arbitrarily large) finite box. Then the solution is normalizable, and lives in a Hilbert space. This is an interesting solution, because it clearly leads to problems with relativity if the box is large enough. This, and other related problems, were what lead to the development of relativistic quantum mechanics, and ultimately quantum field theory. | {
"domain": "physics.stackexchange",
"id": 74180,
"tags": "quantum-mechanics, energy, particle-physics, waves"
} |
How to calculate the requirements for completing a reaction | Question: What are the most common critical thermodynamic and kinetic variables required to convert a secondary or tertiary alcohol to an alkane? Is bond enthalpy data relevant?
What is the process/processes commonly used to then calculate or determine the temperature and duration it would take to complete the reaction from the thermodynamic and kinetic data?
Answer: You can't - at least if the only information you have is just the bond energy.
The problem you're running up against here is the difference between thermodynamics and kinetics. The difference can sometimes be subtle, but (roughly) thermodynamics is normally concerned with state functions, whereas kinetics deals with the process of interconversion of states.
The bond enthalpy is a thermodynamic quantity. It's based off a state function. It's the free energy difference between the bond-broken and bond-formed states. It doesn't matter how you go from the bond-formed to bond-broken states, the free energy difference is the same. In order to calculate it, you only need to know information about the two endpoints, not the details about the states which connect them.
In contrast, reaction rates are kinetic quantities. The rate of reaction is highly dependent on the path (reaction mechanism and conditions) you take to interconvert the states. You can't tell anything about the path if all you have is thermodynamic information about the two end points.
That said, you can theoretically calculate reaction rates if you have additional information about the path you're taking. Most notably you would need to know the activation energy of the reaction you're interested in. This is different from the bond energy, and is related to the free energy of the transition state, the hypothetical high-energy intermediate along the reaction pathway. If you have the activation energy along with some other information for a single-step reaction, you can calculate reaction rates for various temperatures with the Arrhenius equation.
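For the single-step case, a back-of-the-envelope calculation with the Arrhenius equation might look like this in Python (the pre-exponential factor and activation energy below are made-up illustrative numbers, not data for any particular reaction):

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def arrhenius_rate(A, Ea, T):
    """Rate constant k = A * exp(-Ea / (R*T)); Ea in J/mol, T in kelvin."""
    return A * math.exp(-Ea / (R * T))

# Hypothetical values: A = 1e13 s^-1, Ea = 100 kJ/mol
for T in (298.0, 350.0, 400.0):
    print(f"T = {T} K, k = {arrhenius_rate(1e13, 100e3, T):.3e} s^-1")
```

Because Ea sits in the exponent, modest temperature increases change the rate by orders of magnitude, which is why the activation energy, not the bond enthalpy, controls how long the reaction takes.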
Again, these rates are highly dependent on the exact mechanism of the reaction you're examining. Single step versus multi-step, radical versus electron pair transfer mechanisms, the presence of catalysts, etc. all can change the activation energies of the various steps and the reaction rates. So there is no way to give a general answer about bond breakage rates. | {
"domain": "chemistry.stackexchange",
"id": 3888,
"tags": "experimental-chemistry"
} |
Benefits of stochastic gradient descent besides speed/overhead and their optimization | Question: Say I am training a neural network and can fit all my data into memory. Are there any benefits to using mini batches with SGD in this case? Or is batch training with the full gradient always superior when possible?
Also, it seems like many of the more modern optimization algorithms (RMSProp, Adam, etc.) were designed with SGD in mind. Are these methods still superior to standard gradient descent (with momentum) with the full gradient available?
Answer: On large datasets, SGD can converge faster than batch training because it performs updates more frequently. We can get away with this because the data often contains redundant information, so the gradient can be reasonably approximated without using the full dataset. Minibatch training can be faster than training on single data points because it can take advantage of vectorized operations to process the entire minibatch at once. The stochastic nature of online/minibatch training can also make it possible to hop out of local minima that might otherwise trap batch training.
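As an illustration, here is a minimal minibatch-SGD sketch on a toy linear-regression problem (all numbers are arbitrary); each update uses a gradient estimated from a small batch only, yet the weights still converge close to the true values:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.01 * rng.normal(size=1000)

def minibatch_sgd(X, y, lr=0.1, batch_size=32, epochs=20):
    w = np.zeros(X.shape[1])
    n = len(y)
    for _ in range(epochs):
        idx = rng.permutation(n)                      # fresh shuffle each epoch
        for start in range(0, n, batch_size):
            b = idx[start:start + batch_size]
            # gradient of the mean squared error, estimated on the minibatch only
            grad = 2 * X[b].T @ (X[b] @ w - y[b]) / len(b)
            w -= lr * grad
    return w

w = minibatch_sgd(X, y)
print(w)  # close to true_w = [2, -1, 0.5]
```

The per-step gradient is noisy, but because the data are redundant the noisy estimates average out over many cheap updates, which is the speed argument above.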
One reason to use batch training is cases where the gradient can't be approximated using individual points/minibatches (e.g. where the loss function can't be decomposed as a sum of errors for each data point). This isn't an issue for standard classification/regression problems.
I don't recall seeing RMSprop/Adam/etc. compared to batch gradient descent. But, given their potential advantages over vanilla SGD, and the potential advantages of vanilla SGD over batch gradient descent, I imagine they'd compare favorably.
Of course, we have to keep the no free lunch theorem in mind; there must exist objective functions for which each of these optimization algorithms performs better than the others. But, there's no guarantee whether or not these functions pertain to the set of practically useful, real-world learning problems. | {
"domain": "datascience.stackexchange",
"id": 1424,
"tags": "neural-network, gradient-descent"
} |
Why is this integral for a uniform electric field of a charged plate not evaluating correctly? | Question: I have spent the past two and a half hours attempting to understand why the electric field on either side of an infinitely sized charged plate is uniform. I get it conceptually, in that as a point moves farther away from the plate, it is able to see a greater amount of charge in a more focused field of view, canceling out the reduction in strength caused by increasing the distance between the point and the plate. My issue is that I want to be able to derive this relationship for myself using calculus and I when doing it by hand, I always arrive at a relationship that reduces the strength of the electric field as the distance increases.
I eventually found a paper online that derives the property of uniformity, but it has a step when evaluating an improper integral that makes no sense to me:
$$
\begin{align}
E_P &= \frac{\sigma r}{4\pi\epsilon} \int_{-\infty}^\infty \int_{-\infty}^\infty \frac{1}{(x^2 + y^2 + r^2)^{3/2}} \; dx \; dy \\
&= \frac{\sigma r}{4\pi\epsilon} \int_{-\infty}^\infty \frac{2}{(y^2 + r^2)^{3/2}} \; dy \tag{1} \\
&= \frac{\sigma r}{4\pi\epsilon} \frac{2\pi}{r} \tag{2} \\
&= \frac{\sigma}{2\epsilon}
\end{align}
$$
What happened between lines (1) and (2)? When I did it by hand, I ended up with,
$$
\frac{\sigma r}{4\pi\epsilon} \int_{-\infty}^\infty \frac{2}{(y^2 + r^2)^{3/2}} \; dy
=
\frac{\sigma r}{4\pi\epsilon} \frac{4}{r^2}
=
\frac{4\sigma}{r\pi\epsilon}
$$
I checked this result with Maxima and Wolfram, and they both confirm my answer. What am I not seeing?
Answer: Consider the transition from the line above (1) to line (1): carrying out the $x$-integration reduces the power in the denominator from $3/2$ to $1$. Based on my calculation, line (1) should be $$E_P = \frac{\sigma r}{4 \pi \epsilon} \int_{-\infty}^{\infty} \frac {2}{(r^2 + y^2)} dy $$
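For what it's worth, the remaining step from (1) to (2) is then the standard integral $\int_{-\infty}^{\infty} dy/(y^2+r^2) = \pi/r$; a quick numeric check using only the Python standard library (step size and cut-off chosen ad hoc):

```python
import math

def trapezoid(f, a, b, n=200000):
    # plain trapezoidal rule on [a, b]
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

r = 2.0
# the y-integral in the corrected line (1); the integrand falls off like 1/y^2,
# so a finite cut-off of +/-1e4 leaves only an O(1e-4) tail error
approx = trapezoid(lambda y: 2.0 / (y * y + r * r), -1e4, 1e4)
print(approx, 2 * math.pi / r)  # both ~ 3.14
```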
This gives the appropriate result in line (2) and the final answer. | {
"domain": "physics.stackexchange",
"id": 40620,
"tags": "electrostatics, electric-fields, gauss-law, integration"
} |
Hackerrank - value of friendship (II) | Question: Problem statement
You're researching friendships between a group of \$n\$ new college students, where each student is distinctly numbered from \$1\$ to \$n\$. At the beginning of the semester, no student knew any other student; instead, they met and formed individual friendships as the semester went on. The friendships between students are:
Bidirectional
If student \$a\$ is friends with student \$b\$, then student \$b\$ is also friends with student \$a\$.
Transitive
If student \$a\$ is friends with student \$b\$ and student \$b\$ is friends with student \$c\$, then student \$a\$ is friends with student \$c\$. In other words, two students are considered to be friends even if they are only indirectly linked through a network of mutual (i.e., directly connected) friends.
The purpose of your research is to find the maximum total value of a group's friendships, denoted by \$total\$. Each time a direct friendship forms between two students, you sum the number of friends that each of the \$n\$ students has and add the sum to \$total\$.
You are given \$q\$ queries, where each query is in the form of an unordered list of \$m\$ distinct direct friendships between \$n\$ students. For each query, find the maximum value of \$total\$ among all possible orderings of formed friendships and print it on a new line.
Input Format
The first line contains an integer, \$q\$, denoting the number of queries. The subsequent lines describe each query in the following format:
The first line contains two space-separated integers describing the respective values of \$n\$ (the number of students) and \$m\$ (the number of distinct direct friendships).
Each of the \$m\$ subsequent lines contains two space-separated integers describing the respective values of \$x\$ and \$y\$ (where \$x \neq y\$) describing a friendship between student \$x\$ and student \$y\$.
Constraints
1. 1 <= q <= 16
2. 1 <= n <= 100000
3. 1 <= m <= min(n(n-1)/2, 200000)
Output Format
For each query, print the maximum value of \$total\$ on a new line.
Sample Input 0
1
5 4
1 2
3 2
4 2
4 3
Sample Output 0
32
Explanation 0
The value of \$total\$ is maximal if the students form the m = 4 direct friendships in the following order:
Students \$1\$ and \$2\$ become friends:
We then sum the number of friends that each student has to get 1 + 1 + 0 + 0 + 0 = 2.
Students \$2\$ and \$4\$ become friends:
We then sum the number of friends that each student has to get 2 + 2 + 0 + 2 + 0 = 6.
Students 3 and 4 become friends:
We then sum the number of friends that each student has to get 3 + 3 + 3 + 3 + 0 = 12.
Students 3 and 2 become friends:
We then sum the number of friends that each student has to get 3 + 3 + 3 + 3 + 0 = 12.
When we add the sums from each step, we get total = 2 + 6 + 12 + 12 = 32. We then print 32 on a new line.
My introduction to the algorithm
This is a hard algorithm; it was part of the Hackerrank Week of Code 28 contest in January 2017, and after the contest I posted a question here. I am training myself, so I continued to study the other C# submissions on Hackerrank and spent hours rewriting a solution. I thought about posting a second question on this algorithm, but at the time I did not fully understand the union-find algorithm.
So I read some union-find questions on this site (my favorite one), studied a lecture note on union find, and then came across the HackerEarth tutorial Disjoint set union. Its examples with diagrams and its clear discussion of disjoint set union as an algorithm, its time complexity, and ideas for improving that complexity made me more confident in my understanding. So I think the code is ready for review, because I can now explain the term weighted-union operation and relate it to the implementation below.
The code under review uses the ideas of disjoint set union, for example the weighted-union operation (see the HackerEarth tutorial linked above). It balances the tree formed by the union operations: the subset containing fewer elements joins the bigger subset. The \$Group\$ method \$MergeSmallGroupToLargeOne\$ is an example.
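For reference, here is a compact Python sketch of the weighted-union idea (union by size, with path halving in find as an extra optimisation; the C# code below achieves the same balancing by moving the smaller group's nodes into the larger one instead of re-parenting roots):

```python
class DisjointSetUnion:
    """Union-find with weighted (by-size) union: the smaller set joins the larger."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        while self.parent[x] != x:                        # walk up to the root
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False                                  # already in one set
        if self.size[ra] < self.size[rb]:                 # weighted union: small -> big
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]
        return True

dsu = DisjointSetUnion(5)
dsu.union(0, 1); dsu.union(2, 3); dsu.union(1, 3)
print(dsu.find(0) == dsu.find(2))  # True: all four are in one component
print(dsu.size[dsu.find(0)])       # 4
```

Keeping the size array is exactly what makes the weighted-union choice O(1) at each merge.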
The code passes all test cases on Hackerrank. I also learned a few things from the implementation, for example applying the constraints in the design: declaring the array with the maximum number of friendships, \$m\$.
Highlights of changes
Use meaningful variable and class names; extract some code into a new class called GroupManagement; define a new function MergeSmallGroupToLargeOne inside struct Group.
Please join me to review this C# solution.
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
class Solution
{
/*
* January 19, 2016
*/
public struct Group
{
public int Links;
public Stack<int> Nodes;
/*
* Small group will join the bigger group.
*/
public static void MergeSmallGroupToLargeOne(
Group[] groups,
int smallGroupId,
int bigGroupId,
int[] nodeGroupId)
{
groups[bigGroupId].Links += groups[smallGroupId].Links + 1;
Stack<int> destination = groups[bigGroupId].Nodes;
Stack<int> source = groups[smallGroupId].Nodes;
while (source.Count > 0)
{
int node = source.Pop();
nodeGroupId[node] = bigGroupId;
destination.Push(node);
}
}
/*
* Go over the calculation formula
*
*/
public static ulong CalculateValue(Group[] sortedGroups)
{
ulong additionalLinks = 0;
ulong totalValueOfFriendship = 0;
ulong totalFriends = 0;
// Each group is maximized in order... additionalLinks added at end
foreach (Group group in sortedGroups)
{
ulong links = (ulong)(group.Nodes.Count - 1);
ulong lookupValue = FriendshipValueCalculation.GetLookupTable()[links];
totalValueOfFriendship += lookupValue + totalFriends * links;
additionalLinks += (ulong)group.Links - links;
totalFriends += links * (links + 1);
}
totalValueOfFriendship += additionalLinks * totalFriends;
return totalValueOfFriendship;
}
/*
* filter out empty group, check Group class member
* @groupCount - total groups, excluding merged groups
* @groupIndex - total groups, including merged groups
*
* check Nodes in the stack, if the stack is empty, then the group is empty.
*/
public static Group[] GetNonemptyGroups(int groupCount, int groupIndex, Group[] groups)
{
Group[] nonEmptyGroups = new Group[groupCount];
int index = 0;
for (int i = 1; i <= groupIndex; i++)
{
if (groups[i].Nodes.Count > 0)
{
nonEmptyGroups[index++] = groups[i];
}
}
return nonEmptyGroups;
}
}
/*
* Design talk:
* 1 <= n <= 100,000, n is the total students
* 1 <= m <= 2 * 100,000, m is the total friendship
* @groups -
* @groupIdMap -
*/
public class GroupManagement
{
public Group[] groups;
public int[] groupIdMap;
public int groupIndex = 0;
public int groupCount = 0;
public GroupManagement(int totalStudents)
{
groups = new Group[totalStudents / 2 + 1]; //
groupIdMap = new int[totalStudents + 1]; // less than 2MB
groupIndex = 0;
groupCount = 0;
}
/*
1) neither in a group: create new group with 2 nodes
2) only one in a group: add the other
3) both already in same group - increase Links
4) both already in different groups... join groups
*
*/
public void AddFriendshipToGroups(int id1, int id2)
{
int groupId1 = groupIdMap[id1];
int groupId2 = groupIdMap[id2];
if (groupId1 == 0 || groupId2 == 0)
{
if (groupId1 == 0 && groupId2 == 0)
{
groupIndex++;
groupCount++;
groups[groupIndex].Links = 1;
groups[groupIndex].Nodes = new Stack<int>();
groups[groupIndex].Nodes.Push(id1);
groups[groupIndex].Nodes.Push(id2);
groupIdMap[id1] = groupIndex;
groupIdMap[id2] = groupIndex;
}
else if (groupId1 == 0)
{
// add student1 into student2's group
groups[groupId2].Nodes.Push(id1);
groups[groupId2].Links++;
groupIdMap[id1] = groupId2;
}
else
{
// add student2 into student1's group
groups[groupId1].Nodes.Push(id2);
groups[groupId1].Links++;
groupIdMap[id2] = groupId1;
}
}
else
{
if (groupId1 == groupId2)
{
groups[groupId1].Links++;
}
else // merge two groups
{
groupCount--;
int groupSize1 = groups[groupId1].Nodes.Count;
int groupSize2 = groups[groupId2].Nodes.Count;
if (groupSize1 < groupSize2)
{
// small, big, groupId, nodeGroupId
Group.MergeSmallGroupToLargeOne(groups, groupId1, groupId2, groupIdMap);
}
else
{
Group.MergeSmallGroupToLargeOne(groups, groupId2, groupId1, groupIdMap);
}
}
}
}
}
/*
* descending
*/
public class GroupComparer : Comparer<Group>
{
public override int Compare(Group x, Group y)
{
return (y.Nodes.Count - x.Nodes.Count);
}
}
/*
* add some calculation description here.
*/
public class FriendshipValueCalculation
{
public static long FRIENDSHIPS_MAXIMUM = 200000;
public static ulong[] GetLookupTable()
{
ulong[] friendshipsLookupTable = new ulong[FRIENDSHIPS_MAXIMUM]; // 1.6 MB
ulong valueOfFriendship = 0;
for (int i = 1; i < FRIENDSHIPS_MAXIMUM; i++)
{
valueOfFriendship += (ulong)i * (ulong)(i + 1);
friendshipsLookupTable[i] = valueOfFriendship;
}
return friendshipsLookupTable;
}
}
static void Main(String[] args)
{
ProcessInput();
//RunSampleTestcase();
//RunSampleTestcase2();
}
public static void ProcessInput()
{
GroupComparer headComparer = new GroupComparer();
int queries = Convert.ToInt32(Console.ReadLine());
for (int query = 0; query < queries; query++)
{
string[] tokens_n = Console.ReadLine().Split(' ');
int studentsCount = Convert.ToInt32(tokens_n[0]);
int friendshipsCount = Convert.ToInt32(tokens_n[1]);
GroupManagement groupManager = new GroupManagement(studentsCount);
for (int i = 0; i < friendshipsCount; i++)
{
string[] relationship = Console.ReadLine().Split(' ');
int id1 = Convert.ToInt32(relationship[0]);
int id2 = Convert.ToInt32(relationship[1]);
groupManager.AddFriendshipToGroups(id1, id2);
}
// Get all groups large to small
Group[] sortedGroups =
Group.GetNonemptyGroups(
groupManager.groupCount,
groupManager.groupIndex,
groupManager.groups);
Array.Sort(sortedGroups, headComparer);
Console.WriteLine(Group.CalculateValue(sortedGroups));
}
}
/*
*
* Need to work on the sample test case
* 1. student 1 and 2 become friends
* 1-2 3 4 5, we then sum the number of friends that each student has
* to get 1 + 1 + 0 + 0 + 0 = 2.
* 2. Student 2 and 3 become friends:
* 1-2-3 4 5, we then sum the number of friends that each student has to get
* 2 + 2 + 2 + 0 + 0 = 6.
* 3. Student 4 and 5 become friends:
* 1-2-3 4-5, we then sum the number of friends that each student has to get
* 2 + 2 + 2 + 1 + 1 = 8.
* 4. Student 1 and 3 become friends: (we hold to add 1 and 3 until 4 and 5
* are added to maximize the value.)
* 1-2-3 4-5, we then sum the number of friends that each student has to get
* 2 + 2 + 2 + 1 + 1 = 8.
* Total is 2 + 6 + 8 + 8 = 24.
*/
public static void RunSampleTestcase()
{
string[][] datas = new string[1][];
datas[0] = new string[2];
datas[0][0] = "5";
datas[0][1] = "4";
string[][] allFriendships = new string[1][];
allFriendships[0] = new string[4];
allFriendships[0][0] = "1 2";
allFriendships[0][1] = "2 3";
allFriendships[0][2] = "1 3";
allFriendships[0][3] = "4 5";
Console.WriteLine(HelpTestCase(datas, allFriendships));
}
public static void RunSampleTestcase2()
{
string[][] datas = new string[1][];
datas[0] = new string[2];
datas[0][0] = "5";
datas[0][1] = "4";
string[][] allFriendships = new string[1][];
allFriendships[0] = new string[4];
allFriendships[0][0] = "1 2";
allFriendships[0][1] = "3 2";
allFriendships[0][2] = "4 2";
allFriendships[0][3] = "4 3";
Console.WriteLine(HelpTestCase(datas, allFriendships));
}
private static ulong HelpTestCase(string[][] datas, string[][] allFriendships)
{
GroupComparer headComparer = new GroupComparer();
int studentsCount = Convert.ToInt32(datas[0][0]);
int friendshipsCount = Convert.ToInt32(datas[0][1]);
GroupManagement groupManager = new GroupManagement(studentsCount);
for (int i = 0; i < friendshipsCount; i++)
{
string[] relationship = allFriendships[0][i].Split(' ');
int id1 = Convert.ToInt32(relationship[0]);
int id2 = Convert.ToInt32(relationship[1]);
groupManager.AddFriendshipToGroups(id1, id2);
}
// Get all groups large to small
Group[] sortedGroups =
Group.GetNonemptyGroups(
groupManager.groupCount,
groupManager.groupIndex,
groupManager.groups);
Array.Sort(sortedGroups, headComparer);
return Group.CalculateValue(sortedGroups);
}
}
Answer: Almost everything looks good except for a few bits, so we'll go top-to-bottom:
public int Links;
public Stack<int> Nodes;
You should never expose public fields in C#, especially in a struct. These should always be properties:
public int Links { get; set; }
public Stack<int> Nodes { get; set; }
Of course, where possible they should be immutable, i.e. get; private set;. In your case neither setter can be private, but in general we try to be as restrictive as possible until we need not to be.
Then you have another group:
public Group[] groups;
public int[] groupIdMap;
public int groupIndex = 0;
public int groupCount = 0;
First, C# public member naming rules indicate that PascalCase should always be used; second, we talked about the properties thing already; third, the value 0 is the default for int types. .NET languages have mandatory default constructors on struct objects that initialize all fields/properties to their default value: for any numeric type (int, long, float, ulong) that's 0, for bool it's false, etc.
public Group[] Groups { get; set; }
public int[] GroupIdMap { get; set; }
public int GroupIndex { get; set; }
public int GroupCount { get; set; }
So obviously that's pretty simple stuff, you may or may not have been aware of.
Next, we'll get into some of the actual code and talk about things that can make life easier and whatnot.
C# has implicit typing available through the use of the var keyword (similar to local type inference with Dim in VB.NET, or let in F#). Something like the following:
Stack<int> destination = groups[bigGroupId].Nodes;
Stack<int> source = groups[smallGroupId].Nodes;
Can be implicitly typed:
var destination = groups[bigGroupId].Nodes;
var source = groups[smallGroupId].Nodes;
Whether or not you want to actually use LINQ is up to you, but I'll give you a nice example that you can try to apply more generally:
public static Group[] GetNonemptyGroups(int groupCount, int groupIndex, Group[] groups)
{
Group[] nonEmptyGroups = new Group[groupCount];
int index = 0;
for (int i = 1; i <= groupIndex; i++)
{
if (groups[i].Nodes.Count > 0)
{
nonEmptyGroups[index++] = groups[i];
}
}
return nonEmptyGroups;
}
With LINQ (System.Linq) this can be made one line:
public static Group[] GetNonemptyGroups(int groupCount, int groupIndex, Group[] groups)
{
    // the guard against null Nodes skips the unused slots beyond groupIndex
    return groups
        .Where(g => g.Nodes != null && g.Nodes.Count > 0)
        .ToArray();
}
That keeps it really simple. (It'll probably be slightly slower, LINQ is usually slower than a hand-written loop, which is why I leave it up to you if you like it or not.)
You have this method:
public static void MergeSmallGroupToLargeOne(
Group[] groups,
int smallGroupId,
int bigGroupId,
int[] nodeGroupId)
{
groups[bigGroupId].Links += groups[smallGroupId].Links + 1;
Stack<int> destination = groups[bigGroupId].Nodes;
Stack<int> source = groups[smallGroupId].Nodes;
while (source.Count > 0)
{
int node = source.Pop();
nodeGroupId[node] = bigGroupId;
destination.Push(node);
}
}
Which on first glance shouldn't work. It wasn't until I remembered that Stack<T> is a reference type that I realized why it works. I have no idea if there's anything that can be done about it, but it confused me heavily for a moment. Perhaps instead of using destination just call groups[bigGroupId].Nodes.Push instead.
If you have access to C#6.0 then some of these method calls can become a little simpler:
public class GroupComparer : Comparer<Group>
{
public override int Compare(Group x, Group y)
{
return (y.Nodes.Count - x.Nodes.Count);
}
}
Can become:
public class GroupComparer : Comparer<Group>
{
public override int Compare(Group x, Group y) =>
y.Nodes.Count - x.Nodes.Count;
}
Casting in C# is almost always expensive, and you should do it as little as possible.
public class FriendshipValueCalculation
{
public static long FRIENDSHIPS_MAXIMUM = 200000;
public static ulong[] GetLookupTable()
{
ulong[] friendshipsLookupTable = new ulong[FRIENDSHIPS_MAXIMUM]; // 1.6 MB
ulong valueOfFriendship = 0;
for (int i = 1; i < FRIENDSHIPS_MAXIMUM; i++)
{
valueOfFriendship += (ulong)i * (ulong)(i + 1);
friendshipsLookupTable[i] = valueOfFriendship;
}
return friendshipsLookupTable;
}
}
First: that FRIENDSHIPS_MAXIMUM should be a const; it cannot be declared static const (const members are implicitly static, so the static modifier is not allowed on them), but it should be a const. Right now anyone can reassign it.
It should also be a private member since it's not used outside your class.
As far as naming, usually C# avoids SHOUTY_SNAKE_CASE but personally I use that casing type so that when I am looking at a name I know immediately that it's a constant. Generally const members follow the same naming convention as normal: public and protected are PascalCase and private is camelCase.
We have three casts here (yes, I did say three). The first two are obvious, casting i to ulong. The third is an implicit cast from i to long in the condition i < FRIENDSHIPS_MAXIMUM. Yes, that is a cast.
What can we do to fix this? First, since i has to fit within the range of an array index (which is always an Int32), we can either change the type of FRIENDSHIPS_MAXIMUM to int or create a local int copy of it.
Next, we want to eliminate that ulong cast we do twice, but how? There are two options: create a new local variable that is a ulong and manage it in the loop, or cast i once per iteration:
var bigI = (ulong)i;
Then use bigI in the loop.
I'm going to use the first version since it should be faster.
public class FriendshipValueCalculation
{
// Remove implicit cast from `int` to `long`, make it a `const` since it shouldn't ever change, make it `private` since no one else needs it
private const int FRIENDSHIPS_MAXIMUM = 200000;
public static ulong[] GetLookupTable()
{
const int startIndex = 1;
var friendshipsLookupTable = new ulong[FRIENDSHIPS_MAXIMUM]; // 1.6 MB
var valueOfFriendship = 0ul;
var valueIndex = (ulong)startIndex;
for (var i = startIndex; i < FRIENDSHIPS_MAXIMUM; i++, valueIndex++)
{
valueOfFriendship += valueIndex * (valueIndex + 1);
friendshipsLookupTable[i] = valueOfFriendship;
}
return friendshipsLookupTable;
}
}
The ul suffix on the 0 tells C# that I want that literal to be an unsigned long or ulong. (Similar to the f suffix indicating float.) This allows the implicit type engine to appropriately determine what type that var really is.
The i++, valueIndex++ tells the compiler to increment both of those variables each time the loop iterates.
Finally, this if block should be seriously refactored:
if (groupId1 == 0 || groupId2 == 0)
{
if (groupId1 == 0 && groupId2 == 0)
{
groupIndex++;
groupCount++;
groups[groupIndex].Links = 1;
groups[groupIndex].Nodes = new Stack<int>();
groups[groupIndex].Nodes.Push(id1);
groups[groupIndex].Nodes.Push(id2);
groupIdMap[id1] = groupIndex;
groupIdMap[id2] = groupIndex;
}
else if (groupId1 == 0)
{
// add student1 into student2's group
groups[groupId2].Nodes.Push(id1);
groups[groupId2].Links++;
groupIdMap[id1] = groupId2;
}
else
{
// add student2 into student1's group
groups[groupId1].Nodes.Push(id2);
groups[groupId1].Links++;
groupIdMap[id2] = groupId1;
}
}
else
{
if (groupId1 == groupId2)
{
groups[groupId1].Links++;
}
else // merge two groups
{
groupCount--;
int groupSize1 = groups[groupId1].Nodes.Count;
int groupSize2 = groups[groupId2].Nodes.Count;
if (groupSize1 < groupSize2)
{
// small, big, groupId, nodeGroupId
Group.MergeSmallGroupToLargeOne(groups, groupId1, groupId2, groupIdMap);
}
else
{
Group.MergeSmallGroupToLargeOne(groups, groupId2, groupId1, groupIdMap);
}
}
}
Arrow-code is never appreciated, and nothing in that block becomes more complex when it's refactored to one level:
if (groupId1 == 0 && groupId2 == 0)
{
groupIndex++;
groupCount++;
groups[groupIndex].Links = 1;
groups[groupIndex].Nodes = new Stack<int>();
groups[groupIndex].Nodes.Push(id1);
groups[groupIndex].Nodes.Push(id2);
groupIdMap[id1] = groupIndex;
groupIdMap[id2] = groupIndex;
}
else if (groupId1 == 0)
{
// add student1 into student2's group
groups[groupId2].Nodes.Push(id1);
groups[groupId2].Links++;
groupIdMap[id1] = groupId2;
}
else if (groupId2 == 0)
{
// add student2 into student1's group
groups[groupId1].Nodes.Push(id2);
groups[groupId1].Links++;
groupIdMap[id2] = groupId1;
}
else if (groupId1 == groupId2)
{
groups[groupId1].Links++;
}
else // merge two groups
{
groupCount--;
int groupSize1 = groups[groupId1].Nodes.Count;
int groupSize2 = groups[groupId2].Nodes.Count;
if (groupSize1 < groupSize2)
{
// small, big, groupId, nodeGroupId
Group.MergeSmallGroupToLargeOne(groups, groupId1, groupId2, groupIdMap);
}
else
{
Group.MergeSmallGroupToLargeOne(groups, groupId2, groupId1, groupIdMap);
}
}
Then we see that these two cases:
else if (groupId1 == 0)
{
// add student1 into student2's group
groups[groupId2].Nodes.Push(id1);
groups[groupId2].Links++;
groupIdMap[id1] = groupId2;
}
else if (groupId2 == 0)
{
// add student2 into student1's group
groups[groupId1].Nodes.Push(id2);
groups[groupId1].Links++;
groupIdMap[id2] = groupId1;
}
Do the same thing just in opposite directions. Well, that's not hard to deal with:
if (groupId1 == 0 || groupId2 == 0)
{
var groupId = groupId1 + groupId2; // One of them is 0, so if we add them we'll get the ID for the other
var id = groupId1 == 0 ? id1 : id2; // This (`a ? b : c`) is the ternary operator, if `groupId1 == 0` then `id1` is returned, else `id2` is returned
groups[groupId].Nodes.Push(id);
groups[groupId].Links++;
groupIdMap[id] = groupId;
}
Now we've compressed our code more, and it's still extremely obvious what's going on. Our final block is:
if (groupId1 == 0 && groupId2 == 0)
{
groupIndex++;
groupCount++;
groups[groupIndex].Links = 1;
groups[groupIndex].Nodes = new Stack<int>();
groups[groupIndex].Nodes.Push(id1);
groups[groupIndex].Nodes.Push(id2);
groupIdMap[id1] = groupIndex;
groupIdMap[id2] = groupIndex;
}
else if (groupId1 == 0 || groupId2 == 0)
{
var groupId = groupId1 + groupId2; // One of them is 0, so if we add them we'll get the ID for the other
var id = groupId1 == 0 ? id1 : id2; // This (`a ? b : c`) is the ternary operator, if `groupId1 == 0` then `id1` is returned, else `id2` is returned
groups[groupId].Nodes.Push(id);
groups[groupId].Links++;
groupIdMap[id] = groupId;
}
else if (groupId1 == groupId2)
{
groups[groupId1].Links++;
}
else // merge two groups
{
groupCount--;
int groupSize1 = groups[groupId1].Nodes.Count;
int groupSize2 = groups[groupId2].Nodes.Count;
if (groupSize1 < groupSize2)
{
// small, big, groupId, nodeGroupId
Group.MergeSmallGroupToLargeOne(groups, groupId1, groupId2, groupIdMap);
}
else
{
Group.MergeSmallGroupToLargeOne(groups, groupId2, groupId1, groupIdMap);
}
}
Overall: very solid code, excellent work and I'm glad to see another person experimenting with the language. :) Hopefully you learn more and more about it, and become another highly-qualified developer in C#. | {
"domain": "codereview.stackexchange",
"id": 27679,
"tags": "c#, object-oriented, programming-challenge, interview-questions, graph"
} |
Testing different implementations of malloc() | Question: Can you help me verify my test result? I'm testing different malloc() implementations with a small program that allocates gigabytes many times:
int main(int argc, char **argv) {
int i;
for (i = 0; i < 1000000; i++) {
void *p = malloc(1024 * 1024 * 1024);
free(p);
}
return (0);
}
If I run it and time it, then it takes 5 seconds:
$ time ./gig
real 0m5.140s
user 0m0.384s
sys 0m4.752s
Now I try my custom malloc() with exactly the same program, and it seems unreasonably fast.
$ time ./gb_quickfit
real 0m0.045s
user 0m0.044s
sys 0m0.000s
Why is the custom malloc() so much faster? I used the quick fit algorithm:
void *malloc_quick(size_t nbytes) /* number of bytes of memory to allocate */
{
Header *morecore(unsigned);
int index, i;
index = qindex(nbytes);
/*
* Use another strategy for too large allocations. We want the allocation
* to be quick, so use malloc_first().
*/
if (index >= NRQUICKLISTS) {
return malloc_first(nbytes);
}
/* Initialize the quick fit lists if this is the first run. */
if (first_run) {
for (i = 0; i < NRQUICKLISTS; ++i) {
quick_fit_lists[i] = NULL;
}
first_run = false;
}
/*
* If the quick fit list pointer is NULL, then there are no free memory
* blocks present, so we will have to create some before continuing.
*/
if (quick_fit_lists[index] == NULL) {
Header* new_quick_fit_list = init_quick_fit_list(index);
if (new_quick_fit_list == NULL) {
return NULL;
} else {
quick_fit_lists[index] = new_quick_fit_list;
}
}
/*
* Now that we know there is at least one free quick fit memory block,
* let's return that and also update the quick fit list pointer so that
* it points to the next in the list.
*/
void* pointer_to_return = (void *)(quick_fit_lists[index] + 1);
quick_fit_lists[index] = quick_fit_lists[index]->s.ptr;
/* printf("Time taken %d seconds %d milliseconds", msec/1000, msec%1000);*/
return pointer_to_return;
}
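The size-class mapping that qindex presumably performs (it isn't shown in the snippet) can be sketched independently; this sketch assumes power-of-two classes starting at 32 bytes, which may differ from the real implementation:

```c
#include <stddef.h>

#define NRQUICKLISTS 8
#define MIN_BLOCK_SHIFT 5   /* smallest class: 32 bytes (assumed) */

/* Map a request size to a size-class index: class i holds blocks of
 * (1 << (MIN_BLOCK_SHIFT + i)) bytes.  An index of NRQUICKLISTS means the
 * request is too large for quick fit and should fall through to a
 * general-purpose allocator, as malloc_quick does with malloc_first(). */
static int size_class(size_t nbytes)
{
    int i = 0;
    size_t block = (size_t)1 << MIN_BLOCK_SHIFT;
    while (nbytes > block && i < NRQUICKLISTS) {
        block <<= 1;
        i++;
    }
    return i;
}
```

Because each list holds blocks of a single size, allocation from a non-empty list is a constant-time pointer pop, which is where quick fit gets its speed.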
I'm sure there is a catch, because I don't have much experience at this level of detail in C. Why are the results so different? Does the system malloc() use only one algorithm?
Can I be sure that the test is correct? If I run the test under Valgrind, it reports no errors. I ran the test again, verified with Valgrind that it generates no errors, and got this result:
$ time ./gb_quickfit
real 0m0.759s
user 0m0.584s
sys 0m0.172s
dac@dac-Latitude-E7450:~/ClionProjects/omalloc/openmalloc/overhead$ time ./a.out
real 0m0.826s
user 0m0.644s
sys 0m0.180s
Now the result is more reasonable: my custom malloc() is only slightly faster. The reason I got such a large difference the first time might have been an error in the test, allocating too much. The second test looks like:
/* returns an array of arrays of char*, all of which NULL */
char ***alloc_matrix(unsigned rows, unsigned columns) {
char ***matrix = malloc(rows * sizeof(char **));
unsigned row = 0;
unsigned column = 0;
if (!matrix) abort();
for (row = 0; row < rows; row++) {
matrix[row] = calloc(columns, sizeof(char *));
if (!matrix[row]) abort();
for (column = 0; column < columns; column++) {
matrix[row][column] = NULL;
}
}
return matrix;
}
/* deallocates an array of arrays of char*, calling free() on each */
void free_matrix(char ***matrix, unsigned rows, unsigned columns) {
unsigned row = 0;
unsigned column = 0;
for (row = 0; row < rows; row++) {
for (column = 0; column < columns; column++) {
/* printf("column %d row %d\n", column, row);*/
free(matrix[row][column]);
}
free(matrix[row]);
}
free(matrix);
}
int main(int argc, char **argv) {
/* int i;
for (i = 0; i < 1000000; i++) {
void *p = malloc(1024 * 1024 * 1024);
free(p);
}*/
int x = 10000;
char *** matrix = alloc_matrix(x, x);
free_matrix(matrix, x, x);
return (0);
}
Answer: It's very possible that the memory is not really being allocated in RAM, but only reserved in the virtual address space. I couldn't guess why your implementation behaves differently, but if you keep the memory allocated and sleep, you may find (assuming you're on Linux; I don't know about Windows) that, due to its size, the memory is allocated but not backed by anything until you use it.
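A minimal sketch of that effect (the helper names and the 4096-byte page size are assumptions for illustration): on Linux with overcommit enabled, malloc of a huge block typically just reserves address space, and physical pages are only faulted in when each page is first written.

```c
#include <stdlib.h>

/* Reserve `len` bytes without touching them.  On Linux with overcommit
 * enabled, this typically consumes only virtual address space, which is
 * why a malloc/free loop over huge blocks can look suspiciously cheap. */
static void *reserve_only(size_t len)
{
    return malloc(len);
}

/* Write one byte per page, forcing the kernel to actually back each page
 * with physical memory.  Returns the number of pages touched. */
static size_t touch_pages(void *p, size_t len, size_t page_size)
{
    unsigned char *bytes = p;
    size_t touched = 0;
    for (size_t off = 0; off < len; off += page_size) {
        bytes[off] = 1;
        touched++;
    }
    return touched;
}
```

Timing the benchmark with and without the touching step would show whether the cost being measured is real memory use or just address-space bookkeeping.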
"domain": "codereview.stackexchange",
"id": 20089,
"tags": "c, memory-management, benchmarking"
} |