| anchor | positive | source |
|---|---|---|
How did cosmologists calculate the small density of dark energy in our universe? | Question: I've seen the number a few times ($138 \times 10^{-123}$) but I haven't found how scientists calculated it. Max Tegmark mentions it in his research paper about multiverse types as representing the cosmological constant in Planck units
Answer: The dark energy density in the universe is a concordance result, based on several pieces of independent information: the power spectrum of spatial structure in the cosmic microwave background, the redshift-distance relationship derived from high-redshift Type Ia supernovae, baryon acoustic oscillations, models of cosmic structure formation, primordial abundances, and so on.
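These numbers can be sanity-checked with a short back-of-envelope computation (a sketch; the Hubble constant $H_0 \approx 70$ km/s/Mpc and the rounded physical constants below are assumed values):

```python
import math

G  = 6.674e-11        # m^3 kg^-1 s^-2
c  = 2.998e8          # m/s
H0 = 70e3 / 3.086e22  # 70 km/s/Mpc expressed in 1/s (assumed value)

rho_crit = 3 * H0**2 / (8 * math.pi * G)  # critical mass density, kg/m^3
u_crit   = rho_crit * c**2                # critical energy density, J/m^3
u_lambda = 0.7 * u_crit                   # the ~0.7 dark-energy share

# Express the same quantity in Planck units: divide by E_Planck / l_Planck^3.
hbar = 1.055e-34
l_P  = math.sqrt(hbar * G / c**3)         # Planck length
E_P  = math.sqrt(hbar * c**5 / G)         # Planck energy
u_planck = E_P / l_P**3                   # Planck energy density

print(u_lambda)             # ~6e-10 J/m^3
print(u_lambda / u_planck)  # ~1.3e-123
```

The last ratio is the cosmological constant in Planck units, which is the order-of-magnitude figure quoted in the question.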
The value is about 0.7 of the energy density required for a "flat" universe - the so-called critical density. This is about $6\times 10^{-10}$ Joules per cubic metre. | {
"domain": "physics.stackexchange",
"id": 49073,
"tags": "cosmology, dark-energy, cosmological-constant"
} |
Can localized wavepackets have mass? | Question: Page 31 of David Tong's notes on QFT (also in Srednicki's book, in the discussion of the LSZ reduction formula) talks about Gaussian wavepackets $$|\varphi\rangle=\int \frac{d^3\textbf{p}}{(2\pi)^{3}}e^{-i\textbf{p}\cdot\textbf{x}}\varphi(\textbf{p})|\textbf{p}\rangle$$ with $\varphi(\textbf{p})=\exp[-\textbf{p}^2/2m^2]$ such that the state is somewhat localized in position space and somewhat localized in momentum space. My question is whether such a state satisfies the relativistic dispersion relation (RDR) $E^2-\textbf{p}^2=m^2$, given that the one-particle Fock states $|\textbf{p}\rangle$ satisfy it. If not, can it faithfully represent a real physical particle?
EDIT: Is it possible to consider a different function than $\varphi(\textbf{p})=\exp[-\textbf{p}^2/2m^2]$ so that the state is at the same time somewhat localized and also has a mass $m$?
Answer: $$P^2 \, \int \text d ^3 \mathbf p f(\mathbf p ) \vert \mathbf p \rangle = \int \text d ^3 \mathbf p f(\mathbf p ) P^2 \vert \mathbf p \rangle = m^2\int \text d ^3 \mathbf p f(\mathbf p ) \vert \mathbf p \rangle$$
All these states are by definition on the mass shell (for each wavefunction $f$). Note that the localization in position is just a heuristic concept, if they have not introduced a relativistic position operator. It means that $$\intop \text d ^3 \mathbf p f_1(\mathbf p )^* f_2(\mathbf p)\approx 0$$ irrespective of the momentum distributions $\vert f_{1,2}(\mathbf p)\vert ^2$. | {
"domain": "physics.stackexchange",
"id": 35515,
"tags": "quantum-field-theory, particle-physics, hilbert-space, scattering, second-quantization"
} |
Is heat flowing outside in this case? | Question: In Feynman's treatment of the Carnot cycle, he considers a perfect gas in a cylinder+piston in which we are injecting heat $Q_{1}$ with the help of a heat reservoir at constant temperature $T_{1}$. At the same time, we are expanding the piston by ourselves. He said,
Suppose that we have a gas in a cylinder equipped
with a frictionless piston. The gas is not necessarily a perfect gas. The fluid does
not even have to be a gas, but to be specific let us say we do have a perfect gas. Also, suppose that we have two heat pads, $T_{1}$ and $T_{2}$—great big things that have definite temperatures, $T_{1}$ and $T_{2}$. We will suppose in this case that $T_{1}$ is higher than $T_{2}$.
Let us first heat the gas and at the same time expand it, while it is in contact with the heat pad at $T_{1}$. As we do this, pulling the piston out very slowly as the heat flows into the gas, we will make sure that the temperature of the gas never gets very far from $T_{1}$. If we pull the piston out too fast, the temperature of the gas will fall too much below $T_{1}$ and then the process will not be quite reversible, but if we pull it slowly enough, the temperature of the gas will never depart much from $T_{1}$.
So if the gas is being kept at a constant temperature $T_{1}$, and at the same time we are expanding the piston ourselves and injecting heat into the gas, then the injected heat $Q_{1}$ must be flowing to the outside, isn't it?
Answer:
So if the gas is being kept at a constant temperature $T_{1}$, and at the same time we are expanding the piston ourselves and injecting heat into the gas, then the injected heat $Q_{1}$ must be flowing to the outside, isn't it?
No heat leaves the system in this expansion process. The injected heat equals the expansion work because, for an ideal gas, the internal energy depends only on temperature. Since the temperature of the gas does not change, the change in internal energy has to be zero. Then, from the first law for a closed system
$$\Delta U=Q-W$$
$$\Delta U=0$$
$$Q=W$$
The process Feynman is describing is a reversible isothermal expansion. To carry it out the external pressure has to be slowly reduced so that the gas pressure is always in equilibrium with the external pressure and the temperature of the gas remains constant.
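As a numeric sketch of the energy bookkeeping (the amounts are illustrative assumptions: one mole of ideal gas at $T_1 = 300$ K doubling its volume reversibly), the heat absorbed exactly equals the work done:

```python
import math

n, R, T1 = 1.0, 8.314, 300.0  # mol, J/(mol K), K -- assumed values
V_ratio = 2.0                 # V_final / V_initial

# Reversible isothermal work done BY the gas; since dU = 0 at constant T
# for an ideal gas, the first law gives Q = W: all injected heat becomes work.
W = n * R * T1 * math.log(V_ratio)
Q = W

print(Q)  # ~1729 J absorbed from the reservoir; none flows back out
```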
As @Chet Miller pointed out, that doesn't require the piston to be "pulled" as Feynman says, but that the external pressure be allowed to slowly decrease.
Hope this helps. | {
"domain": "physics.stackexchange",
"id": 63798,
"tags": "thermodynamics, work"
} |
Non-linear systems in classical mechanics | Question:
In general, what is meant by a non-linear system in classical mechanics? Does it always concern the differential equations one ends up with (any examples would be greatly appreciated)? If so, is it considered non-linear because of higher powers of the system variables ($x^2, x^3, \ldots$), or does any function of $x$, like $\cos(x)$, $\ln(x)$, $e^x$ etc., also make the system non-linear? I am confused.
Furthermore, why is it that most non-linear systems are considered non-integrable? Is it due to this fact that such systems are usually considered to be unpredictable even classically? (because we can't have exact analytical solutions?).
Answer: (1) In general, what is meant by non-linear system in classical mechanics?
A linear system is described by a set of differential equations that are a linear combination of the dependent variable and its derivatives. Some examples of linear systems in classical mechanics:
A damped harmonic oscillator, $$m \frac{d^2 x(t)}{dt^2} + c \frac{d x(t)}{dt} + k x(t) = 0$$
The heat equation, $$\frac{\partial u(\vec x, t)}{\partial t} -\alpha \nabla^2 u(\vec x, t) = 0$$
The wave equation, $$\frac{\partial^2 u(\vec x, t)}{\partial t^2} -c \nabla^2 u(\vec x, t) = 0$$
Non-linear systems cannot be described by a linear set of differential equations. Some examples of non-linear systems in classical mechanics:
Aerodynamic drag, where the drag force is proportional to the square of velocity, $$F_d = \frac 1 2 \rho v^2 C_D A$$
The Navier-Stokes equations, which are notoriously non-linear,
$$\rho \left( \frac{\partial \vec v}{\partial t} + \vec v \cdot \vec \nabla \vec v \right) = -\vec \nabla p + \vec \nabla \cdot \mathbf{T} + \vec f$$
Gravitational systems, where the force is inversely proportional to the square of distance between objects,
$$\vec F = -\frac {GMm}{||\vec r||^3}\vec r$$
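One practical consequence of non-linearity is that superposition fails. The sketch below (illustrative units with $g/L = 1$, classical RK4 integration) compares the linearized pendulum $\ddot\theta = -\theta$ with the full $\ddot\theta = -\sin\theta$: doubling the initial angle doubles the linear solution exactly, but not the non-linear one.

```python
import math

def rk4_step(f, y, dt):
    # One classical RK4 step for y' = f(y), with y = (theta, omega).
    k1 = f(y)
    k2 = f((y[0] + 0.5*dt*k1[0], y[1] + 0.5*dt*k1[1]))
    k3 = f((y[0] + 0.5*dt*k2[0], y[1] + 0.5*dt*k2[1]))
    k4 = f((y[0] + dt*k3[0], y[1] + dt*k3[1]))
    return (y[0] + dt*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])/6,
            y[1] + dt*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])/6)

def evolve(f, theta0, t_end=2.0, dt=1e-3):
    # Release from rest at angle theta0 and return the angle at t_end.
    y = (theta0, 0.0)
    for _ in range(int(round(t_end / dt))):
        y = rk4_step(f, y, dt)
    return y[0]

linear    = lambda y: (y[1], -y[0])            # linearized: superposition holds
nonlinear = lambda y: (y[1], -math.sin(y[0]))  # full pendulum: it does not

print(evolve(linear, 1.0) - 2 * evolve(linear, 0.5))      # ~0: doubles exactly
print(evolve(nonlinear, 1.0) - 2 * evolve(nonlinear, 0.5))  # visibly nonzero
```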
(2) Furthermore, why is it that most non-linear systems are considered non-integrable?
That term, "non-integrable", has two very distinct meanings. One sense is that the integral cannot be expressed as a finite combination of elementary functions. The elementary functions are polynomials, rational functions, roots, the exponential function, the logarithmic function, and the trigonometric functions. This is a rather arbitrary division. For example, the integrals $\operatorname{li}(x) = \int_0^x 1/\ln(t)\,dt$ and $\operatorname{Si}(x) = \int_0^x \sin(t)/t\,dt$ (the logarithmic and sine integrals) cannot be expressed in the elementary functions. These "special functions" appear so often that algorithms have been devised to estimate their values.
Just because the solution to a problem can't be expressed in elementary functions doesn't mean the problem is unsolvable. It just means it's not solvable in the elementary functions. For example, people oftentimes say the three body problem is not "solvable". That's nonsense (ignoring collision cases). In the sense of solvability in the elementary functions, even the two body problem is not "integrable". Kepler's equation, $M = E - e\sin E$, gets in the way. Just because the two body problem cannot be expressed in terms of a finite combination of elementary functions does not mean we can't solve the two body problem.
There's another sense of "integrability", which is "does the integral exist?" Going back to the n body problem, a problem exists with collisions. These collisions introduce singularities, so that one could say that the n body problem is not integrable in the case of collisions. Collisions represent one kind of singularity. Painlevé conjectured that the n body problem has collisionless singularities when $n\ge 4$. This has been proven to be true when $n \ge 5$. Newtonian mechanics allows some configurations of gravitating point masses to be sent to infinity in finite time. This truly is an example of non-integrability.
Proving integrability (or lack thereof) in this sense is a much tougher problem than showing that a problem is (or is not) solvable in the elementary functions. There's a million dollar prize for the first person who can either prove that the Navier-Stokes equations have globally-defined, smooth solutions, or come up with a counterexample that shows that the Navier-Stokes equations are not "integrable." | {
"domain": "physics.stackexchange",
"id": 17004,
"tags": "classical-mechanics, non-linear-systems, integrable-systems"
} |
Jenkins buildfarm fails to download file from bitbucket.org | Question:
On the Jenkins build farm, my downloads from bitbucket.org always yield the same hash of d41d8cd98f00b204e9800998ecf8427e causing a build/configure failure. I don't think http will work any better because it redirects to an https amazon cloud server.
http://jenkins.ros.org/job/ros-indigo-ueye_binarydeb_trusty_amd64/1/consoleText
http://jenkins.ros.org/job/ros-indigo-ueye_binarydeb_trusty_i386/1/consoleText
From my CMakeLists.txt:
file(DOWNLOAD
https://bitbucket.org/kmhallen/ueye/downloads/uEye_SDK_4_40_amd64.tar.gz
${CATKIN_DEVEL_PREFIX}/${CATKIN_PACKAGE_SHARE_DESTINATION}/3rdparty/uEye_SDK_amd64.tar.gz
SHOW_PROGRESS
INACTIVITY_TIMEOUT 60
EXPECTED_MD5 5290609fb3906a3355a6350dd36b2c76
TLS_VERIFY on)
file(DOWNLOAD
https://bitbucket.org/kmhallen/ueye/downloads/uEye_SDK_4_40_i386.tar.gz
${CATKIN_DEVEL_PREFIX}/${CATKIN_PACKAGE_SHARE_DESTINATION}/3rdparty/uEye_SDK_i386.tar.gz
SHOW_PROGRESS
INACTIVITY_TIMEOUT 60
EXPECTED_MD5 d9803f2db1604f5a0993c4b62d395a31
TLS_VERIFY on)
From the CMakeLists.txt of velodyne_driver:
catkin_download_test_data(
${PROJECT_NAME}_tests_class.pcap
http://download.ros.org/data/velodyne/class.pcap
DESTINATION ${CATKIN_DEVEL_PREFIX}/${CATKIN_PACKAGE_SHARE_DESTINATION}/tests
MD5 65808d25772101358a3719b451b3d015)
One solution is to host files on download.ros.org like the velodyne_driver and the costmap_2d packages. How can I upload to this hosting service?
Update: Prerelease downloads and builds fine.
http://jenkins.ros.org/job/prerelease-indigo-ueye/1/ARCH_PARAM=amd64,UBUNTU_PARAM=trusty,label=prerelease/console
Originally posted by kmhallen on ROS Answers with karma: 1416 on 2014-09-15
Post score: 1
Answer:
The problem is that BitBucket is using a redirect to a temporary URL in the Amazon cloud. The CMake code cannot follow this redirect.
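Consistent with this, the hash reported by the failing builds, d41d8cd98f00b204e9800998ecf8427e, is the MD5 of zero bytes, i.e. nothing was actually downloaded. A quick check:

```python
import hashlib

# MD5 of empty input -- the signature of a download that silently got nothing
print(hashlib.md5(b"").hexdigest())  # d41d8cd98f00b204e9800998ecf8427e
```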
You can either place the resource in a location which is fetchable without redirects or ask for that specific file to be uploaded to download.ros.org (we can't give write access to the server as it is setup right now). The first approach obviously allows you to change the file / upload multiple resources easily.
Originally posted by Dirk Thomas with karma: 16276 on 2014-09-17
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by kmhallen on 2014-09-18:
Can someone upload both uEye_SDK_4_40_amd64.tar.gz and uEye_SDK_4_40_i386.tar.gz from https://bitbucket.org/kmhallen/ueye/downloads to download.ros.org/data/ueye/ ? Thanks.
Comment by Dirk Thomas on 2014-09-18:
Done: http://download.ros.org/data/ueye/
Comment by kmhallen on 2014-09-18:
I get HTTP Error 403 Forbidden. It appears the public read permission is not set. Also please add the ".gz" extension for both files. Thanks for the quick response.
Comment by Dirk Thomas on 2014-09-18:
Sorry, I didn't look at it at all. Done.
Comment by kmhallen on 2014-09-18:
Builds are passing now. Thanks. | {
"domain": "robotics.stackexchange",
"id": 19406,
"tags": "ros, jenkins, buildfarm, release"
} |
How can earthquakes shift the earth's axis? | Question: One often comes across news articles that claim that an earthquake shifted the earth's axis.
http://news.google.com/?q=earthquake%20shifted%20OR%20shifts%20earth%27s%20axis
If you ignore the influence of other celestial bodies, an internal event like an earthquake surely can't change the direction of the angular momentum of the Earth (unless stuff is ejected out of Earth), since angular momentum has to be conserved in the absence of an external torque. So the axis has to remain fixed.
Am I missing something? Or are geologists trying to say that the resulting movement of tectonic plates causes a change in the point of intersection of the axis (which remains the same) and the plates that include the poles, so that it seems as if the axis has shifted?
EDIT
Some articles mention the value of the shift in the axis and also the change in the length of the day. If, as Ted Bunn's answer indicates below, the shift in the axis isn't actually real but is because of the movement of tectonic plates with respect to the axis, shouldn't the shift be different at the north and south poles? How are the shifts and the change in day-length calculated?
Answer: Angular momentum doesn't change, but the angular velocity vector does. An earthquake redistributes mass and thus shifts the body's moment of inertia tensor $I$; since $\vec L = I\vec\omega$ is conserved, $\vec\omega$ must change, both in magnitude (changing the length of the day) and in direction (shifting where the rotation axis intersects the crust). | {
"domain": "physics.stackexchange",
"id": 5855,
"tags": "classical-mechanics, rotational-dynamics"
} |
ROS Humble Ubuntu 22.04 Apt Install Issue | Question:
After following: https://docs.ros.org/en/humble/Installation/Ubuntu-Install-Debians.html and doing a system upgrade, I am unable to install ros-humble-desktop-full or ros-rolling-desktop-full (see https://github.com/ros2/ros2/issues/1272):
$ sudo apt install ros-humble-desktop
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
apt : Depends: libapt-pkg6.0 (>= 2.4.5) but it is not going to be installed
Depends: libsystemd0 but it is not installable
bsdutils : PreDepends: libsystemd0 but it is not installable
dconf-gsettings-backend : Depends: dconf-service (< 0.40.0-3.1~)
Depends: dconf-service (>= 0.40.0-3)
init : PreDepends: systemd-sysv
libcolord2 : Depends: libudev1 (>= 196) but it is not installable
libdbus-1-3 : Depends: libsystemd0 but it is not installable
Recommends: dbus
libgudev-1.0-0 : Depends: libudev1 (>= 199) but it is not installable
libhwloc15 : Depends: libudev1 (>= 183) but it is not installable
libignition-cmake2-dev : Depends: cmake
libinput10 : Depends: libudev1 (>= 183) but it is not installable
Depends: libinput-bin (>= 1.20.0-1ubuntu0.1)
libopenmpi-dev : Depends: openmpi-bin (>= 3.0.0-1)
Recommends: libcoarrays-openmpi-dev but it is not going to be installed
libopenni2-0 : Depends: libudev1 (>= 183) but it is not installable
libpulse0 : Depends: libsystemd0 but it is not installable
libqt5gui5 : Depends: libudev1 (>= 183) but it is not installable
libudev-dev : Depends: libudev1 (= 249.11-0ubuntu3) but it is not installable
libusb-1.0-0 : Depends: libudev1 (>= 183) but it is not installable
mpi-default-bin : Depends: openmpi-bin
ros-humble-ament-cmake : Depends: cmake
ros-humble-ament-cmake-core : Depends: cmake
ros-humble-foonathan-memory-vendor : Depends: cmake
shim-signed : Depends: grub-efi-amd64-signed but it is not going to be installed or
grub-efi-arm64-signed but it is not installable
Depends: grub2-common (>= 2.04-1ubuntu24)
util-linux : PreDepends: libsystemd0 but it is not installable
PreDepends: libudev1 (>= 183) but it is not installable
E: Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages.
Originally posted by jgoppert on ROS Answers with karma: 41 on 2022-06-23
Post score: 4
Original comments
Comment by Morris on 2022-06-24:
I'm getting the same error on a brand new Jammy install.
Comment by prawat on 2022-06-26:
I have the same issues with a newly installed jammy
Comment by prawat on 2022-07-01:
After dist-upgrade
$ sudo apt-cache policy libudev1
libudev1:
Installed: 249.11-0ubuntu3.3
Candidate: 249.11-0ubuntu3.3
Version table:
*** 249.11-0ubuntu3.3 500
500 http://us.archive.ubuntu.com/ubuntu jammy-updates/main amd64 Packages
100 /var/lib/dpkg/status
249.11-0ubuntu3 500
500 http://us.archive.ubuntu.com/ubuntu jammy/main amd64 Packages
Comment by prawat on 2022-07-03:
install ros-humble-desktop now works without problems.
Comment by omers on 2022-07-05:
@prawat Can you please elaborate? I am still experiencing this problem
Comment by randychen233 on 2023-01-02:
Hi, I did exactly what's written in the procedures above, and it seems like no error popped up. However, ROS is not actually installed...when the above commands are executed, ROS is simply not there and when I run roscore, an error popped up
Answer:
This is being tracked in ros2/ros2#1287.
As this is a known issue, I suggest (future) readers to refer to that issue, instead of posting here.
Originally posted by gvdhoorn with karma: 86574 on 2022-06-27
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 37800,
"tags": "ros, ros2, apt, debian"
} |
Why does Taylor’s series “work”? | Question: I am an undergraduate Physics student completing my first year shortly. The following question is based on the physical systems I’ve encountered so far. (We mostly did Newtonian mechanics.)
In all of our analyses of the physical systems (up till now) we recklessly exploited Taylor’s series, retaining terms up to the desired precision of our approximate model of reality.
But what is the justification for using Taylor's series? It implicitly assumes that the mathematical functions in our physical model are analytic. But how can we be sure of that?
Sure, nature doesn't seem to be discontinuous or to have "kinks" (i.e. nonexistent derivatives) in its behaviour. That seems plausible. But still, there are non-analytic smooth functions, and there are "many" more of them than there are analytic functions. So even if nature works smoothly in its endeavours, there is essentially zero probability that it should do so analytically.
So why do we use Taylor’s series at all?
Answer: I had this same problem, too. The trick is realizing that there's an important difference between Taylor series and Taylor approximations or polynomials, whose behavior is described by Taylor's theorem. I suspect a common mistake is that you first see Taylor polynomials and Taylor's theorem, then Taylor series arrive and become the focus, and you forget about the rest.
But what we're actually doing when we "truncate" a Taylor series is going back to a Taylor polynomial, since that is what a truncated Taylor series is; alternatively, a Taylor series is the natural extension of a Taylor polynomial to infinite order. In that context, Taylor's theorem tells you exactly how it does or does not behave as an approximation and, surprise, it doesn't require anything about analyticity at all. Analyticity only comes into play when you consider the full series: in fact, what Taylor's theorem tells you is that a finite Taylor polynomial will still work as an approximation even for a non-analytic function, so long as you get suitably close to the point at which you're taking the polynomial and the function is differentiable enough for a polynomial of the given degree to exist.
Specifically, Taylor's theorem tells you that, analytic or not, if you cut the Taylor series so that the highest term has degree $N$, to form the Taylor polynomial (or truncated Taylor series) $T_N(a, x)$, where $a$ is the expansion point, you have
$$f(x) = T_N(a, x) + o(|x - a|^N),\ \ \ \ \ x \rightarrow a$$
where the last part defines the behavior of the remainder term: this is the "little-o notation" and means that the error pales in comparison to the bound $|x - a|^N$.
As an example in elementary mathematical physics, consider the analysis of the "pathological" potential in Newtonian mechanics given by
$$U(x) := \begin{cases} e^{-\frac{1}{x^2}},\ x \ne 0\\ 0,\ \mbox{otherwise} \end{cases}$$
which is smooth everywhere, but not analytic when $x = 0$. In particular, it is so bad that not only is it not analytic, the Taylor series exists and even converges ... just to the wrong thing!:
$$U(x)\ "="\ 0 + 0x + 0x^2 + 0x^3 + 0x^4 + \cdots,\ \ \ \ \mbox{near $x = 0$}$$
... and yes, that is literally 0s on every term, so the right-hand expression equals $0$!
(ADD - see comments: no... not THAT 0! ... uh ... Ooops... uhhh ... )
Nonetheless, while that is technically "wrong", the usual analysis methods you have for this system will still tell you the "right thing", provided you're careful: in particular, we note that $x = 0$ looks like some kind of "equilibrium" since $U'$ is zero there, but we also note that we are told - correctly! - that we should not apply the harmonic oscillator approximation because we also have that the coefficient out in front of $x^2$ is 0 as well.
We are justified in both conclusions because while this Taylor series is "bad", it is still A-OK by Taylor's theorem to write the truncated series, and thus Taylor polynomial,
$$U(x) \approx 0 + 0x + 0x^2,\ \ \ \ \mbox{near $x = 0$}$$
even though it "equals $0$", because this $U(x)$ is "so exquisitely approximated by the constant function $U^{*}(x) := 0$" that it is $o(|x|^N)$ for every order $N > 0$ and thus, in particular, also $N = 2$! Hence, the harmonic analysis and conclusion of failure thereof are still 100% justified!
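The "exquisitely approximated" claim is easy to see numerically (a quick sketch): near $x = 0$, $U(x) = e^{-1/x^2}$ is smaller than any power of $x$, so the remainder $U(x) - 0$ is $o(|x|^N)$ for every $N$.

```python
import math

def U(x):
    # The pathological potential: smooth everywhere, non-analytic at x = 0.
    return math.exp(-1.0 / x**2) if x != 0 else 0.0

# U(x) / x^N -> 0 as x -> 0, for every N: the zero polynomial really is
# an order-N Taylor approximation in the sense of Taylor's theorem.
for x in (0.5, 0.2, 0.1):
    print(x, U(x) / x**2, U(x) / x**10)
```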
ADD (IE+1936.6817 Ms - 2018-05-16): Per a comment added below, there is an additional wrinkle in this story which I had been thinking of mentioning but didn't; in light of that comment, I thought maybe I now should.
There are actually two different kinds of ways in which the Taylor series can fail when a function is not analytic at a point and it is taken at that point. One of these is the way I showed above - where the Taylor series converges, but it converges to the "wrong" thing in that it does not equal the function in any non-trivial interval around that point (you might be able to have it equal it on some weird dusty/broken-up set, but not on any interval), i.e. no interval $[a - \epsilon, a + \epsilon]$ with $\epsilon \ne 0$. Such a point is called a Cauchy point, or C-point.
The other way is for the Taylor series to have actually radius of convergence 0, i.e. it does not converge in any non-trivial interval of the same form with $\epsilon \ne 0$. This kind of point is called a Pringsheim point, or P-point. This case was not demonstrated, but even in such a case, the Taylor series is still an asymptotic series in the sense that it will at least try to start to converge if you're close enough and, moreover, the closer you are to the expansion point $a$, the more terms you can take before it stops converging and starts to diverge again. Since in physics, we are usually interested - and esp. for the harmonic oscillator - in only a few low-order terms, the ultimate behavior of the series is not important and we can still take it to get, say, the harmonic approximation near a point of equilibrium even if the function is not analytic there - e.g. consider the potential $U_3(x) := U(x) + \frac{1}{2} kx^2$ with $k > 0$, where we used the first potential we just gave above. This is not analytic at $x = 0$ either, but nonetheless, the harmonic approximation will not only work, but work exquisitely well, and with the frequency $\omega := \sqrt{\frac{k}{m}}$ as usual.
See:
https://math.stackexchange.com/questions/620290/is-it-possible-for-a-function-to-be-smooth-everywhere-analytic-nowhere-yet-tay | {
"domain": "physics.stackexchange",
"id": 58562,
"tags": "classical-mechanics, mathematics, analyticity"
} |
Free a binary tree without using recursion or allocating memory | Question: As the title says, the objective is to free a binary tree without using the stack or allocating memory.
This was required for a kernel module where resources were limited.
Result has a complexity of \$ O \left( n \right) \$.
template<typename T>
struct Node
{
T data;
Node* left;
Node* right;
};
// Utility function to find the bottom left
// Of the tree (because that has a NULL left
// pointer that we can use to store information.
Node* findBottomLeft(Node* t)
{
while(t->left != NULL)
{
t = t->left;
}
return t;
}
void freeTree(Node* t)
{
if (t == NULL)
{
    return;
}
// Points at the bottom left node.
// Any right nodes are added to the bottom left as we go down
// this progressively flattens the tree into a list as we go.
Node* bottomLeft = findBottomLeft(t);
while(t != NULL)
{
// Technically we don't need the if (it works fine without)
// But it makes the code easier to reason about with it here.
if (t->right != NULL)
{
bottomLeft->left = t->right;
bottomLeft = findBottomLeft(bottomLeft);
}
// Now just free the current node
Node* old = t;
t = t->left;
delete old;
}
}
Answer: function missing templates
I'm sure you're aware, but the functions need the same templating as the Node such that void freeTree(Node* t) becomes:
template<typename T>
void freeTree(Node<T>* t)
slight reduction in memory use
You could slightly reduce the stack used by essentially inlining the function call to findBottomLeft. The rewritten function now looks like this:
template<typename T>
void freeTree(Node<T>* t)
{
// bl points at the bottom left node.
// Any right nodes from t are added to the bottom left as we go down.
// This progressively flattens the tree into a list as we go.
Node<T> *bl;
for(bl=t ; bl != nullptr && bl->left != nullptr; bl=bl->left);
while(t != nullptr)
{
// body of for loop deliberately empty
for (bl->left = t->right; bl != nullptr && bl->left != nullptr; bl=bl->left);
// Now just free the current node
Node<T>* old = t;
t = t->left;
delete old;
}
}
Note that I'm using nullptr rather than NULL here which is a C++11 feature. If you're not using a C++11 compliant compiler, you can simply use NULL for each of those instances.
Also, I've eliminated the early return in the case that t was equal to NULL in the original because this case is handled correctly by the for loop and while loop.
It's also important to realize that the body of the for loop is deliberately empty. Some people dislike having a for loop with just a semicolon at the end, but I think it's not a problem if there is a comment pointing it out. | {
"domain": "codereview.stackexchange",
"id": 7788,
"tags": "c++, memory-management, tree, kernel"
} |
Costmap2DROS transform timeout.Could not get robot pose, cancelling reconfiguration | Question:
Hi, I am doing an experiment with hector SLAM navigation.
In the picture below, the small tf mark is where it started and the big one is the last place.
here is my launch file (click me to github)
But the system crashes every time about 10 seconds after the laser scanner starts moving.
It shows "Costmap2DROS transform timeout" and cannot get the robot pose.
(status update 2016.02.27)
I guess this warning is because of a setting I modified.
I modified "hector_navigation/hector_exploration_node/config/costmap.yaml", changing " transform_tolerance: 2 " to " transform_tolerance: 10 ".
How could I solve it? Thanks for any response. ><
Here are the error message.
[ INFO] [126.662293041]: [hector_exploration_planner] Initializing HectorExplorationPlanner
[ INFO] [126.843700868]: [hector_exploration_planner] Parameter set. security_const: 0.500000, min_obstacle_dist:
1000, plan_in_unknown: 1, use_inflated_obstacle: 1, p_goal_angle_penalty_:50 , min_frontier_size: 5, p_dist_for_goal_reached_: 0.250000, same_frontier: 0.250000
[ WARN] [178.280680532]: Costmap2DROS transform timeout. Current time: 178.2801, global_pose stamp: 168.1918, tolerance: 10.0000
[ WARN] [178.281182366]: Could not get robot pose, cancelling reconfiguration
[ WARN] [179.391044619]: Costmap2DROS transform timeout. Current time: 179.3909, global_pose stamp: 168.1918,
tolerance: 10.0000
[ WARN] [179.391814787]: Could not get robot pose, cancelling reconfiguration
[ WARN] [180.400909692]: Costmap2DROS transform timeout. Current time: 180.4008, global_pose stamp: 168.1918, tolerance: 10.0000
[ WARN] [180.406124285]: Could not get robot pose, cancelling reconfiguration
[ WARN] [181.480204223]: Costmap2DROS transform timeout. Current time: 181.4799, global_pose stamp: 168.1918, tolerance: 10.0000
[ WARN] [181.480478224]: Could not get robot pose, cancelling reconfiguration
[ WARN] [182.480537195]: Costmap2DROS transform timeout. Current time: 182.4804, global_pose stamp: 168.1918, tolerance: 10.0000
[ WARN] [182.480666779]: Could not get robot pose, cancelling reconfiguration
[ WARN] [183.580379513]: Costmap2DROS transform timeout. Current time: 183.5802, global_pose stamp: 168.1918, tolerance: 10.0000
[ WARN] [183.580514430]: Could not get robot pose, cancelling reconfiguration
[ WARN] [184.629115323]: Costmap2DROS transform timeout. Current time: 184.6289, global_pose stamp: 168.1918, tolerance: 10.0000
[ WARN] [184.680667166]: Could not get robot pose, cancelling reconfiguration
Originally posted by YingHua on ROS Answers with karma: 196 on 2016-02-26
Post score: 4
Original comments
Comment by vaziri on 2016-02-26:
Your tolerance for transforms is set to 10.000 meaning that the system will not use transform information that is more than 10 seconds old.Your system is not getting any new TF messages after 168.19 . Read about tf and do the tutorials - http://wiki.ros.org/tf, then update with more info
Comment by YingHua on 2016-02-26:
I made some tests yesterday. I guess the SLAM system crash is not because of this warning.
Maybe the system sways too much, making it crash. But if I want to resolve this warning, should I study more about tf?
Comment by YingHua on 2016-02-26:
Thanks for your response!!vaziri :) I update my question already.
Comment by julimen5 on 2018-03-24:
Did you fix this? I'm having troubles too
Answer:
There is a related question (duplicate?) here, will post same answer in both.
In our case it was a performance issue: the computer was not strong enough to handle the load. I changed the rviz configuration to disable pointcloud and laser-scan visualisation, and furthermore I had a process consuming 100% CPU (a node which was not sleeping while spinning). After those fixes, everything is back to normal.
Originally posted by Oscar Lima with karma: 831 on 2019-06-09
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 23914,
"tags": "slam, navigation, costmap, hector"
} |
Reducing boilerplate of binary layout handling in Haskell | Question: Here is the code: https://github.com/EarlGray/haskell-snippets/blob/master/ext2hs/Ext2.hs
My problem is that it is quite painful to make large Haskell "records" and deserialize a binary layout with Data.Binary.Get manually like this:
instance Binary Superblock where
get = binGetSuperblock
binGetSuperblock = do
uint13 <- replicateM 13 getWord32le
ushort6 <- replicateM 6 getWord16le
uint4 <- replicateM 4 getWord32le
ushort2 <- replicateM 2 getWord16le
return Superblock {
sInodesCount = uint13 !! 0, sBlocksCount = uint13 !! 1,
sReservedBlocksCount = uint13 !! 2, sFreeBlocksCount = uint13 !! 3,
sFreeInodesCount = uint13 !! 4, sFirstDataBlock = uint13 !! 5,
sLogBlockSize = uint13 !! 6, sLogClusterSize = uint13 !! 7,
sBlocksPerGroup = uint13 !! 8, sClustersPerGroup = uint13 !! 9,
sInodesPerGroup = uint13 !! 10, sMountTime = uint13 !! 11,
sWriteTime = uint13 !! 12,
sMountsCount = ushort6 !! 0, sMaxMountsCount = int (ushort6 !! 1),
sMagic = ushort6 !! 2, sState = ushort6 !! 3,
sErrors = ushort6 !! 4,
sLastCheckTime = uint4 !! 0, sCheckInterval = uint4 !! 1,
sCreatorOS = uint4 !! 2, sRevLevel = (uint4 !! 3, ushort6 !! 5),
sDefaultResUID = ushort2 !! 0, sDefaultResGID = ushort2 !! 1,
sDynRev = Nothing,
sPrealloc = Nothing,
sJournaling = Nothing
}
(this is ext2fs superblock structure).
First of all, I would like to make the binary layout as declarative as possible, abstracting away those ugly sequences of getWord32le in the Get monad with temporary fields. I want something like my Python code, where binary fields are unpacked into a map according to a struct format string.
The second thing I need is a more concise and clearer way of initializing a large Haskell record (creating a lot of temporary named values in the monad is painful).
One way I think of is to read binary fields into a heterogeneous map forall a. BinaryField a => Map String a, accessing each value by its name and pre-extracting most used ones to Haskell.
Perhaps Template Haskell and code generation from a declarative format may help, but I have no experience with TH yet.
I would be glad to hear any suggestions or hints about a more idiomatic way (are there any relevant tools?) or about the code in general (though it's just my learning code and it's very incomplete at the moment).
Answer: A way to avoid those temporary values and (!!) operators, at the cost perhaps of some obscurity, would be something like the following:
import Control.Applicative ((<$>),(<*>),pure)
binGetSuperblock' = do
partiallyAppliedConstructor <- Superblock
<$> getWord32le
<*> getWord32le
<*> getWord32le
<*> getWord32le
<*> getWord32le
<*> getWord32le
<*> getWord32le
<*> getWord32le
<*> getWord32le
<*> getWord32le
<*> getWord32le
<*> getWord32le
<*> getWord32le
<*> getWord16le
<*> (int <$> getWord16le)
<*> getWord16le
<*> getWord16le
outOfPlace16 <- getWord16le
partiallyAppliedConstructor
<$> getWord16le
<*> getWord32le
<*> getWord32le
<*> (OSEnum <$> getWord32le)
<*> ((,) <$> getWord32le <*> pure outOfPlace16)
<*> getWord16le
<*> getWord16le
<*> pure Nothing
<*> pure Nothing
<*> pure Nothing | {
"domain": "codereview.stackexchange",
"id": 2982,
"tags": "haskell"
} |
Why does substitution terminate? | Question: I'm formalizing some properties of lambda calculus in Coq and I have some problems proving termination of substitution. My terms are defined as:
Inductive Term :=
TVar: nat -> Term
| TAbs: nat -> Term -> Term
| TApp: Term -> Term -> Term.
and I have a way of generating fresh variables using a function
fresh: Term -> nat
satisfying:
Lemma fresh_is_fresh:
forall t,
~ FV t (fresh t).
and similarly a function fresh2: Term -> Term -> nat for obtaining a fresh identifier wrt. two terms.
I now want to define a standard notion of capture avoiding substitution:
Fixpoint substitute (t1:Term) (t2:Term) (x:nat): Term :=
match t1 with
| TVar y =>
if beq_nat x y then t2 else TVar y
| TApp t11 t12 =>
TApp (substitute t11 t2 x) (substitute t12 t2 x)
| TAbs y t =>
if beq_nat x y then
TAbs x t
else TAbs (fresh2 t t1) (substitute (substitute t (TVar y) (fresh2 t t1)) t2 x)
end.
Obviously, Coq cannot see that this definition is terminating, so I need to provide a custom measure, so that the code is something like the following:
Program Fixpoint substitute (t1:Term) (t2:Term) (x:nat) {measure ???}: Term :=
match t1 with
| TVar y =>
if beq_nat x y then t2 else TVar y
| TApp t11 t12 =>
TApp (substitute t11 t2 x) (substitute t12 t2 x)
| TAbs y t =>
if beq_nat x y then
TAbs x t
else TAbs (fresh2 t t1) (substitute (substitute t (TVar y) (fresh2 t t1)) t2 x)
end.
Now my question is: What is the correct measure to use here? And how I do use it to prove the obligations which arise using this measure?
Answer: Derek Elkins' comment solved my problem! Here's the solution I came up with:
We create a separate renaming function rename: Term -> nat -> nat -> Term, which assumes that the name to rename with is fresh. This makes it trivially terminating.
Fixpoint rename (t:Term) (x y : nat): Term :=
match t with
| TVar z =>
if beq_nat y z then TVar x
else TVar z
| TApp t1 t2 =>
TApp (rename t1 x y) (rename t2 x y)
| TAbs z t =>
if beq_nat y z then
TAbs z t
else
TAbs z (rename t x y)
end.
We then define a (somewhat arbitrary) measure of a term:
Fixpoint size (t:Term): nat :=
match t with
TVar _ => 1
| TApp t1 t2 => size t1 + size t2
| TAbs x t => 2 + size t
end.
We can then easily prove that renaming preserves this measure:
Lemma rename_preserves_size:
forall t x y,
size t = size (rename t x y).
This lemma is used in the final case of substitution in order to prove that the term rename t y' y has the same size as t, and thus the call substitute (rename t y' y) t2 x happens on a strictly smaller term.
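Putting the pieces together, a natural choice for the measure slot is size t1 (a sketch — the obligations generated by Program still have to be discharged, using rename_preserves_size in the abstraction case):

```coq
Program Fixpoint substitute (t1 t2 : Term) (x : nat) {measure (size t1)} : Term :=
  match t1 with
  | TVar y =>
      if beq_nat x y then t2 else TVar y
  | TApp t11 t12 =>
      TApp (substitute t11 t2 x) (substitute t12 t2 x)
  | TAbs y t =>
      if beq_nat x y then
        TAbs y t
      else
        let y' := fresh2 t t1 in
        TAbs y' (substitute (rename t y' y) t2 x)
  end.
```

The recursive calls in the TApp case act on terms of strictly smaller size, and in the TAbs case size (rename t y' y) = size t < 2 + size t = size (TAbs y t) by the lemma above.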
Now all that remains is to prove that this is correct. | {
"domain": "cs.stackexchange",
"id": 10374,
"tags": "programming-languages, lambda-calculus, semantics, coq, termination"
} |
Which machine learning algorithms can be used for time series forecasts? | Question: Currently I am playing around with time series forecasts (specifically for Forex). I have seen some scientific papers about echo state networks which are applied to Forex forecast. Are there other good machine learning algorithms for this purpose?
It would also be interesting to extract "profitable" patterns from the time series.
Answer: Here are three survey papers that examine the use of machine learning in time series forecasting:
"An Empirical Comparison of Machine Learning Models for Time Series Forecasting" by Ahmed, Atiya, El Gayar, and El-shishiny provides an empirical comparison of several machine learning algorithms, including:
"...multilayer perceptron, Bayesian neural networks, radial basis
functions, generalized regression neural networks (also called kernel
regression), K-nearest neighbor regression, CART regression trees,
support vector regression, and Gaussian processes."
"Financial time series forecasting with machine learning techniques: A survey" by Krollner, Vanstone, and Finnie finds:
"...that artificial neural networks (ANNs) are the dominant machine
learning technique in this area."
"Machine Learning Strategies for Time Series Forecasting" by Bontempi, Ben Taieb, and Le Borgne focuses on three aspects:
"...the formalization of one-step forecasting problems as supervised
learning tasks, the discussion of local learning techniques as an
effective tool for dealing with temporal data, and the role of the
forecasting strategy when we move from one-step to multiple-step
forecasting." | {
"domain": "cs.stackexchange",
"id": 1708,
"tags": "reference-request, machine-learning, artificial-intelligence"
} |
FFMPEG with Java Wrapper | Question: In this java application, I am trying to convert an video into small clips.
Here is the implementation class for the same
package ffmpeg.clip.process;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.file.Paths;
import java.time.Duration;
import java.time.LocalTime;
import java.time.temporal.ChronoUnit;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.stream.Collectors;
import ffmpeg.clip.utils.VideoConstant;
import ffmpeg.clip.utils.VideoUtils;
/*
* @author Nitishkumar Singh
* @Description: class will use ffmpeg to break an source video into clips
*/
public class VideoToClip {
/*
* Prevent from creating instance
*/
private VideoToClip() {
}
/**
* Get Video Duration is milliseconds
*
* @Exception IOException - File does not exist VideoException- Video File have data issues
*/
static LocalTime getDuration(String sourceVideoFile) throws Exception {
if (!Paths.get(sourceVideoFile).toFile().exists())
throw new Exception("File does not exist!!");
Process proc = new ProcessBuilder(VideoConstant.SHELL, VideoConstant.SHELL_COMMAND_STRING_ARGUMENT,
String.format(VideoConstant.DURATION_COMMAND, sourceVideoFile)).start();
boolean errorOccured = (new BufferedReader(new InputStreamReader(proc.getErrorStream())).lines()
.count() > VideoConstant.ZERO);
String durationInSeconds = new BufferedReader(new InputStreamReader(proc.getInputStream())).lines()
.collect(Collectors.joining(System.lineSeparator()));
proc.destroy();
if (errorOccured || (durationInSeconds.length() == VideoConstant.ZERO))
throw new Exception("Video File have some issues!");
else
return VideoUtils.parseHourMinuteSecondMillisecondFormat(durationInSeconds);
}
/**
* Create Clips for Video Using Start and End Second
*
* @Exception IOException - Clip Creation Process Failed InterruptedException - Clip Creation task get's failed
*/
static String toClipProcess(String sourceVideo, String outputDirectory, LocalTime start, LocalTime end,
String fileExtension) throws IOException, InterruptedException, ExecutionException {
String clipName = String.format(VideoConstant.CLIP_FILE_NAME,
VideoUtils.getHourMinuteSecondMillisecondFormat(start),
VideoUtils.getHourMinuteSecondMillisecondFormat(end), fileExtension);
String command = String.format(VideoConstant.FFMPEG_OUTPUT_COMMAND, sourceVideo,
VideoUtils.getHourMinuteSecondMillisecondFormat(start),
VideoUtils.getHourMinuteSecondMillisecondFormat(end.minus(start.toNanoOfDay(), ChronoUnit.NANOS)),
outputDirectory, clipName);
LocalTime startTime = LocalTime.now();
System.out.println("Clip Name: " + clipName);
System.out.println("FFMPEG Process Execution Started");
CompletableFuture<Process> completableFuture = CompletableFuture.supplyAsync(() -> {
try {
return executeProcess(command);
} catch (InterruptedException | IOException ex) {
throw new RuntimeException(ex);
}
});
completableFuture.get();
// remove
LocalTime endTime = LocalTime.now();
System.out.println("Clip Name: " + clipName);
System.out.println("FFMPEG Process Execution Finished");
System.out.println("Duration: " + Duration.between(startTime, endTime).toMillis() / 1000);
return clipName;
}
/**
* Create and Execute Process for each command
*/
static Process executeProcess(String command) throws InterruptedException, IOException {
Process clipProcess = Runtime.getRuntime().exec(command);
clipProcess.waitFor();
return clipProcess;
}
}
The entire solution is available on GitHub. I am actually using CompletableFuture and running the FFMPEG command by creating a Java Process. The time it takes is too long: for a 40-minute video, it takes more than 49 minutes on a 64-CPU machine. I am trying to reduce the core count to 8 or so, as well as improve its performance, as this kind of performance won't be acceptable for any kind of application.
22-jan-2017 update
One update: I have changed the FFMPEG command used to create clips and updated to FFMPEG 3, but there is no improvement.
ffmpeg -y -i INPUT_FILE_PATH -ss TIME_STAMP -t DURATION_TO_CLIP OUTPUT_FILE_PATH
Answer: That is a natural restriction of video encoding. On modern machines, 1 minute of 720p video is encoded in approximately 1 minute.
You can save a lot of time if you do not need re-encoding (i.e. changing the codec or video size) by using the -codec copy ffmpeg option.
Also, you said you have 64 cores, but your code uses only 1 thread for encoding. Use -threads 0 to allow ffmpeg to choose by itself.
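For instance, combining both suggestions with the command from the question (a hypothetical invocation; the capitalized names are the same placeholders as above):

```shell
# Remux instead of re-encode (-codec copy) and let ffmpeg pick the
# thread count (-threads 0). Stream copy cannot change codec or size,
# but for pure clipping it typically finishes in seconds.
ffmpeg -y -i INPUT_FILE_PATH -ss TIME_STAMP -t DURATION_TO_CLIP -codec copy -threads 0 OUTPUT_FILE_PATH
```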
Also, if you need to perform this in Java - give Jaffree a chance (I'm an author). | {
"domain": "codereview.stackexchange",
"id": 32883,
"tags": "java, performance, multithreading, child-process, video"
} |
Why do we need insulation material between two walls? | Question: Consider a slab made of two walls separated by air. Why do we need insulation material between the two walls? Air's thermal conductivity is lower than that of most insulating materials, and convection cannot be an issue in the enclosed volume: hot air rises, so what? It won't go any further than the top of the cavity.
Answer: You can think of thermal conductivity as a measure of how readily heat will flow through the material while it is stationary. The low thermal conductivity of air means that it takes a long time for heat to diffuse through an air pocket.
If the air is permitted to move, however, this intuition goes out the window. The air in contact with one wall gets warm and rises, and the resulting circulation causes it to be brought into contact with the other wall. In this way, the heat doesn't need to diffuse through the air, as it's being transported by bulk air flow.
Insulating materials such as blown fiberglass (or a wool sweater) are good insulators precisely because they trap many small pockets of air, which shuts down convection and forces the heat to flow diffusively. Once there's no convection, the low thermal conductivity of the air pockets makes the material a good insulator. You're right that the thermal conductivity of the trapping material is usually higher than the thermal conductivity of the air itself, but that's the (fairly modest) price we have to pay for killing the convection. | {
"domain": "physics.stackexchange",
"id": 84447,
"tags": "thermodynamics, thermal-radiation, thermal-conductivity"
} |
Coriolis effect when walking along equator | Question: Okay, I'm thinking about a problem like this:
Suppose we walk along the equator; let's choose the east direction (for example). Let our walking speed be $v$. Then, in the coordinate system that follows Earth's rotation, we expect a Coriolis acceleration to act upon us.
We know that the Coriolis acceleration can be calculated as $-2\,\vec{\omega} \times \vec{v}$. Choosing our rotating coordinate system such that $x$ points east and $z$ points radially outward from the center of the Earth, then, since Earth's rotation vector points in the positive $y$ direction, the cross product shows us that the Coriolis acceleration points in the $z$-direction. But how can this be the case? Shouldn't we expect it to point upwards (the $y$-direction), since we would experience a higher angular velocity by walking along the equator?
Thanks.
Answer: Walking along the equator, the Coriolis force pushes neither left nor right, but instead up or down (from the walker's perspective), which is exactly the $z$ direction you defined.
You can make sense of this like so: If you were to walk straight ahead along the equator (let's stick with east as in your example), you are orbiting the center of rotation faster than someone standing still. This increases your centrifugal force, which you can interpret as an apparent radial force outwards that appears in addition to the centrifugal force you would experience if you were standing still.
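In formulas (a sketch): seen from the inertial frame, a walker moving east with speed $v$ circles the Earth's axis of radius $R$ at speed $\omega R + v$, so the apparent outward acceleration in the rotating frame is

```latex
\frac{(\omega R + v)^2}{R}
  \;=\; \underbrace{\omega^{2} R}_{\text{centrifugal at rest}}
  \;+\; \underbrace{2\,\omega v}_{\text{Coriolis}}
  \;+\; \underbrace{\frac{v^{2}}{R}}_{\text{negligible for walking}}
```

The middle term has exactly the magnitude $|{-2}\,\vec{\omega}\times\vec{v}| = 2\omega v$ and points along the local $+z$ (radially outward), in agreement with the cross product.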
If you were to walk due west, you would see the opposite effect: you orbit slower, making the centrifugal force weaker, which you can also frame as an apparent force pointing radially inward, countering part of the centrifugal force you would expect if you were standing still. | {
"domain": "physics.stackexchange",
"id": 87754,
"tags": "coriolis-effect"
} |
Keeping track of the tennis score | Question: I'm trying to get hired by some company and they gave me a simple remote test to take at home so they can judge my coding skills.
It's a really simple problem based on the website HackerRank. It's a mini game where you have to keep track of the score of a tennis game (0-15-30-40-ADV).
This is really basic and I managed to do it without any problems. But I believe the more important thing is not whether it works, but rather how you did it and whether you coded like a professional.
What could I improve in my code to actually code like a professional?
//======================================
// Author: Philippe Balleydier
// Last Update : 08/09/16
// Object : Little algorithm able to keep track of the tennis score
//======================================
#include <iostream>
#include <stdlib.h>
#include <time.h>
#include <string>
using namespace std;
// score map for printing the score
const string Score[] = {"0", "15", "30", "40", "ADV"};
//======================================
// Class Game: contains all the informations and methods about the game being played
//======================================
class Game
{
// see below for explanations
public:
Game();
bool createPlayer(int playerId, string playerName);
string getName(int playerId);
bool hasADV(int playerId);
bool setScore(int playerId, int score);
int getIntScore(int playerId);
string getStringScore(int playerId);
private:
string players[2]; // players' name
int scores[2]; // players' score
};
//--------------------------------------
// Constructor : init score to 0
//--------------------------------------
Game::Game()
{
scores[0] = 0;
scores[1] = 0;
}
//--------------------------------------
// createPlayer : create a player depending on its ID
//--------------------------------------
bool Game::createPlayer(int playerId, string playerName)
{
if (playerId == 0 || playerId == 1)
{
players[playerId] = playerName;
return true;
}
else
return false;
}
//--------------------------------------
// getName : return the name of player "playerId" as a string
//--------------------------------------
string Game::getName(int playerId)
{
if (playerId == 0 || playerId == 1)
return players[playerId];
else
return "N/A";
}
//--------------------------------------
// getName : return true if the player's score is ADV, else false
//--------------------------------------
bool Game::hasADV(int playerId)
{
if (scores[playerId]==4)
return true;
else
return false;
}
//--------------------------------------
// setScore : change the score of player "playerId"
//--------------------------------------
bool Game::setScore(int playerId, int score)
{
if (score < 5)
scores[playerId]=score;
else
return false;
}
//--------------------------------------
// getIntScore : return the score of player "playerId" as a int (0, 1, 2, 3, 4)
//--------------------------------------
int Game::getIntScore(int playerId)
{
return scores[playerId];
}
//--------------------------------------
// getStringScore : return the score of player "playerId" as a string (0, 15, 30, 40, ADV)
//--------------------------------------
string Game::getStringScore(int playerId)
{
return Score[scores[playerId]];
}
//=======================================
int main()
{
// rand init
srand(time(NULL));
// setting up the game
Game myGame;
myGame.createPlayer(0, "John");
myGame.createPlayer(1, "Paul");
//starting game loop
bool stop = false;
while(!stop)
{
// identifying winner and looser
int pointWinner = rand()%2;
int pointLooser=(pointWinner+1)%2;
cout << "The winner of the point is " << myGame.getName(pointWinner) << endl;
// The pointWinner has advantage => win
if (myGame.hasADV(pointWinner))
{
stop = true;
cout << "Winner is " << myGame.getName(pointWinner) << endl;
} // The opponent has advantage => tie (40-40)
else if (myGame.hasADV(pointLooser))
myGame.setScore(pointLooser, 3);
// Tie (40-40) => pointWinner gets advantage
else if (myGame.getIntScore(pointWinner)==3 && myGame.getIntScore(pointLooser) == 3)
myGame.setScore(pointWinner, 4);
//pointWinner is 1 point ahead of pointLooser and >=30
else if (myGame.getIntScore(pointWinner)>myGame.getIntScore(pointLooser) && myGame.getIntScore(pointWinner)>=2)
{
stop = true;
cout << "Winner is " << myGame.getName(pointWinner) << endl;
}
else // nothing special, pointWinner marks the point
myGame.setScore(pointWinner, myGame.getIntScore(pointWinner)+1);
// printing score
if (!stop)
cout << "Score : " << myGame.getName(0) << " " << myGame.getStringScore(0) << " - " << myGame.getStringScore(1) << " " << myGame.getName(1) << endl;
}
return 0;
}
Just so you know, it has to be a 1-page .cpp file, so I can't really split the Game class into different .h and .cpp files.
Answer: Overall
You missed the point.
If you are supposed to create a class to track the score of game you did it wrong. Most of the logic that should be inside the class has been left inside main.
If you see the pattern:
Get info from object.
Manipulate data.
Update state of object.
This usually means that you should have a function called Manipulate() (where Manipulate is the action you are doing).
I would expect the logic running the game to be:
while(game)
{
game.winPoint(rand() % 2);
std::cout << game << "\n";
}
std::cout << "Winner was: " << game.winner() << "\n";
Thus your interface to the class should be:
class Game
{
public:
// returns true if the game is still being played
bool operator() const;
// A player won a point (updates the state.
void winPoint(int playerId);
// return a reference to the winning player.
// If the game is not finished calling this is UB.
Player& winner() const;
// Print the current state of the game.
void print(std::ostream& str) const;
friend std::ostream& operator<<(std::ostream& s, Game const& g)
{
g.print(s);
return s;
}
};
Code Review
Comments.
Comments should be meaningful.
A bad comment is worse than no comment. This is because like code, comments need to be maintained. If the comments and code fall out of sync then what does a maintainer do? Fix the code or fix the comment?
So comments should be for describing why (not how).
The first two lines are OK.
//======================================
// Author: Philippe Balleydier
// Last Update : 08/09/16
// Object : Little algorithm able to keep track of the tennis score
//======================================
The trouble is the third line is not accurate. It's not an algorithm it's a class.
This comment on the other hand is completely useless. It tells me nothing that I don't already know.
//======================================
// Class Game: contains all the informations and methods about the game being played
//======================================
Header files.
These are C header files.
#include <stdlib.h>
#include <time.h>
There are C++ versions of these header files. The C++ versions put all the appropriate declarations in the namespace std.
#include <cstdlib>
#include <ctime>
Never Do this.
using namespace std;
If you had read any other C++ review on this board it would have told you not to do this. See: Why is “using namespace std” in C++ considered bad practice?.
There is a reason that the standard libraries are called std and not standard. It's to make prefixing them easy.
This should be a private member of the class.
// score map for printing the score
const string Score[] = {"0", "15", "30", "40", "ADV"};
Better indenting
Nice indenting is critical to coding. The compiler does not care. But the code will have to be read by humans long after you abandon this code to some poor maintainer. So please make it easy to read.
class Game
{
// see below for explanations
public:
=> Game();
=> bool createPlayer(int playerId, string playerName);
I want to see that those methods are inside the public region of the class.
Constructor
Is that comment telling me something I cannot read in the code? Remove the useless comment.
//--------------------------------------
// Constructor : init score to 0
//--------------------------------------
Game::Game()
{
scores[0] = 0;
scores[1] = 0;
}
The problem is that you have not set up the class completely. You have just created an empty object. You require the user of your class to make two further calls to set up the players before the class is ready to use. This means I can accidentally start using the class before it is completely set up.
I would have done it like this:
Player player1("Loki");
Player player2("Thor");
Game game(player1, player2);
// Game is now in a valid state and ready to go.
Use exceptions when you have an error
bool Game::createPlayer(int playerId, string playerName)
OK, so I fail to add a player. Now what? The fact that you have an error condition probably means that there is a serious flaw in your code (the code that is calling the class), so your application should probably fail fast rather than try to continue.
if (!myGame.createPlayer(2, "Bob"))
{
// Failed to create player.
// what can I do here?
}
Also, you return an error code but fail even to try to check it in your main application. That is a serious flaw (especially in interview code).
myGame.createPlayer(0, "John"); // returns an unchecked bool !!!!!
myGame.createPlayer(1, "Paul");
Getter/Setter Bad practice
It means you are leaking implementation details of the code. Prefer to use action methods that internally mutate the state of the object.
Note: Getter/Setter are fine on object bags; but not on classes.
Aside: Another bad comment (on each function). The comment is unneeded because the function is well named, so well that I can see exactly what it is doing without the comment.
//--------------------------------------
// getName : return the name of player "playerId" as a string
//--------------------------------------
string Game::getName(int playerId)
bool Game::hasADV(int playerId)
bool Game::setScore(int playerId, int score)
int Game::getIntScore(int playerId)
string Game::getStringScore(int playerId)
Use the new random number facilities
// rand init
srand(time(NULL));
Sure it works. For this kind of example it is also fine. But this is an interview question. You want to show that you are up-to-date with modern C++, so use the new <random> facilities.
Main
main() is special. If you don't return a value from main then the compiler will plant return 0; for you. So don't add a return to main.
I use the fact that if there is a return 0; in my code, it indicates that at another point in main there is a return that is not zero. | {
"domain": "codereview.stackexchange",
"id": 21901,
"tags": "c++, interview-questions"
} |
Text similarity with sentence embeddings | Question: I'm trying to calculate similarity between texts of various lengths. My current approach is the following:
Using Universal Sentence Encoder, I convert text to a set of vectors.
I average these vectors to create the final feature vector.
I compare feature vectors using cosine similarity.
This gives me pretty good results for texts of roughly the same size, but I was wondering if there is a better approach for step #2 if the texts have different lengths.
Answer: One approach is using Word Mover’s Distance (WMD). WMD is an algorithm for finding the distance between texts of different lengths, where each word is represented as a word embedding vector.
The WMD distance measures the dissimilarity between two text documents as the minimum amount of distance that the embedded words of one document need to "travel" to reach the embedded words of another document.
For example, see the illustration in the "From Word Embeddings To Document Distances" paper (figure omitted here).
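A toy sketch of the transport idea (hypothetical 2-D embeddings with made-up values, and brute-force one-to-one matching instead of the real optimal-transport solver, so it only handles equal-length documents):

```python
from itertools import permutations
import math

# Made-up 2-D "embeddings"; similar words are placed close together.
emb = {
    "obama":    (1.0, 1.0), "president": (1.2, 0.9),
    "speaks":   (0.0, 2.0), "greets":    (0.1, 2.1),
    "media":    (2.0, 0.0), "press":     (2.1, 0.2),
    "illinois": (3.0, 1.0), "chicago":   (3.1, 1.1),
}

def dist(a, b):
    return math.dist(emb[a], emb[b])

def toy_wmd(doc1, doc2):
    """Minimum total distance over all one-to-one word matchings."""
    return min(sum(dist(w, m) for w, m in zip(doc1, perm))
               for perm in permutations(doc2))

d = toy_wmd(["obama", "speaks", "media", "illinois"],
            ["president", "greets", "press", "chicago"])
print(round(d, 3))  # → 0.73: small, because each word has a close counterpart
```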
WMD can be modified to Sentence Mover’s Distance, comparing how far apart different sentence embeddings are to each other. | {
"domain": "datascience.stackexchange",
"id": 6068,
"tags": "word-embeddings, similarity, similar-documents"
} |
Does friction do work or dissipate heat? | Question: I know there are a bunch of similar questions but I read through them all and they don't answer my question.
Let's say I give a box on a floor an initial "kick" of force such that it has kinetic energy $KE$. Due to friction between the box and the floor, the box will slide to a halt. This means the friction must supply work equal and opposite to the object's kinetic energy: $W = -KE$.
However, we know that friction is an irreversible process. This means there is an entropy increase $\Delta S > 0$. But according to the classical definition of entropy, $\Delta S = \frac{Q}{T}$. Since work does not appear in this equation, this would imply there had to be a heat transfer at some point, but where? Is the friction also generating heat?
Answer: 1) Work is done by the friction forces until the box stops.
2) The box's kinetic energy is transformed into increased temperature (internal energy) of the sliding surfaces.
3) The cooling to the neighbourhood is an irreversible process, increasing entropy. | {
"domain": "physics.stackexchange",
"id": 64567,
"tags": "thermodynamics, work, friction, entropy, dissipation"
} |
Easily removable material that sticks to skin | Question: I am looking for some kind of material that can be used to easily latch something to someone's finger. This is for a first year engineering project. The idea is to use this material to latch small motors on someone's fingers in the same way that the motor in this picture is attached. The person should be able to move their fingers around without the motor falling off and the motor should be removable with a slightly strong pull. Also the material should create as small a barrier between the finger and motor as possible so that when the motor vibrates, it can easily be felt by the finger.
I'm thinking more along the lines of a sticky polymer sheet that readily sticks to the skin, rather than some sort of adhesive or fluid glue, but I'm open to hearing all ideas.
Are there polymer materials out there (or particular functional groups) that match this description?
Answer: I have an engineering solution for you that bypasses the need to find a chemistry solution.
I'm thinking more along the lines of a sticky polymer sheet that readily sticks to the skin
You want an adhesive bandage. Adhesive bandages are thin, stick to skin, and are easy to remove. In my experience, the less expensive generic brands are both thinner and stickier, so go with a generic brand. If you are worried about the thickness, do not use the central part with the embedded gauze pad. Only use the wings. Use a strong glue to fix your motor to the back of the adhesive wing. | {
"domain": "chemistry.stackexchange",
"id": 1655,
"tags": "polymers, materials"
} |
Should the work done on a rigid body be calculated with respect to the point of application, or the center of mass? | Question: I'm trying to understand work calculation for a force acting on a rigid body (as opposed to a point particle).
My understanding is that the same basic principle applies: the work done by force F on body B is the line integral of F over the curve C which was travelled by some point P.
My question is, which point is P? Do we integrate over the curve travelled by the point which the force is acting upon at a given moment (if so, what if this point changes over time)? Or do we integrate over the curve travelled by the center of mass of the object?
I have looked online and found different resources which seem to suggest both approaches respectively. For example: Work done on rigid body vs particle Here the latter is mentioned, and elsewhere the former approach is mentioned. Please explain the simplest or best strategy for such problems.
Clarification:
I'm mainly looking to understand how to calculate the work done by a specific force. I understand that that total work will then be the sum of all of them.
Answer:
For example: Work done on rigid body vs particle Here the latter is mentioned, and elsewhere the former approach is mentioned.
In user258881's answer to the linked question there are two terms:
$$W=\underbrace{\int \mathbf F\cdot \mathrm d \mathbf r}_{\text{work done by the force}}+\underbrace{\int \boldsymbol{\tau}\cdot \mathrm d \boldsymbol{\theta}}_{\text{work done by the torque}}$$
The "work done by the force" isn't well-named. This term accounts for the work that goes into increasing the translational kinetic energy of the body. But it doesn't really account for all the work done by the applied force. It only accounts for the component of the force that acts through the center of mass.
The "work done by the torque" accounts for the work that goes into increasing the rotational kinetic energy of the body. This accounts for the component of force acting perpendicular to the vector between the point of application and the center of mass.
If you sum these up, you get the total work done by the force. This is the same as your first method of integrating over the curve travelled by the point of application of the force.
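This bookkeeping is easy to check numerically. A sketch (the body, force, and offset values are illustrative, not from the answer): integrate a planar rigid body under one constant force and compare the work computed along the path of the application point with the translational and rotational pieces.

```python
import math

# Planar rigid body: mass m, moment of inertia I about the center of mass.
m, I = 1.0, 0.5
# Constant force along +y, applied at the material point with body-frame
# offset (0.5, 0) from the center of mass (all values illustrative).
Fx, Fy = 0.0, 1.0
r0x, r0y = 0.5, 0.0

vx = vy = th = w = 0.0
W_point = W_trans = W_rot = 0.0
dt, steps = 1e-4, 20000            # integrate for 2 seconds

for _ in range(steps):
    # Offset of the application point, rotated into the world frame
    rx = r0x * math.cos(th) - r0y * math.sin(th)
    ry = r0x * math.sin(th) + r0y * math.cos(th)
    tau = rx * Fy - ry * Fx                  # torque about the CM
    # Velocity of the application point: v_cm + omega x r
    vpx, vpy = vx - w * ry, vy + w * rx
    W_point += (Fx * vpx + Fy * vpy) * dt    # work along the point's path
    W_trans += (Fx * vx + Fy * vy) * dt      # "force through the CM" term
    W_rot   += tau * w * dt                  # torque term
    # Explicit Euler step
    th += w * dt
    vx += Fx / m * dt
    vy += Fy / m * dt
    w  += tau / I * dt

KE_trans = 0.5 * m * (vx**2 + vy**2)
KE_rot   = 0.5 * I * w**2
```

Up to integrator error, `W_trans` matches the translational kinetic energy and `W_rot` the rotational one, while their sum reproduces the integral of F along the application point's path, so the two accountings agree.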
The issue is that "the force" and "the torque", as named by user258881, aren't two separate things. In your conceptualization of the problem, they are both components of the applied force. The main difference is that you are imagining a body floating in space with only one force acting on it, so you can identify "the point of application of the force" as a single point. User258881 was considering a body acted on by multiple forces, so there is no single "point of application" and it is more convenient to first separate each force into a translational component and a torque component, and sum them up separately before integrating to find the total work done. | {
"domain": "physics.stackexchange",
"id": 100320,
"tags": "classical-mechanics, work"
} |
Potential function - numerical simulation | Question: Using MATLAB, I fixed the potential in a region inside a rectangular plate (100 V) and in the border (50 V). I got the following result of the potential along the plate:
I can't find an intuitive explanation why the potential would decrease to values below the least potential (50 V).
The figure above is wrong because the number of iterations was not enough (200). Here is the same plate with 1000 iterations:
Answer: If you're trying to simulate a 2D solution of the Laplace equation (which is the only unambiguous reading of your post as currently stated; if that's not what you're doing then you should clarify your question with exactly what it is you're doing and how), then your code is wrong.
The reason is that your results don't obey the maximum principle: a harmonic function cannot have any local maxima or minima except at the boundaries.
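The maximum principle is easy to demonstrate with a minimal Jacobi relaxation (grid size, fixed region, and iteration count are illustrative): each update is an average of neighbours, so with the border held at 50 V and an interior block held at 100 V, no node can ever leave the interval [50, 100], at any iteration count.

```python
# Jacobi relaxation of the 2D Laplace equation with fixed (Dirichlet) nodes.
N = 30
fixed = {}
for i in range(N):
    for j in range(N):
        if i in (0, N - 1) or j in (0, N - 1):
            fixed[(i, j)] = 50.0            # border of the plate
for i in range(12, 18):
    for j in range(12, 18):
        fixed[(i, j)] = 100.0               # interior region held at 100 V

v = [[fixed.get((i, j), 50.0) for j in range(N)] for i in range(N)]
for _ in range(300):
    new = [row[:] for row in v]
    for i in range(1, N - 1):
        for j in range(1, N - 1):
            if (i, j) not in fixed:
                new[i][j] = 0.25 * (v[i-1][j] + v[i+1][j] + v[i][j-1] + v[i][j+1])
    v = new

lo = min(min(row) for row in v)
hi = max(max(row) for row in v)
```

Any solver output dipping below 50 V therefore cannot come from a correct relaxation of the Laplace equation.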
With things as they are, I would put my money on there being a bug in your code. (Note, however, that this is not really the place to ask people to help you debug it. Depending on how you phrase it, Computational Science may be the place or not.) | {
"domain": "physics.stackexchange",
"id": 32504,
"tags": "homework-and-exercises, electrostatics, potential, computational-physics, software"
} |
At what point is a gene different between species that we call it a different gene? | Question: I am new to genetics and I know humans share many genes with mice for instance but that there are slight differences in conserved nucleotide sequences. Is there a community consensus around at what point a gene in one species is a different gene? How are regulatory elements like enhancers factored into this definition?
Answer: From an evolutionary angle, you are describing an ortholog: "Orthologs are genes in different species that evolved from a common ancestral gene by speciation, and, in general, orthologs retain the same function during the course of evolution." quoting here. A lot of effort has been made to construct ortholog databases (e.g. see quest for orthologs). They are distinguished from paralogs (approximating duplicated genes) and lateral gene transfer events; there are, however, no magic cutoffs for defining orthology.
Your 2nd point about regulatory elements, well they could fit in the same framework, any sequence can be clustered. What is a gene anyway? Is a noncoding RNA a gene? How do we go from a collection of transcripts to this unit we term gene? There is some ambiguity... | {
"domain": "bioinformatics.stackexchange",
"id": 2002,
"tags": "phylogenetics, genomics"
} |
How can I calculate flow rate and velocity of fluid at the exit of a garden storage tank? | Question: $$P_1+\frac12\rho V_1^2+\rho gh_1=P_2+\frac12\rho V_2^2+\rho gh_2$$
$$V=Qt, \; Q=A\bar v$$
I have a tank of water in the garden and I am assuming it is full when I start watering. I'm using a couple different equations to see how quickly and how much water is flowing out of a fire hose to water trees. The hose itself is made out of polyester and it is 100 ft. long. When I open the valve, the hose doesn't become completely turgid, and so I'm thinking that there is some energy lost because of flow in the hose and the pressure necessary to keep the hose from collapsing. The flow isn't very great and so I'd like to see what I'm dealing with here so I can buy a smaller hose.
The hose right now is 2 in. (0.05 m) in diameter and the tank is 2 m high. I did some calculations to get the flow rate using the above equations, but I ended up with 6 m/s for velocity and 48 L/s for the flow rate, which is a much higher rate than what I am actually experiencing. I am calculating the velocity as
$$V_2=\sqrt{2gh},$$
and I derived this by assuming that
$$h_2 = 0,$$
$$P_1 - P_2 = 0, \; \text{and}$$
$$V_1 = 0,$$
but it doesn't seem quite right.
Answer: I would simply modify Bernoulli's equation for an incompressible fluid
$$ p_i + \rho g h_i + \frac{\rho u_i^2}{2} = const$$
to account for pressure loss terms $\Delta p_j$ corresponding to pipe friction and maybe also entrance losses and losses from a couple of 45° elbow bends due to a bent pipe
$$ p_1 + \rho g h_1 + \frac{\rho u_1^2}{2} = p_2 + \rho g h_2 + \frac{\rho u_2^2}{2} + \underbrace{\sum\limits_j \Delta p_j}_\text{Losses}.$$
where in your case $p_1 = p_2 = p_0$ is the atmospheric pressure and $h_1 = h$ and $h_2 = 0$. You can find corresponding empirical K-values for the corresponding losses in the literature. Likely the cross-section of the tank $A_1$ is significantly bigger than the pipe $A_2$ and therefore you might assume that $u_1 \approx 0$, else you might use the continuity equation for an incompressible fluid
$$A_1 u_1 = A_2 u_2$$
to determine the correlation between the speeds in the two cross-sections.
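The procedure, including the pipe-friction loss quantified next, can be gathered into a short fixed-point iteration. A sketch (the Blasius smooth-pipe correlation $f = 0.316\,Re_D^{-1/4}$ is my own assumption, since the answer leaves the friction factor to empirical values from the literature; $u_1 \approx 0$ is also assumed):

```python
import math

g, nu = 9.81, 1.0e-6        # gravity [m/s^2], kinematic viscosity of water [m^2/s]
h = 2.0                     # water level above the outlet [m]
D, L = 0.05, 30.5           # hose diameter [m] and length (100 ft ~ 30.5 m)

# Solve  g*h = (u^2/2) * (1 + f*L/D)  for u, with f = 0.316 * Re**-0.25
# (Blasius smooth-pipe correlation, an assumption) and u_1 ~ 0.
u = math.sqrt(2 * g * h)    # lossless Torricelli speed as the starting guess
for _ in range(50):
    Re = u * D / nu
    f = 0.316 * Re ** -0.25
    u_new = math.sqrt(2 * g * h / (1 + f * L / D))
    if abs(u_new - u) < 1e-9:
        u = u_new
        break
    u = u_new

Q = math.pi * (D / 2) ** 2 * u   # volumetric flow rate [m^3/s]
```

With these illustrative numbers the outlet speed settles near $1.8\ \mathrm{m/s}$ instead of the lossless $6.3\ \mathrm{m/s}$, with a correspondingly smaller flow rate.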
Considering only turbulent flow in a straight pipe with the corresponding pipe friction, this pressure drop term is given by
$$\Delta p = \underbrace{f \frac{L}{D}}_K \frac{\rho u_2^2}{2}$$
where the friction factor $f$ depends on the surface roughness and on the Reynolds number inside the pipe with diameter $D$ and length $L$, for a fluid with kinematic viscosity $\nu$ (for water $10^{-6} \frac{m^2}{s}$):
$$Re_D := \frac{u_2 D}{\nu}.$$
The equation
$$ g h + \left( \frac{A_2}{A_1} \right)^2 \frac{u_2^2}{2} = \frac{u_2^2}{2} + f \frac{L}{D} \frac{u_2^2}{2}$$
then has to be solved iteratively for $u_2$. The second term again may be neglected if $A_1 \gg A_2$. | {
"domain": "physics.stackexchange",
"id": 65608,
"tags": "classical-mechanics, fluid-dynamics, flow, bernoulli-equation"
} |
How to use gazebo bumper plugin to get contact state? | Question:
I'm trying to pull out some data from the contact point of an object, so I chose to use the Gazebo Ros Bumper plugin. I call the plugin in the sensor section of my xml file. But now I don't know how to see my data displayed.
Originally posted by tommy on ROS Answers with karma: 29 on 2012-10-12
Post score: 1
Answer:
According to the wiki, GazeboRosBumper plugin provides contact feedback via ContactsState message. So you will need to subscribe to the topic that is publishing that message.
rostopic (list, info, type...) is your friend.
More generally, you should search on answers.ros (and also pay attention to tags).
When I select the "bumper" tag (or include it as an "Interesting tag" on the right menu), one of the questions that is listed is "How do I use force sensor/ bumper sensor in Gazebo" and "Reading bumper state from PR2"
Originally posted by SL Remy with karma: 2022 on 2012-10-13
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 11350,
"tags": "ros"
} |
Why does tetrachloromethane have a higher boiling point than trichloromethane? | Question: London dispersion forces (LDF) are present in all molecules, whether polar or non-polar. Molecules also exhibiting dipole-dipole interactions (in addition to the LDF) must have stronger forces of attraction than those molecules which exhibit only LDF.
Then, why does tetrachloromethane (carbon tetrachloride), which is a non-polar molecule exhibiting only London dispersion forces, have a higher boiling point ($\pu{77 ^\circ C}$) than trichloromethane (chloroform) ($\pu{61 ^\circ C}$) which is a polar molecule, exhibiting dipole-dipole interactions?
Answer: You also need to account for the difference in dispersion forces between the two molecules. Chlorine is much larger than hydrogen. Therefore tetrachloromethane has a larger molecular surface area which increases the intermolecular interaction strength. In this particular case, it outweighs the weak dipole interactions present in trichloromethane. | {
"domain": "chemistry.stackexchange",
"id": 16449,
"tags": "halides, intermolecular-forces, boiling-point"
} |
Variable Capturing With Repetition of Variable Name | Question: I am very confused as to which variables are captured by which λ in the example below:
(λa.λb.(λa.a)aba)(ab)
I am new to lambda calculus and the repetition of variables makes this example hard for me to understand and reduce. Any help would be appreciated!
Answer: If you make the construction tree of the expression, a variable $x$ of a leaf refers to that $\lambda x$ which is closest to $x$ in the path from $x$ to the root.
Another way to see it is with the use of scoping rules. The scope of each $\lambda$ is the body of the $\lambda$.
So, the scope of the inner-most $\lambda a$ is only $a$. This means that any occurrence of $a$ outside that parenthesis refers to a different variable than the $a$ inside the parenthesis (point 1).
The scope of the $\lambda b$ is $(λa.a)aba$, so similarly any $b$ outside this expression refers to a different variable (that just happens to have the same name).
The scope of the outermost $\lambda a$ is $\lambda b.(\lambda a.a)aba$. Here things are a bit more complex, because there's another $\lambda a$ inside. But according to point (1) the $a$ in $\lambda a.a$ is different than the $a$'s in the rest of the body. So, only the $a$'s in $aba$ refer to the outermost $\lambda a$.
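These scoping rules are mechanical enough to check in code. A small sketch (the term encoding is mine): terms as tagged tuples, with free variables computed by the standard recursion.

```python
# Terms: ('var', x) | ('lam', x, body) | ('app', fun, arg)
def fv(t):
    """Set of free variables of a term."""
    if t[0] == 'var':
        return {t[1]}
    if t[0] == 'lam':
        return fv(t[2]) - {t[1]}          # the binder removes its own variable
    return fv(t[1]) | fv(t[2])

def app(*ts):
    """Left-associative application: app(f, x, y) = ((f x) y)."""
    out = ts[0]
    for t in ts[1:]:
        out = ('app', out, t)
    return out

a, b = ('var', 'a'), ('var', 'b')
inner = ('lam', 'a', a)                            # the (λa.a) in the middle
lam_b = ('lam', 'b', app(inner, a, b, a))          # λb.(λa.a)aba
abstraction = ('lam', 'a', lam_b)                  # λa.λb.(λa.a)aba
expr = ('app', abstraction, ('app', a, b))         # (λa.λb.(λa.a)aba)(ab)
```

This confirms the scoping analysis: the abstraction is closed (the inner λa.a contributes nothing free, and the a's and b in aba are caught by the outer binders), while the a and b of the argument (ab) remain free in the whole expression.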
Note that there are no lambda's to bind the $a$ and $b$ in subexpression $(ab)$ (on the right of the expression). Therefore, the $a$ and $b$ in $(ab)$ occur free. | {
"domain": "cs.stackexchange",
"id": 13601,
"tags": "lambda-calculus"
} |
nav2: go exactly to this point, not approximately | Question:
When I provide a goal pose to the nav2 system on my robot, it tries its best to get within the goal tolerance of the goal, then stops. I find this behavior to be unintuitive: I would expect that the robot tries its best to get to the actual goal, then accepts any result within the tolerance, but instead, it goes to the edge of the tolerance and is happy with that result. This means that if I give an xy tolerance of 1.0, it will always stop 1.0 meters from the goal. But then, if I set the xy goal tolerance too small, it oscillates around the target. Is this the intended behavior? How do I tell the nav2 system to generate a path to EXACTLY the goal, not just to any point within the tolerance? Using the pure pursuit controller if it makes a difference.
Originally posted by Per Edwardsson on ROS Answers with karma: 501 on 2023-03-14
Post score: 0
Answer:
Sounds like a very reasonable potential contribution to change the behavior. However, it's hard to then define when "done" is "done" without causing the exact same oscillation issues as you describe for setting very low tolerances. Basically: we would need a proposal for a metric for how to say, after we're within the tolerance, when we're completed to a satisfactory refined level.
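For reference, the tolerances under discussion are configured on the goal checker of the controller server. A sketch of the relevant parameters (plugin and parameter names as in standard Nav2 releases, values illustrative; the exact layout varies a little between Nav2 versions):

```yaml
controller_server:
  ros__parameters:
    goal_checker_plugins: ["goal_checker"]
    goal_checker:
      plugin: "nav2_controller::SimpleGoalChecker"
      xy_goal_tolerance: 0.25   # meters; too small -> oscillation, too large -> early stop
      yaw_goal_tolerance: 0.25  # radians
      stateful: True
```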
Originally posted by stevemacenski with karma: 8272 on 2023-03-14
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 38314,
"tags": "ros2"
} |
Prove that the trace norm is dual to the spectral norm | Question: Suppose $A\in L(X,Y)$. $||\cdot||$ denotes the spectral norm, i.e. the largest singular value of a matrix, which equals the largest eigenvalue of $\sqrt{A^*A}$.
$||\cdot||_{tr}$ denotes trace norm. We have that
$$||A||_{tr}=tr\sqrt{A^*A}$$
So I would like to prove the statement that
$$||A||_{tr}=\max\{|tr(A^*B)|: B\in L(X,Y), ||B||=1 \}$$
I know from Nielsen and Chuang's lemma 9.5 that
$$|tr(AU)|\le tr |A|$$ and equality is achieved by a unitary.
We have by definition that $|A|=\sqrt{A^*A}$. So $||A||_{tr}=tr|A|$.
I think my question is if $B$ is not a unitary but has norm 1, can we have that
$$|tr(AB)|> tr |A|\ge |tr(AU)|$$ for any unitary? And if yes, why the maximum is still achieved by a unitary?
Answer: There are different ways to prove what you want to prove, including the solution tsgeorgios has suggested, but for the sake of gaining greater intuition I would suggest starting with the recognition that the trace norm of any matrix is equal to the sum of its singular values.
Once you have this, the inequality you are trying to prove follows pretty easily. In particular, consider a singular value decomposition
$$
A = \sum_k s_k |\psi_k\rangle \langle \phi_k|.
$$
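These ingredients are easy to check numerically (a sketch using NumPy; the random matrices are illustrative): the unitary $B = U V^\dagger$ built from the SVD $A = U S V^\dagger$ attains $|\operatorname{Tr}(A^\ast B)| = \sum_k s_k$, and no operator of spectral norm 1 exceeds it.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

U, s, Vh = np.linalg.svd(A)              # A = U diag(s) Vh
trace_norm = s.sum()                     # ||A||_tr = sum of singular values

B = U @ Vh                               # a unitary, so ||B|| = 1
attained = abs(np.trace(A.conj().T @ B)) # equals the trace norm

# No operator of spectral norm 1 does better:
best_random = 0.0
for _ in range(200):
    C = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
    C /= np.linalg.norm(C, 2)            # scale to spectral norm 1
    best_random = max(best_random, abs(np.trace(A.conj().T @ C)))
```

This also answers the worry in the question: the maximum over all norm-1 operators is still achieved by a unitary.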
For any choice of $B$ we have, by the triangle inequality and a simple property of the spectral norm, that
$$
|\operatorname{Tr}(A^{\ast} B)|
= \biggl| \sum_{k} s_k \langle \psi_k | B | \phi_k\rangle \biggr| \leq \sum_k s_k |\langle \psi_k | B | \phi_k\rangle|
\leq \sum_k s_k \|B\| = \| A \|_{\text{tr}} \| B \|.
$$ | {
"domain": "quantumcomputing.stackexchange",
"id": 2061,
"tags": "mathematics, textbook-and-exercises, trace-distance"
} |
Matter fields on spacetime | Question: In "The Large Scale Structure of Space and Time" by Hawking and Ellis the following is stated:
"There will be various fields on $M$, such as the electromagnetic field, the neutrino field, etc., which describe the matter content of space-time"
Here $M$ is meant to be a spacetime (i.e. a smooth $4$-dimensional manifold). From a mathematical perspective, how does one interpret "field" as quoted above. Are the authors saying that these matter fields (such as the electromagnetic field and the neutrino field) are vector fields on $M$?
Answer: For a physicist, a “field” is any function over the manifold taking values in some vector space. Scalar fields are just real/complex functions on $M$, vector fields are just vector-valued functions over $M$, and spinor fields are spinor-valued functions over $M$. These fields are taken together with some action or stress tensor to couple them to the gravitational field.
Mathematically, a “field” is a section of some bundle $E$ over $M$, taken together with some least-action principle or stress tensor. A scalar field is a section of a line bundle, a vector field is a section of the (co)tangent bundle; spinor fields are sections of the “spin bundle”, and the gravitational field is a section of the (0,2) tensor bundle. | {
"domain": "physics.stackexchange",
"id": 64648,
"tags": "general-relativity, differential-geometry"
} |
General Theory of Small Oscillations and existence of solutions | Question: For small oscillations, my textbook equation for amplitude says:
$(V-\omega^2T) \cdot a=0$ where $a$ is a column vector in which each component $a_i$ is related to $q_i$ as $q_i=a_i\cos(\omega t-\gamma)$. Now it says for a solution to exist $\det(V-\omega^2T)=0$ must be satisfied.
My objection is: if it's non-zero it has an inverse and simple matrix algebra would do the trick. Although the problem we might face here is that the RHS is 0, which would give $a=0$. But if we make $\det(V-\omega^2T)=0$, doesn't it imply that the system has infinitely many solutions, or no solution, again as useless as the previous case? I somehow get a feeling that the author is trying to use the argument that $x \cdot y=0$ and we want some value of $x$, hence $y=0$, but I am not sure we can extend this line of reasoning to matrices.
Answer: Say $A$ is some square matrix. If we have an equation $Ax=0$, then there is a nonzero solution if and only if $\det(A)=0$. We can prove this as follows:
If $\det(A)\neq 0$, then $A$ is invertible, and so only one element $x$ can map to $0$. We know $A 0=0$, so no other element can map to zero.
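As a concrete instance of the determinant condition (a sketch; the matrices describe two equal masses coupled by three identical springs and are illustrative, not from the question):

```python
import numpy as np

k, m = 1.0, 1.0
V = k * np.array([[2.0, -1.0], [-1.0, 2.0]])    # potential (stiffness) matrix
T = m * np.eye(2)                               # kinetic (mass) matrix

# omega^2 solves det(V - omega^2 T) = 0; these are the generalized
# eigenvalues of (V, T), here k/m and 3k/m.
omega2 = np.sort(np.linalg.eigvals(np.linalg.inv(T) @ V).real)

dets_at_modes = [abs(np.linalg.det(V - w2 * T)) for w2 in omega2]
det_off_mode = abs(np.linalg.det(V - 2.0 * T))  # between the two modes
```

At $\omega^2 = k/m$ and $3k/m$ the matrix $V-\omega^2T$ is singular, so a nonzero amplitude vector exists (the in-phase and out-of-phase modes); at any other $\omega^2$, e.g. $\omega^2 = 2k/m$, it is invertible and only $a=0$ survives.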
If $\det(A)=0$, then we know $A$ is not invertible, and so somewhere, two elements map to the same element. Say $Ax=Ay$, where $x\neq y$. Then $0=Ax-Ay=A(x-y)$, where $x-y\neq 0$. We thus have a nonzero solution to $Ax=0$. | {
"domain": "physics.stackexchange",
"id": 25744,
"tags": "homework-and-exercises, oscillators, linear-algebra, coupled-oscillators, linearized-theory"
} |
Why doesn't Bernoulli's Principle apply to Current and Resistors in a circuit? | Question: Bernoulli's principle makes sense when you apply it to fluids. If you decrease the diameter of a pipe then the velocity of the fluid increases because it needs to keep the same rate of fluid moving through the pipe.
So my question is:
If Voltage == Diameter of the pipe
and
Current == Rate of which the fluid is moving
Why do resistors work?
Shouldn't the resistor only actually work within itself but then return the current to its actual rate once you have passed it?
Or have I taken the analogy of wires being like pipes of water too far?
Answer: Yes, you've taken the analogy too far: electrons don't actually move through the wire in the way that fluids flow through a pipe. Hence, there is no reason why an analog of Bernoulli's principle should apply. | {
"domain": "physics.stackexchange",
"id": 69823,
"tags": "electric-circuits, electric-current, electrical-resistance, voltage, bernoulli-equation"
} |
What is MIP hypothesis? | Question: While reading about muon analysis I read that finding calorimetry segments along with the tracks in the silicon tracker for muons helps us find a subset of tracks compatible with the MIP hypothesis. I was looking for what MIP means and all I could find is Minimum Ionizing Particle. Does MIP mean that and what is the MIP hypothesis?
Answer: The relationship between the energy detected in (a large class of) ionizing detectors and the distance traveled depends a bit on the energy of the particle, but the curve has a valley and is only weakly sensitive to speed in that valley (and the speed of minimum ionization also depends on the particle mass).
The rate of energy deposition for particles at the bottom of the valley is called 'minimum ionization'.
A minimum ionizing particle (MIP) is one that has the speed that generates minimum ionization.
The hypothesis here is that the track length is connected to the detector response by the assumption that the particle's speed is in the valley. So this is an analysis method (rather than a result you are hoping to publish) that works by saying
"Assume [some fact] about the thing we've detected and use that to work out more details, then later we'll look to see if the fit that results is consistent with what we assumed."
And because the minimum ionization valley is pretty broad and a lot of particles roughly qualify, this works out very well. | {
"domain": "physics.stackexchange",
"id": 44463,
"tags": "experimental-physics, leptons"
} |
How to build a library on ROS? | Question:
I installed lsd_slam and ros-indigo. To use lsd_slam library, I downloaded source code and run rosmake lsd_slam.
I tried to modify the contents of this library's files. I added the line below at line 248 in main_on_images.cpp under lsd_slam/lsd_slam_core/src. Then I ran rosmake lsd_slam to build.
printf("getCurrentPoseEstimate\n");
I run lsd_slam on a dataset using the command below:
rosrun lsd_slam_core dataset_slam _files:=<files> _hz:=<hz> _calib:=<calibration_file>
I don't see the getCurrentPoseEstimate message. How can I rebuild a library after modifying its source code on ROS?
Originally posted by jossy on ROS Answers with karma: 83 on 2015-07-14
Post score: 0
Answer:
rosmake is enough. I am not deleting this question, so it can help other users.
Originally posted by jossy with karma: 83 on 2015-07-14
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 22178,
"tags": "ros, build, rosmake, ros-indigo, lsd-slam"
} |
Thought experiment with opposite intuitions from thermo/info theory | Question: I'm given to understand that entropy in thermodynamics and entropy in information theory are functionally interchangeable. Informally, I can accept that the amount of work required to achieve a physical state that realizes a function of a random variable or process (or can be described by one) with a certain entropy is proportionate to the entropy of that state.
My problem is that thermodynamic entropy and info-theoretic entropy seem to point in opposite directions. So here's a thought experiment:
Imagine I have a pot with two fluids in it, on top of a burner. These two fluids don't chemically bond, but can produce a homogeneous mixture if you stir enough. They also settle out to different levels, being of different densities.
If I heat the fluids with the burner until one or both of them is near boiling, I know that I'll get a pretty good mixture after a while, like butter in chocolate. My notion is that this is a high-entropy state for the mixture, as it's as "scrambled" as it can be. If I were to try to describe this mixture with a string (a la Kolmogorov), I would have to write a very long one to describe which particular state each of the fluid particles was in at a given instant.
On the other hand, though, this is a state I achieved by adding energy to the system through the burner. I obviously put the fluids into a state from which I could extract work. (At the most absurd level, I could put a little wheel in the fluid that would spin as they convected, and use it to power an LED light). Doesn't that mean that I've decreased the entropy? After all, isn't the Earth provided with energy to do work from the radiated energy of the Sun?
The reverse is just as contradictory to me. If I let the heat radiate back out of the pot and turn off the burner, the fluids will eventually settle out into layers. This is a much simpler arrangement to describe, and closer to that of an "unbroken egg" than the heated mixture. But now I have less energy to do work with, and it seems like this is a higher entropy state, since I've just "let it run down".
So what's wrong? My intuition, my definitions, or the experiment?
Answer: As far as extraction of work is concerned, what matters is change in entropy of universe. If the process that occurs increases entropy of universe, then you can in principle extract work out of it. During that process a particular sub-system's entropy may increase, decrease, or remain constant.
In your boiling pot example, you have indeed increased entropy of the contents of the pot by heating it. But this doesn't mean you cannot extract work out of it. In fact the convective motion inside the pot tells you there is still mechanical energy inside the pot that could be converted to heat, and therefore further increase entropy of universe. So it is not surprising that you can extract some work out of it using some contraption like a turbine wheel.
When you turn off heating and the pot cools down and returns to equilibrium, the universe has reached the state of maximum entropy (to avoid complications, think of the boiling pot and heating mechanism as located inside an isolated chamber). From there no other higher entropy state is accessible to the universe, so no work can be extracted from the cooled down pot.
In short, always think in terms of entropy change of the entire universe, and not of a particular system. | {
"domain": "physics.stackexchange",
"id": 36422,
"tags": "thermodynamics, entropy, information, thought-experiment"
} |
Generating IkFast MoveIt Plugin for UR5 from 2 different URDF's | Question:
Thanks to the MoveIt tutorial on IkFast Kinematics Solver, I managed to generate a plugin for UR5 using the urdf files in the ur_description from the generic ros-industrial/universal_robot github. However, I am using the new ur robot driver. That requires the urdf from the specific github of fmauch/universal_robot. When I used the urdf from there, the docker image generated a lot of errors. I assume both ur5.urdf.xacro are describing the same manipulator and should produce the same Ik solver but it obviously does not!
Any idea which of them should be used.
Thank you very much for your help.
Originally posted by Victor Wu on ROS Answers with karma: 53 on 2020-08-11
Post score: 0
Answer:
I assume both ur5.urdf.xacro are describing the same manipulator and should produce the same Ik solver but it obviously does not!
that would not be a(n) (entirely) correct assumption.
The files in ros-industrial/universal_robot model the kinematics of a single robot, which is almost guaranteed not to correspond to the kinematics of your specific robot.
The difference would be in the kinematic calibration data.
The files in fmauch/universal_robot have been extended to allow you to import the calibration from your robot, which will then update link length and joint offsets such that the resulting URDF corresponds 100% with the kinematic structure of your specific robot.
So while from a high-level those files model the same robot type, the URDFs which will be uploaded to robot_description are actually different.
Edit:
does this mean, if I want to use a self generated IkFast plugin for Moveit to use, I should go through the ur robot ros driver to the point where I get the calibration data from the robot first. Then use that data to generate a urdf with the calibration data to use the docker image to generate the IkFast plugin before using the moveit setup assistant to generate the moveit config suit for use?
If you want the IKFast plugin to be generated while taking the calibration data of your robot into account, then: yes.
At the same time, I would like to ask how is the joint_limited situation for ur5? You and others started to discuss this issue back from 2014. Have you come to a conclusion that we do not need to limit the joints to just +-pi except for the elbow?
And a further question, is the vel_joint_trajectory_controller of the new ur robot driver ready for use yet?
These seem unrelated to the question you ask in your OP.
It would be better to post new questions -- after verifying these have not been asked before.
Originally posted by gvdhoorn with karma: 86574 on 2020-08-11
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by gvdhoorn on 2020-08-11:
The need to use the fmauch fork will be removed, as the files in ros-industrial/universal_robot will be updated to include the necessary changes to support importing kinematic calibration data.
This is already started and part of ros-industrial/universal_robot#448.
The current progress can be seen in ros-industrial/universal_robot@melodic-devel-staging.
Work to make UniversalRobots/Universal_Robots_ROS_Driver compatible with this is in UniversalRobots/Universal_Robots_ROS_Driver#97.
Comment by Victor Wu on 2020-08-11:
@gvdhoorn does this mean, if I want to use a self generated IkFast plugin for Moveit to use, I should go through the ur robot ros driver to the point where I get the calibration data from the robot first. Then use that data to generate a urdf with the calibration data to use the docker image to generate the IkFast plugin before using the moveit setup assistant to generate the moveit config suit for use?
At the same time, I would like to ask how is the joint_limited situation for ur5? You and others started to discuss this issue back from 2014. Have you come to a conclusion that we do not need to limit the joints to just +-pi except for the elbow?
And a further question, is the vel_joint_trajectory_controller of the new ur robot driver ready for use yet?
Comment by Victor Wu on 2020-08-12:
When I use the urdf from the fmauch fork and trying to generate the IkFast plugin, I got the following:
openravepy.databases.inversekinematics: testik, success rate: 0.000000, wrong solutions: 0.000000, no solutions: 1.000000, missing solution: 0.000000
Created /tmp/ikfast.rKmGra/.openrave/kinematics.4bd8292d934c988a06151a7ad63f3388/ikfast0x10000049.Transform6D.0_1_2_3_4_5.cpp
So, I will interpret it as complete failure, with success rate:0; no solutions: 1.0. What could have gone wrong?
Comment by gvdhoorn on 2020-08-12:
I have no idea.
Comment by Victor Wu on 2020-08-12:
Thank you very much for all your help so far. I will then try the melodic-devel-staging and report back. | {
"domain": "robotics.stackexchange",
"id": 35397,
"tags": "ros, ros-melodic, ur5"
} |
Why are single-electron states occupied at $T=0$ in superconductors? | Question: Some textbooks show a density of states where the single-electron states below the Fermi energy are occupied at $T=0$ (see picture). However, I thought that at $T=0$ all electrons are paired. Hence this would mean that they are bosons and should not show up in the density of states for single electrons. How is that possible?
Answer: In a few words, I believe that this picture is misleading, since there is a totally equivalent picture which makes your point more intuitive. However, I need to go through the details.
Mathematical analysis
You can see this very clearly from the BCS theory. The BCS Hamiltonian reads
$$
H = \sum_k \Psi_k^{\dagger} \left(
\begin{array}{cc}
\varepsilon_k & \Delta \\
\Delta & -\varepsilon_k
\end{array}
\right) \Psi_k,
$$
where I have introduced the Nambu spinor containing usual electron operators
$$
\Psi_k = \left( \begin{array}{c} c_{k\uparrow} \\ c^{\dagger}_{-k\downarrow} \end{array} \right).
$$
The BCS Hamiltonian does not commute with the particle number operator, meaning that the number of fermions in the system is not a good quantum number, and is not fixed.
Now performing a suitable unitary transformation, you can diagonalize the matrix and get a diagonal Hamiltonian in terms of new fermionic operators $\alpha_{k\sigma}$, $\sigma=\uparrow, \downarrow$:
$$
H = \sum_{k\sigma} \lambda_k \alpha^{\dagger}_{k\sigma} \alpha_{k\sigma} - \sum_k \lambda_k
$$
where $\lambda_k = \sqrt{\varepsilon_k^2 + \Delta^2}$.
Notice that $\lambda_k >0$ for every $k$, as long as the system is gapped ($\Delta\neq 0$).
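As a quick numerical check (the values of $\varepsilon_k$ and $\Delta$ are illustrative), diagonalizing the $2\times 2$ block indeed gives the symmetric pair of levels $\pm\lambda_k$:

```python
import numpy as np

eps_k, delta = 0.7, 1.3                      # illustrative values
H_k = np.array([[eps_k,  delta],
                [delta, -eps_k]])

evals = np.linalg.eigvalsh(H_k)              # ascending: [-lambda_k, +lambda_k]
lam_k = np.sqrt(eps_k**2 + delta**2)
```

Both eigenvalues stay away from zero as long as $\Delta \neq 0$, matching the statement that every quasiparticle excitation costs $\lambda_k > 0$.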
So what is the ground state of such a system, provided that you don't have any constraint on the particle number?
Well, it's the state with no $\alpha$-fermions at all, since the presence of any $\alpha$-fermion would increase the energy by an amount $\lambda_k$!
Moreover, the ground state energy is $E_0 = - \sum_k \lambda_k$.
This makes total sense: all the electrons are in the condensate, $\alpha$-fermions represent quasi-particle excitations on top of it, and in the ground state there are no such excitations.
Now the misleading - but equivalent - picture.
There is a trivial trick to make things more symmetric: let's write the identity $\lambda_k \alpha^{\dagger}_{k\sigma} \alpha_{k\sigma} = (\lambda_k/2) (\alpha^{\dagger}_{k\sigma} \alpha_{k\sigma} + 1 - \alpha_{k\sigma} \alpha^{\dagger}_{k\sigma})$, so that the Hamiltonian becomes
$$
H = \sum_{k\sigma} \left( \frac{\lambda_k}{2} \alpha^{\dagger}_{k\sigma} \alpha_{k\sigma} - \frac{\lambda_k}{2} \alpha_{k\sigma} \alpha^{\dagger}_{k\sigma} \right),
$$
Finally let's rewrite the second term introducing the operator $\beta^{\dagger}_{k\sigma} = \alpha_{k\sigma}$ (notice that a $\beta$-fermion can be interpreted as a hole of $\alpha$-fermions):
$$
H = \sum_{k\sigma} \left( \frac{\lambda_k}{2} \alpha^{\dagger}_{k\sigma} \alpha_{k\sigma} - \frac{\lambda_k}{2} \beta^{\dagger}_{k\sigma} \beta_{k\sigma} \right).
$$
The plot you are showing clearly comes out of this Hamiltonian (notice the symmetric energy bands $\pm \lambda_k/2$). What is the ground state now? Again the state with no $\alpha$-particles, corresponding to the state with the maximum number of $\beta$-particles! The ground state energy is $E_0 = -\sum_k \lambda_k$, which is again consistent.
The latter approach is nicer and more symmetric, but also misleading, because it suggests the wrong idea that the ground state is made by electrons filling up the lowest energy band. The correct interpretation is that the system is filled up with $\beta$-fermions, or equivalently it is completely empty of $\alpha$-fermions: in either case, if you write down the state in terms of the original electron operators $c_{k\sigma}$, you will get the BCS ground state, i.e. a coherent state of condensed electronic pairs :) | {
"domain": "physics.stackexchange",
"id": 87356,
"tags": "solid-state-physics, temperature, superconductivity, bosons, density-of-states"
} |
What's the point of using Morphological processing concerning images? | Question: I dug deep into the internet to figure out the point of morphological processing of images, but I'd rather hear the answer from people with experience. Can you give me a hand with this? Thank you in advance
Answer: Many operations can be performed on images for enhancement and restoration. Sometimes, they seem like mere handcraft and tinkering.
To make more sense of such workflows, and to give them solid roots, it is useful to consider images as classes of abstract objects, and operations as well-defined structural actions on the former. Depending on premises or assumptions about the image (is it binary or continuous-valued, what do we expect from filtering, which elements are meaningful, etc.), several constructions can be derived.
Standard linear filtering techniques for images assume that images are composed of linear superpositions of components. This yields convolution-based methods, treating images as elements of a linear vector space (the underlying structure), with a strong "continuous" underlying theory. This can be related to Fourier or harmonic analysis, as well as the strong influence of the least-squares method.
But image formation is often non-linear, for instance because of occlusion or saturation. So you have a collection of non-linear tools, with homomorphic, min, max or median filters for instance, or stack filters for bit-wise versions. Plus, they can be discretized in both values and space.
Mathematical morphology is another comprehensive framework, based on another underlying mathematical structure, lattices/set theory and related operations (erosion, dilation), with a strong discrete background (especially on image values).
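As a concrete illustration of the lattice-based operations just mentioned, here is a minimal, numpy-only sketch of binary erosion and dilation (the test image and structuring element are made up for the example; in practice one would typically reach for scipy.ndimage or scikit-image):

```python
import numpy as np

def dilate(img, se):
    """Binary dilation: 1 where the structuring element, centered at the
    pixel, overlaps at least one foreground pixel."""
    ph, pw = se.shape[0] // 2, se.shape[1] // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            window = padded[i:i + se.shape[0], j:j + se.shape[1]]
            out[i, j] = np.any(window[se == 1])
    return out

def erode(img, se):
    """Binary erosion: 1 where the structuring element fits entirely
    inside the foreground."""
    ph, pw = se.shape[0] // 2, se.shape[1] // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            window = padded[i:i + se.shape[0], j:j + se.shape[1]]
            out[i, j] = np.all(window[se == 1])
    return out

img = np.zeros((7, 7), dtype=int)
img[2:5, 2:5] = 1                  # a 3x3 square of foreground
se = np.ones((3, 3), dtype=int)    # 3x3 structuring element

opened = dilate(erode(img, se), se)  # "opening": erosion then dilation
```

Composing the two operators gives the classical opening (erosion followed by dilation), which removes foreground details smaller than the structuring element while leaving larger shapes intact.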
Both have recently been shown to be connected in some ways, through the Cramer transform; see for instance An Explanation for the Logarithmic Connection between Linear and Morphological System Theory. Another interesting read is C. Ronse's 1989 Fourier analysis, mathematical morphology, and vision. | {
"domain": "dsp.stackexchange",
"id": 7001,
"tags": "image-processing, signal-analysis"
} |
getting error when I install ros-kinetic-rosbridge-suite on ubuntu 16 | Question:
Can anyone please suggest me?
:~$ echo $ROS_DISTRO
kinetic
:~$ sudo apt install ros-kinetic-rosbridge-suite
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package ros-kinetic-rosbridge-suite
Edit: The output:
:~$ cat /etc/apt/sources.list.d/ros-latest.list
deb http://packages.ros.org/ros/ubuntu precise main
Originally posted by amart on ROS Answers with karma: 3 on 2019-06-04
Post score: 0
Original comments
Comment by gvdhoorn on 2019-06-04:
Can you install other ROS pkgs using apt? What is the output of uname -a? Have you ran sudo apt update recently?
Comment by amart on 2019-06-06:
i've run sudo apt update. I can't install any ros package.
Comment by gvdhoorn on 2019-06-06:
If you can't install any ROS packages then I would guess you haven't setup your system correctly.
Which installation guides (if any) did you follow?
What is the output of
cat /etc/apt/sources.list.d/ros-latest.list
Answer:
Edit: The output:
:~$ cat /etc/apt/sources.list.d/ros-latest.list
deb http://packages.ros.org/ros/ubuntu precise main
The problem most likely is that you've configured your system to ask the ROS package repositories for Ubuntu Precise (ie: 14.04) while you are actually running Ubuntu Xenial (16.04) -- or at least according to your question title.
I'm not sure how that happened, but you should be able to correct this situation using the following command:
sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'
Note that this will override whatever you have now in /etc/apt/sources.list.d/ros-latest.list.
Did you follow the installation tutorial for ROS Kinetic on Ubuntu Xenial?
Originally posted by gvdhoorn with karma: 86574 on 2019-06-06
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by amart on 2019-06-06:
Yes I followed http://wiki.ros.org/kinetic/Installation/Ubuntu for Ros Kinetic on Ubuntu Xenial.
After I run this comment
sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'
The output has overridden which resulted is :
deb http://packages.ros.org/ros/ubuntu xenial main
But cannot install rosbridge package yet. Any other suggestion please?
Comment by gvdhoorn on 2019-06-06:
Did you run sudo apt-get update before trying to install again?
Comment by amart on 2019-06-06:
After running sudo apt-get update and try to install again, now it perfectly worked. Thank a lot you saved my time.
Comment by amart on 2019-06-06:
Sure. Done sir | {
"domain": "robotics.stackexchange",
"id": 33115,
"tags": "ros-kinetic"
} |
Create new data frames from existing data frame based on unique column values | Question: I have a large data set (4.5 million rows, 35 columns). The columns of interest are company_id (string) and company_score (float). There are approximately 10,000 unique company_id's.
company_id company_score date_submitted company_region
AA .07 1/1/2017 NW
AB .08 1/2/2017 NE
CD .0003 1/18/2017 NW
My goal is to create approximately 10,000 new dataframes, by unique company_id, with only the relevant rows in that data frame.
The first idea I had was to create the collection of data frames shown below, then loop through the original data set and append in new values based on criteria.
company_dictionary = {}
for company in df['company_id']:
    company_dictionary[company] = pd.DataFrame([])
Is there a better way to do this by leveraging pandas? i.e., is there a way I can use a built-in pandas function to create new filtered dataframes with only the relevant rows?
Edit: I tried a new approach, but I'm now encountering an error message that I don't understanding.
[In] unique_company_id = np.unique(df[['ID_BB_GLOBAL']].values)
[In] unique_company_id
[Out] array(['BBG000B9WMF7', 'BBG000B9XBP9', 'BBG000B9ZG58', ..., 'BBG00FWZQ3R9',
'BBG00G4XRQN5', 'BBG00H2MZS56'], dtype=object)
[In] for id in unique_company_id:
[In] new_df = df[df['id'] == id]
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
C:\get_loc(self, key, method, tolerance)
2133 try:
-> 2134 return self._engine.get_loc(key)
2135 except KeyError:
pandas\index.pyx in pandas.index.IndexEngine.get_loc (pandas\index.c:4433)()
pandas\index.pyx in pandas.index.IndexEngine.get_loc (pandas\index.c:4279)()
pandas\src\hashtable_class_helper.pxi in pandas.hashtable.PyObjectHashTable.get_item (pandas\hashtable.c:13742)()
pandas\src\hashtable_class_helper.pxi in pandas.hashtable.PyObjectHashTable.get_item (pandas\hashtable.c:13696)()
KeyError: 'id'
During handling of the above exception, another exception occurred:
KeyError Traceback (most recent call last)
<ipython-input-50-dce34398f1e1> in <module>()
1 for id in unique_bank_id:
----> 2 new_df = df[df['id'] == id]
C:\ in __getitem__(self, key)
2057 return self._getitem_multilevel(key)
2058 else:
-> 2059 return self._getitem_column(key)
2060
2061 def _getitem_column(self, key):
C:\ in _getitem_column(self, key)
2064 # get column
2065 if self.columns.is_unique:
-> 2066 return self._get_item_cache(key)
2067
2068 # duplicate columns & possible reduce dimensionality
C:\ in _get_item_cache(self, item)
1384 res = cache.get(item)
1385 if res is None:
-> 1386 values = self._data.get(item)
1387 res = self._box_item_values(item, values)
1388 cache[item] = res
C:\ in get(self, item, fastpath)
3541
3542 if not isnull(item):
-> 3543 loc = self.items.get_loc(item)
3544 else:
3545 indexer = np.arange(len(self.items))[isnull(self.items)]
C:\ in get_loc(self, key, method, tolerance)
2134 return self._engine.get_loc(key)
2135 except KeyError:
-> 2136 return self._engine.get_loc(self._maybe_cast_indexer(key))
2137
2138 indexer = self.get_indexer([key], method=method, tolerance=tolerance)
pandas\index.pyx in pandas.index.IndexEngine.get_loc (pandas\index.c:4433)()
pandas\index.pyx in pandas.index.IndexEngine.get_loc (pandas\index.c:4279)()
pandas\src\hashtable_class_helper.pxi in pandas.hashtable.PyObjectHashTable.get_item (pandas\hashtable.c:13742)()
pandas\src\hashtable_class_helper.pxi in pandas.hashtable.PyObjectHashTable.get_item (pandas\hashtable.c:13696)()
KeyError: 'id'
Answer: You can group by the company_id column and convert the result of groupby into a dictionary of DataFrames:
import pandas as pd
df = pd.DataFrame({
"company_id": ["AA", "AB", "AA", "CD", "AB"],
"company_score": [.07, .08, .06, .0003, .09],
"company_region": ["NW", "NE", "NW", "NW", "NE"]})
# Approach 1
dict_of_companies = {k: v for k, v in df.groupby('company_id')}
# Approach 2
dict_of_companies = dict(tuple(df.groupby("company_id")))
import pprint
pprint.pprint(dict_of_companies)
Output:
{'AA': company_id company_region company_score
0 AA NW 0.07
2 AA NW 0.06,
'AB': company_id company_region company_score
1 AB NE 0.08
4 AB NE 0.09,
'CD': company_id company_region company_score
3 CD NW 0.0003} | {
"domain": "datascience.stackexchange",
"id": 2871,
"tags": "python, pandas, dataframe"
} |
How to simulate shortest Path finder/planner and dynamic obstacle detection in Gazebo | Question:
I am a bachelor's student whose final year project is a shortest path finder/planner including dynamic obstacle detection (a robotic car). As I am new to Gazebo, by looking at some official tutorials I learned how to make a car move. But I don't know where exactly to start from; I don't even know how to avoid static obstacles.
Please guide me, where could I get some extra help?
Originally posted by SalmaanAhmed on Gazebo Answers with karma: 17 on 2013-08-20
Post score: 0
Answer:
Hi,
implementing such stuff has to be done by your own controllers; Gazebo only provides the physical simulation.
To recognize static obstacles you can use sensors like laser scanners. A very primitive approach would be: if you detect an object on the right side, you move left... (This is probably not the best approach because you might get stuck in corners etc...)
For path planning you will need some other kind of representation (a map) of your world. This map could be used for different path planning algorithms. If you want to use a map, you have to know whether it is generated at runtime or known to the robot before you start. If you have to create a map at runtime, you might want to take a look at SLAM packages...
After you have got a map you can use common path planners with it. Which planner is best for your purposes is not an easy question. It depends highly on the environment you're in, on the size of your environment, on whether there are moving obstacles, etc. Maybe the SBPL or OMPL planners have an algorithm that suits your needs...
It might be a good idea to look at ROS for some of these things, but it is not mandatory...
Originally posted by evilBiber with karma: 881 on 2013-08-26
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by SalmaanAhmed on 2013-09-18:
Thank you !!
I might consult you when i get stuck :)
Comment by anonymous on 2014-11-24:
hey, were you able to control the braking system of the car (Polaris) in Gazebo? Any help would be appreciated. Thanks! | {
"domain": "robotics.stackexchange",
"id": 3435,
"tags": "gazebo, gazebo-sensor, collision"
} |
Connection ROS to Universal Robot | Question:
Hi everyone,
I have got an Universal robot and a sensor mounted/connected to the robot.
The robot is connected with a network cable to my PC. All I want is to have the current coordinates and the Sensor Data shown in ROS.
I have spent hours upon hours on this issue... Can someone help me?
Edit: sorry for the unspecific question.
Thanks for the link, yes I have seen it before.
Back to my question/ questions:
Is it possible to have a "tf monitor" like in this video -> (https://www.youtube.com/watch?v=Ra-nXIfPWdg, minute 6:40) for the current status of the robot? If yes, how?
When I follow the link wiki/universal_robot there is the GitHub link: https://github.com/ros-industrial/universal_robot. When I am setting up a workspace, which of the folders (on GitHub) should I put in the source folder? I have put the following folders inside:
universal_robot
ur_bringup
ur_driver
ur_msgs
after that I used catkin_make. I've tried it several times for testing reasons and I also tried catkin_make install after that (do I need catkin_make install?).
So the question is, what should I do for the next step to run the following:
To bring up the real robot, run:
roslaunch ur_bringup ur5_bringup.launch robot_ip:=IP_OF_THE_ROBOT [reverse_port:=REVERSE_PORT]
And what does this sentence mean:
Don't forget to source the correct setup shell files and use a new terminal for each command!
What do I have to do?
Cause when I am running roslaunch... the following information appears:
Unable to register with master node http://...: Master may not be running yet. Will keep trying.
Thank you for your help and bringing ROS (community) further.
Originally posted by toriori on ROS Answers with karma: 1 on 2017-02-08
Post score: 0
Answer:
You don't tell us what you've already tried, so: have you seen wiki/universal_robot?
As to the sensor: you don't tell us what (kind of) sensor you have, so there I cannot help you.
Edit: this is probably not what you want to hear, but I have to ask: have you looked at any of the tutorials, read a book or experimented a bit with ROS before you set out to work with your UR5? I ask because your questions seem to suggest that you might be struggling with the basic setup of a ROS workspace, downloading, understanding and compiling packages and then using them.
If you have already looked at them, please ignore, but I'll just include a link to the basic ROS tutorials and to (one example of) a book: A Gentle Introduction to ROS (there are many more, but this one is available for free). If this is your first experience with ROS, please spend some time studying these resources, as they will let you avoid lots and lots of frustration and "spend[ing] hours on hours on" issues like the current one.
Now as to your questions:
1 . Is it possible to have a "tf monitor" like in this video (minute 6:40) for the current status of the robot? If yes, how?
tf_monitor is a tool in the tf package, which is a basic part of all ROS installations, so you should have it already. For visualisation of TF frames, use the TF Display in RViz.
2 . When I follow the link wiki/universal_robot [..] which of the folders (on GitHub) should I put in source folder?
First: determine whether you really need to build the packages from source or if you could just use the binary packages that have already been prepared for you. Are you going to work on the source code of the UR drivers? Or do you just want to interface with your robot? Are you running ROS Kinetic?
If you are using ROS Indigo and just want to use the software (ie: not develop it), please just run: sudo apt-get install ros-indigo-universal-robot. That should install all the necessary parts.
If you are on ROS Kinetic or want to work on the source code of the packages, you'll have to build them from source.
I'll wait for you to tell us which it is before going into much more detail. In any case I expect all of this will become clear once/if you read up on ROS packages and workspace management. See (fi): Creating a workspace for catkin, the other Catkin tutorials and the book I mentioned earlier (and if you want some more background info, see A Gentle Introduction to Catkin).
after that I used catkin_make. I' ve tried it several time for testing reasons and i also tried catkin_make install after that (do I need catkin_make install?)
Again, I'll wait for you to let us know what you really need. For a (very short) intro into how to build packages (hosted on Github) from source in a catkin workspace, see the accepted answer to #q252478.
So the question is, what should I do for the next step to run the following:
roslaunch ur_bringup ur5_bringup.launch robot_ip:=IP_OF_THE_ROBOT [reverse_port:=REVERSE_PORT]
In summary:
source the correct setup.bash (either from your workspace, or the main one under /opt/ros/..)
find out the IP address of your robot controller, and
then roslaunch ur_bringup ur5_bringup.launch robot_ip:=$IP_OF_YOUR_CONTROLLER (where you replace $IP_OF_YOUR_CONTROLLER with the actual IP that you found in step 2)
Originally posted by gvdhoorn with karma: 86574 on 2017-02-08
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 26953,
"tags": "ros, universal-robot"
} |
Is it legal to use non-terminal twice in Backus-Naur grammar | Question: It's a school assignment:
The problem is:
Write a BN grammar for +67.
The solution I have been given is the following:
<digit> ::= 6 | 7
<number> ::= <digit> | <digit> <number>
<+67> ::= + <number>
My question is, when given this simple task, could it be derived like this?
<digit> ::= 6 | 7
<number> ::= <digit> <digit>
<+67> ::= + <number>
More precisely, can I use the same non-terminal twice in the same rule?
Answer: Yes, you are allowed to use the same non-terminal more than once in a rule.
Note that the two grammars you give do not express the same set of strings.
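One quick way to convince yourself of this is to enumerate what each grammar generates (a small Python sketch; the helper names are made up):

```python
from itertools import product

# Grammar 1: <number> ::= <digit> | <digit> <number>  => one or more digits
def strings_grammar1(max_digits=3):
    out = set()
    for n in range(1, max_digits + 1):
        for digits in product("67", repeat=n):
            out.add("+" + "".join(digits))
    return out

# Grammar 2: <number> ::= <digit> <digit>  => exactly two digits
def strings_grammar2():
    return {"+" + a + b for a, b in product("67", repeat=2)}
```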
The first grammar gives any non-empty sequence of 6s and 7s following the +, since the <number> rule can be applied recursively. The second defines exactly four strings: +66, +67, +76, +77. | {
"domain": "cs.stackexchange",
"id": 1464,
"tags": "terminology, formal-grammars"
} |
unambiguous grammar but it's not LR(1) | Question: I have following grammar:
$$A \to a A a \mid \varepsilon$$
This grammar is not ambiguous because there is no more than one parse tree for any sentence generated by it, but there is a shift-reduce conflict in an LR(1) parser for this grammar.
Why does there exist a conflict even though the grammar is not ambiguous?
Answer: All $LR(1)$ grammars -- indeed, all $LR(k)$ grammars -- are unambiguous, by definition. But the converse is not true: the fact that a grammar is unambiguous does not say anything about whether it can be parsed with an $LR(k)$ parser.
The grammar you present is not $LR(1)$, although the language itself is. (In fact, the language is regular: $(aa)^*$.) But that's not true for the language of even-length palindromes which has a rather similar unambiguous CFG:
$$\begin{align}
S &\to \epsilon \\
S &\to a S a \\
S &\to b S b
\end{align}$$
Intuitively, the problem with parsing palindromes deterministically is that you have to start popping the stack at the middle of the sentence. But you can't tell where the middle of the sentence is until you reach the end, and since there is no limit on the length of a sentence, the end could be arbitrarily distant from the middle. So no finite lookahead is sufficient to make the decision.
A context-free language is $LR(k)$ precisely if it is deterministic. For the outline of a proof of the non-determinism of the language of even-length palindromes, see: prove no DPDA accepts language of even-lengthed palindromes | {
"domain": "cs.stackexchange",
"id": 8860,
"tags": "formal-languages, context-free, compilers"
} |
Python and RDKit to extract sub-structures in a SMILES | Question: I am not a chemist, I come from a computer science background. I am involved in a cheminformatics project. I have a list of molecules in SMILES format, which I want to extract sub-structures from.
I found some interesting sub-structures that I want to look for in a given molecule in the paper https://www.mdpi.com/1420-3049/21/1/75 published in 2015.
In the abstract it is mentioned that: We first uncovered three important criteria closely related to drug-likeness, namely:
(1) the best numbers of aromatic and non-aromatic rings are 2 and 1, respectively;
(2) the best functional groups of candidate drugs are usually -OH, -COOR and -COOH in turn, but not -CONHOH, -SH, -CHO and -SO3H. In addition, the -F functional group is beneficial to CNS drugs, and -NH2 functional group is beneficial to anti-infective drugs and anti-cancer drugs;
(3) the best R-value intervals of candidate drugs are in the range of 0.05–0.50
(preferably 0.10–0.35), and R-value of the candidate CNS drugs should be as small as possible in this interval
also, Acyclic groups as mentioned in this paper (figure 4), https://bmcchem.biomedcentral.com/articles/10.1186/s13065-021-00737-2
My question is, is there a method in RDKit (or other Python libraries) that extracts these criteria from smiles?
Answer: I chose the example in Figure 1 in the article for your first three points.
from rdkit import Chem
from rdkit.Chem import Draw, Descriptors, rdqueries
m = Chem.MolFromSmiles('NC(=O)C1(CCN(CCCN2C3=C(CCC4=C2C=C(Cl)C=C4)C=CC=C3)CC1)N1CCCCC1')
Draw.MolToImage(m, size=(400,200))
print('Aromatic ring count =', Descriptors.NumAromaticRings(m))
Aromatic ring count = 2
print('Non-aromatic ring count =', Descriptors.NumAliphaticRings(m))
Non-aromatic ring count = 3
You have to define SMARTS for all functional groups.
fg = Chem.MolFromSmarts('C(=O)[NX3;H2]') # SMARTS for -CONH2
print('Functional group:', len(m.GetSubstructMatches(fg)), '-CONH2')
Functional group: 1 -CONH2
If I see it correctly, the R value is (heavyatoms - carbons) / heavyatoms.
heavyatoms = Descriptors.HeavyAtomCount(m)
q = rdqueries.AtomNumEqualsQueryAtom(6) # 6 for carbon
carbons = len(m.GetAtomsMatchingQuery(q))
r = (heavyatoms-carbons)/heavyatoms
print('R value =', round(r, 2))
R value = 0.18
What you call Acyclic groups just means that there are no rings in the molecule.
from rdkit.Chem.Scaffolds import MurckoScaffold
m1 = Chem.MolFromSmiles('CCC')
core = MurckoScaffold.GetScaffoldForMol(m1)
s = Chem.MolToSmiles(core)
if len(s) == 0:
print('No ring in the molecule')
No ring in the molecule | {
"domain": "chemistry.stackexchange",
"id": 16451,
"tags": "cheminformatics, molecular-design"
} |
Photo of sprites in a clear dark sky, how is this possible? | Question:
above x2: Photos by David Finlay, from here.
The BBC news article Rare 'sprites' photographed beside Southern Lights shows photographs by the Australian photographer David Finlay. They are remarkably clear and distinct, and set against a clear night sky filled with stars. (The two fuzzy blue blobs are the large and small Magellanic Clouds - dwarf galaxies in the local group.)
I had thought that sprites were challenging to photograph not only because they are dim and rare requiring a special low-light cooled-CCD camera and luck, but that one had to be almost directly above a thunderstorm. This is a view of an almost completely clear sky, from the ground with a normal camera.
Are the clouds and light in the distance at the bottom of the first image the source of these sprites? Is there really that much difference in altitude between the sprite and the associated thunderstorm?
Answer: According to this Wikipedia entry Sprites occur at altitudes between 50 and 90km while the thunderstorms that create them generally top out below 16km so there is a minimum of more than 30km of height difference between a storm and any sprites it may spawn. So yes, at the right angle, you should be able to get a photograph of a storm's sprites from the ground. Whether the storm we can see in the photo is in fact the source of those sprites I don't know, given the angles I think the part of the storm that created them is actually below the horizon. | {
"domain": "earthscience.stackexchange",
"id": 2384,
"tags": "lightning, thunderstorm, upper-atmosphere"
} |
Why are clouds lighter than the sky during the day but darker at night | Question: This is probably a very basic question but I couldn't find a good answer to it, most search results are about rain clouds or clouds appearing red at night (something I've never seen except for during sunset but apparently it's common in bigger cities).
Basically what I'm wondering is why clouds during the day appear lighter than the sky (white vs light blue) while clouds at night and during the evening appear darker than the sky (see image).
Image quality is low because I took it with my phone through my window.
I guess the clouds could be blocking the light and therefore appear darker but in that case, shouldn't the same thing be happening during the day?
Answer: There could be quite a few things going on.
Off the bat there's no incoming light for them to scatter: during the day, clouds are white because the water droplets are big enough for all visible light to cause Mie scattering, but if you don't have much light falling on them, you can't observe the scattering and you can't observe light passing through either.
Then you could consider the fact that in some places, it rains more in the evening/night than during the day (if you have hotter surface temperatures during the afternoon, you see cloud formation and precipitation during the late evening, and with the lower temperatures in the night, the air is more likely to become saturated, see Dew Point), and clouds which precede rain are thicker and denser. They don't allow much light pass through.
And lastly, there's less ambient light which they can reflect back towards you. | {
"domain": "physics.stackexchange",
"id": 52587,
"tags": "visible-light, sun, weather"
} |
Forcing use of catkin in jenkins builds | Question:
Hi,
I have a package in which the CMakeLists.txt has to work without catkin as well (to build with cmake only). My current solution is to disable all catkin-specific commands (eg. catkin_package()) by taking them out with if statements, eg. if(COMPILE_WITH_CATKIN) ..<catkin commands> .... To set COMPILE_WITH_CATKIN I use the fact that CATKIN_DEVEL_PREFIX is defined only when building from within a catkin workspace:
set (COMPILE_WITH_CATKIN NOT CATKIN_DEVEL_PREFIX)
This works fine in a catkin workspace and with cmake only, but there are problems when building on Jenkins, when the CATKIN_DEVEL_PREFIX is not defined, but catkin still needs to be used (especially catkin_package()). So for building on jenkins, building with catkin has to be enforced. I currently have this work-around:
set (ENFORCE_CATKIN false) # set this to true for jenkins builds ONLY
set (COMPILE_WITH_CATKIN ENFORCE_CATKIN OR CATKIN_DEVEL_PREFIX)
Before releasing the package with bloom, I manually set ENFORCE_CATKIN to true. Afterwards, for my source repository, I re-set it to false.
This is certainly not the prettiest solution. Is there a better way to detect when catkin commands would lead to errors (eg. catkin_package() command not found)?
Thanks for your help,
Jenny
Originally posted by JenniferBuehler on ROS Answers with karma: 1 on 2016-06-01
Post score: 0
Answer:
I am not sure about your use case, but in CMake you can try to find catkin and then continue based on whether it was found or not. Like this:
find_package(catkin) # not passing REQUIRED
if(catkin_FOUND)
# maybe call find_package again with COMPONENTS
..
else()
..
endif()
You can also mix this with a custom option like COMPILE_WITH_CATKIN.
Originally posted by Dirk Thomas with karma: 16276 on 2016-06-01
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by JenniferBuehler on 2016-06-01:
Hi Dirk,
thanks for your quick reply.
I have tried this approach, however when compiling with cmake only, catkin is still found if it's installed...
I'm basically looking for a way to detect when compilation is happening from within a catkin workspace/jenkins.
Comment by JenniferBuehler on 2016-06-01:
The use case is a couple of packages which should compile without catkin as well. They are general packages which are also used in projects outside ROS. I'd like to be able to put the package in a catkin workspace to compile as normal, and also be able to build with cmake separately.
Comment by Dirk Thomas on 2016-06-01:
There is no distinguishing variable or anything else which lets you decide how CMake was being invoked. You could invert the logic and pass COMPILE_WITHOUT_CATKIN if catkin is on your CMAKE_PREFIX_PATH but you don't want to use it.
Comment by JenniferBuehler on 2016-06-01:
That is also a good idea, thanks! Will try this next time. For now, my approach works, I just wanted to know if there's a nicer way to achieve it. Thanks for your help!
Comment by Dirk Thomas on 2016-06-01:
We can't provide a public way (without authentication and authorization) to trigger builds on the farm. That would very likely result in abuse. | {
"domain": "robotics.stackexchange",
"id": 24778,
"tags": "catkin, jenkins"
} |
What is the difference between a desiccant and a deliquescent? | Question: I understand that both a desiccating solid and a deliquescent solid are "hygroscopic" in the sense that they bind to water.
But what is the difference? And what are the applications of a desiccant vs a deliquescent?
Answer: A deliquescent solid will eventually dissolve in the water it has absorbed, as, for example, calcium chloride does. Molecular sieves are very good desiccants but they do not dissolve in water, so they are not deliquescent. As you can probably guess, operating with desiccants that are deliquescent may be more troublesome than with non-deliquescent ones, but it doesn't mean that they are inferior.
The choice of a proper drying agent depends on the material you want to dry. For example, non-deliquescent desiccants, like aluminium oxide, silica gel or molecular sieves, are best for filling columns for solvent drying, because they won't clog when their drying capacity is exceeded. Deliquescent drying agents are perfect for use in desiccators, because it is easy to remove them when they are exhausted. | {
"domain": "chemistry.stackexchange",
"id": 6235,
"tags": "everyday-chemistry"
} |
Use the commutation relation between the position operator $\hat X$ and the momentum operator $\hat P_x$ to show the given equivalence relation | Question: I am attempting to prove the following relation
$\frac 1 2$$(\hat X^2 \hat P_x+\hat P_x \hat X^2)$ = $\hat X \hat P_x \hat X$
My attempt: $\hat X=x$ , $\hat P_x=-ih\frac d {dx}$
I computed the commutator relation: $[\hat X,\hat P_x] = ih$
I'm unsure how I could use the commutator relation here.
Using the given values one should obtain: $\frac 1 2$$(x^2*-ih\frac d {dx}--ih\frac d {dx}*x^2)$ = $x*ih\frac d {dx}*x$
Which seems trivial, and this yields: $\frac 1 2$$(x^2*-ih\frac d {dx}-2xih)$=$-xih$
Which could be written as $\frac 1 2$$(x^2*-ih\frac d {dx})$ = $0$.
I have clearly gone wrong here and I'm not sure where. I should really apply the commutator relation somewhere or perhaps use the argument that $[\hat X^n, \hat P_x]=ihn\hat X^{n-1}$
Answer: It's not true that $$-i\hbar \frac{d}{dx} * x^2 = -2i\hbar x$$
That is because $*$ should be understood as multiplication of operators, and not as evaluating one operator on another.
The proper way to calculate this product is to act on some wavefunction:
$$ (\hat P_x \hat X^2 \psi)(x) = -i\hbar \frac{d}{dx} (x^2\psi(x)) = -i\hbar(2x\psi(x) + x^2\frac{d}{dx}\psi(x)) = ((-2i\hbar \hat X + \hat X^2\hat P_x)\psi)(x)$$
Since it works for any function $\psi$, it means that
$$ \hat P_x \hat X^2 = -2i\hbar \hat X + \hat X^2\hat P_x $$
Similarly
$$ \hat X\hat P_x \hat X = -i\hbar \hat X + \hat X^2\hat P_x $$
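As a sanity check, the target relation can also be verified symbolically on an arbitrary test function, e.g. with sympy (a sketch; psi stands for a generic wavefunction):

```python
import sympy as sp

x, hbar = sp.symbols('x hbar', positive=True)
psi = sp.Function('psi')(x)

def P(f):
    # momentum operator: -i*hbar d/dx
    return -sp.I * hbar * sp.diff(f, x)

def X(f):
    # position operator: multiplication by x
    return x * f

# (1/2)(X^2 P + P X^2) psi  versus  X P X psi
lhs = sp.expand((X(X(P(psi))) + P(X(X(psi)))) / 2)
rhs = sp.expand(X(P(X(psi))))
print(sp.simplify(lhs - rhs))  # expected: 0
```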
It is also possible to prove the desired relation without using any representation of the operators. We have
$$ \frac12(\hat X^2 \hat P_x + \hat P_x \hat X^2) - \hat X\hat P_x\hat X = \frac12(\hat X^2 \hat P_x-\hat X\hat P_x\hat X)+\frac12(\hat P_x \hat X^2-\hat X\hat P_x\hat X) = \\
= \frac12 \hat X(\hat X\hat P_x - \hat P_x\hat X) + \frac12(\hat P_x \hat X-\hat X\hat P_x)\hat X =\\
= \frac12 \hat X * i\hbar + \frac12 (-i\hbar) *\hat X = 0$$ | {
"domain": "physics.stackexchange",
"id": 65175,
"tags": "quantum-mechanics, homework-and-exercises, operators, momentum, commutator"
} |
What is the algebraic result of the matrix exponential operation $e^{i A}|b\rangle$? | Question: The circuit for the HHL algorithm looks as follows:
I am uncertain what the algebraic operation of the matrix exponential $e^{i A}$ on $|b\rangle$ is.
If $$|b\rangle = b_0|0\rangle + b_1|1\rangle$$ and $$A=\begin{bmatrix} a_{00} & a_{01} \\ a_{10} & a_{11} \end{bmatrix}$$ does this imply that
$$e^{i A}|b\rangle$$
$$=\left[\cos{\left( \begin{bmatrix} a_{00} & a_{01} \\ a_{10} & a_{11} \end{bmatrix}\right)} + i \sin{\left( \begin{bmatrix} a_{00} & a_{01} \\ a_{10} & a_{11} \end{bmatrix}\right)}\right] (b_0|0\rangle + b_1|1\rangle)$$
$$=\begin{bmatrix} \cos a_{00} + i\sin a_{00} & \cos a_{01} + i\sin a_{01} \\ \cos a_{10} + i\sin a_{10} & \cos a_{11} + i\sin a_{11} \end{bmatrix} \begin{bmatrix} b_0 \\ b_1 \end{bmatrix}$$
$$=\begin{bmatrix} e^{ia_{00}} & e^{ia_{01}} \\ e^{ia_{10}} & e^{ia_{11}} \end{bmatrix} \begin{bmatrix} b_0 \\ b_1 \end{bmatrix}$$
$$ =\begin{bmatrix} b_0e^{ia_{00}} + b_1e^{ia_{01}} \\ b_0e^{ia_{10}} + b_1e^{ia_{11}} \end{bmatrix}$$
$$ = \color{red}{(b_0e^{ia_{00}} + b_1e^{ia_{01}})}|0\rangle + \color{red}{(b_0e^{ia_{10}} + b_1e^{ia_{11}})}|1\rangle$$
?
Answer: For any normal matrix you can use the spectral decomposition to calculate the matrix exponential.
For normal matrix, it holds $f(A) = \sum f(\lambda_i) |u_i\rangle \langle u_i|$. In your case function $f(x)$ is $e^{ix}$. $\lambda_i$ are eigenvalues and $|u_i\rangle$ respective eigenvectors of matrix $A$.
Since in HHL we assume that the matrix is Hermitian, it is also normal, and we can use the approach described above. | {
"domain": "quantumcomputing.stackexchange",
"id": 4931,
"tags": "optimization, hhl-algorithm"
} |
Porous media flow equation in terms of fluid compressibility and determination of coefficients? | Question: I'm having trouble understanding an equation and solution method given in literature. The 1D flow equation for "high rate" linear gas flow through a porous medium is given as:
$$\tag{1} p_1-p_2=\Delta p = \frac{\mu L}{k \beta} \left(\frac{w}{A}\right) + \frac{c_f L}{\sqrt{k} \beta} \left(\frac{w}{A}\right)^2$$
where $p_1-p_2=\Delta p$ is the difference in the upstream and downstream pressures over the length of flow $L$; $\mu$ is the gas viscosity, $k$ is the porous medium permeability coefficient, $\beta$ is the gas isothermal compressibility, $w$ is the gas mass flow rate, $A$ is the cross-sectional area normal to the direction of flow, and $c_f$ is a dimensionless coefficient.
We assume that, for multiple values of $\Delta p$ (3 or more), all variables in (1) other than $k$ and $c_f$ are known.
Equation (1) is of the form $y = a_2x^2 + a_1x+c$. It's stated that to solve for $k$ and $c_f$, one plots $\bar p \times \Delta p$ as a function of $(w/A)$ and determines the coefficients $a_2$ and $a_1$ from the best-fit 2nd-order polynomial to the data. From these $a_2$ and $a_1$ values, $k$ and $c_f$ can be determined. For the product $\bar p \times \Delta p$, $\bar p$ is the "average" pressure over the length of flow, taken as $=(p_1 + p_2)/2$.
I am having trouble seeing how (1) was derived, specifically using isothermal compressibility $\beta$ of the gas, and why the product $\bar p \times \Delta p$ is used in the plotting/solution methodology for solving for $k$ and $c_f$.
What I've tried: Starting with
$$\tag{2} -\frac{dp}{dx}=\frac{\mu}{k}v+\frac{c_f}{\sqrt{k}}\rho v^2$$
where $v=q/A=w/(\rho A)$; $v$=superficial velocity, $q$=volumetric flow rate, $\rho$=fluid (gas) density. Therefore, (2) can be written in terms of mass rate and density:
$$\tag{3} -\frac{d p}{d x}=\frac{\mu}{k}\frac{w}{\rho A}+\frac{c_f \rho}{\sqrt{k}}\left(\frac{w}{\rho A}\right)^2$$
Separating variables and integrating:
$$-\int_{p_1}^{p_2} d p=\int_0^L\frac{\mu}{k}\frac{w}{\rho A}d x+\int_0^L\frac{c_f \rho}{\sqrt{k}}\left(\frac{w}{\rho A}\right)^2 d x$$
Assume $k, A, c_f, w$ are constants, independent of pressure. Pull the presssure-dependent terms $\mu$ and $\rho$ from the integrals and evaluate them at the average pressure $\bar p$. We then have:
$$\Delta p=\frac{\mu}{k}\frac{w}{\rho(\bar p) A}L+\frac{c_f}{\sqrt{k}\rho(\bar p)}\left(\frac{w}{A}\right)^2 L$$
At this point I'm not sure how we can go from here and swap the $1/\rho(\bar p)$ terms with isothermal compressibility $\beta=\frac{1}{\rho} \frac{\partial \rho}{\partial p}$ to obtain Eqn(1), nor do I see the reason why $\bar p\times\Delta p$ is plotted as a function of $w/A$ instead of $\Delta p$ as a function of $w/A$.
Answer: You need to solve the differential equation properly, by approximating $\rho$ as a function of p: $$\rho=\rho(\bar{p})[1+\beta (p-\bar{p})]$$and moving that to the left side of the equation (with the dp/dx). | {
"domain": "physics.stackexchange",
"id": 68431,
"tags": "fluid-dynamics, porous-media"
} |
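The linearization suggested in the answer above can be checked symbolically: writing $\rho=\rho(\bar{p})[1+\beta (p-\bar{p})]$ with $\bar p=(p_1+p_2)/2$, the pressure integral of $\rho$ reduces exactly to $\rho(\bar p)\,\Delta p$, because the $\beta$ term is antisymmetric about $\bar p$. A sketch with sympy (not part of the original answer):

```python
import sympy as sp

p, p1, p2, beta, rho_bar = sp.symbols('p p1 p2 beta rho_bar', positive=True)
pbar = (p1 + p2) / 2

# density linearized about the mean pressure
rho = rho_bar * (1 + beta * (p - pbar))

# integrate rho dp from the downstream pressure p2 up to the upstream p1
I = sp.integrate(rho, (p, p2, p1))

# the beta term integrates to zero, leaving rho(pbar) * (p1 - p2)
assert sp.simplify(I - rho_bar * (p1 - p2)) == 0
```

This is presumably also why $\bar p \times \Delta p$ is plotted: for an ideal gas $\beta \approx 1/\bar p$, so multiplying $\Delta p$ by $\bar p$ absorbs the remaining pressure dependence before fitting against $w/A$.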
Convert network Lidar data to LaserScan | Question:
Hi there!
I am trying to stream data back from my Parrot AR Drone 2.0, which has a LIDAR attached, to my PC (which has ROS running) in order to run Hector Slam.
I currently have code running on the quad which is reading the Lidar (connected to the USB port on the quad) data and streaming this to my PC using netcat. This is all working well and I can see the data coming in on my desktop. I now need a way to listen to this data and convert it into the LaserScan messages that Hector Slam uses.
I am very new to ROS and, although I have learnt a lot already, I do not know how to do this and can't seem to find any similar posts made here. I may well just be searching for the wrong things but any help would be greatly appreciated!
Cheers,
Andy
PC - Ubuntu 14.04
AR Drone - BusyBox I think
Originally posted by andythepandy93 on ROS Answers with karma: 55 on 2016-03-15
Post score: 0
Answer:
Well, you would need to write a driver listening to the network stream and publishing the lidar data as a LaserScan message.
Basically, this is a simple program (ROS node), that reads this data, puts it in a member of the type sensor_msgs::LaserScan, publishes this data, and this is it.
Check out this question here about writing hardware drivers....
Originally posted by mgruhler with karma: 12390 on 2016-03-16
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by andythepandy93 on 2016-03-16:
Great, thanks for the info. Will give that a read
Comment by andythepandy93 on 2016-03-16:
Is there a way to listen to stdin in ROS or somehow otherwise pipe into a node, so that I can let netcat deal with receiving the data and just let the node convert it into the LaserScan format? | {
"domain": "robotics.stackexchange",
"id": 24122,
"tags": "slam, navigation, hector, scan, network"
} |
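A driver along these lines has two halves: parsing the netcat stream and republishing it as sensor_msgs/LaserScan. The parsing half can be sketched in plain Python; the framing (one comma-separated list of ranges per line) and the angle limits below are assumptions about the lidar, not known properties of it:

```python
def parse_scan(line, angle_min=-2.0944, angle_max=2.0944):
    """Turn one streamed scan line into the fields a LaserScan needs.

    Assumes one comma-separated list of ranges (metres) per line; the
    angle limits above are a guessed 240-degree field of view.
    """
    ranges = [float(v) for v in line.strip().split(',') if v]
    n = len(ranges)
    return {
        'angle_min': angle_min,
        'angle_max': angle_max,
        'angle_increment': (angle_max - angle_min) / (n - 1) if n > 1 else 0.0,
        'ranges': ranges,
    }

scan = parse_scan('0.52,0.61,0.70\n')
```

In the node itself you would loop over sys.stdin, copy these fields into a sensor_msgs.msg.LaserScan, stamp it, and publish it. Since rosrun simply executes the node, piping netcat's output into it (as asked in the comments) should work, e.g. something like `nc -l 5000 | rosrun yourpkg driver.py` (port and package names are placeholders).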
Check whether signal is periodic | Question: I have a signal with multiple frequencies. The frequency changes depending on the amount of air flowing through the fuel snap. The signal resembles a square wave ranging from $0.4$ to $4.5$ volts; it was measured over a duration of $30$ seconds, resulting in $1,200,00$ values. I am attempting to analyze the signal, but first I need to determine, using MATLAB, whether or not it is periodic.
I analyzed the autocorrelation of the MAF signal based on an answer provided. However, does this mean it is not periodic? The autocorrelation decays and eventually vanishes towards the ends.
I used the FFT to identify the frequencies present in the signal. Does each peak magnitude belong to a different frequency component, and how can I determine the exact time or the period at which each frequency started?
the fft:
Answer: Computing the autocorrelation function is a robust approach to determine periodicity: the degree of periodicity will be indicated by peaks at offsets $\tau$ in the autocorrelation function. The FFT alone can be used to indicate periodicity of individual tones, but the autocorrelation function can reveal with better fidelity the periodicity of any function including noise like functions or functions that may be buried in noise.
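For a signal like the asker's, this check can be sketched in a few lines (numpy here rather than MATLAB; the square-wave frequency and noise level are made up for illustration). The autocorrelation of a noisy square wave peaks at integer multiples of the period:

```python
import numpy as np

fs = 1000                                  # assumed sample rate, Hz
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(0)

# 25 Hz square wave (period = 40 samples) buried in noise
sig = np.sign(np.sin(2 * np.pi * 25 * t)) + 0.5 * rng.normal(size=t.size)

x = sig - sig.mean()
acf = np.correlate(x, x, mode='full')[x.size - 1:]
acf = acf / acf[0]

# the first strong peak away from zero lag sits near one period (40 samples)
lag = 1 + int(np.argmax(acf[1:200]))
```

A non-periodic (pure noise) signal would instead show only the single peak at zero lag.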
I show an example of this where I synthesized a received GPS signal down-converted to baseband together with the dominant receiver noise. The signal appears as additive white Gaussian noise (AWGN) in both the time and frequency domains, given the low level of the GPS signal received. The autocorrelation function reveals the 1 ms periodicity of the GPS C/A code used, while we can't distinguish this in the frequency domain. If the captured signal were truly AWGN only, we would only get the single central peak.
A zoom in on the frequency spectrum for this signal (as determined from an FFT) is given below:
The 1 ms periodicity would appear as harmonics spaced at 1 kHz in the frequency spectrum if visible. We see in this case that it is very difficult to distinguish the 1 kHz harmonics from the noise, yet in comparison we were able to see evidence of the 1 ms periodicity much more clearly from the autocorrelation function.
If we repeat the above plots with the GPS signal only and no noise, we get the following result where the periodicity is visible with both approaches:
Each peak in the FFT is a complex number. The location of the peak magnitude on the frequency axis indicates the frequency that was detected, with the relative magnitudes of the peaks indicating the strength of each component. Often the components will be integer harmonics of what can be described as one periodic waveform: any periodic waveform that is not a pure sinusoid or single exponential will have integer harmonics consistent with the Fourier series. The phase of each peak will reveal the relative phase of each component, but this will not be accurate if the frequency component is not perfectly aligned with the center of an FFT bin. | {
"domain": "dsp.stackexchange",
"id": 12125,
"tags": "matlab, signal-analysis, power-spectral-density, preprocessing"
} |
Find an integer such that linear equation becomes divisible by another given integer | Question: Given integers a, b, and c, all <= n, is there an efficient algorithm to find an integer y such that a | c + b*y? One brute-force approach is to check every y from 0 to n; if a solution exists, there must be one within this range. That takes O(n) time.
Answer: Yes. You are asking how to solve a linear congruence $by \equiv -c \pmod a$. The solution is to compute the modular inverse of $b$ modulo $a$ using the extended Euclidean algorithm, multiply both sides of the equation by that modular inverse, and then read off the solution. This can be done in $O(\log^2 n)$ time. | {
"domain": "cs.stackexchange",
"id": 19201,
"tags": "algorithms, time-complexity, asymptotics"
} |
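A direct implementation of that recipe (a sketch in Python; the same extended-Euclid pass also covers the non-coprime case, where a solution exists iff gcd(a, b) divides c):

```python
def solve_congruence(a, b, c):
    """Find y in [0, a) with a | (c + b*y), i.e. b*y = -c (mod a), for a > 0.

    Uses the extended Euclidean algorithm, so it runs in O(log^2 n)
    bit operations; returns None when no solution exists.
    """
    def ext_gcd(x, y):
        # returns (g, u, v) with u*x + v*y == g == gcd(x, y)
        if y == 0:
            return x, 1, 0
        g, u, v = ext_gcd(y, x % y)
        return g, v, u - (x // y) * v

    g, u, _ = ext_gcd(b % a, a)     # b*u = g (mod a)
    if c % g != 0:
        return None                  # g = gcd(a, b) must divide c
    # scale both sides of b*u = g (mod a) by (-c)/g
    return (u * (-c // g)) % a
```

For example, solve_congruence(12, 8, 20) finds a y with 12 | 20 + 8*y, while solve_congruence(4, 2, 1) returns None since gcd(4, 2) = 2 does not divide 1.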
NFA and DFA implementation | Question: Can you review my F# code and point out some insights about it?
What I want to know, in order of relevance:
Avoid so much common code between DFA and NFA. I want to make something more generic, there is much code in common there.
Make it more F# idiomatic
Performance is not a concern. Readability is.
module DFA =
type DeterministicFiniteAutomaton = {
InitialState: string
FinalStates: Set<string>
Transitions: Map<string * char, string>
}
let private nextState (symbol:char) (state:string) (transitions:Map<string * char, string>) =
transitions |> Map.tryFind (state, symbol)
let rec private haltState (input:string) (index:int) (state:string) (transitions:Map<string * char, string>) =
match index with
| i when i = input.Length -> state
| _ ->
match nextState input.[index] state transitions with
| None -> null
| Some state -> haltState input (index+1) state transitions
let accepts (input:string) (dfa:DeterministicFiniteAutomaton) =
dfa.FinalStates |> Set.contains (haltState input 0 dfa.InitialState dfa.Transitions)
module NFA =
type NondeterministicFiniteAutomaton = {
InitialState: string
FinalStates: Set<string>
Transitions: Map<string * char, string List>
}
let private nextState (symbol:char) (state:string) (transitions:Map<string * char, string List>) =
transitions |> Map.tryFind (state, symbol)
let rec private haltStates (input:string) (index:int) (state:string) (transitions:Map<string * char, string List>) =
match index with
| i when i = input.Length -> Seq.singleton state
| _ ->
match nextState input.[index] state transitions with
| None -> Seq.empty
| Some states ->
states |> Seq.collect (fun state ->
haltStates input (index+1) state transitions)
let accepts (input:string) (nfa:NondeterministicFiniteAutomaton) =
haltStates input 0 nfa.InitialState nfa.Transitions
|> Set.ofSeq
|> Set.intersect nfa.FinalStates
|> Set.count > 0
Answer: A quick look shows the only difference with your FiniteAutomaton types is that the transition either takes a string or string List. So genericize it!
type FiniteAutomaton<'a> = {
InitialState: string
FinalStates: Set<string>
Transitions: Map<string * char, 'a>
}
type DeterministicFiniteAutomaton = FiniteAutomaton<string>
type NondeterministicFiniteAutomaton = FiniteAutomaton<string List>
The rest pretty much falls in place:
let private nextState symbol state fa =
fa.Transitions |> Map.tryFind (state, symbol)
let rec private haltState (input:string) index state fa =
match index with
| i when i = input.Length -> state
| _ ->
match nextState input.[index] state fa with
| None -> null
| Some state -> haltState input (index+1) state fa
let accepts input fa =
fa.FinalStates |> Set.contains (haltState input 0 fa.InitialState fa)
I try to remove as many type annotations as possible. Just let F#'s type inference do its magic. Also, it was convenient to pass around the fa instead of fa.Transitions. Finally, the code compiles, but I have no idea if it works.
Edit:
If you want to be totally generic you can do this:
type FiniteAutomaton<'STATE, 'TOKEN when 'STATE:comparison and 'TOKEN:comparison> = {
InitialState: 'STATE
FinalStates: Set<'STATE>
Transitions: Map<'STATE * 'TOKEN, 'STATE List>
}
Also 'STATE should be a 'TOKEN List. | {
"domain": "codereview.stackexchange",
"id": 19923,
"tags": "f#, state-machine"
} |
What is the meaning behind the neutrino oscillation parameter? | Question: As far as I can tell, there are 6 parameters that describe how a neutrino oscillates: 2 mass squared differences, 3 mixing angles and another parameter I don't understand at all (delta). Thus I have three questions:
I understand that the 2 mass differences are enough information for the three masses, but we can't yet measure the actual mass. Is this limit due to current technology, or is the mass squared difference the most fundamental information about the mass we could possibly acquire?
What exactly do the mixing angles physically represent? Are they all of the same importance, and if so why has the theta-13 angle been studied so relatively little? I have also read that a non-zero theta-13 angle hints towards an asymmetry between matter and anti-matter. How so, and why the 13 angle in particular over the other angles?
What is the delta parameter? I know that current technology does not have the capacity to measure it, but the next generation (hopefully) will. What does this parameter represent and what implications would measuring it have?
Answer:
The actual masses are accessible in theory, but not from mixing measurements. Cosmological measurements could give us a useable handle on the sum of the masses (though until we settle the hierarchy questions this may not provide a unique answer), or the combination of a much better model of supernovae plus a precision measurement of the differences in arrival times of the light and neutrino wave-fronts from a supernova whose distance is well known could give the masses directly.
First, your information is out of date, $\theta_{13}$ is now the most precisely known of all the mixing angles. Go Daya Bay, Reno and Double Chooz!1 Now, what they represent is a bit abstract. Hmmm...they are angles but in an obscure mathematical space. Taken together they fully specify the flavor content of the mass states or the mass content of the flavor states. If that doesn't have any meaning to you, you need to study quantum mechanics to get the full story. In the mean time, you can think of them as parameters in a complicated trigonometric expression that explains how strongly the flavors mix in terms of distance between production and detection and the energy of the neutrino.
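To make the role of these parameters concrete, in the simplest two-flavour approximation a single mixing angle and one mass-squared difference set the oscillation probability via the standard formula $P = \sin^2(2\theta)\sin^2(1.27\,\Delta m^2 L/E)$, with $\Delta m^2$ in eV$^2$, $L$ in km and $E$ in GeV. A sketch with purely illustrative numbers:

```python
import numpy as np

def p_transition(theta, dm2, L_km, E_GeV):
    """Two-flavour appearance probability P(nu_a -> nu_b)."""
    return np.sin(2 * theta) ** 2 * np.sin(1.27 * dm2 * L_km / E_GeV) ** 2

# numbers of roughly the right order for theta_13-driven reactor oscillations
theta, dm2 = 0.15, 2.5e-3                      # radians, eV^2
p = p_transition(theta, dm2, L_km=1.0, E_GeV=0.004)
```

The mixing angle sets the amplitude $\sin^2(2\theta)$ of the flavour change, while the mass-squared difference sets how rapidly it oscillates with $L/E$.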
Finally $\delta_{CP}$. If $\theta_{13}$ is non-zero (it is) and $\delta_{CP}$ is neither zero nor $\pi$, then it is possible for neutrino mixing to fail to observe the symmetry called "CP". CP symmetry is the assertion that the laws of physics look the same if you both (a) change all the matter particles in the system to anti-matter and (b) reflect the system through a point. CP is good in most systems (in all the systems you will encounter in everyday life), but it is already known to be violated in some flavor-violating quark decays. The thing is that we think CP violation might explain why the universe we see today is all matter when we believe it started out half matter and half anti-matter. Only the known sources of CP violation don't seem to be enough, so finding another source would make a large class of cosmological theorists very happy.
1 Full disclosure, I was a part of Double Chooz. | {
"domain": "physics.stackexchange",
"id": 13219,
"tags": "particle-physics, mass, standard-model, neutrinos, beyond-the-standard-model"
} |
Time synchronization errors with multiple computers | Question:
I am experiencing a string of time synchronization errors that I am struggling with.
I have two different computers, each of which is streaming data from a different sensor. I have set the time update on each computer to look at a common time server (sudo ntpdate {ip.address}).
Computer A runs a Hokuyo, the roscore, a robot urdf, amcl, and a processing node (A_p) for the hokuyo data.
Computer B runs a Kinect, two processing nodes accessing the kinect (B_p), a node (C_p) fusing data from the kinect and the processing node (A_p), and another node fusing information from all of the nodes (A_p, B_p, and C_p) on computer A and B, and transforming their locations into map space (generated by amcl).
This configuration generates synchronization errors, even though everything uses ros::Time::now() to retrieve time. When fusing data in node B_p, I get time synchronization errors:
terminate called after throwing an instance of 'tf::ExtrapolationException
what(): Lookup would require extrapolation into the past. Requested time 1362173375.685855810 but the earliest data is at time 1362173375.907669363, when looking up transform from frame [/robot_tf/back_laser] to frame [/robot_tf/base_link]
and when transforming data into /map space, I get transformation errors that I am assuming are also time synchronization errors:
terminate called after throwing an instance of 'tf::LookupException'
what(): Frame id /robot_tf/base_link does not exist! Frames (6): Frame /robot_tf/camera2_depth_optical_frame exists with parent /robot_tf/camera2_depth_frame.
All of these processes work when on the same computer, but spread across 2 computers, they eventually fail... and eventually isn't very long. Can I get rid of these errors entirely? Barring that, how do I ignore these errors? Do I need to write my own exception handler, or is there something easy in ros already?
Thank you,
Originally posted by ebeowulf on ROS Answers with karma: 100 on 2013-03-01
Post score: 0
Answer:
I'm not sure if this is what's producing the error, but:
When dealing with message data and tf, I avoid using ros::Time::now() whenever possible. Instead, use the timestamp of each message.
Originally posted by Ivan Dryanovski with karma: 4954 on 2013-03-01
This answer was ACCEPTED on the original site
Post score: 5 | {
"domain": "robotics.stackexchange",
"id": 13145,
"tags": "ros"
} |
Handling various types of nodes when traversing a DOM tree | Question: How can I minimize the following code using Java's features? I am looking for a workaround for the switch-case statement.
I've seen several questions regarding switch-case design patterns / best practices, but because I am new to Java, I am having difficulties implementing them.
Here is the code:
protected void readTree(Node node, Branch current)
{
Element element = null;
Document document = null;
if(current instanceof Element)
{
element = (Element) current;
}
else
{
document = (Document) current;
}
String nodeVal = node.getNodeValue();
String nodeName = node.getNodeName();
switch(node.getNodeType())
{
case ELEMENT_NODE:
readElement(node, current);
break;
case PROCESSING_INSTRUCTION_NODE:
if(current instanceof Element)
{
element.addProcessingInstruction(nodeName, nodeVal);
break;
}
document.addProcessingInstruction(nodeName, nodeVal);
break;
case COMMENT_NODE:
if(current instanceof Element)
{
element.addComment(nodeVal);
break;
}
document.addComment(nodeVal);
break;
case DOCUMENT_TYPE_NODE:
DocumentType domDocType = (DocumentType) node;
document.addDocType(domDocType.getName(), domDocType.getPublicId(), domDocType.getSystemId());
break;
case TEXT_NODE:
element.addText(nodeVal);
break;
case CDATA_SECTION_NODE:
element.addCDATA(nodeVal);
break;
case ENTITY_REFERENCE_NODE:
if(node.getFirstChild() != null)
{
element.addEntity(nodeName, node.getFirstChild().getNodeValue());
break;
}
element.addEntity(nodeName, "");
break;
case ENTITY_NODE:
element.addEntity(nodeName, nodeVal);
break;
default:
System.out.println("WARNING: Unknown node type: " + node.getNodeType());
}
}
Answer: I don't really like this:
case ENTITY_REFERENCE_NODE:
if(node.getFirstChild() != null)
{
element.addEntity(nodeName, node.getFirstChild().getNodeValue());
break;
}
element.addEntity(nodeName, "");
break;
Idiomatically, more than one break in a case is confusing. You should either refactor to an if...else:
if(node.getFirstChild() != null) {
element.addEntity(nodeName, node.getFirstChild().getNodeValue());
}
else {
element.addEntity(nodeName, "");
}
break;
or (I prefer this, by the way) create a method that does this task for you:
case ENTITY_REFERENCE_NODE:
addNodeChildValue(node);
break;
In my experience, switch statements are easiest to read when you minimize the number of tasks done in the cases. That's not to say you should only do one, at most two tasks (or some other arbitrary number). You should look for tasks to extract to methods or code that is repeated in several cases that should not be part of the switch. | {
"domain": "codereview.stackexchange",
"id": 1513,
"tags": "java, parsing, xml, dom"
} |
Thermodynamically, how did the first cell arise? | Question: Living cells are biochemical systems that constantly perform chemical reactions. One of the important consequences of these chemical reactions is the capacity of a living cell to replicate itself. The daughter cells will also constantly perform chemical reactions.
From an evolutionary perspective, new cells always arise from old ones via cell division. The first cell on primordial Earth must have been readily performing chemical reactions in order to replicate. A raft of biochemicals must have for some reason aggregated to form the first cell and begun chemical reactions.
An important effect of highly orchestrated chemical reactions inside a living cell is that biological order is maintained, at the expense of increased entropy in its surroundings. However, this observation is an effect of cellular organization and activities, not the cause.
I feel thermodynamics is important in explaining how a raft of biochemicals came together in the first place to organize the earliest cell, and how these biochemicals started to engage in highly orchestrated chemical reactions. Then, what is the exact theory?
Answer: The first amino acids
For how life arose from no life, the Miller-Urey Experiment demonstrates how in primordial Earth conditions, a spark in the atmosphere (analogous to lightning) could have initiated synthesis of the many of the same amino acids (and amines) life uses today.
My understanding was that the spark broke some bonds between the gases (and water vapor) in the air, allowing atoms to re-bind in new combinations to make amino acids.
Here’s a 25-Page Biographical Memoir from the NAS on Stanley Miller; page 3 is where they start discussing this experiment. Miller published this experiment in the early 1950’s, in which he identified 5 amino acids. However, NASA and others revisited the same experiment and discovered Miller under-reported the amino acids present, a result of the limited equipment available at the time. They have found 14 amino acids and 5 amines. Miller’s vials from an unpublished experiment of similar nature produced 22 amino acids.
In addition, amino acids have been found on meteorites, suggesting that they are capable of spontaneous synthesis.
The first replicating life
There is a lot less known about this, as we’re still researching it now. Not only this, but there are currently several different going theories.
Keep in mind, the Central Dogma of molecular biology describes genetic flow as DNA → RNA → protein.
The reason why there are different theories (and so many models within them) around the chemistry-to-biology (CTB) transition does not lie in which monomer is more likely to synthesize spontaneously, rather it is almost literally a chicken-and-the-egg situation. Everything is a precursor for everything.
Proteins are made by amino acids, which are polymerized by metabolic processes. Proteins are made by other proteins, and directed/coded for by RNA.
RNA is made by the monomer building blocks of ribose (sugar), phosphate, and a base (A, U, G, C). The bases are biosynthesized from metabolic pathways. RNA is elongated, and read by proteins.
Metabolism is driven by proteins and largely by ATP, which contains the nucleobase adenosine found in RNA (& DNA).
Protein first
Since amino acids are the monomers of proteins, it is a likely (and simple) possibility that a spontaneous reaction bound amino acids in the right order and configuration to create the first protein that enabled life. This protein’s function could likely be to polymerize/elongate amino acid chains into proteins.
This theory is supported and demonstrated by this study, which suggests that the first “protein” catalyzed the elongation and proper folding of other amino acid chains, including creating more copies of itself. 0.3% of the possible permutations of monomer sequences give a “protein” the elongation capability described, which though small, is an exciting possibility considering the age of the Earth and timeline of life. What’s more, polypeptide formation does not require as much precision as RNA synthesis, as speculated characteristics of proteins and amino acids comport with the computer modeling shown by the study.
Though this theory seems to contradict the central dogma, we must keep in mind that thermodynamically the reactions occurring at the origins of life were spontaneous, so the “pattern” of genetic flow present in most life might not be a direct indicator of “what came first.” This is supported by the existence of prions (pathogenic proteins), and the fact that simple organisms don’t always use mRNA to synthesize proteins (alternatively, they use “nonribosomal peptide synthetases”).
The limitation of protein-first models is that proteins don’t really “work backward” into RNA. Though nucleic acids contain components of amino acids from their biosynthesis, they are pretty different molecules.
Some models suggest this protein-first theory, AS WELL as an RNA-first theory (see below). The idea is that self-replication of biological molecules started with proteins, but nucleic acid-based “life” also evolved, separately. RNA self-replication probably took longer to reach, but outcompeted protein “life,” explaining today’s life as well as providing an explanation for the above limitation.
Organisms can “uptake” other organisms and incorporate them into their own cells. 2 commonly-cited widespread examples include organelles mitochondria and chloroplasts, both thought to originally be independent prokaryotes themselves.
RNA first (aka The RNA World Hypothesis)
Some theories suggest RNA came before functional proteins. In fact, this has been the de facto theory, because we had no mechanism to explain how proteins could self-replicate - something that has recently been demonstrated. In addition, this follows the central dogma, and would explain not only the development protein synthesis but also genetic information storage (the latter of which cannot be directly explained by the protein-first theory alone).
An idea behind this is that some RNA was able to auto-synthesize the first RNA-replicating enzymes. This sounds like a catch-22, but this study found that some RNA strands, when joined, can self-create an enzyme that replicates itself; the literal chemical makeup of this enzyme is RNA oligonucleotides.
However, these RNA fragments in the lab have been taken from existing ribozymes, so they may not be an accurate indicator for the first, spontaneously-created enzymes.
Limitations of an RNA-only model stem from the fact that currently known RNA synthesis/replication processes are detailed, intricate, and fragile. The chances of spontaneous reactions alone creating this process seem minuscule (though as mentioned earlier life took a LONG time to develop).
Until recently, another cited limitation concerned the fact that no known organisms replicate RNA. However, some viruses manage to code for RNA-replicating proteins, and (as mentioned earlier) RNA’s capability of self-replication has been recently discovered, be it in a lab setting.
Other theories
The last main debated theory concerns metabolism. This idea suggests metabolic processes occurred before the first life form, since many metabolic processes known today synthesize amino acids and nucleic acids from chemicals in our diet (how our body make the building blocks).
However, such a huge part of what makes metabolism today possible is driven by enzymes, which are proteins. Because of this, all of these processes would have to be dramatically slowed down in rate to mimic spontaneous reactions in primordial conditions, and on top of that there would have to be an explanation for how energetically unfavorable reactions in metabolism would occur. Most of these energetically unfavorable reactions in current metabolism harvest the potential energy in ATP, which longhand is adenosine triphosphate. The molecule adenosine also appears in nucleic acids (its base, adenine, is one of the bases), which at some level inherently contributes towards the RNA World Hypothesis.
Primordial amino acids (that have not been retained in today’s life) serving as the hidden first building blocks of life is not an uncommon theory. However, this concept rarely stands alone - it typically contributes to either the protein-first or RNA-first theories.
Proto-RNA theories propose an unknown “stepping stone” molecule like RNA. Evidence of this inherently will be limited.
Some ideas cite a meteor or space rock on which life was present collided with Earth, transferring life to Earth.
Once self-replication AND translation (RNA-protein) have been established, Darwinian evolution can explain the rest.
Additional resources
How Structure Arose in the Primordial Soup (Quantamagazine)
Life’s First Molecule Was Protein, Not RNA, New Model Suggests (Quantamagazine) | {
"domain": "biology.stackexchange",
"id": 11505,
"tags": "biochemistry, cell-biology, dna-replication, thermodynamics"
} |
Need an intermediate resistivity part/material | Question: I need a part or material for a planned experiment (the experiment is similar to those described in my articles http://arxiv.org/abs/1208.0066 and http://arxiv.org/abs/1109.1626 ). The problem is that the required resistivity (about 0.3 Ohm-cm or of the same order of magnitude) is much higher than that of metals and much lower than that of dielectrics. Eventually, I need a long cylindrical part, about 1.5 mm diameter and about 1 m length. So far I have considered semiconductors, conducting polymers, and absorbing materials of http://www.eccosorb.com/Collateral/Documents/English-US/Electrical%20Parameters/ls%20parameters.pdf . The latter materials seem good, but they are essentially foams, and the required part cannot be machined from them. As for semiconductors and conductive polymers, I don't have a clear idea how to get (to order) a material with the required resistivity and how to make (to order) the required part. I need the above resistivity at a frequency of about 25 GHz, so, in principle, I could use a nonconductive, but absorbing (at the required frequency) material, but I would prefer a material that is conducting for direct current as well, to be able to measure the absorbed power. I would prefer a material with decent mechanical properties, so that I could, e.g., strain (tighten) the cylindrical part.
Any advice?
EDIT (02/02/2014): I have finally obtained the required parts. They are made of doped polysilicon. I am grateful for the answers.
Answer: This is a good challenge!
Here is maybe a solution:
Ordering a 1 meter long Si rod with the correct doping level.
It seems they are already able to make 2 mm diameter rods of 1 meter long out of pure silicon.
(http://www.goodfellow.com/catalogue/GFCat4J.php?ewd_token=5IQzQHXALuzqcQFMm0Hg7C5opN6zxs&n=O76P2NzWunxfCV3bOzRYzd6IvAfLFt) | {
"domain": "physics.stackexchange",
"id": 8557,
"tags": "electricity, experimental-physics, classical-mechanics, material-science, semiconductor-physics"
} |
Finding an optimal topological ordering | Question: I have some jobs, which calculate values. Some of these jobs require the calculated values of other jobs for their own calculation. An execution plan for these jobs can be found with a topological sort on the dependency graph.
However, these calculated values are huge and occupy a large chunk of memory. I want to find a topological order that minimizes memory usage.
To give a simple example, consider the following dependency graph:
There are three possible topological orders: $1\, 2\, 3\, 4$, $1\, 3\, 2\, 4$, and $3\, 1\, 2\, 4$. Let's look at each of them.
In the first case, the values of $1$ and $3$ can be discarded immediately after $2$ and $4$ have finished, respectively, so each of them must be kept in memory for one additional step. The value of $2$ has to be kept for two additional steps until $4$ has finished. So in total we have to keep $4$ results in memory over the whole execution.
In the second case the values of $1$ and $3$ have to be kept for two steps and the value of $2$ for one step. This makes $5$ in total, so this ordering is worse in terms of memory usage.
In the third case the value of $3$ has to be kept for three steps and the values of $1$ and $2$ for one step each. Again we have a total of $5$, so this ordering is also worse than the first one. Thus the first ordering should be chosen.
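The bookkeeping above can be automated; a small sketch (the function name and representation are mine):

```python
def memory_cost(order, consumers):
    """Total number of extra steps results must be kept in memory.

    consumers[j] is the set of jobs that read job j's value; j's result
    must stay in memory until its last consumer has finished.
    """
    pos = {job: i for i, job in enumerate(order)}
    return sum(max(pos[c] for c in cons) - pos[j]
               for j, cons in consumers.items() if cons)

# Dependency graph from the example: 2 needs 1, and 4 needs 2 and 3.
consumers = {1: {2}, 2: {4}, 3: {4}, 4: set()}
print(memory_cost([1, 2, 3, 4], consumers))  # 4
print(memory_cost([1, 3, 2, 4], consumers))  # 5
print(memory_cost([3, 1, 2, 4], consumers))  # 5
```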
Is there an efficient algorithm to find an optimal ordering without needing to inspect all possible orderings?
Bonus: is it also possible to give an optimal solution, if the values would need different amounts of memory per job?
Answer: This problem is called the Minimum Linear Arrangement of a Directed Graph and it is indeed NP-hard.
See the reduction from the MLA to MLAD on page 11 of this (old) technical report: S. Even and Y. Shiloah, NP-completeness of several arrangement problems, Technical Report no. 43 of the Department of Computer Science, Israel Institute of Technology (Technion), 1975 (PDF). | {
"domain": "cs.stackexchange",
"id": 7100,
"tags": "algorithms, graphs"
} |
Is my use of CSS inheritance clean enough for these sprites? | Question: I have a set of links with background images on this CodePen.
I am seeking feedback to see if I did this in an optimal way. For example, I made a sprite so that I only have to load one image. I also leveraged inheritance so that I did not have to assign a class to every containing div: I just used a child selector, and for the individual buttons I used :nth-of-type to set the background position.
Tell me your thoughts if this could use improvement.
HTML
<div class="introSelect">
<div>
<a href="#"> <!-- Link Pending -->
<span>Dentist</span>
</a>
</div>
<div>
<a href="#"> <!-- Link Pending -->
<span>Patient</span>
</a>
</div>
<div>
<a href="#"> <!-- Link Pending -->
<span>Lab</span>
</a>
</div>
</div>
CSS
.introSelect { text-align:center; }
.introSelect div {
display:inline-block;
position: relative;
text-align: center;
background: url('https://s3-us-west-2.amazonaws.com/s.cdpn.io/101702/bruxzir-user-sprites_1.png') no-repeat;
width: 90px;
height: 90px;
}
.introSelect div:nth-of-type(1) {
background-position: 0 -180px ;
}
.introSelect div:nth-of-type(2) {
background-position: -90px -180px;
}
.introSelect div:nth-of-type(3) {
background-position: -180px -180px;
}
.introSelect a {
display:block;
text-decoration: none;
}
.introSelect span {
background-color: rgba(152, 216, 242, 0.7);
color: #444;
font-weight: bold;
letter-spacing: 1px;
position: absolute; bottom: 0; left: 0; right: 0;
}
Answer: There hasn't been a good reason to use extra elements such as spans for purposes of hiding text in the last 10 years. If you need to support very old browsers, negative indentation is the simplest method. Otherwise, there's plenty of clean, modern techniques to choose from.
.foo {
text-indent: -100em;
}
http://nicolasgallagher.com/css-image-replacement-with-pseudo-elements/
http://www.zeldman.com/2012/03/01/replacing-the-9999px-hack-new-image-replacement/
There's not really a good reason to use nth-child here. You'd be better off using self-documenting class names. If the order of the images need to be adjusted, then you don't have to make modifications in multiple places (markup and CSS).
.introSelect .dentist {
background-position: 0 -180px ;
}
.introSelect .patient {
background-position: -90px -180px;
}
.introSelect .lab {
background-position: -180px -180px;
} | {
"domain": "codereview.stackexchange",
"id": 5899,
"tags": "html, css"
} |
Squeezed vacuum state | Question: From:
Loudon, Rodney. The quantum theory of light. OUP Oxford, 2000.
Consider the single-mode quadrature-squeezed vacuum state defined by
$ | \zeta \rangle = \hat{S} (\zeta) | 0 \rangle $
where the squeeze operator is
$ \hat{S} (\zeta) = \text{exp} ( \frac{1}{2} \zeta^* \hat{a}^2 - \frac{1}{2} \zeta (\hat{a}^{\dagger})^2)$
where $\hat{a} $ and $ \hat{a}^{\dagger}$ are the destruction and creation operator for quantum harmonic oscillator
and $\zeta$ is the complex squeeze parameter
$\zeta = s e^{i \theta}$
Define the operators $\hat{X}$ and $ \hat{Y}$ like
$ \hat{X} = \frac{1}{2} ( \hat{a} + \hat{a}^{\dagger})$ and $ \hat{Y} = \frac{1}{2}\text{i}( \hat{a}^{\dagger} - \hat{a} )$
we can verify that
$ \langle \zeta | \hat{X} | \zeta \rangle $ = $ \langle \zeta | \hat{Y} | \zeta \rangle $ = 0
and the variances
$ (\Delta X )^2 = \frac{1}{4} [ e^{2s} \sin^2(\frac{1}{2} \theta) + e^{-2 s} \cos^2(\frac{1}{2} \theta) ] $
$ (\Delta Y )^2 = \frac{1}{4} [ e^{2s} \cos^2(\frac{1}{2} \theta) + e^{-2 s} \sin^2(\frac{1}{2} \theta) ] $
now he shows a representation of the quadrature expectation values
Question:
I am not understanding how he draws that ellipse and how he calculates the length of the axis.
This is a rotated ellipse and I know the relationship with an ellipse with the axis parallel to the cartesian axis. But I still don't understand how he creates it.
Answer: The ellipse he draws is a cartoon of the Wigner function of the state, which is not discussed in the book. The Wigner function of a squeezed vacuum state is:
$$
W(x,y) \propto \mathrm{exp}\big[-e^{2\zeta}x^2 - e^{-2\zeta}y^2\big].
$$
In other words a 2D Gaussian function, that is squeezed along one axis and stretched along another. For a detailed discussion on Wigner functions, including squeezed states, take a look at the book by Ulf Leonhardt. | {
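As a numerical aside (mine, not from the book): the two variance formulas quoted in the question always satisfy the uncertainty bound $(\Delta X)^2 (\Delta Y)^2 \ge 1/16$, with equality at $\theta = 0$:

```python
import math

def variances(s, theta):
    """Quadrature variances of the squeezed vacuum, per the formulas above."""
    dx2 = 0.25 * (math.exp(2*s) * math.sin(theta/2)**2
                  + math.exp(-2*s) * math.cos(theta/2)**2)
    dy2 = 0.25 * (math.exp(2*s) * math.cos(theta/2)**2
                  + math.exp(-2*s) * math.sin(theta/2)**2)
    return dx2, dy2

# Heisenberg bound (Delta X)^2 (Delta Y)^2 >= 1/16, saturated at theta = 0.
for s in (0.0, 0.5, 1.0):
    for theta in (0.0, math.pi/3, math.pi):
        dx2, dy2 = variances(s, theta)
        assert dx2 * dy2 >= 1/16 - 1e-12

dx2, dy2 = variances(1.0, 0.0)
print(abs(dx2 * dy2 - 1/16) < 1e-12)  # True
```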
"domain": "physics.stackexchange",
"id": 90900,
"tags": "vacuum, quantum-optics, coherence, squeezed-states"
} |
Experiment proving that the Earth is rotating around the Sun | Question: Is there any simple experiment done on Earth proving that the Earth is rotating around the Sun? Something in the same spirit of what Foucault did proving that the Earth is rotating around itself.
P.S. I know that actually the system rotates around its center of mass. My point is to prove, by an experiment on Earth that we are rotating around some point.
P.S.2 - Done on Earth. One cannot watch the skies!
Answer: There is! It corresponds just to the tides due to the Sun.
Let us suppose that the Earth is not rotating around the Sun, that is, we are not in free fall towards the Sun. In this case the liquid in the oceans would accumulate on the side nearest the Sun. The effect would be only one daily tide.
On the other hand, when we fall towards the Sun there are accumulations of liquid on the sides nearest to and farthest from the Sun, causing two daily tides.
An easy way to see this is to remember that points closer to the Sun are pulled more strongly by gravity. The side closer to the Sun is attracted more than the center of the Earth, which is attracted more than the farther side. Another way to see the effect is to compute the effective gravity generated by the real gravity plus a small correction due to non-inertial effects.
This proves that we are falling towards the Sun, and since we don't reach it, the only possibility is that we are in circular motion. Notice I am talking about tides due to the Sun, which are a tiny effect compared to tides due to the Moon.
Another interesting point someone could ask is whether this contradicts General Relativity. According to GR it would be impossible to distinguish between a gravitational field and an accelerated frame by any local experiment. A body in free fall should not experience gravity. The thing is that the above-mentioned experiment is non-local: it compares results at points an Earth's diameter apart from each other.
"domain": "physics.stackexchange",
"id": 30380,
"tags": "newtonian-mechanics, orbital-motion, reference-frames, solar-system"
} |
How to build release version of ROS node | Question:
I have written my own ROS node, and it works well. But I am not sure whether the exe is a debug or a release build. How can I find out? What changes do I need to make to CMakeLists.txt to build a release version of the ROS node?
Originally posted by AutoCar on ROS Answers with karma: 102 on 2018-12-25
Post score: 0
Answer:
You can set the build type in the CMakeLists.txt, but it's not recommended to hard-code it there; instead, pass it on the command line when you build. This way, if you want a debug build, you just change your invocation instead of having to change the code. This is particularly important if you have a large workspace with lots of packages: with a hard-coded build type, you'd have to modify every package's CMakeLists.txt, which is a pain and messy if you have other changes nearby.
cmake -DCMAKE_BUILD_TYPE=Release ..
https://stackoverflow.com/questions/7724569/debug-vs-release-in-cmake
Note that you can pass this to
catkin: -DCMAKE_BUILD_TYPE=Release http://wiki.ros.org/catkin/Tutorials/using_a_workspace
catkin_tools or colcon: --cmake-args -DCMAKE_BUILD_TYPE=Debug https://catkin-tools.readthedocs.io/en/latest/cheat_sheet.html
Note that if you don't set a build type at all, CMake picks a default that is neither Debug nor Release: https://blog.kitware.com/cmake-and-the-default-build-type/
Originally posted by tfoote with karma: 58457 on 2019-01-04
This answer was ACCEPTED on the original site
Post score: 5 | {
"domain": "robotics.stackexchange",
"id": 32210,
"tags": "ros-kinetic"
} |
A shorter way for getting variables | Question: I have the following piece of code in my project, and that a few times:
$this->uniqueSessionKey = $settings->get('uniqueSessionKey') ? $settings->get('uniqueSessionKey') : $_SERVER['HTTP_USER_AGENT'] . time();
I was wondering if there's a better way of writing this, especially the part where it checks if $settings->get(...) is not false and then uses the previously checked value. As it uses a database connection it's probably not very efficient checking for the same thing twice.
Is there a better way of writing this? I'm already using the shortest if-else that I'm aware of.
If it's not possible, I will put the $settings->get(...) in a variable anyway, and then check if it's false. I've tried using || but that returned false (?) so I probably did that wrong.
Note that I don't want to check it for multiple values, so I don't need to use for etc.
Answer: Since PHP 5.3 you can use the shortcut $a = $b ?: $c, as in:
$this->uniqueSessionKey = $settings->get('uniqueSessionKey') ? : $_SERVER['HTTP_USER_AGENT'] . time();
--
Just a note: you cannot use this if you want to check whether the variable is set. In that case you must use the original form: $a = isset($b) ? $b : $c;
Otherwise, you can write a custom function in this way:
function getIfSet(&$a, $default = null) {
return isset($a)? $a : $default;
}
$a = getIfSet($b,$c);
Passing the variable by reference (&$a) avoids the PHP notice if $a is not set.
"domain": "codereview.stackexchange",
"id": 11947,
"tags": "php"
} |
Designing CFG that accepts a^n b^m (n<m<2n) | Question: Designing CFG for the following language $\{a^nb^m \hspace{0.2cm} | \hspace{0.2cm} n\ge0, m\ge0, n\le m \le 2n \}$ is easy.
$S \to aSb \mid aSbb \mid \lambda$
Then how about this? Language $\{a^nb^m \hspace{0.2cm} | \hspace{0.2cm} n\ge0, m\ge0, n<m<2n \}$.
It is somewhat difficult for me. I have no idea how to exclude $n=m$ or $m=2n$.
Could you give me some hints?
Answer: Try this, which rules out the two boundary cases from the start:
$$S\to aAb$$
$$A\to aCbb$$
$$C\to aCb|aCbb|\lambda$$
If you go to $A$ you cannot have $a^nb^{2n}$, and while going from $A$ to $aCbb$ you are removing the possibility of getting $a^nb^n$.
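If you want to convince yourself that this grammar generates exactly the language, you can enumerate both sides for short strings; a brute-force sketch (mine):

```python
def generate(max_len):
    """All terminal strings of length <= max_len derivable from S."""
    rules = {'S': ['aAb'], 'A': ['aCbb'], 'C': ['aCb', 'aCbb', '']}
    out, seen, frontier = set(), set(), {'S'}
    while frontier:
        new = set()
        for form in frontier:
            nt = next((c for c in form if c.isupper()), None)
            if nt is None:
                if len(form) <= max_len:
                    out.add(form)
                continue
            # Linear grammar: at most one nonterminal, so terminal count
            # is len(form) - 1 and can only grow -- safe to prune here.
            if len(form) - 1 > max_len:
                continue
            i = form.index(nt)
            for rhs in rules[nt]:
                new.add(form[:i] + rhs + form[i+1:])
        frontier = new - seen
        seen |= new
    return out

MAX = 14
target = {'a'*n + 'b'*m for n in range(MAX+1) for m in range(MAX+1)
          if n < m < 2*n and n + m <= MAX}
assert generate(MAX) == target
```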
"domain": "cs.stackexchange",
"id": 9133,
"tags": "formal-languages, automata"
} |
How is mole fraction of a solution different in vapour state vs in solution vs vapour-liquid equilibrium? | Question:
Given 'x' grams of benzene and 'y' grams of toluene, it is said that the vapor pressure of pure benzene is 75 mmHg and that of pure toluene is 22 mmHg. Find the mole fraction in the vapor state.
What I do not understand is how different states can have different values for mole fraction? Do the moles change with a change of state?
More precisely, how does the mole fraction in each state look at transition states? Say I started with 100 moles in the liquid phase; when it reaches the point where it is about to turn to gas (the boiling point), what fraction of the moles is gas and what fraction is liquid?
Answer: When you have a liquid mixture, there is a vapor phase that is made up of those same molecules in equilibrium with the liquid phase. There is no rule that says the moles will split 50/50 in the vapor and liquid phase or any other fraction. The amount that stays in any one phase is influenced by many variables (e.g. molecule to molecule interactions, temperature, pressure, volatility) so we use many different models and assumptions to calculate the amount in either phase or we conduct experiments to measure the amounts. | {
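As an illustration of the standard calculation behind such problems (illustrative masses assumed, since the question gives only symbolic x and y): Raoult's law gives the partial pressures from the liquid mole fractions, and Dalton's law then gives the vapor mole fraction:

```python
M_BENZENE, M_TOLUENE = 78.11, 92.14  # molar masses, g/mol
P_BENZENE, P_TOLUENE = 75.0, 22.0    # pure-component vapor pressures, mmHg

def vapor_mole_fraction(x_grams, y_grams):
    """Mole fraction of benzene in the vapor, given liquid-phase masses."""
    n_b, n_t = x_grams / M_BENZENE, y_grams / M_TOLUENE
    x_b = n_b / (n_b + n_t)          # liquid-phase mole fraction of benzene
    p_b = x_b * P_BENZENE            # partial pressures (Raoult's law)
    p_t = (1 - x_b) * P_TOLUENE
    return p_b / (p_b + p_t)         # Dalton's law in the vapor

# Equal masses: benzene is both more abundant (lower M) and more volatile,
# so the vapor is enriched in benzene relative to the liquid.
y_b = vapor_mole_fraction(50.0, 50.0)
print(round(y_b, 3))  # ~0.8, vs ~0.54 benzene mole fraction in the liquid
```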
"domain": "chemistry.stackexchange",
"id": 12815,
"tags": "physical-chemistry"
} |
Why does the equality regarding the Kronecker delta hold? | Question: Can anyone show me why the equality below holds? I understand that the matrix form of the Kronecker delta is an identity matrix, but why can this "coming from nowhere" delta function $\delta_{i,j}$ carry the exact same indices $(i, j)$ as the previous terms?
$$\sum_j p_ j|\psi_j\rangle\!\langle\psi_j|\rho^{-1}|\psi_i\rangle= \sum_j p_j|\psi_j\rangle\!\langle\psi_j|\rho^{-1}|\psi_i\rangle\delta_{i,j},$$
where $\rho$ is a diagonal density matrix.
Answer: So I might be missing some detail or context, but if the note about $\rho$ being diagonal means it's diagonal in the $\{ |\psi_i \rangle \}$ orthonormal basis, then $\rho = \sum_k p_k |\psi_k\rangle\langle\psi_k| $ for some set of probabilities $\{p_k\}$ and in the case where those are all nonzero, $\rho^{-1} = \sum_k \frac{1}{p_k} |\psi_k\rangle\langle\psi_k|$.
Then sandwiching $\rho^{-1}$ between $\langle \psi_j |$ and $|\psi_i\rangle$ will result in 0 when $i\neq j$ and we can multiply it by 1 when $i=j$ without changing anything, so we can introduce the $\delta_{ij}$ for free. I'm assuming it's useful for some simplification later on in a derivation, but again I could be missing some context. | {
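A small numerical illustration (the standard basis stands in for the eigenbasis $\{|\psi_i\rangle\}$ of $\rho$):

```python
import numpy as np

rng = np.random.default_rng(0)
p = rng.uniform(0.1, 1.0, size=4)
p /= p.sum()                      # probabilities p_j, all nonzero

# rho diagonal in the orthonormal basis {|psi_i>}; here that basis is the
# standard basis, so rho = diag(p) and rho^{-1} = diag(1/p).
rho_inv = np.diag(1.0 / p)

d = len(p)
basis = np.eye(d)                 # basis[i] plays the role of |psi_i>
M = np.array([[basis[j] @ rho_inv @ basis[i] for i in range(d)]
              for j in range(d)]) # M[j, i] = <psi_j| rho^{-1} |psi_i>

# Off-diagonal elements vanish, so inserting delta_{ij} changes nothing:
assert np.allclose(M, M * np.eye(d))
```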
"domain": "quantumcomputing.stackexchange",
"id": 3959,
"tags": "textbook-and-exercises"
} |
What's the physical meaning of a reduced density matrix in EPR? | Question: Consider an EPR situation in which there are two particles, a and b, of which state is given by
$\Psi = \frac{1}{\sqrt2}(|1\rangle|0\rangle + |0\rangle|1\rangle)$,
where $|0\rangle$ and $|1\rangle$ are two eigenstates of an observable O on single-particle Hilbert space (such as z-spin).
Now, while a and b are not in pure states, each of them may be in a mixed state. For instance, one could say that a is in a mixed state represented by the following reduced density matrix:
$\frac{1}{2}(|0\rangle\langle0|+|1\rangle\langle1|)$.
Typically, a mixed state is interpreted as representing our ignorance. On this view, $\frac{1}{2}(|0\rangle\langle0|+|1\rangle\langle1|)$ would correspond to a state in which a is either in $|1\rangle$ or $|0\rangle$ but we simply do not know which state a is in.
This interpretation, however, would be equivalent to the refuted position (attributed to EPR) that the states of a and b were already determined even before measurement, and we simply do not have access to those states. Since this 'local hidden variable' view was refuted, the above reduced density matrix cannot be interpreted as implying our ignorance.
Then how should we interpret the reduced density matrix?
The only reasonable move I can think of is to say that while a density matrix normally represents our ignorance, a reduced density matrix doesn't. This, however, seems very unsatisfying.
So, what's the physical meaning of the reduced density matrix in the EPR case?
Answer: I don't know if this will answer your question, but maybe it will help. Consider the case in which we have one spin-1/2 particle in a pure state
\begin{align}
|+\rangle=\frac{1}{2}\left(|0\rangle + |1\rangle\right)
\end{align}
with $S_z|0\rangle = -1/2$, $S_z|1\rangle = +1/2$. While the outcome of the measurement of $S_z$ in this state is random, we know that this state is an eigenstate of $S_x$, so if we told ahead of time that our spin is prepared in state $|+\rangle$, we know that we will always obtain the value $1/2$ if we measure $S_x$. This is true for any pure state $|\psi\rangle$ of a spin-1/2 system, i.e., we can always find a direction $\hat{\mathbf{n}}$ such that $|\psi\rangle$ is an eigenstate of $\mathbf{S}\cdot\mathbf{n}$.
In contrast, if our spin is in a mixed state—for example the mixed state obtained by tracing out the other spin in a Bell state—then the result of a measurement along any axis is random. In other words, given the complete quantum state of the spin, there is no measurement whose outcome is guaranteed. I think this is a better way of stating the sense in which a mixed-state density matrix encodes "ignorance."
If we accept the interpretation you describe, where the mixed state $\rho = \frac{1}{2}\left(|0\rangle\langle0|+|1\rangle\langle1|\right)$ means our system is either in $|0\rangle$ or $|1\rangle$, this means that our state is definitely not in either of the eigenstates $|-\rangle$, $|+\rangle$ of $S_x$. But this leads us to a contradiction, because we can also write our state as $\rho = \frac{1}{2}\left(|-\rangle\langle -|+|+\rangle\langle+|\right)$, and this should mean that the system is either in $|-\rangle$ or $|+\rangle$. So I think the stated interpretation is incorrect, as is any conclusion you might draw from it. | {
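The non-uniqueness of that decomposition is easy to exhibit numerically (a sketch):

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)
minus = (ket0 - ket1) / np.sqrt(2)

def proj(v):
    """Projector |v><v| onto a (real) state vector."""
    return np.outer(v, v.conj())

rho_z = 0.5 * (proj(ket0) + proj(ket1))    # "either |0> or |1>"
rho_x = 0.5 * (proj(plus) + proj(minus))   # "either |+> or |->"

# Both mixtures are the same density matrix, the maximally mixed state I/2:
assert np.allclose(rho_z, rho_x)
assert np.allclose(rho_z, np.eye(2) / 2)
```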
"domain": "physics.stackexchange",
"id": 85105,
"tags": "quantum-mechanics, quantum-entanglement, density-operator, linear-algebra, epr-experiment"
} |
Interpretation of $\frac{1}{r}\left(\frac{\partial V_\phi}{\partial x^\theta}-\frac{\partial V_\theta}{\partial x^\phi}\right)$ for given Space-time | Question: I'm doing an exercise where I needed to find, for an arbitary $4$-vector $V^\mu$
$$ V_{r;\theta;\phi}-V_{r;\phi;\theta}=\ ???$$
for the space-time metric
$$ ds^2=g_{\mu\nu}dx^{\mu}dx^{\nu}=\left(1-\frac{r_g}{r}\right)c^2dt^2-\left(1+\frac{r_g}{r}\right)dr^2-r^2g_\Omega $$
Note that here $;$ denotes the covariant derivative, so that
$$V_{r;\theta;\phi}=\frac{D }{Dx^\phi}\left(\frac{D V_r}{Dx^\theta}\right)$$
I'm able to show that the required quantity turn out to be
$$V_{r;\theta;\phi}-V_{r;\phi;\theta}=\frac{1}{r}\left(\frac{\partial V_\phi}{\partial x^\theta}-\frac{\partial V_\theta}{\partial x^\phi}\right)$$
It's asked to interpret this result which I'm not able to do. I can understand that the right-hand side is a radial component of the curl. But I'm not able to see any geometrical interpretation why this should be equal to left hand side. Please help me with this.
Answer: You certainly have an interesting question, YoungKidachi.
As soon as I saw the indices in the vectors $V$, I immediately recognized a commutator in the covariant derivatives, and you probably did too. The thing is that I link a commutator of covariant derivatives directly to the Riemann curvature tensor, and in fact it is the right way to go; take a look here: Riemann curvature tensor. What you are considering is just a component of:
$
V_\beta R^\beta \,_{\nu \sigma \rho} = \nabla_{[\rho}\nabla_{\sigma]} V_\nu
$
So my take at it is that you are measuring the component $ r $ (radial distance) of the vector that tells you what the difference is between transporting $ V_\nu $ in the $ \theta $ then in the $ \phi $ direction vs. transporting it first in the $ \phi $ direction and then in the $ \theta $ direction.
I hope this helps you, or at least that it gets you on the right track! | {
"domain": "physics.stackexchange",
"id": 84487,
"tags": "general-relativity, differential-geometry, differentiation, vector-fields, vortex"
} |
Explaining the restorative force in a bifilar pendulum | Question: Ok so I am an A2 physics student, and for one of my pieces of coursework I conducted a practical investigation, my topic being the factors affecting the period and swing of a bifilar pendulum.
The only useful information I was able to find on the topic was here: http://voyager.egglescliffe.org.uk/physics/gravitation/bifilar/bif.html
What I need to do is explain simply why increasing d in the diagram below, the distance between the threads on support A, while keeping d on bar B the same, will increase the period.
http://physics.dorpstraat21.nl/images/expts/bifilar%20pendulum1.png
In the link I posted above, Faysal Riaz seems to explain this in this section:
"Therefore the restoring couple, $C_R$, which acts towards the equilibrium position (so negative), is given by:
$C_R = -\frac{Mg\theta d^2}{4y}$
Applying Newton’s Second Law for the rotational motion of the rod, which is of constant mass:
$I\frac{d^2\theta}{dt^2} = -\frac{Mg\theta d^2}{4y}$
$\therefore \frac{d^2\theta}{dt^2} = -\frac{Mg\theta d^2}{4Iy}$"
The problem is, I have no clue what this means.
Can anyone simplify this so I could understand it better, or explain why decreasing d in the way I described above, increases the period in a few lines of basic mechanics if that's possible?
Answer: You know that swinging is oscillation of the total energy between kinetic and potential energy.
The kinetic energy in the bifilar pendulum is mainly generated by the horizontal elongation of the bar.
In the following we always consider oscillations with the same horizontal elongation angle amplitude.
Furthermore, we approximate the oscillation of the horizontal elongation as sinusoidal.
Under these assumptions the maximal kinetic energy is proportional to the squared oscillation frequency.
The potential energy is proportional to the lifting of the bar.
For one and the same horizontal elongation angle, the bar is lifted higher the larger $d$ is.
Therefore, for larger $d$ you have higher maximal potential energy in the oscillation with constant amplitude, hence higher maximal kinetic energy, which implies a higher oscillation frequency.
This is the scheme, now you can try to get approximate formulae if you like.
Note:
The pendulum would even work with d=0 where there is no lifting of the bar. But in this case the working principle is different. With d=0 you must consider torsion stiffness of the thread and the potential energy is stored in the torsion of the thread.
For our considerations above we have neglected this effect. | {
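As a note on the approximate formulae: in the symmetric case derived in the question's quoted excerpt, the SHM equation gives the period directly:

$$\frac{d^2\theta}{dt^2} = -\frac{Mg d^2}{4Iy}\,\theta \;\Rightarrow\; \omega^2 = \frac{Mg d^2}{4Iy}, \qquad T = \frac{2\pi}{\omega} = \frac{4\pi}{d}\sqrt{\frac{Iy}{Mg}},$$

so, for fixed $I$, $y$ and mass, a larger $d$ gives a shorter period, consistent with the higher frequency argued above.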
"domain": "physics.stackexchange",
"id": 12579,
"tags": "newtonian-mechanics, rotation"
} |
Can we determine the force an object exerts by its mass and acceleration? | Question: I understand that an object's acceleration is determined by the force exerted on it, and that the force exerted on it is determined by its acceleration.
But does the acceleration (and mass) of an object (named A) tell us anything about how much force it will exert on another object (named B)?
Answer: No, in general it will not. The acceleration and mass tell you only the total (net) force being exerted on object A at that moment. That is equivalent to the total force object A is exerting on all other objects (B, C, etc.) it is interacting with at that moment. However, the mass and acceleration do not tell you anything about the individual forces that object A exerts on, say, just object B.
If you are able to determine that object A is only subject to one force, then you can find that one force because you know it's equal to the total force. But that's the only case in which the mass and acceleration tell you about a specific force. | {
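A toy numerical version of this point (numbers are mine): the product $ma$ pins down only the net force, which admits infinitely many decompositions into individual forces:

```python
m, a = 2.0, 3.0
f_net = m * a  # Newton's second law fixes only the NET force on A: 6.0

# Both of these (and infinitely many other) decompositions into individual
# forces acting on A are consistent with the same mass and acceleration:
assert f_net == 4.0 + 2.0 == 10.0 - 4.0
print(f_net)  # 6.0
```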
"domain": "physics.stackexchange",
"id": 2497,
"tags": "newtonian-mechanics, forces, mass, acceleration"
} |
Electric field inside a non-conducting shell with a charge inside the cavity | Question: I have read about conducting charged shells and spheres and know somewhat about the electric fields associated with them. But I have never found anything on non-conducting shells. I have searched online and gone through several famous physics and electrodynamics/electrostatic books. Hence I have devised a thought experiment to resolve my doubts on the topic.
Consider a charged symmetrical non-conducting shell having a charge $Q$ on its surface and $q$ at a point inside the cavity. If I try to find the electric field at a point inside the cavity, I will find that the field lines point outwards (due to $q$). But since $Q$ is already uniformly distributed on the surface, how can the field lines due to $q$ pass through the outer surface?
Another question that I have is: does the uniformly distributed charge $Q$ on the surface create an electric field inside the cavity? If it does, then where do the field lines due to $Q$ inside the cavity go (end)?
What would be the situation inside the cavity if instead of a non-conductor, we simply have a shell of charges and somehow managed to keep them in the situation described above?
Answer: There are many things to look at. Firstly, a conducting material and a non-conducting material are different. For conducting materials, typically all the charges are found on the surfaces. Non-conducting materials (e.g. plastic) contain molecules which act like dipoles, producing a field that counteracts the applied field. (See http://hyperphysics.phy-astr.gsu.edu/hbase/electric/dielec.html)
You may or may not be aware of Gauss's law for electric fields (it's one of Maxwell's equations). But essentially, the electric field inside the shell is only due to the charge q. (See https://en.wikipedia.org/wiki/Gauss%27s_law) The reason for this is simple: split up all the charges on the surface of the shell and draw the electric field at a point inside the shell contributed by each little charge; the net electric field, which is a vector sum, is zero.
The field outside the shell is a superposition of the electric fields due to the charge q and the shell charge Q. This is a consequence of the linearity of Maxwell's equations. It just means that we can add electric field vectors.
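The vector-sum argument can also be checked numerically; a Monte Carlo sketch (mine, with $k = Q = 1$ and a unit-radius shell):

```python
import numpy as np

rng = np.random.default_rng(42)
N = 20000
v = rng.normal(size=(N, 3))
pts = v / np.linalg.norm(v, axis=1, keepdims=True)  # uniform on unit sphere

obs = np.array([0.5, 0.0, 0.0])                     # point inside the cavity
r = obs - pts
# Field from N point charges of 1/N each (Coulomb's law, k = 1):
E_shell = (r / np.linalg.norm(r, axis=1, keepdims=True)**3).sum(axis=0) / N

# Compare with the field the full charge would give if placed at the center:
E_point = obs / np.linalg.norm(obs)**3              # magnitude 1/0.25 = 4

# The shell's contributions cancel almost exactly at the interior point:
assert np.linalg.norm(E_shell) < 0.05 * np.linalg.norm(E_point)
```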
"domain": "physics.stackexchange",
"id": 73209,
"tags": "electrostatics, electric-fields, charge, conductors, thought-experiment"
} |
rqt crash when subscribing to any topic | Question:
Fresh install of ubuntu gnome 16.04
apt-get install ros-kinetic-desktop-full ros-kinetic-ros-control ros-kinetic-ros-controllers ros-kinetic-moveit ros-kinetic-moveit-ros ros-kinetic-gazebo-ros
run my robot_description.launch
I run: rqt
open plugin topic monitor
check any topic
i get this:
[ERROR] [1488471809.228233]: bad callback: <bound method TopicInfo.message_callback of <rqt_topic.topic_info.TopicInfo object at 0x7f2ae4bc63d0>>
Traceback (most recent call last):
File "/opt/ros/kinetic/lib/python2.7/dist-packages/rospy/topics.py", line 720, in _invoke_callback
cb(msg)
File "/opt/ros/kinetic/lib/python2.7/dist-packages/rqt_topic/topic_info.py", line 100, in message_callback
self.sizes.append(buff.len)
AttributeError: 'cStringIO.StringO' object has no attribute 'len'
[ERROR] [1488471809.235319]: bad callback: <bound method TopicInfo.message_callback of <rqt_topic.topic_info.TopicInfo object at 0x7f2ae4bc63d0>>
Traceback (most recent call last):
File "/opt/ros/kinetic/lib/python2.7/dist-packages/rospy/topics.py", line 720, in _invoke_callback
cb(msg)
File "/opt/ros/kinetic/lib/python2.7/dist-packages/rqt_topic/topic_info.py", line 100, in message_callback
self.sizes.append(buff.len)
AttributeError: 'cStringIO.StringO' object has no attribute 'len'
[ERROR] [1488471809.242770]: bad callback: <bound method TopicInfo.message_callback of <rqt_topic.topic_info.TopicInfo object at 0x7f2ae4bc63d0>>
Traceback (most recent call last):
File "/opt/ros/kinetic/lib/python2.7/dist-packages/rospy/topics.py", line 720, in _invoke_callback
cb(msg)
File "/opt/ros/kinetic/lib/python2.7/dist-packages/rqt_topic/topic_info.py", line 100, in message_callback
self.sizes.append(buff.len)
AttributeError: 'cStringIO.StringO' object has no attribute 'len'
[ERROR] [1488471809.282522]: bad callback: <bound method TopicInfo.message_callback of <rqt_topic.topic_info.TopicInfo object at 0x7f2ae4bc63d0>>
Traceback (most recent call last):
File "/opt/ros/kinetic/lib/python2.7/dist-packages/rospy/topics.py", line 720, in _invoke_callback
cb(msg)
File "/opt/ros/kinetic/lib/python2.7/dist-packages/rqt_topic/topic_info.py", line 100, in message_callback
self.sizes.append(buff.len)
AttributeError: 'cStringIO.StringO' object has no attribute 'len'
[ERROR] [1488471809.283385]: bad callback: <bound method TopicInfo.message_callback of <rqt_topic.topic_info.TopicInfo object at 0x7f2ae4bc63d0>>
Traceback (most recent call last):
File "/opt/ros/kinetic/lib/python2.7/dist-packages/rospy/topics.py", line 720, in _invoke_callback
cb(msg)
File "/opt/ros/kinetic/lib/python2.7/dist-packages/rqt_topic/topic_info.py", line 100, in message_callback
self.sizes.append(buff.len)
AttributeError: 'cStringIO.StringO' object has no attribute 'len'
[ERROR] [1488471809.328485]: bad callback: <bound method TopicInfo.message_callback of <rqt_topic.topic_info.TopicInfo object at 0x7f2ae4bc63d0>>
Traceback (most recent call last):
File "/opt/ros/kinetic/lib/python2.7/dist-packages/rospy/topics.py", line 720, in _invoke_callback
cb(msg)
File "/opt/ros/kinetic/lib/python2.7/dist-packages/rqt_topic/topic_info.py", line 100, in message_callback
self.sizes.append(buff.len)
AttributeError: 'cStringIO.StringO' object has no attribute 'len'
(the same "bad callback" traceback repeats continuously)
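For reference, the AttributeError above comes from code reading a `.len` attribute that the buffer object does not provide. A portable way to get a buffer's size (an illustrative sketch only, not the actual rqt_topic patch) is:

```python
# Illustrative pattern: ask the buffer for its size through the public
# API instead of relying on a `.len` attribute that may not exist.
import io

buff = io.BytesIO()
buff.write(b"serialized message bytes")
size = buff.getbuffer().nbytes  # works even where `buff.len` would raise
print(size)
```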
Traceback (most recent call last):
File "/opt/ros/kinetic/lib/python2.7/dist-packages/rqt_topic/topic_widget.py", line 193, in refresh_topics
self._update_topics_data()
File "/opt/ros/kinetic/lib/python2.7/dist-packages/rqt_topic/topic_widget.py", line 204, in _update_topics_data
bytes_per_s, _, _, _ = topic_info.get_bw()
File "/opt/ros/kinetic/lib/python2.7/dist-packages/rqt_topic/topic_info.py", line 120, in get_bw
max_size = max(self.sizes)
ValueError: max() arg is an empty sequence
More info:
ros-kinetic-python-qt-binding Version: 0.3.2-0xenial-20170124-171935-0800
ros-kinetic-qt-gui Version: 0.3.4-0xenial-20170124-172657-0800
ros-kinetic-rqt Version: 0.3.2-0xenial-20170124-181049-0800
python-pyqt5 Version: 5.5.1+dfsg-3ubuntu4
python-pyside2 Version: 2.0.0+dev-0~201604151742~rev1858~pkg38~ubuntu16.04.1
UPDATE:
After downloading the source from: https://github.com/ros-visualization/rqt_common_plugins
I run: catkin_make -DCMAKE_INSTALL_PREFIX=/opt/ros/kinetic install
and get:
Install the project...
-- Install configuration: ""
-- Installing: /opt/ros/kinetic/_setup_util.py
CMake Error at cmake_install.cmake:54 (file):
  file INSTALL cannot copy file
  "/home/gortium/Software/ros-kinetic/build/catkin_generated/installspace/_setup_util.py"
  to "/opt/ros/kinetic/_setup_util.py".
Makefile:61: recipe for target 'install' failed
make: *** [install] Error 1
Invoking "make install -j4 -l4" failed
Originally posted by Gortium on ROS Answers with karma: 11 on 2017-03-02
Post score: 1
Answer:
Please see duplicate question #q255923.
Originally posted by Dirk Thomas with karma: 16276 on 2017-03-02
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Gortium on 2017-03-02:
Thank you!
but Build fail. I updated my question.
Is it the same if i am on Kinetic?
Comment by Dirk Thomas on 2017-03-02:
Yes, it is the same.
Comment by Gortium on 2017-03-02:
Ok fixed it by not installing and sourcing the setup.bash of rqt_common_plugins
Comment by sheng on 2017-03-04:
Hello,I get the same trouble ,when run catkin_make -DCMAKE_INSTALL_PREFIX=/opt/ros/kinetic install,and get failed like you ,I can't understand what's your meaning of"not installing and sourcing the setup.bash of rqt_common_plugins",could you detailed descript it?
Comment by 130s on 2017-03-04:
@sheng see the answer suggested in this answer.
Comment by Dirk Thomas on 2017-03-04:
If you are using indigo or Jade you can simply update your installed Debian packages. The latest release of ros-<DISTRO>-rqt-topic will address the problem. The updated Kinetic package will hopefully be available on a couple of days. | {
"domain": "robotics.stackexchange",
"id": 27186,
"tags": "ros, callback, topic, rqt"
} |
All Categorical data | Question: I need some feedback on a problem I have been working on. I am working with a fairly balanced dataset with all categorical features and a categorical outcome (classification problem). The data has no continuous numerical features. To predict my outcome on the test set I am using the xgboost algorithm. Since I have all categorical predictors, I am using one-hot encoding to handle them. Now I am a bit worried that I might be missing something in the process, so I wanted to check: given that I have all categorical features with a binary outcome, is this a valid approach? I don't see any other way to deal with this problem.
FYI the categorical variables are not things like ZIP codes, IDs...they are actually relevant to the outcome...e.g. smoker (yes/no) | high bp (yes/no)
What do you think?
Answer: I don't see any problem doing classification with purely categorical features, as long as the features are relevant.
And as always, some precautions when dealing with categorical features:
The choice of model. Some models can handle categorical features off-the-shelf (e.g. tree-based algorithms), and some are specifically designed for them (e.g. CatBoost). These models may ease your feature-engineering work and probably give better accuracy.
Cardinality. Sometimes a categorical feature can take a lot of values (plus unknown/unseen ones), which can be a problem. You should think ahead about what to do in these cases. | {
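As a minimal sketch of the one-hot step itself (toy data invented for illustration; `pandas.get_dummies` stands in for whatever encoder the asker's pipeline uses):

```python
# Toy example: one-hot encode purely categorical features, giving one
# indicator column per (feature, category) pair.
import pandas as pd

df = pd.DataFrame({"smoker": ["yes", "no", "yes"],
                   "high_bp": ["no", "yes", "yes"]})
X = pd.get_dummies(df)  # columns like smoker_yes, high_bp_no, ...
print(sorted(X.columns))
```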
"domain": "datascience.stackexchange",
"id": 10906,
"tags": "classification, python-3.x, one-hot-encoding, one-class-classification"
} |
Real digital filter property | Question: I am a beginner studying filter concepts and properties.
For a real digital filter
(here "real filter" means that its impulse response is real-valued)
the following formula holds.
But I have no idea how to prove it:
$$|H(\pi+\omega)|=|H(\pi-\omega)|$$
What should $H(\omega)$ or $|H(\omega)|$ be for a generalized proof procedure?
Answer: HINT:
From the definition of the DTFT
$$H(\omega)=\sum_{n=-\infty}^{\infty}h[n]e^{-jn\omega}\tag{1}$$
derive the following facts:
$H(\omega)=H(\omega+2\pi)$
$H(\omega)=H^*(-\omega)$ for real-valued $h[n]$
where $*$ means complex conjugation. Combine these two results to prove the given equation. | {
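The identity can also be checked numerically (coefficients below are arbitrary; any real-valued impulse response works):

```python
# Numerical sanity check of |H(pi+w)| == |H(pi-w)| for a real h[n].
import numpy as np

h = np.array([0.5, 1.0, -0.3, 0.2])  # some real impulse response
n = np.arange(len(h))

def H(w):
    """DTFT of h evaluated at angular frequency w."""
    return np.sum(h * np.exp(-1j * n * w))

for w in (0.1, 0.7, 2.0):
    assert np.isclose(abs(H(np.pi + w)), abs(H(np.pi - w)))
print("identity holds")
```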
"domain": "dsp.stackexchange",
"id": 3670,
"tags": "filters, filter-design, digital-filters"
} |
Is there evidence to suggest that nutrients in vitamin capsules are not as readily absorbed as the same nutrients in whole foods? | Question: I recently fell ill with a cold, and began to take a vitamin C capsule each day to help my immune system. When I noticed no change in my condition, I began to incorporate an abundance of citrus into my diet instead of taking the capsules. When I ate the citrus my condition began to improve markedly.
The ingredients listed by the vitamin manufacturer are:
Ascorbic Cellulose Gel
Hydroxypropyl Cellulose
Croscarmellose Sodium
Stearic Acid
Magnesium Stearate
Silicon Dioxide
Not excluding the possibility of coincidence, I was intrigued. Has evidence been published to suggest that nutrients in whole foods, like vitamin C in citrus fruits, are more readily utilized in the body than nutrients in vitamin capsules?
Answer: Vitamin C bioavailability
According to the review Synthetic or Food-Derived Vitamin C—Are They Equally Bioavailable? (Nutrients, 2013), the bioavailability of vitamin C from foods and supplements is similar:
...all steady state comparative bioavailability studies in humans have
shown no differences between synthetic and natural vitamin C,
regardless of the subject population, study design or intervention
used.
and, according to Institute of Medicine (in the US):
The type of food consumed has not been shown to have a significant
effect on absorption of either intrinsic or supplemental vitamin C.
Vitamin C supplements as prevention for common cold
Vitamin C supplements, even in doses 200+ mg/day (more than 3 x recommended dietary allowance - RDA) do not likely help in common cold:
This review is restricted to placebo‐controlled trials testing 0.2 g
per day or more of vitamin C.
Twenty‐nine trial comparisons involving 11,306 participants
contributed to the meta‐analysis on the risk ratio (RR) of developing
a cold whilst taking prophylactic vitamin C.
The failure of vitamin C supplementation to reduce the incidence of
colds in the general population indicates that routine prophylaxis is
not justified. Vitamin C could be useful for people exposed to brief
periods of severe physical exercise. (Cochrane, 2007)
To get 200+ mg vitamin C from citruses, you would need to eat at least 3 oranges or 7 lemons.
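A back-of-envelope check of that count, using assumed typical contents (roughly 70 mg of vitamin C per orange and 30 mg per lemon; these figures are not from the sources above):

```python
# Rough arithmetic behind "at least 3 oranges or 7 lemons" for 200 mg.
import math

target_mg = 200
oranges = math.ceil(target_mg / 70)  # -> 3
lemons = math.ceil(target_mg / 30)   # -> 7
print(oranges, lemons)
```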
Considering the above evidence, the improvement of cold symptoms was likely a natural process.
Bioavailability of other nutrients from foods/supplements
There is no general rule to say that nutrients from foods or supplements are absorbed better or worse; it can depend on a specific food and a specific supplement formulation.
Iron:
In our in vitro model, naturally iron-rich mineral waters and
synthetic liquid iron formulations have equivalent or better
bioavailability compared with ferrous iron sulphate tablets. (European Journal of Nutrition)
Iron-fortified foods:
Bioavailability of fortification iron varies widely with the iron
compound used (56), and foods sensitive to color and flavor changes
are usually fortified with water-insoluble iron compounds of low
bioavailability. Iron compounds recommended for food fortification by
the World Health Organization (WHO) (56) include ferrous sulfate,
ferrous fumarate, ferric pyrophosphate, and electrolytic iron powder.
Many cereal foods, however, are fortified with low-cost elemental iron
powders, which are not recommended by WHO (57) and these have even
lower bioavailability (AJCN, 2010).
Magnesium:
The results of serum and urine analysis indicated that Mg
bioavailability was comparable for mineral waters with different
mineralization levels, bread, and a dietary supplement. (Tandfonline,
2017)
Mg supplements comparison:
Studies on the bioavailability of different magnesium salts
consistently demonstrate that organic salts of magnesium (e.g., Mg
citrate) have a higher bioavailability than inorganic salts (e.g., Mg
oxide) (Nutrients, 2019)
Potassium:
The bioavailability of potassium is as high from potatoes as from
potassium gluconate supplements. (AJCN, 2016)
In conclusion, even if most studies mentioned in this answer suggest that nutrients from foods and supplements are equally bioavailable, you need to check specific supplement formulations, for example, iron from many fortified foods and magnesium oxide can have poor bioavailability. Anyway, the studies show that most people with normal blood nutrient levels do not need dietary supplements (Int J. Prev. Med., 2012 ; Annals of Internal Medicine, 2014). | {
"domain": "biology.stackexchange",
"id": 10217,
"tags": "nutrition, digestive-system, digestion, immune-system"
} |
F1_score(average='micro') is equal to calculating accuracy for multiclass classification | Question: Is f1_score(average='micro') always the same as calculating the accuracy, or is it just in this case?
I have tried with different values and they gave the same answer but I don't have the analytical demonstration.
from sklearn.metrics import accuracy_score
from sklearn.metrics import f1_score
y_true = [0, 1, 2, 0, 1, 2]
y_pred = [0, 2, 1, 0, 0, 1]
print(f1_score(y_true, y_pred, average='micro'))
print(accuracy_score(y_true,y_pred))
# 0.3333333
# 0.3333333
Answer: In classification tasks for which every test case is guaranteed to be assigned to exactly one class, micro-F is equivalent to accuracy.
The above answer is from:
https://stackoverflow.com/questions/37358496/is-f1-micro-the-same-as-accuracy
More detailed explanation:
https://simonhessner.de/why-are-precision-recall-and-f1-score-equal-when-using-micro-averaging-in-a-multi-class-problem/ | {
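A short sketch of the mechanism those links describe (toy labels invented here): in a single-label multiclass problem, every wrong prediction is simultaneously one false positive (for the predicted class) and one false negative (for the true class), so micro-averaged precision, recall, and F1 all collapse to accuracy.

```python
# Micro-averaged precision == recall == F1 == accuracy for
# single-label multiclass predictions.
from sklearn.metrics import (accuracy_score, f1_score,
                             precision_score, recall_score)

y_true = [0, 1, 2, 2, 1, 0, 2]
y_pred = [0, 2, 2, 2, 1, 1, 0]

acc = accuracy_score(y_true, y_pred)  # 4 correct out of 7
for metric in (f1_score, precision_score, recall_score):
    assert abs(metric(y_true, y_pred, average="micro") - acc) < 1e-12
print(acc)
```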
"domain": "datascience.stackexchange",
"id": 6530,
"tags": "machine-learning, python, scikit-learn, multiclass-classification, metric"
} |
How is the efficiency of a heat engine related to the entropy produced during the process? | Question: I'm reading Schroeder's An Introduction to Thermal Physics. Regarding heat engines, it is stated:
Unfortunately, only part of the energy absorbed as heat can be converted to work by a heat engine. The reason is that the heat, as it flows in, brings along entropy, which must somehow be disposed of before the cycle can start over. To get rid of the entropy, every heat engine must dump some waste heat into its environment. The work produced by the engine is the difference between the heat absorbed and the waste heat expelled.
This seems to suggest that to maximize the efficiency of the engine, one should minimize the entropy produced during the process.
Okay, so now let's try to minimize the entropy that is created. The heat that leaves my hot reservoir is $Q_h$. This is the heat that is given to the gas in my engine. So the entropy change for the hot reservoir is $\frac{Q_h}{T_h}$ and the entropy change in the gas is $\frac{Q_h}{T_{gas}}$. Of course, in order for heat to flow, $T_h$ must be bigger than $T_{gas}$. This implies $$\frac{Q_h}{T_h} < \frac{Q_h}{T_{gas}}$$and the total entropy increases during the heat transfer, which is what I would expect. So, to minimize the entropy change, you would want the $T_{gas}$ to be very close to $T_h$. I believe the same argument can be made for the heat transfer from the gas to the cold reservoir.
However, the equation for the efficiency is
$$e = 1 - \frac{Q_c}{Q_h}$$
which can also be written as
$$e \leq 1 - \frac{T_c}{T_h}$$
where $T_c$ is the temperature of the cold reservoir. This equation implies that if the temperatures are similar then the efficiency is essentially zero. If the temperatures are $T_h = \infty$ and $T_c = 0$ then the efficiency is maximized. But by the argument I gave above, it seems that this would create a huge amount of entropy.
Am I making an error in my reasoning?
Answer: The Short Answer
How is the efficiency of a heat engine related to the entropy produced during the process?
The maximum efficiency for any heat engine operating between two temperature $T_H$ and $T_C$ is the Carnot efficiency, given by
$$e_C = 1 -\frac{T_C}{T_H}.$$
Such a heat engine produces no entropy, because we can show that the entropy lost by the hot reservoir is exactly equal to the entropy gain of the cold reservoir, and of course, the system's entropy on the net doesn't change because the system undergoes a cycle.
Any heat engine operating between the same two temperatures whose efficiency is less than $e_C$ necessarily increases the entropy of the universe; in particular, the total entropy of the reservoirs must increase. This increase in entropy of the reservoirs is called entropy generation.
Finally, the efficiency of the perfect engine is less than one, necessarily, because the entropy "flow" into the system from the hot reservoir must be at least exactly balanced by the entropy "flow" out of the system into the cold reservoir (because the net change in system entropy must be zero in the cycle), and this necessitates waste heat from the system into the cold reservoir. The fact that $e_C$ goes to one in the limit of small ratios $T_C/T_H$ is a consequence of the fact that $Q_C$ is small compared to $Q_H$. It is not a consequence of the fact that entropy generation is small in this case, because entropy generation is already zero for the Carnot cycle.
Explanation
Let's concentrate first on the interaction between the system and the hot reservoir. An amount $\delta Q_H$ of energy flows into the system from the hot reservoir, which means that the system's entropy changes by
$$\mathrm dS_\text{sys} = \frac{\delta Q_H}{T_\text{sys}},$$
and correspondingly, the reservoir's entropy changes by
$$\mathrm dS_\text{hot} = -\frac{\delta Q_H}{T_{H}}.$$
It is straight-forward to show then, that the total change in entropy of system plus environment satisfies
$$\mathrm dS = \mathrm dS_\text{hot}+\mathrm dS_\text{sys} \geq0,$$
with equality holding if and only if the system and environment exchange energy via heating when they have equal temperatures, $T_\text{sys} = T_H$.
As a consequence, in order to minimize entropy production (and, in fact zero it out completely) during this process, we want $T_\text{sys} = T_H$, and the net change in system entropy during this process can then be written as
$$\Delta S_\text{sys} = \int \frac{\delta Q_H}{T_\text{sys}} = \frac{Q_H}{T_{H}},$$
since we are assuming that the temperature of the reservoir doesn't change at all during the cycle.
Now, since the system operates on a thermodynamic cycle, and since the system entropy $S_\text{sys}$ is a state variable (state function/$dS$ is an exact differential, etc.), it must be true that
$$\mathrm dS_\text{sys,cycle}=0.$$
Therefore, there must be some other process during which the system expels an amount of energy $Q_C$ to some other reservoir via heating in such a way that the change in system entropy during this new process is the negative of the change in system entropy that we calculated before. By the same argument as above, it must be that this change in entropy is
$$\Delta S_2 = -\frac{Q_C}{T_C},$$
where $T_C$ is the temperature of the cold reservoir.
Finally, then, since system entropy is a state variable,
$$0 = \Delta S_\text{sys} + \Delta S_2 = \frac{Q_H}{T_H}-\frac{Q_C}{T_C}.$$
Another way of looking at this equation is that the net change in entropy of the hot reservoir is negative the net change in entropy of the cold reservoir during the cycle, and hence the net change in entropy of the universe is zero during the cycle.
Efficiency and work
Now, none of this seemed related to the fact that efficiency goes to 1 as the ratio of $T_C$ to $T_H$ goes to zero. This comes in in the following way. First, the net work output during one cycle is
$$W_\text{out} = Q_H-Q_C,$$
and hence the efficiency of the engine that we've just made is
$$e = \frac{W_\text{out}}{Q_H} = 1 - \frac{T_C}{T_H},$$
after some algebra. Based on our calculation above, this must be the maximum efficiency of any engine operating between these two temperatures. However, if we change the temperatures, then we can change the efficiency. The reason the efficiency goes up as the temperature ratio goes down is that $W_\text{out}$, being the difference between the heat flows, must go up if, say, we lower $T_C$ (because then $Q_C$ goes down) or if we raise $T_H$ (because then $Q_H$ goes up).
In some sense, this part really doesn't have much to do with entropy at all, because from the thermodynamic perspective, entropy production (which is the increase in entropy of an isolated system) is a measure of how much work we could have done if we had done the process reversibly, but we have already designed the perfect engine operating between those two particular temperatures above, so entropy doesn't have anything else to say. | {
"domain": "physics.stackexchange",
"id": 53484,
"tags": "thermodynamics, entropy, heat-engine"
} |
Why isn't momentum conserved from my reference frame? | Question: Two cars of mass $m_{1}$ and $m_{2}$ collide with each other in a completely inelastic collision, so after the collision they move together with the same velocity. Now suppose I am in one car. So I will consider myself stationary, so my velocity will be $0$. And the other car's velocity relative to me will be $v$.
Now after the collision, I will still consider myself stationary and so my velocity will still be $0$ and the other car's velocity relative to me also will be $0$.
So the total initial momentum $p_{i}$ will be:
$m_{2}v$
And the final total momentum $p_{f}$ will be $0$
So clearly momentum isn't conserved. Clearly I am wrong. Why isn't momentum conserved from my reference frame?
Answer: You accelerated during the collision, so your reference frame is not inertial. The usual conservation laws only apply in inertial frames, unless you account for the fictitious forces and fictitious work created by the acceleration of the reference frame. Fictitious forces, if not accounted for, can make it look like energy comes from nowhere or disappears, when in reality the energy is being used to accelerate the reference frame. | {
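A small numeric sketch of this point (masses and speeds invented for illustration):

```python
# Lab (inertial) frame: total momentum survives the perfectly
# inelastic collision.
m1, m2 = 1000.0, 1500.0  # car masses, kg
v1, v2 = 0.0, 20.0       # lab-frame velocities before the crash, m/s
v_f = (m1 * v1 + m2 * v2) / (m1 + m2)  # common final velocity
assert abs((m1 + m2) * v_f - (m1 * v1 + m2 * v2)) < 1e-9

# Frame riding with car 1: that car accelerates from v1 to v_f during
# the crash, so the frame is non-inertial and naive bookkeeping fails.
p_before = m2 * (v2 - v1)  # other car's momentum relative to me
p_after = (m1 + m2) * 0.0  # everything at rest relative to me
assert p_before != p_after  # the "missing" momentum went into
                            # accelerating the reference frame itself
```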
"domain": "physics.stackexchange",
"id": 50989,
"tags": "newtonian-mechanics, reference-frames, momentum, conservation-laws, collision"
} |
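A quick numerical illustration of the answer above (the masses and speeds are made-up numbers, not from the question): in the ground frame, momentum is conserved through the perfectly inelastic collision, but in the frame riding along in car 1, which accelerates during the crash, it is not:

```python
m1, m2 = 1000.0, 1500.0   # kg, hypothetical car masses
v1, v2 = 0.0, 10.0        # m/s, ground-frame velocities before impact

# Perfectly inelastic: both cars share one final velocity (ground frame).
v_final = (m1 * v1 + m2 * v2) / (m1 + m2)

# Ground (inertial) frame: momentum before equals momentum after.
p_before_ground = m1 * v1 + m2 * v2
p_after_ground = (m1 + m2) * v_final

# Frame riding in car 1: before the crash car 1 is at rest and car 2
# approaches at v2 - v1; after the crash both are at rest in this frame.
p_before_car1 = m2 * (v2 - v1)
p_after_car1 = 0.0   # not equal: the frame accelerated, so it is not inertial
```

The mismatch in the second pair of numbers is exactly the effect the answer attributes to the non-inertial frame.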
Can moving charges be influenced by its own magnetic field? | Question: From the Biot-Savart law we see that a moving charge creates a magnetic field which, in turn, can exert a force on electric charges. Thus, do the charges in, let's say, a wire exert a force not on themselves but on their neighbouring charges?
Moreover, and on this I'm a little rusty: does a closed current-carrying loop, be it a square, circle, pentagon, whatever, experience zero net magnetic force if the loop is isolated from any other magnetic field and can only interact with the one created by the current that goes through it?
Answer: Yes, the pieces of the circuit affect the other pieces of circuit via magnetic force and this is known as the Ampère force. | {
"domain": "physics.stackexchange",
"id": 49442,
"tags": "electromagnetism"
} |
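As one concrete instance of circuit pieces pushing on each other via the Ampère force named in the answer above, here is a minimal sketch (my own example, not from the answer) of the textbook force per unit length between two long parallel wires, $F/L = \mu_0 I_1 I_2 / (2\pi d)$:

```python
import math

MU_0 = 4.0e-7 * math.pi  # vacuum permeability, T*m/A

def force_per_length(i1, i2, d):
    """Magnitude of the magnetic force per metre between two long
    parallel wires carrying currents i1, i2 (A) a distance d (m) apart."""
    return MU_0 * i1 * i2 / (2.0 * math.pi * d)

# Two 1 A wires 1 m apart attract/repel with 2e-7 N/m -- the figure
# behind the old SI definition of the ampere.
f = force_per_length(1.0, 1.0, 1.0)
```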
How to connect a pneumatic cylinder to a door? | Question: I have this pneumatic cylinder that I wish to connect to a wooden door in order to make it kinda like a Star wars door. I'm wondering what the best way to connect the cylinder to the door would be.
The cylinder has a threaded rod at the end. Looks like the one attached. I was wondering if there are components like a bracket that can be screwed onto the door and has a threaded loop to which the rod could be screwed?
Answer: Based on the threads at the end of your sample cylinder, you would be seeking a female threaded rod end:
There are a multitude of variations of this product, almost always including the words "rod end" with different modifiers. You could have a forked threaded rod end.
It would not have to be female threads, of course. Depending on the work load, you may not have to have the bearing insert of the first image, although it allows for longer life and easier alignment in use. | {
"domain": "engineering.stackexchange",
"id": 1953,
"tags": "pneumatic"
} |
rosserial Fuerte install help [Solved] | Question:
EDIT: Managed to download from source and install without issue. Had to restart as updates were preventing connections.
Hey
I am currently trying to install rosserial for ROS Fuerte, and I keep getting that rosserial can not be found.
From everything I have seen it should exist, however following instructions from http://www.ros.org/wiki/rosserial_arduino/Tutorials/Arduino%20IDE%20Setup do not seem to be working.
Trying to download from debs I get the following.
robot@robot:~$ sudo apt-get install ros-diamondback-rosserial
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package ros-diamondback-rosserial
Trying to install from source, I am newish to Ubuntu, so I think this error is as simple as me missing something.
robot@robot:~$ hg clone https://kforge.ros.org/rosserial/hg rosserial
abort: error: no host given
What steps am I missing to download rosserial correctly?
Originally posted by HenryW on ROS Answers with karma: 140 on 2012-10-04
Post score: 0
Answer:
From @HenryW 's edit:
Managed to download from source and install without issue. Had to restart as updates were preventing connections.
Originally posted by tfoote with karma: 58457 on 2012-10-04
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 11235,
"tags": "rosserial, ros-fuerte"
} |
DFS for all possible walks from a source to a destination with exactly k edges | Question: Problem Statement: Given a directed graph and two vertices ‘u’ and ‘v’ in it, count all possible walks from ‘u’ to ‘v’ with exactly k edges on the walk.
My question is this: say we have a DAG (directed acyclic graph), can DFS be a solution to the problem? I understand that naive DFS would fail for the original problem, since if there exists a cycle we cannot traverse over the cycle to increase the walk length from source to destination.
I've written the code tweaking DFS which I think solves the problem for a DAG.
I've used the inputs from a similar example, and have also compared the code both of which are based on the same algorithm.
//Pseudocode below:
public void printPossiblePaths(int u, int v, int k, int count, Queue queue) {
    Add vertex u to queue;
    Mark u as visited;
    if (count == k) {
        // Traversed exactly k edges: print the path only if we ended at 'v'
        if (u equals v) {
            Print the path followed using the elements in the queue;
        }
    } else {
        foreach vertex w adjacent to u {
            If w is not visited {
                // Recur with w as the new source, one more edge used, and the updated queue
                printPossiblePaths(w, v, k, count + 1, queue);
            }
        }
    }
    Remove u from the queue;
    Mark u as unvisited;
}
Gist of the algo:
1) Do a depth-first search from the source vertex, counting the edges as you go.
2) Once you have traversed k edges, check whether you have arrived at node 'v'; if not, back out of that path and try other paths.
3) Similarly, if you do reach node 'v' after exactly k edges, print the queue and continue checking all other possible paths.
My question is : Is there a more efficient approach to achieve the same for a DAG or could we further tweak the code to handle cycles as well?
Answer: I found the solution to this question. The idea is based on the following:
Consider the adjacency matrix A, where A[i][j] is 1 if there is an edge from i to j, and 0 otherwise.
Then, it can be proved that the number of walks of length k from i to j is just the [i][j] entry of A^k.
Thus the solution would be to build A and construct A^k using matrix multiplication (the usual trick for doing exponentiation applies here). Then just look up the necessary entry.
The final complexity would simply be $O(V^3 \log k)$ | {
"domain": "cs.stackexchange",
"id": 7060,
"tags": "algorithms, graphs, counting"
} |
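The matrix-power idea in the accepted answer above can be sketched in plain Python (the small digraph below is a made-up example): square-and-multiply on the adjacency matrix counts length-k walks in $O(V^3 \log k)$ time:

```python
def mat_mul(a, b):
    # Naive O(V^3) matrix product over integers.
    n = len(a)
    return [[sum(a[i][t] * b[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(a, k):
    """Fast exponentiation: O(log k) multiplications of O(V^3) each."""
    n = len(a)
    result = [[int(i == j) for j in range(n)] for i in range(n)]  # identity
    while k > 0:
        if k & 1:
            result = mat_mul(result, a)
        a = mat_mul(a, a)
        k >>= 1
    return result

def count_walks(adj, u, v, k):
    """Number of walks from u to v using exactly k edges."""
    return mat_pow(adj, k)[u][v]

# Example digraph: 0->1, 0->2, 1->2, 2->0
adj = [[0, 1, 1],
       [0, 0, 1],
       [1, 0, 0]]
```

For this graph, `count_walks(adj, 0, 2, 2)` finds the single two-edge walk 0→1→2, and because walks may reuse vertices the method handles cycles for free, unlike the DFS above.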
What does the notation $c = [1:\beta]$ mean? | Question: I have been reading a online-book/blog/material on Quantum Mechanics, when I encountered a notation on a page and I have no idea what it means. See if you can help.
Here's the link and follows the paragraph where I am stuck.
Observe that exchanging either the incoming or the outgoing particles is tantamount to exchanging the two alternatives and, correspondingly, the two amplitudes, so that A2 takes the place of A1 and vice versa. Since the two amplitudes have the same magnitude, there is a complex number c of unit magnitude such that A2 = A1 c. In other words, multiplication by c = [1:β] represents an exchange of the incoming or outgoing particles.
If the incoming or outgoing particles are exchanged twice, then (i) A1 gets multiplied by $c^2$ and (ii) the original situation is restored. Thus A1 = A1 $c^2$, whence it follows that $c^2 = [1:2\beta] = 1$. This means that 2β must be equal to an integral multiple of 360°, and this leaves us with two possibilities: β = 0°, in which case A2 = A1, or β = 180°, in which case A2 = −A1.
I have put the notation in bold. What exactly does the writer mean by $[1:\beta]$ and $[1:2\beta]$?
:)
Answer: The author uses this weird notation $[c:\gamma]$ to represent complex numbers. It means that c stands for the magnitude $|c|$ and $\gamma$ for the phase of the complex number.
I have never seen this before either ;-).
The author explains it earlier in his book, check out this link. | {
"domain": "physics.stackexchange",
"id": 6762,
"tags": "quantum-mechanics, notation"
} |
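The book's magnitude-and-phase notation $[c:\gamma]$ from the exchange above maps directly onto ordinary complex numbers; here is a small sketch (the helper name is mine) checking the argument that $c^2 = [1:2\beta] = 1$ leaves only β = 0° or β = 180°:

```python
import cmath
import math

def from_mag_phase(mag, phase_deg):
    """Build the complex number the book writes as [mag : phase]."""
    return mag * cmath.exp(1j * math.radians(phase_deg))

# Multiplying unit-magnitude numbers adds their phases:
# [1:beta] * [1:beta] = [1:2*beta].
for beta in (0.0, 180.0):
    c = from_mag_phase(1.0, beta)
    # Exchanging the particles twice restores the amplitude, so c**2 == 1.
    assert abs(c * c - 1.0) < 1e-12

# beta = 0   ->  c = +1, so A2 =  A1 (the symmetric case)
# beta = 180 ->  c = -1, so A2 = -A1 (the antisymmetric case)
```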
Can/have bacteria been engineered to express ACE 2 receptors? | Question: In light of SARS-COV-2 binding to the ACE 2 protein in human cells, I was wondering if any bacteria exist or could exist that possess this membrane protein. If not, do you believe that an ACE 2 like protein https://www.jstage.jst.go.jp/article/jpssuppl/92/0/92_2-YIA-09/_pdf expressed by a bacteria could allow the COVID-19 virus to successfully attach and be fused into the bacterium?
Answer: Short answer is no, because the ACE2 receptor is only a trigger (and anchoring point) for endocytosis.
The endocytosis requires many enzymes/proteins which are (as far as I know) specific to eukaryotes. You'd need to express those too in your bacterium, and the lack of adequate eukaryote-like transport proteins in your bacterium will likely prevent the whole system from functioning.
"domain": "biology.stackexchange",
"id": 10578,
"tags": "microbiology, virology, coronavirus"
} |
The Doppler effect in a medium like air (sound) versus the electromagnetic Doppler effect | Question: When you have a listener and a source, and one of the two moves relative to the other, the frequency perceived by the listener will be different.
Example:
If the listener travels toward the stationary source at 0.5 times the speed of sound, he will perceive a frequency 1.5 times higher.
If the source travels toward the stationary listener at 0.5 times the speed of sound, the listener will perceive a frequency 2 times higher.
I perfectly understand why the frequency is different in the two cases (1.5 and 2 times): if you draw lines representing the crests of the cosine wave emitted at a given frequency, you will see that the lines come closer together by exactly the factors I gave in the two cases.
What I don't understand is the Doppler effect for electromagnetic waves (like light). Apparently the Doppler effect there is independent of who is moving, and the only thing that matters is the relative velocity between the source and the receiver. Here I only speak of the non-relativistic Doppler effect, so the relative speed is small with respect to the speed of light.
It can explain the red-shift of the star light due to the expansion of the universe.
Why is it only the relative speed that matters here? I read that it was because in the sound case the sound propagates in a medium (air), while light propagates in nothing... but I do not see how that explains it.
Thank you for any help !
Samuel.
Answer: When you consider your example in terms of electromagnetic waves, relativity comes into play. In this case it is time dilation. If you take time dilation into account, then the contradiction disappears. Let me show you:
If you consider yourself to be stationary and the emitter to be moving toward you at 0.5c, the classical source-motion formula predicts a frequency 2 times higher. However, time for the moving emitter has slowed: it emits a frequency 1.155 times lower than its rest frequency. The combined frequency you see is then 2 / 1.155 = 1.732 times the emitter's rest frequency.
Let's consider the other example: the emitter is stationary and you are moving toward it at 0.5c. Classical Doppler for a moving observer predicts the frequency to be 1.5 times as high. However, since your own clock runs 1.155 times slower, the frequency you see is, whaddoyaknow, 1.5 * 1.155 = 1.732 times higher.
Exactly the same result, regardless of the reference frame! Hooray for relativity! | {
"domain": "physics.stackexchange",
"id": 9646,
"tags": "electromagnetism, acoustics, doppler-effect"
} |
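A quick numeric check of the arithmetic in the answer above: both bookkeeping routes, the classical source-motion factor corrected for the emitter's slow clock and the classical observer-motion factor corrected for the observer's slow clock, reproduce the frame-independent relativistic factor $\sqrt{(1+\beta)/(1-\beta)}$:

```python
import math

beta = 0.5  # relative speed as a fraction of c
gamma = 1.0 / math.sqrt(1.0 - beta**2)  # Lorentz factor, about 1.155

# Frame-independent relativistic Doppler factor for head-on approach.
relativistic = math.sqrt((1.0 + beta) / (1.0 - beta))  # sqrt(3), about 1.732

# Classical source-motion factor, divided by gamma for the emitter's slow clock:
source_moving = (1.0 / (1.0 - beta)) / gamma      # 2 / 1.155

# Classical observer-motion factor, multiplied by gamma for the observer's slow clock:
observer_moving = (1.0 + beta) * gamma            # 1.5 * 1.155
```

Both corrected factors equal √3 ≈ 1.732, which is the point of the answer.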
Displaying courses in an HTML calendar | Question: I've been struggling for a while now with the readability of my code, even after trying to get as much insight as possible (for my standards). At my level, I think I understand and use everything all right.
But I'm still having big chunks of mixed HTML/CSS in the presentation. Often I have a moderately complex multi-dimensional array as a return value, and on the actual presentation page I iterate through it, but still do a lot of work with it.
So I'm now looking into template engines like Smarty, but I can't get my head around how they would save me any actual code in examples like the following, where I iterate over and work with the array in the presentation:
$courseinfo = new courseinfo($_SESSION['course_short']);
$row = $courseinfo->get_all();
$default = $courseinfo->get_default();
$prices = $courseinfo->get_prices();
$month_min_show = 5;
//color settings for prices
$colorlow = '#6F6';
$colormid = '#09F';
$colorhigh = '#F90';
$colorspecial = '#F0F';
$colorfull = 'rgba(255,0,0,0.3)';
/* CONTENT CALENDAR
-----------------
-----------------
*/
echo '<div id="calendar">';
$count['month'] = 0;
foreach($row as $month)
{
$lastcourse = end($month['course']);
$laststart = $lastcourse['date'];
$enddate = new DateTime($laststart);
$enddate->modify('+ '.($lastcourse['length']-1).' days');
$iterate = new DateTime('01-'.date('m',strtotime($laststart)).'-'.$month['year']);
if (!isset($stored['year']) || isset($stored['year']) && $stored['year'] != $month['year'])
{
if($count['month'] > $month_min_show) { break;} // don't show next year if $month_min_show months already displayed
if(isset($stored['year'])) { echo '<br /><br /><br /><div style="margin-top:-10px"></div>';}
echo '<span class="year" style="float:left;">'.$month['year'].'</span>';
echo '<div style="float:right;margin-top:-20px;padding-right:5px;">';
echo '<div style="float:left;font-size:12px;font-weight:bold;">PRICES '.$month['year'].'</div>';
echo '<div class="pricelegend" style="background-color:'.$colorfull.'">fully booked</div>';
if (in_array('low',$prices[$month['year']])) { echo '<div class="pricelegend" style="background-color:'.$colorlow.'">'.$default['price_low'].' €</div>'; }
if (in_array('mid',$prices[$month['year']])) { echo '<div class="pricelegend" style="background-color:'.$colormid.'">'.$default['price_mid'].' €</div>'; }
if (in_array('high',$prices[$month['year']])) { echo '<div class="pricelegend" style="background-color:'.$colorhigh.'">'.$default['price_high'].' €</div>'; }
if (in_array('custom',$prices[$month['year']])) { echo '<div class="pricelegend" style="background-color:'.$colorspecial.'">Special Offer</div>';}
echo '</div><div style="clear:both;"></div><hr width="800px;" align="left"/>';
}
echo '<div class="m_start">'.mb_strtoupper($month['monthname'],'UTF-8').'<br />';
echo '<span class="yearsmall">'.$month['year'].'</span>';
echo '</div>';
echo '<div class="courses">';
echo '<div style="float:left;width:10px;"> </div>';
while($iterate<=$enddate)
{
$dayname = strftime('%a',$iterate->format('U'));
if ($dayname == "So" OR $dayname == "Sa") { $daycolor = "#999"; } else { $daycolor = "#FFF";}
echo '<div class="dayname" id="'.$iterate->format('dmY').'" style="color:'.$daycolor.'">'.$dayname.'</div>';
$iterate->modify('+ 1 days');
}
echo '<br />';
$lineswitch = 0;
foreach($month['course'] as $course)
{
$date = $course['date'];
$date = new DateTime("$date");
$coursewidth = $course['length']*20-2;
if($course['class'] == 'low') { $pricecolor = $colorlow; }
elseif($course['class'] == 'mid') { $pricecolor = $colormid; }
elseif($course['class'] == 'high') { $pricecolor = $colorhigh; }
else {$pricecolor = $colorspecial;}
if($course['user'] >= $course['usermax']) { $pricecolor = $colorfull; }
if(isset($_SESSION['course_id']) && $_SESSION['course_id'] == $course['id']) { $bordercolor = 'border-color:#FFF';} else {$bordercolor = '';}
if($course['user'] < $course['usermax']) { echo '<a class="clink" id="'.$course['id'].'" href="'.$_SESSION['book_url'].'?course='.$course['id'].'" target="_self">'; }
echo '<div class="course" style="background-color:'.$pricecolor.';'.$bordercolor.';width:'.$coursewidth.'px;margin-top:'.$lineswitch*17 .'px;margin-left:'.(10+($date->format('d')-1)*20).'px">';
if($course['user'] < $course['usermax'])
{
echo '<span class="coursestart"> '.$date->format('d').'</span>';
if($course['length'] > 1)
{
echo '-';
$date->modify('+ '.($course['length']-1).' days');
echo '<span class="courseend">'.$date->format('d').' </span>';
}
}
else { echo '<span style="color:#000;">x</span>'; }
echo '</div>';
if($course['user'] < $course['usermax']) {echo '</a>';}
unset($date);
if ($lineswitch == 0) { $lineswitch = 1;} else {$lineswitch = 0;}
}
echo '</div>';
echo '<div class="m_end"></div>';
echo '<div style="clear:both;"></div><br />';
$stored['year'] = $month['year'];
$count['month']++;
}
echo '</div>';
Here's an example of the array I'm iterating through:
Array ( [04.2012] =>
Array ( [monthname] => April [year] => 2012 [course] =>
Array (
[0] => Array ( [id] => 106 [date] => 2012-04-02 14:00:00 [length] => 3 [class] => mid [price] => 110 [user] => 0 [usermax] => 20 [day] => 02 [week] => 14 [dayname] => Mo [hours] => 3 )
[1] => Array ( [id] => 107 [date] => 2012-04-03 10:00:00 [length] => 3 [class] => mid [price] => 110 [user] => 0 [usermax] => 20 [day] => 03 [week] => 14 [dayname] => Di [hours] => 3 )
[2] => Array ( [id] => 108 [date] => 2012-04-05 14:00:00 [length] => 3 [class] => mid [price] => 110 [user] => 0 [usermax] => 20 [day] => 05 [week] => 14 [dayname] => Do [hours] => 3 )
)
)
This is quite a bit of code as you can see, just so you can get an idea how I still have to work a lot with the array.
So how could I split this into smaller chunks or just make it more readable and easier to work with ?
Hope I could make clear what I want here...and sure if you find anything else that's totally stupid in this code, give me a word!
I wouldn't know how a template engine would help as it still is a lot of ifs and dynamic changes in there.
Sidenote: I'm working alone and always will, so the separation is just for me.
Answer: Thank you for the perfect example of the real-life piece of presentation logic.
Most people pushing some primitive templating solutions just have no idea that such complex cases exist.
Three rules for you to get it right:
Use PHP as a template engine.
Never output an HTML tag or text constant with a PHP echo; use straight HTML only.
Move ALL data preparations to the business logic part.
Format ALL your data in the business logic.
Pass only scalars to the template. No datetime objects!
No complex logic - use only basic PHP syntax in the template.
So, foreach your data twice:
first time to do all the data preparations and formatting.
and next time to do the actual output in the template.
So, the PHP code becomes like this
$count['month'] = 0;
foreach($row as $i => $month)
{
$month['lastcourse'] = end($month['course']);
$month['laststart'] = $month['lastcourse']['date'];
$month['enddate'] = new DateTime($month['laststart']);
$month['enddate']->modify('+ '.($month['lastcourse']['length']-1).' days');
$month['iterate'] = new DateTime('01-'.date('m',strtotime($month['laststart'])).'-'.$month['year']);
$month['showyear'] = (!isset($stored['year']) || isset($stored['year']) && $stored['year'] != $month['year']);
$month['monthname'] = mb_strtoupper($month['monthname'],'UTF-8');
$row[$i] = $month;
}
while the template is as clean as this
<div id="calendar">
<?php foreach($row as $month): ?>
<?php if ($month['showyear']): ?>
<?php if ($stored['year']): ?>
<br /><br /><br /><div style="margin-top:-10px"></div>
<?php endif ?>
<span class="year" style="float:left;"><?=$month['year']?></span>
<div style="float:right;margin-top:-20px;padding-right:5px;">
<div style="float:left;font-size:12px;font-weight:bold;">PRICES <?=$month['year']?></div>
<div class="pricelegend" style="background-color:<?=$colorfull?>">fully booked</div>
some code removed
</div><div style="clear:both;"></div><hr width="800px;" align="left"/>
<?php endif ?>
<div class="m_start"><?=$month['monthname']?><br />
<span class="yearsmall"><?=$month['year']?></span>
</div>
<div class="courses">
<div style="float:left;width:10px;"> </div>
<?php endforeach ?>
</div>
I am not going to reformat all your code, but just to give you an idea. | {
"domain": "codereview.stackexchange",
"id": 29202,
"tags": "php, html, datetime, layout"
} |
How to create a 180 degree laserscan from a panning Kinect? | Question:
The Kinect has a narrow (57 degree) field of view, which is limited for obstacle avoidance, navigation and map making as pointed out elsewhere by others. My Kinect is mounted on a pan/tilt mechanism so I should be able to pan the laser and create a wider simulated laserscan. (but with an admittedly lower scan rate than a real laser) An advantage however is that the Kinect should be able to identify obstacles the full height of the robot, rather than just at the laser elevation. Does anyone have experience or example code available for this task?
Originally posted by Bart on ROS Answers with karma: 856 on 2011-04-19
Post score: 2
Answer:
Based on some help in previous ROS questions/answers I have developed a nodelet "pipeline" that augments the pointcloud_to_laserscan package provided in turtlebot. This pipeline takes the pointcloud from "cloud_throttle.cpp" and transforms it geometrically based on pan and tilt angles broadcast to tf using pcl_ros::transformPointCloud in a new nodelet "cloud_to_scan_tf.cpp". It then uses a third nodelet "scan_to_wide.cpp" to overlay a sequence of panned narrow scans into a wider 180 degree scan. The result is a simulation of a wider field of view, horizontally mounted, forward facing, planar laser.
The "cloud_to_scan_tf.cpp" nodelet was documented previously as "cloud_to_scanHoriz.cpp" in the "pointcloud to laserscan with transform?" ROS Answers question.
Attached below is the code for the "scan_to_wide" nodelet and a typical launch file. There is a Bool message to start/stop the panning. This nodelet sends a Float32 message to a hardware interface program to move the pan servo and receives a JointState message from the hardware interface program to indicate the pan servo position. The Kinect is tilted downwards 25 degrees to improve visibility just in front of the robot, but tf corrects the laserscan to horizontal. When the robot is moving the Kinect is stationary facing forward. When the robot stops it does a 180 degree pan, which takes about 3 seconds. The Kinect nodelets and other hardware interface programs run on a small netbook (Atom N270 1.6 GHz) at 100% CPU, but only using 20 KB/sec of wireless network bandwidth to display the laserscan on rviz on a remote desktop computer.
I'm interested if anyone has suggestions for improvement or experience integrating a panning Kinect with the navigation stack. As the FAQ suggests, a longer discussion should be moved to the mailing list.
/*
* Copyright (c) 2010, Willow Garage, Inc.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
*
* * Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* * Neither the name of the Willow Garage, Inc. nor the names of its
* contributors may be used to endorse or promote products derived from
* this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
* LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
* CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
* SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
* INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGE.
*/
//scan_to_wide.cpp
//Move a Kinect camera on a pan/tilt base to provide laserscan ranges with a wider field of view.
//Consolidate the overlapping scans to provide a single wider laserScan message
//Panning moves from center, left, center, right, center ...
//A laserscan is published at each position.
//When not panning, the received laserscan for center position is immediately published
//The following messages are sent to the hardware interface program:
// "camera_pan_mode" Boolean message to intiate or stop scanning
// "camera_pan_angle_cmd" Float32 message indicating the position to move the camera to
//The following message is recieved from the hardware interface program:
// "camera_pos" JointState message indicating the actual position of the camera
// "laser_scan_tf" laserscan message is received from pointcloud_to_laserscan package
// "laser_scan_wide" laserscan message is published
#include "ros/ros.h"
#include "pluginlib/class_list_macros.h"
#include "nodelet/nodelet.h"
#include "std_msgs/Bool.h"
#include "std_msgs/Float32.h"
#include "sensor_msgs/JointState.h"
#include "sensor_msgs/LaserScan.h"
#include <vector>
namespace r2PointcloudToLaser
{
class ScanToWide : public nodelet::Nodelet
{
public:
//Constructor
ScanToWide(): kinectFov(1.02974), panAngleErrorLimit(0.035), settleTimeSecs(1.0)
{
};
private:
//Global data
bool cameraPanCmd;
enum CameraPanModeType {FIXED, PAN} cameraPanMode;
enum CameraPanStepType {INIT, LEFT, LEFTSCAN, CENTER, CENTERSCAN, RIGHT, RIGHTSCAN, DONE} cameraPanStep;
enum CameraPanDirection {LEFTDIR, RIGHTDIR} cameraPanDirection;
float panPosition; //camera angular yaw position
ros::Time positionTime;
double settleTimeSecs;
ros::Duration delayTime; //allow camera pan position to settle
double kinectFov; //Kinect horizontal field of view, 59 degrees
double panAngleTarget;
double panAngleErrorLimit; //2 degrees
sensor_msgs::LaserScan laserOut; //laserIn messages overlaid into output scan
//************************************
//Define ROS messages used
ros::Subscriber cameraPanModeSubscriber; //std_msgs::Bool, "camera_pan_mode"
ros::Publisher cameraPanAnglePublisher; //std_msgs::Float32, "camera_pan_angle_cmd"
ros::Subscriber cameraPosSubscriber; //sensor_msgs::JointState, "camera_pos"
ros::Subscriber laserSubscriber; //sensor_msgs::LaserScan, "laser_scan_horiz"
ros::Publisher laserScanPublisher; //sensor_msgs::LaserScan, "laserscan_wide"
//************************************
//Function declarations
void callbackLaserPanCmd(const std_msgs::Bool::ConstPtr& panCmd);
void callbackCameraPos(const sensor_msgs::JointState::ConstPtr& camPos);
void callbackLaserscan(const sensor_msgs::LaserScan::ConstPtr& laserIn);
void initializeLaserOut(const sensor_msgs::LaserScan::ConstPtr& laserIn);
void copyLaserInToOut(const sensor_msgs::LaserScan::ConstPtr& laserIn,
sensor_msgs::LaserScan& laserOut);
void sendCameraPanCmd(enum CameraPanStepType cameraPanStep);
//Nodelet initialization
virtual void onInit()
{
ros::NodeHandle& nodeHandle = getNodeHandle();
ros::NodeHandle& privateNodeHandle = getPrivateNodeHandle();
//Get settle time for camera after movement in seconds
privateNodeHandle.getParam("kinect_fov", kinectFov);
privateNodeHandle.getParam("pan_error_limit", panAngleErrorLimit);
privateNodeHandle.getParam("settle_time", settleTimeSecs);
NODELET_INFO("ScanToWide kinect field of view: %f", kinectFov);
NODELET_INFO("ScanToWide pan error limit: %f", panAngleErrorLimit);
NODELET_INFO("ScanToWide settle time: %f", settleTimeSecs);
//Initialize private data
delayTime.fromSec(settleTimeSecs);
cameraPanCmd = false;
cameraPanMode = FIXED;
cameraPanStep = INIT;
cameraPanDirection = LEFTDIR;
//Set up to process a fixed laserscan
cameraPanModeSubscriber = nodeHandle.subscribe<std_msgs::Bool>("camera_pan_mode", 2, &ScanToWide::callbackLaserPanCmd, this);
cameraPanAnglePublisher = nodeHandle.advertise<std_msgs::Float32>("camera_pan_angle_cmd", 2);
cameraPosSubscriber = nodeHandle.subscribe<sensor_msgs::JointState>("camera_pos", 2, &ScanToWide::callbackCameraPos, this);
laserSubscriber = nodeHandle.subscribe<sensor_msgs::LaserScan>("scan_in", 2, &ScanToWide::callbackLaserscan, this);
laserScanPublisher = nodeHandle.advertise<sensor_msgs::LaserScan>("scan_out", 10);
};
}; //class ScanToWide
//*********************************************************
//ROS message subscription callbacks
//Laser panning command message received from user interface program
void ScanToWide::callbackLaserPanCmd(const std_msgs::Bool::ConstPtr& panCmd)
{
//cameraPanCmd = panCmd->data; //save command state
if (panCmd->data == true) {
//Initiate panning
if (cameraPanMode == FIXED) {
cameraPanMode = PAN;
//Move to left position for first reading, then wait for callback
cameraPanStep = LEFT;
sendCameraPanCmd(cameraPanStep);
}
}
else {
//Stop panning, move to center
cameraPanMode = FIXED;
sendCameraPanCmd(CENTER);
cameraPanStep = DONE;
//Clear existing range values for the next laserscan
laserOut.ranges.assign(laserOut.ranges.size(), laserOut.range_max + 1.0);
}
}
//Camera position message received from hardware interface program
//notify laserscan callback that it is in position to save scan
void ScanToWide::callbackCameraPos(const sensor_msgs::JointState::ConstPtr& camPos)
{
float panAngle;
//Process position if in panning mode
if (cameraPanMode == PAN){
panAngle = camPos->position[0];
if (fabs(panAngle - panAngleTarget) < panAngleErrorLimit) {
//Notify laserscan callback that camera is in correct position
if (cameraPanStep == LEFT)
cameraPanStep = LEFTSCAN;
else if (cameraPanStep == CENTER)
cameraPanStep = CENTERSCAN;
else if (cameraPanStep == RIGHT)
cameraPanStep = RIGHTSCAN;
//Save time of camera position message
positionTime = camPos->header.stamp;
} //if panAngle
} //if cameraPanMode
} //callbackCameraPos
//Laserscan message received from pointcloud_to_laserscan program
void ScanToWide::callbackLaserscan(const sensor_msgs::LaserScan::ConstPtr& laserIn)
{
//Format wide laserscan based on narrow laserscan on first pass (once)
if (cameraPanStep == INIT) {
//First pass, initialize laserOut properties based on laserIn
initializeLaserOut(laserIn);
cameraPanStep = DONE;
} //if camerPanStep = INIT
//Copy narrow laserscan to wide laserscan and publish
copyLaserInToOut(laserIn, laserOut);
laserOut.header.stamp = ros::Time::now();
laserScanPublisher.publish(laserOut);
if (cameraPanMode == PAN) {
//Control panning movement
//Wait for pan motion to settle and a new laserscan before moving again
if ((laserIn->header.stamp - positionTime) > delayTime) {
//At leftmost position
if (cameraPanStep == LEFTSCAN) {
//Move camera to next position
cameraPanStep = CENTER;
cameraPanDirection = RIGHTDIR;
sendCameraPanCmd(cameraPanStep);
} //if panStep LEFTSCAN
//At center position
else if (cameraPanStep == CENTERSCAN) {
//Move camera to next position, depending on previous position
if (cameraPanDirection == RIGHTDIR) {
cameraPanStep = RIGHT;
sendCameraPanCmd(cameraPanStep);
}
else {
cameraPanStep = LEFT;
sendCameraPanCmd(cameraPanStep);
}
} //if panStep CENTERSCAN
//At rightmost position
else if (cameraPanStep == RIGHTSCAN) {
//Move camera to next position
cameraPanStep = CENTER;
cameraPanDirection = LEFTDIR;
sendCameraPanCmd(cameraPanStep);
} //if panStep RIGHTSCAN
} //if position time
} //if cameraPanMode PAN
} //callbackLaserscan
//************************************
//Function declarations
//Initialize the laserOut message
void ScanToWide::initializeLaserOut(const sensor_msgs::LaserScan::ConstPtr& laserIn)
{
laserOut.header.frame_id = laserIn->header.frame_id;
laserOut.time_increment = laserIn->time_increment;
laserOut.scan_time = laserIn->scan_time;
laserOut.range_min = laserIn->range_min;
laserOut.range_max = laserIn->range_max;
laserOut.angle_min = laserIn->angle_min; //panned right
laserOut.angle_max = laserIn->angle_max; //panned left
laserOut.angle_increment = laserIn->angle_increment;
laserOut.ranges.resize(laserIn->ranges.size(), laserOut.range_max + 1.0);
ROS_INFO("initializeLaserOut : wide laserscan initialized");
}
//Copy laserIn ranges to appropriate sector in laserOut
void ScanToWide::copyLaserInToOut(const sensor_msgs::LaserScan::ConstPtr& laserIn,
sensor_msgs::LaserScan& laserOut)
{
//Display incoming laser scan range values for testing
//ROS_INFO("copyLaserInToOut cameraPanStep: %d", cameraPanStep);
//ROS_INFO("laserIn ranges.size: %d", laserIn->ranges.size());
//for (unsigned int i = 0; i < laserIn->ranges.size(); i++) {
// ROS_INFO("index: %d, range: %f", i, laserIn->ranges[i]);
//}
//Copy input laserscan ranges to output laserscan
for (unsigned int i = 0; i < laserIn->ranges.size(); i++) {
if (laserIn->ranges.at(i) < laserIn->range_max)
laserOut.ranges.at(i) = laserIn->ranges.at(i);
}
}
//Send a camera pan position command message
void ScanToWide::sendCameraPanCmd(enum CameraPanStepType cameraPanStep)
{
std_msgs::Float32 panAngle;
//Set message parameter
if (cameraPanStep == LEFT) {
panAngleTarget = kinectFov;
}
else if (cameraPanStep == RIGHT) {
panAngleTarget = -kinectFov;
}
else if (cameraPanStep == CENTER) {
panAngleTarget = 0.0;
}
else {
panAngleTarget = 0.0; //center
}
panAngle.data = panAngleTarget;
//Send message, then wait for callback
cameraPanAnglePublisher.publish(panAngle);
} //sendCameraPanCmd
PLUGINLIB_DECLARE_CLASS(r2PointcloudToLaser, ScanToWide, r2PointcloudToLaser::ScanToWide, nodelet::Nodelet);
} //r2PointcloudToLaser
<launch>
<!-- kinect and frame ids -->
<include file="$(find openni_camera)/launch/openni_node.launch"/>
<!-- openni manager -->
<node pkg="nodelet" type="nodelet" name="openni_manager" output="screen" respawn="true" args="manager"/>
<!-- throttling -->
<node pkg="nodelet" type="nodelet" name="pointcloud_throttle" args="load r2PointcloudToLaser/CloudThrottle openni_manager">
<param name="max_rate" value="2"/>
<remap from="cloud_in" to="/camera/depth/points"/>
<remap from="cloud_out" to="cloud_throttled"/>
</node>
<!-- pointcloud to laser with transform -->
<node pkg="nodelet" type="nodelet" name="kinect_laser" args="load r2PointcloudToLaser/CloudToScanTf openni_manager">
<param name="base_frame" value="/base_footprint"/>
<param name="laser_frame" value="/camera_tower"/>
<param name="min_height" value="0.07"/>
<param name="max_height" value="0.75"/>
<remap from="cloud_in" to="cloud_throttled"/>
<remap from="scan_out" to="laser_scan_tf"/>
</node>
<!-- laser scan to laser scan wide -->
<node pkg="nodelet" type="nodelet" name="laser_wide" args="load r2PointcloudToLaser/ScanToWide openni_manager">
<param name="kinect_fov" value="1.02974"/>
<param name="pan_error_limit" value="0.035"/>
<param name="settle_time" value="2.0"/>
<remap from="scan_in" to="laser_scan_tf"/>
<remap from="scan_out" to="laser_scan_wide"/>
</node>
<node pkg="kinect_base" type="kinect_base_node" name="kinect_base_control" />
</launch>
Originally posted by Bart with karma: 856 on 2011-04-19
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by Bart on 2011-04-23:
I can report that a panning Kinect to laserscan can work well with gmapping. I only update the map when a new scan comes in and only send a wide scan when stationary, after a scan is completed. The scan matching localization in gmapping is impressive. Compensates for poor odometry. | {
"domain": "robotics.stackexchange",
"id": 5403,
"tags": "ros, kinect, pointcloud-to-laserscan"
} |
What is the formula used for atmospheric visibility? | Question: I am working on NCEP GFS data to plot various maps. One such plot I would like to have is visibility. I have seen some of the sites showing visibility charts and the source of the data was shown as NCEP GFS.
I could not find any variable related to visibility in the GFS file.
Is there any specific formula available for calculating visibility?
Answer: I calculate surface visibility from WRF output using a calculation that I adapted from DTC's Unified Post Processor, specifically from their Fortran routine found in UPPV2.2/src/unipost/CALVIS.f. The calculation is based on hydrometeor mixing ratios, and air temperature and pressure, all from the lowest model layer. If your GFS output has hydrometeor mixing ratios, you can use this formula to calculate visibility. The documentation in the original code reads:
This routine computes horizontal visibility at the
surface or lowest model layer, from qc, qr, qi, and qs.
qv--water vapor mixing ratio (kg/kg)
qc--cloud water mixing ratio (kg/kg)
qr--rain water mixing ratio (kg/kg)
qi--cloud ice mixing ratio (kg/kg)
qs--snow mixing ratio (kg/kg)
tt--temperature (k)
pp--pressure (Pa)
If iice=0:
    qprc=qr, qcld=qc
    if T>0C: qrain=qr, qclw=qc, qsnow=0,  qclice=0
    if T<0C: qrain=0,  qclw=0,  qsnow=qs, qclice=qc
If iice=1:
    qprc=qr+qs, qcld=qc+qi
    qrain=qr, qclw=qc, qsnow=qs, qclice=qc
Independent of the above definitions, the scheme can use different
assumptions of the state of hydrometeors:
meth='d': qprc is all frozen if T<0, liquid if T>0
meth='b': Bocchieri scheme used to determine whether qprc
is rain or snow. A temperature assumption is used to
determine whether qcld is liquid or frozen.
meth='r': Uses the four mixing ratios qrain, qsnow, qclw,
and qclice
The routine uses the following
expressions for extinction coefficient, beta (in km**-1),
with C being the mass concentration (in g/m**3):
cloud water: beta = 144.7 * C ** (0.8800)
rain water: beta = 2.24 * C ** (0.7500)
cloud ice: beta = 327.8 * C ** (1.0000)
snow: beta = 10.36 * C ** (0.7776)
These expressions were obtained from the following sources:
for cloud water: from Kunkel (1984)
for rainwater: from M-P dist'n, with No=8e6 m**-4 and
rho_w=1000 kg/m**3
for cloud ice: assume randomly oriented plates which follow
mass-diameter relationship from Rutledge and Hobbs (1983)
for snow: from Stallabrass (1985), assuming beta = -ln(.02)/vis
The extinction coefficient for each water species present is
calculated, and then all applicable betas are summed to yield
a single beta. Then the following relationship is used to
determine visibility (in km), where epsilon is the threshold
of contrast, usually taken to be .02:
vis = -ln(epsilon)/beta [found in Kunkel (1984)]
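As a quick sanity check of the cloud-water term, the two expressions above can be chained by hand (the concentration used here is a hypothetical value, chosen only to illustrate the formula):

```python
import math

# Hypothetical cloud-water mass concentration, g/m^3
conc_cloud = 0.1

# Extinction coefficient for cloud water (km^-1), per Kunkel (1984):
# beta = 144.7 * C ** 0.88
beta = 144.7 * conc_cloud ** 0.88

# Visibility (km) with contrast threshold epsilon = 0.02:
# vis = -ln(epsilon) / beta
vis = -math.log(0.02) / beta
```

A modest 0.1 g/m^3 of cloud water already limits visibility to roughly 200 m, which is why the cloud-water term dominates in fog.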
I have adapted the code from this routine to a Python function below, which you can use for your purposes:
import numpy as np

def calculate_visibility(qv,qc,qr,qi,qs,T,p):
"""
Calculates visibility based on the UPP algorithm.
See documentation in UPPV2.2/src/unipost/CALVIS.f for the description of
input arguments and references.
"""
Rd = 287.
COEFLC = 144.7
COEFLP = 2.24
COEFFC = 327.8
COEFFP = 10.36
EXPLC = 0.88
EXPLP = 0.75
EXPFC = 1.
EXPFP = 0.7776
Tv = T * (1+0.61*qv) # Virtual temperature
rhoa = p/(Rd*Tv) # Air density [kg m^-3]
rhow = 1e3 # Water density [kg m^-3]
rhoi = 0.917e3 # Ice density [kg m^-3]
vovmd = (1+qv)/rhoa + (qc+qr)/rhow + (qi+qs)/rhoi
conc_lc = 1e3*qc/vovmd
conc_lp = 1e3*qr/vovmd
conc_fc = 1e3*qi/vovmd
conc_fp = 1e3*qs/vovmd
# Make sure all concentrations are positive
conc_lc[conc_lc < 0] = 0
conc_lp[conc_lp < 0] = 0
conc_fc[conc_fc < 0] = 0
conc_fp[conc_fp < 0] = 0
betav = COEFFC*conc_fc**EXPFC\
+ COEFFP*conc_fp**EXPFP\
+ COEFLC*conc_lc**EXPLC\
+ COEFLP*conc_lp**EXPLP+1E-10
vis = -np.log(0.02)/betav # Visibility [km]
vis[vis > 24.135] = 24.135
return vis | {
"domain": "earthscience.stackexchange",
"id": 719,
"tags": "meteorology, atmosphere-modelling, visualization"
} |
What is the relationship between magnetic flux, voltage, induced current and time in a simple AC generator? | Question: I've tried deriving the equations for voltage and magnetic flux over the angle between the rotating coil and the magnetic field vectors in this picture.
But for the induced AC current generated by this rotating coil, how do I derive its equation over time?
If I use P = VI, I get asymptotes from the resulting equation: I = -k*sec(theta).
If I use V = IR, I get the equation: -k*cos(theta), which contradicts this Wikibooks page: https://en.wikibooks.org/wiki/A-level_Physics_(Advancing_Physics)/Induction/Worked_Solutions
I really could use some explanation as to how and why the AC current generated behaves the way it does.
Answer: If you're referring to point 4 in the wiki article, read it again and you'll see that it's not giving the induced current, i.e. that resulting from the emf. To find the induced current you use $I=\frac{V}{R}$ as you suggested.
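To make the time dependence explicit, here is a minimal sketch (the values of Phi_max, omega, and R are hypothetical, and the flux is taken as Phi = Phi_max*cos(omega*t)):

```python
import math

# Hypothetical generator parameters
Phi_max = 0.01  # peak flux linkage, Wb
omega = 100.0   # angular speed of the coil, rad/s
R = 5.0         # circuit resistance, ohm

def emf(t):
    # Faraday's law with Phi = Phi_max*cos(omega*t):
    # V = -dPhi/dt = Phi_max * omega * sin(omega*t)
    return Phi_max * omega * math.sin(omega * t)

def current(t):
    # Induced current from Ohm's law, I = V / R.
    # (P = VI relates power to current; solving it for I is what
    # produced the spurious secant.)
    return emf(t) / R
```

Both V and I vary as sin(omega*t), so the induced current is a sinusoid with peak Phi_max*omega/R, reached whenever the coil plane is parallel to the field.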
[A small point: it's bad to write things like $\Phi= \text{sin}\ \theta$. For one thing the units don't match. You mean $\Phi= \Phi_{max}\ \text{sin}\ \theta$.] | {
"domain": "physics.stackexchange",
"id": 48674,
"tags": "electromagnetism, electric-current, voltage"
} |
A vanilla JavaScript library for typesetting pseudocode in HTML documents | Question: I have this plain JavaScript library for typesetting pseudocode in HTML documents. (See the documentation for details.) (See the GitHub repository - broken as of now.)
Source code
algotype.js
//// ////////////////////////////////////////////////////// ////
// Algotype.js, version 1.6 by Rodion "(code)rodde" Efremov //
////////////////////////////////////////////////////////////////
var Algotype = {};
// The string beginning the comments of the algorithm declaration.
Algotype.ALGORITHM_HEADER_COMMENT_TAG = "#";
// The string beginning the step comments.
Algotype.ALGORITHM_STEP_COMMENT_TAG = "#";
// The width of code line numbers. This default works well. If you, however,
// need to typeset an algorithm with at least 100 rows (in which case the space
// is tight) just increase this constant.
Algotype.LINE_NUMBER_WIDTH = 25;
// The indentation in pixels.
Algotype.INDENTATION_WIDTH = 30;
// Number of pixels between the line number span and the pseudocode span.
Algotype.DISTANCE_BETWEEN_LINE_NUMBER_AND_CODE = 8;
// The URL from which to download the MathJax math typesetting facilities.
Algotype.MATHJAX_SCRIPT_URL =
"https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS_CHTML";
// Configuration for MathJax. This provides in the pseudocode macros for such
// keywords as 'and', 'or', 'not', and others, with the font that matches the
// actual keywords, such as 'for each', 'repeat ... until', etc.
Algotype.MATHJAX_CONFIG =
'MathJax.Hub.Config({' +
"tex2jax: {inlineMath: [['$','$']]}," +
'TeX: {' +
'Macros: {' +
'And: "\\\\mathbf{and}",' +
'Or: "\\\\mathbf{or}",' +
'Not: "\\\\mathbf{not}",' +
'Is: "\\\\mathbf{is}",' +
'In: "\\\\mathbf{in}",' +
'Mapped: "\\\\mathbf{mapped}",' +
'Nil: "\\\\mathbf{nil}"' +
'}' +
'}' +
'});';
Algotype.MATHJAX_CONFIG_MIME_TYPE = "text/x-mathjax-config";
Algotype.UNNAMED_ALGORITHM = "UnnamedAlgorithm";
Algotype.loadMathJax = function() {
// Load the MathJax.
var importedScript = document.createElement("script");
importedScript.async = "true";
importedScript.src = Algotype.MATHJAX_SCRIPT_URL;
document.head.appendChild(importedScript);
// Make MathJax process the configuration.
var mathJaxSettingsScript = document.createElement("script");
mathJaxSettingsScript.type = Algotype.MATHJAX_CONFIG_MIME_TYPE;
mathJaxSettingsScript.innerHTML = Algotype.MATHJAX_CONFIG;
document.head.appendChild(mathJaxSettingsScript);
};
Algotype.getAlgorithmHeaderComment = function (algorithmElement) {
var algorithmHeaderComment =
algorithmElement.getAttribute("comment");
if (!algorithmHeaderComment) {
return "";
}
return " " +
Algotype.ALGORITHM_HEADER_COMMENT_TAG +
" " + algorithmHeaderComment;
};
Algotype.getAlgorithmParameterList = function(algorithmElement) {
var algorithmParameterList =
algorithmElement.getAttribute("parameters") || "";
algorithmParameterList = algorithmParameterList.trim();
if (!algorithmParameterList) {
return "$()$";
}
// Remove the beginning parenthesis, if present.
if (algorithmParameterList[0] === "(") {
algorithmParameterList =
algorithmParameterList.substring(1,
algorithmParameterList.length);
}
// Remove the ending parenthesis, if present.
if (algorithmParameterList[algorithmParameterList.length - 1] === ")") {
algorithmParameterList =
algorithmParameterList
.substring(0, algorithmParameterList.length - 1);
}
// Remove possible leading and trailing space within the parentheses.
algorithmParameterList = algorithmParameterList.trim();
// Split the string into parameter tokens.
var algorithmParameters = algorithmParameterList.split(/\s*,\s*|\s+/);
// Construct the TeX for the algorithm parameter list.
var tex = "$(";
var separator = "";
for (var i = 0; i < algorithmParameters.length; ++i) {
tex += separator;
tex += algorithmParameters[i];
separator = ", ";
}
return tex + ")$";
};
Algotype.getLabelHtml = function(state, label) {
return "<table class='algotype-code-row-table'>\n" +
" <tbody class='algotype-code-row-tbody'\n" +
" <tr class='algotype-algorithm-line'>\n" +
" <td class='algotype-algorithm-line-number'></td>\n" +
" <td class='algotype-line-number-space' width='" +
(Algotype.INDENTATION_WIDTH * state["indentation"] +
Algotype.DISTANCE_BETWEEN_LINE_NUMBER_AND_CODE) +
"px'></td>\n" +
" <td class='algotype-label algotype-text'>" + label +
"</td>\n" +
" </tr>\n" +
" </tbody>\n" +
"</table>\n";
};
Algotype.typesetIf = function(ifElement, state) {
var conditionTeX = (ifElement.getAttribute("condition") || "").trim();
conditionTeX = addTeXDelimeters(conditionTeX);
var htmlText = "";
var comment = ifElement.getAttribute("comment");
var commentId = (ifElement.getAttribute("comment-id") || "").trim();
var idText = "";
if (commentId) {
idText = " id='" + commentId + "' ";
}
if (comment) {
comment = " <span class='algotype-step-comment' " + idText + ">" +
Algotype.ALGORITHM_STEP_COMMENT_TAG + " " +
comment.trim() + "</span>";
}
var ifId = (ifElement.getAttribute("id") || "").trim();
var ifIdTextBegin = "";
var ifIdTextEnd = "";
if (ifId) {
ifIdTextBegin = "<span id='" + ifId + "'>";
ifIdTextEnd = "</span>";
}
htmlText += "<table class='algotype-code-row-table'>\n" +
" <tbody class='algotype-code-row-tbody'>\n" +
" <tr class='algotype-algorithm-line'>\n" +
" <td class='algotype-algorithm-line-number'>" +
state["lineNumber"] +
" </td> " +
" <td class='algotype-line-number-space' width='" +
(Algotype.INDENTATION_WIDTH * state["indentation"] +
Algotype.DISTANCE_BETWEEN_LINE_NUMBER_AND_CODE) +
"px'></td>\n" +
" <td class='algotype-text algotype-keyword'>" +
ifIdTextBegin + "if " +
conditionTeX + ":" + ifIdTextEnd +
(comment ? comment : "") +
" </td> " +
" </tr>\n" +
" </tbody>\n" +
"</table>\n";
var saveIndentation = state["indentation"];
state["lineNumber"]++;
state["indentation"]++;
var childElements = ifElement.children;
for (var i = 0; i < childElements.length; ++i) {
var elementName = childElements[i].tagName.toLowerCase();
var handlerFunction = Algotype.dispatchTable[elementName];
if (handlerFunction) {
htmlText += handlerFunction(childElements[i], state);
} else {
throw new Error("Unknown element: '" + elementName + "'.");
}
}
// Reset the indentation counter.
state["indentation"] = saveIndentation;
return htmlText;
};
Algotype.typesetElseIf = function(elseIfElement, state) {
var conditionTeX = (elseIfElement.getAttribute("condition") || "").trim();
conditionTeX = addTeXDelimeters(conditionTeX);
var htmlText = "";
var comment = elseIfElement.getAttribute("comment");
var commentId = (elseIfElement.getAttribute("comment-id") || "").trim();
var idText = "";
if (commentId) {
idText = " id='" + commentId + "' ";
}
if (comment) {
comment = " <span class='algotype-step-comment' " + idText + ">" +
Algotype.ALGORITHM_STEP_COMMENT_TAG + " " +
comment.trim() + "</span>";
}
var elseIfId = (elseIfElement.getAttribute("id") || "").trim();
var elseIfIdTextBegin = "";
var elseIfIdTextEnd = "";
if (elseIfId) {
elseIfIdTextBegin = "<span id='" + elseIfId + "'>";
elseIfIdTextEnd = "</span>";
}
htmlText += "<table class='algotype-code-row-table'>\n" +
" <tbody class='algotype-code-row-tbody'>\n" +
" <tr class='algotype-algorithm-line'>\n" +
" <td class='algotype-algorithm-line-number'>" +
state["lineNumber"] +
" </td> " +
" <td class='algotype-line-number-space' width='" +
(Algotype.INDENTATION_WIDTH * state["indentation"] +
Algotype.DISTANCE_BETWEEN_LINE_NUMBER_AND_CODE) +
"px'></td>\n" +
" <td class='algotype-text algotype-keyword'>" +
elseIfIdTextBegin + "else if " +
conditionTeX + ":" + elseIfIdTextEnd +
(comment ? comment : "") +
" </td> " +
" </tr>\n" +
" </tbody>\n" +
"</table>\n";
var saveIndentation = state["indentation"];
state["lineNumber"]++;
state["indentation"]++;
var childElements = elseIfElement.children;
for (var i = 0; i < childElements.length; ++i) {
var elementName = childElements[i].tagName.toLowerCase();
var handlerFunction = Algotype.dispatchTable[elementName];
if (handlerFunction) {
htmlText += handlerFunction(childElements[i], state);
} else {
throw new Error("Unknown element: '" + elementName + "'.");
}
}
// Reset the indentation counter.
state["indentation"] = saveIndentation;
return htmlText;
};
Algotype.typesetElse = function(elseElement, state) {
var htmlText = "";
var comment = elseElement.getAttribute("comment");
var commentId = (elseElement.getAttribute("comment-id") || "").trim();
var idText = "";
if (commentId) {
idText = " id='" + commentId + "' ";
}
if (comment) {
comment = " <span class='algotype-step-comment' " + idText + ">" +
Algotype.ALGORITHM_STEP_COMMENT_TAG + " " +
comment.trim() + "</span>";
}
var elseId = (elseElement.getAttribute("id") || "").trim();
var elseIdTextBegin = "";
var elseIdTextEnd = "";
if (elseId) {
elseIdTextBegin = "<span id='" + elseId + "'>";
elseIdTextEnd = "</span>";
}
htmlText += "<table class='algotype-code-row-table'>\n" +
" <tbody class='algotype-code-row-tbody'>\n" +
" <tr class='algotype-algorithm-line'>\n" +
" <td class='algotype-algorithm-line-number'>" +
state["lineNumber"] +
" </td> " +
" <td class='algotype-line-number-space' width='" +
(Algotype.INDENTATION_WIDTH * state["indentation"] +
Algotype.DISTANCE_BETWEEN_LINE_NUMBER_AND_CODE) +
"px'></td>\n" +
" <td class='algotype-text algotype-keyword'>" +
elseIdTextBegin + "else:" + elseIdTextEnd +
(comment ? comment : "") +
" </td> " +
" </tr>\n" +
" </tbody>\n" +
"</table>\n";
var saveIndentation = state["indentation"];
state["lineNumber"]++;
state["indentation"]++;
var childElements = elseElement.children;
for (var i = 0; i < childElements.length; ++i) {
var elementName = childElements[i].tagName.toLowerCase();
var handlerFunction = Algotype.dispatchTable[elementName];
if (handlerFunction) {
htmlText += handlerFunction(childElements[i], state);
} else {
throw new Error("Unknown element: '" + elementName + "'.");
}
}
// Reset the indentation counter.
state["indentation"] = saveIndentation;
return htmlText;
};
Algotype.typesetStep = function(stepElement, state, isReturn) {
var stepText = (stepElement.innerHTML || "").trim();
var inTeX = false;
var htmlText = "";
var call = "";
for (var i = 0; i < stepText.length; ++i) {
var character = stepText[i];
switch (character) {
case '$':
if (!inTeX) {
if (call) {
// Dump the current call.
htmlText +=
" <span " +
"class='algotype-text algotype-algorithm-name'>" +
call +
"</span>";
call = "";
}
inTeX = true;
} else {
inTeX = false;
}
htmlText += "$";
break;
default:
if (inTeX) {
htmlText += character;
} else {
call += character;
}
}
}
if (call) {
// Handling the trailing call sequence.
htmlText += " <span " +
"class='algotype-text algotype-algorithm-name'>" +
call +
"</span>";
}
var comment = stepElement.getAttribute("comment") || "";
var commentId = (stepElement.getAttribute("comment-id") || "").trim();
var idText = "";
if (commentId) {
idText = " id='" + commentId + "'";
}
if (comment) {
comment = " <span class='algotype-step-comment'" + idText + ">" +
Algotype.ALGORITHM_STEP_COMMENT_TAG + " " +
comment.trim() + "</span>";
}
var returnHtml = "";
if (isReturn === true) {
returnHtml = "<span class='algotype-text algotype-keyword'>" +
"return</span> ";
}
var stepId = (stepElement.getAttribute("id") || "").trim();
var stepIdTextBegin = "";
var stepIdTextEnd = "";
if (stepId) {
stepIdTextBegin = "<span id='" + stepId + "'>";
stepIdTextEnd = "</span>";
}
htmlText = "<table class='algotype-code-row-table'>\n" +
" <tbody class='algotype-code-row-tbody'>\n" +
" <tr class='algotype-algorithm-line'>\n" +
" <td class='algotype-algorithm-line-number'>" +
state["lineNumber"] +
"</td> " +
"<td class='algotype-line-number-space' width='" +
(Algotype.INDENTATION_WIDTH * state["indentation"] +
Algotype.DISTANCE_BETWEEN_LINE_NUMBER_AND_CODE) +
"px'></td>" +
"<td class='algotype-text'>" +
stepIdTextBegin +
returnHtml +
htmlText + stepIdTextEnd + comment +
"</td>\n" +
" </tr>\n" +
" </tbody>\n" +
"</table>\n";
state["lineNumber"]++;
return htmlText;
};
Algotype.typesetReturn = function(returnElement, state) {
return Algotype.typesetStep(returnElement, state, true);
};
Algotype.typesetBreak = function(breakElement, state) {
var comment = breakElement.getAttribute("comment") || "";
var commentId = (breakElement.getAttribute("comment-id") || "").trim();
var idText = "";
if (commentId) {
idText = " id='" + commentId + "'";
}
if (comment) {
comment = " <span class='algotype-step-comment'" + idText + ">" +
Algotype.ALGORITHM_STEP_COMMENT_TAG + " " +
comment.trim() + "</span>";
}
var label = breakElement.innerHTML;
var breakId = (breakElement.getAttribute("id") || "").trim();
var breakIdTextBegin = "";
var breakIdTextEnd = "";
if (breakId) {
breakIdTextBegin = "<span id='" + breakId + "'>";
breakIdTextEnd = "</span>";
}
var htmlText =
"<table class='algotype-code-row-table'>\n" +
" <tbody class='algotype-code-row-tbody'>\n" +
" <tr class='algotype-algorithm-line'>\n" +
" <td class='algotype-algorithm-line-number'>" +
state["lineNumber"] + "</td>\n" +
" <td class='algotype-line-number-space' width='" +
(Algotype.INDENTATION_WIDTH * state["indentation"] +
Algotype.DISTANCE_BETWEEN_LINE_NUMBER_AND_CODE) +
"px'></td>\n" +
" <td class='algotype-text algotype-keyword'>" +
breakIdTextBegin + "break " +
(label ? "<span class='algotype-label'>" + label + "</span>" : "") +
breakIdTextEnd + comment +
"</td>\n" +
" </tr>\n" +
" </tbody>\n" +
"</table>\n";
state["lineNumber"]++;
return htmlText;
};
Algotype.typesetContinue = function(continueElement, state) {
var comment = continueElement.getAttribute("comment") || "";
var commentId = (continueElement.getAttribute("comment-id") || "").trim();
var idText = "";
if (commentId) {
idText = " id='" + commentId + "'";
}
if (comment) {
comment = " <span class='algotype-step-comment'" + idText + ">" +
Algotype.ALGORITHM_STEP_COMMENT_TAG + " " +
comment.trim() + "</span>";
}
var label = continueElement.innerHTML;
var continueId = (continueElement.getAttribute("id") || "").trim();
var continueIdTextBegin = "";
var continueIdTextEnd = "";
if (continueId) {
continueIdTextBegin = "<span id='" + continueId + "'>";
continueIdTextEnd = "</span>";
}
var htmlText = "<table class='algotype-code-row-table'>\n" +
" <tbody class='algotype-code-row-tbody'>\n" +
" <tr class='algotype-algorithm-line'>\n" +
" <td class='algotype-algorithm-line-number'>" +
state["lineNumber"] + "</td>\n" +
" <td class='algotype-line-number-space' width='" +
(Algotype.INDENTATION_WIDTH * state["indentation"] +
Algotype.DISTANCE_BETWEEN_LINE_NUMBER_AND_CODE) +
"px'></td>\n" +
" <td class='algotype-text algotype-keyword'>" +
continueIdTextBegin + "continue " +
(label ? "<span class='algotype-label'>" + label + "</span>" : "") +
continueIdTextEnd + comment +
"</td>\n" +
" </tr>\n" +
" </tbody>\n" +
"</table>\n";
state["lineNumber"]++;
return htmlText;
};
Algotype.typesetForEach = function(forEachElement, state) {
var conditionTeX = forEachElement.getAttribute("condition") || "";
conditionTeX = conditionTeX.trim();
if (conditionTeX[0] !== "$") {
conditionTeX = "$" + conditionTeX;
}
if (conditionTeX[conditionTeX.length - 1] !== "$") {
conditionTeX += "$";
}
var label = forEachElement.getAttribute("label");
var htmlText = "";
var comment = forEachElement.getAttribute("comment");
var commentId = (forEachElement.getAttribute("comment-id") || "").trim();
var idText = "";
if (commentId) {
idText = " id='" + commentId + "' ";
}
if (comment) {
comment = " <span class='algotype-step-comment' " + idText + ">" +
Algotype.ALGORITHM_STEP_COMMENT_TAG + " " +
comment.trim() + "</span>";
}
if (label) {
label = label.trim();
if (label[label.length - 1] !== ":") {
label += ":";
}
htmlText += Algotype.getLabelHtml(state, label);
}
var forEachId = (forEachElement.getAttribute("id") || "").trim();
var forEachIdTextBegin = "";
var forEachIdTextEnd = "";
if (forEachId) {
forEachIdTextBegin = "<span id='" + forEachId +"'>";
forEachIdTextEnd = "</span>";
}
htmlText += "<table class='algotype-code-row-table'>\n" +
" <tbody class='algotype-code-row-tbody'>\n" +
" <tr class='algotype-algorithm-line'>\n" +
" <td class='algotype-algorithm-line-number'>" +
state["lineNumber"] +
" </td> " +
" <td class='algotype-line-number-space' width='" +
(Algotype.INDENTATION_WIDTH * state["indentation"] +
Algotype.DISTANCE_BETWEEN_LINE_NUMBER_AND_CODE) +
"px'></td>\n" +
" <td class='algotype-text algotype-keyword'>" +
forEachIdTextBegin + "for each " +
conditionTeX + ":" + forEachIdTextEnd +
(comment ? comment : "") +
" </td> " +
" </tr>\n" +
" </tbody>\n" +
"</table>\n";
var saveIndentation = state["indentation"];
state["lineNumber"]++;
state["indentation"]++;
var childElements = forEachElement.children;
for (var i = 0; i < childElements.length; ++i) {
var elementName = childElements[i].tagName.toLowerCase();
var handlerFunction = Algotype.dispatchTable[elementName];
if (handlerFunction) {
htmlText += handlerFunction(childElements[i], state);
} else {
throw new Error("Unknown element: '" + elementName + "'.");
}
}
// Reset the indentation counter.
state["indentation"] = saveIndentation;
return htmlText;
};
function addTeXDelimeters(code) {
if (!code) {
return "";
}
code = code.trim();
if (code[0] !== "$") {
code = "$" + code;
}
if (code[code.length - 1] !== "$") {
code += "$";
}
return code;
}
Algotype.typesetFor = function(forElement, state) {
var initConditionTeX = forElement.getAttribute("init") || "";
var toConditionTeX = forElement.getAttribute("to") || "";
var stepConditionTeX = forElement.getAttribute("step") || "";
initConditionTeX = addTeXDelimeters(initConditionTeX);
toConditionTeX = addTeXDelimeters(toConditionTeX);
if (stepConditionTeX) {
stepConditionTeX = addTeXDelimeters(stepConditionTeX);
}
var label = forElement.getAttribute("label");
var htmlText = "";
var comment = forElement.getAttribute("comment");
var commentId = (forElement.getAttribute("comment-id") || "").trim();
var idText = "";
var stepText = "";
if (stepConditionTeX) {
stepText = " step " + stepConditionTeX;
}
if (commentId) {
idText = "id='" + commentId + "' ";
}
if (comment) {
comment = " <span class='algotype-step-comment' " + idText + ">" +
Algotype.ALGORITHM_STEP_COMMENT_TAG + " " +
comment.trim() + "</span>";
}
if (label) {
label = label.trim();
if (label[label.length - 1] !== ":") {
label += ":";
}
htmlText += Algotype.getLabelHtml(state, label);
}
var forId = (forElement.getAttribute("id") || "").trim();
var forIdTextBegin = "";
var forIdTextEnd = "";
if (forId) {
forIdTextBegin = "<span id='" + forId + "'>";
forIdTextEnd = "</span>";
}
htmlText += "<table class='algotype-code-row-table'>\n" +
" <tbody class='algotype-code-row-tbody'>\n" +
" <tr class='algotype-algorithm-line'>\n" +
" <td class='algotype-algorithm-line-number'>" +
state["lineNumber"] +
" </td> " +
" <td class='algotype-line-number-space' width='" +
(Algotype.INDENTATION_WIDTH * state["indentation"] +
Algotype.DISTANCE_BETWEEN_LINE_NUMBER_AND_CODE) +
"px'></td>\n" +
" <td class='algotype-text algotype-keyword'>" +
forIdTextBegin + "for " +
initConditionTeX + " to " + toConditionTeX + stepText + ":" +
forIdTextEnd +
(comment ? comment : "") +
" </td> " +
" </tr>\n" +
" </tbody>\n" +
"</table>\n";
var saveIndentation = state["indentation"];
state["lineNumber"]++;
state["indentation"]++;
var childElements = forElement.children;
for (var i = 0; i < childElements.length; ++i) {
var elementName = childElements[i].tagName.toLowerCase();
var handlerFunction = Algotype.dispatchTable[elementName];
if (handlerFunction) {
htmlText += handlerFunction(childElements[i], state);
} else {
throw new Error("Unknown element: '" + elementName + "'.");
}
}
// Reset the indentation counter.
state["indentation"] = saveIndentation;
return htmlText;
};
Algotype.typesetForDownto = function(forDowntoElement, state) {
var initConditionTeX = forDowntoElement.getAttribute("init") || "";
var toConditionTeX = forDowntoElement.getAttribute("to") || "";
var stepConditionTeX = forDowntoElement.getAttribute("step") || "";
initConditionTeX = addTeXDelimeters(initConditionTeX);
toConditionTeX = addTeXDelimeters(toConditionTeX);
if (stepConditionTeX) {
stepConditionTeX = addTeXDelimeters(stepConditionTeX);
}
var label = forDowntoElement.getAttribute("label");
var htmlText = "";
var comment = forDowntoElement.getAttribute("comment");
var commentId = (forDowntoElement.getAttribute("comment-id") || "").trim();
var idText = "";
var stepText = "";
if (stepConditionTeX) {
stepText = " step " + stepConditionTeX;
}
if (commentId) {
idText = "id='" + commentId + "' ";
}
if (comment) {
comment = " <span class='algotype-step-comment' " + idText + ">" +
Algotype.ALGORITHM_STEP_COMMENT_TAG + " " +
comment.trim() + "</span>";
}
if (label) {
label = label.trim();
if (label[label.length - 1] !== ":") {
label += ":";
}
htmlText += Algotype.getLabelHtml(state, label);
}
var forDowntoId = (forDowntoElement.getAttribute("id") || "").trim();
var forDowntoTextBegin = "";
var forDowntoTextEnd = "";
if (forDowntoId) {
forDowntoTextBegin = "<span id='" + forDowntoId + "'>";
forDowntoTextEnd = "</span>";
}
htmlText += "<table class='algotype-code-row-table'>\n" +
" <tbody class='algotype-code-row-tbody'>\n" +
" <tr class='algotype-algorithm-line'>\n" +
" <td class='algotype-algorithm-line-number'>" +
state["lineNumber"] +
" </td> " +
" <td class='algotype-line-number-space' width='" +
(Algotype.INDENTATION_WIDTH * state["indentation"] +
Algotype.DISTANCE_BETWEEN_LINE_NUMBER_AND_CODE) +
"px'></td>\n" +
" <td class='algotype-text algotype-keyword'>" +
forDowntoTextBegin + "for " +
initConditionTeX + " downto " + toConditionTeX + stepText +
":" +
forDowntoTextEnd +
(comment ? comment : "") +
" </td> " +
" </tr>\n" +
" </tbody>\n" +
"</table>\n";
var saveIndentation = state["indentation"];
state["lineNumber"]++;
state["indentation"]++;
var childElements = forDowntoElement.children;
for (var i = 0; i < childElements.length; ++i) {
var elementName = childElements[i].tagName.toLowerCase();
var handlerFunction = Algotype.dispatchTable[elementName];
if (handlerFunction) {
htmlText += handlerFunction(childElements[i], state);
} else {
throw new Error("Unknown element: '" + elementName + "'.");
}
}
// Reset the indentation counter.
state["indentation"] = saveIndentation;
return htmlText;
};
Algotype.typesetForever = function(foreverElement, state) {
var label = foreverElement.getAttribute("label");
var htmlText = "";
var comment = foreverElement.getAttribute("comment");
var commentId = (foreverElement.getAttribute("comment-id") || "").trim();
var idText = "";
if (commentId) {
idText = "id='" + commentId + "' ";
}
if (comment) {
comment = " <span class='algotype-step-comment' " + idText + ">" +
Algotype.ALGORITHM_STEP_COMMENT_TAG + " " +
comment.trim() + "</span>";
}
if (label) {
label = label.trim();
if (label[label.length - 1] !== ":") {
label += ":";
}
htmlText += Algotype.getLabelHtml(state, label);
}
var foreverId = (foreverElement.getAttribute("id") || "").trim();
var foreverIdTextBegin = "";
var foreverIdTextEnd = "";
if (foreverId) {
foreverIdTextBegin = "<span id='" + foreverId + "'>";
foreverIdTextEnd = "</span>";
}
htmlText += "<table class='algotype-code-row-table'>\n" +
" <tbody class='algotype-code-row-tbody'>\n" +
" <tr class='algotype-algorithm-line'>\n" +
" <td class='algotype-algorithm-line-number'>" +
state["lineNumber"] +
" </td> " +
" <td class='algotype-line-number-space' width='" +
(Algotype.INDENTATION_WIDTH * state["indentation"] +
Algotype.DISTANCE_BETWEEN_LINE_NUMBER_AND_CODE) +
"px'></td>\n" +
" <td class='algotype-text algotype-keyword'>" +
foreverIdTextBegin + "forever:" + foreverIdTextEnd +
(comment ? comment : "") +
" </td> " +
" </tr>\n" +
" </tbody>\n" +
"</table>\n";
var saveIndentation = state["indentation"];
state["lineNumber"]++;
state["indentation"]++;
var childElements = foreverElement.children;
for (var i = 0; i < childElements.length; ++i) {
var elementName = childElements[i].tagName.toLowerCase();
var handlerFunction = Algotype.dispatchTable[elementName];
if (handlerFunction) {
htmlText += handlerFunction(childElements[i], state);
} else {
throw new Error("Unknown element: '" + elementName + "'.");
}
}
// Reset the indentation counter.
state["indentation"] = saveIndentation;
return htmlText;
};
Algotype.typesetWhile = function(whileElement, state) {
var conditionTeX = (whileElement.getAttribute("condition") || "").trim();
conditionTeX = addTeXDelimeters(conditionTeX);
var label = whileElement.getAttribute("label");
var htmlText = "";
var comment = whileElement.getAttribute("comment");
var commentId = (whileElement.getAttribute("comment-id") || "").trim();
var idText = "";
if (commentId) {
idText = " id='" + commentId + "' ";
}
if (comment) {
comment = " <span class='algotype-step-comment' " + idText + ">" +
Algotype.ALGORITHM_STEP_COMMENT_TAG + " " +
comment.trim() + "</span>";
}
if (label) {
label = label.trim();
if (label[label.length - 1] !== ":") {
label += ":";
}
htmlText += Algotype.getLabelHtml(state, label);
}
var whileId = (whileElement.getAttribute("id") || "").trim();
var whileIdTextBegin = "";
var whileIdTextEnd = "";
if (whileId) {
whileIdTextBegin = "<span id='" + whileId + "'>";
whileIdTextEnd = "</span>";
}
htmlText += "<table class='algotype-code-row-table'>\n" +
" <tbody class='algotype-code-row-tbody'>\n" +
" <tr class='algotype-algorithm-line'>\n" +
" <td class='algotype-algorithm-line-number'>" +
state["lineNumber"] +
" </td> " +
" <td class='algotype-line-number-space' width='" +
(Algotype.INDENTATION_WIDTH * state["indentation"] +
Algotype.DISTANCE_BETWEEN_LINE_NUMBER_AND_CODE) +
"px'></td>\n" +
" <td class='algotype-text algotype-keyword'>" +
whileIdTextBegin + "while " +
conditionTeX + ":" + whileIdTextEnd +
(comment ? comment : "") +
" </td> " +
" </tr>\n" +
" </tbody>\n" +
"</table>\n";
var saveIndentation = state["indentation"];
state["lineNumber"]++;
state["indentation"]++;
var childElements = whileElement.children;
for (var i = 0; i < childElements.length; ++i) {
var elementName = childElements[i].tagName.toLowerCase();
var handlerFunction = Algotype.dispatchTable[elementName];
if (handlerFunction) {
htmlText += handlerFunction(childElements[i], state);
} else {
throw new Error("Unknown element: '" + elementName + "'.");
}
}
// Reset the indentation counter.
state["indentation"] = saveIndentation;
return htmlText;
};
Algotype.typesetRepeatUntil = function(repeatUntilElement, state) {
var conditionTeX =
(repeatUntilElement.getAttribute("condition") || "").trim();
conditionTeX = addTeXDelimeters(conditionTeX);
var label = repeatUntilElement.getAttribute("label");
var htmlText = "";
var comment = repeatUntilElement.getAttribute("comment");
var commentId =
(repeatUntilElement.getAttribute("comment-id") || "").trim();
var idText = "";
if (commentId) {
idText = " id='" + commentId + "' ";
}
if (comment) {
comment = " <span class='algotype-step-comment' " + idText + ">" +
Algotype.ALGORITHM_STEP_COMMENT_TAG + " " +
comment.trim() + "</span>";
}
if (label) {
label = label.trim();
if (label[label.length - 1] !== ":") {
label += ":";
}
htmlText += Algotype.getLabelHtml(state, label);
}
var repeatUntilId = (repeatUntilElement.getAttribute("id") || "").trim();
var repeatUntilIdTextBegin = "";
var repeatUntilIdTextEnd = "";
if (repeatUntilId) {
repeatUntilIdTextBegin = "<span id='" + repeatUntilId + "'>";
repeatUntilIdTextEnd = "</span>";
}
htmlText += "<table class='algotype-code-row-table'>\n" +
" <tbody class='algotype-code-row-tbody'>\n" +
" <tr class='algotype-algorithm-line'>\n" +
" <td class='algotype-algorithm-line-number'>" +
state["lineNumber"] +
" </td> " +
" <td class='algotype-line-number-space' width='" +
(Algotype.INDENTATION_WIDTH * state["indentation"] +
Algotype.DISTANCE_BETWEEN_LINE_NUMBER_AND_CODE) +
"px'></td>\n" +
" <td class='algotype-text algotype-keyword'>" +
repeatUntilIdTextBegin + "repeat" + repeatUntilIdTextEnd +
" </td> " +
" </tr>\n" +
" </tbody>\n" +
"</table>\n";
var saveIndentation = state["indentation"];
state["lineNumber"]++;
state["indentation"]++;
var childElements = repeatUntilElement.children;
for (var i = 0; i < childElements.length; ++i) {
var elementName = childElements[i].tagName.toLowerCase();
var handlerFunction = Algotype.dispatchTable[elementName];
if (handlerFunction) {
htmlText += handlerFunction(childElements[i], state);
} else {
throw new Error("Unknown element: '" + elementName + "'.");
}
}
// Reset the indentation counter.
state["indentation"] = saveIndentation;
htmlText += "<table class='algotype-code-row-table'>\n" +
" <tbody class='algotype-code-row-tbody'>\n" +
" <tr class='algotype-algorithm-line'>\n" +
" <td class='algotype-algorithm-line-number'>" +
state["lineNumber"] +
" </td> " +
" <td class='algotype-line-number-space' width='" +
(Algotype.INDENTATION_WIDTH * state["indentation"] +
Algotype.DISTANCE_BETWEEN_LINE_NUMBER_AND_CODE) +
"px'></td>\n" +
" <td class='algotype-text algotype-keyword'>" +
repeatUntilIdTextBegin + "until " +
conditionTeX + repeatUntilIdTextEnd +
(comment ? comment : "") +
" </td> " +
" </tr>\n" +
" </tbody>\n" +
"</table>\n";
state["lineNumber"]++;
return htmlText;
};
Algotype.typesetAlgorithm = function(algorithmElement) {
var algorithmName =
algorithmElement.getAttribute("name") || Algotype.UNNAMED_ALGORITHM;
var algorithmParameterList =
Algotype.getAlgorithmParameterList(algorithmElement);
var commentText = Algotype.getAlgorithmHeaderComment(algorithmElement);
var parentNode = algorithmElement.parentNode;
var htmlText =
"<span class='algotype-text algotype-algorithm-name'>" +
algorithmName +
"</span><span class='algotype-text'>" + algorithmParameterList +
commentText +
"</span><br/>";
var childElements = algorithmElement.children;
var state = {
lineNumber: 1,
indentation: 0
};
for (var i = 0; i < childElements.length; ++i) {
var elementName = childElements[i].tagName.toLowerCase();
var handlerFunction = Algotype.dispatchTable[elementName];
if (handlerFunction) {
htmlText += handlerFunction(childElements[i], state);
} else {
throw new Error("Unknown element: '" + elementName + "'.");
}
}
var paragraphElement = document.createElement("p");
paragraphElement.style.textAlign = "left";
paragraphElement.innerHTML = htmlText;
parentNode.appendChild(paragraphElement);
};
Algotype.setup = function() {
// Load MathJax.
Algotype.loadMathJax();
// Typeset all algorithms present in the DOM.
var algorithmList = document.getElementsByTagName("alg-algorithm");
for (var i = 0; i < algorithmList.length; ++i) {
Algotype.typesetAlgorithm(algorithmList[i]);
}
};
Algotype.dispatchTable = {};
Algotype.dispatchTable["alg-foreach"] = Algotype.typesetForEach;
Algotype.dispatchTable["alg-for"] = Algotype.typesetFor;
Algotype.dispatchTable["alg-for-downto"] = Algotype.typesetForDownto;
Algotype.dispatchTable["alg-forever"] = Algotype.typesetForever;
Algotype.dispatchTable["alg-while"] = Algotype.typesetWhile;
Algotype.dispatchTable["alg-repeat-until"] = Algotype.typesetRepeatUntil;
Algotype.dispatchTable["alg-if"] = Algotype.typesetIf;
Algotype.dispatchTable["alg-else-if"] = Algotype.typesetElseIf;
Algotype.dispatchTable["alg-else"] = Algotype.typesetElse;
Algotype.dispatchTable["alg-step"] = Algotype.typesetStep;
Algotype.dispatchTable["alg-return"] = Algotype.typesetReturn;
Algotype.dispatchTable["alg-break"] = Algotype.typesetBreak;
Algotype.dispatchTable["alg-continue"] = Algotype.typesetContinue;
var oldOnloadHandler = window.onload;
window.onload = function() {
if (oldOnloadHandler) {
oldOnloadHandler();
}
Algotype.setup();
};
algotype.css
/*
////////////////////////////////////////////////////////////////
// Algotype.js, version 1.6 by Rodion "(code)rodde" Efremov //
//////////////////////////////////////////////////////////////// */
alg-algorithm {
display: none;
}
.algotype-text {
padding-bottom: 2px;
font-family: Times New Roman;
font-size: 18px;
}
.algotype-keyword {
font-weight: bold;
}
table.algotype-code-row-table {
padding: 0;
margin: 0;
border-collapse: collapse;
margin-bottom: -3px;
}
tbody.algotype-code-row-tbody {
padding: 0;
margin: 0;
margin-bottom: -3px;
}
tr.algotype-algorithm-line {
padding: 0;
margin: 0;
margin-bottom: -3px;
}
td.algotype-algorithm-line-number {
padding: 0;
margin: 0;
font-family: Times New Roman;
font-size: 16px;
font-weight: bold;
width: 20px;
text-align: right;
margin-bottom: 0px;
}
td.algotype-line-number-space {
padding: 0;
margin: 0;
margin-bottom: -3px;
}
.algotype-algorithm-name {
font-variant: small-caps;
font-weight: bolder;
}
.algotype-label {
font-size: 14px;
font-family: monospace;
font-weight: normal;
}
.algotype-step-comment {
font-family: Times New Roman;
font-size: 18px;
font-weight: normal;
font-variant: normal;
}
How it looks
Critique request
At the very least, my code is more or less DRY. I am not a professional JavaScript developer, so if you are, please tell me anything I could improve.
Answer: Looks like no one has read through this code yet, so I guess I'll give it a try. Here are my initial observations, as well as some general suggestions:
You may want to take a look at ES6 classes.
This is entirely up to you, but some developers will alias long DOM calls, e.g. var createElement = document.createElement.bind(document), to cut down on the verbosity of the code. (The bind is needed to keep the document context; an unbound alias throws an "Illegal invocation" error when called.) This allows you to type less and increases readability.
Use of ES6 template literals and arrow functions may help shorten some of your code and increase readability. Instead of this:
return " " +
Algotype.ALGORITHM_HEADER_COMMENT_TAG +
" " + algorithmHeaderComment;
You can do something like this:
return ` ${Algotype.ALGORITHM_HEADER_COMMENT_TAG} ${algorithmHeaderComment}`;
Instead of this:
Algotype.getAlgorithmParameterList = function(algorithmElement) {
var algorithmParameterList =
algorithmElement.getAttribute("parameters") || "";
algorithmParameterList = algorithmParameterList.trim();
...
Why not this?
Algotype.getAlgorithmParameterList = algorithmElement => {
var parameterList = (algorithmElement.getAttribute("parameters") || "").trim();
...
}
As an additional comment on your variable names, long and descriptive variable names are good, but you should strike a balance. Variable names that are too long or convey unneeded information can and should be shortened. For example, algorithmElement could be shortened to element, since one can infer that your functions are supposed to operate on algorithm elements.
Why not regex?
// Remove the beginning parenthesis, if present.
if (algorithmParameterList[0] === "(") {
algorithmParameterList =
algorithmParameterList.substring(1,
algorithmParameterList.length);
}
// Remove the ending parenthesis, if present.
if (algorithmParameterList[algorithmParameterList.length - 1] === ")") {
algorithmParameterList =
algorithmParameterList
.substring(0, algorithmParameterList.length - 1);
}
// Remove possible leading and trailing space within the parentheses.
algorithmParameterList = algorithmParameterList.trim();
You used a regex to split the string, why not use one for removing the parentheses and spaces? You can use a regex similar to this to accomplish the same task as the code block above:
var parameterList = /^\(?\s*([^)]*?)\s*\)?$/.exec(parameterList)[1];
Make sure you thoroughly document this in your code though, because this sacrifices readability for brevity.
At the very least, my code is more or less DRY.
Not really, the code is still a little wet. Could use a little more DRY. It seems like code that is very similar to this is repeated a lot, but with slightly different parameterization.
htmlText += "<table class='algotype-code-row-table'>\n" +
" <tbody class='algotype-code-row-tbody'>\n" +
" <tr class='algotype-algorithm-line'>\n" +
" <td class='algotype-algorithm-line-number'>" +
state["lineNumber"] +
" </td> " +
" <td class='algotype-line-number-space' width='" +
(Algotype.INDENTATION_WIDTH * state["indentation"] +
Algotype.DISTANCE_BETWEEN_LINE_NUMBER_AND_CODE) +
"px'></td>\n" +
" <td class='algotype-text algotype-keyword'>" +
ifIdTextBegin + "if " +
conditionTeX + ":" + ifIdTextEnd +
(comment ? comment : "") +
" </td> " +
" </tr>\n" +
" </tbody>\n" +
"</table>\n";
Why don't you just write a helper method for generating this? You have Algotype.getLabelHtml already, just modify it slightly to look something like this:
Algotype.getLabelHtml = (lineNumber, width, content) => {
return "<table class='algotype-code-row-table'>\n" +
    "    <tbody class='algotype-code-row-tbody'>\n" +
" <tr class='algotype-algorithm-line'>\n" +
" <td class='algotype-algorithm-line-number'>" +
` ${lineNumber}\n` +
" </td>\n" +
` <td class='algotype-line-number-space' width='${width}px'></td>\n` +
` ${content}\n` +
" </tr>\n" +
" </tbody>\n" +
"</table>\n";
};
Refactor common elements. This chunk seems to be in every one of the typeset* functions.
for (var i = 0; i < childElements.length; ++i) {
var elementName = childElements[i].tagName.toLowerCase();
var handlerFunction = Algotype.dispatchTable[elementName];
if (handlerFunction) {
htmlText += handlerFunction(childElements[i], state);
} else {
throw new Error("Unknown element: '" + elementName + "'.");
}
}
You could probably put that in a helper function somewhere.
Have you considered using an XML parser to tokenize this? Or using the native JS DOM to store your generated HTML?
Other than that, the code is mostly understandable and readable. It is a bit monolithic, however, which I suspect is why no one has reviewed it. ~1200 lines of code is a bit of a daunting task to read through. I suggest you break it up into submodules: put all your utility/formatting functions in one file, and all the typesetting functions in another. That's one way to go about doing it.
Happy coding! | {
"domain": "codereview.stackexchange",
"id": 26729,
"tags": "javascript, css, xml, library, tex"
} |
The impact of using different scaling strategy with Clustering | Question: I'm currently learning about clustering. To practice clustering, I am using this dataset.
After running K-means clustering for multiple values of k and plotting the results, I can see that scaling is affecting the results (within-cluster SSE) and I want to use this post to confirm my intuition as to why this is the case.
I don't believe that this is a meaningful reduction in the Within-Cluster SSE because the numerical distances are sensitive to scale, and I don't think that this has any effect on how accurate the model is. Is that intuition correct?
I just wasn't expecting the reduction to be this drastic between standardizing and normalizing.
Code and the results:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
df = pd.read_csv('customers.csv')
X = df.iloc[:, [3, 4]].to_numpy()
from sklearn.preprocessing import StandardScaler, MinMaxScaler
ssc, mmsc = StandardScaler(), MinMaxScaler()
X_ssc = ssc.fit_transform(X)
X_mmsc = mmsc.fit_transform(X)
from sklearn.cluster import KMeans
# Unscaled
k_vals = list(range(2, 21))
WCSSE = []
for k in k_vals:
kmeans = KMeans(n_clusters=k)
model = kmeans.fit(X)
WCSSE.append(model.inertia_)
plt.plot(WCSSE, marker='o', markersize=10)
# Standard Scaler
k_vals = list(range(2, 21))
WCSSE = []
for k in k_vals:
kmeans = KMeans(n_clusters=k)
model = kmeans.fit(X_ssc)
WCSSE.append(model.inertia_)
plt.plot(WCSSE, marker='o', markersize=10)
# MinMax scaler
k_vals = list(range(2, 21))
WCSSE = []
for k in k_vals:
kmeans = KMeans(n_clusters=k)
model = kmeans.fit(X_mmsc)
WCSSE.append(model.inertia_)
plt.plot(WCSSE, marker='o', markersize=10)
Answer:
Is that intuition correct
Yes
There is no improvement in cluster quality. All three are the same, and should be that way.
We can easily observe that all 3 plots form the elbow at the same place (around 2.5 on the x-axis). Every other aspect of the 3 plots is exactly the same as well.
Within-Cluster Sum of Squares (WCSS) is the sum of the squared distances of each point in a cluster from the cluster's centroid. No ratio is involved in this metric (i.e. nothing cancels the impact of scale), hence it will definitely depend on the size of the space and also on the number of clusters.
Imagine your space is the size of the Earth; standardization shrinks it to the size of a football, and normalization makes it even smaller, like a golf ball.
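This scale dependence is easy to verify directly. The sketch below (with a small made-up point set, no sklearn needed) computes the within-cluster SSE of one cluster before and after multiplying every coordinate by a factor s; the SSE shrinks by exactly s squared even though the clustering itself is untouched:

```python
# Within-cluster SSE for one cluster: sum of squared distances to the centroid.
def wcss(points):
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    return sum((p[0] - cx) ** 2 + (p[1] - cy) ** 2 for p in points)

# A toy 2-D cluster (hypothetical data, just for illustration).
cluster = [(15, 39), (16, 81), (17, 6), (18, 77), (19, 40)]

sse = wcss(cluster)

# Rescale every coordinate by s = 0.1 (a crude stand-in for normalization).
s = 0.1
sse_scaled = wcss([(s * x, s * y) for x, y in cluster])

# The ratio is s**2: the metric shrank, but no cluster got "better".
print(sse_scaled / sse)
```

The same argument applies per-feature when each column is divided by its own standard deviation or range: the SSE shrinks mechanically, with no change in cluster membership.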
I just wasn't expecting the reduction to be this drastic between standardizing and normalizing.
Obviously, the distances shrink in proportion to the standard deviation in the case of standardization, and to the range (max − min) in the case of normalization. Big outliers make the difference even larger.
Using the same logic, we can see that the metric decreases with the number of clusters too. More clusters mean each point is closer to its centroid and hence a smaller SSE. That's why it's not a great metric on its own.
You may try calculating the silhouette score, which combines both cohesion and separation, for the 3 cases:
from sklearn.metrics import silhouette_score
silhouette_score(X, kmeans.labels_)
Ref | {
"domain": "datascience.stackexchange",
"id": 7808,
"tags": "python, scikit-learn, k-means"
} |
Translations by a basis vector in a periodic potential | Question: In the case of a periodic potential with translational invariance $ V(x+L) = V(x) $, why do we assume that the the translation operator , when acting on an energy eigenstate , makes the state pick up a phase instead of it remaining completely unchanged? That is,
$$ T_L\psi(x) = e^{i\phi}\psi(x) $$
An obvious response to this is that the probability density shouldn't change on a translation (otherwise the translational symmetry breaks down), i.e. $P(x) = P(x+L)$, and hence the most a wavefunction can do is pick up a phase. But this condition is also satisfied by the wavefunction not changing at all, i.e. $\psi(x+L) = \psi(x)$.
I understand that the different eigenvalues of the translation operator correspond to different values of the wavevector $k$, indicating how the phase of the wavefunction changes when we move spatially. But if the lattice is periodic, shouldn't the wave function be exactly the same (without any change in phase as well) in all the unit cells at equivalent points (say the center) within those cells?
Is it just done to ensure the we remain as general as possible and make the least number of assumptions?
Answer: The first thing to understand is the Hilbert space we're working in. We take $\mathcal H= L^2([-\frac{NL}{2},\frac{NL}{2}])$ (corresponding to $N$ atoms with equal spacing $L$ between them) with periodic boundary conditions (called Born-von Karman conditions). Therefore, for any element of the Hilbert space $\Psi$ we must have that $\Psi(NL/2) = \Psi(-NL/2)$.
Next, the fact that $[H,T_L]=0$ means that we can find an energy eigenbasis $\{\psi_n\}$ which is also an eigenbasis for $T_L$. You are asking why we don't assume that $T_L \psi_n = \psi_n$ for each one of these.
The answer is that any wavefunction $\Psi\in \mathcal H$ can be expanded as $\Psi = \sum_n c_n \psi_n$. The action of $T_L$ on $\Psi$ is then
$$T_L (\Psi) = \sum_n c_n T_L(\psi_n) = \sum_n c_n \psi_n = \Psi $$
which implies that every possible wavefunction (not just energy eigenstates!) is lattice translation-invariant, which would mean that $T_L$ is simply the identity operator on $\mathcal H$.
This is obviously not the case. An arbitrary element of $\mathcal H$ has no symmetry requirements whatsoever (it could be a Gaussian wavepacket located at the origin, for example). Since $T_L$ is not the identity operator on the whole space, it cannot be the identity operator on the chosen basis $\{\psi_n\}$, which means that in general, $T_L \psi_n = \lambda_n \psi_n$ with $\lambda_n \neq 1$.
Though not the identity operator, $T_L$ is unitary, as $\langle T_L \psi, T_L \phi\rangle = \langle \psi,\phi\rangle$. This implies in particular that $|\lambda_n|^2 = 1$, and therefore that $\lambda_n = e^{i\phi_n}$ for some real number $\phi_n$. This is the general idea behind Bloch's theorem.
Bloch's theorem goes on to show that any energy eigenfunction $\psi_n(x)$ can be written as $\psi_n(x) = e^{ikx}u_n(x)$ where $u_n$ is periodic with period $L$, and $k = -\frac{\pi}{L} + m\frac{2\pi}{NL}$ for integer $m$ between $0$ and $N-1$. That is, $k\in [-\pi/L,\pi/L]$ in steps of $\frac{2\pi}{NL}$. We call these states Bloch waves.
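The phase picked up under a lattice translation can be checked numerically. Below is a small sketch (the period $L$, wavevector $k$, and periodic part $u$ are all arbitrary choices, not tied to any particular potential) verifying that a Bloch wave $\psi(x)=e^{ikx}u(x)$ satisfies $\psi(x+L)=e^{ikL}\psi(x)$: the probability density is unchanged while the phase shifts.

```python
import cmath
import math

L = 2.0                   # lattice period (arbitrary choice)
k = 0.7 * math.pi / L     # a wavevector in (-pi/L, pi/L)

def u(x):
    # Any L-periodic function works here.
    return 2.0 + math.cos(2 * math.pi * x / L)

def psi(x):
    # Bloch wave: e^{ikx} u(x)
    return cmath.exp(1j * k * x) * u(x)

phase = cmath.exp(1j * k * L)
for x in [-1.3, 0.0, 0.42, 3.7]:
    # T_L psi(x) = psi(x + L) = e^{ikL} psi(x):
    # same |psi|^2, shifted phase.
    assert abs(psi(x + L) - phase * psi(x)) < 1e-12
```

Note that the phase $e^{ikL}$ is a constant for the whole eigenstate; only for $k = 0$ would the wavefunction literally repeat from cell to cell.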
From a comment,
Is there any physical interpretation to the phase $\phi$?
Yes, absolutely. $\hbar k$ is called the crystal momentum of the state. It's not quite the same as normal momentum, but it's similar in many ways. Crystal momentum is quite frequently conserved during interactions and collision, making it a very useful physical quantity. One can also find the group velocity of localized wavepackets via
$$\mathbf v = \frac{1}{\hbar} \nabla_{\mathbf k} E_n(\mathbf k)$$
where $E_n(k)$ is the energy of the eigenstate $\psi_n = e^{ikx}u_n(x)$. | {
"domain": "physics.stackexchange",
"id": 66798,
"tags": "quantum-mechanics, symmetry"
} |
What is resonance, and are resonance structures real? | Question: My teacher told me about resonance and explained it as different structures which are flipping back and forth and that we only observe a sort of average structure. How does this work? Why do the different structures not exist on their own?
Answer: This answer is intended to clear up some misconceptions about resonance which have come up many times on this site.
Resonance is a part of valence bond theory which is used to describe delocalised electron systems in terms of contributing structures, each only involving 2-centre-2-electron bonds. It is a concept that is very often taught badly and misinterpreted by students. The usual explanation is that it is as if the molecule is flipping back and forth between different structures very rapidly and that what is observed is an average of these structures. This is wrong! (There are molecules that do this (e.g. bullvalene), but the rapidly interconverting structures are not called resonance forms or resonance structures.)
Individual resonance structures do not exist on their own. They are not in some sort of rapid equilibrium. There is only a single structure for a molecule such as benzene, which can be described by resonance. The difference between an equilibrium situation and a resonance situation can be seen on a potential energy diagram.
This diagram shows two possible structures of the 2-norbornyl cation. Structure (a) shows the single delocalised structure, described by resonance whereas structures (b) show the equilibrium option, with the delocalised structure (a) as a transition state. The key point is that resonance hybrids are a single potential energy minimum, whereas equilibrating structures are two energy minima separated by a barrier. In 2013 an X-ray diffraction structure was finally obtained and the correct structure was shown to be (a).
Resonance describes delocalised bonding in terms of contributing structures that give some of their character to the single overall structure. These structures do not have to be equally weighted in their contribution. For example, amides can be described by the following resonance structures:
The left structure is the major contributor but the right structure also contributes, and so the structure of an amide has some double bond character in the C-N bond (i.e. the bond order is >1) and less double bond character in the C-O bond (bond order <2).
The alternative to valence bond theory and the resonance description of molecules is molecular orbital theory. This explains delocalised bonding as electrons occupying molecular orbitals which extend over more than two atoms. | {
"domain": "chemistry.stackexchange",
"id": 6106,
"tags": "resonance, valence-bond-theory"
} |
How do canyons form? | Question: I read that canyons are the result of long-time erosion from a plateau but is there any simulation I could play with to understand the phenomenon better?
Answer: I guess you are talking about subaerial (as opposed to submarine) canyons, and are mostly thinking of numerical (as opposed to physical) models.
I expect there are numerical models suitable for submarine application, but I haven't looked for them. There are physical models of rivers too, but I think they are better suited to modeling 'soft' sedimentary systems like deltas, floodplains, etc.
Here are some examples of numerical models in Python, starting with the research-grade ones (but if it was me I'd probably start with the ones I've thrown at the end):
PyBadlands
an open-source python-based framework that allows for evaluation of sediment transport, landscape dynamics and sedimentary basins evolution under the influence of climate, sea waves and tectonics
Read the paper. Get the code.
Landlab
This is a high-level toolkit for building models, but their docs point to all sorts of examples, several of which have to do with landscape evolution. For example, see Terrainbento and SPACE, below.
Check out the website. Read a nice poster. Get the code.
Terrainbento
This package builds on Landlab (above) to compare a large number of different Earth surface modelling strategies.
Read the paper. Get the code.
SPACE
Another package that builds on Landlab, this time to study the simultaneous modeling of alluvial and bedrock erosion and transport.
Read the paper. Get the code.
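For a feel for what such landscape-evolution models compute, here is a deliberately tiny 1-D sketch combining linear hillslope diffusion with a crude slope-proportional incision term (all parameter values are invented for illustration and are not taken from any of the packages above); repeated erosion steps eat into an uplifted plateau from its edges:

```python
# Toy 1-D landscape evolution: hillslope diffusion + crude incision.
# All parameter values are hypothetical, chosen only for illustration.
dx, dt = 100.0, 50.0   # grid spacing (m), time step (yr)
kappa = 0.05           # hillslope diffusivity (m^2/yr)
kf = 2e-4              # incision coefficient (1/yr)

# Initial plateau, 1 km high, with fixed base level at both ends.
z = [0.0] + [1000.0] * 48 + [0.0]

for step in range(2000):
    new = z[:]
    for i in range(1, len(z) - 1):
        # Linear diffusion (smooths hillslopes).
        diff = kappa * (z[i - 1] - 2 * z[i] + z[i + 1]) / dx ** 2
        # Crude "stream power" stand-in: erosion grows with local slope.
        slope = abs(z[i + 1] - z[i - 1]) / (2 * dx)
        new[i] = z[i] + dt * (diff - kf * slope * z[i])
    z = new

# The plateau edges retreat inward and the total relief decreases.
print(max(z))
```

Real research codes solve 2-D versions of these equations with flow routing, tectonic uplift, and sediment transport, but the basic time-stepping structure is similar.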
Some others I've come across:
An interactive model: https://github.com/fastscape-lem/ipyfastscape
A simple model: https://github.com/johnjarmitage/flem
A simple model, from scratch: https://github.com/mtb-za/landscape_evolution_short_course/blob/master/notebooks/FastScape.ipynb
A global model (!): https://github.com/Geodels/gospl | {
"domain": "earthscience.stackexchange",
"id": 2397,
"tags": "erosion, simulation, landscape"
} |
Condition for steady current and the applicability of Gauss's Law? | Question: The conditions for steady current are often specified as
$$\frac{\partial\rho}{\partial t}=0 \,\,\,\,and\,\,\,\frac{\partial\vec{J}}{\partial t}=0 $$
Combining $\frac{\partial\rho}{\partial t}=0$ with the continuity equation ($\nabla\cdot \vec{J}=-\frac{\partial\rho}{\partial t}$), we get that for a steady current we must have
$$\nabla \cdot \vec{J}=0$$
That is, the divergence of the current density must be zero everywhere for a steady current. I take issue with this implication, for suppose we have an infinite wire of uniform conductivity $\sigma$ and of cross-sectional area $A$ conducting a steady current with current density $\vec{J}$. The current density at every point within the wire is clearly the same. However outside the wire, the current density is zero everywhere (we can assume the wire is immersed in a perfect insulator). That means that at the boundary between the wire and its surroundings, $\vec{J}$ experiences a discontinuous drop. Now my question is whether it is actually correct to say that in steady current conditions, we must necessarily have that $\nabla \cdot \vec{J}=0$. This surely can't be correct because I have just used the most stereotypical and idealized example of steady current (an ideal and infinite wire with truly uniform conductivity) and have shown that even in this extremely simplified and idealized case, we do not have that $\nabla \cdot \vec{J}=0$ for all points in space. So what is going on here? Also, what would the implications of this be for the charge distribution at the boundary? From Ohm's law, we have that
$$\vec{E}=\vec{J}/\sigma$$
$$\Rightarrow \nabla \cdot \vec{E} =\nabla \cdot (\vec{J}/\sigma)$$
Clearly the RHS of the above is undefined at the boundary (both $\sigma$ and $\vec{J}$ experience discontinuities there). So that means the LHS, namely $\nabla \cdot \vec{E}$, is also undefined at the boundary. From Gauss's law, does this not mean that the charge density at the boundary is undefined?
Any help on these issues would be most appreciated!
Answer: Suppose that we have an infinite wire of radius $a$ on the $z$ axis. A steady current in cylindrical coordinates can be described as
$$\vec{J}=J\Theta(a-\rho)\hat{z},$$
where $\rho$ is the axial distance and $\Theta$ is the Heaviside step function.
If you take the divergence of this current density, it takes this form in cylindrical coordinates
$$\vec{\nabla}\cdot\vec{J}=\frac{1}{\rho}\frac{\partial(\rho J_\rho)}{\partial\rho}+\frac1\rho\frac{\partial J_\phi}{\partial\phi}+\frac{\partial J_z}{\partial z}.$$
The only non-zero component of the current density is $J_z$, and it is independent of $z$, so each term of the divergence vanishes for $\rho \neq 0$. (Note that $J_z$ does depend on $\rho$ through the step function, but the divergence contains no $\partial J_z/\partial\rho$ term, so the discontinuity at $\rho=a$ contributes nothing.)
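As a complementary sanity check, the divergence theorem says the net flux of this $\vec{J}$ through any closed coaxial cylinder must vanish. The sketch below (arbitrary values for $a$, $J$, and the cylinder radius; a simple midpoint-rule integral) shows that the side wall contributes nothing since $\vec{J}$ has no radial component, and the inflow through the bottom cap exactly cancels the outflow through the top:

```python
import math

a, J = 0.5, 3.0  # wire radius and current density magnitude (arbitrary)

def Jz(rho):
    # J = J * Theta(a - rho) * z_hat
    return J if rho < a else 0.0

# Closed cylinder of radius R > a, coaxial with the wire.
R, n = 2.0, 100_000

def cap_flux(sign):
    # Midpoint rule for the cap integral 2*pi * int_0^R Jz(rho) rho drho;
    # sign = +1 for the top cap (+z normal), -1 for the bottom cap.
    drho = R / n
    total = 0.0
    for i in range(n):
        rho = (i + 0.5) * drho
        total += Jz(rho) * rho * drho
    return sign * 2 * math.pi * total

# The side wall flux is zero since J has no radial component.
net_flux = cap_flux(+1) + cap_flux(-1)
print(net_flux)  # current in through the bottom equals current out the top
```

Each cap carries a flux of magnitude $\pi a^2 J$, the total current in the wire, so the cancellation is just the statement that the same current enters and leaves the volume.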
We also have to check what happens to this divergence at $\rho=0$, because the first two terms are undefined in that region. To do this, we could integrate $\vec{\nabla}\cdot(A\hat{z})$ over an infinite cylinder of radius $\varepsilon$ around the $z$ axis
$$\iiint_V\vec{\nabla}\cdot(A\hat{z})\,dV=\iint_SA\hat{z}\cdot d\vec{S}=\iint_SA\hat{z}\cdot\hat{\rho}\,dS=0,$$
so $\vec{\nabla}\cdot\vec{J}=0$ everywhere. | {
"domain": "physics.stackexchange",
"id": 77133,
"tags": "electromagnetism, electrostatics, electric-current, charge, maxwell-equations"
} |
What is the purpose of getting a rabies vaccine after exposure? | Question: After exposure to the virus, it is already inside you and your immune system will start to recognize it. Is the vaccine then just a way to kickstart this process so the body can fight off the infection sooner? Is it just a way to introduce much more antigen to the system than would be present only a few hours post exposure from actual virus replication? Or is the boost coming from adjuvants?
Answer: Rabies virus enters the body, typically from a bite, and then enters nerves which it follows up to the brain. An immune response to a first exposure of a pathogen generally takes many days, perhaps weeks, to develop to the point where it's protective. This is often even slower when the pathogen is in nerves, which are relatively sheltered from the immune system, and when there's only a small amount of virus present.
After rabies exposure, people are given two treatments: They're given rabies immune globulin, which contains pre-formed antibodies against rabies, and they're also given the vaccine.
The rabies immune globulin is the most important of these. It provides immediate protection, beginning within minutes of injection. If the virus has not yet entered the nerve cell (which often takes quite a while) then this globulin will bind to and inactivate all, or almost all, of the virus.
Giving the vaccine as well is an added precaution. It will give a faster, probably much faster, response than would the natural virus. There may be only a few rabies viruses present -- it only takes one! -- and of course the natural virus is doing everything in its power to avoid making a strong immune response, whereas the vaccine is the opposite. It has large amounts of antigen, and it's optimized for making a strong immune response. (Rabies vaccine post-exposure is boosted on days 3, 7, and 14 after the first dose. That's an extremely aggressive vaccination procedure that you don't see with any routine treatments.) The vaccine might drive a protective immune response within a few days, while the natural virus might take a month, or never.
If the rabies immune globulin doesn't catch all the virus, or if it temporarily blocks it but then some escapes -- then the vaccine-induced immune response might protect them.
CDC page on post-exposure vaccination
CDC page on post-exposure globulin | {
"domain": "biology.stackexchange",
"id": 9962,
"tags": "immunology, virology"
} |