| anchor | positive | source |
|---|---|---|
Proof of $dE = T dS − P dV + µdN$ | Question: I'm following this book. The author states The First Law of Thermodynamics as follows:
$$dE = đQ + đW + đC$$ , (3.50)
where
– đQ = energy put into the system thermally by an environment, such
as a stove;
– đW = mechanical work performed on the system by forces arising
from pressure, electromagnetism, etc.;
– đC = energy brought into the system, either by particles that arrive
as a result of environmental changes, or by the environment itself.
This includes large-scale potential energy, such as that due to gravity
when we are treating a large system such as an atmosphere.
where $dE$ is the infinitesimal increase in a system’s internal energy.
Then he goes on to replace $đQ$ with $TdS$ using the following argument:
In Section 3.8 we found that entropy is extensive: the total entropy of a
set of interacting systems is always the sum of the systems’ individual entropies, even though this total entropy grows as the systems evolve toward
equilibrium. Indeed, at constant volume and particle number (i.e., for thermal interactions only), the First Law says $dE = đQ$, whereas (3.141) says
$dE = T dS$. We infer that $T dS$ is the desired replacement for $đQ$ that makes
for an exact-differential-only quasi-static version of the First Law. Replacing the “heat into the system”, $đQ$, with $T dS$ is something that we already
saw and used in the discussion around Figure 3.13. We might replace dE
with $đQ$ in that discussion and observe that, whereas we certainly can write
$đQ_1 = T_1 dS_1$ and $đQ_2 = T_2 dS_2$
, we cannot write “$đQ = T dS$” for the entire
evolving system. So, the quasi-static version of the First Law using only exact
differentials applies individually to each subsystem, but not to the combined
system, because it is only the subsystems that are always held very close to
equilibrium.
I don't understand this because it seems like it will work only for constant volume and particle number, and it is also known that $đQ = TdS$ is true only for reversible processes, so we can't infer that $dE = T dS − P dV + µ\,dN$ holds in general.
Also, if the book's reasoning is indeed not well-founded, how to actually prove this?
Edit: I highlighted the exact place where the assumption about constant volume is made. I don't understand how we can then proceed to conclude that something is true for any quasi-static process.
Answer: I think you are correct; the quoted text is not an entirely convincing and clear argument for the validity of the relation $dU = TdS - PdV + \mu\, dN$ in homogeneous simple systems in general.
One way to make such an argument is this: any equilibrium state with defined quantities $U,S,V,N$ can be reached from the reference state by some reversible process, in which $dQ = TdS$ holds all along (which we know from Clausius), and also $dW = -pdV$ holds all along. Consequently, one can define $S$ and $U$ at any equilibrium state of the system, and then $U$ is function of $S,V,N$, and for any infinitesimal variation of $S,V,N$ in the space of thermodynamics states, we have the mathematical relation from multivariate calculus:
$$
dU = \frac{\partial U}{\partial S}\bigg|_{V,N}dS + \frac{\partial U}{\partial V}\bigg|_{S,N}dV + \frac{\partial U}{\partial N}\bigg|_{S,V}dN.
$$
This is valid for any process where $U,S,V,N$ and the corresponding derivatives are defined all along, even for those processes which are not entirely reversible.
Further physics arguments identify
$$
\frac{\partial U}{\partial S}\bigg|_{V,N} = T,
$$
$$
\frac{\partial U}{\partial V}\bigg|_{S,N} = -p,
$$
$$
\frac{\partial U}{\partial N}\bigg|_{S,V} = \mu.
$$
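As a numerical sanity check of these identifications, one can pick a concrete model. The sketch below assumes the monatomic ideal gas, whose energy function is $U(S,V,N) \propto N^{5/3} V^{-2/3} e^{2S/(3Nk_B)}$ (the inverted Sackur–Tetrode relation, with the overall constant and $k_B$ set to 1), differentiates it by finite differences, and confirms that the resulting $T$ and $p$ satisfy $pV = Nk_BT$ and $U = \tfrac{3}{2}Nk_BT$:

```python
import math

def U(S, V, N):
    # Internal energy of a monatomic ideal gas as a function of (S, V, N):
    # inverted Sackur-Tetrode relation, overall constant and k_B set to 1.
    return N**(5/3) * V**(-2/3) * math.exp(2*S / (3*N))

def partial(f, args, i, h=1e-6):
    # Central finite difference with respect to the i-th argument.
    up = list(args); up[i] += h
    dn = list(args); dn[i] -= h
    return (f(*up) - f(*dn)) / (2*h)

S, V, N = 1.2, 2.0, 1.0
T =  partial(U, (S, V, N), 0)   # T =  (dU/dS)_{V,N}
p = -partial(U, (S, V, N), 1)   # p = -(dU/dV)_{S,N}

print(p*V / (N*T))              # ~1.0  (ideal-gas law p V = N k_B T)
print(U(S, V, N) / (1.5*N*T))   # ~1.0  (U = 3/2 N k_B T)
```

Any other energy function $U(S,V,N)$ would pass the same differential test; the ideal gas is just the simplest case where the resulting $T$ and $p$ can be cross-checked against a known equation of state.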
Thus $dQ=TdS$ can be valid even for irreversible processes, provided the variables change in such a way that they are defined all along at all stages of the process. This is when the process can be represented as a curve in the space of thermodynamic equilibrium states. For example, if heat transfer is over a non-zero temperature difference outside the system, but inside the system, temperature is uniform, then $dQ=TdS$, even though the process as a whole is irreversible. | {
"domain": "physics.stackexchange",
"id": 94124,
"tags": "thermodynamics, energy, work, entropy"
} |
Language for teaching basic programming | Question: I'm interested in teaching programming to middle school students. I'd like a programming language with the following criteria:
Simple - pared down to the absolute minimum needed to support sophisticated programming without too much code. As such, for this language, I'm not interested in pointers and am wary of object-orientation (although functions are good).
Powerful - I'd like to be able to program 21st-century elements, including graphics, networking, and distributed processing.
Debuggable - I'd like an elegant Integrated Development Environment with a human-readable debugger (i.e. not some strange error message with a stack trace, but a clear explanation that an average middle school student can use to determine what is wrong with the code).
The standard programming languages (C,C++,C#,Java) fail the first criterion. Basic programming languages like Scratch fail the second (and possibly third) criterion.
Scripting languages (perl, python, php) fail the last criterion.
I'd like to know if someone knows of such a beast, before I sit down to make it up myself.
Answer: I recommend JavaScript.
Just about everyone reading this has access to a development environment by default in their browser.
It's forgiving for new programmers.
It supports a modern feature set.
There's a vast repository of sample code on the internet, quality notwithstanding.
It's a real-world applicable language. | {
"domain": "cs.stackexchange",
"id": 3253,
"tags": "programming-languages, education"
} |
Could my object use static methods? Anything I need to do to make the code better? | Question: With the Zend coding convention in the back of my mind, I have set this up.
It lets you communicate with the API of the vendor TargetSMS. They allow you to bill customers through SMS. For now I've only spent time on the NonSubscription option.
I would just like to know if it's ok, or what could be better. I am mostly interested in knowing if the TargetSms object could use static methods. For example, I think the isAllowedIp method could be static, since I would like to use it even if the object is not instantiated (I was told that's the idea behind static methods).
TargetSMS object
<?php
/**
* TargetSMS object with TargetSMS related methods.
*/
namespace TargetPay\Sms;
class TargetSms
{
/**
* The allowed IP of TargetSms.
* @var array
*/
protected $_targetSmsIp = array('89.184.168.65');
/**
* The ok response code for TargetSMS.
* @var number
*/
protected $_responseCode = 45000;
/**
* Check if the request is coming from TargetSMS.
* @param string $ip
* @return boolean
*/
public function isAllowedIp($ip = '')
{
if (in_array($ip, $this->_targetSmsIp)) {
return true;
}
return false;
}
/**
* Get the TargetSMS required responsecode.
* @return number
*/
public function getResponseCode()
{
return $this->_responseCode;
}
/**
* Add a new allowed ip address to the array with allowed ip addresses.
* @param string $ip
*/
public function addAllowedIp($ip = '')
{
$this->_targetSmsIp[] = $ip;
}
}
Answer: If you wanted to make isAllowedIp static (and there's nothing wrong with that, per se), you indeed don't need an instance of your class to call that method. But owing to there being no instance, you won't have access to any non-static properties of your class, either.
To get around that, you'd have to change:
protected $_targetSmsIp = array('89.184.168.65');
To
protected static $_targetSmsIp = array('89.184.168.65');
And change these methods, too:
public static function isAllowedIp($ip)
{//don't check empty strings, they're always invalid ips!
return !!in_array($ip, self::$_targetSmsIp);
}
public static function addAllowedIp($ip)
{//don't allow '' defaults! adding empty strings aren't valid ips
if (!filter_var($ip, FILTER_VALIDATE_IP))
{//check what you're adding to the OK-list!
throw new InvalidArgumentException($ip. ' is not a valid IP');
}
self::$_targetSmsIp[] = $ip;
}
This, by itself isn't code that makes my eyes water, but it doesn't exactly sit well with me, either: It's hard to tell what the actual task of your class is: validate IP's/data? Is it the API connection layer?
I gather it's the latter. In which case, I'd define my methods to only accept objects of a given type. This object is where you can filter, check and validate all input... For example, the IpObject:
class IpObject
{
private $ip = null;
private static $validIps = array();
public function __construct($ip = null)
{
if ($ip)
{
$this->setIp($ip);
}
}
public function getIp()
{
return $this->ip;
}
public function setIp($ip)
{
if (!filter_var($ip, FILTER_VALIDATE_IP) || !in_array($ip, self::$validIps))
{
throw new InvalidArgumentException($ip.' is not valid');
}
$this->ip = $ip;
return $this;
}
}
The major problem with your creating statics here is that, by changing the allowed IPs for one instance, you're changing the allowed IPs across the board: you can't black-list or OK IPs for instances of your object individually, so after a while it'll get quite tricky to work out which IPs are allowed and which aren't.
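This shared-state pitfall is not PHP-specific; a minimal Python sketch (hypothetical class and names, not the code under review) shows the same effect with class-level, static-like state:

```python
class Gateway:
    # Class-level attribute: one list shared by every instance.
    allowed_ips = ['89.184.168.65']

    def add_allowed_ip(self, ip):
        # append() mutates the shared list in place -- it does not
        # create a per-instance copy.
        self.allowed_ips.append(ip)

a, b = Gateway(), Gateway()
a.add_allowed_ip('10.0.0.1')
print(b.allowed_ips)  # the "other" instance sees the change too
```

The moment one instance adds an address, every other instance (and the class itself) sees it, which is exactly the "across the board" problem described above.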
All in all, an array in this object won't add too much overhead... not that you notice, and it will make life easier when testing/debugging.
Instead of using statics, having an instance at the ready is, 99.99% of the time, the better option. | {
"domain": "codereview.stackexchange",
"id": 4473,
"tags": "php, object-oriented, static"
} |
WAM in URDF for Gazebo | Question:
Hi everyone!!
My name is Ivan; I'm new in this community. I'm writing because for my project I need the WAM robot as a URDF file for Gazebo, but I don't know where to find one (and because of that I've fallen behind on my project; I tried to create one myself, but without good results).
Anyone know where I can find one??
Thanks to everyone in advance.
Greetings
Originally posted by Ivan Rojas Jofre on ROS Answers with karma: 70 on 2011-05-01
Post score: 0
Answer:
If you are looking for an urdf for a wam arm, then take a look at the arm simulator project
http://thearmrobot.com/aboutSimulator.html
Then you can use
$ rosrun gazebo gazebo__model -h
To guide you through converting a urdf to gazebo XML format.
Or you can write a launch script to run gazebo with your urdf. Look in the gazebo_worlds package for examples.
Originally posted by nkoenig with karma: 431 on 2011-06-17
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by Benjamin Blumer on 2013-11-08:
I don't have enough karma to edit this answer. But I've recently made an SDF and some easy-to-use sample plugins for controlling a WAM. The SDF is made from the Institute De Robotica's URDF. Find it here: https://github.com/BenBlumer/Gazebo_WAM | {
"domain": "robotics.stackexchange",
"id": 5488,
"tags": "urdf"
} |
A paradox to law of conservation of angular momentum | Question: I came across the following problem:
A small body of mass $M$ tied to a non-stretchable string moves over a smooth horizontal plane. The other end is being drawn into a hole with constant speed. Find the thread tension as a function of the distance $r$ between the body and the hole if at $r=r_1$ the angular velocity is $\omega=\omega_1$.
The problem is easy. The path will look like
One can easily conserve angular momentum about O which yields:
$rv=r_1v_1$ (where $v$ is the tangential velocity)
$v=r_1v_1/r$
Now the tension supplies the centripetal acceleration, so $T = m\omega^2 r$.
With $\omega = v/r = \omega_1 r_1^2/r^2$ from angular-momentum conservation, this gives $T = m\omega_1^2 r_1^4/r^3$.
But my confusion is regarding some other thing. Let at a time instant the particle be at a distance $r$ from hole and its velocity be $v$ as follows:
Now at that instant the body will have two velocity components: 1) $v$ perpendicular to the rope and 2) $v_2$ radially towards the hole ($v_2$ being the constant speed with which the rope is being drawn). But then the tension will be radially towards the hole and thus perpendicular to the component $v$. Tension being perpendicular to $v$ cannot change the magnitude of $v$ (that's what we were taught). And the radial velocity should always be $v_2$. Thus by this logic the component $v$ should never change and should remain equal to its initial value $v_1$. What is the explanation behind this disagreement?
Answer:
Tension being perpendicular to $v$ cannot change the magnitude of $v$
This statement is the problem. The tension is not perpendicular to the velocity.
The tension is in the radial direction and, as you yourself noted, there is a radial component of the velocity. So the two vectors are not perpendicular, and thus the tension can and does change the magnitude of the velocity.
Edit: the above assumed that $v$ was the velocity, but the OP intended $v$ as just the tangential component of the velocity. That is also problematic for a different reason.
In normal Cartesian coordinates it is true that a force in one direction will not cause any change in the magnitude of the perpendicular component. This is the basis of projectile motion where the horizontal velocity is constant under a vertical force.
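In polar coordinates this stops being true, and one can see it numerically before any formalism. A small sketch (hypothetical numbers): after the string is cut, the particle moves in a straight line with no force acting, yet its polar components $v_r$ and $v_\theta$ evolve exactly as described, while the speed stays constant:

```python
import math

r0, v = 1.0, 2.0   # initial radius and tangential speed (string just cut)

def polar_components(t):
    # Straight-line, force-free motion: x = r0, y = v*t.
    x, y = r0, v*t
    r = math.hypot(x, y)
    vr = v*y / r        # v_r = dr/dt
    vtheta = v*x / r    # v_theta = r * dtheta/dt
    return vr, vtheta

for t in (0.0, 1.0, 10.0, 1000.0):
    print(t, polar_components(t))
# v_r grows from 0 toward v, v_theta decays from v toward 0,
# while the speed sqrt(v_r^2 + v_theta^2) stays exactly v.
```

So the components trade off with no force at all, purely because the basis vectors $\hat r$ and $\hat\theta$ rotate from point to point.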
This same reasoning does not hold in polar coordinates where the basis vector in the radial direction is $\hat r(r,\theta)$ and the basis vector in the angular direction is $\hat \theta(r,\theta)$. Consider what happens when the string is cut. At that moment and thereafter the forces in both the $\hat r$ and $\hat \theta$ directions are zero. The ball initially has zero "radial velocity" ($v_r=dr/dt$) and pure "tangential velocity" ($v_\theta=r \ d\theta/dt$). The velocity vector is $\vec v = v_r \hat r + v_\theta \hat \theta$, which remains constant, so $v_r$ and $v_\theta$ change to compensate as $\hat r$ and $\hat \theta$ change from point to point in the polar coordinates. Specifically, $v_r$ begins increasing and $v_\theta$ begins decreasing. As time goes on, $v_\theta$ gets smaller and smaller approaching zero, and $v_r$ gets larger and larger approaching the initial tangential velocity. All with no force applied. | {
"domain": "physics.stackexchange",
"id": 87426,
"tags": "newtonian-mechanics, forces, angular-momentum, free-body-diagram, string"
} |
Arduino IDE compile error "...undefined reference to `ros::normalizeSecNSec(unsigned long&..." | Question:
When I compile the "Hello World" example in the Arduino IDE I get this message:
HelloWorld.cpp.o: In function `global constructors keyed to nh': HelloWorld.cpp:31: undefined reference to `ros::normalizeSecNSec(unsigned long&, unsigned long&)'
collect2: ld returned 1 exit status
I arrived at this point after painstakingly editing the code of every file in the relevant sequence so that every #include "(some_file_name)" reference contained the full directory address to fix "No such file or directory" error messages. So:
"ros/msg.h"
became
"/home/ryan/sketchbook/libraries/ros/msg.h"
And it seemed to be working (at least it stopped throwing up those error messages) but I can't figure this error out. I'm stuck. Any help would be greatly appreciated.
Thanks for the input, Cody. I've mirrored the formatting as it is in the files and it seems to have worked. Those errors looked a bit like this:
HelloWorld.cpp:7:29: fatal error: std_msgs/String.h: No such file or directory
compilation terminated.
This one is a bit different as it refers to a line of code directly in the "Hello World".pde file. Following the same tactic as before I modified it from:
#include <std_msgs/Strings.h>
to
#include </home/ryan/sketchbook/libraries/std_msgs/Strings.h>
When I do this the compiler posts the error message I listed right at the top of my post. I don't know if this means I fixed that problem and am on to the next, or if the modification caused this error.
I'm wondering if the IDE can't locate everything in my sketchbook for some reason. I assume I messed up something on the setup but I had no errors there that I could tell and the IDE sees the ros example files just fine, they're listed in its menus. Either way I'm still stuck.
[Originally posted](https://answers.ros.org/question/34193/arduino-ide-compile-error-"...undefined-reference-to-`ros::normalizesecnsec(unsigned-long&..."/) by RyanG on ROS Answers with karma: 26 on 2012-05-16
Post score: 0
Original comments
Comment by Cody on 2012-05-16:
Can you post all of the errors and warnings you get? I expect if the change in #include "ros/msg.h" changes things, we'll see some other items. Also, if you want code to format properly, wrap it in backticks or indent it with 4 spaces.
Comment by RyanG on 2012-05-17:
Thanks, Cody. I've updated my post. The forum's displaying the code lines all screwy and I can't change it. No idea why. They should look like this: #include <std_msgs/Strings.h> and #include </home/ryan/sketchbook/libraries/std_msgs/Strings.h>
Comment by Bkat on 2017-04-25:
Have there been any updates on the potential root cause? I've tried a bunch of solutions but nothing seems to work.
Answer:
After Third re-install I got it to work. Don't know why though.
Originally posted by RyanG with karma: 26 on 2012-05-17
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 9428,
"tags": "ros, arduino, ide, rosserial"
} |
How to convert horizontal coordinates using NOVAS? | Question: I'm using NOVAS 3.1. I know that I can convert equatorial coordinates to horizontal coordinates using the equ2hor function.
Is it possible to make NOVAS do the inverse transformation: from horizontal to equatorial? There is no hor2equ, but maybe some other function implements this functionality?
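As a workaround, independent of what NOVAS offers: the inverse is plain spherical trigonometry and is easy to write yourself. A hedged sketch (not NOVAS code; hour angle $H$ taken positive westward and azimuth from north through east, so $\mathrm{RA} = \mathrm{LST} - H$; conventions differ between libraries, so check signs against a known case) with a round-trip check:

```python
import math

def equ_to_hor(dec, ha, lat):
    """(declination, hour angle) -> (altitude, azimuth), all in radians.
    Hour angle positive westward; azimuth from north through east."""
    alt = math.asin(math.sin(dec)*math.sin(lat)
                    + math.cos(dec)*math.cos(lat)*math.cos(ha))
    az = math.atan2(-math.cos(dec)*math.sin(ha),
                    math.sin(dec)*math.cos(lat)
                    - math.cos(dec)*math.cos(ha)*math.sin(lat))
    return alt, az % (2*math.pi)

def hor_to_equ(alt, az, lat):
    """Inverse transform; RA then follows as RA = LST - hour angle."""
    dec = math.asin(math.sin(alt)*math.sin(lat)
                    + math.cos(alt)*math.cos(lat)*math.cos(az))
    ha = math.atan2(-math.cos(alt)*math.sin(az),
                    math.sin(alt)*math.cos(lat)
                    - math.cos(alt)*math.cos(az)*math.sin(lat))
    return dec, ha % (2*math.pi)

# Round trip: equatorial -> horizontal -> equatorial recovers the input.
dec, ha, lat = 0.3, 0.8, 0.9
alt, az = equ_to_hor(dec, ha, lat)
dec2, ha2 = hor_to_equ(alt, az, lat)
print(dec2, ha2)  # ~0.3, ~0.8
```

The two functions are formally identical because the transform is a rotation about the east-west axis; only the interpretation of the angle pairs changes.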
Answer: This is the reply I got from the US Naval Observatory itself:
No, we do not have that transformation in NOVAS. Generally, altitude and azimuth are not determined to very high accuracy, so using them to obtain RA and Dec, which are much more precisely defined and measured, does not make a lot of sense. | {
"domain": "astronomy.stackexchange",
"id": 831,
"tags": "fundamental-astronomy, coordinate, software"
} |
How to determine the equation for the reaction of europium with sulfuric acid based on experimental measurements? | Question: I was given a set of results by a fellow researcher where he reacted $0.608~\mathrm{g}$ of europium ($M = 152$) with an excess of $\ce{H2SO4}$ and collected $144~\mathrm{cm^3}$ of $\ce{H2}$ gas at room temperature and pressure.
I was then asked to derive the equation for the reaction using these figures.
This is what I did.
I know that $1~\mathrm{mol}$ of gas occupies $24000~\mathrm{cm^3}$ at room temperature and pressure.
I found out that $144~\mathrm{cm^3}$ is equivalent to $0.006~\mathrm{mol}$ of $\ce{H2}$.
Then, I did $0.608 / 152$ to get $0.004~\mathrm{mol}$.
This meant that europium and hydrogen gas were in the ratio $2:3$.
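That arithmetic can be double-checked in a few lines (a sketch, using the question's assumed molar volume of $24000~\mathrm{cm^3}$ and $M = 152~\mathrm{g\,mol^{-1}}$):

```python
from fractions import Fraction

n_Eu = 0.608 / 152      # mol of europium metal
n_H2 = 144 / 24000      # mol of H2 at room temperature and pressure
print(n_Eu, n_H2)       # ~0.004 mol Eu, ~0.006 mol H2

# Smallest whole-number ratio Eu : H2
ratio = Fraction(n_Eu / n_H2).limit_denominator(100)
print(ratio)            # 2/3, i.e. 2 Eu : 3 H2
```

The 2 : 3 ratio of Eu to H₂ is what the (+3) oxidation state demands, which is exactly where the equation below goes wrong.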
The equation I wrote was $$\ce{2Eu + H2SO4 -> Eu2SO4 + 3H2}$$
But I am wrong and I don't know where.
Answer: Your equation isn't balanced; the oxidation state for europium is +3 and not +1; and europium sulfate does not exist in the solution you describe.
From this link:
Reaction of europium with acids
Europium metal dissolves readily in dilute sulphuric acid to form solutions containing the very pale pink aquated Eu(III) ion together with hydrogen gas, $\ce{H2}$. It is quite likely that $\ce{Eu^3+}$(aq) exists as largely the complex ion $\ce{[Eu(OH2)9]^3+}$
$$\ce{2Eu(s) + 3H2SO4(aq) -> 2Eu^{3+}(aq) + 3SO4^{2-}(aq) + 3H2(g)}$$ | {
"domain": "chemistry.stackexchange",
"id": 5603,
"tags": "inorganic-chemistry, mole"
} |
How to correctly use Gauss Law? Why is it used the way it is? | Question: Gauss Law
$$\oint \vec E \cdot d \vec A = \frac{Q_{enclosed}}{\epsilon_0}$$ From this question, Gauss's Law is more fundamental than Coulomb's Law. This seems counterintuitive to me when it comes to applying Gauss's Law, in the linked example and in many other cases, like finding the field due to a uniform distribution of charge (a sheet, a cube, an infinite wire), etc.
In all those cases my teachers, as well as the books, pull $\vec E$ out of the integral and use Gauss's Law as $$E \oint dA = \frac{Q_{enclosed}}{\epsilon_0} \implies E = \frac{Q_{enclosed}}{A_{net}\,\epsilon_0}$$
They take it for granted that the field is equal in magnitude (wherever non-zero) on a Gaussian surface without any explanation.
What is the reason behind such an assumption? Please provide a proof.
Does this assumption also hold true for any random(irregular shape) non-uniform distribution of charge?
I have been told that we choose Gaussian surface as per convenience so that accounts for the assumptions.
How to prove that the chosen Gaussian surface make the assumption true? For example how to prove that a spherical Gaussian surface for a sphere makes the assumption valid.
Answer:
They take it for granted that the field is equal in magnitude (wherever non-zero) on a Gaussian surface without any explanation.
They don't "take it for granted." Rather, they are starting you off with a simple pedagogical example.
What is the reason behind such an assumption? Please provide a proof.
There is no general proof, since it is not generally true.
Often one tries to find a specific gaussian surface that makes the problem simple to solve. E.g., a spherical surface surrounding a point charge, in which case the field magnitude is the same on the whole surface. This is only an example; it is not generally true.
Does this assumption also hold true for any random(irregular shape) non-uniform distribution of charge?
No, the "assumption" of constant field magnitude does not hold for an arbitrary surface. But, Gauss's law does still hold for any closed surface.
How to prove that the chosen Gaussian surface make the assumption true? For example how to prove that a spherical Gaussian surface for a sphere makes the assumption valid.
It depends on the specific arrangement of charges and the specific surface of interest. As discussed above your supposed "assumption" is not an assumption, it is just a fact for the specific pedagogical examples you are being taught. | {
"domain": "physics.stackexchange",
"id": 97745,
"tags": "electrostatics, electric-fields, symmetry, gauss-law"
} |
Why does pH change with temperature? | Question: Why does pH change with temperature? I recently read up on some chemistry notes, and found out that the higher the temperature of distilled water, the lower the pH. Why? Does this apply to other fluids too? Does this also mean dipping a litmus paper into 2 beakers of water of different temperature would yield different results?
Answer: Water always contains a certain amount of $\ce{OH-}$ and $\ce{H3O+}$ ions. The pH is the negative decadic logarithm of the $\ce{H3O+}$ concentration ($\mathrm{pH} = -\log c(\ce{H3O+})$). As chemical equilibria may change with temperature, so does the concentration of $\ce{H3O+}$ and therefore the pH.
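To make this concrete, here is a small numeric sketch using approximate literature values of the autoionization constant $K_w$ (the exact numbers vary a little between sources). In pure water $c(\ce{H3O+}) = \sqrt{K_w}$, so a larger $K_w$ at higher temperature means a lower pH, even though the water remains neutral:

```python
import math

# Approximate literature values of Kw for pure water, in mol^2/L^2.
Kw = {0: 1.1e-15, 25: 1.0e-14, 50: 5.5e-14, 100: 5.1e-13}

for t, kw in sorted(Kw.items()):
    h3o = math.sqrt(kw)        # in pure water, c(H3O+) = c(OH-)
    ph = -math.log10(h3o)
    print(f"{t:3d} C  pH = {ph:.2f}")
# roughly: 0 C ~ 7.48, 25 C 7.00, 50 C ~ 6.63, 100 C ~ 6.15
```

Note the water is still neutral at every temperature, since $c(\ce{H3O+}) = c(\ce{OH-})$ throughout; only the numerical value of the "neutral" pH shifts.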
Although I never tried, I suppose this could affect measurements using pH paper. On the other hand, the pH changes might very well be too small to be detectable using this method.
How this affects other fluids / equilibria strongly depends on how each equilibrium is affected by temperature. At higher temperatures equilibria are shifted in the endothermic direction and at lower temperatures they are shifted in the exothermic direction. You will need to look up the details for each equilibrium I suppose. | {
"domain": "chemistry.stackexchange",
"id": 9496,
"tags": "ph, temperature"
} |
Space Elevator: how would you keep the end of a Space Elevator's cable rigid? | Question: I had a discussion with my wife about physics behind a Space Elevator.
Imagine a cable rising 60k into the firmament for its elevator to climb.
The cable would have to rotate in unison with the earth, maintaining a straight line. How?
She believes that you need a rocket or a mass of some size to keep the cable rigid; otherwise the earth would roll up the cable like a spinning spool rolling up its thread.
As a model, I mentioned that someone could spin a rope of some mass around himself, with the centripetal force keeping the rope taut. However this would not work with a small-mass thread.
So....
How much mass would we need to keep the space-elevator cable rigid?
Answer: Your wife could be right about the rocket.
Whenever we say something is small physically, we need to be sure what it is small with respect to. In the case of a thread, the drag force exerted by air beats the centripetal force keeping it taut. Because centripetal forces scale with mass, a denser thread will work.
The issue facing a space elevator is different. For a rigid cable, we need the terminus to be above geostationary orbit. The height of the atmosphere is about 100km from sea level, while geostationary orbit is about 36000 km, so very little of the cable will be exposed to atmospheric drag. What net drag there is will be mostly from the prevailing winds, I suspect. The effect of this will be small but persistent. If it is never corrected, over time the elevator will drift away from vertical. Whether this effect is large enough to matter over, say, the lifetime of a civilization, I don't know.
For the other part of your question - what mass would we need to keep it taut? - I think your intuition about the rope is good. Here our concern isn't wrapping, but whether the rope will collapse. Imagine the rope is made of little beads of mass d$m$ connected by little massless cables of length d$h$. In the rotating frame, a bead at height $h$ in the cable experiences a total force $\mathrm{d}F = -\mathrm{d}m\frac{M_eG}{(h+R_e)^2}+T(h+\mathrm{d}h) - T(h) + \mathrm{d}m \, \omega^2(h+R_e) = 0$ for a system in equilibrium, where $T$ is the tension. If we call the mass per unit length of the cable $\lambda$, then we can turn this into a differential equation:
$$\frac{\mathrm{d}}{\mathrm{d}h} T(h) = \lambda \left ( \frac{M_eG}{(h+R_e)^2} - \omega ^2 (h+R_e) \right )$$
Integrating:
$$ T(h) = \lambda \left ( M_eG \left( \frac{1}{R_e} - \frac{1}{R_e+h} \right ) - \frac{1}{2}\omega^2 (h^2 + 2 h R_e) \right) +C $$
We find the integration constant $C$ by looking at the base of the cable. The tension there must be sufficient to keep the whole cable in place. The force the whole cable exerts at that point is
$$T(0)=C=\int_0^L \lambda \left ( -\frac{M_eG}{(h+R_e)^2} + \omega ^2 (h+R_e) \right ) \mathrm{d}h$$
where $L$ is the total length of the cable. We can see right away that this integral becomes more and more positive as $L$ increases, so even without finishing the integral we know there is some value of $L$ that will make this base tension positive. The condition that the cable stay taut is the condition $T(h) > 0 \;\forall\; h < L$ - that is, there is tension in the cable everywhere but the very end. Plug in the value of $C$ and you will find that if $C$ is positive, this is the case.
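This can be made quantitative with a quick numerical sketch (Earth values assumed: $GM_E \approx 3.986\times10^{14}\ \mathrm{m^3\,s^{-2}}$, sidereal $\omega \approx 7.292\times10^{-5}\ \mathrm{s^{-1}}$, $R_e \approx 6371$ km). Bisecting for the length $L$ at which the base tension $C$ vanishes gives a free-standing uniform cable reaching roughly 144,000 km above the surface:

```python
import math

GM = 3.986e14      # Earth's gravitational parameter, m^3/s^2
Re = 6.371e6       # Earth's radius, m
w  = 7.292e-5      # sidereal rotation rate, rad/s

def base_tension(L):
    # C = integral_0^L (-GM/(h+Re)^2 + w^2 (h+Re)) dh, evaluated
    # analytically, per unit linear density (lambda = 1).
    R = Re + L
    return GM*(1/R - 1/Re) + 0.5*w**2*(R**2 - Re**2)

# Bisection for the cable length L with zero base tension.
lo, hi = 1e7, 1e9
for _ in range(100):
    mid = 0.5*(lo + hi)
    if base_tension(mid) > 0:
        hi = mid
    else:
        lo = mid
print(f"free-standing cable length: L = {lo/1e3:.0f} km above the surface")
```

Any cable longer than this value has positive tension at the base, which is the "some value of $L$" the text refers to; a counterweight simply lets the cable be much shorter.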
So all we technically need is a long cable. But it might be more efficient to have a counterweight at the end. | {
"domain": "physics.stackexchange",
"id": 16967,
"tags": "newtonian-mechanics, newtonian-gravity, centripetal-force"
} |
How do I simplify $O\left({n^2}/{\log{\frac{n(n+1)}{2}}}\right)$ | Question: I'm not very certain about how to deal with asymptotics when they are in the denominator. For $$O\left(\frac{n^2}{\log{\frac{n(n+1)}{2}}}\right)$$, my intuition tells me that it should be treated in a similar way as little o. Would that be correct?
Also, is this in any way comparable with $O(n)$ or $O(n^2)$?
Answer: Your expression is
$$ E = \frac{cn^2}{\log \frac{n(n+1)}{2}}$$
where $c$ is some constant. The simple upper bound for $E$ is
$$ E\le c n^2$$
which implies that $E$ is $\mathcal{O}(n^2)$. For a better bound,
$$E = \frac{cn^2}{\log \frac{n(n+1)}{2}} = \frac{cn^2}{ \log n + \log (n+1) - \log 2 } $$
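Numerically, that denominator is close to $2\log n$ for large $n$, so $E$ tracks $\frac{cn^2}{2\log n}$; a quick sketch (with $c=1$):

```python
import math

def E(n):
    # The expression from the question, with c = 1.
    return n*n / math.log(n*(n+1)/2)

for n in (10, 10**3, 10**6):
    approx = n*n / (2 * math.log(n))
    print(n, E(n) / approx)   # ratio tends to 1 (about 1.026 at n = 10^6)
```

The ratio approaching 1 is exactly the statement that the expression is $\Theta(n^2/\log n)$.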
Now it is an easy verification that $E$ is $\mathcal{O}(\frac{n^2}{\log n})$. In other words, $E$ is $o(n^2)$ (a weaker statement compared to the previous one). | {
"domain": "cs.stackexchange",
"id": 15323,
"tags": "asymptotics, landau-notation, big-o-notation"
} |
Counting solutions of a particular type in HORN SAT | Question: I am interested in counting the number of solutions of a particular type (say #) in HORN SAT. I have 2 questions concerning the same.
Suppose we have a HORN SAT instance: $(x_1) \land (x_2 \implies x_1)$; then the solutions are $(1, 0)$ and $(1,1)$. For solutions of type #, I would like to eliminate $(1,1)$, because after negating $x_2$ we will still have a valid solution. In some sense $(1,1)$ is not a minimal solution. Do solutions of type # have a formal name? It seems natural that SAT solvers must strive to obtain solutions of type # and use those to generate other solutions.
Since the problem of HORN satisfiability is easy, are there efficient algorithms for counting HORN sat solutions? If so, could someone please point me to a good reference.
Answer: Finding the minimal solution can be done in polynomial time (in fact, linear time), using the standard algorithm. There is always only a single minimal solution, and the standard algorithm for testing satisfiability of a HornSAT instance will also give you the minimal solution.
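That standard algorithm is essentially a least-fixpoint computation, short enough to sketch. This is a naive quadratic version for definite clauses (every clause has a head); the linear-time variant keeps per-clause counters, and headless goal clauses would additionally need a final consistency check:

```python
def minimal_model(clauses):
    """Least (minimal) model of a set of definite Horn clauses.

    Each clause is (head, body): the conjunction of the variables in
    body implies head; facts have an empty body. Start from all-false
    and fire clauses until nothing changes -- the result is the unique
    minimal satisfying assignment.
    """
    true = set()
    changed = True
    while changed:
        changed = False
        for head, body in clauses:
            if head not in true and all(v in true for v in body):
                true.add(head)
                changed = True
    return true

# The example from the question: x1, and x2 -> x1.
print(minimal_model([('x1', []), ('x1', ['x2'])]))  # {'x1'}
```

On the question's instance this returns exactly the "type #" solution $(1,0)$: $x_1$ is forced, $x_2$ stays false.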
Counting the number of solutions is #P-complete and thus seems to be quite hard; see Is #HORNSAT polynomial?. | {
"domain": "cs.stackexchange",
"id": 11362,
"tags": "satisfiability, counting, sat-solvers"
} |
A simple pocket calculator | Question: I was searching the internet for some Python exercises, and I found a lab assignment from the University of Toronto. Here's the main part of it:
Question 1.
Welcome Message In the if __name__ == "__main__" block, write code that displays the following: Welcome to the calculator program. Current value: 0
Question 2.
Displaying the Current Value Write a function whose signature is display_current_value(), and which displays the current value in the calculator. In the if __name__ == "__main__" block, test this function by calling it and observing the output.
Question 3.
Addition Write a function whose signature is add(to_add), and which adds to_add to the current value in the calculator, and modifies the current value accordingly. In the if __name__ == "__main__" block, test the function add by calling it, as well as by calling display_current_value().
Hint: when modifying global variables from within functions, remember to declare them as global.
Question 4.
Multiplication Write a function whose signature is mult(to_mult), and which multiplies the current value in the calculator by to_mult, and modifies the current value accordingly. In the if __name__ == "__main__" block, test the function.
Question 5.
Division Write a function whose signature is div(to_div), and which divides the current value in the calculator by to_div, and modifies the current value accordingly. In the if __name__ == "__main__" block, test the function. What values of to_div might cause problems? Try them to see what happens.
Question 6.
Memory and Recall Pocket calculators usually have a memory and a recall button. The memory button saves the current value and the recall button restores the saved value. Implement this functionality.
Question 7.
Undo Implement a function that simulates the Undo button: the function restores the previous value that appeared on the screen before the current one.
Here's my solution:
current_value = 0
memory = current_value
memo = [current_value]

def is_divisible(x, y):
    """Return whether x is divisible by y. """
    return x % y == 0

def update_list(lst, value):
    """Updates a list with an item. """
    lst.append(value)

def display_current_value():
    """Prints the current value of the calculator. """
    print('Current value:', current_value)

def add(to_add):
    """Adds a number to the current value of the calculator. """
    global current_value
    current_value += to_add
    update_list(memo, current_value)

def mult(to_mult):
    """Multiplies the current value of the calculator by a number. """
    global current_value
    current_value *= to_mult
    update_list(memo, current_value)

def div(to_div):
    """Divides the current value of the calculator by a number. """
    global current_value
    if is_divisible(current_value, to_div):
        current_value //= to_div
    else:
        current_value /= to_div
    update_list(memo, current_value)

def store():
    """Stores the current value. """
    global memory
    memory = current_value

def recall():
    """Recalls the saved value. """
    return memory

def undo():
    """Restores the previous value. """
    global memo
    if len(memo) >= 2:
        global current_value
        current_value = memo[-2]
        memo.pop(-2)

def main():
    print('Welcome to the calculator program.')
    display_current_value()
    add(6)
    display_current_value()
    mult(2)
    display_current_value()
    div(3)
    display_current_value()
    for iteration in range(3):
        undo()
        display_current_value()

if __name__ == '__main__':
    main()
Is it possible to implement undo without dealing with other functions such as add? Is the code well-documented?
Please point out any bad practice or any bug I've made. Please point out how some things can be done better. And please note that I'm a beginner and a hobbyist.
Answer: I was asked to rewrite my comments as an answer.
You want to check that your divisor != 0, or catch ZeroDivisionError exception.
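A minimal sketch of the exception-handling alternative, simplified to a pure function rather than the reviewed code's exact shape:

```python
def safe_div(current_value, to_div):
    """Divide, reporting division by zero instead of crashing."""
    try:
        return current_value / to_div
    except ZeroDivisionError:
        print("you tried division by 0")
        return current_value  # leave the value unchanged

print(safe_div(12, 3))  # → 4.0
print(safe_div(12, 0))  # warns, then → 12
```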
You may want to rethink your liberal use of globals. You can have the same behaviour, and more robust code, if you pass the global's value to the function and assign the function's result back.
Here's the code with the above implemented:
$ cat calc.py
def is_divisible(x, y):
    """Return whether x is divisible by y. """
    return x % y == 0

def update_list(lst, value):
    """Updates a list with an item. """
    lst.append(value)
    return lst

def display_current_value(current_value):
    """Prints the current value of the calculator. """
    print('Current value:', current_value)

def add(to_add, current_value, memo):
    """Adds a number to the current value of the calculator. """
    print(current_value, "+", to_add)
    current_value += to_add
    memo = update_list(memo, current_value)
    return current_value

def mult(to_mult, current_value, memo):
    """Multiplies the current value of the calculator by a number. """
    print(current_value, "*", to_mult)
    current_value *= to_mult
    memo = update_list(memo, current_value)
    return current_value

def div(to_div, current_value, memo):
    """Divides the current value of the calculator by a number. """
    print(current_value, "/", to_div)
    if to_div != 0:
        if is_divisible(current_value, to_div):
            current_value //= to_div
        else:
            current_value /= to_div
    else:
        print("you tried division by 0")
    memo = update_list(memo, current_value)
    return current_value

def undo(current_value, memo):
    """Restores the previous value. """
    print("undo")
    if len(memo) >= 2:
        memo.pop()
        current_value = memo[-1]
    return current_value, memo

def main(current_value, memory, memo):
    print('Welcome to the calculator program.')
    display_current_value(current_value)
    current_value = add(6, current_value, memo)
    display_current_value(current_value)
    current_value = mult(2, current_value, memo)
    display_current_value(current_value)
    current_value = div(3, current_value, memo)
    display_current_value(current_value)
    for iteration in range(3):
        # print("memo", memo)
        current_value, memo = undo(current_value, memo)
        display_current_value(current_value)

if __name__ == '__main__':
    current_value = 0
    memory = current_value
    memo = [current_value]
    main(current_value, memory, memo)
$ python3 calc.py
Welcome to the calculator program.
Current value: 0
0 + 6
Current value: 6
6 * 2
Current value: 12
12 / 3
Current value: 4
undo
Current value: 12
undo
Current value: 6
undo
Current value: 0
Note: for completeness' sake, you may want to add a function for subtraction. | {
"domain": "codereview.stackexchange",
"id": 23035,
"tags": "python, beginner, algorithm, python-3.x, calculator"
} |
Web service getting value using LINQ from queue table in SQL database | Question: As a test of my C# skills, I have been asked to create a simple web service which will take a message from a queue table in a SQL database and send it to a web application when a button is pressed on the application. I have never written a web service before, so I am going in a little blind here and was after some advice on whether I have done this correctly or not.
The stored procedure usp_dequeueTestProject gets a value from the top of the list in a table and then deletes that row. At the moment I do not have this being archived anywhere; would it be better practice, instead of deleting the row, to just mark it as sent?
[WebService(Namespace = "http://tempuri.org/")]
[WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
[System.ComponentModel.ToolboxItem(false)]
// To allow this Web Service to be called from script, using ASP.NET AJAX, uncomment the following line.
// [System.Web.Script.Services.ScriptService]
public class Service1 : System.Web.Services.WebService
{
    [WebMethod]
    public string GetDataLINQ()
    {
        try
        {
            TestProjectLinqSQLDataContext dc = new TestProjectLinqSQLDataContext();
            var command = dc.usp_dequeueTestProject();
            string value = command.Select(c => c.Command).SingleOrDefault();
            return value;
        }
        catch (Exception ex)
        {
            return ex.ToString();
        }
    }
}
I've opted for using LINQ for starters; I am not sure if this is the best way to do it or not. I am only passing through the one string as well... In reality I guess you would normally want to send more than one field, such as datetime sent, message type, etc., but I wasn't sure what data type to use for this. I have seen this done using a struct, but wasn't sure if this was correct. Any guidance would be greatly appreciated.
Answer:
A Linq-to-SQL DataContext is an IDisposable, but your code isn't calling its Dispose() method. Wrap it in a using block to ensure proper disposal.
I would use var in the TestProjectLinqSQLDataContext instantiation as well, but that's just personal preference. I find Foo bar = new Foo() redundant.
Using SingleOrDefault assumes that the SP is returning only 1 record. That may very well be the case now, but with the SP being outside of the code, I wouldn't make that assumption. Use FirstOrDefault to grab only the first record - this way, the day the SP is modified to return more than 1 row, your code won't break.
Something like this:
string result;
using (var dc = new TestProjectLinqSQLDataContext())
{
    var command = dc.usp_dequeueTestProject();
    result = command.Select(c => c.Command).FirstOrDefault();
}
return result;
I agree with @Jesse's comment about exceptions usage: the calling code has no way of easily telling a valid response from an error message (exception type, message and stack trace), since your code returns a string in every case.
Let it blow up! | {
"domain": "codereview.stackexchange",
"id": 7671,
"tags": "c#, beginner, linq, asp.net, web-services"
} |
Min weighted edge cover: don't follow proof in Schrijver | Question: I'm reading section 19.3 of Combinatorial Optimization by Schrijver, where he details an algorithm for finding the min-weight edge cover. His method works for general graphs, but I'm particularly interested in bipartite graphs. To find the min-weight edge cover of a graph $G=(V,E)$ with a weight function $w: E \to R$, he first defines a clone graph, $\tilde{G} = (\tilde{V},\tilde{E})$. He then defines a larger graph, $G'=(V',E')$, whose vertex set $V'$ is the union of $V$ and $\tilde{V}$. The edges $E'$ and the weight function $w'$ are as follows:
$w'(e)=w'(\tilde{e})=w(e)$ for $e \in E$
$w'(v,\tilde{v})=2\mu(v)$ for each $v \in V$ where $\mu(v)$ is the minimum weight edge of $G$ incident on $v$.
Now, we construct a minimum weight perfect matching, $M$ for $G'$ and this yields a minimum weight edge cover $F$ for $G$ once we replace any edge $v\tilde{v}$ in $M$ by an edge $e_v$ of minimum weight of $G$ incident on $v$.
Now, for the proof that this works, the author notes that $w(F)=\frac{1}{2}w'(M)$. So far so good.
Then, he states that any edge cover $F'$ for $G$ gives by reverse construction a perfect matching $M'$ in $G'$ with $w'(M')\leq 2w(F')$.
This is the part I don't understand. How does one go about constructing this new perfect matching, $M'$ and further prove the inequality?
Answer: Let $M$ be a maximal subset of $F'$ which is a matching. Any vertex $v$ not covered by $M$ must be covered by some edge $e_v=(v,w)$ in $F'$. Since $M$ is maximal, $w$ must be covered by $M$. It follows that if $v_1 \neq v_2$ are not covered by $M$, then $e_{v_1} \neq e_{v_2}$.
The matching $M'$ consists of the two copies of each edge in $M$, and of the edges $(v,\tilde{v})$ for each $v$ not covered by $M$. Edges of the first type have total weight $2w(M)$. Since $w(v,\tilde{v}) \leq 2w(e_v)$, edges of the second type have total weight at most $2w(F' \setminus M)$ (this crucially uses that $e_{v_1} \neq e_{v_2}$ for any $v_1 \neq v_2$ not covered by $M$). Hence $w(M') \leq 2w(F')$. | {
"domain": "cs.stackexchange",
"id": 17320,
"tags": "graphs, bipartite-matching"
} |
Combining .txt files | Question: A guy at the company I work for needed a small application that would combine multiple text files into one, larger text file.
I wrote a console application for this. It seems pretty efficient, but I was wondering if there would be an even more efficient way of doing this.
It has 2 important functions, one that gets the files from a folder, where string input is the folder location:
static string[] getFiles(string input)
{
    DirectoryInfo dinfo = new DirectoryInfo(@input);
    FileInfo[] files = dinfo.GetFiles("*.txt");
    List<string> list = new List<string>();
    foreach (FileInfo file in files)
    {
        list.Add(input + @"\" + file.Name);
    }
    string[] arr = list.ToArray();
    return arr;
}
And of course the function that combines the files together; its inputs are the name of the file (string newName) and an array with the names of the files found in the folder by getFiles() (string[] files):
static void writeDump(string newName, string[] files)
{
    if (!File.Exists(newName))
    {
        using (StreamWriter sw = File.CreateText(newName))
        {
            for (int i = 0; i < files.Length; i++)
            {
                using (StreamReader sr = File.OpenText(files[i]))
                {
                    string s = "";
                    while ((s = sr.ReadLine()) != null)
                    {
                        sw.WriteLine(s);
                    }
                }
            }
        }
    }
    else
    {
        Console.Clear();
        Console.ForegroundColor = ConsoleColor.Red;
        Console.WriteLine("File already exists");
        start(); // start is called from the main function
    }
}
And because start(); might be confusing, I'll also add the main function here:
static void Main(string[] args)
{
    start();
}
How efficient is this and could it be more efficient?
Answer:
list.Add(input + @"\" + file.Name);
Seems a bit pointless: file.FullName would get you the fully qualified name without throwing information away and reconstructing it. In fact, that method could be simplified with Linq to
static string[] getFiles(string input)
{
    DirectoryInfo dinfo = new DirectoryInfo(@input);
    return dinfo.GetFiles("*.txt").Select(f => f.FullName).ToArray();
}
I also note that the .Net convention for the name would be GetFiles with an initial uppercase letter.
for (int i = 0; i < files.Length; i++)
{
    using (StreamReader sr = File.OpenText(files[i]))
    {
        string s = "";
        while ((s = sr.ReadLine()) != null)
        {
            sw.WriteLine(s);
        }
    }
}
Since you don't care about i you could simplify things with foreach; and the initial value of s is unnecessary, so you could have
foreach (var filename in files)
{
    using (StreamReader sr = File.OpenText(filename))
    {
        string s;
        while ((s = sr.ReadLine()) != null)
        {
            sw.WriteLine(s);
        }
    }
}
But now we get to two key points of the requirements which aren't explicitly stated:
If the files don't end with newlines, this code will insert newlines. That may or may not be intended, and it may or may not be desirable.
This code is using an Encoding to parse the bytes to strings, then using an Encoding to convert the strings back to bytes. The particular encoding used is implicit. This isn't particularly efficient, but it does have some benefits:
If the files were generated by Microsoft tools, they are quite likely to start with BOMs (even if they're UTF-8). In the nasty case that they mix UTF-8-BOM, UTF-8, and UTF-16, you rely on the encoding conversion.
Even if the files are consistent, you're going to avoid the appearance of BOMs embedded in the text that a straightforward byte-by-byte concatenation would give.
It also has at least one non-performance-related disadvantage:
Regardless of the encoding of the input files, the output file is likely to be UTF-8-BOM, which may be an undesirable side-effect if they were all UTF-8 or UTF-16.
If you wanted a straight byte-by-byte concatenation, it would be more efficient to use
using (var strmOut = File.Create(newName))
{
    foreach (var filename in files)
    {
        using (var strmIn = File.OpenRead(filename))
        {
            strmIn.CopyTo(strmOut);
        }
    }
}
If you can guarantee that the input files are all UTF-8-BOM then it would be more efficient to use
using (var strmOut = File.Create(newName))
{
    foreach (var filename in files)
    {
        using (var strmIn = File.OpenRead(filename))
        {
            strmIn.Position = 3;
            strmIn.CopyTo(strmOut);
        }
    }
}
although that's not production-quality code (should check that there are 3 bytes and that they correspond to a BOM). | {
"domain": "codereview.stackexchange",
"id": 23958,
"tags": "c#, performance"
} |
Bash on Synology - Delete recycle bin entries over X days old | Question: Synology NASs have a Task Scheduler that allows you to schedule deletion of files over X days old from recycle bins. This feature isn't working for me: running the task results in no files being deleted. The task works if I schedule a full deletion of the recycle bin, but I'd prefer to only delete a file if it sits in the bin for more than X days. As such, I've created this user script to do so. I want to make sure that it deletes exactly as I want.
Only entries within the recycle bin of the specific shared folder /volume1/share1/#recycle/
Only files that are over 60 days old
Not the recycle bin folder itself
Delete empty folders
Script:
deletepath="/volume1/share1/#recycle/"
logpath="/volume1/share2/SynoScripts/logs/deleteOlderThanXDays.txt"
errorpath="/volume1/share2/SynoScripts/errors/deleteOlderThanXDays.txt"
now=`date "+%Y-%m-%d %H:%M:%S"`
echo "" >> $logpath
echo $now >> $logpath
echo "" >> $errorpath
echo $now >> $errorpath
# Delete files
/usr/bin/find $deletepath -type f -mtime +60 -exec rm -v {} \; >>$logpath 2>>$errorpath
# Delete empty folders
/usr/bin/find $deletepath -mindepth 1 -type d -empty -exec rmdir {} \; >>$logpath 2>>$errorpath
Does this script appear to satisfy my requirements?
Answer: If this is intended to be run as a command, I recommend you add a suitable shebang line. Although the question is tagged bash, there's nothing here that isn't portable POSIX shell, so I recommend
#!/bin/sh
Is it intentional that the paths all share a common initial prefix /volume1 and that the log and error paths share a longer common prefix? If so, encode that for easier re-use:
volume=/volume1
scriptdir=$volume/share2/SynoScripts
logpath=$scriptdir/logs/deleteOlderThanXDays.txt
errorpath=$scriptdir/errors/deleteOlderThanXDays.txt
Personally, I'd call those last two logfile and errorfile for clarity.
There's no need to quote the values in these assignments, but the values should be quoted when used later, so that we don't break the script when they change to include spaces or other significant characters.
Instead of multiple echo commands, consider using a single date with tee:
date '+%n%Y-%m-%d %T' | tee -a "$logpath" >>"$errorpath"
After that, we can simply redirect all output and errors:
exec >>"$logpath" 2>>"$errorpath"
When using find, prefer to group many arguments into a few commands, using + instead of \;:
find "$deletepath" \! -type d -mtime +60 -exec rm -v '{}' +
find "$deletepath" -mindepth 1 -type d -empty -exec rmdir -v '{}' +
I assume you meant to use -v consistently for both commands here.
Modified version
#!/bin/sh
volume=/volume1
scriptdir="$volume/share2/SynoScripts"
deletepath="$volume/share1/#recycle"
logpath="$scriptdir/logs/deleteOlderThanXDays.txt"
errorpath="$scriptdir/errors/deleteOlderThanXDays.txt"
# redirect all output and errors
exec >>"$logpath" 2>>"$errorpath"
# log the start time to both logs
date '+%n%Y-%m-%d %T' | tee -a "$errorpath"
# delete old non-directory files (including devices, sockets, etc)
find "$deletepath" \! -type d -mtime +60 -exec rm -v '{}' +
# delete empty directories, regardless of age
find "$deletepath" -mindepth 1 -type d -empty -exec rmdir -v '{}' + | {
"domain": "codereview.stackexchange",
"id": 37644,
"tags": "bash, file-system, scheduled-tasks"
} |
Does ROS Python library shadow or break other python systems? | Question:
ROS puts /opt/ros/kinetic/lib/python2.7/dist-packages in PYTHONPATH. This is then prepended to the system path when python2 or python3 is used. Is this likely to shadow or break other python packages from Anaconda3 (in python3 or python2 virtual environments)? I looked in the ROS directory and it all seems very ROS-specific, but since I am completely new to python I am not sure.
I have solved my problems with Anaconda3 by changing the default version of python to 3.6, so now I am just concerned with the presence of python2 libraries in the python system path.
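One way to check for shadowing directly is to inspect the interpreter's search order and ask where a given module would be loaded from (a sketch; substitute the package names you actually care about for the stand-in "json"):

```python
import importlib.util
import sys

# Earlier entries in sys.path win; a ROS dist-packages entry listed before
# Anaconda's site-packages can shadow a same-named module there.
for p in sys.path:
    print(p)

# Where would a given module be imported from?
spec = importlib.util.find_spec("json")
print("json would load from:", spec.origin)
```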
Originally posted by elpidiovaldez on ROS Answers with karma: 142 on 2018-04-02
Post score: 0
Answer:
There are three ways you can approach this.
You can use virtualenv and install anaconda3 inside it, so that there are no conflicts between the different versions of Python.
You can use two different bash files, in which one sources the python of anaconda3, and the other sources the python used for ROS.
You can create symlinks in a folder on your PATH to link the Anaconda3 python to python3 and the ROS python2 to python, and likewise the Python 2 pip to pip and the Python 3 pip to pip3.
I personally prefer the 1st method, as it is the cleanest and easiest. The second is easy to work around. The third one will likely break your system, as it makes the system extremely complex.
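A minimal sketch of option 1 using the standard-library venv module (the environment name below is made up, and note this uses venv rather than the virtualenv tool named above):

```python
import os
import venv

# Create an isolated environment with its own interpreter and site-packages.
# Pass with_pip=True to also bootstrap pip inside it.
env_dir = "ds_env"
venv.create(env_dir, with_pip=False)

print(os.path.exists(os.path.join(env_dir, "pyvenv.cfg")))  # → True
```

One caveat worth knowing: entries on PYTHONPATH (such as the ROS dist-packages) still apply inside an activated venv, so you may also need to unset PYTHONPATH in the shell you use for data-science work.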
Can you share what packages you want to install in Anaconda3?
Originally posted by Akash Purandare with karma: 61 on 2018-04-03
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by elpidiovaldez on 2018-05-03:
I use a lot of 'datascience' and AI packages like Tensorflow, numpy, scipy, scilearn, pandas, pillow, opencv.
Comment by Akash Purandare on 2018-05-04:
As far as I know, if you know how to write data-science code in Python 3, you can do so in Python 2 as well. As far as these packages are concerned, the ROS scripts run in the local Python environment; hence, you will be able to merge all of these to achieve the results. Install them using pip.
"domain": "robotics.stackexchange",
"id": 30514,
"tags": "ros, ros-kinetic, pythonpath"
} |
How does resistance *really* work? (DC, battery, LED, atoms, electrons) | Question: Backstory: I’m a software engineer just getting into electronics and it seems that everything I’ve ever been told about electricity my whole life is a candy-coated lie. I can’t find consistent logical answers to the most basic of questions and it’s driving me mad!
The kindergarten math V = IR makes sense... unless accounting for conservation of energy, matter, and real laws of physics.
I’m old enough now. I just want to know the truth, even if it hurts.
The effect of voltage on current
A resistor, led, and copper wire walk into a bar
The bar tender serves a 9v battery to share
The electric field is too weak to serve the led directly
The copper wire volunteers to help direct the electric field
The ampacity of the led is too low, so the resistor hops up on the bar in-line with the copper wire so the LED can get a drink
Ohm's Law is a lie, but the coulombs are absolutely intoxicating. All hell breaks loose and the resistor catches the whole bar on fire, burns it to the ground, everybody dies, and I lost five dollars.
Pause.
No, wait, there was no fire, that was just my anger at how every explanation I read of this scenario is in direct contradiction to what I thought I knew about conservation of energy and matter.
Contradictions for which I'd like answers
If charge causes the electrical field, then why does the voltage drop across the resistor? The electrons didn’t just magic themselves away. Isn’t the charge the same?
If charge passing though the resistor causes the atoms to enter a lower energy state, thereby releasing IR photons that heat up the place... then where did the extra coulombs go each second?
How come 2x resistance makes my battery last (on the scale of) twice as long but at (on the scale of) 1/4 of the power?
If resistance slows the flow of current, shouldn't ALL of the current still be accounted for somewhere in the system? Solved: many of the explanations I was reading made it sound as though resistors lowered the current (...from infinity?) by "burning off" the "extra" current, which made no sense and contradicted the idea that the current supply and current drain were equal (Kirchhoff's Law, common sense). Hence, the oversimplification of some of what I was reading confused me greatly.
... either my understanding is way off or there’s a well kept secret that few people are sharing (or my Google fu is busted)
Answer: What really happens to the electrons in a solid when an electric field is applied is extremely complicated, and depends heavily on the material in question. What's more, the electrons cease to be "electrons" as the elementary particle in vacuum, they become quasiparticles with non well defined velocity and with other strange properties. I'm afraid there's no simple answer to the original question. It has to be at the level of quantum field theory applied to condensed matter. I do not have such a level of understanding (yet at least).
Nevertheless, I can offer a much different insight than the ones already posted, closer to what really happens in a conductor when an electric field is applied. Let's take a simple conductor such as an alkali metal. Its atoms/ions form a crystal. An intuitive way to think about the electrons in that solid is to assume that all the core electrons, i.e. the ones in filled shells, are not free electrons, and we can ignore them completely for electrical conduction matters. Only the single valence electron is a free electron. That yields one free electron per atom. All these free electrons behave roughly as in a cold Fermi gas, that is, they have to satisfy Pauli's exclusion principle and their occupation number obeys Fermi-Dirac statistics. Thus:
Case when $\vec E = \vec 0$. In that case, the energy of the electrons range from 0 up to about the Fermi energy, $E_F$ (if the temperature is at absolute 0, then it is exactly at the Fermi energy). In k-space (momentum space, not real space), the electron's momentum form a Fermi sphere. Note that this is valid for most alkali metals, but for metals like copper and iron, the shape is not quite spherical. The wavefunction of each electron extends to the crystal sample (they are not localized), and they have velocities ranging from 0 up to the Fermi velocity, which is about two order of magnitude slower than light. But they go in all possible directions and thus the mean velocity is null: there is no current, the drift velocity is 0.
- Case when $\vec E \neq \vec 0$. What happens when we apply an electric field? Usually ordinary currents have a magnitude which causes a very, very small perturbation to the energy of the whole system. Contrary to what the Drude model assumes, in reality only the electrons near the Fermi surface of the sphere (or simply the Fermi surface in general) can "feel" or react to the applied electric field. This is due to Pauli's exclusion principle, which implies that no two electrons can share the same state. Thus the free electrons that have an energy much lower than $E_F$ cannot increase their energy, since all the states which have an energy slightly above them are already occupied. Therefore the net result of the applied field is to cause the electrons that were moving in the field's direction with momentum near $p_F$ to interact with the field and have their momentum switched in the other direction, with roughly the same magnitude. The fraction of the free electrons that can react to the electric field is of the order of $v_d/v_F$, or about $10^{-4}/10^6 =10^{-10}$. Hence only about one free electron per ten billion will get influenced by the electric field. Mathematically it is equivalent to a shifting of the Fermi surface against the direction of the electric field, by an extremely small amount (because the E field is such a small perturbation). Note that the drift velocity that arises in that free electron model is the same as in Drude's model, but the physics is quite different and proved to be more correct than Drude's.
Just to clear some misconceptions: when one applies an electric field in a conductor, it "travels" at a fraction of the speed of light, roughly about 20% to 80% of light's speed. The electrons that take part in electrical conduction move at speed about two orders of magnitude slower than light, and they are extremely less numerous than the number of free electrons. This yields a drift velocity that matches the one in Drude's model. Note that the number of electrons that can react to an applied electric field does not match the number of electrons that can absorb heat, or take place in heat conduction.
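The orders of magnitude quoted above can be checked with the standard drift-velocity estimate $v_d = I/(n e A)$; the current and wire radius below are assumed, typical bench values:

```python
import math

# Drift velocity estimate for a copper wire: v_d = I / (n * e * A)
n = 8.5e28         # free-electron density of copper, 1/m^3 (one per atom)
e = 1.602e-19      # elementary charge, C
I = 1.0            # assumed current, A
radius = 1e-3      # assumed wire radius, m
A = math.pi * radius**2

v_d = I / (n * e * A)
print(f"drift velocity ~ {v_d:.1e} m/s")   # of order 1e-5 m/s

v_F = 1.6e6        # typical Fermi velocity, m/s
print(f"v_d / v_F ~ {v_d / v_F:.0e}")
```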
About the resistance (or resistivity): The resistivity is partly due to the scattering of the few free electrons going against the $\vec E$ field that take part in the electrical current. They are scattered by phonons and "put back" in the energy state where they were before the $\vec E$ field was applied. Note that they interact with phonons (a quasiparticle), and defects (like a missing atom in the crystal lattice), among others. The electrons that participate in electrical conduction do not really bump into atoms as Drude's model claims. | {
"domain": "physics.stackexchange",
"id": 55208,
"tags": "electric-current, charge, electrical-resistance, voltage, batteries"
} |
Finding the greatest common factor of a list of numbers in JavaScript | Question: The goal of this code is to take an array of two or more positive integers and return their greatest common factor, or GCF. (This is the biggest number that can be divided into all the numbers in the array without any remainder).
I feel like my code is much too long and complicated. Surely there's a more efficient way to do this. Also, I know it's best practice to avoid global variables, and I used one here.
Codepen
/* Test whether a number is a factor of another number */
function isFactor(num, fact){
    if(num % fact == 0){
        return true;
    }
    else{
        return false;
    }
}

/* List all the factors of a number */
function listFactors(number){
    factors = [1];
    var i = 2;
    while(i <= number){
        if(isFactor(number, i)){
            factors.unshift(i);
        }
        i++;
    }
    return factors;
}
var toTest = 1;
/* Find the GCF (greatest common factor) of the numbers in an array */
function GCF(intList){
    var GCF = 1;
    var factorsOfEach = [];
    for(item in intList){
        var num = intList[item];
        var factors = listFactors(num);
        factorsOfEach.push(factors);
    }
    var count = 0;
    factorsOfFirst = factorsOfEach[0];
    var length = factorsOfFirst.length;
    while(count < length){
        var toTest = factorsOfFirst[count];
        var passTest = factorsOfEach.every(arrayContains);
        if(passTest){
            GCF = toTest;
            return GCF;
        }
        else{
            count += 1;
        }
    }
    return GCF;
}
/* Check whether an array contains the variable "toTest" */
function arrayContains(array){
    if(array.indexOf(toTest) != -1){
        return true;
    }
    else{
        return false;
    }
}
Answer: Formatting
Before addressing optimizations, let's address some formatting problems
while vs. for loops
It is preferred to use a for loop when you:
Have an incrementer
The incrementer is not used outside of the loop
The incrementer is either strictly increasing or strictly decreasing
This is the case with both of your while loops. They can be refactored to the following:
In listFactors():
for (var i = 2; i <= number; i++){
    if(isFactor(number, i)){
        factors.unshift(i);
    }
}
In GCF():
for (var count = 0; count < factorsOfFirst.length; count++){
    var toTest = factorsOfFirst[count];
    var passTest = factorsOfEach.every(arrayContains);
    if (passTest) {
        return toTest;
    }
}
Avoid explicit booleans when possible
Instead of:
if(num % fact === 0){
    return true;
}
else{
    return false;
}
it is preferable to write:
return num % fact === 0;
and the same for your other functions.
Avoid global variables
As you correctly pointed out, you should try not to use mutable global variables; it makes the code much harder to understand. To avoid this, you could make the arrayContains function merely an arrow function and pass in toTest as well:
var testPassed = factorsOfEach.every(arr => arr.indexOf(toTest) !== -1);
A Cleaner Solution
Using the Euclidean Algorithm to find the GCD/GCF of two numbers (assuming non-negative numbers) is a much cleaner and more optimized approach:
function GCF(a, b) {
    if (b === 0) return a;
    else return GCF(b, a % b);
}
Then, just use a reduce statement to apply GCF to all of the numbers in the array:
function findGCFofList(list) {
    return list.reduce(GCF);
}
"domain": "codereview.stackexchange",
"id": 26163,
"tags": "javascript, beginner, mathematics"
} |
hector slam configuration | Question:
Hi,
I have a Kinect 1, I convert it to laser_scan, and all is OK.
I want to do hector SLAM, and I installed it with:
git clone https://github.com/tu-darmstadt-ros-pkg/hector_slam.git
Now I don't know how to configure it with rviz, although I followed this tutorial link text.
Can you show me how to configure it, please?
Originally posted by Emilien on ROS Answers with karma: 167 on 2016-05-24
Post score: 0
Answer:
Instead of getting the source code and building it by yourself, you can download the binaries, i.e. the hector_slam package itself, for your ROS distribution. You can use the command
sudo apt-get install ros-your_ros_dist_name-hector-slam
For example, in the case of ROS Indigo you can use the command
sudo apt-get install ros-indigo-hector-slam
this will download and install all the necessary packages for hector slam.
Originally posted by gp with karma: 166 on 2016-05-27
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Emilien on 2016-05-27:
Yes, and after installation must I modify something?
I have laser scan data and odometry data.
Comment by gp on 2016-05-27:
then follow this tutorial setting your robot. For example you can check this project hector_slam_example. In hector_hokuyo.launch file, replace hokuyo part by kinect.
Comment by gp on 2016-05-27:
If you just want to check hector slam you can follow this tutorial. After installing binaries you don't need to modify anything. You can straightaway use it.
Comment by Emilien on 2016-05-27:
ok thanks i will try this
Comment by Emilien on 2016-05-27:
thank you it works well | {
"domain": "robotics.stackexchange",
"id": 24726,
"tags": "hector-slam"
} |
Curvature tensor of 2-sphere using exterior differential forms (tetrads) | Question: $ds^2= r^2 (d\theta^2 + \sin^2{\theta}d\phi^2)$
The following is the tetrad basis
$e^{\theta}=r d{\theta} \,\,\,\,\,\,\,\,\,\, e^{\phi}=r \sin{\theta} d{\phi}$
Hence, $de^{\theta}=0 \,\,\,\,\,\, de^{\phi}=r\cos{\theta} d{\theta} \wedge d\phi = \frac{\cot{\theta}}{r} e^{\theta}\wedge e^{\phi}$
Setting the torsion tensor to zero gives: $de^a + \omega^a _b \wedge e^b =0$.
This equation for $a=\theta$ gives $\omega^{\theta}_{\phi}=0$. (I have used $\omega^{\theta}_{\theta}=\omega^{\phi}_{\phi}=0$)
The equation for $\phi$: $\omega^{\phi}_{\theta} \wedge e^{\theta}=\frac{\cot{\theta}}{r} e^{\phi} \wedge e^{\theta} \implies \omega^{\phi}_{\theta}=\frac{\cot{\theta}}{r} e^{\phi}=\cos{\theta} d{\phi}$
$d\omega^{\phi}_{\theta}=-\sin{\theta} d\theta \wedge d\phi$
Hence $R^i_j = d\omega^i_j+ \omega^i_b \wedge \omega^b_j$ gives $R^{\phi}_{\theta}=-\sin{\theta} d\theta \wedge d\phi$ and $R^{\theta}_{\phi}=0$
Writing in terms of components gives $R^{\phi}_{\theta \theta \phi}=-\sin{\theta}$, and $R^{\theta}_{\phi \phi \theta}=0$
However this is wrong. I have done the same problem using Christoffel connections, and the answer which I know to be correct is
$R^{\phi}_{\theta \theta \phi}=-1$, and $R^{\theta}_{\phi \phi \theta}=\sin^2{\theta}$
Please could anyone tell me what I am doing wrong? Any help will be appreciated.
Answer: $R^{\phi}_{\theta}=-\sin{\theta} d\theta \wedge d\phi$
So:
$R^{\phi}_{\theta}= - \frac{ e^\theta \wedge e^\phi}{r^2}$
So:
$R^{\phi}_{\theta \theta \phi}= - \frac{1}{r^2}$
Then $R_{\phi\theta \theta \phi}= g_{\phi\phi}R^{\phi}_{\theta \theta \phi} = -\sin^2\theta$
Then $R_{\theta \phi \theta \phi} = - R_{\phi\theta \theta \phi} = \sin^2\theta$
Then $R^\theta _{\phi \theta \phi} = g^{\theta\theta}R_{\theta \phi \theta \phi} = \frac{\sin^2\theta}{r^2}$
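As a quick numerical cross-check of these components (a sketch of my own for the unit sphere, $r = 1$, hard-coding the two non-zero Christoffel symbols of the round metric rather than deriving them):

```python
import math

# Non-zero Christoffel symbols of the unit 2-sphere (r = 1):
#   Gamma^theta_{phi phi} = -sin(theta) * cos(theta)
#   Gamma^phi_{theta phi} = Gamma^phi_{phi theta} = cot(theta)
def riemann_theta_phi_theta_phi(theta):
    """R^theta_{phi theta phi} = d_theta Gamma^theta_{phi phi}
    - Gamma^theta_{phi phi} * Gamma^phi_{theta phi}; the other terms vanish."""
    d_gamma = -(math.cos(theta) ** 2 - math.sin(theta) ** 2)
    cross = math.sin(theta) * math.cos(theta) * (math.cos(theta) / math.sin(theta))
    return d_gamma + cross

theta = 0.7
print(riemann_theta_phi_theta_phi(theta))  # agrees with sin(theta)^2, ~0.415
```

The value matches $\sin^2\theta$ at any sample point, consistent with the Christoffel-symbol computation quoted in the question.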
Also, you could not have $\omega^{\theta}_{\phi}=0$; this is because $\omega^{ab}=-\omega^{ba}$ and the metric is diagonal | {
"domain": "physics.stackexchange",
"id": 8094,
"tags": "homework-and-exercises, general-relativity, differential-geometry, curvature"
} |
Calculator in Python 3 using Tkinter | Question: Edit: New version at Python 3 Tkinter Calculator - follow-up
New status: I've refactored the code trying to follow the recommendations from the people who answered this question. The new version is at the link above.
I'm a beginner developer and I chose Python as my initial language to learn. This is my very first project: a calculator using Tkinter for GUI.
I've tried to apply an OOP and modular approach, in an attempt to make it like a real project, instead of simply putting it all in a single file or in procedural style.
I need some feedback about module naming and organization, class naming and organization, PEP-8 style, and structure in general.
Module: window.py
This should be the main module, but I'm facing a circular import issue that I can't figure out yet.
import tkinter as tk
import frame_display
import frame_botoes
root = tk.Tk()
root.geometry("640x640")
visor = frame_display.DisplayContainer(root)
numeros = frame_botoes.ButtonsContainer(root)
root.mainloop()
Module: calculadora.py
I made some sort of workaround and the program runs here:
agregator = ""
result = ""

def pressNumber(num):
    global agregator
    global result
    agregator = agregator + str(num)
    result = agregator
    window.visor.updateTextDisplay(result)

def pressEqual():
    try:
        global agregator
        total = str(eval(agregator))
        window.visor.updateTextDisplay(total)
        agregator = ""
    except ZeroDivisionError:
        window.visor.updateTextDisplay("Erro: Divisão por zero")
        agregator = ""
    except:
        window.visor.updateTextDisplay("Error")
        agregator = ""

def pressClear():
    global agregator
    agregator = ""
    window.visor.updateTextDisplay("Clear")

import window
I tried to use separate modules and classes as an attempt to use good practices.
Module: frame_display.py
import tkinter as tk
from tkinter import Frame
from tkinter import StringVar
class DisplayContainer(Frame):
    def __init__(self, root):
        Frame.__init__(self, root)
        self.parent = root
        self.configure(bg="cyan", height=5)
        self.text_display = StringVar()
        # Layout DisplayContainer
        self.grid(row=0, column=0, sticky="nwe")
        self.parent.columnconfigure(0, weight=1)
        # Call DisplayContainer widgets creation
        self.createWidgets()

    # Create widgets for DisplayContainer
    def createWidgets(self):
        self.label_display = tk.Label(self)
        self.label_display.configure(textvariable=self.text_display)
        self.label_display["font"] = 15
        self.label_display["bg"] = "#bebebe"
        self.label_display["relief"] = "groove"
        self.label_display["bd"] = 5
        self.label_display["height"] = 5
        # Layout widgets for DisplayContainer
        self.label_display.grid(row=0, column=0, sticky="nswe")
        self.columnconfigure(0, weight=1)

    def updateTextDisplay(self, text):
        self.text_display.set(text)
Module: frame_botoes.py
import tkinter as tk
from tkinter import Frame
import calculadora
class ButtonsContainer(Frame):
    def __init__(self, root):
        Frame.__init__(self, root)
        self.parent = root
        self.configure(bg="yellow")
        self.parent.bind("<Key>", self.keyHandler)
        self.parent.bind("<Return>", self.returnKeyHandler)
        # Layout ButtonsContainer
        self.grid(row=1, column=0, sticky="nsew")
        self.parent.rowconfigure(1, weight=1)
        self.parent.columnconfigure(0, weight=1)
        # Call ButtonsContainer widgets creation
        self.createWidgets()

    # Create widgets for ButtonsContainer
    def createWidgets(self):
        button_padx = 15
        button_pady = 15
        self.button_1 = tk.Button(self, text="1", padx=button_padx, pady=button_pady, command=lambda: calculadora.pressNumber(1))
        self.button_2 = tk.Button(self, text="2", padx=button_padx, pady=button_pady, command=lambda: calculadora.pressNumber(2))
        self.button_3 = tk.Button(self, text="3", padx=button_padx, pady=button_pady, command=lambda: calculadora.pressNumber(3))
        self.button_4 = tk.Button(self, text="4", padx=button_padx, pady=button_pady, command=lambda: calculadora.pressNumber(4))
        self.button_5 = tk.Button(self, text="5", padx=button_padx, pady=button_pady, command=lambda: calculadora.pressNumber(5))
        self.button_6 = tk.Button(self, text="6", padx=button_padx, pady=button_pady, command=lambda: calculadora.pressNumber(6))
        self.button_7 = tk.Button(self, text="7", padx=button_padx, pady=button_pady, command=lambda: calculadora.pressNumber(7))
        self.button_8 = tk.Button(self, text="8", padx=button_padx, pady=button_pady, command=lambda: calculadora.pressNumber(8))
        self.button_9 = tk.Button(self, text="9", padx=button_padx, pady=button_pady, command=lambda: calculadora.pressNumber(9))
        self.button_0 = tk.Button(self, text="0", padx=button_padx, pady=button_pady, command=lambda: calculadora.pressNumber(0))
        self.button_open_parens = tk.Button(self, text="(", padx=button_padx, pady=button_pady, command=lambda: calculadora.pressNumber("("))
        self.button_close_parens = tk.Button(self, text=")", padx=button_padx, pady=button_pady, command=lambda: calculadora.pressNumber(")"))
        self.button_dot = tk.Button(self, text=".", padx=button_padx, pady=button_pady, command=lambda: calculadora.pressNumber("."))
        self.button_plus = tk.Button(self, text="+", padx=button_padx, pady=button_pady, command=lambda: calculadora.pressNumber("+"))
        self.button_minus = tk.Button(self, text="-", padx=button_padx, pady=button_pady, command=lambda: calculadora.pressNumber("-"))
        self.button_multiply = tk.Button(self, text="*", padx=button_padx, pady=button_pady, command=lambda: calculadora.pressNumber("*"))
        self.button_divide = tk.Button(self, text="/", padx=button_padx, pady=button_pady, command=lambda: calculadora.pressNumber("/"))
        self.button_equal = tk.Button(self, text="=", padx=button_padx, pady=button_pady, command=calculadora.pressEqual)
        self.button_clear = tk.Button(self, text="CLEAR", padx=button_padx, pady=button_pady, command=calculadora.pressClear)
        # Layout widgets for ButtonsContainer
        self.button_1.grid(row=0, column=0, sticky="nswe")
        self.button_2.grid(row=0, column=1, sticky="nswe")
        self.button_3.grid(row=0, column=2, sticky="nswe")
        self.button_4.grid(row=1, column=0, sticky="nswe")
        self.button_5.grid(row=1, column=1, sticky="nswe")
        self.button_6.grid(row=1, column=2, sticky="nswe")
        self.button_7.grid(row=2, column=0, sticky="nswe")
        self.button_8.grid(row=2, column=1, sticky="nswe")
        self.button_9.grid(row=2, column=2, sticky="nswe")
        self.button_open_parens.grid(row=3, column=0, sticky="nswe")
        self.button_close_parens.grid(row=3, column=2, sticky="nswe")
        self.button_0.grid(row=3, column=1, sticky="nswe")
        self.button_dot.grid(row=4, column=2, sticky="nswe")
        self.button_plus.grid(row=0, column=3, sticky="nswe")
        self.button_minus.grid(row=1, column=3, sticky="nswe")
        self.button_multiply.grid(row=2, column=3, sticky="nswe")
        self.button_divide.grid(row=3, column=3, sticky="nswe")
        self.button_equal.grid(row=4, column=3, sticky="nswe")
        self.button_clear.grid(row=4, columnspan=2, sticky="nswe")
        for x in range(0, 5):
            self.rowconfigure(x, weight=1)
        for i in range(0, 4):
            self.columnconfigure(i, weight=1)

    # Bind keyboard events
    def keyHandler(self, event):
        calculadora.pressNumber(event.char)

    # Bind Return key
    def returnKeyHandler(self, event):
        calculadora.pressEqual()
Answer: Disclaimer: you should not be using eval. That said, I am not going to remove it from the code, as you can work out the correct options on your own; I will be reviewing the overall code issues. Just know eval is evil! :D
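As a hedged aside on replacing eval (a sketch of my own using the standard ast module, not a drop-in for every expression this calculator can produce): walk the parsed expression tree and allow only arithmetic nodes, rejecting everything else.

```python
import ast
import operator

# Whitelist of AST operator nodes -> the arithmetic they perform
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv,
       ast.USub: operator.neg, ast.UAdd: operator.pos}

def safe_eval(expr):
    """Evaluate a numeric + - * / ( ) expression without eval's side effects."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.operand))
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval"))

print(safe_eval("(1 + 2) * 3.5"))  # 10.5
```

Anything outside the whitelist (names, attribute access, function calls) raises ValueError instead of executing.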
OK, so the quick answer to fix the main problem is to add a new argument to all functions in calculadora.py. Let's call this argument window, because we are passing the root window to each function.
Then you need to build the root window as a class with class attributes. This way your functions in calculadora can actually update the fields.
Once we have changed those two parts, we need to pass that window to those functions from the frame_botoes.py buttons, so we will update those buttons as well.
Updated window.py:
import tkinter as tk
import frame_display
import frame_botoes
class Main(tk.Tk):
    def __init__(self):
        super().__init__()
        self.geometry("640x640")
        self.visor = frame_display.DisplayContainer(self)
        self.numeros = frame_botoes.ButtonsContainer(self)

Main().mainloop()
Updated calculadora.py:
agregator = ""
result = ""

def pressNumber(num, window):
    global agregator
    global result
    agregator = agregator + str(num)
    result = agregator
    window.visor.updateTextDisplay(result)

def pressEqual(window):
    try:
        global agregator
        total = str(eval(agregator))
        window.visor.updateTextDisplay(total)
        agregator = ""
    except ZeroDivisionError:
        window.visor.updateTextDisplay("Erro: Divisão por zero")
        agregator = ""
    except:
        window.visor.updateTextDisplay("Error")
        agregator = ""

def pressClear(window):
    global agregator
    agregator = ""
    window.visor.updateTextDisplay("Clear")
Updated frame_botoes.py:
import tkinter as tk
from tkinter import Frame
import calculadora
class ButtonsContainer(Frame):
    def __init__(self, root):
        Frame.__init__(self, root)
        self.parent = root
        self.configure(bg="yellow")
        self.parent.bind("<Key>", self.keyHandler)
        self.parent.bind("<Return>", self.returnKeyHandler)
        # Layout ButtonsContainer
        self.grid(row=1, column=0, sticky="nsew")
        self.parent.rowconfigure(1, weight=1)
        self.parent.columnconfigure(0, weight=1)
        # Call ButtonsContainer widgets creation
        self.createWidgets()

    # Create widgets for ButtonsContainer
    def createWidgets(self):
        button_padx = 15
        button_pady = 15
        self.button_1 = tk.Button(self, text="1", padx=button_padx, pady=button_pady, command=lambda: calculadora.pressNumber(1, self.parent))
        self.button_2 = tk.Button(self, text="2", padx=button_padx, pady=button_pady, command=lambda: calculadora.pressNumber(2, self.parent))
        self.button_3 = tk.Button(self, text="3", padx=button_padx, pady=button_pady, command=lambda: calculadora.pressNumber(3, self.parent))
        self.button_4 = tk.Button(self, text="4", padx=button_padx, pady=button_pady, command=lambda: calculadora.pressNumber(4, self.parent))
        self.button_5 = tk.Button(self, text="5", padx=button_padx, pady=button_pady, command=lambda: calculadora.pressNumber(5, self.parent))
        self.button_6 = tk.Button(self, text="6", padx=button_padx, pady=button_pady, command=lambda: calculadora.pressNumber(6, self.parent))
        self.button_7 = tk.Button(self, text="7", padx=button_padx, pady=button_pady, command=lambda: calculadora.pressNumber(7, self.parent))
        self.button_8 = tk.Button(self, text="8", padx=button_padx, pady=button_pady, command=lambda: calculadora.pressNumber(8, self.parent))
        self.button_9 = tk.Button(self, text="9", padx=button_padx, pady=button_pady, command=lambda: calculadora.pressNumber(9, self.parent))
        self.button_0 = tk.Button(self, text="0", padx=button_padx, pady=button_pady, command=lambda: calculadora.pressNumber(0, self.parent))
        self.button_open_parens = tk.Button(self, text="(", padx=button_padx, pady=button_pady, command=lambda: calculadora.pressNumber("(", self.parent))
        self.button_close_parens = tk.Button(self, text=")", padx=button_padx, pady=button_pady, command=lambda: calculadora.pressNumber(")", self.parent))
        self.button_dot = tk.Button(self, text=".", padx=button_padx, pady=button_pady, command=lambda: calculadora.pressNumber(".", self.parent))
        self.button_plus = tk.Button(self, text="+", padx=button_padx, pady=button_pady, command=lambda: calculadora.pressNumber("+", self.parent))
        self.button_minus = tk.Button(self, text="-", padx=button_padx, pady=button_pady, command=lambda: calculadora.pressNumber("-", self.parent))
        self.button_multiply = tk.Button(self, text="*", padx=button_padx, pady=button_pady, command=lambda: calculadora.pressNumber("*", self.parent))
        self.button_divide = tk.Button(self, text="/", padx=button_padx, pady=button_pady, command=lambda: calculadora.pressNumber("/", self.parent))
        # Wrap these in lambdas too: command needs a callable, not the result of a call
        self.button_equal = tk.Button(self, text="=", padx=button_padx, pady=button_pady, command=lambda: calculadora.pressEqual(self.parent))
        self.button_clear = tk.Button(self, text="CLEAR", padx=button_padx, pady=button_pady, command=lambda: calculadora.pressClear(self.parent))
        # Layout widgets for ButtonsContainer
        self.button_1.grid(row=0, column=0, sticky="nswe")
        self.button_2.grid(row=0, column=1, sticky="nswe")
        self.button_3.grid(row=0, column=2, sticky="nswe")
        self.button_4.grid(row=1, column=0, sticky="nswe")
        self.button_5.grid(row=1, column=1, sticky="nswe")
        self.button_6.grid(row=1, column=2, sticky="nswe")
        self.button_7.grid(row=2, column=0, sticky="nswe")
        self.button_8.grid(row=2, column=1, sticky="nswe")
        self.button_9.grid(row=2, column=2, sticky="nswe")
        self.button_open_parens.grid(row=3, column=0, sticky="nswe")
        self.button_close_parens.grid(row=3, column=2, sticky="nswe")
        self.button_0.grid(row=3, column=1, sticky="nswe")
        self.button_dot.grid(row=4, column=2, sticky="nswe")
        self.button_plus.grid(row=0, column=3, sticky="nswe")
        self.button_minus.grid(row=1, column=3, sticky="nswe")
        self.button_multiply.grid(row=2, column=3, sticky="nswe")
        self.button_divide.grid(row=3, column=3, sticky="nswe")
        self.button_equal.grid(row=4, column=3, sticky="nswe")
        self.button_clear.grid(row=4, columnspan=2, sticky="nswe")
        for x in range(0, 5):
            self.rowconfigure(x, weight=1)
        for i in range(0, 4):
            self.columnconfigure(i, weight=1)

    # Bind keyboard events
    def keyHandler(self, event):
        calculadora.pressNumber(event.char, self.parent)

    # Bind Return key
    def returnKeyHandler(self, event):
        calculadora.pressEqual(self.parent)
Now that the quick fix is dealt with, it's time to go in depth on the other formatting issues and PEP 8 changes we should make.
I will keep each one of your files separate, but honestly I do not think it is necessary to separate the main window file from the frame data.
1st: I would like to address PEP 8 naming standards. Personally, I think you should use CamelCase for class names and lowercase_with_underscores for functions/methods.
2nd: Let's look at your buttons in frame_botoes. You should probably be generating your buttons with loops so we can keep the code short and clean. I have two examples here. One uses simple counting for the layout and the other uses a list with grid values for placement.
3rd: We should avoid using global, so let's convert your calculadora functions into a class that uses an instance attribute to manage the aggregator.
4th: You only need the self. prefix for a variable that will be used later in the class outside of the method it is created in. So for all your buttons we can remove this prefix. At the same time, we don't need to name them, as we are generating them from a loop. Naming doesn't help us here, as the layout is simple enough and we are not changing the buttons later.
5th: We do not need from tkinter import Frame, as you are already using import tkinter as tk, so we can simply call tk.Frame (or any other widget, for that matter) where it is needed.
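A side note on the command=lambda n=i: ... pattern used in the loop-generated buttons below: Python closures are late-binding, so without the default argument every callback would see the loop variable's final value. A minimal demonstration:

```python
# Without a default argument, every lambda sees the loop variable's final value:
callbacks = [lambda: i for i in range(3)]
print([f() for f in callbacks])        # [2, 2, 2]

# Binding i as a default argument snapshots it at definition time:
callbacks = [lambda i=i: i for i in range(3)]
print([f() for f in callbacks])        # [0, 1, 2]
```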
With some general clean up and the things I mentioned above here is your modified code:
New window.py:
import tkinter as tk
import frame_display
import frame_botoes
class Main(tk.Tk):
    def __init__(self):
        super().__init__()
        self.geometry("640x640")
        self.columnconfigure(0, weight=1)
        self.rowconfigure(1, weight=1)
        # Keep the widget references and grid separately: grid() returns None
        self.visor = frame_display.DisplayContainer()
        self.visor.grid(row=0, column=0, sticky="new")
        self.numeros = frame_botoes.ButtonsContainer()
        self.numeros.grid(row=1, column=0, sticky="nsew")

Main().mainloop()
New calculadora.py:
class Press:
    def __init__(self, master):
        self.master = master
        self.aggregator = ''

    def num(self, n):
        self.aggregator += str(n)
        self.master.visor.update_text_display(self.aggregator)

    def equal(self, event=None):  # event=None: usable as both a button command and a key binding
        try:
            total = str(eval(self.aggregator))
            self.aggregator = ''
            self.master.visor.text_display.set(total)
        except ZeroDivisionError:
            self.master.visor.text_display.set("Error: Divisão por zero")
        except:
            self.master.visor.text_display.set("Unexpected error")
            raise

    def clear(self):
        self.aggregator = ''
        self.master.visor.text_display.set("Clear")
New frame_display.py:
import tkinter as tk
class DisplayContainer(tk.Frame):
    def __init__(self):
        super().__init__()
        self.configure(bg="cyan", height=5)
        self.columnconfigure(0, weight=1)
        self.text_display = tk.StringVar()
        label_display = tk.Label(self, textvariable=self.text_display, font=15, bg="#bebebe", relief="groove", bd=5, height=5)
        label_display.grid(row=0, column=0, sticky="nsew")

    def update_text_display(self, text):
        self.text_display.set(text)
New frame_botoes.py:
import tkinter as tk
import calculadora
class ButtonsContainer(tk.Frame):
    def __init__(self):
        super().__init__()
        self.configure(bg="yellow")
        self.screen = calculadora.Press(self.master)
        self.master.bind("<Key>", self.key_handler)
        self.master.bind("<Return>", self.screen.equal)
        for x in range(0, 5):
            self.rowconfigure(x, weight=1)
            if x < 4:
                self.columnconfigure(x, weight=1)
        pad = 15
        r = 0
        c = 0
        for i in range(10):
            if i == 0:
                tk.Button(self, text=i, padx=pad, pady=pad,
                          command=lambda n=i: self.screen.num(n)).grid(row=3, column=1, sticky="nswe")
            else:
                tk.Button(self, text=i, padx=pad, pady=pad,
                          command=lambda n=i: self.screen.num(n)).grid(row=r, column=c, sticky="nswe")
                if c == 2:
                    c = 0
                    r += 1
                else:
                    c += 1
        for i in [["-", 1, 3], ["*", 2, 3], ["/", 3, 3], ["(", 3, 0],
                  [")", 3, 2], [".", 4, 2], ["+", 0, 3], ["=", 4, 3], ["CLEAR", 4, 0]]:
            if i[0] == 'CLEAR':
                tk.Button(self, text=i[0], padx=pad, pady=pad,
                          command=self.screen.clear).grid(row=i[1], column=i[2], columnspan=2, sticky="nsew")
            elif i[0] == '=':
                tk.Button(self, text=i[0], padx=pad, pady=pad,
                          command=self.screen.equal).grid(row=i[1], column=i[2], sticky="nsew")
            else:
                tk.Button(self, text=i[0], padx=pad, pady=pad,
                          command=lambda v=i[0]: self.screen.num(v)).grid(row=i[1], column=i[2], sticky="nsew")

    def key_handler(self, event):
        self.screen.num(event.char)
If you have any questions let me know :D
Just for fun, here is how I would have built this calc.
It's a small enough program that I think most if not all of it is fine in a single class. Also, by placing everything in a single class we can avoid a lot of the back and forth and keep our code simple. By doing this we took your roughly 180+ lines of code and reduced them to around 80.
My example:
import tkinter as tk
class Main(tk.Tk):
    def __init__(self):
        super().__init__()
        self.geometry("640x640")
        self.columnconfigure(0, weight=1)
        self.rowconfigure(1, weight=1)
        self.aggregator = ''
        self.txt = tk.StringVar()
        self.bind("<Key>", self.key_handler)
        self.bind("<Return>", self.equal)
        dis_frame = tk.Frame(self)
        dis_frame.grid(row=0, column=0, sticky="new")
        btn_frame = tk.Frame(self)
        btn_frame.grid(row=1, column=0, sticky="nsew")
        dis_frame.configure(bg="cyan", height=5)
        dis_frame.columnconfigure(0, weight=1)
        for x in range(0, 5):
            btn_frame.rowconfigure(x, weight=1)
            if x < 4:
                btn_frame.columnconfigure(x, weight=1)
        self.display = tk.Label(dis_frame, textvariable=self.txt, font=15,
                                bg="#bebebe", relief="groove", bd=5, height=5)
        self.display.grid(row=0, column=0, sticky="nsew")
        pad = 15
        r = 0
        c = 0
        for i in range(10):
            if i == 0:
                tk.Button(btn_frame, text=i, padx=pad, pady=pad,
                          command=lambda n=i: self.num(n)).grid(row=3, column=1, sticky="nswe")
            else:
                tk.Button(btn_frame, text=i, padx=pad, pady=pad,
                          command=lambda n=i: self.num(n)).grid(row=r, column=c, sticky="nswe")
                if c == 2:
                    c = 0
                    r += 1
                else:
                    c += 1
        for i in [["-", 1, 3], ["*", 2, 3], ["/", 3, 3], ["(", 3, 0],
                  [")", 3, 2], [".", 4, 2], ["+", 0, 3], ["=", 4, 3], ["CLEAR", 4, 0]]:
            if i[0] == 'CLEAR':
                tk.Button(btn_frame, text=i[0], padx=pad, pady=pad,
                          command=self.clear).grid(row=i[1], column=i[2], columnspan=2, sticky="nsew")
            elif i[0] == '=':
                tk.Button(btn_frame, text=i[0], padx=pad, pady=pad,
                          command=self.equal).grid(row=i[1], column=i[2], sticky="nsew")
            else:
                tk.Button(btn_frame, text=i[0], padx=pad, pady=pad,
                          command=lambda v=i[0]: self.num(v)).grid(row=i[1], column=i[2], sticky="nsew")

    def key_handler(self, event):
        self.num(event.char)

    def num(self, n):
        self.aggregator += str(n)
        self.txt.set(self.aggregator)

    def equal(self, event=None):
        try:
            total = str(eval(self.aggregator))
            self.txt.set(total)
            self.aggregator = total
        except ZeroDivisionError:
            self.txt.set("Error: Divisão por zero")
        except:
            self.txt.set("Unexpected error")
            raise

    def clear(self):
        self.txt.set("Clear")
        self.aggregator = ''

Main().mainloop() | {
"domain": "codereview.stackexchange",
"id": 34356,
"tags": "python, beginner, python-3.x, calculator, tkinter"
} |
What are the most important differences between HSP70 and HSP90? | Question: Question originally asked on Quora. These proteins have many functional similarities, so why do cells need both to handle unfolded proteins?
Answer: Often cells have multiple types of the same protein — this redundancy can serve different requirements, such as having proteins function under different physiological conditions, providing specificity to a certain class of ligand proteins, and so on.
But here, it seems like the two have some synergistic interaction, a tag team if you will.
Wegele H, Müller L, Buchner J. 2004. Hsp70 and Hsp90 — a relay team for protein folding. Reviews of physiology, biochemistry and pharmacology 151: 1–44.
Unfortunately this article's full version can only be accessed if you're at a university or somewhere that has a subscription to some of the large research databases, but the abstract is free and it may provide more clarification. | {
"domain": "biology.stackexchange",
"id": 189,
"tags": "proteins, protein-folding"
} |
Does concentration or pKa define acid strength? | Question: For example, is 3M HCl a stronger acid than 1M HCl?
I would reason that the concentration of an acid/base does not influence its strength. Strength is determined by the pKa, and, as per Le Chatelier's Principle, the initial concentration does not influence the equilibrium constant.
It may shift the equilibrium concentrations (meaning that the pH is lower for the 3M HCl), but it will not change $K_a=\frac{[H_3O^+][Cl^-]}{[HCl]}$ at equilibrium.
Is this right?
Answer: The term ‘strong acid’ is sometimes used in a rather fuzzy way and you ran into problems doing so. I prefer to use the term ‘strong acid’ only with respect to an acid’s $\mathrm pK_\mathrm{a}$ value and disregard all other influences. This gives a clearly defined measure of acid strength and we can easily sort various acids by their strength into stronger or weaker acids.
However, this is looking at the acid as a molecule. In real-life applications you are typically more interested in the properties of a solution. To give a real-world example, imagine a $\pu{1M}\ \ce{HBr}$ solution and a $\pu{12M}\ \ce{HCl}$ solution. Obviously, $\ce{HBr}$ is the stronger acid, but the concentration of $\ce{HCl}$—also a strong acid and thus fully deprotonated—is higher. Therefore, the $\ce{HCl}$ solution is more concentrated or, as some would say, more acidic. It can do greater harm and it is able to protonate more Brønsted base molecules than its $\ce{HBr}$ counterpart.
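To put numbers on these comparisons (a rough sketch of my own that ignores activity coefficients and water autoionization, so it is only meaningful for fully dissociated strong acids at moderate concentrations):

```python
import math

def ph_strong_acid(conc_molar):
    """Naive pH of a fully dissociated monoprotic acid: pH = -log10[H3O+] = -log10(C)."""
    return -math.log10(conc_molar)

for c in (1e-3, 1.0, 12.0):
    print(c, round(ph_strong_acid(c), 2))  # 0.001 -> 3.0, 1.0 -> 0.0, 12.0 -> -1.08
```

Note the nominally negative pH for the 12 M case; in reality activity effects dominate at such concentrations, which is exactly why this is only a sketch.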
If instead of examining a $\pu{1M}$ and a $\pu{12M}$ solution I had examined a $\pu{e-3M}$ and a $\pu{1M}$ solution, we could even base the discussion around the solution’s resulting pH: the stronger acid is much more diluted and will result in a solution of pH 3 while the weaker acid results in a solution of pH 0. | {
"domain": "chemistry.stackexchange",
"id": 11050,
"tags": "acid-base, equilibrium"
} |
Is there any visualization software to create an SDF file? E.g., can we create an SDF file from a CAD model, etc.? | Question:
Is there any visualization software to create an SDF file? E.g., can we create an SDF file from a CAD model, etc.?
Originally posted by rohanbhargava11 on Gazebo Answers with karma: 3 on 2013-06-18
Post score: 0
Answer:
You can use the SolidWorks URDF exporter to export from SolidWorks to URDF and then you can convert URDF to SDF (or simply stick with URDF and have Gazebo convert it at runtime).
Originally posted by ThomasK with karma: 508 on 2013-06-18
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 3344,
"tags": "gazebo, sdformat"
} |
What does Google's claim of "Quantum Supremacy" mean for the question of BQP vs BPP vs NP? | Question: Google recently announced that they have achieved "quantum supremacy": a computation "that would be practically impossible for a classical machine."
Does this mean that they have definitely proved that BQP ≠ BPP? And if that is the case, what are the implications for P ≠ NP?
Answer: Google's paper/results are kind of sideways to questions in computational complexity about the relation between $\mathrm{BPP}$ and $\mathrm{BQP}$ (and even further from questions about whether $\mathrm{P}\ne\mathrm{NP}$). It's more as if Google relies on the hypothesis that $\mathrm{BPP}\ne\mathrm{BQP}$ as evidence that their quantum computer performs a task many orders of magnitude faster than a classical computer could.
Google performed a sampling task on their quantum computer that they have strong theoretical reasons to believe is not easily performed on a classical computer. If we say that these complexity classes live in some idealized platonic universe, then Google's results don't shed any light on the difficulty of proving whether or not they are equal to one another, because Google's paper assumes that they are not equal to one another.
What Google's paper does do is provide evidence that the hypothesis that "a probabilistic Turing machine can efficiently simulate any realistic model of computation" is incorrect. They have prepared and maintained coherence of a state of their choosing in a Hilbert space of dimension $2^{53}$. As Aaronson argues, this is akin to the Wright Flyer providing evidence that "heavier-than-air human-controlled powered flight is impossible" is incorrect. | {
"domain": "quantumcomputing.stackexchange",
"id": 1016,
"tags": "quantum-algorithms, complexity-theory, quantum-advantage, bqp, google-sycamore"
} |
Find a minimum-weight perfect b-matching, where b is even | Question: How would one find a minimum-weight perfect b-matching of a general graph, where the number of edges incident on each vertex is a positive even number not greater than b?
A minimum-weight perfect b-matching of a graph G is a subgraph M of minimal total edge weight such that each vertex in G is incident to exactly b edges from M.
Answer: A subgraph in which each vertex has degree exactly $b$ is known as a $b$-factor. You are asking for something similar (but not identical) to the minimum weight $b$-factor.
Tutte showed how to reduce minimum weight $b$-factor to minimum weight perfect matching in his paper A short proof of the factor theorem for finite graphs.
We will split each vertex $v$ of degree $d(v)$ into $2d(v) - b$ vertices: $v_1,\ldots,v_{d(v)},v'_1,\ldots,v'_{d(v)-b}$ (we can assume that $d(v) \geq b$, since otherwise the graph has no $b$-factor). We lift each edge $(x,y)$ to an edge $(x_i,y_j)$ of the same weight in such a way that each $x_i$ and $y_j$ participates in exactly one such edge. We furthermore connect each $v_i$ to each $v'_j$ with a zero-weight edge, for all $i \in [d(v)]$ and $j \in [d(v)-b]$.
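This splitting can be sketched constructively (the function and vertex-naming scheme below are my own; it only builds the auxiliary edge list and does not solve the matching):

```python
def tutte_gadget(edges, degree, b):
    """Build Tutte's auxiliary graph for the b-factor -> perfect matching reduction.

    edges: list of (u, v, w) with weight w; degree: dict v -> d(v), assuming d(v) >= b.
    Each vertex v becomes copies (v, 'p', 0..d(v)-1) and dummies (v, 'q', 0..d(v)-b-1).
    """
    new_edges = []
    next_port = {v: 0 for v in degree}
    for u, v, w in edges:                 # lift each original edge to its own pair of copies
        i, j = next_port[u], next_port[v]
        next_port[u] += 1
        next_port[v] += 1
        new_edges.append(((u, 'p', i), (v, 'p', j), w))
    for v, d in degree.items():           # zero-weight edges: every copy of v to every dummy of v
        for i in range(d):
            for j in range(d - b):
                new_edges.append(((v, 'p', i), (v, 'q', j), 0))
    return new_edges

# Triangle with unit weights, b = 1: 3 lifted edges + 3 * (2 copies x 1 dummy) = 9 edges
g = tutte_gadget([('a', 'b', 1), ('b', 'c', 1), ('a', 'c', 1)],
                 {'a': 2, 'b': 2, 'c': 2}, b=1)
print(len(g))  # 9
```

The resulting edge list can then be fed to any minimum-weight perfect matching solver.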
Each $b$-factor of the original graph corresponds to a matching in the new graph of the same weight. Indeed, lift the $b$-factor to the new graph, and for each vertex $v$, add an arbitrary matching between $v'_1,\ldots,v'_{d(v)-b}$ and the $d(v)-b$ new vertices corresponding to "unused ports".
Conversely, we can convert every matching in the new graph to a $b$-factor of the same weight in the original graph by undoing this construction. | {
"domain": "cs.stackexchange",
"id": 13259,
"tags": "matching"
} |
how to link openni library through the "CMakeLists.txt" file | Question:
I have installed ROS Groovy and the OpenNI library on the ARM chip. Now I want to use OpenNI library functions, but I do not know how to link the OpenNI library through the "CMakeLists.txt" file. I used "target_link_libraries(${PROJECT_NAME} OpenNI)", but it cannot link the OpenNI functions. Can anyone tell me how to link OpenNI and write the CMake file?
Originally posted by Robin Hu on ROS Answers with karma: 39 on 2013-02-26
Post score: 1
Answer:
Assuming that ROS Groovy uses OpenNI v1.*, as I presume (don't have an installation handy), try something along these lines:
pkg_check_modules (LIBUSB REQUIRED libusb-1.0)
include_directories (${LIBUSB_INCLUDE_DIRS})
link_directories (${LIBUSB_LIBRARY_DIRS})
FIND_PATH(OPEN_NI_INCLUDE "XnOpenNI.h" "OpenNIConfig.h" HINTS "$ENV{OPEN_NI_INCLUDE}" "/usr/include/ni" "/usr/include/openni" "/opt/ros/groovy/include/openni_camera")
FIND_LIBRARY(OPEN_NI_LIBRARY NAMES OpenNI libOpenNI HINTS $ENV{OPEN_NI_LIB} "/usr/lib")
LINK_DIRECTORIES($ENV{OPEN_NI_LIB})
INCLUDE_DIRECTORIES(${OPEN_NI_INCLUDE})
LINK_LIBRARIES(${OPEN_NI_LIBRARY})
FIND_PATH(XN_NITE_INCLUDE "libXnVNite.so" HINTS "$ENV{XN_NITE_INSTALL_PATH}" "/usr/include/nite")
FIND_LIBRARY(XN_NITE_LIBRARY NAMES libXnVNite_1_5_2.so HINTS $ENV{XN_NITE_INSTALL_PATH} "/usr/lib")
LINK_DIRECTORIES($ENV{XN_NITE_LIB_INSTALL_PATH} "/usr/lib")
INCLUDE_DIRECTORIES(${XN_NITE_INCLUDE})
LINK_LIBRARIES(${XN_NITE_LIBRARY})
target_link_libraries (your_project_name ${SSE_FLAGS} ${OPEN_NI_LIBRARY} ${XN_NITE_LIBRARY})
If, instead, you want to use a manually installed copy of OpenNI 2.* (not in ROS yet), please see this discussion for a starting CMakeLists.txt which solves some problems but not all. Disclaimer: this is a shameless link to a question related to OpenNI, CMake but not to ROS that I myself posted on the OpenNI forums.
Originally posted by Giovanni Saponaro with karma: 68 on 2013-02-27
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 13087,
"tags": "ros"
} |
Example demonstrating the power of non-deterministic circuits | Question: A non-deterministic Boolean circuit has, in addition to the ordinary inputs $x = (x_1,\dots,x_n)$, a set of "non-deterministic" inputs $y=(y_1,\dots,y_m)$. A non-deterministic circuit $C$ accepts input $x$ if there exists $y$ such that the circuit output $1$ on $(x,y)$.
Analogous to $P/poly$ (the class of languages decidable by polynomial size circuits), $NP/poly$ can be defined as the class of languages decidable by polynomial size non-deterministic circuits. It is widely believed that non-deterministic circuits are more powerful than deterministic circuits; in particular, $NP \subset P/poly$ would imply that the polynomial hierarchy collapses.
Is there an explicit (and unconditional) example in the literature
showing that non-deterministic circuits are more powerful than
deterministic circuits?
In particular, do you know of a function family $\{f_n\}_{n > 0}$
computable by non-deterministic circuits of size $cn$, but not
computable by deterministic circuits of size $(c+\epsilon)n$?
Answer: If there has been no progress on this problem, I have an answer.
--
I have also considered this problem since my COCOON'15 paper (before your question).
Now, I have a proof strategy, and it immediately gives the following theorem:
There is a Boolean function $f$ such that the nondeterministic $U_2$-circuit complexity of $f$
is at most $2n + o(n)$ and the deterministic $U_2$-circuit complexity of $f$ is $3n - o(n)$.
I apologize that I haven't written the paper.
The proof sketch below might be enough to explain my proof strategy.
I aim to write the paper with more results by the STACS deadline (Oct. 1).
[Proof Sketch]
Let $f = \bigvee_{i=0}^{\sqrt{n}-1} Parity_{\sqrt{n}}(x_{\sqrt{n} \cdot i + 1}, \ldots, x_{\sqrt{n} \cdot i + \sqrt{n}})$.
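For concreteness, this candidate hard function can be evaluated directly (a quick sketch of my own; it assumes $n$ is a perfect square):

```python
import math

def f(bits):
    """OR of sqrt(n) disjoint parities, each over a block of sqrt(n) inputs."""
    k = math.isqrt(len(bits))
    assert k * k == len(bits), "n must be a perfect square"
    return any(sum(bits[k * i: k * i + k]) % 2 == 1 for i in range(k))

print(f([1, 0, 0, 0]))  # True: the first block (1, 0) has odd parity
print(f([1, 1, 0, 0]))  # False: both blocks have even parity
```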
The deterministic lower bound proof is based on a standard gate elimination method with a little modification.
The nondeterministic upper bound proof is a construction of such nondeterministic circuit.
Construct a circuit computing $Parity_{\sqrt{n}}$. (The number of gates is $o(n)$.)
Construct a circuit selecting $\sqrt{n}$ inputs nondeterministically. (The number of gates is $2n + o(n)$.)
Combine the two circuits. | {
"domain": "cstheory.stackexchange",
"id": 4506,
"tags": "cc.complexity-theory, circuit-complexity, nondeterminism"
} |
2D-Input to LSTM in Keras | Question: I have the following problem: I would like to feed an LSTM with
train_datagen.flow_from_directory
The input is basically spectrogram images converted from time-series into the time-frequency domain, in PNG format, with dimensions of: timestep x frequency spectrum. 1 sample = 1 PNG image in uint8. In my example: 3601 timesteps with 217 frequency spectrum (=features) / timestep.
The spectrogram itself is just single-channel, but I think the "flow from directory" function was hard-coded to only prepare a 3-channel image matrix, and thus the input shape was becoming (3601, 217, 3), which is a pity, because some people work only with purely greyscale uint8 images, and some with multispectral and hyperspectral images.
My codes are following:
import keras
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Activation, Dropout, Flatten, Dense
from keras.layers import LSTM
from keras import optimizers
from keras import backend as K
import tensorflow as tf
img_width, img_height = 3601,217
train_data_dir = 'sensor1/training'
validation_data_dir = 'sensor1/validation'
num_classes = 10
nb_train_samples = num_classes*70
nb_validation_samples = num_classes*20
epochs = 20
batch_size = 10
input_shape = (img_width, img_height)
model = Sequential()
model.add(LSTM(units=256, input_shape= input_shape, return_sequences=True))
model.add(LSTM(units=128, return_sequences=True))
model.add(LSTM(units=64))
model.add(Dense(128))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
train_datagen = ImageDataGenerator(rescale = 1. / 255)
test_datagen = ImageDataGenerator(rescale = 1. / 255)
train_generator = train_datagen.flow_from_directory(
train_data_dir,
target_size=(img_width, img_height),
batch_size=batch_size)
validation_generator = test_datagen.flow_from_directory(
validation_data_dir,
target_size=(img_width, img_height),
batch_size=batch_size)
model.fit_generator(
train_generator,
steps_per_epoch=nb_train_samples // batch_size,
epochs=epochs,
callbacks=[plot_losses],
validation_data=validation_generator,
validation_steps=nb_validation_samples // batch_size)
And then as soon as I run the program, of course it gives an error message:
**ValueError: Error when checking input: expected lstm_50_input to have 3 dimensions, but got array with shape (10, 3601, 217, 3)**
The message:
expected lstm_50_input to have 3 dimensions, but got array with shape (10, 3601, 217, 3)
clearly suggests it does not agree with my definition of input shape of: (3601, 217)
Any idea how to easily fix the problem?
Thanks in advance.
Answer: Why do you define the last dimension of input_shape as $3$? Just put your desired input dimensions accordingly and it should be fine:
input_shape = (img_width, img_height)
Update with the full code:
The best way would be to use TimeseriesGenerator instead of ImageDataGenerator, but there seems to be no flow_from_directory method meeting your needs. So, I think the best solution is to squeeze the last dimension of the generator output. Also, there is a color_mode option that allows generating a 1-channel tensor for grayscale images.
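The shape bookkeeping behind this squeeze can be checked with plain numpy (illustrative only; the dimensions are taken from the question):

```python
import numpy as np

# With color_mode='grayscale' the generator yields (batch, height, width, 1).
batch = np.zeros((10, 3601, 217, 1))
# The Lambda layer's x[:, :, :, 0] drops the channel axis...
squeezed = batch[:, :, :, 0]
# ...leaving the 3D (batch, timesteps, features) input an LSTM expects.
print(squeezed.shape)  # (10, 3601, 217)
```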
Full code of concerned parts:
from keras.layers import Lambda
model = Sequential()
model.add(Lambda(lambda x: x[:,:,:,0], input_shape=(*input_shape, 1)))
model.add(LSTM(units=256, return_sequences=True))
model.add(LSTM(units=128, return_sequences=True))
model.add(LSTM(units=64))
model.add(Dense(128))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
train_generator = train_datagen.flow_from_directory(
train_data_dir,
target_size=(img_width, img_height),
batch_size=batch_size,
color_mode='grayscale')
validation_generator = test_datagen.flow_from_directory(
validation_data_dir,
target_size=(img_width, img_height),
batch_size=batch_size,
color_mode='grayscale') | {
"domain": "datascience.stackexchange",
"id": 5981,
"tags": "neural-network, keras, time-series, lstm, rnn"
} |
Simple streaming parser to extract columns | Question: In reply to a previous question, I rewrote ColumnReader and would like more suggestions
using System;
namespace R2D.IO
{
public sealed class LineReader
{
public string NewLine { get; set; }
string _buffer = "";
public void Parse(string text)
{
_buffer += text;
var lastline = _buffer.LastIndexOf(NewLine);
if (lastline == -1)
return;
var lines = _buffer.Substring(0, lastline).Split(NewLine.ToCharArray(), StringSplitOptions.RemoveEmptyEntries);
_buffer = _buffer.Substring(lastline);
foreach (var line in lines)
Receive(line);
}
public event Action<string> Receive = (line) => { };
}
public sealed class ColumnReader
{
public string Delimiter { get; set; }
public string NewLine
{
get { return reader.NewLine; }
set { reader.NewLine = value; }
}
private LineReader reader = new LineReader();
public ColumnReader()
{
reader.Receive += (line) => Receive(line.Split(Delimiter.ToCharArray()));
}
public event Action<string[]> Receive = (columns) => { };
}
}
My main use-case is reading lines from a SerialPort:
var parser = new IO.LineReader { NewLine = port.NewLine };
port.DataReceived += (o, e) => parser.Parse(port.ReadExisting());
parser.Receive += (line) => { };
Answer: Split(NewLine.ToCharArray(), StringSplitOptions.RemoveEmptyEntries)
If NewLine is "\r\n", then this will split the input on every \r and every \n. You then take care of the resulting empty strings by specifying RemoveEmptyEntries. But this means that if the input actually contains an empty line, you won't receive it.
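The difference is easy to demonstrate; here is the same behavior reproduced in Python for illustration (splitting on the individual characters versus on the full `\r\n` string):

```python
import re

text = "a\r\n\r\nb"  # contains one genuinely empty line
# Like Split(NewLine.ToCharArray(), RemoveEmptyEntries):
per_char = [s for s in re.split("[\r\n]", text) if s]
# Like Split(new[] { NewLine }, StringSplitOptions.None):
per_string = text.split("\r\n")

print(per_char)    # ['a', 'b']      -- the empty line is silently lost
print(per_string)  # ['a', '', 'b']  -- the empty line survives
```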
To fix that, you can instead use:
Split(new[] { NewLine }, StringSplitOptions.None) | {
"domain": "codereview.stackexchange",
"id": 12579,
"tags": "c#"
} |
Integrating drag force | Question: I need help with an integration problem. We have $F_d = \frac{1}{2} \rho c_d Av^2$ and $$W = \int F_d\cdot dx = \frac{1}{2}\rho c_dA\int v^2\cdot dx = \frac{1}{2}\rho c_dA\int(\frac{dx}{dt})^2\cdot dx.$$ I need to find this integral to find the drag coefficient. I tried integration by parts, which didn't work. Any tips?
Answer: Unfortunately, $v=dx/dt$ isn't known from just your expression for the drag force. In general, you would find an expression for $v(t)$ using the differential equation, then integrate the force integral with respect to time by replacing $dx$ with $v dt$. Here's how it might be done:
First, assuming you have an object with mass $m$ that has an initial velocity $v_0$ experiences this drag force, and no other forces. Then, writing $F=ma$:
$$ma = -\frac{1}{2} \rho c_d A v^2$$
$$\frac{dv}{dt} = -\frac{\rho c_dA}{2 m} v^2$$
This differential equation can be solved by separation of variables:
$$\int_{v_0}^{v(t)} \frac{dv}{v^2} = \int_0^t -\frac{\rho c_d A}{2m}dt$$
$$\frac{1}{v_0} - \frac{1}{v(t)} = -\frac{\rho c_d A}{2m}t$$
$$v(t) = \frac{1}{1/v_0 + \rho c_dAt/2m}$$
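This solution can be sanity-checked numerically (my own sketch, with arbitrary parameter values): integrating the drag power $-\frac{1}{2}\rho c_d A v^3$ over time should reproduce the change in kinetic energy, as the work integral below concludes.

```python
rho, cd, A, m, v0 = 1.2, 0.5, 0.1, 2.0, 10.0  # made-up SI values
k = rho * cd * A / (2 * m)
v = lambda t: 1.0 / (1.0 / v0 + k * t)        # the closed-form solution above
p = lambda t: -0.5 * rho * cd * A * v(t) ** 3  # power delivered by the drag force

# Trapezoid rule for W = integral of p dt over [0, T]
T, N = 5.0, 100_000
h = T / N
W = sum(0.5 * (p(i * h) + p((i + 1) * h)) * h for i in range(N))

dK = 0.5 * m * v(T) ** 2 - 0.5 * m * v0 ** 2   # work-energy theorem
print(abs(W - dK) < 1e-6)  # True
```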
In order to find the work, you would integrate as follows:
$$W = \int -\frac{1}{2} \rho c_d A v^2 dx = \int - \frac{1}{2} \rho c_d A v^2 (vdt)$$
$$ = -\frac{1}{2} \rho c_d A \int v^3 dt$$
$$ = \frac{1}{2} \rho c_d A \frac{1}{\rho c_dA/m (\rho c_d At/2m + 1/v_0)^2} + C = \frac{1}{2}mv(t)^2 + C$$
which (choosing $C = -\frac{1}{2}mv_0^2$) is just equal to $\Delta K = \Delta (\frac{1}{2} mv^2)$ as expected from the work-energy theorem. | {
"domain": "physics.stackexchange",
"id": 78341,
"tags": "homework-and-exercises, newtonian-mechanics, work, integration, drag"
} |
Why do astronomers say that there is not enough matter in Universe? | Question: I was reading today about the birth of the Universe and the conjectures about the matter that was supposed to exist at the moment of the Big Bang and what can be measured now.
There seems to be some sort of discrepancy between the calculated amount of matter in the Big Bang and the amount that can be measured in the visible Universe.
Why is that so?
And furthermore, why couldn't this "missing matter" have been devoured by black holes throughout the Universe?
Answer: If we assume that the only matter present is the matter visible in star systems (baryonic matter), then we are not able to account for how galaxies rotate. Similarly, we have problems trying to calculate movement in a cluster of galaxies, or the rate of expansion of the universe. It seems as if there is more gravitational interaction happening than the matter we see can produce. So, we postulate something called "dark matter": matter which exerts gravitational force but is not visible to us, because it does not emit or absorb any form of electromagnetic radiation. Other observations, such as gravitational lensing (light bending around a heavy object) due to invisible/dark objects, also indicate the existence of dark matter.
As for your question of why this mass is not to be found in black holes: if this mass were in a black hole, it would be highly localized. Also, (super)massive black holes are typically found only in the centres of galaxies, whereas to explain the observed effects, we need dark matter to be quite well diffused all around the galaxy, with some in between galaxies. We take into account the gravitational interactions due to the mass which such black holes can have, and ascribe the remaining discrepancy to dark matter.
As always, the Wikipedia page on dark matter gives more details if you're interested. | {
"domain": "physics.stackexchange",
"id": 2999,
"tags": "cosmology, astrophysics, dark-matter"
} |
Selecting connected subgraph that exceeds value c, with least possible weight | Question: Given a graph $G$ where each node has a value $c$ and weight $w$, I want to select a connected subgraph $V^*$, such that,
Sum of all values in $V^*$ crosses threshold $t$.
Sum of all weights(say $w^*$) in $V^*$ is as low as possible.
A practical example is finding smallest continuous area of a country that hosts at least $x\%$ of the population. In this case, value would be population, and weight would be area.
I found a related question, but it only asks about the complexity, not the algorithm.
I thought of 0 - 1 knapsack, such that values and weights swap role. So,
Size of knapsack is $t$, however we are allowed to cross it once.
minimize $w^*$.
However, I think this won't work, mainly because we can't order the nodes by $value/weights$, and secondly because of ability to exceed knapsack size.
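For very small instances, a brute-force reference solver (my own sketch; exponential in the number of nodes, so only usable as a specification of the problem) makes the statement concrete:

```python
from itertools import combinations

def min_weight_connected(nodes, edges, value, weight, t):
    """Least total weight of a connected subgraph whose total value is >= t.
    Brute force over all vertex subsets -- tiny instances only."""
    adj = {u: set() for u in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    def connected(S):
        S = set(S)
        seen, stack = set(), [next(iter(S))]
        while stack:
            u = stack.pop()
            if u in seen:
                continue
            seen.add(u)
            stack.extend(adj[u] & S)
        return seen == S

    best = None
    for r in range(1, len(nodes) + 1):
        for S in combinations(nodes, r):
            if sum(value[u] for u in S) >= t and connected(S):
                w = sum(weight[u] for u in S)
                best = w if best is None else min(best, w)
    return best

# Path a - b - c: reaching total value 10 forces taking the middle node b.
nodes = ['a', 'b', 'c']
edges = [('a', 'b'), ('b', 'c')]
value = {'a': 5, 'b': 1, 'c': 5}
weight = {'a': 2, 'b': 4, 'c': 2}
print(min_weight_connected(nodes, edges, value, weight, t=10))  # 8  ({a, b, c})
print(min_weight_connected(nodes, edges, value, weight, t=5))   # 2  ({a} alone)
```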
Answer: This problem is NP-hard.
Let $S = \{x_1, \dots, x_n\}$ be an instance of partition.
Create a clique $G$ on $n$ nodes $v_1, \dots, v_n$.
Set both the value and the weight of $v_i$ to $x_i$.
Set $t = \frac{1}{2}\sum_{x_i \in S} x_i$.
If there is a subset $C$ of $S$ such that $2 \sum_{x_i \in C} x_i = \sum_{x_i \in S} x_i$, then the set of vertices $\{v_i \mid x_i \in C\}$ is connected, has total value $t$ and total weight $t$.
If there is no subset $C$ of $S$ such that $2 \sum_{x_i \in C} x_i = \sum_{x_i \in S} x_i$ then every subset of vertices of $G$ either has total value smaller than $t$ (and hence is not a feasible solution), or has a total value larger than $t$, and hence also weight larger than $t$.
Then you have that the answer to the instance of partition is yes if and only if the optimal solution to your problem has measure (total weight) $t$. | {
"domain": "cs.stackexchange",
"id": 17717,
"tags": "algorithms, graphs"
} |
Why can't the constancy of the speed of light be deduced from classical physics? | Question: I have read over a dozen questions about the speed of light -- "why is $c$ constant?", "why can't anything travel faster than light?", "how do we know this?"
The responses are quite clear:
The invariance of light speed is determined empirically (e.g. from the Michelson-Morley experiment).
The speed of light is simply an axiom for physics and was discovered experimentally.
The invariant value of $c$ is a fundamental property of the universe.
My question is: why can't the invariance of $c$ be deduced theoretically with the following logic?
As an object's velocity increases, its kinetic energy also increases.
The kinetic energy growth is asymptotic, meaning it approaches infinity as the velocity approaches some value.
This makes it impossible for anything with a mass to reach this velocity because it would require infinite kinetic energy.
Therefore velocity must have a limit.
See, this makes much more sense to me than the claim that the invariance of $c$ is just a postulate from lab work and that there's no reason for it to be invariant other than "it's just the way things are".
I suspect I've made a mistake. Perhaps the idea that an object's mass increases rapidly as speed increases comes from special relativity itself, which is derived from the assumption that $c$ is invariant. This doesn't, however, seem obvious to me since we should be able to observe this effect in experiments.
Answer:
The kinetic energy growth is asymptotic, meaning it approaches infinity as the velocity approaches some value.
Unfortunately, this result already assumes that you know that there is an invariant speed. Without the invariant speed the formula for KE is $KE=\frac{1}{2}mv^2$ which has no limiting speed. It is only after you already know about the invariant speed that you get the expression $KE=((1-v^2/c^2)^{-1/2}-1)mc^2$ which goes to infinity as $v$ approaches $c$.
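The contrast between the two kinetic-energy formulas is easy to see numerically (a quick sketch in SI units, with a 1 kg test mass):

```python
import math

c = 299_792_458.0  # speed of light, m/s

def ke_classical(m, v):
    return 0.5 * m * v ** 2

def ke_relativistic(m, v):
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return (gamma - 1.0) * m * c ** 2

m = 1.0  # kg
# At v = 10^6 m/s the two formulas agree to better than one part in 10^4...
print(abs(ke_relativistic(m, 1e6) / ke_classical(m, 1e6) - 1.0) < 1e-4)  # True
# ...but near c the relativistic kinetic energy blows up.
print(ke_relativistic(m, 0.999 * c) / ke_classical(m, 0.999 * c) > 10)   # True
```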
photons don't have mass so they can
This also requires already knowing about the invariant speed. Without the invariant speed there is no known relationship between all three of mass, energy, and momentum. With the invariant speed we learn $m^2 c^2=E^2/c^2-p^2$ which given the energy and momentum of light implies that light is massless.
So yes, those things, if known independently somehow, would have led to the conclusion of an invariant speed. But how could they have been known? Perhaps they could have simply been measured and known experimentally first, but historically it didn’t happen that way. Historically the invariance of c was postulated prior to measurements experimentally showing those points. Furthermore, such measurements would have been considered violations of classical physics. | {
"domain": "physics.stackexchange",
"id": 65548,
"tags": "special-relativity, classical-mechanics, speed-of-light, inertial-frames, invariants"
} |
Derivation of general equation of heat transfer & entropy | Question: In Landau & Lifshtiz Volume 6 on fluid mechanics we derive the general equation of heat transfer by starting with the expression
$$
\partial_t \left( \frac{1}{2} \rho v^2 + \rho \varepsilon \right)
= - \vec{\nabla} \cdot \left[ \rho \vec{v} \left( \frac{1}{2} v^2 +w \right) \right]
$$
derived from the conservation of energy for an ideal fluid.
Here $\rho$ is the density, $v$ the velocity, $\varepsilon$ the internal energy per unit mass and $w$ the enthalpy per unit mass.
We argue that two terms need to be added:
$- v_i \sigma_{ij}$ due to flux related to internal friction ($\sigma_{ij}$ is the viscous stress tensor)
$q_i = -\kappa \partial_i T$, the heat flux density $q$ with thermal conductivity $\kappa$ and temperature $T$.
This then gives the final equation
$$
\partial_t \left( \frac{1}{2} \rho v^2 + \rho \varepsilon \right)
= - \vec{\nabla} \cdot \left[ \rho \vec{v} \left( \frac{1}{2} v^2 +w \right)
- \vec{v}\cdot \sigma -\kappa \vec{\nabla} T \right]
$$
This is fine, however to derive energy flux $\rho \vec{v} \left(\frac{1}{2} v^2 +w \right)$ for the ideal fluid case we assume the general adiabatic equation
$$
\partial_t s + v_i \partial_i s = 0
$$
with $s$ denoting the entropy per unit mass. Which requires the absence of heat exchange, i.e. an adiabatic motion of the fluid. Assuming this we are able to cancel terms
$+\rho T v_i \partial_i s$ coming from the kinetic energy part after substituting the Euler equation $\rho \left( \partial_t v_i + v_j \partial_j v_i \right) = -\partial _i P$ ($P$ denoting the pressure) and using the thermodynamic relation $\mathrm{d}w=T\mathrm{d}s + 1/\rho\, \mathrm{d}P$
$- \rho T v_i \partial_i s = \rho T \partial_t s$ originating from the internal energy term using $\rho\mathrm{d}\varepsilon=\rho T\mathrm{d}s+P/\rho\mathrm{d}\rho$
If we now add the term for the heat flux density mentioned above, the general adiabatic equation no longer holds (?!), and thus we cannot cancel the mentioned terms, right?
So why are these terms not appearing in the general equation?
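One intermediate step worth spelling out (my own sketch, not from the book): using the relation $\rho\,\mathrm{d}\varepsilon=\rho T\,\mathrm{d}s+(P/\rho)\,\mathrm{d}\rho$ quoted above together with $w = \varepsilon + P/\rho$, the time derivative of the internal-energy density expands as
$$
\partial_t(\rho\varepsilon) = \varepsilon\,\partial_t\rho + \rho\,\partial_t\varepsilon
= \left(\varepsilon + \frac{P}{\rho}\right)\partial_t\rho + \rho T\,\partial_t s
= w\,\partial_t\rho + \rho T\,\partial_t s\,.
$$
With heat conduction included, the $\rho T\,\partial_t s$ term is no longer cancelled by $-\rho T v_i\partial_i s$; the two instead combine into the left-hand side $\rho T(\partial_t s + v_i\partial_i s)$ of the general equation of heat transfer.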
Answer: Alright, turns out I simply missed an important part of the derivative. For completeness and future reference, I mention here what resolved my issue.
Apparently the equation I mentioned above is the total energy flux
$$
\partial_t \left( \frac{1}{2} \rho v^2 + \rho \varepsilon \right)
= - \vec{\nabla} \cdot
\left[
\rho \vec{v}
\left(
\frac{1}{2} v^2 + w
\right)
- \vec{v} \cdot \sigma - \kappa \vec{\nabla}T
\right]
$$
which is to be compared with the proper differentiation of the term on the left. This ultimately leaves us with the general equation of heat transfer
$$
\rho T \left( \partial_t s + v_i \partial_is \right)
= \sigma_{ij} \partial_jv_i + \partial_i \left( \kappa \partial_i T \right)
$$ | {
"domain": "physics.stackexchange",
"id": 75208,
"tags": "thermodynamics, fluid-dynamics, entropy, thermal-conductivity"
} |
On partial reboiler and total condenser in distillation column | Question: Why is it that a total condenser and partial reboiler are used in a distillation column and it is not the other way around?
Answer: Partial reboiler is related to the fact that most times, there is material left in the bottom of the column. This can be because you need to maintain a level of liquid because the bottom take off is a purified stream you want (as product or waste treatment input) or there is not enough area to maintain the proper amount of boil up.
The total condenser is related to the fact that most times you want to condense the purified upper stream as that is the product (and since liquids have lower specific volumes than gases, it makes storage and transport more efficient) or you want a liquid stream into a second column or other downstream unit operation.
It is not a law of the universe that the partial reboiler and total condenser are the only way distillation can be done, but it is very common in current industries and so it is often discussed. | {
"domain": "engineering.stackexchange",
"id": 2975,
"tags": "chemical-engineering"
} |
Introduction to modeling chemical reactions | Question: I am looking for introduction to modeling of chemical reactions. I think there is the base approach, where concentrations of chemical species are given, plus ratios of each possible reaction / outcome. Is there a paper lightly explaining internal mechanics of such reactions, and their modeling via system of differential equations?
Answer: The topic you are asking about is called reaction dynamics, and is closely related to chemical kinetics (the former is more focussed on atomistic events, reactive and nonreactive collisions, while the latter usually describes the mathematical treatment of the kinetic laws governing species concentrations). It was the topic of the 1986 Nobel Prize in Chemistry.
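As a minimal concrete example of the ODE approach the question asks about (my own sketch, with a made-up rate constant and initial concentrations): for the elementary reaction A + B → C, the law of mass action gives $\mathrm{d}[A]/\mathrm{d}t = -k[A][B]$, which can be integrated with a simple explicit Euler step:

```python
# A + B -> C with rate r = k[A][B]; explicit Euler integration (illustrative).
k, dt, steps = 1.0, 1e-4, 50_000   # rate constant, step size, number of steps
A, B, C = 1.0, 0.8, 0.0            # initial concentrations (B is limiting)

for _ in range(steps):
    r = k * A * B                   # instantaneous reaction rate
    A, B, C = A - r * dt, B - r * dt, C + r * dt

# A + C and B + C are conserved; most of B has reacted after t = 5.
print(round(C, 3))
```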
It is covered by most physical chemistry textbooks, and more in depth by specialized textbooks on that matter. It is sometimes taught together with basics of statistical mechanics or statistical thermodynamics. Check your favorite (or local) university for a list of reading material on this topic! | {
"domain": "chemistry.stackexchange",
"id": 258,
"tags": "kinetics, reference-request, books"
} |
Dimensions of a black hole | Question: How big can a black hole become and how small can a black hole become?(minimum and maximum dimensions of a black hole)
Answer: Primordial black holes can be found (hypothetically; there is no experimental evidence yet) at any small size above the Planck mass. Stellar black holes, however, cannot have a mass below the TOV limit (1.5 to 3 solar masses).
There does seem to be an upper limit [1] of 50 billion solar masses. However, I suspect [2] that this takes into consideration formative constraints (i.e. the constraints posed on the formation of such a BH); and does not prohibit such a black hole from existing. After all, the Schwarzschild metric certainly does not impose limits on the size of a black hole.
Note that talking about the limiting dimensions of a black hole is slightly meaningless as the dimensions change in different reference frames. It is far easier to talk about the mass of a black hole; the radius can be calculated in various frames from that information.
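For reference, the radius in question is the Schwarzschild radius $r_s = 2GM/c^2$; a quick sketch of the sizes involved (SI units, rounded constants of my choosing):

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg

def schwarzschild_radius(M):
    return 2 * G * M / c ** 2

print(schwarzschild_radius(M_sun))          # ~3 km for one solar mass
print(schwarzschild_radius(5e10 * M_sun))   # ~10^14 m for the 5e10 M_sun limit
```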
[1] Natarajan, P. and Treister, E. (2009), Is there an upper limit to black hole masses?. Monthly Notices of the Royal Astronomical Society, 393: 838–845. doi: 10.1111/j.1365-2966.2008.13864.x
[2] but cannot confirm yet; I will have to read the paper more thoroughly | {
"domain": "astronomy.stackexchange",
"id": 1854,
"tags": "black-hole"
} |
Determine eigenvalue and eigenvector of two operators R and L | Question: Let H be a Hilbert space with countably infinite orthonormal basis $\{|n\rangle\}_{n \in \mathbb{N}}$. The two operators R and L on H are defined by their action on the basis elements
\begin{align}
R|n\rangle &= |n+1\rangle \\
L|n\rangle &=\begin{cases}|n-1\rangle & {\rm for}\ n>1 \\ 0 & {\rm for}\ n=1\end{cases} \end{align}
Determine the eigenvalue and eigenvectors of $R$ and $L$ and find their Hermitian conjugate.
Attempt: $R$ looks like the creation operator and $L$ looks like the annihilation operator. So I did:
\begin{align}aR|n\rangle&=a|n+1\rangle\\
&=\sqrt{n+1}|n\rangle \end{align}
Now I have a simultaneous eigenstate for the annihilation and $R$ operator with eigenvalue $\sqrt{n+1}$. I did the same with the $L$ and creation operator and found an eigenvalue of $\sqrt{n}$. The Hermitian conjugates are then just $\langle n|R^\dagger = \langle n+1|$
I do not think my steps are correct, can someone help me in the right direction please.
Answer: I assume that $n =1,2,\ldots$ and I indicate by $\psi_n$ the unit vector $|n\rangle$.
A generic vector in the Hilbert space can therefore be written as
$$\psi = \sum_{n=1}^{+\infty} c_n \psi_n$$
where $\sum_n |c_n|^2 < +\infty$.
The action of $R$ and $L$ on that vector respectively is:
$$R \psi = \sum_{n=1}^{+\infty} c_n \psi_{n+1}$$
and
$$L \psi = \sum_{n=2}^{+\infty} c_n \psi_{n-1}\:.$$
Thus, assuming $\phi = \sum_{m=1}^{+\infty} b_m \psi_m$,
$$\langle \phi | R \psi \rangle = \sum_{m=1}^{+\infty} \sum_{n=1}^{+\infty} b^*_m c_n \langle \psi_m | R \psi_n \rangle = \sum_{m=1}^{+\infty} \sum_{n=1}^{+\infty} b^*_m c_n \langle \psi_m | \psi_{n+1} \rangle $$ $$=
\sum_{m=1}^{+\infty} \sum_{n=1}^{+\infty} b^*_m c_n\delta_{m,n+1}
= \sum_{n=1}^{+\infty} b^*_{n+1} c_n\:.\tag{1}$$
Similarly,
$$\langle L\phi | \psi \rangle = \sum_{m=1}^{+\infty} \sum_{n=1}^{+\infty} b^*_m c_n \langle L \psi_{m} | \psi_n \rangle = \sum_{m=2}^{+\infty} \sum_{n=1}^{+\infty} b^*_m c_n \langle \psi_{m-1} | \psi_{n} \rangle $$ $$ =
\sum_{m=2}^{+\infty} \sum_{n=1}^{+\infty} b^*_m c_n\delta_{m-1,n}
= \sum_{n=1}^{+\infty} b^*_{n+1} c_n\:.\tag{2}$$
We have found that
$$\langle \phi | R \psi \rangle = \langle L\phi | \psi \rangle$$
which means $R^\dagger=L$. Taking the complex conjugate of (1) and (2), we also get
$$\langle R\psi | \phi \rangle = \langle \psi | L \phi \rangle$$
which means $L^\dagger=R$. (I deliberately omitted to discuss several issues about domains of the involved operators and convergence of series, but everything goes right taking all mathematical subtleties into account...)
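The adjointness result can be spot-checked numerically on a finite truncation of the basis (my own sketch; on the truncated space the shift operators are only approximations of their infinite-dimensional counterparts, but $L = R^\dagger$ holds exactly there too):

```python
import numpy as np

N = 200  # truncate the basis to {|1>, ..., |N>}
R = np.zeros((N, N))
L = np.zeros((N, N))
for n in range(N - 1):
    R[n + 1, n] = 1.0   # R |n> = |n+1>
    L[n, n + 1] = 1.0   # L |n+1> = |n>
print(np.allclose(L, R.T))  # True: L is the Hermitian conjugate of R

# Candidate eigenvector of L with eigenvalue lambda: c_n = lambda^(n-1)
lam = 0.5
psi = lam ** np.arange(N)
print(np.linalg.norm(L @ psi - lam * psi) < 1e-12)  # True up to truncation error
```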
Regarding eigenvectors, $R\psi = \lambda \psi$ can be rewritten as
$$R\psi = \sum_{n=1}^{+\infty} c_n R\psi_n = \sum_{n=1}^{+\infty} c_n \psi_{n+1} = \lambda \psi = \sum_{n=1}^{+\infty} \lambda c_n \psi_{n}$$
that is
$$ \sum_{n=1}^{+\infty} c_n \psi_{n+1} = \sum_{n=1}^{+\infty} \lambda c_n \psi_{n}$$
that is, in turn,
$$ \sum_{n=2}^{+\infty} c_{n-1} \psi_{n} = \lambda c_1 \psi_1 +\sum_{n=2}^{+\infty} \lambda c_n \psi_{n}\:,$$
which implies $\lambda c_1 =0$ and $\lambda c_n = c_{n-1}$ for $n\geq 2$.
These equations only admit the solution $c_n=0$, the proof being immediate, both for $\lambda =0$ and $\lambda \neq 0$. No eigenvectors exist for $R$.
With the same approach we see that $L\psi = \lambda \psi$ is equivalent to $c_{n+1}= \lambda c_n$, so that $c_n = \lambda^{n-1} c_1$. Hence a candidate eigenvector with eigenvalue $\lambda \neq 0$ is
$$\psi_\lambda = \frac{1}{\lambda}\sum_{n=1}^{+\infty} \lambda^n \psi_n\:.$$
It is an eigenvector provided the series converges in the Hilbert space, i.e.
$$\sum_{n=1}^{+\infty} |\lambda|^{2n} <+\infty\:.$$
In fact this happens only for $|\lambda|^2 <1$. So, there is an eigenvector of $L$ with eigenvalue $\lambda$ for every complex $\lambda$ with $0 <|\lambda|<1$ (and for $\lambda = 0$, the recursion forces $c_n = 0$ for $n \geq 2$, so $\psi_1$ itself is an eigenvector, since $L\psi_1 = 0$). | {
"domain": "physics.stackexchange",
"id": 16762,
"tags": "homework-and-exercises, operators, hilbert-space, eigenvalue"
} |
Can one raise indices on covariant derivative and products thereof? | Question: Can the following be true?
$g^{\sigma\rho}\nabla_{\rho}\nabla_{\mu} = \nabla^{\sigma}\nabla_{\mu}$
$g^{\sigma\rho}\nabla_{\nu}\nabla_{\sigma} = \nabla_{\nu}\nabla^{\rho}$
$g^{\sigma\rho}\nabla_{\nu}\nabla_{\mu}T_{\sigma\rho} = \nabla_{\nu}\nabla_{\mu}T$
Answer:
This is true - in fact you could define $\nabla^\sigma = g^{\sigma\rho} \nabla_\rho$.
I assume this meant to say
$$ g^{\sigma\rho} \nabla_\nu \nabla_\sigma = \nabla_\nu \nabla^\rho. $$
Again, this is true, but for a slightly less trivial reason than (1). To employ (1) to prove this, you need to be able to switch $g^{\sigma\rho}$ with $\nabla_\nu$, which you are able to do because one of the axioms we start with when defining the covariant derivative is that it commutes with the metric (i.e., the metric has vanishing covariant derivative, so that other term in the product rule drops out).
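Spelled out for (2) (a one-line sketch): metric compatibility, $\nabla_\nu g^{\sigma\rho} = 0$, lets the inverse metric pass through the derivative,
$$ g^{\sigma\rho}\,\nabla_\nu \nabla_\sigma = \nabla_\nu\left(g^{\sigma\rho}\, \nabla_\sigma\right) = \nabla_\nu \nabla^{\rho}\,. $$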
This also holds, following the same reasoning as in (2). | {
"domain": "physics.stackexchange",
"id": 42201,
"tags": "differential-geometry, metric-tensor, tensor-calculus, differentiation, covariance"
} |
Why does a mass move out when rotating? | Question: If a mass $m$ is put on a stationary table, and then we start rotating the table counterclockwise, the mass will fall from the table, as can be seen in this video https://youtu.be/IOcrHOc23N4?t=187.
If I understand correctly, this happens due to friction $f$ between the mass and table. When the table starts rotating, the mass wants to stay inertial, and thus the table applies a force on it, which causes it to fall from the table. I've added a diagram:
Is my analysis correct? is the direction of the force correct?
This force pushes the ball out, thus it acts centrifugally (away from the center). However, of course, centrifugal forces are fictitious! So is this not a centrifugal force? (I'm in an inertial frame, outside the table).
I've read previous questions and couldn't understand them.
Answer:
Is my analysis correct?
Almost. A correction: Friction initiates the motion. Without friction, the object would never move and would never start sliding. Friction causes the object to speed up.
But from here on, there is no further need for friction. An object with a speed will keep that speed until stopped. This is what is meant by inertia and described via Newton's 1st law. So, simply by having a speed, the object will eventually fall off the table (if smooth).
There is a force present trying to alter the object's route towards the table edge, though, and that is friction along the centripetal direction (towards the centre). This friction comes into existence gradually, as the object's straight-line motion is no longer tangential and the object once again slides over the spinning surface. So, we should keep two friction directions separated in our heads: tangential friction that causes speeding up, and centripetal friction that causes turning.
All in all, we are dealing with two types of effects simultaneously here: the tendency of objects to move in straight lines (the kinematics), and the forces that create motion (the dynamics). Each topic can easily be analysed separately - together, they can make it confusing to see what causes what, since we have to describe and explain both the kinematics and the dynamics at the same time, as here.
is the direction of the force correct?
No. The rotation will cause the surface under the mass to move rightwards. The friction will pull rightwards as well. Not leftwards. Remember that friction tries to prevent sliding - it does so by pulling the object along with the moving surface towards the right.
Via Newton's 3rd law, all forces come in pairs. As the object is pulled rightwards by friction, simultaneously the spinning surface is pulled leftwards by that same friction force, counteracting the spinning a bit. (But I don't think that is what you meant.)
This force pushes the ball out, thus it acts centrifugally (away from center).
No, it acts tangentially. At first sight it might look like the object is moving straight, perpendicularly away from the centre. But it actually isn't. It starts out tangentially and ideally continues in a straight line from here, which is away from the circle - realistically, the mentioned inwards friction will appear and cause the object's path to deviate and follow the circle a little bit, while at each moment moving a bit further away along its instantaneous straight-line direction.
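The "moving away without any outward force" point can be made quantitative (a small sketch with made-up numbers): an object released tangentially at radius $r_0$ with speed $v$ travels in a straight line, so its distance from the centre is $r(t) = \sqrt{r_0^2 + (vt)^2}$, which grows monotonically:

```python
import math

r0, v = 1.0, 2.0  # initial radius (m) and tangential speed (m/s), made up

def r(t):
    # straight-line (force-free) motion after a tangential release
    return math.hypot(r0, v * t)

print(r(0.0) == r0)            # True: starts at the rim radius
print(r(1.0) < r(2.0) < r(3.0))  # True: distance from centre only grows
```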
However, of course, centrifugal forces are fictitious! So is this not a centrifugal force?
Indeed, no centrifugal force exists. Only a centripetal force does. As described, there is no force pushing the object straight outwards. Rather, the object will - when in motion, so due to its inertia - want to keep its straight-line motion which in this case started out tangentially due to a tangential force. And should the object ever fully leave the spinning surface, then it will (ideally) continue moving farther and farther away at the same, constant speed. Until something stops it. There is no force pushing it outwards; there are only forces pulling inwards and tangentially. | {
"domain": "physics.stackexchange",
"id": 85278,
"tags": "newtonian-mechanics, rotational-dynamics, reference-frames, free-body-diagram, centrifugal-force"
} |
The Expansion Coefficients for a Particle in a Box | Question: This is an expansion of two previous questions I had before (I am very confused by it). I am now encountering a problem with an integral. We want a general expression for the probability of finding E. We have the expression for the wave function for the particle in a box:
$$\psi(x)=x\sin(\frac{\pi x}{a})$$
We evaluate $c_n$ to find $|c_n|^2$, the probability our measured E will be at level n. To calculate $c_n$, I did
$$c_n = \int_0^a \sin(\pi\, n\frac{x}{a})x\sin(\pi\frac{x}{a})dx$$
All times some normalization constant which is not important for my question. We evaluate this integral using mathematica, and find that it equals
$$\frac{a^2(-2n(1+\cos(n\pi))-(-1+n^2)\pi\sin(n\pi))}{(-1+n^2)^2\pi^2}$$
Obviously, we have a problem for $n=1$: we get zero over zero. Is there something we did wrong with this integral? The only way I can get an answer that isn't indeterminate is if I take $n$ to be $1$ before integration. Do we have a piecewise function for our general expression? I am just very confused. Any help would be appreciated.
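The $n = 1$ case can indeed be handled separately; a quick check with sympy (illustrative, with the normalization constant omitted as in the question):

```python
import sympy as sp

x, a = sp.symbols('x a', positive=True)

def c(n):
    # overlap integral from the question, evaluated for a specific integer n
    return sp.integrate(sp.sin(n * sp.pi * x / a) * x * sp.sin(sp.pi * x / a),
                        (x, 0, a))

# Setting n = 1 before integrating gives a finite answer, a**2/4;
# n = 2 matches the general Mathematica formula, -8*a**2/(9*pi**2).
print(sp.simplify(c(1)))
print(sp.simplify(c(2)))
```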
Answer: Hints: It is easy to do the integral yourself without the aid of Mathematica.
For instance use the product-to-sum-formula for the two sines.
Then integrate by part to get rid of the $x$ power.
You will need the following primitive integrals (aka. antiderivatives or indefinite integrals):
$$ \int \!dx ~\cos(bx)~=~ \left\{\begin{array}{ccc} \frac{\sin(bx)}{b} &\text{for}& b\neq 0, \\ x&\text{for}& b= 0,\end{array} \right. $$
and
$$ \left\{\begin{array}{ccc}\int \!dx ~\frac{\sin(bx)}{b} &=& \frac{1-\cos(bx)}{b^2} &\text{for}& b\neq 0, \\ \int \!dx~x &=& \frac{x^2}{2}&\text{for}& b= 0.\end{array} \right. $$
Here the various integration constants have been chosen such that the $b=0$ case can be viewed as the limit $b\to 0$ of the $b\neq0$ case.
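As a sanity check, the general-$n$ result and the separately treated $n=1$ case can be verified with a short SymPy computation (a sketch; the overall normalization constant is dropped, as in the question):

```python
import sympy as sp

x, a = sp.symbols('x a', positive=True)
n = sp.symbols('n', positive=True)

# c_n integrand (normalization constant dropped, as in the question)
integrand = sp.sin(n*sp.pi*x/a) * x * sp.sin(sp.pi*x/a)

# n = 1 treated separately: integral of x*sin^2(pi*x/a) over [0, a]
c1 = sp.integrate(integrand.subs(n, 1), (x, 0, a))   # a**2/4

# a representative n != 1 case
c2 = sp.integrate(integrand.subs(n, 2), (x, 0, a))   # -8*a**2/(9*pi**2)

# Mathematica's general-n expression is indeterminate (0/0) at n = 1,
# but its limit as n -> 1 recovers the separately computed value
general = a**2*(-2*n*(1 + sp.cos(n*sp.pi)) - (n**2 - 1)*sp.pi*sp.sin(n*sp.pi)) \
          / ((n**2 - 1)**2 * sp.pi**2)
limit_at_1 = sp.limit(general, n, 1)                 # a**2/4
```

So the general expression is fine for $n\neq 1$, and the $n=1$ value is just its limit, consistent with the choice of integration constants above.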
Then the above comment of Jerry Schirmer clearly applies: You can either recover the limit $b\to 0$ from the $b\neq0$ case, or treat the $b=0$ case separately. | {
"domain": "physics.stackexchange",
"id": 9523,
"tags": "quantum-mechanics, homework-and-exercises"
} |
Subscribe to two integer-type msgs with one callback function | Question:
I tried to subscribe to two integer-type topics, but a lot of errors are showing. Please help me correct the code.
code:
#include <ros/ros.h>
#include <std_msgs/Int8.h>
#include <message_filters/subscriber.h>
#include <message_filters/time_synchronizer.h>
using namespace std_msgs;
using namespace message_filters;
void callback(const Int8ConstPtr& angle,const Int8ConstPtr& range_data)
{
}
int main( int argc, char** argv )
{
ros::init(argc, argv, "basic_shapes");
ros::NodeHandle n;
ros::Rate r(1);
message_filters::Subscriber<Int8> sub1(n,"angle", 1);
message_filters::Subscriber<Int8> sub2(n,"range_data", 1);
TimeSynchronizer<Int8,Int8> sync(sub1,sub2,10);
sync.registerCallback(boost::bind(&callback,_1, _2));
ros::spin();
return 0;
}
error msgs:
/opt/ros/indigo/include/message_filters/sync_policies/exact_time.h: In instantiation of ‘void message_filters::sync_policies::ExactTime<M0, M1, M2, M3, M4, M5, M6, M7, M8>::add(const typename boost::mpl::at_c<typename message_filters::PolicyBase<M0, M1, M2, M3, M4, M5, M6, M7, M8>::Events, i>::type&) [with int i = 0; M0 = std_msgs::Int8_std::allocator<void >; M1 = std_msgs::Int8_std::allocator<void >; M2 = message_filters::NullType; M3 = message_filters::NullType; M4 = message_filters::NullType; M5 = message_filters::NullType; M6 = message_filters::NullType; M7 = message_filters::NullType; M8 = message_filters::NullType; typename boost::mpl::at_c<typename message_filters::PolicyBase<M0, M1, M2, M3, M4, M5, M6, M7, M8>::Events, i>::type = ros::MessageEvent<const std_msgs::Int8_std::allocator<void > >]’:
/opt/ros/indigo/include/message_filters/synchronizer.h:358:5: required from ‘void message_filters::Synchronizer::cb(const typename boost::mpl::at_c<typename Policy::Events, i>::type&) [with int i = 0; Policy = message_filters::sync_policies::ExactTime<std_msgs::Int8_std::allocator<void >, std_msgs::Int8_std::allocator<void >, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType>; typename boost::mpl::at_c<typename Policy::Events, i>::type = ros::MessageEvent<const std_msgs::Int8_std::allocator<void > >]’
/opt/ros/indigo/include/message_filters/synchronizer.h:290:138: required from ‘void message_filters::Synchronizer::connectInput(F0&, F1&, F2&, F3&, F4&, F5&, F6&, F7&, F8&) [with F0 = message_filters::Subscriber<std_msgs::Int8_std::allocator<void > >; F1 = message_filters::Subscriber<std_msgs::Int8_std::allocator<void > >; F2 = message_filters::NullFilter<message_filters::NullType>; F3 = message_filters::NullFilter<message_filters::NullType>; F4 = message_filters::NullFilter<message_filters::NullType>; F5 = message_filters::NullFilter<message_filters::NullType>; F6 = message_filters::NullFilter<message_filters::NullType>; F7 = message_filters::NullFilter<message_filters::NullType>; F8 = message_filters::NullFilter<message_filters::NullType>; Policy = message_filters::sync_policies::ExactTime<std_msgs::Int8_std::allocator<void >, std_msgs::Int8_std::allocator<void >, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType>]’
/opt/ros/indigo/include/message_filters/synchronizer.h:282:52: required from ‘void message_filters::Synchronizer::connectInput(F0&, F1&, F2&, F3&, F4&, F5&, F6&, F7&) [with F0 = message_filters::Subscriber<std_msgs::Int8_std::allocator<void > >; F1 = message_filters::Subscriber<std_msgs::Int8_std::allocator<void > >; F2 = message_filters::NullFilter<message_filters::NullType>; F3 = message_filters::NullFilter<message_filters::NullType>; F4 = message_filters::NullFilter<message_filters::NullType>; F5 = message_filters::NullFilter<message_filters::NullType>; F6 = message_filters::NullFilter<message_filters::NullType>; F7 = message_filters::NullFilter<message_filters::NullType>; Policy = message_filters::sync_policies::ExactTime<std_msgs::Int8_std::allocator<void >, std_msgs::Int8_std::allocator<void >, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType>]’
/opt/ros/indigo/include/message_filters/synchronizer.h:275:48: required from ‘void message_filters::Synchronizer::connectInput(F0&, F1&, F2&, F3&, F4&, F5&, F6&) [with F0 = message_filters::Subscriber<std_msgs::Int8_std::allocator<void > >; F1 = message_filters::Subscriber<std_msgs::Int8_std::allocator<void > >; F2 = message_filters::NullFilter<message_filters::NullType>; F3 = message_filters::NullFilter<message_filters::NullType>; F4 = message_filters::NullFilter<message_filters::NullType>; F5 = message_filters::NullFilter<message_filters::NullType>; F6 = message_filters::NullFilter<message_filters::NullType>; Policy = message_filters::sync_policies::ExactTime<std_msgs::Int8_std::allocator<void >, std_msgs::Int8_std::allocator<void >, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType>]’
/opt/ros/indigo/include/message_filters/synchronizer.h:268:44: required from ‘void message_filters::Synchronizer::connectInput(F0&, F1&, F2&, F3&, F4&, F5&) [with F0 = message_filters::Subscriber<std_msgs::Int8_std::allocator<void > >; F1 = message_filters::Subscriber<std_msgs::Int8_std::allocator<void > >; F2 = message_filters::NullFilter<message_filters::NullType>; F3 = message_filters::NullFilter<message_filters::NullType>; F4 = message_filters::NullFilter<message_filters::NullType>; F5 = message_filters::NullFilter<message_filters::NullType>; Policy = message_filters::sync_policies::ExactTime<std_msgs::Int8_std::allocator<void >, std_msgs::Int8_std::allocator<void >, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType>]’
/opt/ros/indigo/include/message_filters/synchronizer.h:261:40: required from ‘void message_filters::Synchronizer::connectInput(F0&, F1&, F2&, F3&, F4&) [with F0 = message_filters::Subscriber<std_msgs::Int8_std::allocator<void > >; F1 = message_filters::Subscriber<std_msgs::Int8_std::allocator<void > >; F2 = message_filters::NullFilter<message_filters::NullType>; F3 = message_filters::NullFilter<message_filters::NullType>; F4 = message_filters::NullFilter<message_filters::NullType>; Policy = message_filters::sync_policies::ExactTime<std_msgs::Int8_std::allocator<void >, std_msgs::Int8_std::allocator<void >, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType>]’
/opt/ros/indigo/include/message_filters/synchronizer.h:254:36: required from ‘void message_filters::Synchronizer::connectInput(F0&, F1&, F2&, F3&) [with F0 = message_filters::Subscriber<std_msgs::Int8_std::allocator<void > >; F1 = message_filters::Subscriber<std_msgs::Int8_std::allocator<void > >; F2 = message_filters::NullFilter<message_filters::NullType>; F3 = message_filters::NullFilter<message_filters::NullType>; Policy = message_filters::sync_policies::ExactTime<std_msgs::Int8_std::allocator<void >, std_msgs::Int8_std::allocator<void >, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType>]’
/opt/ros/indigo/include/message_filters/synchronizer.h:247:32: required from ‘void message_filters::Synchronizer::connectInput(F0&, F1&, F2&) [with F0 = message_filters::Subscriber<std_msgs::Int8_std::allocator<void > >; F1 = message_filters::Subscriber<std_msgs::Int8_std::allocator<void > >; F2 = message_filters::NullFilter<message_filters::NullType>; Policy = message_filters::sync_policies::ExactTime<std_msgs::Int8_std::allocator<void >, std_msgs::Int8_std::allocator<void >, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType>]’
/opt/ros/indigo/include/message_filters/synchronizer.h:240:28: required from ‘void message_filters::Synchronizer::connectInput(F0&, F1&) [with F0 = message_filters::Subscriber<std_msgs::Int8_std::allocator<void > >; F1 = message_filters::Subscriber<std_msgs::Int8_std::allocator<void > >; Policy = message_filters::sync_policies::ExactTime<std_msgs::Int8_std::allocator<void >, std_msgs::Int8_std::allocator<void >, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType>]’
/opt/ros/indigo/include/message_filters/time_synchronizer.h:120:24: required from ‘message_filters::TimeSynchronizer<M0, M1, M2, M3, M4, M5, M6, M7, M8>::TimeSynchronizer(F0&, F1&, uint32_t) [with F0 = message_filters::Subscriber<std_msgs::Int8_std::allocator<void > >; F1 = message_filters::Subscriber<std_msgs::Int8_std::allocator<void > >; M0 = std_msgs::Int8_std::allocator<void >; M1 = std_msgs::Int8_std::allocator<void >; M2 = message_filters::NullType; M3 = message_filters::NullType; M4 = message_filters::NullType; M5 = message_filters::NullType; M6 = message_filters::NullType; M7 = message_filters::NullType; M8 = message_filters::NullType; uint32_t = unsigned int]’
/home/anand/catkin_ws/src/using_markers/src/basic_shapes.cpp:40:46: required from here
/opt/ros/indigo/include/message_filters/sync_policies/exact_time.h:127:101: error: ‘value’ is not a member of ‘ros::message_traits::TimeStamp<std_msgs::Int8_std::allocator<void >, void>’
Tuple& t = tuples_[mt::TimeStamp<typename mpl::at_c<Messages, i>::type>::value(*evt.getMessage())];
^
/opt/ros/indigo/include/message_filters/sync_policies/exact_time.h: In instantiation of ‘void message_filters::sync_policies::ExactTime<M0, M1, M2, M3, M4, M5, M6, M7, M8>::add(const typename boost::mpl::at_c<typename message_filters::PolicyBase<M0, M1, M2, M3, M4, M5, M6, M7, M8>::Events, i>::type&) [with int i = 1; M0 = std_msgs::Int8_std::allocator<void >; M1 = std_msgs::Int8_std::allocator<void >; M2 = message_filters::NullType; M3 = message_filters::NullType; M4 = message_filters::NullType; M5 = message_filters::NullType; M6 = message_filters::NullType; M7 = message_filters::NullType; M8 = message_filters::NullType; typename boost::mpl::at_c<typename message_filters::PolicyBase<M0, M1, M2, M3, M4, M5, M6, M7, M8>::Events, i>::type = ros::MessageEvent<const std_msgs::Int8_std::allocator<void > >]’:
/opt/ros/indigo/include/message_filters/synchronizer.h:358:5: required from ‘void message_filters::Synchronizer::cb(const typename boost::mpl::at_c<typename Policy::Events, i>::type&) [with int i = 1; Policy = message_filters::sync_policies::ExactTime<std_msgs::Int8_std::allocator<void >, std_msgs::Int8_std::allocator<void >, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType>; typename boost::mpl::at_c<typename Policy::Events, i>::type = ros::MessageEvent<const std_msgs::Int8_std::allocator<void > >]’
/opt/ros/indigo/include/message_filters/synchronizer.h:291:138: required from ‘void message_filters::Synchronizer::connectInput(F0&, F1&, F2&, F3&, F4&, F5&, F6&, F7&, F8&) [with F0 = message_filters::Subscriber<std_msgs::Int8_std::allocator<void > >; F1 = message_filters::Subscriber<std_msgs::Int8_std::allocator<void > >; F2 = message_filters::NullFilter<message_filters::NullType>; F3 = message_filters::NullFilter<message_filters::NullType>; F4 = message_filters::NullFilter<message_filters::NullType>; F5 = message_filters::NullFilter<message_filters::NullType>; F6 = message_filters::NullFilter<message_filters::NullType>; F7 = message_filters::NullFilter<message_filters::NullType>; F8 = message_filters::NullFilter<message_filters::NullType>; Policy = message_filters::sync_policies::ExactTime<std_msgs::Int8_std::allocator<void >, std_msgs::Int8_std::allocator<void >, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType>]’
/opt/ros/indigo/include/message_filters/synchronizer.h:282:52: required from ‘void message_filters::Synchronizer::connectInput(F0&, F1&, F2&, F3&, F4&, F5&, F6&, F7&) [with F0 = message_filters::Subscriber<std_msgs::Int8_std::allocator<void > >; F1 = message_filters::Subscriber<std_msgs::Int8_std::allocator<void > >; F2 = message_filters::NullFilter<message_filters::NullType>; F3 = message_filters::NullFilter<message_filters::NullType>; F4 = message_filters::NullFilter<message_filters::NullType>; F5 = message_filters::NullFilter<message_filters::NullType>; F6 = message_filters::NullFilter<message_filters::NullType>; F7 = message_filters::NullFilter<message_filters::NullType>; Policy = message_filters::sync_policies::ExactTime<std_msgs::Int8_std::allocator<void >, std_msgs::Int8_std::allocator<void >, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType>]’
/opt/ros/indigo/include/message_filters/synchronizer.h:275:48: required from ‘void message_filters::Synchronizer::connectInput(F0&, F1&, F2&, F3&, F4&, F5&, F6&) [with F0 = message_filters::Subscriber<std_msgs::Int8_std::allocator<void > >; F1 = message_filters::Subscriber<std_msgs::Int8_std::allocator<void > >; F2 = message_filters::NullFilter<message_filters::NullType>; F3 = message_filters::NullFilter<message_filters::NullType>; F4 = message_filters::NullFilter<message_filters::NullType>; F5 = message_filters::NullFilter<message_filters::NullType>; F6 = message_filters::NullFilter<message_filters::NullType>; Policy = message_filters::sync_policies::ExactTime<std_msgs::Int8_std::allocator<void >, std_msgs::Int8_std::allocator<void >, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType>]’
/opt/ros/indigo/include/message_filters/synchronizer.h:268:44: required from ‘void message_filters::Synchronizer::connectInput(F0&, F1&, F2&, F3&, F4&, F5&) [with F0 = message_filters::Subscriber<std_msgs::Int8_std::allocator<void > >; F1 = message_filters::Subscriber<std_msgs::Int8_std::allocator<void > >; F2 = message_filters::NullFilter<message_filters::NullType>; F3 = message_filters::NullFilter<message_filters::NullType>; F4 = message_filters::NullFilter<message_filters::NullType>; F5 = message_filters::NullFilter<message_filters::NullType>; Policy = message_filters::sync_policies::ExactTime<std_msgs::Int8_std::allocator<void >, std_msgs::Int8_std::allocator<void >, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType>]’
/opt/ros/indigo/include/message_filters/synchronizer.h:261:40: required from ‘void message_filters::Synchronizer::connectInput(F0&, F1&, F2&, F3&, F4&) [with F0 = message_filters::Subscriber<std_msgs::Int8_std::allocator<void > >; F1 = message_filters::Subscriber<std_msgs::Int8_std::allocator<void > >; F2 = message_filters::NullFilter<message_filters::NullType>; F3 = message_filters::NullFilter<message_filters::NullType>; F4 = message_filters::NullFilter<message_filters::NullType>; Policy = message_filters::sync_policies::ExactTime<std_msgs::Int8_std::allocator<void >, std_msgs::Int8_std::allocator<void >, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType>]’
/opt/ros/indigo/include/message_filters/synchronizer.h:254:36: required from ‘void message_filters::Synchronizer::connectInput(F0&, F1&, F2&, F3&) [with F0 = message_filters::Subscriber<std_msgs::Int8_std::allocator<void > >; F1 = message_filters::Subscriber<std_msgs::Int8_std::allocator<void > >; F2 = message_filters::NullFilter<message_filters::NullType>; F3 = message_filters::NullFilter<message_filters::NullType>; Policy = message_filters::sync_policies::ExactTime<std_msgs::Int8_std::allocator<void >, std_msgs::Int8_std::allocator<void >, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType>]’
/opt/ros/indigo/include/message_filters/synchronizer.h:247:32: required from ‘void message_filters::Synchronizer::connectInput(F0&, F1&, F2&) [with F0 = message_filters::Subscriber<std_msgs::Int8_std::allocator<void > >; F1 = message_filters::Subscriber<std_msgs::Int8_std::allocator<void > >; F2 = message_filters::NullFilter<message_filters::NullType>; Policy = message_filters::sync_policies::ExactTime<std_msgs::Int8_std::allocator<void >, std_msgs::Int8_std::allocator<void >, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType>]’
/opt/ros/indigo/include/message_filters/synchronizer.h:240:28: required from ‘void message_filters::Synchronizer::connectInput(F0&, F1&) [with F0 = message_filters::Subscriber<std_msgs::Int8_std::allocator<void > >; F1 = message_filters::Subscriber<std_msgs::Int8_std::allocator<void > >; Policy = message_filters::sync_policies::ExactTime<std_msgs::Int8_std::allocator<void >, std_msgs::Int8_std::allocator<void >, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType, message_filters::NullType>]’
/opt/ros/indigo/include/message_filters/time_synchronizer.h:120:24: required from ‘message_filters::TimeSynchronizer<M0, M1, M2, M3, M4, M5, M6, M7, M8>::TimeSynchronizer(F0&, F1&, uint32_t) [with F0 = message_filters::Subscriber<std_msgs::Int8_std::allocator<void > >; F1 = message_filters::Subscriber<std_msgs::Int8_std::allocator<void > >; M0 = std_msgs::Int8_std::allocator<void >; M1 = std_msgs::Int8_std::allocator<void >; M2 = message_filters::NullType; M3 = message_filters::NullType; M4 = message_filters::NullType; M5 = message_filters::NullType; M6 = message_filters::NullType; M7 = message_filters::NullType; M8 = message_filters::NullType; uint32_t = unsigned int]’
/home/anand/catkin_ws/src/using_markers/src/basic_shapes.cpp:40:46: required from here
/opt/ros/indigo/include/message_filters/sync_policies/exact_time.h:127:101: error: ‘value’ is not a member of ‘ros::message_traits::TimeStamp<std_msgs::Int8_std::allocator<void >, void>’
make[2]: *** [using_markers/CMakeFiles/basic_shapes.dir/src/basic_shapes.cpp.o] Error 1
make[1]: *** [using_markers/CMakeFiles/basic_shapes.dir/all] Error 2
But the same code works for sensor_msgs types. Why?
Originally posted by anadgopi1994 on ROS Answers with karma: 81 on 2016-11-27
Post score: -1
Answer:
"The TimeSynchronizer filter synchronizes incoming channels by the timestamps contained in their headers,"
No header -> No TimeSynchronizer
And please:
- invest some seconds to check your spelling
- don't post irrelevant code (commented includes??)
- format code as such (use the 101010 button)
- "But lot of errors are showing": there is no ROS interface to crystal balls yet, so please post the error messages.
Originally posted by NEngelhard with karma: 3519 on 2016-11-27
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by anadgopi1994 on 2016-11-27:
I edited my post and put the error messages in the post. Now can you please help me resolve this problem?
Comment by NEngelhard on 2016-11-27:
I already answered the question. and @gvdhoorn also told you in your first thread "If both messages have headers"... And you broke the code formatting and still have typos in the question.
Comment by anadgopi1994 on 2016-11-27:
Both messages have the header #include <std_msgs/Int8.h>. I didn't understand what you are saying; can you clarify?
Comment by NEngelhard on 2016-11-27:
Please fix your question (formatting, typos)
Comment by anadgopi1994 on 2016-11-27:
I already corrected my question
Comment by NEngelhard on 2016-11-27:
neither is the code formatted correctly nor have you removed the typos. I'm sorry to be a bit pedantic, but you want some help here and some effort on your side should be visible. | {
"domain": "robotics.stackexchange",
"id": 26341,
"tags": "ros, subscribe, topics"
} |
DecisionTreeRegressor under the hood of GradientBoostingClassifier | Question: I'm inspecting the weak estimators of my GradientBoostingClassifier model. This model was fit on a binary class dataset.
I noticed that all the weak estimators under this ensemble classifier are decision tree regressor objects. This seems strange to me intuitively.
I took the first decision tree in the ensemble and used it to predict independently on my entire dataset. The unique answers from the dataset were the following:
array([-2.74, -1.94, -1.69, ...])
My question is: why and how does the gradient boosting classifier turn the weak estimators into regressor tasks (instead of classification tasks) that are not bound by 0 and 1? Ultimately the GradientBoostingClassifier outputs a pseudo-probability between 0 and 1: why aren't the ensemble of weak estimators doing the same?
Answer: After reading through more of the documentation, I found the section that covers the classification case; it can be found here. Additionally, this StatQuest video was very useful.
1.11.4.5.2. Classification
Gradient boosting for classification is very similar to the regression case. However, the sum of the trees $F_M(x_i)$ is not homogeneous to a prediction: it cannot be a class, since the trees predict continuous values.
The mapping from the value $F_M(x_i)$ to a class or a probability is loss-dependent. For the deviance (or log-loss), the probability that $x_i$ belongs to the positive class is modeled as $p(y_i = 1 \mid x_i) = \sigma(F_M(x_i))$, where $\sigma$ is the sigmoid function.
For multiclass classification, K trees (for K classes) are built at each of the $M$ iterations. The probability that $x_i$ belongs to class k is modeled as a softmax of the $F_{M,k}(x_i)$ values.
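To see the loss-dependent mapping concretely, here is a small NumPy sketch. The raw values are the ones quoted in the question; note that in the real model the decision function is the initial prediction plus the learning-rate-scaled sum over all trees, not a single tree's output:

```python
import numpy as np

# raw regression-tree outputs (decision-function values), as in the question
raw = np.array([-2.74, -1.94, -1.69])

# for binary log-loss, probability of the positive class = sigmoid(raw)
proba = 1.0 / (1.0 + np.exp(-raw))

# every mapped value lies in (0, 1), and a larger raw score
# gives a larger positive-class probability
```

So the sub-estimators are free to output any real number; the bounding to [0, 1] happens only in this final, loss-dependent mapping.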
Note that even for a classification task, the
sub-estimator is still a regressor, not a classifier. This is because the sub-estimators are trained to predict (negative) gradients, which are always continuous quantities. | {
"domain": "datascience.stackexchange",
"id": 8531,
"tags": "classification, scikit-learn, regression, ensemble-modeling, natural-gradient-boosting"
} |
which is the best board with ros | Question:
Hi dear,
I want to use ROS on a board like BeagleBoard, ODROID, or Cubieboard, but I do not have enough information about ROS and these boards.
Can someone help me decide which board is best, and why?
Originally posted by Hamid Didari on ROS Answers with karma: 1769 on 2013-10-24
Post score: 0
Original comments
Comment by Hendrik Wiese on 2013-10-24:
We're using an Intel NUC board with a Core i3. Works like a charm.
Comment by Hamid Didari on 2013-10-24:
Hi Hendrik
Could you tell me details about this board? Which version of Linux do you use on it, and how hard is it to work with?
Comment by Hendrik Wiese on 2013-10-25:
It's a pretty neat small board with an mSATA and two memory slots and a Core i3 processor. Here's /proc/cpuinfo of the board: http://pastebin.com/ysmZjgU5 Working with it is like on a normal PC. Ubuntu 13.04 64bit runs pretty well on it.
Answer:
It depends on what you want to do with it. There are instructions for the BeagleBoard and the Raspberry Pi, but whether those boards have enough computing power depends on your application.
Originally posted by davinci with karma: 2573 on 2013-10-25
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 15952,
"tags": "ros, arduino, odroid"
} |
Determining number of electrons transferred | Question: When determining the number of electrons transferred in a redox reaction is it the total in both half equations?
For example:
$$\ce{2I- + Zn^2+ -> I2 + Zn}$$
First we split it up into the two half-reactions and get the following
$$\ce{2I- -> I2 + 2e-}$$
$$\ce{Zn^2+ + 2e- -> Zn}$$
So would the number of electrons transferred in the redox reaction be two or four?
Answer:
When determining the number of electrons transferred in a redox reaction is it the total in both half equations?
In the total redox reaction that is properly balanced, the number of electrons transferred is equal to the stoichiometric coefficient of the $\ce{e-}$ species in either balanced half-reaction.
So would the number of electrons transferred in the redox reaction be two or four?
Your half-reactions and total redox reaction are properly balanced, so the number of electrons transferred in this case is two. | {
"domain": "chemistry.stackexchange",
"id": 3899,
"tags": "electrochemistry, redox, electrons"
} |
A simple derivation of the Centripetal Acceleration Formula? | Question: Could someone show me a simple and intuitive derivation of the Centripetal Acceleration Formula $a=v^2/r$, preferably one that does not involve calculus or advanced trigonometry?
Answer: Imagine an object steadily traversing a circle of radius $r$ centered on the origin. Its position can be represented by a vector of constant length that changes angle. The total distance covered in one cycle is $2\pi r$. This is also the accumulated amount by which the position has changed...
Now consider the velocity vector of this object: it can also be represented by a vector of constant length that steadily changes direction. This vector has magnitude $v$, so the accumulated change in velocity is $2 \pi v$.
The magnitude of acceleration is then $\frac{\text{change in velocity}}{\text{elapsed time}}$, which we can write as:
$$a = \frac{2 \pi v}{\left(\frac{2\pi r}{v} \right)} = \frac{v^2}{r} \,.$$
Q.E.D.
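The same result can also be checked numerically without any calculus machinery: sample a uniform circular path, difference it twice, and compare the mean acceleration magnitude with $v^2/r$. A sketch with arbitrarily chosen $r = 2$ and $v = 3$:

```python
import numpy as np

r, v = 2.0, 3.0
omega = v / r                                   # angular speed
t = np.linspace(0.0, 2*np.pi/omega, 100_001)    # one full revolution

x, y = r*np.cos(omega*t), r*np.sin(omega*t)     # uniform circular motion

vx, vy = np.gradient(x, t), np.gradient(y, t)   # numerical velocity
ax, ay = np.gradient(vx, t), np.gradient(vy, t) # numerical acceleration

# drop the endpoints, where one-sided differences are less accurate
a_mag = np.hypot(ax, ay)[2:-2].mean()
# a_mag comes out close to v**2 / r = 4.5
```

The agreement with $v^2/r$ holds for any choice of $r$ and $v$, which is a quick way to convince a skeptical student before showing the derivation above.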
Aside: that derivation is used in many algebra/trig-based textbooks. | {
"domain": "physics.stackexchange",
"id": 95116,
"tags": "homework-and-exercises, newtonian-mechanics, acceleration, rotational-kinematics"
} |
Odd number of equally spaced vectors | Question: When three vectors of equal magnitude are equally spaced (each pair making an angle of 120°) and placed tail to tail, they cancel out and have a net effect of zero.
Can this be generalized to any positive odd number of equally spaced vectors (excluding just one vector)?
If so, can it be proved?
Answer: With N vectors of equal magnitude, the number does not need to be odd: if the angle from one vector to the next is 360°/N, the vectors form a closed figure and sum to zero. (Consider a square.) To prove it, represent the vectors as the complex numbers $z^k$ for $k=0,\dots,N-1$ with $z=e^{2\pi i/N}$; by the geometric series, their sum is $(z^N-1)/(z-1)=0$ for every $N\ge 2$. | {
"domain": "physics.stackexchange",
"id": 74645,
"tags": "vectors"
} |
How to get the expectation value of the spin of a general spin triplet state? | Question: For two spin-1/2 electrons, the general spin triplet state is a linear combination of the three basis states: $\left.|\uparrow\uparrow\right>, \left.|\uparrow\downarrow\right>+\left.|\downarrow\uparrow\right>, \left.|\downarrow\downarrow\right>$, which are the simultaneous eigenstates of $S^z=S_1^z+S_2^z$ and $\bf{S}^2=(S_1+S_2)^2$.
When we denote these three states in matrix form respectively (as a direct product of states?): $\chi^{1}=\begin{pmatrix}1&0\\0&0\end{pmatrix}, \chi^{0}=\begin{pmatrix}0&1\\1&0\end{pmatrix}, \chi^{-1}=\begin{pmatrix}0&0\\0&1\end{pmatrix}$, the most general form of a spin triplet state can be written as a linear combination of them, explicitly:
$$
\chi= \sum_i \alpha^i\chi^i
$$
Question is, how can I get the expectation value of $\bf{S}$ for this general spin state using its matrix form? I don't know how to express the operator $\bf{S}$ in matrix form; it seems that $\bf{S}=\bf{S_1}\otimes1+1\otimes S_2$ is a $4\times 4$ matrix, and I don't know how to apply it to this $2\times 2$ state.
ps: I know the method using the operator form, which requires that $\bf S_1$ only operate on the first spin and $\bf S_2$ only operate on the second spin, etc... I just want to see how the matrix form of spin state can be used to do the calculation...
Answer: The expressions you wrote for the $\chi^j$ are not exactly what you want. You'd like to define a composite state in the basis
$$ \vert\uparrow\rangle = \begin{pmatrix}1\\0\end{pmatrix},\quad \vert\downarrow\rangle = \begin{pmatrix}0\\1\end{pmatrix} $$
which is
$$ \vert\uparrow\uparrow\rangle\equiv\vert\uparrow\rangle \otimes \vert\uparrow\rangle = \begin{pmatrix}1\\0\end{pmatrix} \otimes \begin{pmatrix}1\\0\end{pmatrix} = \begin{pmatrix}1\\0\\0\\0\end{pmatrix}$$ by the common definition of the Kronecker product.
As you noted correctly, by adopting this convention, operators like $S^1\otimes 1$ are represented by $4\times 4$ matrices. Again, their representation in any given basis is given by the Kronecker product, e.g.
$$ S^3\otimes 1 = \frac{1}{2}\begin{pmatrix} 1 &&& \\ & 1 && \\ && -1 & \\ &&& -1\end{pmatrix} $$
Instead of this more canonical choice you wrote e.g
$$ \chi^1 = \begin{pmatrix}1&0\\0&0\end{pmatrix}$$
which is also possible, but uncommon. One would rather identify this expression with $$ \vert\uparrow\rangle \otimes \langle\uparrow\vert \in \mathbb{C}^2\otimes (\mathbb{C}^2)^*$$
Since $\mathbb{C}^4 \simeq \mathbb{C}^{2\times 2} \simeq \mathbb{C}^2\otimes (\mathbb{C}^2)^* \simeq\mathrm{End}(\mathbb{C}^2)$, both choices are equivalent. Work out the isomorphism relating the two representations! From there it should not be hard to derive the action of the composite operators on $\chi^j$. Be guided by what you already know, namely that by definition
$$ (A\otimes B)\cdot (u\otimes v) = (Au)\otimes(Bv) $$
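As a numerical illustration of the $4\times 4$ convention (a sketch with $\hbar = 1$), one can build $S^z = S_1^z\otimes 1 + 1\otimes S_2^z$ with Kronecker products and evaluate it on $|\uparrow\uparrow\rangle$:

```python
import numpy as np

I2 = np.eye(2)
sz = 0.5 * np.array([[1.0,  0.0],
                     [0.0, -1.0]])   # single spin-1/2 S^z (hbar = 1)

# composite operator on the 4-dimensional product space
Sz = np.kron(sz, I2) + np.kron(I2, sz)

up = np.array([1.0, 0.0])
upup = np.kron(up, up)               # |up,up> represented as a 4-vector

expectation = upup @ Sz @ upup       # <S^z> for the m = +1 triplet state
```

Here the expectation comes out to 1, as it must for the $S^z = +1$ triplet component, and the same pattern extends to $S^x$, $S^y$, and arbitrary triplet superpositions.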
It then just comes down to choosing a basis for the tensor product space as either $\mathbb{C}^4$ or $\mathbb{C}^{2\times 2}$. | {
"domain": "physics.stackexchange",
"id": 27453,
"tags": "quantum-mechanics, homework-and-exercises, superconductivity"
} |
Work done by me and Kinetic friction | Question: What will be the value of work in the following cases?
I move a box on a rough floor in a straight line for a distance d from A to B, then back from B to A.
1. Is the work done by me = 0, since the displacement = 0? (Is that correct, or should it be +2Fd?)
2. Work done by kinetic friction = -2Fd. (This one is already provided in books.)
3. I believe the forces applied by me and by friction can be the same or different in magnitude, so the work done by me and by friction can be the same or different in magnitude. Or is it necessary that the works be equal in magnitude?
Answer: Work is defined as dot product of force vector applied and the displacement vector caused due to that force.
So for very small displacement $\vec{ds}$ caused due to some force $\vec{F}$,
the small amount of work done will be:
$$dW=\vec{F}.\vec{ds}$$
So total work done over a path (say A to B) will be:
$$W=\int_A^B \vec{F}.\vec{ds}$$
In your question, even though the net displacement is zero, you have done positive work on both trips, i.e., A to B and then B to A. This is because on each trip the displacement is in the same direction as the force you apply, so the dot product is positive, and so is the work done.
Note that if there was no friction then work done will be zero in both the trips and also overall. While going from A to B you first apply a force causing block to move in forward direction; here you are doing positive work and Kinetic energy of block is increasing (Work energy theorem). But you also have to stop at B and for stopping you will have to apply a force in opposite direction of the motion. Work done by this force should be negative but equal in magnitude with the previous mentioned force. This is because, when the block stops at B its kinetic energy is zero so the net work done must be zero during the trip. Some goes for trip B to A.
If we consider friction also then things will go a bit different. While considering friction it is believed that applied force is just enough to overcome maximum static friction ($\mu_s N$) and thus the block moves very very slowly (zero kinetic energy). In this case we don't need to apply another external force to stop the block since equilibrium is maintained at all points (due to very very slow movement). So work done by external force (applied by you) and work done by kinetic friction adds up to zero. Same goes for B to A trip.Thus overall work done is zero but work done by you is positive (2Fd) and work done by friction is same but with negative sign (-2Fd).
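A quick numerical sanity check of this bookkeeping, using hypothetical values for the friction coefficient, mass, and one-way distance:

```python
# Numeric check of the quasi-static round trip: the applied force just
# overcomes friction, so work done by me and by friction cancel overall.
mu_s = 0.4          # assumed coefficient of static friction
m, g = 2.0, 9.8     # mass (kg) and gravity (m/s^2), assumed values
d = 3.0             # one-way distance A -> B in metres, assumed

N = m * g
F = mu_s * N        # applied force barely overcoming friction

W_me_total = 2 * F * d         # positive work +F*d on each leg
W_friction_total = -2 * F * d  # kinetic friction opposes motion on both legs

assert W_me_total > 0
assert W_me_total + W_friction_total == 0  # block ends at rest where it started
```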
Note that in this case the block has essentially no velocity while moving (very slow movement); but if the applied force is larger than the kinetic friction, the block will gain velocity and you may have to apply another external force to stop it on time, i.e. at B. | {
"domain": "physics.stackexchange",
"id": 90514,
"tags": "newtonian-mechanics, forces, work, friction, free-body-diagram"
} |
What does "ACCEPTED" and "CANCELING" states do in the ros2 action state machine? | Question:
http://design.ros2.org/img/actions/goal_state_machine.png
From the figure above, when exactly do the user-defined functions (handle_goal, handle_cancel, and handle_accepted) passed to the create_server function get called?
What I understand from the document is that: once we return rclcpp_action::GoalResponse::ACCEPT_AND_EXECUTE from the handle_goal (passed to the create_server), then at some point, the handle_accepted function will get called and we can perform the task inside the handle_accepted function.
But for cancelling the action, I am a little bit confused reading the document. From the document, it is written that:
CANCELING - The client has requested that the goal be canceled and the action server has accepted the cancel request. This state is useful for any user-defined “clean up” that the action server may have to do.
But once we return rclcpp_action::CancelResponse::ACCEPT in the function handle_cancel (passed to the create_server), what happens, and what does the "CANCELING" state in the figure do? Where should we define the user-defined "clean up" function?
Originally posted by wkpst on ROS Answers with karma: 3 on 2021-03-01
Post score: 0
Original comments
Comment by kscottz on 2021-03-01:
Just a note, please DO NOT cross post ROS answers questions to ROS Discourse.
Comment by wkpst on 2021-03-01:
@kscottz Sorry, I posted on ROS Discourse first and I realized I should post here. Will remove my post on ROS Discourse.
Answer:
With respect to the diagram:
handle_goal is called when a goal is received (before ACCEPTED).
If handle_goal returns "accept", then the goal state is set to ACCEPTED. The ACCEPTED state tells us (and the action client) that the goal is queued for execution. It's possible that the action server chooses to defer execution (e.g. queue the goal for later), hence we don't always jump straight to EXECUTING. The "ACCEPT_AND_EXECUTE" return code is a convenient shortcut if we want to skip straight to EXECUTING.
handle_accepted is called after a goal enters the ACCEPTED state.
This is a convenient way to let the code know that a particular goal handle could be executed. It is up to the action server to explicitly transition the goal to the EXECUTING state.
CANCELING and CANCELED work in a similar way:
handle_cancel is called when a cancel request is received (before CANCELING).
If the cancel request is accepted, then the goal enters into the CANCELING state.
Note, however, this does not mean the goal has been completely canceled yet. There may be a time delay between entering CANCELING and transitioning to CANCELED. It is up to the action server to explicitly signal that the goal is canceled.
Calling canceled() on the goal handle is the signal to transition the state to CANCELED.
I.e. canceled() indicates that the action server is finished canceling the goal.
Perhaps the demo code makes this more concrete:
if (goal_handle->is_canceling()) {
result->sequence = sequence;
goal_handle->canceled(result);
RCLCPP_INFO(this->get_logger(), "Goal canceled");
return;
}
Above, the goal execution thread checks if a cancel request has been accepted (goal_handle->is_canceling()), and if so it immediately transitions the goal to CANCELED (goal_handle->canceled()). In practice, the code might do something more complex before calling goal_handle->canceled() (e.g. wait for a robot to put down an object it is grasping).
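To make the transition rules concrete, here is a minimal pure-Python simulation of the goal state machine (an approximation of the diagram for illustration only, not the actual rclcpp_action API):

```python
# Approximate transition table for the ros2 action goal state machine.
# ACCEPTED is a queueing state; CANCELING only ends once the server
# explicitly signals canceled().
VALID = {
    ("ACCEPTED", "execute"): "EXECUTING",
    ("ACCEPTED", "cancel_goal"): "CANCELING",
    ("EXECUTING", "cancel_goal"): "CANCELING",
    ("EXECUTING", "succeed"): "SUCCEEDED",
    ("EXECUTING", "abort"): "ABORTED",
    ("CANCELING", "canceled"): "CANCELED",
    ("CANCELING", "abort"): "ABORTED",
}

class GoalHandle:
    def __init__(self):
        self.state = "ACCEPTED"   # set once handle_goal returns "accept"

    def transition(self, event):
        key = (self.state, event)
        if key not in VALID:
            raise RuntimeError(f"invalid transition from {self.state} on {event}")
        self.state = VALID[key]

    def is_canceling(self):
        return self.state == "CANCELING"

g = GoalHandle()
g.transition("execute")       # server chooses to start executing
g.transition("cancel_goal")   # handle_cancel accepted the request
assert g.is_canceling()
g.transition("canceled")      # server finished clean-up, signals CANCELED
assert g.state == "CANCELED"

# The error quoted in the comments below corresponds to this situation:
done = GoalHandle()
done.state = "SUCCEEDED"
try:
    done.transition("cancel_goal")   # CANCEL_GOAL from SUCCEEDED is rejected
except RuntimeError:
    pass
else:
    raise AssertionError("expected invalid transition")
```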
Originally posted by jacobperron with karma: 1870 on 2021-03-01
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by wkpst on 2021-03-01:
@jacobperron Thanks for your detailed explanation. Aha, so it seems the design is that: the user (server side) is responsible for checking if goal_handle->is_canceling() and handle the clean-up task here. Once finished, call goal_handle->canceled() in order to transition the goal_handle to the terminal state CANCELED.
Comment by wkpst on 2021-03-02:
@jacobperron I'm just curious if I understand correctly that the handle_cancel is also a blocking call as well as handle_accepted right? Because sometimes if handle_cancel (which takes some time before returning rclcpp_action::CancelResponse::ACCEPT) returns after handle_accepted has finished, I do get this error: what(): goal_handle attempted invalid transition from state SUCCEEDED with event CANCEL_GOAL.
Comment by jacobperron on 2021-03-02:
It's an implementation detail, but I believe both handle_cancel and handle_accepted are blocking in both rclcpp and rclpy. From the error message, it sounds like the action server is marking the goal as done (called goal_handle->succeeded()) and then a cancel request is accepted. I don't think you should be seeing this error; it could be a bug in rclcpp_action. Consider opening an issue on GitHub with a SSCCE. | {
"domain": "robotics.stackexchange",
"id": 36152,
"tags": "ros, ros2, action"
} |
Particle in an infinite potential well | Question: In quantum mechanics we study the system of an infinite potential well, and we find the energy of the particle inside the well using Schrödinger's equation, which gives
$$E=\frac{n^2π^2\hbar^2}{2ma^2}.$$
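For scale, the energy formula $E_n = n^2\pi^2\hbar^2/(2ma^2)$ (note the well width enters squared) can be evaluated numerically; this sketch uses an electron in a 1 nm well as an illustrative example:

```python
import math

# Infinite-well levels E_n = n^2 * pi^2 * hbar^2 / (2 m a^2)
# for an electron confined to a 1 nm interval (illustrative values).
hbar = 1.054571817e-34   # J s
m_e = 9.1093837015e-31   # kg
a = 1e-9                 # well width, m

def E(n):
    return n**2 * math.pi**2 * hbar**2 / (2 * m_e * a**2)

E1_eV = E(1) / 1.602176634e-19
assert 0.3 < E1_eV < 0.4               # ground state ~0.38 eV
assert abs(E(2) / E(1) - 4) < 1e-12    # levels scale as n^2
```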
I was wondering where does the particle inside the well come from? Was it always there? Can there be particles in the region of infinite potential?
Edit: From the answer, suppose we have a particle outside the well which has energy greater than the outside potential. Now, according to the inside of the well both the outside potential and the outside particle have an infinite parameter. So can this particle enter the well?
Answer: I don't really like to call this problem the Particle in an Infinite Potential Well for precisely this reason. It invites questions like, "how does a particle behave in a region of infinite potential?" to which the answer is invariably that by infinite potential, we simply mean that the particle cannot access that region of space. My response would then be, "well why didn't you just say that in the first place?"
I prefer to call this system the Free Particle on an Interval. It's a system which consists of a particle which is not under the influence of any potential at all, but whose wavefunction lives in $L^2\big([0,a]\big)$ rather than $L^2(\mathbb R)$. This makes it clear that it doesn't make sense to talk about the particle being outside the well, and sidesteps any (reasonable) questions about what it means for the potential to be infinite everywhere except a small interval.
Now, there is a sense in which the name Particle in an Infinite Potential Well is a very good name. If you treat the perfectly reasonable Particle in a Finite Potential Well, you can find its bound-state energy eigenfunctions (of which there is always at least one). If you take the limit as the potential $V_0$ outside the box goes to infinity, the aforementioned eigenfunctions converge to the energy eigenfunctions of a particle restricted to an interval$^\dagger$. In this sense, "infinite potential well" can be interpreted as the system being a limiting case of a finite potential well as $V_0\rightarrow \infty$.
$^\dagger$As an obligatory side note, the Free Particle on an Interval is defined not only by its Hilbert space $L^2([0,a])$ and the form of its Hamiltonian $-\frac{\hbar^2}{2m}\frac{d^2}{dx^2}$, but also by the domain of its Hamiltonian, which is essentially the twice-differentiable functions $\psi$ such that $\psi(0)=\psi(a)=0$. The boundary conditions are extremely important, but are inserted by hand because we simply choose them to be so.
We could in principle choose the periodic boundary conditions $\psi(0)=\psi(a)$ without setting these values to zero. This would define the Free Particle on a Ring.
However, if we view the Free Particle on an Interval as a limiting case of the finite potential well, then we find that the eigenfunctions of the Hamiltonian are exponentially suppressed outside of the interval $[0,a]$, tending to zero as $V_0\rightarrow \infty$. Therefore, the boundary conditions $\psi(0)=\psi(a)=0$ are a natural choice from this point of view. | {
"domain": "physics.stackexchange",
"id": 72420,
"tags": "quantum-mechanics, wavefunction, potential, schroedinger-equation"
} |
Tried to change a if-else condition, but can it be better? | Question: I recently came across this code snippet, and I have tried to change it.
this.lblCheck.Visible = false;
this.lblBackup.Visible = false;
this.txtEmpNo.Visible = false;
this.CheckButton.Enabled = false;
if (matchedCode)
{
if (checkBackdatedLeave)
{
this.lblBackup.Visible = true;
this.txtEmpNo.Visible = true;
this.CheckButton.Enabled = true;
}
else
{
this.lblCheck.Visible = true;
if (startDate > todayDate)
{
this.lblBackup.Visible = true;
this.txtEmpNo.Visible = true;
this.CheckButton.Enabled = true;
}
}
}
into this:
this.lblCheck.Visible = false;
this.lblBackup.Visible = false;
this.txtEmpNo.Visible = false;
this.CheckButton.Enabled = false;
if (matchedCode)
{
if (checkBackdatedLeave || startDate > todayDate)
{
if (!checkBackdatedLeave) { this.lblCheck.Visible = true; }
this.lblBackup.Visible = true;
this.txtEmpNo.Visible = true;
this.CheckButton.Enabled = true;
}
}
However, I'm quite bothered with the line
if (!checkBackdatedLeave) { this.lblCheck.Visible = true; }, as it contradicts with condition on the previous line. Is there still any improvement on this code?
Answer: The answer of Ivo Beckers already provides a viable solution. The only thing I would change is to place the three same checks in a variable and use that variable, instead of checking again and again. And only for lblCheck, do another check.
For example:
var status = matchedCode && (checkBackdatedLeave || startDate > todayDate);
this.lblBackup.Visible = status;
this.txtEmpNo.Visible = status;
this.CheckButton.Enabled = status;
this.lblCheck.Visible = matchedCode && !checkBackdatedLeave; | {
"domain": "codereview.stackexchange",
"id": 13648,
"tags": "c#"
} |
micro black hole forces | Question: A black hole would radiate mass optimally for interstellar-travel applications in the range between $10^7$ and $10^8$ kilograms. Assuming a light-only radiation emission spectrum, with a parabolic reflector with efficiency $f$, this would create an acceleration
$$ a = \frac{f P}{mc}$$
$$ a = \frac{ f \hbar c^5 }{ 15360 \pi G^2 M^3}$$
$$ a \approx \frac{f \times 10^{24}}{(M/\mathrm{kg})^3}\ \mathrm{m\ s^{-2}} $$
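A quick check of the numerical prefactor in the last expression (a sketch assuming SI units and standard constants, with $M$ in kilograms and $f = 1$):

```python
import math

# Evaluate the prefactor in a = f * hbar * c^5 / (15360 * pi * G^2 * M^3).
hbar = 1.054571817e-34   # J s
c = 2.99792458e8         # m/s
G = 6.67430e-11          # m^3 kg^-1 s^-2

prefactor = hbar * c**5 / (15360 * math.pi * G**2)   # = a * M^3 for f = 1
assert 1e23 < prefactor < 1e25   # ~1.2e24, consistent with the quoted 10^24

# acceleration at the quoted mass range (f = 1)
a_low = prefactor / (1e8)**3     # M = 1e8 kg: ~1 m/s^2
a_high = prefactor / (1e7)**3    # M = 1e7 kg: ~1e3 m/s^2
assert a_low < a_high
```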
The problem is that the Schwarzschild radius at this mass is a few attometers, which creates a host of problems:
1) the rate at which it can feed from normal matter is too small compared to the rate BH mass is being radiated
2) any electric charge we throw in the BH will be quickly radiated by super radiance effects and Schwinger pair production, so it will stay neutral most of the time.
3) only super hard gamma rays have (to my limited knowledge) the short enough wavelength in order to scatter against such a tiny BH
By the 3 points above, it is unclear how to apply a back-force on the black hole so that a payload, comprising at least the parabolic reflector, can be accelerated with it.
are there any ideas out there about how to exert a force or moment on such a tiny black hole?
Answer: Gravity is the only force remaining to use for capture and co-acceleration. It turns out that there's a reasonable sweet spot in the design space. Mutual attraction between BH and ship balances the thrust on the ship via reflection of the radiation off the paraboloid, which is attached to the ship. A ship-BH separation measured in centimetres up to a good fraction of a metre is achievable in some cases, but this must be dynamically adapted to the m(t) evaporation of the BH. When the BH lies aft of the focus, the system self-corrects to some extent. Trouble brews when the BH wanders off focus to the ship side - then the ship has to devise a way to add temporary extra acceleration via auxiliary engines. This whole arrangement is a nontrivial control problem, of course.
I don't have a handle on how fast an attometer-scale radius BH loses its charge as a fraction of its lifetime. If this charge evaporation time is long enough, we can hope to send an electron beam in to keep the charge topped up. I have not seen any attempts at this calculation. Were that to be possible, BH capture and control becomes far more tractable.
With a single BH of ~1 Mtonne, we can send a manned ship to Alpha Centauri in about 19.5 years of ship time, including decelerating to a stop - a highly asymmetric operation for these BHs. But these past few days I have been playing with "staged" BHs, and, for two BHs of masses ~0.9 and ~0.4 Mt, this ship time (including braking to a stop, again) comes down to about 12.5 years. It is therefore to be expected that extrapolation of this BH staging technique using N BHs, so as to maintain about 1 gee throughout the trip, will be able to get us to the nearest star, and stop, within 3.5 years ship time, which is the theoretical minimum at a constant 1 gee. | {
"domain": "physics.stackexchange",
"id": 18650,
"tags": "hawking-radiation, interstellar-travel, black-holes"
} |
Gases produced by pyrolysis of cellulose | Question: I heated cotton in a sealed container (with a small hole) over a natural gas flame. Some gases and smoke were produced. What would they probably be? I can come up with some guesses based on the composition of cellulose: $\ce{CO2}$, $\ce{CH4}$ or possibly other hydrocarbons, $\ce{CO}$, $\ce{H2}$, $\ce{H2O}$, however I do not know which of those they are. Obviously, soot ($\ce{C}$) was also formed, due to the visible smoke particles.
Answer: During pyrolysis, organic compounds are thermally decomposed in the absence of oxygen. The pyrolysis products are classified into categories based on their physical state of existence: char (solid), bio-oil (liquid) and non-condensable gases (gas). The relative proportions of these three product fractions significantly vary depending upon the process conditions, as is shown in the table below.
$$ \small
\begin{array}{lcccccc}
\hline
\text{Pyrolysis Technology} & \text{Residence Time} & \text{Heating Rate} & \text{Temperature} & \text{Char} & \text{Bio-Oil} & \text{Gases} \\
\hline
\text{Conventional} & \text{5-30}\ \mathrm{min} & \text{<50} ^\circ \mathrm{C\ min^{-1}} & \text{400-600} ^\circ \mathrm{C} & \text{<35%} & \text{<30%} & \text{<40%}\\
\text{Fast Pyrolysis} & \text{<5}\ \mathrm{s} & \text{~1000} ^\circ \mathrm{C\ s^{-1}} & \text{400-600} ^\circ \mathrm{C} & \text{<25%} & \text{<75%} & \text{<20%}\\
\text{Flash Pyrolysis} & \text{<0.1}\ \mathrm{s} & \text{~1000} ^\circ \mathrm{C\ s^{-1}} & \text{650-900} ^\circ \mathrm{C} & \text{<20%} & \text{<20%} & \text{<70%}^{[1]}\\
\hline
\end{array}
$$
The exact compositions of the products of cellulose pyrolysis at different temperatures can be seen below.
$$ \small
\begin{array}{lcccc}
\hline
\text{Products} & \text{Peak Temp,}\ 500 ^\circ \mathrm{C} & \text{Holding Temp,}\ 400 ^\circ \mathrm{C} & \text{Peak Temp,}\ 750 ^\circ \mathrm{C} & \text{Peak Temp,}\ 1000 ^\circ \mathrm{C}\\
\hline
\ce{CO} & \text{0.99%} & \text{0.25%} & \text{15.82%} & \text{22.57%}\\
\ce{CO2} & \text{0.3%} & \text{1.45%} & \text{2.38%} & \text{3.36%}\\
\ce{H2O} & \text{3.55%} & \text{6.49%} & \text{8.72%} & \text{9.22%}\\
\ce{CH4} & \text{0%} & \text{0%} & \text{1.11%} & \text{2.62%}\\
\ce{C2H4} & \text{0%} & \text{0%} & \text{1.05%} & \text{2.18%}\\
\ce{C2H6} & \text{0%} & \text{0%} & \text{0.17%} & \text{0.28%}\\
\ce{C3H6} & \text{0%} & \text{0%} & \text{0.70%} & \text{0.80%}\\
\ce{H2} & \text{0%} & \text{0%} & \text{0.36%} & \text{1.18%}\\
\ce{CH3OH} & \text{0.25%} & \text{0.21%} & \text{1.03%} & \text{0.98%}\\
\ce{CH3CHO} & \text{0.01%} & \text{0.05%} & \text{1.58%} & \text{1.7%}\\
\text{tar} & \text{16.37%} & \text{83.35%} & \text{59.92%} & \text{49.12%}\\
\text{char} & \text{83.63%} & \text{6.17%} & \text{3.32%} & \text{3.91%}\\
\text{other} & \text{0.19%} & \text{0.16%} & \text{2.14%} & \text{1.78%}\\
\text{total} & \text{105.25%} & \text{98.36%} & \text{98.8%} & \text{99.86%}\\
\hline
\end{array}
$$
The holding time for each of these reactions was $30\ \mathrm{s}^{[2]}$. As shown in the table, $\ce{CO}$, $\ce{H2O}$, and $\ce{CO2}$ are the major gaseous products, with $\ce{H2}$ and hydrocarbons being produced in considerably smaller proportion.
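As a rough consistency check, the yields in the $1000\ ^\circ\mathrm{C}$ column can be totaled programmatically (values transcribed from the table above; the small shortfall against the quoted total is rounding):

```python
# Mass-balance check of the 1000 C column (percent yields from the table).
yields_1000C = {
    "CO": 22.57, "CO2": 3.36, "H2O": 9.22, "CH4": 2.62, "C2H4": 2.18,
    "C2H6": 0.28, "C3H6": 0.80, "H2": 1.18, "CH3OH": 0.98, "CH3CHO": 1.70,
    "tar": 49.12, "char": 3.91, "other": 1.78,
}
total = sum(yields_1000C.values())
assert abs(total - 99.86) < 0.5   # matches the quoted total up to rounding

gas_species = {"CO", "CO2", "H2O", "CH4", "C2H4", "C2H6", "C3H6", "H2"}
gases = sum(v for k, v in yields_1000C.items() if k in gas_species)
assert gases > 40   # CO, H2O, and CO2 dominate the gas fraction, as stated
```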
$^{[1]}$ Patwardhan, Pushkaraj Ramchandra, "Understanding the product distribution from biomass fast pyrolysis" (2010). Graduate Theses
and Dissertations. Paper 11767.
$^{[2]}$ Hajaligol, M. R.; Howard, J. B.; Longwell, J. P.; Peters, W. A. Product Compositions and Kinetics for Rapid Pyrolysis of Cellulose. Industrial & Engineering Chemistry Process Design and Development Ind. Eng. Chem. Proc. Des. Dev. 1982, 21, 457–465. | {
"domain": "chemistry.stackexchange",
"id": 5607,
"tags": "home-experiment, combustion, carbohydrates, pyrolysis"
} |
Speeding up Python program that converts DOCX to PDF in Windows | Question: This is meant to be a performance-centric question as this type of conversion is obviously very common. I'm wondering about the possibilities for making this process faster.
I have a program that creates several thousand QR codes from a list, embeds them in an MS Word docx template, and then converts the docx files to pdf. Problem is, what I've designed is very slow. When creating several thousand pdf files, it takes hours on a local machine.
What can I do to speed this program up? Is there a way to multithread it? (Total newb to that topic). Or, what about my program design is inherently flawed?
Repeatable program below, meant to be run in local Windows 10 directory:
import pyqrcode
import pandas as pd
from docx import Document
from docx.enum.text import WD_ALIGN_PARAGRAPH
import glob
import os
from docx2pdf import convert
def make_folder():
os.mkdir("codes")
os.mkdir("docs")
def create_qr_code():
for index, values in df.iterrows():
data = barcode = values["barcode"]
image = pyqrcode.create(data)
image.png("codes\\"+f"{barcode}.png", scale=3)
def embed_qr_code():
qr_images = glob.glob("codes\\"+"*.png")
for image in qr_images:
image_name = os.path.basename(image)
doc = Document()
doc.add_picture(image)
last_paragraph = doc.paragraphs[-1]
last_paragraph.alignment = WD_ALIGN_PARAGRAPH.CENTER
doc.save("docs\\"+f"{image_name}.docx")
convert("docs\\"+f"{image_name}.docx")
def clean_file_names():
paths = (os.path.join(root, filename)
for root, _, filenames in os.walk("docs\\")
for filename in filenames)
for path in paths:
newname = path.replace(".png", "")
if newname != path:
os.rename(path, newname)
data = {'barcode': ['teconec', 'tegovec', 'teconvec', 'wettrot', 'wetocen']}
df = pd.DataFrame(data)
make_folder()
create_qr_code()
embed_qr_code()
clean_file_names()
Thank you!
Answer: I recently learned how easy it is to incorporate multiprocessing and multithreading into a Python program, and would like to share it with you.
Python's built-in multiprocessing module offers a simple way to add multiprocessing to a program. However, since your program performs a lot of writes, I think it may be better to use the multiprocessing.dummy module instead. This module offers the same API as the multiprocessing module, but is used for multithreading instead of multiprocessing, and thus is better suited to programs that are IO intensive.
First, import the Pool class from the multiprocessing.dummy module:
from multiprocessing.dummy import Pool as ThreadPool
I aliased it as ThreadPool just for added clarity that we are using multithreading and not multiprocessing. Next, take a look at all the for loops in your code. To add multithreading, we will have to change those for loops to call a function on each iteration instead of performing a series of steps:
def create_qr_code(values):
data = barcode = values["barcode"]
image = pyqrcode.create(data)
image.png("codes\\"+f"{barcode}.png", scale=3)
def create_qr_codes():
for index, values in df.iterrows():
create_qr_code(values)
def embed_qr_code(image):
image_name = os.path.basename(image)
doc = Document()
doc.add_picture(image)
last_paragraph = doc.paragraphs[-1]
last_paragraph.alignment = WD_ALIGN_PARAGRAPH.CENTER
doc.save("docs\\"+f"{image_name}.docx")
convert("docs\\"+f"{image_name}.docx")
def embed_qr_codes():
qr_images = glob.glob("codes\\"+"*.png")
for image in qr_images:
embed_qr_code(image)
def clean_file_name(path):
newname = path.replace(".png", "")
if newname != path:
os.rename(path, newname)
def clean_file_names():
paths = (os.path.join(root, filename)
for root, _, filenames in os.walk("docs\\")
for filename in filenames)
for path in paths:
clean_file_name(path)
Now for the fun part. In each of the functions that use for loops, we will replace the for loops with usages of ThreadPools as context managers. We can then call each ThreadPool's map method to perform the actions using multithreading:
def create_qr_codes():
rows = (values for _, values in df.iterrows())
with ThreadPool() as pool:
pool.map(create_qr_code, rows)
def embed_qr_codes():
qr_images = glob.glob("codes\\"+"*.png")
with ThreadPool() as pool:
pool.map(embed_qr_code, qr_images)
def clean_file_names():
paths = (os.path.join(root, filename)
for root, _, filenames in os.walk("docs\\")
for filename in filenames)
with ThreadPool() as pool:
pool.map(clean_file_name, paths)
Note that when instantiating a Pool, you can pass a value to it to specify the number of processes (or in our case, threads) to use. According to the Python docs:
If processes is None then the number returned by os.cpu_count() is used.
Also, don't forget to change these two function calls...
create_qr_code()
embed_qr_code()
...to these:
create_qr_codes()
embed_qr_codes()
This should speed up your program significantly. If not, try using the multiprocessing module instead of multiprocessing.dummy.
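For intuition, here is a standalone sketch of the ThreadPool.map pattern on simulated IO-bound work, independent of the QR/docx pipeline above (time.sleep stands in for a blocking convert/write call):

```python
import time
from multiprocessing.dummy import Pool as ThreadPool

def slow_io_task(x):
    time.sleep(0.05)   # stand-in for a blocking IO call
    return x * x

items = list(range(8))

# threaded: the eight sleeps overlap
start = time.perf_counter()
with ThreadPool(8) as pool:
    results = pool.map(slow_io_task, items)
threaded = time.perf_counter() - start

# serial: the eight sleeps run back to back
start = time.perf_counter()
serial_results = [slow_io_task(x) for x in items]
serial = time.perf_counter() - start

assert results == serial_results == [x * x for x in range(8)]
assert threaded < serial   # overlapping IO beats sequential IO
```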
Finally, one extra tip. You may want to refactor this function:
def make_folder():
os.mkdir("codes")
os.mkdir("docs")
To use the os.makedirs function instead to avoid raising an error if the directories already exist:
def make_folder():
os.makedirs("codes", exist_ok=True)
os.makedirs("docs", exist_ok=True) | {
"domain": "codereview.stackexchange",
"id": 40891,
"tags": "python, performance, multithreading, pdf, ms-word"
} |
Determine the index where two lists diverge | Question: Background
A class provides an API to determine the index where two string lists diverge.
import java.util.LinkedList;
import static java.lang.System.out;
/**
* Provides the ability to determine the index whereat two lists begin
* to differ in content. Both this list and the list to comapre against
* must not contain null strings.
*/
public class DivergentList extends LinkedList<String> {
/**
* Answers the index at which the strings within this list differ from
* the strings in the given list.
*
* @param list The list to compare against.
*
* @return -1 if the lists have no common strings.
*/
public int diverges( DivergentList list ) {
int index = -1;
if( valid( list ) && valid( this ) ) {
while( equals( list, ++index ) );
}
return index;
}
/**
* Answers whether the element at the given index is the same in both
* lists. This is not null-safe.
*
* @param list The list to compare against this list.
* @return true The lists have the same string at the given index.
*/
private boolean equals( DivergentList list, int index ) {
return (index < size()) && (index < list.size()) &&
get( index ).equals( list.get( index ) );
}
/**
* Answers whether the given element path contains at least one
* string.
*
* @param list The list that must have at least one string.
* @return true The list has at least one element.
*/
private boolean valid( DivergentList list ) {
return list != null && list.size() > 0;
}
/**
* Test the functionality.
*/
public static void main( String args[] ) {
DivergentList list1 = new DivergentList();
list1.addLast( "name" );
list1.addLast( "first" );
list1.addLast( "middle" );
list1.addLast( "last" );
list1.addLast( "maiden" );
DivergentList list2 = new DivergentList();
list2.addLast( "name" );
list2.addLast( "middle" );
list2.addLast( "last" );
// Prints 1
out.println( list2.diverges( list1 ) );
list1.clear();
list1.addLast( "name" );
list1.addLast( "middle" );
list1.addLast( "last" );
list2.clear();
list2.addLast( "name" );
list2.addLast( "middle" );
list2.addLast( "last" );
list2.addLast( "maiden" );
list2.addLast( "honorific" );
// Prints 3
out.println( list2.diverges( list1 ) );
list1.clear();
list2.clear();
// Prints -1
out.println( list1.diverges( list2 ) );
list1.add( "name" );
list2.add( "address" );
// Prints 0
out.println( list1.diverges( list2 ) );
list2.addFirst( "name" );
// Prints 1
out.println( list1.diverges( list2 ) );
}
}
Questions
A few questions:
How can the code be simplified (e.g., use a different structure)?
How can the code be improved (e.g., use generics; change the name, etc.)?
How would you make the code null-safe (e.g., override all add methods)?
How would you optimize the code?
Answer: Some notes:
Use ArrayList. LinkedList is not a random-access list: LinkedList#get(index) is O(n), while ArrayList#get(index) is O(1)
The second check valid(this) is not required
Use Iterator to avoid (index < size()) && (index < list.size())
Not sure it is really necessary to define a new class. A simple static utility method can also do the job
Avoiding “!= null” statements in Java?
Example:
static int firstMismatch(List<?> original, List<?> other) {
int index = -1;
Iterator<?> it = other.iterator();
for(Object el : original){
++index;
if(!(it.hasNext() && el.equals(it.next()))){
return index;
}
}
// TODO: other.size() > original.size()
return it.hasNext() ? (index + 1) : -1;
}
Prints:
Original: [name, first, middle, last, maiden]
Other: [name, first, middle, last, maiden]
-1
---------
Original: [name, middle, last]
Other: [name, first, middle, last, maiden]
1
---------
Original: [name, middle, last, maiden, honorific]
Other: [name, middle, last]
3
---------
Original: []
Other: []
-1
---------
Original: [name]
Other: [address]
0
---------
Original: [name]
Other: [name, address]
1 // TODO: original has only 1 element
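For quick experimentation with these edge cases, the same iterator-based logic can be sketched in Python (a hypothetical port for illustration, not part of the Java code under review):

```python
from itertools import zip_longest

_SENTINEL = object()   # distinguishes "list exhausted" from real elements

def first_mismatch(original, other):
    # index of the first position where the lists differ (a length
    # difference counts as a mismatch), or -1 if they are equal
    for index, (a, b) in enumerate(
            zip_longest(original, other, fillvalue=_SENTINEL)):
        if a != b:
            return index
    return -1

# mirrors the printed cases above
assert first_mismatch(["name", "middle", "last"],
                      ["name", "first", "middle", "last", "maiden"]) == 1
assert first_mismatch(["name", "middle", "last", "maiden", "honorific"],
                      ["name", "middle", "last"]) == 3
assert first_mismatch([], []) == -1
assert first_mismatch(["name"], ["address"]) == 0
assert first_mismatch(["name"], ["name", "address"]) == 1
```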
For completeness, another approach is to determine the list with fewer items, then iterate against the larger of the two, and exit when no more matches are found:
public int diverges( DivergentList<E> list ) {
int index = -1;
// Determine the larger list for iterating.
boolean smaller = list.size() < this.size();
Iterator<E> iFew = (smaller ? list : this).iterator();
// Iterate over the larger list.
for( E e : (smaller ? this : list) ) {
// Terminate when no more items exist in the smaller list.
if( !(iFew.hasNext() && e.equals( iFew.next() )) ) {
break;
}
index++;
}
return index;
}
This introduces a new variable, uses a single return statement, eliminates a final return calculation, and ensures list1.diverges( list2 ) == list2.diverges( list1 ) is always true.
The revised code prints:
Original: [name, first, middle, last, maiden]
Other: [name, first, middle, last, maiden]
4
---------
Original: [name, middle, last]
Other: [name, first, middle, last, maiden]
0
---------
Original: [name, middle, last, maiden, honorific]
Other: [name, middle, last]
2
---------
Original: []
Other: []
-1
---------
Original: [name]
Other: [address]
-1
---------
Original: [name]
Other: [name, address]
0
In this case, the return values indicate the last common index, which allows an empty list to be differentiated from an equal list. | {
"domain": "codereview.stackexchange",
"id": 12770,
"tags": "java, performance, algorithm, linked-list"
} |
Finite quasiparticle lifetimes in Fermi Liquid Theory | Question: I am trying to clarify a conceptual issue about phenomenological Fermi liquid theory. My confusion can be explained using the following two sentences from Dupuis's many body theory notes, but the same sentiment is present in many other sources as well. The sentences are:
According to the adiabatic continuity assumption, as the interaction is slowly turned on we generate an (excited) eigenstate of the interacting system. However, because of the interactions the state under study is damped and acquires a finite lifetime.
There seems to be two competing concepts here, both of which seem central to FLT. On one hand,
Quasiparticles correspond to excited energy eigenstates of the interacting Fermi liquid. These eigenstates can be obtained by starting with a corresponding excited state of the free Fermi gas and adiabatically switching on the interactions. Landau's theory postulates adiabatic continuity, so that the interacting eigenstates stand in 1-1 correspondence with the free eigenstates and can therefore be labeled by the same quantum numbers.
On the other hand,
Whereas free particles do not interact, and an excited state of the free theory will persist indefinitely, quasiparticles of the interacting theory do interact with each other. While a quasiparticle will then in general decay, its lifetime will diverge as it approaches the Fermi surface.
These two notions of quasiparticles seem contradictory to me. If the quasiparticles are eigenstates of the interacting theory, then they should not decay. Conversely, if the quasiparticles do interact and decay, then how are they related to the free particle excitations, and how should I understand that the quasiparticles carry the same quantum numbers as the free particles?
EDIT: After talking to a friend, I think the answer lies in the fact that the adiabatic theorem does not hold for an eigenstate without a gap. If the system were gapped, and none of the energies crossed as the interaction were turned on, then eigenstates would necessarily evolve into eigenstates. But since the Fermi system is gapless, there's no reason that the adiabatically evolved eigenstates remain eigenstates. But it would be nice to have confirmation from someone more knowledgeable, and it's strange that this point is not discussed in any of the sources I've checked.
EDIT 2: Apologies for the multiple edits. After doing some research, I think that my previous edit was incorrect. As far as I can tell, the Gell-Mann and Low theorem guarantees that an infinitely slow adiabatic turning on of the interaction evolves eigenstates of the free theory into eigenstates of the interacting theory. The application to FLT seems immediate to me: if we start with an excited free particle state, and turn on the interactions infinitely slowly, we expect the state we obtain to be an eigenstate. But clearly this cannot be what we are actually doing in FLT, since the quasiparticles are not eigenstates. So how should I make sense of this?
Answer: I suggest thinking about Fermi liquid theory as two successive levels of approximation (which are unfortunately not clearly delineated in the sentences you quote):
In the limit of vanishing energy above the Fermi level, the eigenstates of the interacting theory have the same quantum numbers as free electrons, and the two are smoothly connected as the interactions are turned on.
For a small but nonvanishing energy above the Fermi level, this is no longer true. However, the first-order correction is to consider these adiabatically connected states as 'almost-eigenstates' which acquire a finite lifetime but are still nearly conserved (more precisely, they are conserved for long times compared to the energy of the excitation). In a spectral function picture, which is widely used and useful in this context, the quasiparticles are not Dirac delta function poles, as eigenstates with infinite lifetimes would be, but are broadened to sharp peaks with a finite width.
Also, I should point out that the idea of the adiabatic turn-on of the interactions is that it is slow enough to smoothly connect particles to quasiparticles, but not so slow that the quasiparticles can decay. So, it is not infinitely slow, and the Gell-Mann and Low theorem doesn't apply. You should remember that this is an argument, which is empirically successful in many systems, but it is definitely not a theorem and Fermi liquid theory does break down in various cases.
This answer does not come from a single source that I can point to. However, my recommended general exposition of the theory, if you can get it, is Pines and Nozieres' The theory of quantum liquids (Vol. 1), which I find is still unbeatable despite being nearly six decades old.
Edit: paraphrasing slightly, a question in the comments is: why work with quasiparticles, which are approximate eigenstates, instead of the exact excited state eigenstates? If we postulate a Fermi liquid-like theory in which the excitations are instead exact eigenstates, which might seem like the natural thing to do, what are we missing?
Well, the first thing to notice here is that this is equivalent to taking the approximation that the quasiparticle lifetimes are infinite. In other words, this corresponds to taking only approximation 1 above, and not going on to approximation 2, which will result in a restricted version of Fermi liquid theory that is only valid for energies very close to the Fermi level. Of course, for some purposes this might be enough. However, for experimental probes such as conductivity the finite quasiparticle lifetime plays an essential role, so extending the theory to account for these makes it much more powerful.
One could imagine that historically the theory was developed in this way- first considering the limit of quasiparticles that are completely stable, then generalizing to unstable but long-lived- but I have no idea whether this is actually how it happened. | {
"domain": "physics.stackexchange",
"id": 77146,
"tags": "condensed-matter, solid-state-physics, many-body, fermi-liquids"
} |
How can opposite charges neutralize so fast? | Question: I saw that the two end wires of a capacitor, when touched to each other, neutralize it quite fast (the flash after their contact lasted only a second). How can charges neutralize the capacitor so fast?
Answer: Actually, electrons move very slowly in a typical conductor like copper: on the order of microns per second. (See Wikipedia "drift velocity".) However, an enormous number of electrons are moving, so it takes hardly any time for enough charge to be transferred to discharge the capacitor completely. A good analogy is to think of the wires as being pipes with huge diameter but filled with slow-moving water. It would only take a moment to move many gallons of water. The actual amount of charge stored in a capacitor is tiny compared to the number of electrons in a cubic millimeter of copper. | {
"domain": "physics.stackexchange",
"id": 68581,
"tags": "electrostatics, electricity, electric-circuits, capacitance"
} |
Calculating basic systematic error | Question: My lab partner and I are in disagreement about what the systematic error of our temperature measurement is.
The digital temperature gauge measured to one decimal place (i.e. 20.3°C). We took a number of readings.
I think that the systematic error is $\pm \, 0.05°C$
My partner thinks that it is $\pm \frac{0.05}{\text{mean}}$
Which one is correct?
Answer: That would be a much simpler question to answer back in the days when temperature reading was an analogue process - reading the mercury level on a thermometer for example. For a digital gauge it is a bit more complicated.
Let's assume that the actual temperature is steady and all the variation is due to
1) various forms of noise in the measurement (for example - noise due to current variation in a resistive thermometer circuit),
2) noise in digital extraction (for example - resolution of the A/D converter, 1/f noise due to acquisition time),
3) noise in data output (for example - round-off method in digital display).
Further assume that steps 1) and 2) result in a mean temperature measurement with some variance that is converted to the digital output display in stage 3). If the variance is fairly large compared with the 0.1 degree resolution, then the temperature is approximately given by the average reading +/- the calculated standard deviation. But if the variance is comparable in magnitude to the digital resolution then the oscillation of output temperature display doesn't describe the average temperature.
For example, if the 'internal' standard deviation was ~0.01 degrees and the digital output rounds to the nearest 0.1 degree then as the temperature shifted from 20.00 degrees to 20.10 degrees, the 'average' temperature calculated from the displayed temperatures will initially lag behind actual temperature and then jump upwards as measured temperature rises above 20.05 degrees. See simulated data below.
In general, the rounding process can result in a systematic error as much as half the displayed resolution (+/-0.05 degrees) as you suggest, but noise in the input signal could actually average that away (as your partner suggests). But unless you know the internals of the measurement system it is safer to accept the larger value as a possible systematic error. | {
"domain": "physics.stackexchange",
"id": 76522,
"tags": "statistics"
} |
How can kinetic theory be used to verify Dalton's law of partial pressures? | Question: I have been told Dalton's law of partial pressures can be proved from kinetic theory. This is my reasoning for why this might be:
Since in kinetic theory there are no forces of attraction between the particles, the only "force" on the container is the pressure exerted by the different gas species hitting on it, multiplied by the collision area, which will be different for the different species of gas. We thus consider the total pressure to be given by the sum of these individual pressures from the different gas species. This would also make sense since both kinetic theory and Dalton's law work on the assumption that the gas is an ideal one.
Is this too simplistic of an explanation? Many thanks.
Answer:
Is this too simplistic of an explanation?
It is basically correct, since Dalton's law of partial pressures is based on ideal gas behavior of the individual gases as well as the mixture of gases. And ideal gas behavior is based on the kinetic theory of gases.
The partial pressure of the $i$ th gas in a mixture of gases in a volume $V$ is the pressure that that gas would alone exert if all the other gases were removed from the volume, and is given by the ideal gas equation
$$P_{i}=\frac{m_{i}R_{i}T}{V}$$
where $m_i$ and $R_i$ is the mass and gas constant for gas $i$.
Then $$P_{tot}=\sum P_i$$
Also
$$P_{i}=x_{i}P_{tot}$$
Where $x_i$ is the mole fraction of the $i$ th gas.
Hope this helps. | {
"domain": "physics.stackexchange",
"id": 78102,
"tags": "thermodynamics, pressure, ideal-gas, kinetic-theory"
} |
How much photon energy has already been destroyed? | Question: We get taught in school that energy can neither be created, nor destroyed. The law of energy conservation is confirmed by many processes. It is an incredibly accurate assumption for every-day life.
When we study physics at university level, we get taught the "fine-print", the exceptions to the rule. For example in cosmology we learn that the universe as a whole is expanding. The expansion affects the wavelength of photons: the wavelength gets stretched (while c remains constant), which means photons lose energy. To nothing. The energy is just gone. Experimental evidence is the cosmic microwave background (CMB), which is red-shifted to a black-body spectrum of 2.7K.
If we estimate the number of CMB photons originally produced and their energy (1eV?), subtract photon energy as observed today, then how much photon energy has been lost during the universe's expansion (in Joules)? Is it a significant amount, say, more than 1% of total hadron mass?
Answer: To answer this question, a few assumptions must be made:
The Universe may or may not be infinite. It would therefore make sense to answer your question considering the observable Universe only. But the observable Universe increases in size, not only due to expansion (which doesn't add or remove photons), but also because light from more and more distant regions reaches us. In comoving coordinates — i.e. the coordinates that expand with space — the observable Universe has increased in linear size by a factor of 50 since then, so the comoving volume, and hence the total number of photons, has increased by a factor $\gt10^{5}$.
In other words, new photons enter our observable Universe all the time, and they do so at a faster rate than the individual photons lose energy.
Moreover, most of the photons were not created when the CMB was emitted — they had been around since the end of inflation, scattering around on free electrons, until they were "released" at the decoupling/recombination era. I'll address this at the end of my answer.
I know this is not really what you had in mind, so to be specific, I'll compare the total amount of energy in the observable Universe today, to the same space when the CMB was emitted.
Each $\mathrm{cm}^3$ of space holds roughly $n_\mathrm{ph}$ = 411 CMB photons. There are also photons coming from various astrophysical processes (mostly star formation and dust emission), but those are smaller in number by more than two orders of magnitude, and smaller in energy by at least one order of magnitude, probably more (Hill et al 2018), so let's ignore those.
With a CMB temperature of $T_0 = 2.725\,\mathrm{K}$ (Planck Collaboration et al. 2016), the average energy of a CMB photon is $E_\mathrm{ph,now} = k_\mathrm{B}T_0 = 2.3\times10^{-4}\,\mathrm{eV}$. Since they're seen redshifted by $z\simeq1100$, each photon has lost energy by the same factor. With the radius of the observable Universe $R = 46.3\,\mathrm{Glyr}$, the total amount of energy lost is
$$
\begin{array}{rcl}
\Delta E & = & E_\mathrm{tot,then} - E_\mathrm{tot,now} \\
& = & (E_\mathrm{ph,then} - E_\mathrm{ph,now}) \,\times\, N_\mathrm{ph,tot}\\
& = & \left((1+z)E_\mathrm{ph,now} - E_\mathrm{ph,now}\right) \,\times\, n_\mathrm{ph} \,\times\, \frac{4\pi}{3}R^3\\
& \simeq & (1+z)E_\mathrm{ph,now} \,\times\, n_\mathrm{ph} \,\times\, \frac{4\pi}{3}R^3\\
& \simeq & 4\times10^{88}\,\mathrm{eV}\\
& \simeq & 6\times10^{76}\,\mathrm{erg}\\
& \simeq & 6\times10^{69}\,\mathrm{J},
\end{array}
$$
where the first approximation acknowledges the fact that the energy density today can be neglected compared to the energy density when the photons were emitted.
If you want to know how much energy these photons have lost since they were initially created, at the end of inflation, you just use the redshift corresponding to this epoch — roughly $z\sim10^{26}$. In that case you get some $10^{93}\,\mathrm{J}$. | {
"domain": "physics.stackexchange",
"id": 62063,
"tags": "cosmology, energy-conservation, universe, space-expansion, cosmic-microwave-background"
} |
Ordered sequence with logarithmic insert and remove | Question: Problem: we have a sequence of numeric values, e.g. [102, 25, 77, 17, 2, 13]. We need to implement 3 operations, each with at most logarithmic time complexity.
insert(i, v) - inserts value v at index i.
remove(i) - removes value at index i.
peek(i) - returns value at index i.
What I've tried: AVL tree, where node keys are elements' indices. It theoretically guarantees log time for all three operations. The problem is when we use e.g. insert(0, v), we need to update all of the other indices (we want to keep the proper order), resulting in O(n) complexity. Sample sequence would look like this in my way of thinking:
Answer: I think you have already a good idea with AVL trees.
To improve the data structure, you should keep track of the size of each subtree (and discard indices, they are not really useful here).
In details, a node $x$ of the tree should have the following information:
value;
size;
height;
pointer to the left child;
pointer to the right child.
There is no key in there, since those trees are not binary search trees, but the values are ordered by increasing indices when considering the in-order (like the keys in a BST). However, since the indices may change during the process, as you have stated, there is no use in storing those.
Peeking is the easiest algorithm:
define a.peek(i):
s ← a.left.size
if s = i then return a.value
if s > i then return a.left.peek(i)
return a.right.peek(i - s - 1)
To insert an element at position $i$, we search for the position the same way as in peeking and continue in the left or right child (possibly carrying along the value the new node replaces). All sizes and heights of ancestors must be updated, and some rotations may be necessary to keep the balance of each node.
define a.insert(i, v):
if a is empty then
a.value ← v
a.size ← 1
a.height ← 0
a.left ← empty
a.right ← empty
return
s ← a.left.size
if s = i then
if a.left.height < a.right.height then
a.left.insert(i, v)
else
a.right.insert(0, a.value)
a.value ← v
a.size ← a.size + 1
a.height ← 1 + max(a.left.height, a.right.height)
else if s > i then
a.left.insert(i, v)
a.size ← a.size + 1
a.height ← 1 + max(a.left.height, a.right.height)
if a.left.height > a.right.height + 1 then
do a right rotation
else
//similar insertion in the right child
When deleting a node, it needs to be replaced. The easiest way to do this is to use a node from the child of greater height: either the rightmost node in the left child or the leftmost node in the right child. Implementation left to you.
All those operations have time complexity linear in the height of the tree, which is logarithmic in the size, given that the balance is kept.
"domain": "cs.stackexchange",
"id": 20786,
"tags": "algorithms, time-complexity, data-structures, search-algorithms, avl-trees"
} |
Velocity of bead when a light inextensible frictionless cord becomes taut. (Pathfinder methods of impulse build up 3rd question) | Question: The complete question is
"A thin light inextensible frictionless chord of length $l$ wearing a small bead is tied between two nails that are in the same level a distance $(0.5)l$ apart. Initially the bead is held close to a nail and released. Find speed of the bead immediately after the chord becomes taut."
The Answer given is $\sqrt{\frac{3gl}{20}}$.
I tried to find $h$ by applying the condition $F_x=0$ and the Pythagorean relation between $x$, $y$ and $h$ ($x$ = part of the cord on the side from which the bead was released, $y$ = horizontal distance from the initial nail, and $h$ = the height it descended).
Answer: Take the velocity to be perpendicular to the net tension at the bottom point. Use Pythagoras to find the angles. Try to solve further.
"domain": "physics.stackexchange",
"id": 87488,
"tags": "homework-and-exercises, newtonian-mechanics, string"
} |
PCA on genotype matrix with multiple alleles | Question: Consider an m x n genotype matrix of m haploid samples and n SNPs where each value is an allele encoded by an integer (0,1,2,3).
Is there a good/standard way to encode the alleles in order to perform a PCA on this matrix to investigate population structure ?
I have seen this with matrices of diploid samples where each SNP is encoded as 0/1/2 to represent the number of non-reference allele, but is there a way to consider more than 2 alleles ?
e.g. difference between alleles 1 and 3 should be equal to difference between alleles 1 and 2
Answer: You can make a 'dummy variable' for each allele. That means that you don't have info per SNP, but for SNPs with more alleles, the allele is present (1) or not (0). | {
"domain": "bioinformatics.stackexchange",
"id": 503,
"tags": "snp, pca"
} |
Is a calculus or ML approach to varying learning rate as a function of loss and epoch been investigated? | Question: Many have examined the idea of modifying learning rate at discrete times during the training of an artificial network using conventional back propagation. The goals of such work have been a balance of the goals of artificial network training in general.
Minimal convergence time given a specific set of computing resources
Maximal accuracy in convergence with regard to the training acceptance criteria
Maximal reliability in achieving acceptable test results after training is complete
The development of a surface involving these three measurements would require multiple training experiments, but may provide a relationship that itself could be approximated either by curve fitting or by a distinct deep artificial network using the experimental results as examples.
Epoch index
Learning rate hyper-parameter value
Observed rate of convergence
The goal of such work would be to develop, via manual application of analytic-geometry experience or via deep network training, the following function, where
$\alpha$ is the ideal learning rate for any given epoch indexed by $i$,
$\epsilon$ is the loss function result, and
$\Psi$ is a function the result of which approximates the ideal learning rate for as large an array of learning scenarios possible within a clearly defined domain.
$\alpha_i = \Psi (\epsilon, i)$
Arriving at $\Psi$ in closed form (as a formula) would be of general academic and industrial value.
Has this been done?
Answer:
Has this been done?
Difficult to prove a negative, but I suspect although plenty of research has been done into finding ideal learning rate values (the need for learning rate at all is an annoyance), it has not been done to the level of suggesting a global function worth approximating.
The problem is that learning rate tuning, like other hyperparameter tuning, is highly dependent on the problem at hand, plus the other hyperparameter values currently in use, such as the size of layers, which optimiser is in use, what regularisation is being used, and the activation functions.
Although you may be hoping for $\Psi(\epsilon, i | P)$ to exist where P is the problem domain, it likely does not except as a mean value over all $\Psi(\epsilon, i | D, H)$ for the problem domain, where D is the dataset and H all the other hyperparameters.
It is likely that such a function exists, of ideal learning rate for best expected convergence per epoch. However, it would be incredibly expensive to sample it with enough detail to make approximating it useful. Coupled with limited applicability (not domain-specific, but linked to data and other hyperparameters), a search through all possible learning rate trajectories looks like it would give poor return on investment.
Instead, the usual pragmatic approaches are:
Include learning rate in hyperparameter searches, such as grid search, random search, genetic algorithms and other global optimisers.
Decay learning rate using one of a few approaches that have been successfully guessed and experiments have shown working. These have typically been validated by plotting learning curves of loss functions or other metrics, and the same tracking is usually required in new experiments to check that the approach is still beneficial.
Some optimisers use a dynamic learning rate parameter, which is similar to your idea but based on reacting to measurements during learning as opposed to changes based on an ideal function. They have a starting learning rate, then adjust it based on heuristics derived from measuring learning progress. These heuristics can be based on per-epoch measurements, such as whether a validation set result is improving or not. One such approach is to increase learning rate whilst results per epoch are improving, and reduce learning rate if results are not improving, or have got worse.
I have tried this last option, on a Kaggle competition, and it worked to some extent for me, but did not really improve results overall - I think it is one of many promising ideas in ML that can be made to work, but that has not stayed as a "must have", unlike say dropout, or using CNNs for images.
Some optimisers store a multiplier per layer or even per weight - RMSProp and Adam for example track rate of change of each parameter, and adjust the rate for each weight during updates. These can work very well in large networks, where the issue is not so much needing a specific learning rate at any time, but that a single learning rate is too crude to cover the large range of gradients and differences in gradients across the index space of all the connections. With RMSProp and Adam, the need to pick specific learning rates or explore them is much reduced, and often a library's default is fine. | {
"domain": "ai.stackexchange",
"id": 731,
"tags": "deep-learning, optimization, topology, convergence, hyper-parameters"
} |
Descriptive quantity (one, two, many) from number | Question: I just wrote a simple method which returns a string based on the number it accepts as a parameter. I used if/elif/else here, but I think for this simple scenario I wrote long code. I want to know if it is possible to shorten this code in any way, except by using a lambda. Also, I'm interested in the performance of if/elif/else. Is there any other solution which can make this method faster?
def oneTwoMany(n):
if n == 1:
return 'one'
elif n == 2:
return 'two'
else:
return 'many'
Answer: You can instead use a dictionary to define your n values:
nums = {1: 'one', 2: 'two'}
Then, when you want to use this, you can use .get(). .get() has a default argument which is returned when a dict doesn't have a key you've requested; here we use 'many' to be returned.
def oneTwoMany(n):
nums = {1: 'one', 2: 'two'}
return nums.get(n, 'many')
If you really want to be concise you can stop defining nums:
def oneTwoMany(n):
return {1: 'one', 2: 'two'}.get(n, 'many')
As an aside to the downvoters of the question: it may be a simple question, but I don't see how it breaks any rules.
EDIT: incorporating some other answers which (quite rightly) suggest catching certain invalid inputs.
def oneTwoMany(n):
if type(n) is not int:
raise TypeError('Invalid input, value must be integer')
elif n < 1:
raise ValueError('Invalid input, value is below 1')
else:
return {1: 'one', 2: 'two'}.get(n, 'many') | {
"domain": "codereview.stackexchange",
"id": 35190,
"tags": "python, performance, python-3.x"
} |
What does detection stability mean? | Question: I am reading the paper "A new approach to intrusion detection using Artificial Neural Networks and fuzzy clustering" by Gang Wang, Jinxing Hao, Jian Ma and Lihua Huang (Expert Systems with Applications, 37(9):6225–6232, 2010, available at Science Direct). I don't understand the term "detection stability". What does that mean?
The context is that existing intrusion detection systems are claimed to have poor "detection stability" when it comes to rare attacks.
Answer: Detection stability is about consistency of detection; in this case, rare events are classified independently of small changes. One might think of a stable detector as having a smooth decision function, and an unstable one as chaos-like and heavily discontinuous.
Detection stability in the case of rare events (rare attacks) is about consistency and fluctuation. The lack of stability means that similar events that differ only marginally will not be detected, because the techniques are very prone to fluctuations - that is a very bad property: a malicious query with e.g. $99\%$ classification confidence, when changed slightly, might drop below the detection threshold. Another outcome is that slight changes to the rest of the population will influence rare queries.
"domain": "cs.stackexchange",
"id": 7017,
"tags": "terminology"
} |
Difference between convolving before/after discretizing LTI systems | Question: Suppose I have transfer functions for two continuous causal linear-time invariant (LTI) systems:
$F_1(s)$ and $F_2(s)$.
Let $D\left\{\cdot\right\}$ denote the function that maps a transfer function from the continuous-time domain to the discrete-time domain (as another transfer function via the $z$-transform) using ZOH discretization.
What is the relationship between: $$D\left\{F_1(s) F_2(s)\right\} \quad \longleftrightarrow \quad D\left\{F_1(s)\right\}D\left\{F_2(s)\right\}$$
In other words, what is the difference between discretizing the convolution of two transfer functions versus convolving their discretization?
Some brief experimentation has shown they are "almost" equal, but why?
Can we be precise about this difference?
As an aside,
Could someone point me to a reference on other properties of discretization, e.g., linearity $D\left\{a_1F_1(s) + a_2F_2(s)\right\} = a_1D\left\{F_1(s)\right\} + a_2D\left\{F_2(s)\right\}$?
Answer: Assuming you refer to the ZOH-discretization as shown in this figure from the mathworks site
the relationship between the continuous-time signals and the discrete-time signals can be derived as follows. The signal $u(t)$ is given by
$$u(t)=\sum_ku[k]g(t-kT)\tag{1}$$
where $g(t)$ is a rectangular impulse response (constant in the interval $t\in [0,T]$, zero anywhere else) representing the ZOH, and $T$ is the sampling period. Let $h(t)$ be the impulse response corresponding to the transfer function $H(s)$. Furthermore, let $f(t)$ be the convolution of $g(t)$ and $h(t)$:
$$f(t)=(g\star h)(t)\tag{2}$$
Then the continuous-time output signal $y(t)$ is given by
$$\begin{align}y(t)&=(u\star h)(t)\\
&=\int_{-\infty}^{\infty}u(\tau)h(t-\tau)d\tau\\&=\sum_ku[k]\int_{-\infty}^{\infty}g(\tau-kT)h(t-\tau)d\tau\\&=\sum_ku[k]f(t-kT)\tag{3}\end{align}$$
where I've used Equations $(1)$ and $(2)$. From $(3)$ the discrete-time output signal is easily obtained as
$$y[n]=y(nT)=\sum_ku[k]f((n-k)T)=\sum_ku[k]f[n-k]\tag{4}$$
with $f[n]=f(nT)$. So $y[n]$ is simply the discrete-time convolution of the input signal $u[n]$ and the total impulse response $f[n]$.
From this result it immediately follows that linearity must be satisfied, simply because convolution is a linear operation:
$$f(t)=\alpha_1(g\star h_1)(t)+\alpha_2(g\star h_2)(t)=(g\star (\alpha_1h_1+\alpha_2h_2))(t)\tag{5}$$
However, the (ZOH-)discretization of the concatenation of two transfer functions is not equivalent to the concatenation of the two discretized transfer functions. Referring to the figure above, the first case is equivalent to replacing $H(s)$ by the concatenation of $H_1(s)$ and $H_2(s)$. The other case involves concatenating two complete systems as shown in the figure, the first with $H(s)=H_1(s)$, the second with $H(s)=H_2(s)$. The difference between the two cases is that in the first case you only have one ZOH, whereas in the second one you get two ZOHs. The equivalent discrete-time impulse responses are
$$f[n]=(g\star h_1\star h_2)(nT)\tag{6}$$
and
$$f[n]=(g\star h_1)\star (g\star h_2)(nT)\tag{7}$$
which are generally not identical. The first case in Eq. $(6)$ is a ZOH discretization of the total transfer function $H(s)=H_1(s)H_2(s)$, whereas the second case in Eq. $(7)$ corresponds to a first-order hold (FOH) discretization of the total transfer function. | {
"domain": "dsp.stackexchange",
"id": 4117,
"tags": "discrete-signals, convolution, linear-systems"
} |
Why did high A+T content create problems for the Plasmodium falciparum genome project? | Question: The main paper for the Plasmodium falciparum genome project (Gardner et al., 2002) repeatedly mentioned that the unusually high A+T content (~80%) of the genome caused problems. For example they imply that it prevented them using a clone-by-clone approach:
Also, high-quality large insert libraries of (A + T)-rich P. falciparum DNA have never been constructed in Escherichia coli, which ruled out a clone-by-clone sequencing strategy.
And that it made gene annotation difficult:
The origin of many candidate organelle-derived genes could not be conclusively determined, in part due to the problems inherent in analysing genes of very high (A + T) content.
Question:
What is the biological significance of high A+T content, and why would it cause problems in genome sequencing?
Ref:
Gardner, M.J., Hall, N., Fung, E., White, O., Berriman, M., Hyman, R.W., Carlton, J.M., Pain, A., Nelson, K.E., Bowman, S., Paulsen, I.T., James, K., Eisen, J.A., Rutherford, K., et al. (2002) Genome sequence of the human malaria parasite Plasmodium falciparum. Nature. 419 (6906), 498–511.
Answer: The sequencing technologies developed in the last 20 years each have an optimal operating range of average A+T/G+C content. Both highly AT-rich and GC-rich regions are complicated for the different sequencing technologies to process. Each technology has a different range of usage but, to name one, Illumina technology prefers sequences in the middle range. If you try to sequence an AT-rich genome with the standard Illumina protocol, you will sequence an incomplete genome, whose fragments are not a perfect reflection of the original complete genome. Other technologies claim to be completely unbiased with respect to nucleotide content. Pacific Biosciences is one of them, and people seem to agree with that claim after having analyzed the data produced by their machines. Oxford Nanopore Technologies claims that they have almost no biases, but as of today (2012-06-13), there is no confirmation of that by external analyses.
Beyond sequencing problems, the software used to assemble and annotate the sequences may also be prone to errors in AT-rich and GC-rich regions. But many of those problems stem from the incompleteness of the sequencing. | {
"domain": "biology.stackexchange",
"id": 5149,
"tags": "dna, bioinformatics, dna-sequencing, genomics"
} |
Conformal infinities | Question: What is the exact definition of the conformal infinities in a conformal compactification of a spacetime (not necessarily asymptotically flat)? I want to say that it's something of the type (for a time-oriented spacetime) :
The future and past timelike infinity $i^+$ and $i^-$ corresponds to the image of the set of future and past inextendible timelike curves of infinite half-length at $\pm \infty$, on the boundary $\mathscr I$ .
with similar definitions for null and spacelike infinity. Is that a valid definition for it? I'm having trouble finding an actual definition for it independent of any specific spacetime.
Answer: This topic is called "boundary constructions." There are multiple ways of defining a boundary, including Geroch's g boundary, Schmidt's b boundary, and the Geroch-Kronheimer-Penrose boundary. There have been seemingly pointless religious wars over which is the right one. None really seems to have the complete list of correct properties, including coordinate-independence.
Note that we don't just want to define the boundary as a set of idealized points, we also want to define a topology on it. E.g., a problem with the b boundary is the fact that the topology comes out non-Hausdorff for both FRW and Schwarzschild.
A review article on this topic is Parrado and Senovilla, Causal Structures and Causal Boundaries, http://arxiv.org/abs/gr-qc/0501069 . | {
"domain": "physics.stackexchange",
"id": 41588,
"tags": "general-relativity, topology, causality"
} |
Why doesn't the binary classification log loss formula make it explicit that natural log is being used? | Question: I'm completing a DataCamp course where we are introduced to the log loss formula for binary classification:
Two scenarios are given to show how the formula is used. One with p=0.1 and one with p=0.5. The answers the instructor displayed were 2.3 and .69, respectively. However, using a calculator, the answers for log(0.1) and log(0.5) are -1 and -0.30, respectively. I later tried using natural log instead and got the same answers as the instructor, except negative. Specifically, the calculator returned -2.3 for ln(0.1) and -0.69 for ln(0.5).
Is it common in math for log to be implicitly understood as "ln" or "log e" without stating it explicitly in the formula? Also, is there something about the binary classification log loss formula that suggests the absolute value of the result should be taken?
Answer: This comes down to the change-of-base formula. For any two valid logarithm bases $a$ and $b$, the following equation is true.
$$
\log_a(x) = \frac{\log_b(x)}{\log_b(a)}.
$$
What this means is that log losses computed in different bases are proportional. So if you wanted to change to using $\log_{10}$, you would simply end up multiplying by a constant factor, and model selection would be the same.
Explicitly,
$$
logloss(N=1) = -\,\frac{y \log_{10}(p) + (1 - y) \log_{10}(1-p)}{\log_{10}(e)}
$$
Or, equivalently,
$$
logloss(N=1) \cdot \log_{10}(e) = -\left( y \log_{10}(p) + (1 - y) \log_{10}(1-p) \right)
$$
(The leading minus sign is also why the instructor's values came out positive while the raw logarithms are negative.)
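A quick numerical check of this proportionality (a sketch, not from the original answer; it uses the standard single-sample binary log loss with its leading minus sign):

```python
import math

def log_loss(y, p, log=math.log):
    # single-sample binary log loss; `log` defaults to the natural log
    return -(y * log(p) + (1 - y) * log(1 - p))

# the instructor's numbers come out once the natural log (and the
# leading minus sign) are used
print(log_loss(1, 0.1))   # 2.302585...
print(log_loss(1, 0.5))   # 0.693147...

# switching to base 10 only rescales the loss by the constant ln(10),
# so comparisons between models are unchanged
ratio = log_loss(1, 0.1) / log_loss(1, 0.1, log=math.log10)
print(ratio)              # ln(10) = 2.302585...
```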
In other words: The base of the logarithm doesn't matter because everything ends up being proportional. | {
"domain": "datascience.stackexchange",
"id": 5774,
"tags": "classification, machine-learning-model, loss-function"
} |
What does one mean by heuristic statistical physics arguments? | Question: I have heard that there are heuristic arguments in statistical physics that yield results in probability theory for which rigorous proofs are either unknown or very difficult to arrive at. What is a simple toy example of such a phenomenon?
It would be good if the answer assumed little background in statistical physics and could explain what these mysterious heuristics are and how they can be informally justified. Also, perhaps someone can indicate the broad picture of how much of these heuristics can be rigorously justified and how the program of Lawler, Schramm and Werner fits into this.
Answer: The second paragraph of RJK's response deserves more detail.
Let $\phi$ be a formula in conjunctive normal form, with m clauses, n variables, and at most k variables per clause. Suppose we want to determine if $\phi$ has a satisfying assignment. Formula $\phi$ is an instance of the k-SAT decision problem.
When there are few clauses (so m is quite small compared to n), then it is almost always possible to find a solution. A simple algorithm will find a solution in roughly linear time in the size of the formula.
When there are many clauses (so m is quite large compared to n), then it is almost always the case that there is no solution. This can be shown by a counting argument. However, during search it is almost always possible to prune large parts of the search space by means of consistency techniques, because the many clauses interact so extensively. Establishing unsatisfiability can then usually be done efficiently.
V. Chvátal and B. Reed. Mick gets some (the odds are on his side), FOCS 1992. doi: 10.1109/SFCS.1992.267789
In 1986 Fu and Anderson conjectured a relationship between optimisation problems and statistical physics, based on spin glass systems. Although they used sentences like
Intuitively, the system must be sufficiently large, but it is difficult to be more specific.
they do actually give specific predictions.
Y Fu and P W Anderson. Application of statistical mechanics to NP-complete problems in combinatorial optimisation, J. Phys. A. 19 1605, 1986. doi: 10.1088/0305-4470/19/9/033
Based on arguments from statistical physics, Zecchina and collaborators conjectured that k-SAT should become hard when $\alpha = m/n$ is near a critical value. The precise critical value depends on k, but is in the region of 3.5 to 4.5 for 3-SAT.
Rémi Monasson, Riccardo Zecchina, Scott Kirkpatrick, Bart Selman, Lidror Troyansky. Determining computational complexity from characteristic `phase transitions', Nature 400 133–137, 1999. (doi: 10.1038/22055 , free version)
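The conjectured transition is visible even at toy scale. The sketch below (hypothetical parameters, brute force only; real studies use far larger instances and specialized solvers) estimates the fraction of satisfiable random 3-SAT formulas at two clause densities $\alpha = m/n$, one well below and one well above the threshold:

```python
import itertools
import random

def random_3sat(n, m, rng):
    # m random 3-clauses over n variables; a literal is (variable, sign)
    return [[(v, rng.random() < 0.5) for v in rng.sample(range(n), 3)]
            for _ in range(m)]

def satisfiable(n, clauses):
    # brute force over all 2^n assignments -- only viable for tiny n
    for bits in itertools.product([False, True], repeat=n):
        if all(any(bits[v] == sign for v, sign in clause)
               for clause in clauses):
            return True
    return False

def sat_fraction(n, alpha, trials=25, seed=0):
    # fraction of random formulas with clause density alpha = m/n
    # that turn out to be satisfiable
    rng = random.Random(seed)
    m = round(alpha * n)
    return sum(satisfiable(n, random_3sat(n, m, rng))
               for _ in range(trials)) / trials

# far below the threshold: almost always SAT; far above: almost never
print(sat_fraction(10, 1.0), sat_fraction(10, 8.0))
```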
Friedgut provided a rigorous proof of these heuristic arguments. For every fixed value of k, there are two thresholds $\alpha_1 < \alpha_2$. For $\alpha$ below $\alpha_1$, there is a satisfying assignment with high probability. For a value of $\alpha$ above $\alpha_2$, formula $\phi$ is unsatisfiable with high probability.
Ehud Friedgut (with an appendix by Jean Bourgain), Sharp thresholds of graph properties, and the $k$-sat problem, J. Amer. Math. Soc. 12 1017–1054, 1999. (PDF)
Dimitris Achlioptas worked on many of the remaining issues, and showed that the above argument holds for constraint satisfaction problems, too. These are allowed to use more than just two values for each variable. One key paper shows rigorously why the Survey Propagation algorithm works so well to solve random k-SAT instances.
A. Braunstein, M. Mézard, R. Zecchina, Survey propagation: An algorithm for satisfiability, Random Structures & Algorithms 27 201–226, 2005. doi: 10.1002/rsa.20057
D. Achlioptas and F. Ricci-Tersenghi, On the Solution-Space Geometry of Random Constraint Satisfaction Problems,
STOC 2006, 130–139. (preprint) | {
"domain": "cstheory.stackexchange",
"id": 2766,
"tags": "sat, pr.probability, physics, statistical-physics"
} |
Cooling of compressed air | Question: If we compress air from $p_1=1bar$ with starting temperature $T_1=20°C$, the pipes behind the compressor become usually "very" hot. However, if we start at $T_1=20°C$ and $p_1=10bar$ and we decompress the air to $p_2 = 1bar$, the temperature drop is rather "small". This can be seen be using the Joules-Thomson coefficient $\mu\approx 0.2 K/bar$ of nitrogen (from Wikipedia). What is the reason for this huge asymmetry? Does a technical method exists, which circumvent this and allows to achieve low temperatures?
Background information: Unfortunately, I cannot use a vortex tube.
Answer: The answer is yes, and here's how.
You first compress the gas. You will notice that it heats up. You hold it in its compressed state and let the heat leak away into the surroundings, so now you have compressed gas at room temperature. You then bleed it through a small valve down to atmospheric pressure and it gets nice and cold.
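A back-of-the-envelope sketch (assuming nitrogen behaves as an ideal diatomic gas during a reversible adiabatic compression, and using the quoted $\mu \approx 0.2$ K/bar for the throttling step) shows just how lopsided the two steps are:

```python
# Rough numbers for the asymmetry in the question, assuming an ideal
# diatomic gas (gamma = 7/5) for the compression and the quoted
# Joule-Thomson coefficient for the throttling step.
gamma = 7 / 5
T1 = 293.15            # 20 degrees C, in kelvin
p1, p2 = 1.0, 10.0     # bar

# adiabatic compression 1 bar -> 10 bar: large temperature rise
T2 = T1 * (p2 / p1) ** ((gamma - 1) / gamma)
print(f"compression: {T1:.0f} K -> {T2:.0f} K")   # roughly +270 K

# throttling 10 bar -> 1 bar: small Joule-Thomson drop
mu = 0.2               # K/bar
dT = mu * (p1 - p2)
print(f"throttling:  dT = {dT:.1f} K")            # about -1.8 K
```

The compression heats the gas by hundreds of kelvin, while the throttle only cools it by a couple of kelvin, which is why dumping the compression heat to the surroundings before expanding is the key step.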
The cooling effect can be magnified by chilling the hot gas with water from a tap, lake, or stream if that water is colder than the surrounding air. | {
"domain": "physics.stackexchange",
"id": 65593,
"tags": "thermodynamics, pressure, temperature, cooling"
} |
Color of chromate and permanganate | Question: I've heard quite a few times that the chromate and permanganate have a $d^3s$ configuration. Also, their colors arise due to a rapid switching of electrons between the oxygen and metal atoms.
I don't really understand the 'rapid switching' part: it's obvious why it can give color, but I fail to see why there is a need for such switching. What's so special about $\mathrm{Cr}$ and $\mathrm{Mn}$? (I also do not know what the switching exactly is.)
An explanation of $d^3s$ would be appreciated, though not necessary.
Answer: By "rapid switching" they technically mean Ligand-to-Metal Charge Transfer (LMCT). A more modern framework is Ligand Field Theory. I would have to teach a class to fully explain it in those terms, but I'll try to explain it in terms of this hybridization you brought up.
A chemical bond implies a higher probability of finding electrons between two bound nuclei. Atomic orbitals describe the electron density for individual nuclei. A bond between two nuclei necessitates some physical overlap of the relevant atomic orbitals.
It is in this way that we say that the linear combination of atomic orbitals, say two consisting of a $2s$ and $2p$ orbital, results in a molecular orbital that describes the molecule. We would write the wave function for the complex as:
$$\Psi = C_1 \psi(2s) \pm C_2 \psi(2p)$$
The amount of "mixing" is just a matter of adjusting the coefficients, $C_n$. And so we might say that the bond in this made-up molecule is $sp$ hybridized. $sp^3$ hybridized would mean the following kind of wave function:
$$\Psi = C_1 \psi(2s) \pm C_2 \psi(2p) \pm C_3 \psi(2p) \pm C_4 \psi(2p)$$
Similarly, $sd^3$ could mean a wave function of the following nature:
$$\Psi = C_1 \psi( (n+1) s) \pm C_2 \psi(n d) \pm C_3 \psi(nd) \pm C_4 \psi(nd) $$ where $n=3$ in the case of Mn or Cr.
If one performs the proper molecular orbital calculations on the valence $3d$, $4s$, and $4p$ orbitals of manganese or chromium and the $2s$ and $2p$ orbitals of oxygen in tetrahedral symmetry, then you can draw the molecular orbital energy diagram for your complex once you match your eigenvalues to the correct trace of the appropriate transformation matrix... and you could hear a bunch more jargon on it, or I can explain the main point of our models: why we think $\ce{MnO4^-}$ is purple.
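As a quick numerical aside (a sketch; the constants are CODATA values), visible-light wavelengths map onto transition energies of roughly two to three electronvolts via $E = hc/\lambda$, which sets the energy scale for the ligand-metal interactions discussed here:

```python
# Converting visible absorption wavelengths into transition energies,
# E = h*c/lambda (SI constants, result in eV).
h = 6.62607015e-34     # Planck constant, J s
c = 2.99792458e8       # speed of light, m/s
eV = 1.602176634e-19   # J per electronvolt

energies = {lam: h * c / (lam * 1e-9) / eV for lam in (500, 600)}
for lam, E in energies.items():
    print(f"{lam} nm -> {E:.2f} eV")   # about 2.5 and 2.1 eV
```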
The bonds in permanganate specifically (I have never actually done the calculations on dichromate, although they should be similar in principle) are between $\ce{Mn^{7+}}$ and four $\ce{O^{2-}}$ in the geometry shown here. The electronegativity of oxygen clearly dictates where most of the electron density sits and, as a result, it sits on the ligands (oxygen) instead of the metal. An electronically excited state can be reached by absorbing light in the 500-600 nm range because of the relative weakness of the interactions between ligand and metal. If the interactions were stronger, it would take more energy to promote electrons into a higher-energy state and the absorption would be shifted into the UV. A color wheel tells you that a complex absorbing in the 500-600 nm (green-yellow) range appears approximately violet, the complementary color, which is what we see for this particular complex. Anyway, this excited state means some electrons temporarily move from the ligand to the metal, resulting in an LMCT band. | {
"domain": "chemistry.stackexchange",
"id": 3,
"tags": "hybridization, color"
} |
Continuous spectrum of hydrogen atom | Question: I wonder if there is a nice treatment of the continuous spectrum of hydrogen atom in the physics literature--showing how the spectrum decomposition looks and how to derive it.
Answer: The term to look for is Coulomb wave. These wavefunctions are well explained in the corresponding Wikipedia article.
Depending on your mathematical background, you should be ready for a bit of a formula jolt, as these wavefunctions rely very intimately on the confluent hypergeometric function. If you want the short of it, then I can tell you that the solutions $\psi_\mathbf k^{(\pm)}(\mathbf r)$ to the continuum hydrogenic Schrödinger equation
$$
\left(-\frac12\nabla^2+\frac Zr\right)\psi_\mathbf k^{(\pm)}(\mathbf r)=\frac12 k^2\psi_\mathbf k^{(\pm)}(\mathbf r)
$$
with asymptotic behaviour
$$
\psi_\mathbf k^{(\pm)}(\mathbf r)\approx \frac{1}{(2\pi)^{3/2}}e^{i\mathbf k·\mathbf r}
\quad\text{as }\mathbf k·\mathbf r\to\mp \infty
$$
are
$$
\psi_\mathbf k^{(\pm)}(\mathbf r)
=
\frac{1}{(2\pi)^{3/2}}
\Gamma(1\pm iZ/k)e^{-\pi Z/2k}
e^{i\mathbf k·\mathbf r}
{}_1F_1(\mp iZ/k;1;\pm i kr-i\mathbf k·\mathbf r)
.$$
You can also ask for solutions with definite angular momentum (which do exist for any $m$ and $l\geq|m|$); those are detailed in the partial wave expansion section of the Wikipedia article.
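If you want to play with these expressions numerically, here is a sketch in plain Python (a hand-rolled Lanczos complex gamma and a truncated ${}_1F_1$ series, adequate for the moderate arguments used here; not a substitute for a proper special-function library). It checks the plane-wave limit $Z \to 0$ and the closed-form density at the origin implied by the formula as printed, $|\psi_\mathbf k(0)|^2 (2\pi)^3 = 2\pi\eta/(e^{2\pi\eta}-1)$ with $\eta = Z/k$:

```python
import cmath
import math

# Lanczos approximation for the gamma function at complex argument
_G = 7
_C = [0.99999999999980993, 676.5203681218851, -1259.1392167224028,
      771.32342877765313, -176.61502916214059, 12.507343278686905,
      -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7]

def cgamma(z):
    z = complex(z)
    if z.real < 0.5:                       # reflection formula
        return math.pi / (cmath.sin(math.pi * z) * cgamma(1 - z))
    z -= 1
    x = _C[0] + sum(_C[i] / (z + i) for i in range(1, _G + 2))
    t = z + _G + 0.5
    return math.sqrt(2 * math.pi) * t ** (z + 0.5) * cmath.exp(-t) * x

def hyp1f1(a, b, z, terms=120):
    # truncated confluent hypergeometric series 1F1(a; b; z)
    term = total = 1 + 0j
    for n in range(terms):
        term *= (a + n) * z / ((b + n) * (n + 1))
        total += term
    return total

def psi_plus(Z, kvec, rvec):
    # psi_k^{(+)}(r), exactly as written in the formula above
    k = math.sqrt(sum(c * c for c in kvec))
    r = math.sqrt(sum(c * c for c in rvec))
    kdotr = sum(kc * rc for kc, rc in zip(kvec, rvec))
    eta = Z / k
    return ((2 * math.pi) ** -1.5
            * cgamma(1 + 1j * eta) * math.exp(-math.pi * eta / 2)
            * cmath.exp(1j * kdotr)
            * hyp1f1(-1j * eta, 1, 1j * (k * r - kdotr)))
```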
If you want textbooks which develop these solutions, look at
L. D. Faddeev and O. A. Yakubovskii, Lectures on quantum mechanics for mathematics students. American Mathematical Society, 2009;
and
L. A. Takhtajan, Quantum Mechanics for Mathematicians, American Mathematical Society, 2008.
Hat-tip to Anatoly Kochubei for providing these references in an answer to my MathOverflow question Is zero a hydrogen eigenvalue? | {
"domain": "physics.stackexchange",
"id": 23884,
"tags": "quantum-mechanics, mathematical-physics, operators, resource-recommendations, hydrogen"
} |
turtlebot_calibration isn't working as expected | Question:
My robot isn't a turtlebot, but it's very close / uses many turtlebot parts.
I'm trying to run the turtlebot calibration routine. On the first step, it rotates 360 degrees. On the second, it only makes it halfway, 180 degrees.
Alternatively, I can set odom_angular_scale_correction so that it goes 2 full circles on the first step and the other speeds only go one.
Any idea what would cause the lower speeds vs. the higher speeds to be completely off?
[edit] I replaced the Create and the power/gyro board without any changes, so it's probably software [/edit]
Originally posted by Murph on ROS Answers with karma: 1033 on 2012-04-01
Post score: 1
Original comments
Comment by Murph on 2012-04-02:
Dropping the update_rate of turtlebot_node to 10 instead of 30 made it perform a lot better in the calibration, but the rotation still seems way off when I actually drive the robot around..
Answer:
It is supposed to turn 720 degrees the first time, then 360 degrees the following times.
Originally posted by tfoote with karma: 58457 on 2012-04-02
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 8824,
"tags": "ros, turtlebot-calibration"
} |
MVC Improvement - The View Module - 1 | Question: /****************************************************************
VIEW MODULE
view, view_database, view_message
view_arche_1 (external), view_arche_2 (external) - these are
primarily html files
****************************************************************/
/*view*/
class view extends database
{
}
/*view_db*/
class view_db extends view
{
function __construct($type)
{
parent::__construct();
$this->invoke($type);
}
private function invoke($type)
{
switch ($type)
{
case "bookmarks":
$this->html_bookmarks();
break;
case "tweets":
$this->html_tweets();
break;
default:
throw new Exception('Invalid View Type');
break;
}
}
private function html_bookmarks()
{
$email = $_SESSION['email'];
$query_return = database::query("SELECT * FROM bo WHERE email='$email' ORDER BY name ASC");
$html_string='';
while ($ass_array = mysqli_fetch_assoc($query_return))
{
$fav=$this->favicon($ass_array['url']);
$html_string = $html_string . "<img name='bo_im' class='c' src='$fav' onerror='i_bm_err(this)'><a target='_blank' name='a1' class='b' href = $ass_array[url]>$ass_array[name]</a>";
}
echo $html_string;
}
private function html_tweets()
{
$query_return = database::query("SELECT * FROM tw ORDER BY time DESC LIMIT 7");
$time = time();
$html_string='';
while ($a = mysqli_fetch_assoc($query_return))
{
$html_string = $html_string . "<div class='Bb2b'><img class='a' src='pictures/$a[email].jpg' alt=''/><a class='a' href='javascript:void(0)'>$a[fname] posted <script type='text/javascript'>document.write(v0($time,$a[time]))</script></a><br/><p class='c'>$a[message]</p></div>";
}
echo $html_string;
}
private function favicon($url)
{
$pieces = parse_url($url);
$domain = isset($pieces['host']) ? $pieces['host'] : '';
if(preg_match('/(?P<domain>[a-z0-9][a-z0-9\-]{1,63}\.[a-z\.]{2,6})$/i', $domain, $regs))
{
return $pieces['scheme'] . '://www.' . $regs['domain'] . '/favicon.ico';
}
return false;
}
}
class view_message extends view
{
function __construct($type)
{
$this->invoke($type);
}
private function invoke($type)
{
$this->message($type);
}
private function message($type)
{
$type_message = array(
'empty' => '<si_f>Please complete all fields.',
'name'=> '<su_f>Only letters or dashes for the name field.',
'email' => '<si_f>Please enter a valid email.',
'taken' => '<si_f>Sorry that email is taken.',
'pass' => '<si_f>Please enter a valid password, 6-40 characters.',
'validate' => '<si_f>Please contact <a class="d" href="mailto:support@archemarks.com">support</a> to reset your password.');
echo $type_message[$type];
}
}
Answer: Some random observations:
What does your base class database do? I hope not the stuff it sounds like. Your aim is to separate storage and view logic, not to add view logic to your database.
Having both static and non-static functions in a class should make you suspicious (as this is an indicator that the class has more than one objective or is inconsistently written).
Some functions echo, some return and some call other functions. Decide on one way ;).
invoke for parameter-to-function resolving? Have a look at the much cooler magic method __call.
Some suggestions:
It is probably a good idea to move the single view operations into their own classes. This becomes more important when you add even more operations.
Unify the way your class is accessed and the return values.
Views don't extend a database.
Easy to implement way:
Let every view operation be its own class.
Define a common interface for all view operations (e.g. ViewOperation with one required function run()).
Create one main class View that is responsible to resolve view operations to subclasses.
Example implementation:
interface ViewOperation
{
    public function run(array $params = array());
}
class ControllAdd implements ViewOperation
{
    private $_types = array('pass' => '<xx_p>', 'fail' => '<xx_f>');
    public function run(array $params = array())
    {
        // note: array_key_exists() takes the key first, then the array
        if (!empty($params) && array_key_exists($params[0], $this->_types))
        {
            return $this->_types[$params[0]];
        }
    }
}
class View
{
    public function __call($name, $arguments)
    {
        // More security required here of course
        require_once './ViewOperations/' . $name . '.php';
        $instance = new $name;
        return $instance->run($arguments);
    }
} | {
"domain": "codereview.stackexchange",
"id": 733,
"tags": "php"
} |
Does particle's speed affects double slit result? | Question: Just curious did anyone try to do double slit using particles traveling at the slowest speed possible, imagine the particle trajectory subject to Earth's gravity and the screen is on the floor. This may sound preposterous even by my standard but I like to know do speed matters or not, thanks.
Answer: Not exactly what you seek but close: the diffraction of $C_{60}$ molecules by a grating [ANVA+99]. The speed was about 220 m/s. An interference pattern was observed.
The bottom picture shows the profile of the beam without the grating and the top one shows the outcome with the grating. The solid line is a fit using Kirchhoff diffraction theory.
So now, a bit of theory! The relevant parameter is the de Broglie wavelength of the particles, which is $\lambda=h/p$ where $h$ is the Planck constant and $p$ is the momentum. The width of the slits in the grating has to be approximately equal to $\lambda$ for diffraction to happen. For slow-moving particles such as those in this experiment, the Newtonian expression is enough: $p=mv$ where $m$ is the mass of the particle. So yes, the speed $v$ matters, but the theory accounts for it.
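Plugging in the numbers for this experiment (a quick sketch; the beam in the paper had a velocity spread, so this is only the central value):

```python
# De Broglie wavelength of a C60 molecule at the ~220 m/s beam speed
# quoted above: lambda = h / (m * v).
h = 6.62607015e-34      # Planck constant, J s
amu = 1.66053907e-27    # atomic mass unit, kg
m = 60 * 12.011 * amu   # C60 mass, using the average carbon mass
v = 220.0               # m/s

lam = h / (m * v)
print(f"lambda = {lam * 1e12:.1f} pm")   # a few picometers
```

A wavelength of a few picometers is why the experiment needed such a fine grating to see interference at all.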
[ANVA+99] Markus Arndt, Olaf Nairz, Julian Vos-Andreae, Claudia Keller, Gerbrand van der Zouw, and Anton Zeilinger. Wave-particle duality of c60 molecules. Nature, 401(6754):680–682, 10 1999. | {
"domain": "physics.stackexchange",
"id": 43113,
"tags": "double-slit-experiment"
} |
Best way to get nearest non-zero value from list | Question: I have a list of values and I need to, given any arbitrary starting index, find the closest value which is non zero, if the value at the starting index is zero...
Here is what I have:
def getNearestNonZero(start_index):
mylist = [4,2,6,7,3,0,0,9,4,2,5,8,1,7]
val = mylist[start_index]
if val == 0:
loop_length = 0
after = mylist[start_index+1:]
before = mylist[:start_index]
before = before[::-1]
print(before, after)
if len(before) >= len(after):
loop_length = len(before)
else:
loop_length = len(after)
for i in range(loop_length):
if i < len(before):
before_val = before[i]
if i < len(after):
after_val = after[i]
if before_val > 0:
return before_val
if after_val > 0:
return after_val
return val
result = getNearestNonZero(6)
print(result)
result = getNearestNonZero(5)
print(result)
[0, 3, 7, 6, 2, 4] [9, 4, 2, 5, 8, 1, 7]
9
[3, 7, 6, 2, 4] [0, 9, 4, 2, 5, 8, 1, 7]
3
What I do, is I first check to see if the value at the start_index is > 0. If it is, great, return it. If however, the value is zero, we need to find the closest non-zero, with a preference for before, rather than after...
To do this, I split mylist into two separate lists, before and after. If my start index is 6, before will now look like: [4,2,6,7,3,0] and after will look like: [9,4,2,5,8,1,7].
Since I need the closest value to the start_index, I reverse my before list: before = before[::-1]
I then get the length of the longest of the two (before and after).
I then loop and check the value at each index of the two lists. The first one to have a value > 0 is returned and my work is done.
However, this feels very clunky and as if it can be done in a cleaner way.
Does anyone have any recommendations? What is the faster/cleaner/pythonic way for finding the nearest non-zero value in a list, given a starting index?
Answer: I really like your idea of splitting the list -- it's very functional! Let's look at optimizing that approach, though.
def nearest_non_zero(lst, idx):
# note that I parameterized the list. Should make it easier to test.
# though I'll use [4,2,6,7,3,0,0,9,4,2,5,8,1,7] as in your code for
# all examples throughout
if lst[idx] > 0:
return lst[idx]
# I find it easier to track early-exit routes through code than
# to rule everything out. Let's short-circuit if possible!
before, after = reversed(lst[:idx]), lst[idx+1:]
# using `reversed` here lets us avoid creating a new list, which
# is otherwise necessary if we did `(lst[:idx])[::-1]`. Imagine
# using that on a list with length of ten million. Yikes!
So far I've really only cleaned up a little bit of code, nothing big, but here is where I'll make a decent departure from your original code. Your code does this:
if len(before) >= len(after):
loop_length = len(before)
else:
loop_length = len(after)
This is unnecessary, and it forces you to write in the guards inside your for loop to make sure you're not indexing a value outside of the list length. That's too much work. There's a couple better ways, the primary one being itertools.zip_longest.
The builtin zip pulls together two (or more!) lists into one list of tuples. For instance
>>> a = [1, 2, 3]
>>> b = ['a', 'b', 'c']
>>> list(zip(a, b))
[(1, 'a'), (2, 'b'), (3, 'c')]
>>> list(zip(b, a))
[('a', 1), ('b', 2), ('c', 3)]
However if you have lists of uneven size, it will only go so far as the shortest list
>>> a = [1, 2]
>>> b = ['a', 'b', 'c']
>>> list(zip(a, b))
[(1, 'a'), (2, 'b')] # where's 'c'?!?!
That's not what we want here. We want to zip together every value from both the before and after lists and compare them. That's what itertools.zip_longest does. But of course it needs another value: what should I use to pair with a value that doesn't exist? That's the parameter fillvalue, which by default is None
>>> a = [1, 2]
>>> b = ['a', 'b', 'c']
>>> list(itertools.zip_longest(a, b))
[(1, 'a'), (2, 'b'), (None, 'c')] # there's 'c', phew!
>>> a = [1, 2]
>>> b = ['a', 'b', 'c']
>>> list(itertools.zip_longest(b, a, fillvalue="foobar"))
[('a', 1), ('b', 2), ('c', 'foobar')]
That's exactly what we want! In this case we want our fillvalue to explicitly be a failing case (since if the number doesn't really exist we obviously don't want to use it), and our failing case is zero.
After that we compare to zero our "before" value, then our "after" value (to implement that preference for before rather than after), and return either if it's appropriate to do so.
import itertools
for b_val, a_val in itertools.zip_longest(before, after, fillvalue=0):
if b_val > 0:
return b_val
elif a_val > 0:
return a_val
else:
# what do you do if you have a list that's all zeroes?
This keeps us from having to deal with any indexing at all, which is common idiomatically in Python. If you have to do for i in range(...): some_list[i], then you're probably doing something wrong!
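As a side note, under the same "prefer before on ties" rule the whole function can also be collapsed into a single min() over the non-zero indices. This is a compact sketch (not from the original answer), shown mainly for contrast:

```python
def nearest_non_zero_compact(lst, idx):
    # indices of all non-zero entries
    candidates = [i for i, v in enumerate(lst) if v != 0]
    if not candidates:
        return 0
    # sort key: distance first; on ties, prefer indices before idx
    # (False < True, so i > idx loses the tie-break)
    return lst[min(candidates, key=lambda i: (abs(i - idx), i > idx))]
```

It always scans the whole list, trading the early exit for brevity, and the explicit zip_longest loop below is arguably clearer about the before/after preference.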
Our final code, then is:
from itertools import zip_longest
def nearest_non_zero(lst, idx):
if lst[idx] > 0:
return lst[idx]
before, after = lst[:idx], lst[idx+1:]
for b_val, a_val in zip_longest(reversed(before), after, fillvalue=0):
# N.B. I applied `reversed` inside `zip_longest` here. This
# ensures that `before` and `after` are the same type, and that
# `before + [lst[idx]] + after == lst`.
if b_val > 0:
return b_val
if a_val > 0:
return a_val
else:
return 0 # all zeroes in this list | {
"domain": "codereview.stackexchange",
"id": 26848,
"tags": "python"
} |
How to produce Fe(OH)2? | Question: I'm interested in producing Ferrum hydroxide (II), I did some research on the internet and made a plan for myself:
Conduct electrolysis:
Conduct electrolysis producing $\ce{NaOH}$ and chlorine solution.
Produce $\ce{HCl}$:
Produce $\ce{HCl}$ from chlorine solution by exposing it to UV light.
Make iron and hydrochloric acid react:
Adding iron in hydrochloric acid makes this reaction: $\ce{2HCl + Fe -> FeCl2 + H2}$ producing a solution with $\ce{HCl}$ and $\ce{FeCl2}$, then evaporate it and be left with solid $\ce{FeCl2}$ crystals.
Produce $\ce{Fe(OH)2}$:
As we have $\ce{FeCl2}$ and $\ce{NaOH}$, we can run this reaction: $\ce{2NaOH + FeCl2 -> Fe(OH)2 + 2NaCl}$, producing a precipitate ($\ce{Fe(OH)2}$).
The information above was taken from the internet, and I am not sure that it will work. I asked a question about electrolysis before and got an answer saying that electrolysis of saltwater generates $\ce{OH-}$ ions that create different chlorine-based substances; maybe those are good for making $\ce{HCl}$ acid as shown in step 2, but I'm not sure.
If this won't work, maybe there is another way of producing iron(II) hydroxide; if so, please write it in an answer. I also think that I could use baking soda as an electrolyte, but doesn't that mean that it will be the same as with salt? Could there be electrolytes that don't react with chlorine? Or is there any way of producing "pure" $\ce{NaOH}$ and $\ce{FeCl2}$?
Answer: The main problem in the synthesis of $\ce{Fe(OH)2}$ is not the procedure to use. The main problem is how to prevent $\ce{Fe(OH)2}$ from air oxidation. Whatever the technique used, the synthesis of $\ce{Fe(OH)2}$ will be done in contact with atmospheric air, and this contact quickly oxidizes $\ce{Fe(OH)2}$, which is green, into a ferric oxide, which is brown. So it is difficult to synthesise pure green $\ce{Fe(OH)2}$, whatever the reaction used.
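For the $\ce{FeSO4.7H2O}$/$\ce{NaOH}$ route, a quick molar-mass sketch (standard atomic masses; a hypothetical helper, not from the answer) gives the gram amounts involved per mole of product:

```python
# Gram amounts for a 1 mol FeSO4.7H2O + 2 mol NaOH preparation,
# using standard atomic masses (g/mol).
M = {'Fe': 55.845, 'S': 32.06, 'O': 15.999, 'H': 1.008, 'Na': 22.990}

feso4_7h2o = (M['Fe'] + M['S'] + 4 * M['O']
              + 7 * (2 * M['H'] + M['O']))
naoh = M['Na'] + M['O'] + M['H']

print(f"FeSO4.7H2O: {feso4_7h2o:.1f} g/mol")  # ~278 g/mol
print(f"2 mol NaOH: {2 * naoh:.1f} g")        # ~80 g
```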
In my opinion, the best way is to dissolve maybe $1$ mole commercial $\ce{FeSO4·7H2O}$ in $1-2$ liter water (depending on the temperature), and add $2$ moles of $\ce{NaOH}$ dissolved in the minimum volume of water, filtrate the green precipitate, wash and dry it as quickly as possible, if possible in a nitrogen atmosphere. Traces of air oxidation are easy to discover by observing brown spots on the green powder. | {
"domain": "chemistry.stackexchange",
"id": 17768,
"tags": "experimental-chemistry"
} |
Multiple observers watching a light source | Question: Every time I think I understand something about QED some very basic things just trip me up again.
So, I realise a photon can be thought of as an observation that carries two parameters, its wave length and its direction of propagation.
Also, I realise that if an observer observes a photon because of some interaction with matter (meaning electrons) - the classic example being a mirror, - the observation can be thought of to be built of all possible paths of the photon cancelling out each other and averaging out. I also appreciate you should not overthink this and it’s just what happens.
I also understand the dual split + detector observation that illustrates a collapsing of the wavelike nature of photon observations into a particle like behaviour when it hits a detector at a split.
However, I just can’t understand anymore how multiple observers can all look at a small source of EM, a light source, and all see the same at the same time. It’s driving me insane all of a sudden. Suppose billions of observations are made at the same time. All of humanity is looking at a small led in the vacuum of space.
Why don’t they collapse the photons into only a couple of them reaching a couple of observers? Am I getting a traditional confusion of Copenhagen interpretation or just not seeing something silly?
Also, if we consider matter being observed through its interaction with photons (which is maybe the only way to observe matter aside from gravity?), why do the photons not emerge somewhat like pin needles sticking out, and why do faraway observations not experience even more "holes" because of collapsed observations?
Sorry if this is trivial Copenhagen confusion. I’m just really confused and suddenly can no longer explain anything about the world.
Answer: The EM wave of light is made up of gazillions of photons. This is why multiple observers can see the same event. But they aren’t detecting the same photons, rather they are detecting photons from the same source.
However, if our optical system were capable of very high frame rates, high enough to resolve between photons in an EM wave, then we'd see in flashes. And this would be unique for every observer.
Take this experiment for example. Photons are sent one at a time through a double slit and captured on a screen. Each time, the photon only blips at one point on the screen. And many many photons later what we see is an interference pattern.
Now, if you repeat the experiment, you won’t get the same sequence of detections but finally the pattern will still be that of interference. If our detectors aren’t fast enough to refresh, we won’t see the building up of the pattern but the pattern directly. | {
"domain": "physics.stackexchange",
"id": 65295,
"tags": "quantum-field-theory, photons, quantum-electrodynamics"
} |
MoveIt and ROS-Industrial: No controller_list specified | Question:
I am trying to control my KUKA Agilus with the ROS-I experimental package and MoveIt, and I get an error about no controller_list being specified.
[ERROR] [1466689198.911826798]: No controller_list specified.
[ INFO] [1466689198.911892610]: Returned 0 controllers in list
[ INFO] [1466689198.919276819]: Trajectory execution is managing controllers
However, I do explicitly load the yaml config:
controller_list:
- name: ""
action_ns: joint_trajectory_action
type: FollowJointTrajectory
joints: [joint_a1, joint_a2, joint_a3, joint_a4, joint_a5, joint_a6]
And I can list the configuration with:
$ rosparam get /controller_list
What am I missing? The test XML reads as follows:
<launch>
<rosparam command="load" file="$(find kuka_rsi_hw_interface)/config/controller_joint_names.yaml"/>
<rosparam command="load" file="$(find agil_moveit_config)/config/controllers.yaml"/>
<arg name="sim" default="true" />
<arg name="robot_ip" unless="$(arg sim)" />
<include file="$(find agil_moveit_config)/launch/planning_context.launch" >
<arg name="load_robot_description" value="true" />
</include>
<!-- remap topics to conform to ROS-I specifications -->
<remap from="/position_trajectory_controller/follow_joint_trajectory" to="/joint_trajectory_action" />
<remap from="/position_trajectory_controller/state" to="/feedback_states" />
<remap from="/position_trajectory_controller/command" to="/joint_path_command"/>
<!-- run the robot simulator and action interface nodes -->
<group if="$(arg sim)">
<include file="$(find industrial_robot_simulator)/launch/robot_interface_simulator.launch" />
<include file="$(find kuka_rsi_hw_interface)/test/test_hardware_interface.launch" />
</group>
<include file="$(find agil_moveit_config)/launch/move_group.launch">
<arg name="publish_monitored_planning_scene" value="true" />
</include>
<include file="$(find agil_moveit_config)/launch/moveit_rviz.launch">
<arg name="config" value="true"/>
</include>
<include file="$(find agil_moveit_config)/launch/default_warehouse_db.launch" />
</launch>
Originally posted by cschindlbeck on ROS Answers with karma: 56 on 2016-06-23
Post score: 0
Answer:
From your error message, I think you forgot to update the agil_moveit_config/launch/kuka_kr6r900sixx_moveit_controller_manager.launch file. That is the file that actually uploads the contents of the config/controllers.yaml file, and sets the moveit_controller_manager to the correct value.
Without that, MoveIt won't know how to interface with your controllers.
I've also just created an example MoveIt configuration package for the KR 6 R900 sixx, see the kr6r900sixx_moveit_rsi_convenience in my fork of ros-industrial/kuka_experimental. I've added a bit more than just a working configuration, so please test it out and let me know if it works for you.
The main issue is that the kuka_rsi_hw_interface doesn't (by default) setup all the topics and services that the tutorial you followed to create your MoveIt configuration expects. Especially the namespacing of the action server and naming of the joint state and feedback topics is different. The structure I set up in kr6r900sixx_moveit_rsi_convenience takes care of that, so everything 'above' kuka_kr6_support should now be able to interact with the RSI hardware_interface as it would with a regular ROS-Industrial driver (but see also the comments in the launch files).
Originally posted by gvdhoorn with karma: 86574 on 2016-06-24
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by cschindlbeck on 2016-06-28:
Thanks a lot for all your effort! The simulation now works as expected. The default RRT planner now also solves the problem of the robot's weird floating behavior when executing a trajectory. However, I tried to connect the real robot with the RSI interface and there seems to be a problem.
Comment by cschindlbeck on 2016-06-28:
Maybe it is just a minor spelling error. Executing the "rsi_axis_correction.src" on the KUKA controller yields: "File not found: rsi_axis_correction". When I change the line to load "rsi_axis_corr.rsi", it says "RSI_CREATE: File not found ETHERNET1". Do I have to remap or specify something?
Comment by cschindlbeck on 2016-06-29:
After some analysis I suppose the ETHERNET1 block in the RSI diagram has no input source; that could be the problem.
Comment by gvdhoorn on 2016-06-29:
@cschindlbeck: could you please report this to the issue tracker? That way we can make sure to capture the problem and eventual solution in the proper place.
Comment by gvdhoorn on 2016-06-29:
Also: it sounds like your initial problem with the controller_list parameter has been resolved. Could I ask you to mark this question as solved then? The RSI / controller setup problems are unrelated to that. | {
"domain": "robotics.stackexchange",
"id": 25038,
"tags": "ros, moveit, kuka, ros-industrial"
} |
Improving my Tic Tac Toe Solution in Scala | Question: I'm relatively new to Scala and made a small TicTacToe in a functional style.
Any improvement options would be highly appreciated.
There are some things which I am unsure are done in an optimal way.
Error Handling in the readMove Function
If I should incorporate types like (type Player = Char, type Field = Int) and if they would benefit the code
The printBoard function looks unclean and I can't yet figure out how to best change it.
board(nextMove) = nextPlayer(board) seems to break the style of immutable values.
This is already an improvement from my first version
import scala.annotation.tailrec
import scala.io.StdIn.readLine
object TicTacToeOld {
val startBoard: Array[Char] = Array('1', '2', '3', '4', '5', '6', '7', '8', '9')
val patterns: Set[Set[Int]] = Set(
Set(0, 1, 2),
Set(3, 4, 5),
Set(6, 7, 8),
Set(0, 3, 6),
Set(1, 4, 7),
Set(2, 5, 8),
Set(0, 4, 8),
Set(2, 4, 6)
)
def main(args: Array[String]): Unit = {
startGame()
}
def startGame(): Unit ={
println("Welcome to TicTacToe!")
println("To play, enter the number on the board where you want to play")
printBoard(startBoard)
nextTurn(startBoard)
}
@tailrec
private def nextTurn(board: Array[Char]): Unit = {
val nextMove = readMove(board)
board(nextMove) = nextPlayer(board)
printBoard(board)
if (!isWon(board)) {
nextTurn(board)
}
}
@tailrec
private def readMove(board: Array[Char]): Int ={
try {
val input = readLine("Input next Turn: ").toInt-1
if(input<0 || input>8 || !board(input).toString.matches("[1-9]")) {
throw new Exception
}
input
} catch {
case _: Exception => readMove(board)
}
}
private def nextPlayer(board: Array[Char]): Char = {
val remainingTurns = board.count(_.toString.matches("[1-9]"))
if(remainingTurns%2 == 0) 'x' else 'o'
}
private def printBoard(board: Array[Char]): Unit = {
print(
0 to 2 map { r =>
0 to 2 map { c =>
board(c + r*3)
} mkString "|"
} mkString ("__________\n", "\n------\n", "\n")
)
println("Next Player is " + nextPlayer(board))
}
private def isWon(board: Array[Char]): Boolean = {
patterns.foreach(pattern=>{
if(pattern.forall(board(_) == board(pattern.head))) {
print("Winner is " + board(pattern.head))
return true
}
})
false
}
}
Answer:
1. You can remove the main method and use extends App. It saves 3 lines of code.
2. Replace Array('1', '2', '3', '4', '5', '6', '7', '8', '9') with ('1' to '9').toArray.
3. I prefer to always use a type annotation if it is not clear. E.g.: val nextMove: Int = readMove(board)
4. Use IntelliJ autoformat (Ctrl + Alt + L).
5. It is a good idea to tell the user what is wrong with their input before prompting for a new one.
6. [Char].toString.matches("[1-9]") may be replaced with [Char].isDigit (PS: this may not be correct, because 0 is also a digit).
7. Don't use the return keyword, especially in foreach, map and so on. It is not what you want, at least in these cases. foreach + forall + return should be replaced with contains + forall or something similar.
8. Usually the list map f syntax is used only for non-chained calls, but not for list map m filter f collect g, because it is unclear to the reader. UPDATE: this syntax is also used for symbol-named functions, like +, ::, ! (in Akka).
9. You can use a special type for a positive digit, for example case class PosDigit(v: Int) { require(v >= 0 && v <= 9, s"invalid positive digit: $v") }
10. There is no essential reason to pass board as an argument. Commonly, if an argument is passed, it is not changed; in your code it is.
UPDATE for 10: In functional programming the clean way is to pass an immutable collection to the function and return a new one if you need to. That is programming without side effects. In OOP there is the option to use a mutable collection of the class (or the class's object). Scala is both an OOP and an FP language.
Sorry for my English. | {
"domain": "codereview.stackexchange",
"id": 41312,
"tags": "functional-programming, scala"
} |
easily move relative base position in urdf (for 2 separate robots)? | Question:
I want to display two robots in Rviz using URDF that do not share a base link (to be controlled in simulation using joint_state_publisher) by only changing 1 base offset origin in the world coordinate system for each robot.
I experimented with a lot of combinations using URDF displayed in RViz, but no luck. I'm sure someone has done this, only I can't find the answer.
I thought the axis and origin were offsets from the previous link (but since everything is defined on top of 0,0,0 this may be an erroneous assumption.)
TIA
Edit: This URDF only moves the base link offset - all connected links and joints above are aligned along the (0,0,0) axes:
<link name="base_link">
<visual>
<origin rpy="0 0 0" xyz="0 -0.5 0"/>
<geometry>
<mesh filename="package://fanuc_lrmate200id_support/meshes/lrmate200id/visual/base_link.stl" scale="1.0 1.0 1.0"/>
</geometry>
....
I want to move (offset) the robot -0.5 in the y axis so I can add another robot at +0.5 in the y axis.
A sample would work best - I tried suggestions found on the internet but they didn't work, and I could not find anything on the web.
It appears as if I have to offset all links/joints by the base amount, along the lines of the PR2.
Originally posted by rosnutsbolts on ROS Answers with karma: 3 on 2016-09-16
Post score: 0
Original comments
Comment by Mark Rose on 2016-09-16:
Normally the robot is moved around in rviz by the frame transformations published by your odometry components. That is, the odometry publishes the transform from the odom frame to base_link (or whatever the name of your robot's main frame is). joint_state_publisher is not used for this.
Comment by Mark Rose on 2016-09-16:
Maybe give a little more detail what you're trying to do? You want to visualize two robots? That's doable. How are you publishing odometry for each? Do you have a separate URDF for each? (Or two instances of the state publishers and the same URDF.)
Comment by JoshMarino on 2016-09-17:
You can create a dummy link for each robot that will connect from world link to the base link of each robot, assuming you are having two robots defined in the same URDF. Not sure if this is what you are looking for.
Answer:
Below is an example of what I commented earlier, if this is what you are looking for. You will have to modify it for your specific robot of course.
<robot xmlns:xacro="http://www.ros.org/wiki/xacro" name="robot" >
<!-- World Link -->
<link name="world" />
<!-- Dummy Link -->
<link name="link0" />
<joint name="world_joint" type="fixed">
<parent link="world" />
<child link="link0" />
<origin xyz="0 0 0" rpy="0 0 0"/>
<axis xyz="0 0 1"/>
</joint>
<!-- First Robot -->
<joint name="link0_joint" type="fixed">
<parent link="link0" />
<child link="robot1_link1" />
<origin xyz="0 -0.5 0" rpy="0 0 0"/>
</joint>
<link name="robot1_link1" />
<joint name="robot1_joint1" type="revolute">
<link name="robot1_link2" />
<joint name="robot1_joint2" type="revolute">
<link name="robot1_link3" />
<!-- Second Robot -->
<joint name="link0_joint" type="fixed">
<parent link="link0" />
<child link="robot2_link1" />
<origin xyz="0 0.5 0" rpy="0 0 0"/>
</joint>
<link name="robot2_link1" />
<joint name="robot2_joint1" type="revolute">
<link name="robot2_link2" />
<joint name="robot2_joint2" type="revolute">
<link name="robot2_link3" />
</robot>
Originally posted by JoshMarino with karma: 592 on 2016-09-21
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by rosnutsbolts on 2016-09-22:
Thanks - explanation helpful. One caveat. The robot URDF I was testing against had a fixed base joint connected to joint_1, when I changed base joint to 0,-0.5,0 as above it only moved the base. When I changed the first revolute joint in the robot to 0,-0.5,0, the entire robot shifted.
Comment by gvdhoorn on 2016-09-22:
@rosnutsbolts: you don't want to move joints. To properly do what @JoshMarino is suggesting, create a fixed joint between your shared world link and the root link of the fanuc_lrmate200id model you're instantiating twice. The root would be base_link. That will automatically move the rest too. | {
"domain": "robotics.stackexchange",
"id": 25776,
"tags": "ubuntu-precise, ubuntu"
} |
Dimensional analysis of power | Question: I am trying to do the dimensional analysis of power in two different ways
Using $P = \frac{dW}{dt}$
$\frac{[W]}{[t]} = L^2\cdot T^{-3}\cdot M$
since $W$ is in joule
Using the power formula $P = VI$
I get to $[P] = [V]\cdot I$ and $[P] = [L]\cdot [T]^{-1}\cdot I$ and I am stuck here, how can I get to the first result? $I$ is an SI base unit so I don't know what to do next...
Answer: In the equation $P = V \cdot I$, the $V$ stands for voltage, not velocity. The units of voltage are joules/coulomb, and the units of $I$ (current) are coulombs/second, so you should get the same answer. | {
"domain": "physics.stackexchange",
"id": 71095,
"tags": "dimensional-analysis, power"
} |
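As a quick check of the answer above, both routes to $[P]$ can be verified mechanically by treating a dimension as a map from SI base dimensions to exponents. This is a sketch with hand-rolled helpers (the names mul, div, clean are mine, not a library API):

```python
# Dimensions as exponent maps over the SI base dimensions M, L, T, I (current).

def mul(a, b):
    return {k: a.get(k, 0) + b.get(k, 0) for k in set(a) | set(b)}

def div(a, b):
    return mul(a, {k: -v for k, v in b.items()})

def clean(d):
    return {k: v for k, v in d.items() if v != 0}

joule   = {"M": 1, "L": 2, "T": -2}   # energy
second  = {"T": 1}
coulomb = {"I": 1, "T": 1}            # charge = current * time

power_from_work = div(joule, second)   # P = dW/dt
volt            = div(joule, coulomb)  # V = joules per coulomb, as in the answer
power_from_VI   = mul(volt, {"I": 1})  # P = V * I

assert clean(power_from_work) == clean(power_from_VI) == {"M": 1, "L": 2, "T": -3}
```

The key step is exactly the answer's point: $[V] = \mathrm{J/C}$, not $\mathrm{L\cdot T^{-1}}$, so the current's dimension cancels.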
Flying Insect Identification | Question: What insect is this?
Found in the UK, approximately 10mm-15mm in length.
Answer: That insect looks quite like a mayfly, an ancient insect that belongs to the order Ephemeroptera. I'll go a step further and guess the species: that one is most probably a green drake. They aren't harmful at all in the sense that they don't sting or bite, but if a lot of dead mayflies pile up they can cause respiratory inflammations, asthma, hay fever, that sort of thing.
They're a vital part of the pond ecosystem. Here's a picture of a Green drake and it's quite similar to your picture. Credit to Missoulian Angler
I think some of the major reasons why it is a mayfly are as follows:
The transparent wings are a minor giveaway.
The three tail filaments are a major giveaway.
Very short almost non-existent antennae | {
"domain": "biology.stackexchange",
"id": 11429,
"tags": "species-identification, entomology"
} |
Labeled points in $\{0,1\}^n$ such that every linear separator requires exponential weights | Question: I want to find labeled samples in $\{0,1\}^n$ such that the Perceptron algorithm takes $2^{\Omega(n)}$ steps to converge. One way to do this would be to find a sequence of labeled examples that are linearly separable, but require every linear separator to have at least one exponentially large weight. To show that the samples are linearly separable, it is enough to show that they are consistent with a decision list, which should be apparent from the list of samples. So, my question is
Does there exist a set of labeled samples $S$ in $\{0,1\}^n$ that are consistent with a decision list and such that any linear threshold function that correctly labels $S$ has at least one exponentially large weight $w_i = 2^{\Omega(n)}$?
Here are the definitions that I'm working with: A linear threshold function $f \colon \{0,1\}^n \to \{0,1\}$ with associated weights $w_0, \dots, w_n \in \mathbb{R}$ gives $f(x) = 1$ if and only if $w_1x_1 + w_2x_2 + \dots + w_nx_n \geq w_0$. Given a set $S$ of points in $\{0,1\}^n$ labeled $0$ or $1$, we say that a linear threshold function $f$ correctly labels $S$ if $f(x) = 1$ whenever $x$ is labeled $1$ and $f(x) = 0$ whenever $x$ is labeled $0$ for all $x \in S$.
Note: I had asked the same question on math.stackexchange since it seemed relevant to both fields. Here is the link for that.
Answer: Håstad gave an even better example in his paper On the Size of Weights for Threshold Gates, which requires super exponential weights.
A simple example which requires exponential weights is the function $\sum_i 2^i (x_i - y_i) \geq 0$ or variants. | {
"domain": "cs.stackexchange",
"id": 15994,
"tags": "algorithms, time-complexity, machine-learning, linear-programming, learning-theory"
} |
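As a concrete complement to the answer above: the weights $w_i = 2^i$ in the second example realize the comparison of two $n$-bit numbers. The brute-force check below (my own illustration) verifies that realization; it does not prove the harder direction, that every weight representation of this function needs exponentially large weights.

```python
from itertools import product

n = 4
w = [2 ** i for i in range(n)]  # exponentially growing weights

def compare(x, y):
    # f(x, y) = 1  iff  sum_i 2^i (x_i - y_i) >= 0, the answer's example
    return int(sum(w[i] * (x[i] - y[i]) for i in range(n)) >= 0)

def as_int(bits):
    # bit i has place value 2^i
    return sum(b << i for i, b in enumerate(bits))

# the threshold function is exactly "x >= y" on the encoded integers
for x in product((0, 1), repeat=n):
    for y in product((0, 1), repeat=n):
        assert compare(x, y) == int(as_int(x) >= as_int(y))
```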
How to prove that the symmetrisation Operator is hermitian? | Question: Let $\mathcal{H}_N$ be the $N$ particle Hilbert space. So a quantum state $\left| \Psi \right>$ may be representated by
$$\left| \Psi \right> = \left| k_1 \right>^{(1)}\left| k_2 \right>^{(2)}...\left| k_N \right>^{(N)}$$
, where the $\left| k_i \right>^{(n)}$ each form a basis of the single particle Hilbert space. In order to obtain the fully symmetric subspace $\mathcal{H}_N^S$ the permutation Opertaror
$$\hat P \left[ \left| k_1 \right>^{(1)}\left| k_2 \right>^{(2)}...\left| k_N \right>^{(N)} \right] = \left| k_{P_1} \right>^{(1)}\left| k_{P_2} \right>^{(2)}...\left| k_{P_N} \right>^{(N)}$$
and the symmetrisation Operator
$$ \hat S = \frac{1}{N!} \sum_P \hat P$$
are introduced.
I am asked to prove that the symmetrisation Operator is hermitian. My first idea was that the permutation operator is hermitian and therefore the symmetrisation Operator, as a sum of hermitian Operators, is too. However I struggle to prove it and by now am unsure if $\hat P$ is hermitian after all. Any help is most welcome.
Edit
With some inspiration from the answers I think I got a proof. I think it is different from what the answers are aiming at, but I don't see how to do it in another way.
So here is my shot: Every permutation Operator can be decomposed into a number of Exchange Operators $\hat E_{ij}$ that exchange two particles:
$$\hat P = \prod_k \hat E_{i_k j_k}$$, with
$$\hat E_{ij} \left| k_1 \right>^{(1)}...\left| k_i \right>^{(i)}...\left| k_j \right>^{(j)}...\left| k_N \right>^{(N)} = \left| k_1 \right>^{(1)}...\left| k_{j} \right>^{(i)}...\left| k_{i} \right>^{(j)}...\left| k_N \right>^{(N)}$$
To prove that $\hat P$ (and therefore $\hat S$) is hermitian it is sufficient to prove that $\hat E_{ij}$ is. This finally can be done explicitly via
$$\left< k_1' \right|^{(1)}...\left< k_i' \right|^{(i)}...\left< k_j' \right|^{(j)}...\left< k_N' \right|^{(N)} \hat E_{ij} \left| k_1 \right>^{(1)}...\left| k_i \right>^{(i)}...\left| k_j \right>^{(j)}...\left| k_N \right>^{(N)}\\
=\delta(k_1',k_1)... \delta(k_i', k_j)... \delta(k_j', k_i)...\delta(k_N', k_N) $$
,which is the same as
$$\left< k_1 \right|^{(1)}...\left< k_i \right|^{(i)}...\left< k_j \right|^{(j)}...\left< k_N \right|^{(N)} \hat E_{ij} \left| k_1' \right>^{(1)}...\left| k_i' \right>^{(i)}...\left| k_j' \right>^{(j)}...\left| k_N' \right>^{(N)}\\
=\delta(k_1,k_1')... \delta(k_i, k_j')... \delta(k_j, k_i')...\delta(k_N, k_N') $$
Since both terms are real, we may freely take the complex conjugate and arrive at the hermitian condition for the matrix elements. I think this should prove it for $\hat E_{ij} \Rightarrow \hat P \Rightarrow \hat S$.
Edit 2
Darn, the proof doesn't work since the $\hat E_{ij}$ don't commute.
Answer: Let's start with a simple case where we consider only two particles. Take a complete set of one-particle states such that a basis of two-particle states can be obtained as a product
$|k_{i}^{(1)}\rangle\bigotimes|k_{j}^{(2)}\rangle=|k_{i}^{(1)},k_{j}^{(2)}\rangle$
with the permutation operator that exchanges particles
$P|k_{i}^{(1)},k_{j}^{(2)}\rangle=|k_{i}^{(2)},k_{j}^{(1)}\rangle$
Applying this operator twice we recover the original state, thus $P^{2}=1$. $P$ is shown to be hermitian by considering any matrix element
$\langle u_{i}^{(1)},u_{j}^{(2)}|P|u_{i'}^{(1)},u_{j'}^{(2)}\rangle=\langle u_{i}^{(1)},u_{j}^{(2)}|u_{i'}^{(2)},u_{j'}^{(1)}\rangle=\delta_{ij'} \delta_{ji'}$
Since all the matrix elements are real you get $P^{\dagger}=P$. And since $P^{2}=1$ we also have $PP=P^{\dagger}P=1$, thus $P$ is unitary. This can be easily generalized to $N$ particles. This implies $S^{\dagger}=S$, i.e. that the operator is hermitian. This result is also true for the antisymmetrization operator. To convince yourself, take states that are eigenstates of all permutation operators in a system with $N$ particles. Index with $p$ each of the $N!$ possible permutations and let $|\psi\rangle$ be such that $P_{p}|\psi\rangle=c_{p}|\psi\rangle$. It's easily seen that the eigenvalue for a permutation should be $c_{p}=(\pm 1)^{n_p}$, where $n_{p}$ is the number of transpositions into which the $p$-th permutation can be split. Now you find two cases, the totally symmetric and the totally antisymmetric one, which lead you to the symmetrisation and antisymmetrisation operators.
EDIT:
For $N>2$ I generalize the case for two particles; construct a basis for the $N$-particles state as a product of one particle states
$|k_{i_1}^{(1)},\dots,k_{i_N}^{(N)}\rangle=|k_{i_1}^{(1)}\rangle\bigotimes\dots\bigotimes|k_{i_N}^{(N)}\rangle$
In general there are $N!$ possible permutations. A general permutation is denoted by
$P_{l_{1}\dots{l_N}}$ where $l_{j}=1\dots N$ and $j=1\dots N$
However, not to carry too many indices, I'll discuss the case for $N=3$. Obviously there are $6$ possible permutations $P_{123},P_{231},P_{312},P_{213},P_{321},P_{132}$. The permutation operators form a group since
the product of two permutation operators is a permutation operator $P_{312}P_{213}=P_{132}$
the identity is also a permutation operator. $P_{123}$ in this case
the inverse of a permutation is also a permutation. $P_{231}^{-1}=P_{312}$, etc.
But, in general, permutations do not commute. Here it can be seen that $P_{213}P_{312}=P_{321}\ne P_{132}$. As you know, a permutation where only two particles are permuted is called a transposition. For $N=3$ these are $P_{132}, P_{321}, P_{213}$. As I've shown above for $N=2$, transpositions are hermitian and unitary. Also, each permutation can be expressed as a product of transpositions, e.g. $P_{312}=P_{132}P_{213}$. The parity of a permutation is given by the parity of the number of transpositions into which it can be split. Since a permutation can be expressed as a product of transpositions and transpositions are unitary, permutations are unitary. But since, as above, permutations do not commute in general, although transpositions are hermitian, a general permutation is not.
Now, with this let's pass to the symmetrization and antisymmetrization operators. As stated in the last part of my comment before the edit, we look for states that are eigenstates of all permutation operators in a system with $N$ particles. The states must satisfy $P_{p}|\psi\rangle=c_{p}|\psi\rangle$ ($p$ is the same as stated above before the edit). Now, the easiest case is that of a transposition, where $c_{p}^{transpose}=\pm 1$. Since $|\psi\rangle$ is assumed to be an eigenvector of all permutations, it should be an eigenvector of all transpositions. Also, if the particles are indistinguishable, the eigenvalue of a transposition cannot depend on the particular transposition, thus the eigenvalue should be the same for all transpositions. The eigenvalue for a permutation is then $c_{p}=(\pm 1)^{n_{p}}$. It is even or odd depending on the parity of the permutation. This gives us two cases
Total symmetric case $P_{p}|\psi_{S}\rangle=|\psi_{S}\rangle$, $c_{p}=1$
Total antisymmetric case $P_{p}|\psi_{A}\rangle=a_{p}|\psi_{A}\rangle$, where $a_{p}=+1$ for even permutations and $a_{p}=-1$ for odd ones
Now, the totally symmetric and antisymmetric states span subspaces of the total Hilbert space, which we denote $\mathcal{H}_{S}$ and $\mathcal{H}_{A}$. It's easily proven that the states of these two subspaces are orthogonal to each other
$\langle\psi_{S}|\psi_{A}\rangle=\langle\psi_{S}|P_{p}^{\dagger}|\psi_{A}\rangle=\langle\psi_{S}|P_{p}^{-1}|\psi_{A}\rangle=-\langle\psi_{S}|\psi_{A}\rangle$
Here I assumed that the parity of $P_{p}^{-1}$ is odd. Finally, we build the projection operators onto the subspaces $\mathcal{H}_{S}$ and $\mathcal{H}_{A}$ such that we obtain elements of either subspace from any given state
$S=\frac{1}{N!}\sum_{p}P_{p}$ symmetrization op.
$A=\frac{1}{N!}\sum_{p}a_{p}P_{p}$ antisymmetrization op.
Since $P_{p}$ is unitary, $P^{\dagger}=P^{-1}$ is also a permutation operator. All of this implies that $S^{\dagger}=S$ and $A^{\dagger}=A$, thus both operators are hermitian. Also, given a random permutation operator $P_{r}$ we find
$P_{r}S=\frac{1}{N!}\sum_{p}P_{r}P_{p}=\frac{1}{N!}\sum_{t}P_{t}=S$
with this you can show that $S^{2}=S$. This can help you in a more detailed proof. Hope this helps. | {
"domain": "physics.stackexchange",
"id": 6920,
"tags": "quantum-mechanics, homework-and-exercises, operators"
} |
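The two key properties claimed for $\hat S$ in the answer, $S^{\dagger}=S$ and $S^{2}=S$, can be checked numerically for a small case. The sketch below (the index convention is my own choice) builds the $N!=6$ permutation operators on $(\mathbb{C}^{2})^{\otimes 3}$ as permutation matrices and averages them:

```python
import itertools
import numpy as np

d, N = 2, 3        # single-particle dimension and particle number
dim = d ** N
shape = (d,) * N

def perm_matrix(p):
    # operator sending the basis state indexed by (k_1, ..., k_N)
    # to the one indexed by (k_{p(1)}, ..., k_{p(N)})
    P = np.zeros((dim, dim))
    for idx in itertools.product(range(d), repeat=N):
        out = tuple(idx[p[i]] for i in range(N))
        P[np.ravel_multi_index(out, shape), np.ravel_multi_index(idx, shape)] = 1.0
    return P

perms = list(itertools.permutations(range(N)))
S = sum(perm_matrix(p) for p in perms) / len(perms)

assert np.allclose(S, S.T)    # hermitian (real symmetric here)
assert np.allclose(S @ S, S)  # projector onto the symmetric subspace
```

Note that the individual $P_p$ matrices for the 3-cycles are not symmetric, matching the answer's point that a general permutation is unitary but not hermitian; only the group average $S$ is.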
Mechanism of the formation of peracetic acid | Question: Wikipedia says that the equilibrium $$\ce{H2O2 + CH3COOH ⇌ CH3COOOH + H2O}$$ occurs. What is its mechanism?
The following is my speculation.
The first possibility is that $\ce{CH3COOH}$ is protonated into $\ce{CH3CO(OH2)+}$ because of the strong acid condition and then turns into $\ce{CH3C+O}$. Because the oxygen atom in $\ce{H2O2}$ is electron rich, it will bond with the carbon atom with positive charge to form $\ce{CH3C(=O)O(OH+)H}$ and then peracetic acid is formed by deprotonation.
The second one is that the oxygen atom in $\ce{H2O2}$ attacks the carbon atom in $\ce{MeCOOH}$, then the $\ce{OH}$ in $\ce{COOH}$ and
one of the $\ce H$ in $\ce {H2O2}$ leave.
Is the mechanism above right or not? If it is not, what's the correct one?
P.S. (this question does not answer my question)
Answer: You are not too far off. It is somewhat of a mixture of the two mechanisms you proposed. The carbonyl oxygen is much more basic than the non-carbonyl oxygen and will be protonated preferentially. Then hydrogen peroxide can attack, and once the tetrahedral intermediate collapses, deprotonation yields peracetic acid. | {
"domain": "chemistry.stackexchange",
"id": 11858,
"tags": "organic-chemistry, reaction-mechanism"
} |
tf installation problem | Question:
Hello,
I have been trying to start working with tf.
I followed this tutorial
http://www.ros.org/wiki/tf/Tutorials/Introduction%20to%20tf
But I get this error
ERROR: Rosdep cannot find all required resources to answer your query
Missing resource turtle_tf
ROS path [0]=/opt/ros/groovy/share/ros
ROS path [1]=/opt/ros/groovy/share
ROS path [2]=/opt/ros/groovy/stacks
when I try to run
rosdep install turtle_tf rviz
Any suggestions will be appreciated.
Thanks!
Originally posted by TheSkyfall on ROS Answers with karma: 39 on 2013-02-12
Post score: 0
Answer:
The reason for this is that ROS is not able to find the turtlesim package. Either the package is not installed, or the path to the ROS package directory is not correctly defined. Try the command "roscd turtlesim" and see whether you're directed to the turtlesim folder.
Originally posted by ayush_dewan with karma: 1610 on 2013-02-12
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by TheSkyfall on 2013-02-12:
Thank you. the "roscd turtlesim" is working correctly! but I still have the problem :(
Comment by ayush_dewan on 2013-02-12:
Sorry, check for the "turtle_tf" package instead of turtlesim.
Comment by TheSkyfall on 2013-02-12:
There is no such package "turtle_tf". I had to install its stack first using this line: sudo apt-get install ros-groovy-geometry-tutorials. Now it's working | {
"domain": "robotics.stackexchange",
"id": 12856,
"tags": "ros"
} |
Frequency sampling method: why are the coefficients not symmetric? | Question: Here is Matlab code
ㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡ
fs = 2000;
f = 0:20:1000;
D = besselj(0,f);
DD = fliplr(D);
DD(1) = [];
DD(end) = [];
H = [D DD];
h = ifft(H);
ㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡㅡ
to make an even FIR filter
D : half of the desired frequency response
DD : mirrored half of the desired frequency response
h(1) and h(end) are not symmetric
help me
Answer: Your impulse response is symmetric. Why do you think it's not?
Symmetry here means
$$h[n] = h[-n]$$
and since a DFT of length $N$ is inherently periodic in both domains with $N$ we can extend this to
$$h[n+kN] = h[-n+mN] \qquad m,n \in \mathbb{Z}$$
Matlab unfortunately uses an array offset of 1, i.e. $h[0]$ is coded as h(1) and $h[N-1]$ is h(N) or h(end).
Symmetry requires (for example) $h[1] = h[-1]$ which given the periodicity is also $h[1] = h[N-1]$ which in Matlab becomes h(2) = h(N) | {
"domain": "dsp.stackexchange",
"id": 10866,
"tags": "filter-design, finite-impulse-response, frequency-response"
} |
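The symmetry claimed in the answer is easy to confirm by translating the question's Matlab snippet to NumPy/SciPy (a sketch; scipy.special.jv stands in for besselj):

```python
import numpy as np
from scipy.special import jv

f = np.arange(0, 1001, 20)   # 51 frequency samples, like f = 0:20:1000
D = jv(0, f)                 # besselj(0, f)
DD = D[::-1][1:-1]           # fliplr, then drop first and last (49 samples)
H = np.concatenate([D, DD])  # length 100; H[k] == H[(100 - k) % 100]
h = np.fft.ifft(H).real      # H is real and even, so h is real

# h[0] has no mirror partner; every other tap satisfies h[n] == h[N - n]
assert np.allclose(h[1:], h[:0:-1])
```

In Matlab's 1-based indexing this is exactly h(2) == h(end), h(3) == h(end-1), and so on, while h(1) pairs with itself.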
Transformation of E.coli cells | Question: As a team of undergraduate researchers we are looking to transform two genes of interest into our competent E.coli cells. Both of these genes are in separate plasmids and we would like to put them into one E.coli cell. Is it possible for a cell to uptake two plasmids in one transformation process? If the last question is impossible, is it possible to transform a cell and then transform the colonies produced by that transformation? Thank you very much.
Answer: Double transformation (you may want to try searches using that term) has been doable for a very long time, but the efficiency will of course be less than either single transformation below the saturation point.
As for your second question, if the cells are made competent, I think a sequential transformation should work. | {
"domain": "biology.stackexchange",
"id": 4056,
"tags": "plasmids, transformation"
} |
What is the ground state of the Hubbard model for $t = 0$? | Question: I am new to the Hubbard model and it is not clear to me how we go to the so-called 'atomic limit' where $t = 0$. So we only have a $U > 0$ term in the Hamiltonian. This should be a trivial problem to solve from what I have heard, but I am unable to see what the ground state energy of an $N$-site Hubbard model is in this case.
Answer: I'll try to give an answer. I am assuming you have a fixed number of electrons. The Hilbert space of the model has dimension $4^L$, with $L$ the number of sites in your system, since each site can hold either zero electrons, one electron of spin up or spin down, or both.
Your $H = U \sum_r c^{\dagger}_{r,\uparrow} c_{r,\uparrow} c^{\dagger}_{r,\downarrow} c_{r,\downarrow}$ where $c_{r,\sigma}$ annihilates an electron at site $r$ with spin $\sigma$. This is just the Hubbard model with $t=0$.
From this we now start from the vacuum state with no electrons, and you find $H |0\rangle = 0|0\rangle$, so the energy of this state is 0. If you create only one electron on any site you'll get zero also. If you ever put two electrons on the same site, the state will then have energy $U$. If you have $N$ sites that are doubly occupied you'll have energy $N\times U$. So the ground state is the one with as few pairs as possible. If you had a classical half-filled band, so there are $L$ electrons, one for each site, the ground state would be one electron on each site, having zero energy, and each pair would cost $U$ in energy. This ground state is very degenerate though, $2^L$-fold I would think, since there are no spin interactions, so there are many ways to choose the spins of your $L$ electrons. | {
"domain": "physics.stackexchange",
"id": 86232,
"tags": "ground-state, bose-hubbard-model"
} |
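The counting in the answer can be verified by brute force on a small lattice. The sketch below (my own illustration) enumerates every occupation configuration at half filling; since $t=0$ the Hamiltonian is diagonal in this basis, with energy $U$ times the number of doubly occupied sites:

```python
from itertools import product

L, U = 4, 1.0
# each site holds (n_up, n_dn): empty, one spin-up, one spin-down, or both
site_states = [(0, 0), (1, 0), (0, 1), (1, 1)]

energies = {}
for config in product(site_states, repeat=L):
    if sum(u + d for u, d in config) != L:  # half filling: L electrons on L sites
        continue
    E = U * sum(1 for u, d in config if (u, d) == (1, 1))
    energies[E] = energies.get(E, 0) + 1

assert min(energies) == 0.0     # ground-state energy is zero
assert energies[0.0] == 2 ** L  # one electron per site, free spin choice
```

The $2^L$-fold ground-state degeneracy is exactly the free choice of spin on each singly occupied site.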
How fast can manned spacecraft slingshot from Black Holes? | Question: I've read of Black Holes launching particles at over 99% the speed of light. Could manned spacecraft use Black Holes to slingshot ourselves at this speed, or would the G forces kill us?
Intuitively, I worry the inertia of turning the curve would kill the crew.
Answer: See StephenG's answer for why an eccentric orbit doesn't let you gain any more energy leaving the mass than you had when you started approaching the mass.
You will probably get vaporized by other high-energy particles in your path, but let's ignore those since you asked about gravity problems only.
There won't be any felt gravitational acceleration or felt centripetal acceleration. These are exactly balanced regardless of orbital eccentricity. An object in an eccentric orbit is still an object in free-fall, which is to say, it's weightless.
There will be tidal acceleration problems caused by the greater gravity closer to the black hole than slightly farther away, which (if you're standing with your head away from the black hole and your feet towards it) will cause your feet to try to pull away from your head.
There is some tidal acceleration that will be hazardous to our astronauts. Call that $\Delta a_{max}$. We could determine this experimentally using a centrifuge and some volunteers. My intuition is that it's probably about the same as maximum safe linear acceleration, so somewhere under 10 gravities.
Acceleration due to gravity from an object of mass M is
$a(r) = \frac{GM}{r^2\sqrt{1-\frac{R_s}{r}}}$
where $R_s = \frac{2GM}{c^2}$
so the tidal acceleration across two meters is
$\Delta a = a(r) - a(r+2m)$
And our distance of closest approach, r is a function of M obtained by solving for r in
$\Delta a_{max} = |\frac{GM}{r^2\sqrt{1-\frac{R_s}{r}}} - \frac{GM}{(r+2m)^2\sqrt{1-\frac{R_s}{(r+2m)}}}|$
I don't think this is solvable without a computer approximation. We can approximate by using the derivative,
$\Delta a \approx \Delta r \frac {d}{dr}a = -\dfrac{GM\left(4r-3R_s\right)\Delta r}{2\left(1-\frac{R_s}{r}\right)^\frac{3}{2}r^4} \approx -\dfrac{2GM\Delta r}{r^3}$
Which then becomes easy to solve,
$r_{min} \approx \sqrt[3]{|\frac{2GM\Delta r}{\Delta a_{max}}|}$
Checking to make sure that $r_{min} \gg R_s$ as a sanity-check after the fact.
We can then plug $r_{min}$ back into $a(r)$ to obtain our maximum safe rate of gravitational acceleration. | {
"domain": "physics.stackexchange",
"id": 84288,
"tags": "black-holes, acceleration, estimation"
} |
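To make the final sanity check concrete, here is the arithmetic for a 10-solar-mass black hole. The constants and the roughly 10 g tidal-tolerance figure are assumptions of mine, not values from the answer:

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg

da_max = 10 * 9.81 # assumed survivable head-to-foot tidal acceleration, ~10 g
dr = 2.0           # astronaut height, m

M = 10 * M_sun
r_min = (2 * G * M * dr / da_max) ** (1 / 3)  # r_min ~ cbrt(2 G M dr / da_max)
R_s = 2 * G * M / c ** 2                      # Schwarzschild radius

# r_min ~ 3.8e6 m versus R_s ~ 3.0e4 m, so r_min >> R_s holds as required
assert r_min > 100 * R_s
```

Since $R_s$ grows linearly in $M$ while $r_{min}$ grows only as $M^{1/3}$, for supermassive black holes the tidal limit falls inside the horizon and the weak-tide sanity check fails.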
Sinusoid format | Question:
I am having a bit of trouble understanding why, after we defined the sinusoid as (2.1), we changed the sin to a cos in (2.2).
Thanks.
Answer: The typical complex Fourier transform or decomposition has the cosine function representing the real components and the sine function representing the imaginary components of the decomposition.
The above cosine normalization allows the DC term, or cos(0), to be real and non-zero, given strictly real input (because ωt == 0 for all t if ω == 0). It would be a bit strange to input a strictly real signal and get an imaginary DC term out of a Fourier analysis.
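The DC-term claim is easy to verify with a hand-rolled DFT bin (a small sketch, not taken from the book in question): for a strictly real, constant input, the cosine (real) part of the $k = 0$ bin carries the whole signal and the sine (imaginary) part is exactly zero.

```javascript
// Compute one DFT bin, keeping the cosine (real) and sine (imaginary)
// components separate, as in the cosine normalization described above.
function dftBin(x, k) {
  const N = x.length;
  let re = 0, im = 0;
  for (let n = 0; n < N; n++) {
    const w = (2 * Math.PI * k * n) / N;
    re += x[n] * Math.cos(w); // cosine / real component
    im -= x[n] * Math.sin(w); // sine / imaginary component
  }
  return [re, im];
}

const x = [3, 3, 3, 3];           // constant (DC), strictly real signal
const [re0, im0] = dftBin(x, 0);  // k = 0 ⇒ ωt = 0 for every sample
console.log(re0, im0);            // 12 0 — the DC term is real and non-zero
```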
Another feature of the cosine normalization is that the cosine terms decompose a signal into a strictly even symmetric function. The remaining sine terms are all odd functions. Only an even function can represent a constant (DC) non-zero signal. | {
"domain": "dsp.stackexchange",
"id": 5165,
"tags": "signal-analysis, fourier-transform"
} |
Infinite Slider Conversion to Prototype | Question: I've found the following example of an infinite slider to use on a project, but as I will have multiple instances I have converted it to a prototype.
The original example
https://medium.com/@claudiaconceic/infinite-plain-javascript-slider-click-and-touch-events-540c8bd174f2
My conversion
function Slideshow(slider) {
const _self = this;
this.slider = slider;
this.sliderItems = slider.querySelector('.slides');
this.next = slider.querySelector('.control.next');
this.prev = slider.querySelector('.control.prev');
this.posX1 = 0;
this.posX2 = 0;
this.posInitial = null;
this.posFinal = null;
this.threshold = 100;
this.slides = this.sliderItems.getElementsByClassName('slide');
this.slidesLength = this.slides.length;
this.slideSize = this.sliderItems.getElementsByClassName('slide')[0].offsetWidth;
this.firstSlide = this.slides[0];
this.lastSlide = this.slides[this.slidesLength - 1];
this.cloneFirst = this.firstSlide.cloneNode(true);
this.cloneLast = this.lastSlide.cloneNode(true);
this.index = 0;
this.allowShift = true;
// Listen for mousedown events,
// when they happen, call the dragStart function
this.sliderItems.onmousedown = (ev) => {
this.dragStart.call(_self, ev);
}
// Touch Events
this.sliderItems.addEventListener('touchstart', (ev) => {this.dragStart(ev)});
this.sliderItems.addEventListener('touchend', (ev) => {this.dragEnd(ev)});
this.sliderItems.addEventListener('touchmove', (ev) => {this.dragAction(ev)});
// Click Events
this.next.addEventListener('click', () => this.shiftSlide(1));
this.prev.addEventListener('click', () => this.shiftSlide(-1));
// Transition Events
this.sliderItems.addEventListener('transitionend', this.checkIndex.bind(_self));
this.slide.call(this);
}
Slideshow.prototype.slide = function() {
this.cloneSlides.call(this);
}
// Clone Slides
Slideshow.prototype.cloneSlides = function() {
this.sliderItems.appendChild(this.cloneFirst);
this.sliderItems.insertBefore(this.cloneLast, this.firstSlide);
this.slider.classList.add('loaded');
}
// Drag Start
Slideshow.prototype.dragStart = function(event) {
_self = this;
event = event || window.event;
event.preventDefault();
this.posInitial = this.sliderItems.offsetLeft;
if(event.type === 'touchstart') {
this.posX1 = event.touches[0].clientX;
} else {
this.posX1 = event.clientX;
document.onmouseup = (ev) => {
this.dragEnd.call(_self, ev)
}
// document.onmousemove = this.dragAction;
document.onmousemove = (ev) => {
this.dragAction.call(_self, ev);
}
}
}
// Drag Action
Slideshow.prototype.dragAction = function(event) {
event = event || window.event;
if(event.type === 'touchmove') {
this.posX2 = this.posX1 - event.touches[0].clientX;
this.posX1 = event.touches[0].clientX;
} else {
this.posX2 = this.posX1 - event.clientX;
this.posX1 = event.clientX;
}
this.sliderItems.style.left = (this.sliderItems.offsetLeft - this.posX2) + "px";
}
// Drag Action
Slideshow.prototype.dragEnd = function(ev) {
this.posFinal = this.sliderItems.offsetLeft;
if(this.posFinal - this.posInitial < -this.threshold) {
this.shiftSlide(1, 'drag');
} else if(this.posFinal - this.posInitial > this.threshold) {
this.shiftSlide(-1, 'drag');
} else {
this.sliderItems.style.left = (this.posInitial) + "px";
}
document.onmouseup = null;
document.onmousemove = null;
}
// Shift Slide
Slideshow.prototype.shiftSlide = function(dir, action) {
this.sliderItems.classList.add('shifting');
if(this.allowShift) {
if(!action) {
this.posInitial = this.sliderItems.offsetLeft;
}
if(dir == 1) {
this.sliderItems.style.left = (this.posInitial - this.slideSize) + "px";
this.index++;
} else if(dir == -1) {
this.sliderItems.style.left = (this.posInitial + this.slideSize) + "px";
this.index--;
}
};
this.allowShift = false;
}
// Check Index
Slideshow.prototype.checkIndex = function() {
this.sliderItems.classList.remove('shifting');
if(this.index == -1) {
this.sliderItems.style.left = -(this.slidesLength * this.slideSize) + "px";
this.index = this.slidesLength - 1;
}
if(this.index == this.slidesLength) {
this.sliderItems.style.left = -(1 * this.slideSize) + "px";
this.index = 0;
}
this.allowShift = true;
}
window.addEventListener('load', function () {
const slider = document.querySelectorAll('.slider');
slider.forEach((slide) => {
new Slideshow(slide);
})
})
@import url('https://fonts.googleapis.com/css?family=Roboto');
:root {
--slider-width: 400px;
--slider-height: 300px;
}
* { box-sizing: border-box; }
body {
height: 100%;
background-color: #7656d6;
color: #333;
font-family: 'Roboto', sans-serif;
text-align: center;
letter-spacing: 0.15em;
font-size: 22px;
}
.slider {
position: absolute;
top: 50%;
left: 50%;
transform: translate(-50%, -50%);
width: var(--slider-width);
height: var(--slider-height);
box-shadow: 3px 3px 10px rgba(0,0,0,.2);
}
.wrapper {
overflow: hidden;
position: relative;
width: var(--slider-width);
height: var(--slider-height);
z-index: 1;
}
.slides {
display: flex;
position: relative;
top: 0;
left: calc(var(--slider-width) * -1);
width: 10000px;
}
.slides.shifting {
transition: left .2s ease-out;
}
.slide {
width: var(--slider-width);
height: var(--slider-height);
cursor: pointer;
display: flex;
flex-direction: column;
justify-content: center;
transition: all 1s;
position: relative;
background: #FFCF47;
border-radius: 2px;
}
/*.slider.loaded {*/
.slider.loaded .slide:nth-child(2),
.slider.loaded .slide:nth-child(7) { background: #FFCF47 }
.slider.loaded .slide:nth-child(1),
.slider.loaded .slide:nth-child(6) { background: #7ADCEF }
.slider.loaded .slide:nth-child(3) { background: #3CFF96 }
.slider.loaded .slide:nth-child(4) { background: #a78df5 }
.slider.loaded .slide:nth-child(5) { background: #ff8686 }
/*}*/
.control {
position: absolute;
top: 50%;
width: 50px;
height: 50px;
background: #fff;
border-radius: 50px;
margin-top: -20px;
box-shadow: 1px 1px 10px rgba(0, 0, 0, 0.3);
z-index: 2;
}
.prev,
.next {
background-size: 22px;
background-position: center;
background-repeat: no-repeat;
cursor: pointer;
}
.prev {
background-image: url(https://cdn0.iconfinder.com/data/icons/navigation-set-arrows-part-one/32/ChevronLeft-512.png);
left: -20px;
}
.next {
background-image: url(https://cdn0.iconfinder.com/data/icons/navigation-set-arrows-part-one/32/ChevronRight-512.png);
right: -20px;
}
.prev:active,
.next:active {
transform: scale(.8);
}
<div id="slider" class="slider">
<div class="wrapper">
<div id="slides" class="slides">
<span class="slide">Slide 1</span>
<span class="slide">Slide 2</span>
<span class="slide">Slide 3</span>
<span class="slide">Slide 4</span>
<span class="slide">Slide 5</span>
</div>
</div>
<a href="" id="prev" class="control prev"></a>
<a href="" id="next" class="control next"></a>
</div>
Now it seems I have done the conversion correctly, and it works as expected when compared to the original. But I think what I am looking to know is: how well have I done this conversion?
Use of this
Using and tracking this is confusing at times. Have I been over-the-top in my use of this?
Calling (or Binding) this
It's a concept that is still new to me. Are there areas where I needed to use call()? Have I used call() when I should be using bind()?
I'm sure there are other areas I am not aware of that can be improved upon, so any input that can help improve this would be excellent.
Answer: Question Responses
Using and tracking this is confusing at times. Have I been over-the-top in my use of this?
I don't feel it is "over-the-top", though storing a reference to this in another variable is a sign that context isn't bound properly. Bear in mind that an arrow function "does not have its own bindings to this or super, and should not be used as methods" (MDN), so referencing this within an arrow function declared inside another function yields the same context as outside the arrow function.
Calling (or Binding) this: It's a concept that is still new to me. Are there areas where I needed to use call()? Have I used call() when I should be using bind()?
Using call(this) adds unnecessary complexity which could confuse readers. If the function is declared on the object (either directly or via its prototype) then the context when it is called regularly will be set to this.
The original code contains this line at the end of the constructor:
this.slide.call(this);
That can simply be a regular function call:
this.slide();
The same is true in the slide method. It can simply call this.cloneSlides().
There are places where bind could be used to create a bound function - for example:
this.sliderItems.onmousedown = (ev) => {
this.dragStart.call(_self, ev);
}
This could simply be a reference without the excess arrow function, since dragStart is a function:
this.sliderItems.onmousedown = this.dragStart.bind(this);
See the code below, which achieves the same thing without using .call().
Bug
Clicking the next and previous anchor links makes the browser navigate to another page. There are various ways to stop this, including calling preventDefault() in the Javascript handler. For more solutions see answers to this StackOverflow question. Actually I see you mentioned something about that in your post Showing form on btn click - preventDefault of submit btn, then remove listener.
Review suggestions
Class syntax
The ES6 Class syntax could be used to simplify the prototype syntax.
Variable names
The constructor method has this line to select items by class name slides:
this.sliderItems = slider.querySelector('.slides');
The name sliderItems makes me think it would be multiple elements, yet it is a single element. A more appropriate name would be something like sliderContainer.
DOM selection
As I mentioned in a previous review, querySelector() can be slower than other DOM methods like getElementById and getElementsByClassName(). So the line above could simply be:
this.sliderContainer = slider.getElementsByClassName('slides')[0];
Event handlers
The original code registers event handlers like this:
document.onmouseup = (ev) => {
this.dragEnd.call(_self, ev)
}
While it may not be necessary for this code, a drawback to this approach is that it doesn't allow multiple event handlers to be registered. That could be achieved with addEventListener and cleared with removeEventListener(). One important thing to note about removeEventListener() is that the listener to be removed must be the same reference as the function that was added. For a bound method, that means storing the bound function in a variable when addEventListener() is called, so the same reference can later be passed to removeEventListener().
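The need to store the bound reference can be sketched with a plain EventTarget (illustrative, not part of the slider code; .bind() returns a new function on every call, so a fresh bind passed to removeEventListener() would match nothing):

```javascript
// Store the bound listener once so add and remove see the same reference.
const target = new EventTarget();
let hits = 0;

const obj = {
  onEvent() { hits++; },
};
obj.bound = obj.onEvent.bind(obj); // the one reference used for both calls

target.addEventListener('ping', obj.bound);
target.dispatchEvent(new Event('ping')); // handler runs

target.removeEventListener('ping', obj.bound); // same reference ⇒ removed
target.dispatchEvent(new Event('ping'));       // handler no longer runs

console.log(hits); // 1
```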
Prefer strict equality comparisons
There are some comparisons using loose comparisons:
if(dir == 1) {
A good habit and recommendation of many style guides is to use strict equality operators (i.e. ===, !==). The problem with loose comparisons is that it has so many weird rules one would need to memorize in order to be confident in its proper usage.
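A few of the coercion surprises that the strict operators avoid:

```javascript
// Loose equality coerces operands before comparing:
console.log(1 == '1');          // true  — string coerced to number
console.log(0 == '');           // true  — both coerce to 0
console.log(null == undefined); // true  — special-cased in the spec

// Strict equality compares type and value, with no coercion:
console.log(1 === '1');           // false
console.log(0 === '');            // false
console.log(null === undefined);  // false
```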
Minimize DOM access
In the slideshow constructor there are two lookups for elements with class name 'slide' within three lines:
this.slides = this.sliderItems.getElementsByClassName('slide');
this.slidesLength = this.slides.length;
this.slideSize = this.sliderItems.getElementsByClassName('slide')[0].offsetWidth;
DOM access is expensive. The line that sets slideSize can simply reference this.slides[0] instead of re-querying the DOM.
Default parameters
ES6 functions can have default parameters.
The dragAction method can be simplified from:
Slideshow.prototype.dragAction = function(event) {
event = event || window.event;
To:
Slideshow.prototype.dragAction = function(event = window.event) {
Use of window.event
There is a note on the MDN documentation for window.event:
You should avoid using this property in new code, and should instead use the Event passed into the event handler function. This property is not universally supported and even when supported introduces potential fragility to your code.
Updated Code
The code below uses suggestions from above. Notice it has no .call() calls.
class Slideshow { //function Slideshow(slider) {
constructor(slider) {
this.slider = slider;
this.sliderContainer = slider.getElementsByClassName('slides')[0];
this.next = slider.querySelector('.control.next');
this.prev = slider.querySelector('.control.prev');
this.posX1 = 0;
this.posX2 = 0;
this.posInitial = null;
this.posFinal = null;
this.threshold = 100;
this.slides = this.sliderContainer.getElementsByClassName('slide');
this.slidesLength = this.slides.length;
this.slideSize = this.slides[0].offsetWidth;
this.firstSlide = this.slides[0];
this.lastSlide = this.slides[this.slidesLength - 1];
this.cloneFirst = this.firstSlide.cloneNode(true);
this.cloneLast = this.lastSlide.cloneNode(true);
this.index = 0;
this.allowShift = true;
//Bound methods for adding and removing event listeners
this.boundDragAction = this.dragAction.bind(this);
this.boundDragEnd = this.dragEnd.bind(this);
// Listen for mousedown events,
// when they happen, call the dragStart function
//this.sliderItems.onmousedown = this.dragStart.bind(this);
this.sliderContainer.addEventListener('mousedown', this.dragStart.bind(this));
// Touch Events
this.sliderContainer.addEventListener('touchstart', this.dragStart.bind(this));
this.sliderContainer.addEventListener('touchend', this.dragEnd.bind(this));
this.sliderContainer.addEventListener('touchmove', this.dragAction.bind(this));
// Click Events
this.next.addEventListener('click', e => this.shiftSlide(1) || e.preventDefault());
this.prev.addEventListener('click', e => this.shiftSlide(-1) || e.preventDefault());
// Transition Events
this.sliderContainer.addEventListener('transitionend', this.checkIndex.bind(this));
this.slide();
}
slide() { //Slideshow.prototype.slide = function() {
this.cloneSlides();
}
// Clone Slides
cloneSlides() { //Slideshow.prototype.cloneSlides = function() {
this.sliderContainer.appendChild(this.cloneFirst);
this.sliderContainer.insertBefore(this.cloneLast, this.firstSlide);
this.slider.classList.add('loaded');
}
// Drag Start
dragStart(event) { //Slideshow.prototype.dragStart = function(event) {
event = event || window.event;
event.preventDefault();
this.posInitial = this.sliderContainer.offsetLeft;
if (event.type === 'touchstart') {
this.posX1 = event.touches[0].clientX;
} else {
this.posX1 = event.clientX;
//document.onmouseup = this.dragEnd.bind(this);
document.addEventListener('mouseup', this.boundDragEnd);
// document.onmousemove = this.dragAction;
//document.onmousemove = this.dragAction.bind(this);
document.addEventListener('mousemove', this.boundDragAction);
}
}
// Drag Action
dragAction(event = window.event) { //Slideshow.prototype.dragAction = function(event) {
if (event.type === 'touchmove') {
this.posX2 = this.posX1 - event.touches[0].clientX;
this.posX1 = event.touches[0].clientX;
} else {
this.posX2 = this.posX1 - event.clientX;
this.posX1 = event.clientX;
}
this.sliderContainer.style.left = (this.sliderContainer.offsetLeft - this.posX2) + "px";
}
// Drag Action
dragEnd(ev) { //Slideshow.prototype.dragEnd = function(ev) {
this.posFinal = this.sliderContainer.offsetLeft;
if (this.posFinal - this.posInitial < -this.threshold) {
this.shiftSlide(1, 'drag');
} else if (this.posFinal - this.posInitial > this.threshold) {
this.shiftSlide(-1, 'drag');
} else {
this.sliderContainer.style.left = (this.posInitial) + "px";
}
//document.onmouseup = null;
//document.onmousemove = null;
document.removeEventListener('mouseup', this.boundDragEnd);
document.removeEventListener('mousemove', this.boundDragAction);
}
// Shift Slide
shiftSlide(dir, action) { //Slideshow.prototype.shiftSlide = function(dir, action) {
this.sliderContainer.classList.add('shifting');
if (this.allowShift) {
if (!action) {
this.posInitial = this.sliderContainer.offsetLeft;
}
if (dir === 1) {
this.sliderContainer.style.left = (this.posInitial - this.slideSize) + "px";
this.index++;
} else if (dir === -1) {
this.sliderContainer.style.left = (this.posInitial + this.slideSize) + "px";
this.index--;
}
};
this.allowShift = false;
}
// Check Index
checkIndex() { //Slideshow.prototype.checkIndex = function() {
this.sliderContainer.classList.remove('shifting');
if (this.index === -1) {
this.sliderContainer.style.left = -(this.slidesLength * this.slideSize) + "px";
this.index = this.slidesLength - 1;
}
if (this.index === this.slidesLength) {
this.sliderContainer.style.left = -(1 * this.slideSize) + "px";
this.index = 0;
}
this.allowShift = true;
}
}
window.addEventListener('load', function() {
const sliderElements = document.getElementsByClassName('slider');
[...sliderElements].forEach((slide) => {
new Slideshow(slide);
})
})
@import url('https://fonts.googleapis.com/css?family=Roboto');
:root {
--slider-width: 400px;
--slider-height: 300px;
}
* { box-sizing: border-box; }
body {
height: 100%;
background-color: #7656d6;
color: #333;
font-family: 'Roboto', sans-serif;
text-align: center;
letter-spacing: 0.15em;
font-size: 22px;
}
.slider {
position: absolute;
top: 50%;
left: 50%;
transform: translate(-50%, -50%);
width: var(--slider-width);
height: var(--slider-height);
box-shadow: 3px 3px 10px rgba(0,0,0,.2);
}
.wrapper {
overflow: hidden;
position: relative;
width: var(--slider-width);
height: var(--slider-height);
z-index: 1;
}
.slides {
display: flex;
position: relative;
top: 0;
left: calc(var(--slider-width) * -1);
width: 10000px;
}
.slides.shifting {
transition: left .2s ease-out;
}
.slide {
width: var(--slider-width);
height: var(--slider-height);
cursor: pointer;
display: flex;
flex-direction: column;
justify-content: center;
transition: all 1s;
position: relative;
background: #FFCF47;
border-radius: 2px;
}
/*.slider.loaded {*/
.slider.loaded .slide:nth-child(2),
.slider.loaded .slide:nth-child(7) { background: #FFCF47 }
.slider.loaded .slide:nth-child(1),
.slider.loaded .slide:nth-child(6) { background: #7ADCEF }
.slider.loaded .slide:nth-child(3) { background: #3CFF96 }
.slider.loaded .slide:nth-child(4) { background: #a78df5 }
.slider.loaded .slide:nth-child(5) { background: #ff8686 }
/*}*/
.control {
position: absolute;
top: 50%;
width: 50px;
height: 50px;
background: #fff;
border-radius: 50px;
margin-top: -20px;
box-shadow: 1px 1px 10px rgba(0, 0, 0, 0.3);
z-index: 2;
}
.prev,
.next {
background-size: 22px;
background-position: center;
background-repeat: no-repeat;
cursor: pointer;
}
.prev {
background-image: url(https://cdn0.iconfinder.com/data/icons/navigation-set-arrows-part-one/32/ChevronLeft-512.png);
left: -20px;
}
.next {
background-image: url(https://cdn0.iconfinder.com/data/icons/navigation-set-arrows-part-one/32/ChevronRight-512.png);
right: -20px;
}
.prev:active,
.next:active {
transform: scale(.8);
}
<div id="slider" class="slider">
<div class="wrapper">
<div id="slides" class="slides">
<span class="slide">Slide 1</span>
<span class="slide">Slide 2</span>
<span class="slide">Slide 3</span>
<span class="slide">Slide 4</span>
<span class="slide">Slide 5</span>
</div>
</div>
<a href="" id="prev" class="control prev"></a>
<a href="" id="next" class="control next"></a>
</div> | {
"domain": "codereview.stackexchange",
"id": 39548,
"tags": "javascript, html, ecmascript-6, event-handling, user-interface"
} |