anchor stringlengths 0 150 | positive stringlengths 0 96k | source dict |
|---|---|---|
Operations on files and their locks - code too bulky? | Question: A couple of hours ago I asked a question on Stack Overflow to find out if there is a good way to delete files in a folder only if all files are indeed deletable, and I got some good answers that led me (I think) in the right direction.
Based on the two currently most upvoted answers I scrambled together the following code:
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.channels.FileLock;
import java.util.ArrayList;
import java.util.List;
public class Main {
public static void main(String[] args) {
File directory = new File("T:\\download\\test\\1234");
if (safeDelete(directory)) {
System.out.println("All deleted");
} else {
System.out.println("All kept");
}
}
private static boolean safeDelete(File directory) {
if (directory == null || !directory.isDirectory()
|| !directory.canWrite()) {
return false;
}
// First get a lock on the directory itself
FileLock dirLock;
if ((dirLock = getLock(directory)) == null) {
return false;
}
List<File> filesToDelete = new ArrayList<>();
List<FileLock> fileLocks = new ArrayList<>();
// Get a lock on all the files in the folder
for (File file : directory.listFiles()) {
FileLock lock;
if (file.canWrite() && ((lock = getLock(file)) != null)) {
filesToDelete.add(file);
fileLocks.add(lock);
} else {
System.out.println("Currently unable to write to "
+ file.getName());
releaseLocks(fileLocks);
return false;
}
}
// If we reach this line all the files are locked for us now
for (File file : filesToDelete) {
if (!file.delete()) {
System.out.println("IMPOSSIBLE");
}
}
releaseLocks(fileLocks);
directory.delete();
releaseLock(dirLock);
return true;
}
private static FileLock getLock(File file) {
try {
return new RandomAccessFile(file, "rw").getChannel().tryLock();
} catch (IOException ex) {
System.out.println(ex.toString());
return null;
}
}
private static void releaseLocks(List<FileLock> locks) {
for (FileLock lock : locks) {
releaseLock(lock);
}
}
private static void releaseLock(FileLock lock) {
if (lock != null) {
try {
lock.release();
System.out.println("Released lock on " + lock.toString());
} catch (IOException ex) {
System.out.println("Cant release lock on " + lock.toString());
}
}
}
}
But it seems so... bulky to me. There are many return statements in there (I personally like it better when there is only one at the very end), a lot of if-else branching, two ArrayLists that manage almost the same stuff and a lot of calls to the releaseLocks() method.
The core problem, in my opinion, is that I have to manage two sets of data, on both of which I have to operate:
The Files on which I have to execute the deletion
And the Locks which I have to release after I'm done
So the question is: Can this be done more efficiently? Should I refactor the safeDelete() into more sub-methods?
Answer: Why not use a class to hold both the file and the lock, so you don't need to manage two Lists? Like:
public class SafeDirectory {
private File directory = null;
private FileLock lock = null;
private List<SafeFile> deleteableSafeFiles = new ArrayList<>();
public SafeDirectory(String directoryPath) {
directory = new File(directoryPath);
}
public boolean CanBeDeleted() {
boolean canBeDeleted = false;
if (directory != null && directory.canWrite() && directory.isDirectory()
&& ((lock = getLock(directory)) != null)) {
canBeDeleted = true;
for (File file : directory.listFiles()) {
SafeFile safeFile = new SafeFile(file);
if (safeFile.CanBeDeleted()) {
deleteableSafeFiles.add(safeFile);
} else {
canBeDeleted = false;
System.out.println("Currently unable to write to "
+ file.getName());
RelaseLock();
break;
}
}
}
return canBeDeleted;
}
public boolean Delete() {
for (SafeFile safeFile : deleteableSafeFiles) {
safeFile.Delete();
}
directory.delete();
RelaseLock();
return true;
}
private void RelaseLock() {
for (SafeFile safeFile : deleteableSafeFiles) {
releaseLock(safeFile.lock);
}
releaseLock(lock);
}
private boolean releaseLock(FileLock lock) {
if (lock != null) {
try {
lock.release();
System.out.println("Released lock on " + lock.toString());
return true;
} catch (IOException ex) {
System.out.println("Cant release lock on " + lock.toString());
return false;
}
}
return true;
}
private static FileLock getLock(File file) {
try {
return new RandomAccessFile(file, "rw").getChannel().tryLock();
} catch (IOException ex) {
System.out.println(ex.toString());
return null;
}
}
private class SafeFile {
private FileLock lock = null;
private File file = null;
SafeFile(File file) {
this.file = file;
}
boolean CanBeDeleted() {
return (file.canWrite() && ((lock = SafeDirectory.getLock(file)) != null));
}
boolean Delete() {
return file.delete();
}
}
} | {
"domain": "codereview.stackexchange",
"id": 5803,
"tags": "java, optimization, locking"
} |
Does regularization just mean using an augmented loss function? | Question: We need to use a loss function for training the neural networks.
In general, the loss function depends only on the desired output $y$ and actual output $\hat{y}$ and is represented as $L(y, \hat{y})$.
As per my current understanding,
Regularization is nothing but using a new loss function
$L'(y,\hat{y})$ which must contain a $\lambda$ term (formally called
as regularization term) for training a neural network and can be
represented as
$$L'(y,\hat{y}) = L(y, \hat{y}) + \lambda \ell(.) $$
where $\ell(.)$ is called regularization function. Based on the
definition of function $\ell$ there can be different regularization
methods.
Is my current understanding complete? Or are there other techniques in machine learning that are also considered regularization techniques? If yes, where can I read about them?
Answer: Regularization is not limited to methods like L1/L2 regularization, which are specific versions of what you showed.
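As a concrete instance of the augmented-loss form in the question, here is a minimal L2 ("weight decay") sketch, where $\ell(.) = \lVert w\rVert^2$. The model, data, and names are illustrative, not from any particular framework:

```python
import numpy as np

# Minimal sketch of L'(y, y_hat) = L(y, y_hat) + lambda * ell(.)
# with ell(.) = ||w||^2 (L2 regularization). All names/data are illustrative.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))   # inputs
y = rng.normal(size=8)        # targets
w = rng.normal(size=3)        # model weights
lam = 0.1                     # regularization strength (lambda)

def base_loss(w):
    # L(y, y_hat): plain mean squared error
    y_hat = X @ w
    return np.mean((y - y_hat) ** 2)

def l2_regularized_loss(w):
    # L'(y, y_hat) = L(y, y_hat) + lam * ||w||^2
    return base_loss(w) + lam * np.sum(w ** 2)
```

Minimizing `l2_regularized_loss` instead of `base_loss` penalizes large weights; other choices of $\ell(.)$ (e.g. the L1 norm) give other regularization methods of this family.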
Regularization is any technique that prevents the network from overfitting and helps it generalize better to unseen data. Other such techniques include dropout, early stopping, data augmentation, and limiting the capacity of the network by reducing the number of trainable parameters. | {
"domain": "ai.stackexchange",
"id": 2938,
"tags": "machine-learning, reference-request, terminology, regularization"
} |
Problems installing Hydro on 12.04 from source on armhf | Question:
Hi
I'm having a dependency issue trying to install Hydro on 12.04 from source. I get an error:
E: Unable to locate package python-wstool
after the part:
sudo apt-get install python-rosdep python-rosinstall-generator python-wstool build-essential
I was getting the same error for all the dependencies at first, but that was resolved by the apt-get update command.
Any suggestions? This is going onto an ARM device with 12.04 installed that previously had Fuerte installed.
EDIT
I found and followed this option to use pip to install wstools instead. However, I'm now getting the following error:
Found existing installation: rospkg 1.0.18
Uninstalling rospkg:
Successfully uninstalled rospkg
Running setup.py install for rospkg
Traceback (most recent call last):
File "<string>", line 1, in <module>
ImportError: No module named setuptools
Complete output from command /usr/bin/python -c "import setuptools;__file__='/home/ubuntu/build/rospkg/setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --single-version-externally-managed --record /tmp/pip-65brKO-record/install-record.txt:
Traceback (most recent call last):
File "<string>", line 1, in <module>
ImportError: No module named setuptools
----------------------------------------
Rolling back uninstall of rospkg
Command /usr/bin/python -c "import setuptools;__file__='/home/ubuntu/build/rospkg/setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --single-version-externally-managed --record /tmp/pip-65brKO-record/install-record.txt failed with error code 1
Storing complete log in /home/ubuntu/.pip/pip.log
EDIT
Okay, I needed to install setuptools as per the directions here.
That came up with a new error, which ends with:
ImportError: No module named pkg_resources
EDIT
Okay, there's a happy ending to this story, however there seems to be a degree of luck in achieving it, so I am happy if someone can improve this.
First I purged and removed python-pkg-resources; this is the part that seems most dodgy:
sudo apt-get remove --purge python-pkg-resources
and then reinstalled it:
sudo apt-get install python-pkg-resources python-nose python-pip python-setuptools
Things were a bit broken (eggs), but this seemed to do the trick:
sudo pip install --upgrade setuptools
after which the pip install command worked:
sudo pip install -U wstool
EDIT
Well, it actually didn't work when I went to deal with the installation of rosdep. There were unmet dependencies which may have resulted directly from using pip to install wstools.
A full reinstall of Ubuntu didn't work either; apt-get still can't find python-wstools.
EDIT
I reused pip to install rosdep, wstools and rosinstall_generator, and was able to download the source packages, but when executing the command:
rosdep install --from-paths src --ignore-src --rosdistro hydro -y
It would download and unpack dependencies for a couple of hours, then stop with an error:
executing command [sudo apt-get install -y python-rospkg]
Reading package lists... Done
Building dependency tree
Reading state information... Done
Package python-rospkg is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
E: Package 'python-rospkg' has no installation candidate
ERROR: the following rosdeps failed to install
apt: command [sudo apt-get install -y python-rospkg] failed
So I used pip to download the rospkg package, and then re-executed the rosdep update; after a short while, another error:
executing command [sudo apt-get install -y python-rosdep]
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
python-rosdep : Depends: python-rospkg but it is not installable
Depends: python-rosdistro (>= 0.2.1) but it is not going to be installed
E: Unable to correct problems, you have held broken packages.
ERROR: the following rosdeps failed to install
apt: command [sudo apt-get install -y python-rosdep] failed
Installing rosdep via pip does not flag it as installed when apt searches for python-rosdep; the same goes for python-rospkg. I can comment out the offending lines in the package.xml for the stacks that have a dependency on these programs, stopping them looking for the apt-get versions, but will this break my installation when I go to use them?
Originally posted by PeterMilani on ROS Answers with karma: 1493 on 2013-10-07
Post score: 2
Answer:
The solution:
First you need to download setuptools and extract it.
Enter the extracted folder and run:
$ sudo python setup.py install
Now install stdeb as per the instructions for self install.
Create the python dependencies:
$ pypi-install rosdep
$ pypi-install rosinstall-generator
$ pypi-install wstools
$ sudo apt-get install build-essential
Continue with the Hydro installation until all source folders have been installed and you're ready to use rosdep to install dependencies.
Now there is an issue with collada-dom-dev dependencies and assimp (there has been since Fuerte and Groovy); follow the instructions on sysadminfixes to install them. See the section on Fix Missing Dependencies For Collada Packages.
Continue the ros-hydro install as listed in the hydro source installation guide to install dependencies and build the workspace.
Acknowledgements
Thanks to William (+1) for identifying stdeb; that really was the key to resolving these python dependencies. Also, sysadminfixes has a number of workarounds for various things if you get stuck, primarily for Beagle-based boards.
This was for a minimal armhf install on Ubuntu 12.04 running on a beaglebone white.
Originally posted by PeterMilani with karma: 1493 on 2013-10-09
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 15776,
"tags": "rosdep, ros-hydro, ubuntu, ubuntu-precise, armhf"
} |
Can we not backpropagate model | Question: I saw a model based on a CNN for question classification. The author said that they don't backpropagate the gradient to the embeddings. How is it possible to update the network if you don't backpropagate, please?
Thanks
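(A minimal numpy sketch of the situation being asked about — a frozen embedding table feeding a trainable layer, with all names and data purely illustrative, not from the paper in question:)

```python
import numpy as np

# Sketch: gradients are computed and applied ONLY to the linear weights;
# the pretrained embedding table E is deliberately excluded from the update.
rng = np.random.default_rng(0)
E = rng.normal(size=(10, 4))   # pretrained embedding table (kept frozen)
w = rng.normal(size=4)         # trainable linear weights
w0 = w.copy()                  # kept only to show that w does change
ids = np.array([1, 3, 7])      # token ids of a tiny "sentence"
target = 0.5
lr = 0.1

E_before = E.copy()
for _ in range(50):
    x = E[ids].mean(axis=0)    # embedding lookup + average pooling
    y_hat = x @ w              # linear layer output
    err = y_hat - target       # gradient of 0.5 * err**2 w.r.t. y_hat
    w -= lr * err * x          # update ONLY w; E receives no update
```

After training, `w` has changed but `E` is bit-for-bit identical to its initial value: the rest of the network still learns even though the embedding gradients are never applied.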
Answer: When we use pretrained embeddings as model inputs, we sometimes don't want to update them; that's why we don't backpropagate gradients to the embeddings. | {
"domain": "datascience.stackexchange",
"id": 10548,
"tags": "deep-learning, cnn, training, backpropagation"
} |
Capacitors in parallel and dielectric | Question: I'm trying to solve this problem:
Two capacitors of capacitance $C_1=200pF$ and $C_2=1000pF$ are connected in parallel and charged to a potential difference of $400V$. Subsequently, the space between the plates of $C_1$ is completely filled with distilled water (with $ε = 80$). Calculate the variation $ΔV$ of the voltage across $C_2$, the polarization charge $q_p$ on the faces of the dielectric, and the variation of the electrostatic energy of the system $ΔU$.
Now, in order to obtain a lot of information, I calculated $Q_1=8.0 \times 10^{-8}C$, $Q_2=4.0\times 10^{-7}C$, $C_{total}=1.2\times 10^{-9}F$ and $Q_{total}=4.8\times 10^{-7}C$; then I calculated the new capacitance of $C_1$ due to the dielectric, so $C_{1ε}=1.6\times 10^{-8}F$.
Here I'm stuck: how can I calculate the variation $ΔV$ of the voltage across $C_2$, given that the capacitors are in parallel? Then how can I calculate the polarization charge $q_p$ on the faces of the dielectric and the variation of the electrostatic energy of the system $ΔU$? I also want to understand how the system changes after the introduction of the dielectric. Can someone guide me through the solution?
Answer: Well, first of all, the major points here are that the potential difference across both capacitors will remain the same after the insertion of the dielectric in $C1$, and that the total charge of the system will remain constant.
Now, for $C1$ after the insertion of the dielectric, calculate the change of capacitance; assume that it has a new potential $V1$ and charge $Q1$, and likewise assume a potential $V2$ and charge $Q2$ for the 2nd capacitor.
Equate $V1$ and $V2$, and also equate $Q1 + Q2$ with the initial total charge of the system. This way you can calculate $V$ across the capacitors, and then $Q1$ and $Q2$.
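Using the values already computed in the question ($Q_{total}=4.8\times 10^{-7}C$, $C_{1ε}=1.6\times 10^{-8}F$, $C_2=1.0\times 10^{-9}F$), this gives, as a numerical check:
$$
V' = \frac{Q_{total}}{C_{1ε}+C_2} = \frac{4.8\times 10^{-7}\,C}{1.7\times 10^{-8}\,F} \approx 28V, \qquad \Delta V \approx 400V - 28V \approx 372V
$$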
Now, to calculate the polarisation charge, check the electric field $E1$ (without dielectric), then find the electric field $E2$ (with dielectric). This difference arises because of the insertion of the dielectric, so the electric field of the dielectric (calculated as for a hypothetical parallel-plate capacitor) is equal to the difference of the two fields; this will give you the induced charge.
Lastly, since you know the initial and final potentials and charges, apply the formula for the potential energy of the capacitors both before and after the insertion of the dielectric and take the difference; this is the change in potential energy.
The system changes as follows: once the capacitors are charged, when the dielectric is being inserted into capacitor $C1$, charges are induced in it and it is pulled in by electrostatic interactions; this process consumes some energy and changes the potential. Also, since the capacitors are in parallel and must maintain the same potential, charge travels from one capacitor to the other after the insertion of the dielectric to eliminate the potential difference between them. | {
"domain": "physics.stackexchange",
"id": 11277,
"tags": "homework-and-exercises, electrostatics, electric-circuits, capacitance, dielectric"
} |
Hamiltonian with identity operator: how to visualize the (time-evolution) rotation? | Question: For the Hadamard Hamiltonian, $\hat H = (\hat X+\hat Z)/\sqrt 2$, where $\hat X$ and $\hat Z$ are Pauli matrices, the time evolution of a state under this Hamiltonian can be visualized as a rotation on the Bloch sphere about the axis
$$
\hat n = \frac{1}{\sqrt2}\begin{bmatrix}
1 \\
0 \\
1 \\
\end{bmatrix}
$$
However, I'm wondering if I have another Hamiltonian defined as
$$
\hat H_1 = \frac{1}{\sqrt3}(\hat X +\hat Z +\hat I)
$$
where $\hat I$ is the identity operator. What role does $\hat I$ play in this Hamiltonian? If I still want to visualize the time-evolution rotation on the Bloch sphere, what would the 'new' axis be?
Thanks:)
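(A quick numerical way to explore the question, sketched with numpy — all names are illustrative:)

```python
import numpy as np

# Check: exp(-i*H1*t) for H1 = (X + Z + I)/sqrt(3) equals the global phase
# exp(-i*t/sqrt(3)) times the rotation generated by the traceless part
# H0 = (X + Z)/sqrt(3), whose axis is still along (1, 0, 1)/sqrt(2).
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def U(H, t):
    # exp(-i H t) for Hermitian H via eigendecomposition
    evals, evecs = np.linalg.eigh(H)
    return evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T

t = 0.7
U0 = U((X + Z) / np.sqrt(3), t)        # evolution without the identity term
U1 = U((X + Z + I2) / np.sqrt(3), t)   # full evolution from the question
phase = np.exp(-1j * t / np.sqrt(3))   # global phase from the identity term
```

Since $\hat I$ commutes with everything, `U1` and `phase * U0` agree to machine precision, i.e. the identity term contributes only a global phase.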
Answer: It does not change the direction of a ket on the Bloch sphere; only the global phase of the ket changes. The identity term corresponds to a constant energy shift, as naturally understood from the meaning of 'Hamiltonian'. The rotation axis is therefore unchanged, still along $\hat n$. | {
"domain": "physics.stackexchange",
"id": 72960,
"tags": "quantum-mechanics, hamiltonian, bloch-sphere"
} |
Chocolate Distribution in a school | Question:
Problem Statement
In a school, chocolate bars have to be distributed to children waiting
in a queue. Each Chocolate bar is rectangular in shape. Consider its
side lengths are integer values.
The distribution procedure is as follows:
If a bar is not square in shape, then the largest possible square piece of Chocolate is broken and given to the first child in queue.
If bar is square in shape, then complete bar is given to the first child in queue.
Once a child receives his share of Chocolate, he leaves the queue. The
remaining portion of the Chocolate bar is dealt with in the same fashion and
the whole or a portion of it is given to the next child in the queue.
School has a carton of Chocolate bars to be distributed among the
children all over the School. The Chocolate bars in the carton are of
different sizes. A bar of length i and breadth j is considered to be
different from a bar of length j and breadth i.
There is a bar for every i such that M<=i<=N and every j such that P<=j<=Q
(where M, N, P and Q are integers); each Chocolate bar in the carton is
unique in length (i) and breadth (j).
Given the values of M, N, P and Q (where M, N values are the ranges
for length of Chocolate and P, Q values are the ranges for breadth of
the chocolate). Find the number of children who will receive Chocolate
from the carton.
Input Specification:
M, N, P, Q are of integer type (M, N values are the ranges for length
of chocolate bar. P, Q values are the ranges for breadth of chocolate
bar).
Output Specification:
Number of children who will receive a Cadbury bar from the carton.
M = 5, N = 6, P = 3, Q = 4. Here, i can be from 5 to 6 and j can be from
3 to 4, so the four bars in the carton will be of sizes 5x3, 5x4, 6x3,
6x4.
First we choose a cadbury bar of size 5x3:
first child would receive 3x3 portion (remaining 2x3 portion)
next child would receive 2x2 portion (remaining 2x1 portion)
now the remaining portion are 2 square pieces of (1x1), which can be given to 2 more children
So the Cadbury bar with the size of 5x3 can be distributed to 4
children.
Similarly we can find out number of children for rest of the
combinations (i.e. 5x4, 6x3, 6x4) in the given range as follows:
Please let me know the corrections that I can make to improve the code.
public class CandidateCode {
public int distributeChocolate(int input1,int input2,int input3,int input4){
int[] chocolatelengthLimits = {input1,input2};
int[] chocolatewidthLimits = {input3,input4};
Set<Chocolate> chocolateCarton = makeSetOfChocolatesOutOfTheLimits(chocolatelengthLimits, chocolatewidthLimits);
return getTotalNumberofChildrenThatCanBeFed(chocolateCarton);
}
private int getTotalNumberofChildrenThatCanBeFed(Set<Chocolate> chocolateCarton){
int totalNumberOfChildrenThatCanBeFed = 0;
for (Chocolate chocolate : chocolateCarton) {
int childrenFedFromTheChocolate = numberOfChildrenThatCanBeFedFromTheChocolate(chocolate);
totalNumberOfChildrenThatCanBeFed+=childrenFedFromTheChocolate;
}
return totalNumberOfChildrenThatCanBeFed;
}
private Set<Chocolate> makeSetOfChocolatesOutOfTheLimits(int[] lengthLimits,int[] widthLimits){
Set<Chocolate> chocolates = new HashSet<Chocolate>();
for(int i=0;i<lengthLimits.length;i++){
for(int j=0;j<widthLimits.length;j++){
Chocolate rectangle = new Chocolate(lengthLimits[i], widthLimits[j]);
chocolates.add(rectangle);
}
}
return chocolates;
}
private int numberOfChildrenThatCanBeFedFromTheChocolate(Chocolate chocolate){
int numberOfChildren = 0;
int chocolateBlocksToExhaust = chocolate.getNumberOfChocolateBlocks();
while (chocolateBlocksToExhaust!=0){
int chocolateBlocksThatCanBeExhausted = chocolate.numberOfBlocksOfTheLargestSquareThatCanbeFormedFromTheChocolate();
chocolateBlocksToExhaust=chocolateBlocksToExhaust-chocolateBlocksThatCanBeExhausted;
numberOfChildren++;
}
return numberOfChildren;
}
}
class Chocolate {
int length;
int width;
public Chocolate(int length,int width){
this.length=length;
this.width=width;
}
public int numberOfBlocksOfTheLargestSquareThatCanbeFormedFromTheChocolate(){
return decreaseDimensionsMaximumMinusMinimum();
}
private int decreaseDimensionsMaximumMinusMinimum(){
if(this.length>this.width) {
this.length=this.length-this.width;
return this.width*this.width;
}
if(this.width>this.length) {
this.width=this.width-this.length;
return this.length*this.length;
}
if(this.length==this.width){
return this.length*this.width;
}
return 0;
}
public int getNumberOfChocolateBlocks(){
return this.length*this.width;
}
//HashCode and Equals Methods here
}
Answer: I'd try a more top-down approach. At the moment, you make a lot of internal information of a Chocolate object available to public in order to do some calculation and decision making outside that class.
Your method numberOfChildrenThatCanBeFedFromTheChocolate for example deals with blocks of chocolate. A block of chocolate is the atomic element a chocolate bar is made from. The problem here is that the question doesn't ask for chocolate blocks. Sure, the number of blocks and how they are positioned in a grid determines how often a square can be broken off the chocolate bar, but thinking about it in terms of blocks is a bottom-up way of thinking. I as the user of your class could not care less about blocks. The problem asks for squares, not blocks. Why do I have to deal with blocks when using your class?
If you model the data/code according to the question, you get a more top-down approach, which is more abstract, but easier to digest.
You have a chocolate bar in your hand. You can break off a square piece and be left with a different sized chocolate bar or nothing. You don't really care about the size of the square broken off or the remainder. All you really care about is how often you can perform that action until there's no chocolate bar remaining. Of course you have to care about it internally somehow, but again, this is top-down thinking. Look at how your class (its objects) should be used.
Another idea that you can use to your advantage is information hiding. As it turns out, the logic is often concerned with what's the longer side and what's the shorter one. Then why not store exactly that information?
Here's a version of Chocolate.java that works with the ideas mentioned above:
public class Chocolate
{
private int min;
private int max;
public Chocolate(int width, int height)
{
min = Math.min(width, height);
max = Math.max(width, height);
}
public Chocolate remainderAfterSquareBreakoff()
{
if ((min == 1 && max == 1) || min == max)
{
return null;
}
return new Chocolate(max - min, min);
}
public static void main(String[] args)
{
Chocolate chocolate = new Chocolate(6, 3);
int numberOfSquares = 0;
do
{
++numberOfSquares;
chocolate = chocolate.remainderAfterSquareBreakoff();
}
while (chocolate != null);
System.out.println("# squares: " + numberOfSquares);
}
}
The two important things are:
Count how often a square can be broken off until no remainder remains. This is very close to how you would break the chocolate in real life and is thus hopefully intuitive and easy to understand.
The size of the chocolate is stored in terms of longest and shortest side, not width and length.
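As an aside not raised in the review itself: breaking off the largest possible square repeatedly is exactly the subtractive form of Euclid's gcd algorithm, so the number of children an i x j bar feeds equals the sum of the quotients in the division-based version. A short sketch of that observation (in Python for brevity):

```python
def children_fed(length: int, breadth: int) -> int:
    """Number of largest-square pieces a length x breadth bar yields.

    Breaking off the largest square repeatedly is the subtractive form of
    Euclid's gcd algorithm, so the count is the sum of the division quotients.
    """
    count = 0
    while breadth:
        count += length // breadth            # squares of side `breadth` broken off
        length, breadth = breadth, length % breadth
    return count

# The worked example from the problem statement: a 5x3 bar feeds 4 children.
print(children_fed(5, 3))  # -> 4
```

This gives the per-bar count in O(log min(i, j)) steps instead of simulating every break, which could serve as a cross-check for the object-oriented versions above.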
You use long descriptive names for your methods, which is good. But you have so much logic outside of the Chocolate class that you need many such long descriptive names to keep track of everything. That bloats the code and reduces readability. With only a few things exposed to public, the code becomes less bloated and you need fewer identifiers.
This is clearly not providing the same functionality that your code has. Most importantly, the following doesn't hold any more:
Each Chocolate bar in carton is unique in length (i) and breath(j).
By only storing them as longest and shortest side, the orientation is lost. I'd say the orientation is not strictly necessary to solve the task at hand. The problem only arises if a Set should be used to store the Chocolate objects, because an equals() method based only on the max and min properties cannot distinguish the two orientations. If you put them into a different data structure that does not require uniqueness, like an ArrayList for example, everything is fine.
If you insist on uniqueness and the use of Set, you can add another property isLandscapeFormat which can then be used to distinguish between different orientations.
public class Chocolate
{
private int min;
private int max;
private boolean isLandscapeFormat;
public Chocolate(int width, int height)
{
min = Math.min(width, height);
max = Math.max(width, height);
isLandscapeFormat = width > height;
}
And now for something completely different.
The above assumes that you don't care about what square is broken off the bar. It also makes it necessary to reassign the return value of the method to the chocolate object.
A more common approach for this call-method-until-null-is-returned structure is an iterator.
Here's a different version of Chocolate.java that incorporates that principle.
public class Chocolate
{
private int min;
private int max;
public Chocolate(int width, int height)
{
setMinMax(width, height);
}
private void setMinMax(int a, int b)
{
min = Math.min(a, b);
max = Math.max(a, b);
}
public boolean hasNextSquare()
{
return min > 0 && max > 0;
}
public Chocolate getNextSquare()
{
if (!hasNextSquare())
{
return null;
}
int side = min; // the square broken off has the current shorter side
setMinMax(max - min, min);
return new Chocolate(side, side);
}
public static void main(String[] args)
{
Chocolate chocolate = new Chocolate(5, 3);
int numberOfSquares = 0;
while(chocolate.hasNextSquare())
{
++numberOfSquares;
chocolate.getNextSquare();
}
System.out.println("# squares: " + numberOfSquares);
}
}
The important differences are:
The new method setMinMax, which I introduced because its logic is now necessary at multiple places in the class. Creating a method prevents duplicated code.
The method hasNextSquare. As long as there is still at least one block remaining, a square can be broken off; as long as that's the case, this method will return true.
getNextSquare, which breaks off the next square from the bar.
The while loop to iterate over all the squares:
while(chocolate.hasNextSquare())
{
++numberOfSquares;
chocolate.getNextSquare();
}
The idea is still the same: break off squares as long as that's possible. But now the square is explicitly returned and can be used in the program.
There are more fancy ways to create iterators, with an interface to be implemented, but for this simple example, I think it is sufficient to provide a hasNext() and getNext() method, which implicitly removes the returned object. | {
"domain": "codereview.stackexchange",
"id": 34400,
"tags": "java, programming-challenge"
} |
What benefits does loopshaping have over other control design methods? | Question: I just finished my first controls course (on linear control systems) and we learned about 3 types of controllers: PID, State Space, and Frequency Response/Loopshaping. From what I understand, PID has the advantages of being really easy and it can be applied without knowing the dynamics of the system. State space provides really good responses. But loopshaping seemed very guess and check when we were doing it and kind of arbitrary. In addition, we got to actually implement the first two on a multirotor for our labs and didn't have time to do the last beyond homework. So when would it be advantageous to use loopshaping?
Answer: When designing a controller the first priority should always be stability and then performance/robustness.
I do not know what method you used for tuning a PID controller, but I would suspect that it might not be easy to prove stability. Increasing performance would then probably involve increasing the gains. And it is probably hard to make a PID controller robust.
For state space controller design you can look at the closed-loop dynamics (matrix) and check whether it is stable (Hurwitz for continuous time and Schur for discrete time). For performance you can try to place the closed-loop poles as far as possible into the left half plane. I haven't looked at it, but there will probably be methods for robustness in state space as well. For disturbance rejection you can find a state estimator using a Kalman filter (by solving a Riccati equation).
For loopshaping you can prove stability from a Nyquist plot using the Nyquist stability criterion. For performance you can just increase your bandwidth (sometimes defined as the point where the open loop crosses 0 dB). Gain and phase margin can be used to see how stable/robust your closed-loop system is. Closely related to this (but not often mentioned) is the modulus margin, which is basically the shortest distance to the minus-one point in the Nyquist plot. This inversely relates to the highest peak of the sensitivity function of the closed-loop system, which is a measure of the worst-case disturbance rejection. There are also other methods for robustness, such as H infinity.
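The modulus margin is easy to estimate numerically from a frequency response; a sketch (numpy assumed, using an arbitrary example loop $L(s)=1/(s(s+1))$, not a system from the question):

```python
import numpy as np

# Estimate the modulus margin (shortest distance from the open-loop Nyquist
# curve to the -1 point) and the resulting sensitivity peak on a frequency grid.
w = np.logspace(-2, 2, 2000)              # frequency grid [rad/s]
s = 1j * w
L = 1.0 / (s * (s + 1.0))                 # open-loop frequency response L(jw)
modulus_margin = np.min(np.abs(1.0 + L))  # shortest distance to the -1 point
sensitivity_peak = 1.0 / modulus_margin   # peak of |S(jw)| = |1/(1 + L(jw))|
```

The smaller the modulus margin, the larger the worst-case sensitivity peak, which is the inverse relation described above.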
So a PID controller might be the easiest, but it is much less advanced. The advantage of state space control is that it can also be applied to MIMO systems, which is harder to do with loopshaping. But the advantage of loopshaping is that you can just measure a frequency response function and design a controller for it directly, while for state space you first have to fit a model to it. Also, if your system is continuous but has delays, it is harder to capture that in a state-space model, while for loopshaping this does not change anything. | {
"domain": "engineering.stackexchange",
"id": 1198,
"tags": "control-engineering, control-theory"
} |
tinyxml lib cmake error | Question:
Hi,
I want to compile my project, but I can't get it running. #include <tinyxml.h> is included in roc_choice.cpp. I have no idea what I could try.
Please, can someone help me?
[ 52%] Linking CXX executable /media/jester/e5ff9dd5-b341-d101-e043-9dd5b341d101/home/jester/Dropbox/Programming/ClionProjects/face_detection/image_transport_ws/devel/lib/image_transport_tutorial/roc_choice
[ 52%] Built target _image_transport_tutorial_generate_messages_check_deps_ResizedImage
[ 58%] Built target image_transport_tutorial_generate_messages_cpp
[ 64%] Built target image_transport_tutorial_generate_messages_lisp
[ 76%] Built target image_transport_tutorial_generate_messages_py
[ 76%] Built target image_transport_tutorial_generate_messages
CMakeFiles/roc_choice.dir/src/overlapping_rect.cpp.o: In function `main':
overlapping_rect.cpp:(.text+0x16b0): multiple definition of `main'
CMakeFiles/roc_choice.dir/src/roc_choice.cpp.o:roc_choice.cpp:(.text+0x20fa): first defined here
collect2: error: ld returned 1 exit status
make[2]: *** [/media/jester/e5ff9dd5-b341-d101-e043-9dd5b341d101/home/jester/Dropbox/Programming/ClionProjects/face_detection/image_transport_ws/devel/lib/image_transport_tutorial/roc_choice] Error 1
make[1]: *** [tedusar_detector_evaluator/CMakeFiles/roc_choice.dir/all] Error 2
make: *** [all] Error 2
Invoking "make -j8 -l8" failed
This is my CMakeLists.txt:
cmake_minimum_required(VERSION 2.8)
project(image_transport_tutorial)
find_package(catkin REQUIRED cv_bridge genmsg image_transport sensor_msgs)
find_package(OpenCV)
find_package(TinyXML)
if(CMAKE_COMPILER_IS_GNUCXX)
execute_process(COMMAND ${CMAKE_C_COMPILER} -dumpversion OUTPUT_VARIABLE GCC_VERSION)
if (GCC_VERSION VERSION_GREATER 4.7 OR GCC_VERSION VERSION_EQUAL 4.7)
message(STATUS "C++11 activated.")
add_definitions("-std=gnu++11")
elseif(GCC_VERSION VERSION_GREATER 4.3 OR GCC_VERSION VERSION_EQUAL 4.3)
message(WARNING "C++0x activated. If you get any errors update to a compiler which fully supports C++11")
add_definitions("-std=gnu++0x")
else ()
message(FATAL_ERROR "C++11 needed. Therefore a gcc compiler with a version higher than 4.3 is needed.")
endif()
else(CMAKE_COMPILER_IS_GNUCXX)
add_definitions("-std=c++0x")
endif(CMAKE_COMPILER_IS_GNUCXX)
add_message_files(DIRECTORY msg
FILES ResizedImage.msg)
generate_messages(DEPENDENCIES sensor_msgs)
catkin_package()
include_directories(include)
include_directories(${catkin_INCLUDE_DIRS})
include_directories(include ${OpenCV_INCLUDE_DIRS})
include_directories(include ${TinyXML_INCLUDE_DIRS})
add_executable(image_publisher src/image_publisher.cpp src/abstract_choice.cpp src/my_exception.cpp)
target_link_libraries(image_publisher ${catkin_LIBRARIES} ${TinyXML_LIBRARIES})
#target_link_libraries(image_publisher ${catkin_LIBRARIES} /*${OpenCV_LIBRARIES}*/)
# add the subscriber example
#add_executable(my_subscriber src/my_subscriber.cpp)
#target_link_libraries(my_subscriber ${catkin_LIBRARIES} ${OpenCV_LIBRARIES})
add_executable(overlapping_rect src/abstract_choice.cpp src/my_exception.cpp src/overlapping_rect.cpp)
target_link_libraries(overlapping_rect ${catkin_LIBRARIES} ${OpenCV_LIBRARIES} ${catkin_INCLUDE_DIRS})
add_executable(roc_choice src/roc_choice.cpp src/abstract_choice.cpp src/my_exception.cpp src/overlapping_rect.cpp)
target_link_libraries(roc_choice ${catkin_LIBRARIES} ${OpenCV_LIBRARIES} ${catkin_INCLUDE_DIRS} ${TinyXML_LIBRARIES})
catkin_add_gtest(utest test/test.cpp include/image_transport_tutorial/overlapping_rect.h)
target_link_libraries(utest ${catkin_LIBRARIES} ${OpenCV_LIBRARIES} ${catkin_INCLUDE_DIRS})
Originally posted by jester on ROS Answers with karma: 5 on 2016-02-12
Post score: 0
Answer:
The error is the multiple definition of the main function.
A program can have only one main function. The roc_choice executable is built from both src/roc_choice.cpp and src/overlapping_rect.cpp, and (as the linker output shows) each of those files defines its own main.
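A hedged sketch of one possible fix (assuming overlapping_rect.cpp is meant to remain its own program): drop src/overlapping_rect.cpp from the roc_choice sources so each executable links exactly one main():

```cmake
# overlapping_rect keeps its own main() for its own executable.
add_executable(overlapping_rect src/abstract_choice.cpp src/my_exception.cpp src/overlapping_rect.cpp)
target_link_libraries(overlapping_rect ${catkin_LIBRARIES} ${OpenCV_LIBRARIES})

# roc_choice no longer lists src/overlapping_rect.cpp, so only one main() is linked.
add_executable(roc_choice src/roc_choice.cpp src/abstract_choice.cpp src/my_exception.cpp)
target_link_libraries(roc_choice ${catkin_LIBRARIES} ${OpenCV_LIBRARIES} ${TinyXML_LIBRARIES})
```

If roc_choice actually needs code from overlapping_rect.cpp, the shared parts would have to be moved into a separate source file that does not define main.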
Originally posted by duck-development with karma: 1999 on 2016-02-13
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 23747,
"tags": "ros, tinyxml"
} |
Is S-T CONNECTEDNESS #P-complete on instances when all s-t paths are of the same length? | Question: S-T CONNECTEDNESS
Input: an (undirected) graph $G=(V,E)$; $s,t \in V.$
Output: number of spanning subgraphs of $G$ in which there is a path from $s$ to $t$.
S-T CONNECTEDNESS problem is known to be #P-complete ([Valiant, L. G. (1979). The complexity of enumeration and reliability problems. SIAM Journal on Computing, 8(3), 410-421.]).
I'm interested in those instances of S-T CONNECTEDNESS in which all simple paths between $s$ and $t$ are of the same length. Is anything known about the complexity of the problem on these instances? In particular, does the problem remain #P-complete?
Remark: In Valiant's reduction, there are many simple paths between $s$ and $t$ of different lengths.
Answer: After polynomial time preprocessing, you may assume that every edge in $E$ lies on some simple $s$-$t$-path.
If all simple paths between $s$ and $t$ are of the same length, then $G$ is series-parallel, with $s$ as left terminal and $t$ as right terminal vertex. This can be seen easily, since the forbidden subgraphs for series-parallel graphs (= subdivisions of diamonds) create two paths of different lengths between $s$ and $t$.
In a series-parallel graph, the number of spanning subgraphs containing a simple $s$-$t$-path can be determined in polynomial time by dynamic programming.
In the dynamic program, we build up the graph by series and parallel compositions. For every resulting subgraph $G'$, we compute the number $p^+(G')$ of spanning subgraphs that do contain a simple path from its left terminal to its right terminal and the number $p^-(G')$ of spanning subgraphs that do not contain such a path.
In a series composition on $G_1$ and $G_2$, we get $p^+(G)=p^+(G_1)\cdot p^+(G_2)$.
In a parallel composition on $G_1$ and $G_2$, we get $p^-(G)=p^-(G_1)\cdot p^-(G_2)$.
In both cases, the missing value $p^+(G)$ or $p^-(G)$ for $G=(V,E)$ can be computed from $p^+(G)+p^-(G)=2^{|E|}$. | {
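To make the dynamic program concrete, here is a minimal sketch (my own illustration, not from the answer) that represents a series-parallel graph as a nested expression and computes $p^+$ and $p^-$ bottom-up:

```python
def count(g):
    """Return (p_plus, p_minus, num_edges) for a series-parallel graph g.

    g is ('e',) for a single edge, ('s', g1, g2) for a series composition,
    or ('p', g1, g2) for a parallel composition of two SP graphs.
    """
    if g[0] == 'e':
        # One edge: the subgraph {edge} connects the terminals, {} does not.
        return 1, 1, 1
    _, g1, g2 = g
    p1, m1, e1 = count(g1)
    p2, m2, e2 = count(g2)
    e = e1 + e2
    if g[0] == 's':
        p = p1 * p2            # series: need a path through both parts
        return p, 2**e - p, e
    m = m1 * m2                # parallel: disconnected only if both parts fail
    return 2**e - m, m, e

# Two parallel edges between s and t: 3 of the 4 spanning subgraphs connect them.
print(count(('p', ('e',), ('e',))))  # -> (3, 1, 2)
```

For a series of two edges, count(('s', ('e',), ('e',))) gives (1, 3, 2): only the subgraph containing both edges connects $s$ to $t$.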
"domain": "cstheory.stackexchange",
"id": 4616,
"tags": "cc.complexity-theory, counting-complexity"
} |
Indenting and dedenting ASP Classic code with Python | Question: I've created a Python script that tries to properly indent and dedent ASP code. Similar to http://www.aspindent.com/
Since I began using it I've noticed some edge cases that I hadn't anticipated. These edge cases appear because I'm not parsing straight ASP code. There are instances of HTML and JavaScript in the file, and some of this causes indentation to occur sporadically throughout the final output. I realize that part of the problem stems from my regex use. I'm here to see if there is a cleaner way to go about this.
import re, sys
class Indenter:
def __init__(self, string):
self.space = 0
self.count = 0
self.string = string
def print_ln(self, string):
sys.stdout.write(" " * self.space + str(string))
sys.stdout.flush()
def indent(self):
self.print_ln(self.string[self.count])
self.space += 4
def dedent(self):
self.space -= 4
self.print_ln(self.string[self.count])
def dedent_indent(self):
self.space -= 4
self.print_ln(self.string[self.count])
self.space += 4
def main(self):
while self.count < len(self.string):
if re.search("^\s*if.*then", str(self.string[self.count]), re.IGNORECASE):
self.indent()
elif re.search("^\s*for", str(self.string[self.count]), re.IGNORECASE):
self.indent()
elif re.search("^\s*with", str(self.string[self.count]), re.IGNORECASE):
self.indent()
elif re.search("^\s*do until", str(self.string[self.count]), re.IGNORECASE):
self.indent()
elif re.search("^\s*do$", str(self.string[self.count]), re.IGNORECASE):
self.indent()
elif re.search("^\s*Select Case", str(self.string[self.count]), re.IGNORECASE):
self.indent()
elif re.search("^\s*End Select", str(self.string[self.count]), re.IGNORECASE):
self.dedent()
elif re.search("^\s*loop", str(self.string[self.count]), re.IGNORECASE):
self.dedent()
elif re.search("^\s*end with", str(self.string[self.count]), re.IGNORECASE):
self.dedent()
elif re.search("^\s*end if", str(self.string[self.count]), re.IGNORECASE):
self.dedent()
elif re.search("^\s*next", str(self.string[self.count]), re.IGNORECASE):
self.dedent()
elif re.search("^\s*Case", str(self.string[self.count]), re.IGNORECASE):
self.dedent_indent()
elif re.search("^\s*else", str(self.string[self.count]), re.IGNORECASE):
self.dedent_indent()
elif re.search("^\s*elseif.*then", str(self.string[self.count]), re.IGNORECASE):
self.dedent_indent()
else:
self.print_ln(self.string[self.count])
self.count += 1
with open("scratch.html") as s:
ind = Indenter(s.readlines())
ind.main()
Answer: You should always write the main program protected within an if __name__ == '__main__': block, so that you can import the file in another program without automatically executing code. E.g.:
if __name__ == '__main__':
    with open("scratch.html") as s:
        ind = Indenter(s.readlines())
        ind.main()
Don't write classes where you create the class object just to call one instance method of the object. See the video Stop Writing Classes.
Your code has:
ind = Indenter(s.readlines())
which reads every single line of the file into memory, and stores it in a list. Then, you call the Indenter.main() function which executes ...
while self.count < len(self.string):
# ... many lines of code omitted ...
self.count += 1
... sequentially loops through every element of the list (line of the file) for processing one at a time. Each member function, including this main() function, uses self.string[self.count] to reference the current line. This must have been maddening to write, having to type (or copy/paste) the same code over and over. It is frightening to read.
Let's clean this up.
print_ln(self, string) is structured correctly. It takes the current line string as input. Let's duplicate that in indent(), dedent(), and dedent_indent(), by adding the string argument to them, like:
def indent(self, string):
self.print_ln(string)
self.space += 4
In the main() function, we now need to pass the line to each function, just like it was passed to self.print_ln(self.string[self.count]). But that is a lot of duplicate self.string[self.count] access, so let's make a temporary variable for it. We can use that temporary variable in the re.search(....) calls too.
while self.count < len(self.string):
string = self.string[self.count]
if re.search("^\s*if.*then", string, re.IGNORECASE):
self.indent(string)
elif # ... many lines of code omitted ...
else:
self.print_ln(string)
self.count += 1
Note that each string is a string, so the str(...) calls can just be omitted.
Now that self.count is only used in main(), we can make it a local variable, instead of an instance member.
count = 0
while count < len(self.string):
string = self.string[count]
# ... many lines of code omitted ...
count += 1
A while loop which counts from 0 up to some limit can be replaced by a for ... in range(...) loop:
for count in range(len(self.string)):
string = self.string[count]
# ... many lines of code omitted ...
And since count is only ever used to index into the list that we are looping over the indices of, we can omit the indices altogether, and just loop over the list of strings itself.
for string in self.string:
# ... many lines of code omitted ...
Now that we are looping directly over self.string, we don't need to explicitly know that it is a list of strings. It can be any iterable object that returns, one-at-a-time, all the strings we are interested in. Like a file object. And that file object can be passed to the main() function.
class Indenter:
# ... omitted ...
def main(self, lines):
for line in lines:
if re.search("^\s*if.*then", line, re.IGNORECASE):
self.indent(line)
elif # ... many lines of code omitted ...
else:
self.print_ln(line)
if __name__ == '__main__':
with open('scratch.html') as file:
ind = Indenter()
ind.main(file)
Now we are no longer reading the entire file into memory. Rather, we are opening the file, and reading it line-by-line, processing each line as it is read in, and then discarding it.
What does your code do if it encounters a file which is already indented? It indents it some more!
That is probably undesired behaviour. You probably want to strip off any existing indentation before adding the new indentation.
for line in lines:
line = line.strip()
# ... if/elif/else statements omitted ...
Note that this removes the newline from the end of the lines as well, so you'll want to add that back in, perhaps by changing the sys.stdout.write(...) / sys.stdout.flush() pair into a simple print(...) statement.
With each line stripped of any previous indentation, you can remove the \s* from the start of all of the reg-ex patterns.
There are 6 reg-ex patterns where you want to indent, 5 where you want to dedent, and 3 where you want to do both. You always print. You dedent before printing. You indent after printing.
Instead of writing many lines of code which do the same thing with different patterns, it is much, much easier to write a loop, which loops through the patterns. Let's put the patterns into lists:
indent_patterns = ['^if.*then', '^for', '^with', '^do until', '^do$', '^select case',
'^case', '^else', '^elseif.*then']
dedent_patterns = ['^end select', '^loop', '^end with', '^end if', '^next',
'^case', '^else', '^elseif.*then']
Now we can say if the current line matches any of the dedent_patterns, reduce space. Then print the line. Then, if the current line matches any of the indent_patterns, increase space.
if any(re.search(pattern, line, re.IGNORECASE) for pattern in dedent_patterns):
self.space -= 4
self.print_ln(line)
if any(re.search(pattern, line, re.IGNORECASE) for pattern in indent_patterns):
self.space += 4
... and the indent, dedent and dedent_indent functions just became unnecessary.
Since print_ln() is called in one spot, we can write that inline as well, which eliminates the need for self.space to be a member variable.
With no member variables left, the entire class can be replaced with one function.
import re
def indent(lines, spacing=4):
indent_patterns = ['^if.*then', '^for', '^with', '^do until', '^do$', '^select case',
'^case', '^else', '^elseif.*then']
dedent_patterns = ['^end select', '^loop', '^end with', '^end if', '^next',
'^case', '^else', '^elseif.*then']
space = 0
for line in lines:
line = line.strip()
if any(re.search(pattern, line, re.IGNORECASE) for pattern in dedent_patterns):
space -= spacing
print(" "*space + line)
if any(re.search(pattern, line, re.IGNORECASE) for pattern in indent_patterns):
space += spacing
if __name__ == '__main__':
with open('scratch.html') as file:
indent(file)
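As a quick sanity check, here is the same function (modified in this standalone copy to collect lines into a list instead of printing, so the output is easy to inspect) run on a tiny hand-made snippet:

```python
import re

def indent(lines, spacing=4):
    indent_patterns = ['^if.*then', '^for', '^with', '^do until', '^do$', '^select case',
                       '^case', '^else', '^elseif.*then']
    dedent_patterns = ['^end select', '^loop', '^end with', '^end if', '^next',
                       '^case', '^else', '^elseif.*then']
    out = []
    space = 0
    for line in lines:
        line = line.strip()
        # Dedent before emitting, indent after emitting.
        if any(re.search(p, line, re.IGNORECASE) for p in dedent_patterns):
            space -= spacing
        out.append(" " * space + line)
        if any(re.search(p, line, re.IGNORECASE) for p in indent_patterns):
            space += spacing
    return out

sample = ["if x = 1 then", "y = 2", "else", "y = 3", "end if"]
print("\n".join(indent(sample)))
```

Since "else" matches both pattern lists, it is dedented for printing and then re-indents the following lines, which is exactly the dedent_indent() behaviour of the original class.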
And like what was said above, stop writing classes.
This does not address the sporadic indentation caused by HTML and Javascript in the file, but
this is a Code Review, which only addresses the code you have actually written. This is not a forum where you come to get help writing new code, but to get a review of code you have already written.
with a cleaner implementation of the indent code, you should have an easier time addressing that issue. When you have finished, feel free to post a follow-up question. | {
"domain": "codereview.stackexchange",
"id": 34154,
"tags": "python, regex, formatting"
} |
Is it possible to describe every possible spacetime in Cartesian coordinates? | Question: Curvature of space-time (in General Relativity) is described using the metric tensor. The metric tensor, however, relies on the choice of coordinates, which is totally arbitrary.
See for example answers to this question: https://physics.stackexchange.com/a/499297/374314
As the choice of coordinates is arbitrary, can't I just "postulate" to use cartesian coordinates to describe any possible spacetime?
Do I make a mistake by using cartesian coordinates?
Would I diminish the number of possibilities somehow by using Cartesian coordinates?
To the best of my knowledge, there shouldn't be any problem with using (postulating to use) Cartesian coordinates, as the choice of coordinates is totally arbitrary.
Every problem (spacetime) should be possible to describe in Cartesian coordinates.
Right?
If that were right it would be remarkable, because we are calculating so much with different coordinates AND different spacetimes, and by using only Cartesian coordinates we could more easily think about the structure of spacetime itself.
Edit: Here, with "using Cartesian coordinates", I mean to globally define a coordinate system and define this to be the most simple one, Cartesian. Coordinates are arbitrary but influence the metric tensor the same as curvature does. I find it very difficult to think about spacetime curvature if I am not allowed to use a stable ground. That's why I want to know whether it's allowed or not to use a global coordinate system. The choice of coordinates and the curvature of spacetime both influence the metric tensor. However, there is only one real spacetime which it describes. I want to choose coordinates which make it possible to let only the curvature define the metric tensor (so that I can then analyze which curvatures are possible).
With using spherical coordinates, for example, in the Schwarzschild solution, we simplify the problem, which allows finding the solution easily. However, this comes at the cost of requiring high symmetry of the problem. If I want to understand Einstein's field equations in general, I need (and want) to give up any of those symmetries (and, of course, this comes at the cost that the non-diagonal terms do not vanish).
In this question, Can we just take the underlying set of the spacetime manifold as $\mathbb{R^4}$ for all practical purposes?, which sounds similar to my question - but is similar only in the title,
the topology censorship problem is touched (which is btw not solved yet). This problem asks whether the universal topology can be measured by an observer at all before it collapses. That says nothing about whether I can use Cartesian coordinates or not to describe the curvature. I need not measure it to simply describe it mathematically.
the OP questions whether manifolds are necessary at all. In contrast, I don't want to question the necessity of manifolds. I only want to use Cartesian coordinates to describe those manifolds to get rid of the double description of curvature.
My question is in contrast, "Can we describe every possible spacetime curvature using a metric tensor and Cartesian coordinates?"
Answer:
As the choice of coordinates is arbitrary, can't I just "postulate" to use cartesian coordinates to describe any possible spacetime?
If by cartesian coordinates you mean a set of four coordinates $\in \mathbb{R}^4$ (i.e., no vector space structure), then yes, you can introduce such coordinates in any generic spacetime.
However, they might not be very useful, as the resulting metric might become complicated. This is because the metric will develop non-diagonal terms, which makes calculations much more difficult.
and by using only Cartesian coordinates we could more easily think about the structure of spacetime itself.
It is true that certain coordinate systems make calculations easy, but it is not true that these will always be Cartesian. Think about problems of spherical symmetry, where you intentionally decide to work with spherical coordinates, as the latter are simpler to work with.
"domain": "physics.stackexchange",
"id": 97672,
"tags": "general-relativity, spacetime, metric-tensor, coordinate-systems, curvature"
} |
Digit sum loops+random positive number generator | Question: I have to prepare a program that sums the digits of two numbers given by the user and then gives back a random number which is
a natural number
whose digit sum is bigger than the digit sum of the numbers given by the user
It looks like everything is OK, but I don't know if it is just luck with the random number generator or if it's working well.
Thank you for the review.
import java.util.Random;
import java.util.Scanner;
public class Ex1 {
public static void main(String[] args) {
Random rnd = new Random();
Scanner scn = new Scanner(System.in);
int sum1 = 0;
int sum2 = 0;
int sum3 = 0;
int sum4 = 0;
System.out.println("1TH NUMBER : ");
int a1 = scn.nextInt();
System.out.println("2ND NUMBER : ");
int a2 = scn.nextInt();
System.out.println((0 > a1 || 0 > a2 ? "ERROR-NEGATIVE NUMBER" : "OK"));
while (a1 > 0) {
sum1 += a1 % 10;
a1 /= 10;
}
//System.out.println(sum1);
while (a2 > 0) {
sum2 += a2 % 10;
a2 /= 10;
}
//System.out.println(sum2);
int temp = sum1 + sum2; //temporary-for storage /=
while (temp > 0) {
sum3 += (temp) % 10;
(temp) /= 10;
}
// System.out.println(sum3);
while (true) {
int a3 = rnd.nextInt(Integer.MAX_VALUE);
sum4 += (a3) % 10;
(a3) /= 10;
// System.out.println(sum4);
if (sum4 > sum3) {
System.out.println(a3 + " this is my number");
break;
}
}
}
}
Answer: I have some suggestion for you.
Code duplication
In your code, you have some code duplication that can be extracted in a method. By extracting the code, the code will become shorter, be less error-prone and easier to read.
I suggest that you make a method to print a question and read the user input.
public static void main(String[] args) {
//[...]
int a1 = askQuestionAndReceiveAnswer(scn, "1TH NUMBER : ");
int a2 = askQuestionAndReceiveAnswer(scn, "2ND NUMBER : ");
//[...]
}
private static int askQuestionAndReceiveAnswer(Scanner scn, String s) {
System.out.println(s);
return scn.nextInt();
}
Since the logic to compute the sum is the same, you can extract both while loops into a method. This extraction will remove lots of code!
public static void main(String[] args) {
//[...]
int sum1 = getSum(a1);
int sum2 = getSum(a2);
int temp = sum1 + sum2; //temporary-for storage /=
int sum3 = getSum(temp);
//[...]
}
private static int getSum(int userInput) {
int currentSum = 0;
while (userInput > 0) {
currentSum += userInput % 10;
userInput /= 10;
}
return currentSum;
}
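As a quick illustration (a hypothetical standalone demo, not part of the review): getSum(123) walks 123 → 12 → 1 and accumulates 3 + 2 + 1 = 6.

```java
public class GetSumDemo {
    // Same digit-sum logic as the extracted getSum method above.
    static int getSum(int userInput) {
        int currentSum = 0;
        while (userInput > 0) {
            currentSum += userInput % 10; // take the last digit
            userInput /= 10;              // drop the last digit
        }
        return currentSum;
    }

    public static void main(String[] args) {
        System.out.println(getSum(123)); // prints 6
        System.out.println(getSum(999)); // prints 27
    }
}
```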
Other observations
In my opinion, I would extract the last calculation into a method and return the result.
public static void main(String[] args) {
//[...]
int number = findNumber(rnd, sum3);
System.out.println(number + " this is my number");
//[...]
}
private static int findNumber(Random rnd, int sum3) {
int sum4 = 0;
while (true) {
int a3 = rnd.nextInt(Integer.MAX_VALUE);
sum4 += (a3) % 10;
(a3) /= 10;
if (sum4 > sum3) {
return a3;
}
}
}
Refactored code
public static void main(String[] args) {
Random rnd = new Random();
Scanner scn = new Scanner(System.in);
int a1 = askQuestionAndReceiveAnswer(scn, "1TH NUMBER : ");
int a2 = askQuestionAndReceiveAnswer(scn, "2ND NUMBER : ");
System.out.println((0 > a1 || 0 > a2 ? "ERROR-NEGATIVE NUMBER" : "OK"));
int sum1 = getSum(a1);
int sum2 = getSum(a2);
int temp = sum1 + sum2; //temporary-for storage /=
int sum3 = getSum(temp);
int number = findNumber(rnd, sum3);
System.out.println(number + " this is my number");
}
private static int findNumber(Random rnd, int sum3) {
int sum4 = 0;
while (true) {
int a3 = rnd.nextInt(Integer.MAX_VALUE);
sum4 += (a3) % 10;
(a3) /= 10;
if (sum4 > sum3) {
return a3;
}
}
}
private static int getSum(int userInput) {
int currentSum = 0;
while (userInput > 0) {
currentSum += userInput % 10;
userInput /= 10;
}
return currentSum;
}
private static int askQuestionAndReceiveAnswer(Scanner scn, String s) {
System.out.println(s);
return scn.nextInt();
} | {
"domain": "codereview.stackexchange",
"id": 37496,
"tags": "java, mathematics"
} |
Does no optical rotation always imply optical inactivity? | Question: This question popped up in my mind in reference to this question,
Is a compound optically active if plane polarised light is deflected by an angle of n*(2π) angles?
Suppose that I give a chemistry exam in future in which a question is asked like this,
A pure organic compound (not a racemic mixture) was analysed once and it was found that there was absolutely zero rotation of PPL when passed through it (not even slight rotation as in cryptochirality). This means that it must be an optically active compound.
Now there are two options whether the above statement is true or false. What will be the answer? Assume that the organic compound is pure ie it is not a mixture (not even a racemic mixture). Also assume that the enantiomers (if possible) are separable which means that there is nothing like amine inversion or any other similar mechanism through which enantiomers can interconvert into each other.
Though it might be quite intuitive that the answer should be false, think about the person who set this question. (It has quite often happened to me that even after having the correct logic, my answer is considered wrong by the majority.)
Answer:
Now there are two options whether the above statement is true or false. What will be the answer?
I choose false. We have two cases for no rotation of plane polarized light. One case is that the compound is optically inactive. The other case is when the analyte is a racemic mixture.
Wikipedia states that,
A racemic mixture, or racemate, is one that has equal amounts of left- and right-handed enantiomers of a chiral molecule.
So the rotation of plane polarized light is a combined effect of the nature of the compound and the nature of the analyte. So no rotation doesn't imply that the compound is optically inactive.
"domain": "chemistry.stackexchange",
"id": 16748,
"tags": "stereochemistry, homework, optical-properties"
} |
Arduino tutorial: compile error in Arduino IDE | Question:
I'm working on the rosserial_arduino/Tutorials/Hello World tutorial and when I compile the example code I get this error message:
In file included from /home/ryan/sketchbook/libraries/ros_lib/ros.h:38:0,
from HelloWorld.cpp:6:
/home/ryan/sketchbook/libraries/ros_lib/ros/node_handle.h:39:38: fatal error: rosserial_msgs/TopicInfo.h: No such file or directory
compilation terminated.
I've tried copying and pasting files and folders with no luck.
I'm thinking maybe Arduino can't find my ros_workspace folder?
UPDATE: I repeated the tutorial and discovered that rosmake fails after the command: rosmake rosserial_arduino. The rosmake summary reads like this:
[ rosmake ] Output from build of package rosserial_arduino written to:
[ rosmake ] /home/ryan/.ros/rosmake/rosmake_output-20120514-181947/rosserial_arduino/build_output.log
[rosmake-0] Finished <<< rosserial_arduino [FAIL] [ 6.52 seconds ]
[ rosmake ] Halting due to failure in package rosserial_arduino.
[ rosmake ] Waiting for other threads to complete. [ rosmake ] Results:
[ rosmake ] Built 17 packages with 1 failures.
[ rosmake ] Summary output to directory
[ rosmake ] /home/ryan/.ros/rosmake/rosmake_output-20120514-181947
What would cause rosmake to fail?
Originally posted by RyanG on ROS Answers with karma: 26 on 2012-05-14
Post score: 1
Original comments
Comment by piyushk on 2012-05-14:
More information is needed to figure that out. Could you paste the output of "roscd rosserial_arduino && make"?
Answer:
Working with ROS on Arduino requires copying files into your Arduino sketchbook. Be sure you have copied ros_lib from the ROS environment to your sketchbook. The beginning of the arduino tutorials provide instructions/commands for doing so.
Originally posted by Dustin with karma: 328 on 2012-05-14
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by jep31 on 2013-06-06:
I have the same issue in groovy. I have copied this files roslib in my sketchbook/libraries folder but that doesn't solve the problem. There is no node_handle.h in the folder.
Comment by zapk on 2014-05-01:
Hi, I have the same problem in 2014 with Hydro :( Anybody found the solution?
Comment by Akali on 2015-01-21:
Hey I have the same problem in 2015 with Hydro... | {
"domain": "robotics.stackexchange",
"id": 9393,
"tags": "arduino, rosserial, rosmake"
} |
Is the gravitational acceleration 9.8 m s$^{-1}$? | Question: I'm really new to science and physics so I apologize in advance if this question is too easy to be asked on this site. I'm retaking high school physics and I'm using a site that claims to cover the whole curriculum of grade 10.
I was always told that gravity = 9.8 m s$^{-2}$ but I recently saw a website where they state that the gravitational acceleration is 9.8 m s$^{-1}$ and I'd just like some clarity on if they're wrong or right or how this works because nowhere on Google can I find anything related to the negative one? And seeing as I just started out I don't want to follow a site for the next two years that's gonna teach me a lot of wrong information. Here's a link to where I found it on the site, it's fairly close to the end of the page, the last row in the table under the heading "Constants in equations".
Any clarification would be appreciated.
Answer: The website made a mistake.
Every object subject to a constant force will accelerate uniformly in time, having a characteristic acceleration which must have SI units $\text m/\text s^2,$ which is to say that "acceleration is a rate of change in velocity per unit time."
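As a quick dimensional check (the numbers are only for illustration):

$$v = a\,t:\qquad \left[\frac{\text{m}}{\text{s}^2}\right]\times[\text{s}] = \left[\frac{\text{m}}{\text{s}}\right], \qquad \text{e.g. } 9.8\ \frac{\text{m}}{\text{s}^2}\times 2\ \text{s} = 19.6\ \frac{\text{m}}{\text{s}}.$$

An acceleration in $\text m/\text s^2$, multiplied by a time in seconds, gives a velocity in $\text m/\text s$; the units only work out if $g$ carries $\text m/\text s^2$.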
Now, generally the acceleration a force produces on an object scales inversely with the object's mass, $a = F/m,$ so if you kick a big massive object and a little object exactly the same, the little object will start to move much faster than the big one. This equation means that the units of force are $\text{kg m}/\text s^2.$
Gravity near the surface of a planet happens to be an approximately constant force for any single object, because the planet's radius is so much larger than the heights that the object is ascending or descending and the size of the object and so forth. However the force of gravity does scale with the mass of that object, $F = m g$ for some constant $g$ that depends on the mass of the planet and its radius, but not on the falling objects.
Since $m$ has units $\text{kg},$ this constant must have the units $\text m/\text s^2,$ and since $a = F/m,$ it is the acceleration that every object on that surface feels due to gravity. That's actually very important: it means that if you fill up a plastic water bottle 1/4 of the way full, and another all of the way full, even though one of them has 4 times the mass and experiences 4 times the force, those two effects perfectly balance out and they both fall exactly the same. If you have the plastic bottles to spare, feel free to do this experiment several times at home, releasing them side-by-side and confirming that they both hit the ground at the same time.
If we were to modify $g$ to have units $\text m/\text s$ then we would probably have to modify Newton's laws to say not $a = F/m$ but rather $v = F/m,$ and this would be disastrous: it would mean, for example, that you could not throw a ball upwards because the moment it left your hand it would have to have negative velocity. We would then have to postulate new "forces" to account for the fact that balls can be thrown in practice, call it an "inertial force" or whatever, that tries to keep it moving the direction it's moving... that's why Newton's laws are so very useful, because they don't require this and you can just say "once that ball leaves your hand, the only forces on it are gravity and wind resistance." | {
"domain": "physics.stackexchange",
"id": 40312,
"tags": "newtonian-gravity, acceleration, units, dimensional-analysis, textbook-erratum"
} |
A problem from Damped free Vibrations | Question: Problem:
I can understand the problem physically that the block is pulled by some amount X so that its displacement at t=0 is X. It is then released from that position so that its velocity at t=0 is zero. So by the physical nature of the problem I can deduce that the displacement vs time graph will look like:
However, I'm having trouble understanding the problem from a mathematical standpoint. I've learnt that the equation of motion for an underdamped system is given as $x(t) = X_o e^{-\zeta\omega_n t}\sin\left(\sqrt{1-\zeta^2}\,\omega_n t + \phi_o\right)$,
where $X_o$ and $\phi_o$ can be determined from initial conditions. So my understanding tells me that if I apply the boundary conditions as given in the problem I must obtain $X_o = X$ and $\phi_o = \frac{\pi}{2}$.
So I proceeded as follows:
Using Boundary Conditions I'm getting
$$\tan\phi_o=\frac{\sqrt{1-\zeta^2}}{\zeta}$$
which won't reduce further to $\phi_o=\frac{\pi}{2}$.
Where am I going wrong?
P.S. - I'm just having trouble with mathematically coming to the conclusion that $X_o = X$ and $\phi_o = \frac{\pi}{2}$. I will be able to work out for what the problem is actually asking - amplitude after n cycles, by myself.
Answer: UPDATE: after your comment I sat down and derived it myself (I couldn't make it out from the image), and I realised that your assumption that $\phi_0= \frac{\pi}{2}$ is not valid.
I prefer the following notation
$$x(t) = A e^{-\zeta \omega_n t } \sin\left(\sqrt{1-\zeta^2}\omega_n t +\phi_0\right)$$
and also substituting $\omega_d = \sqrt{1-\zeta^2}\omega_n$ (to keep equations shorter), this becomes:
$$x(t) = A e^{-\zeta \omega_n t } \sin\left(\omega_d t +\phi_0\right)$$
displacement BC
So for time $t=0$, $x(t=0)= X_0$.
therefore:
$$x(t=0) = A e^{0} \sin\left(\sqrt{1-\zeta^2}\cdot 0 +\phi_0\right)$$
$$x(t=0) = A \sin\left(\phi_0\right)$$
$$X_0 = A \sin\left(\phi_0\right)$$
velocity BC
By differentiating:
$$\dot{x}(t) = A \left((-\zeta \omega_n) e^{-\zeta \omega_n t } \sin\left(\omega_d t +\phi_0\right) + \omega_d e^{-\zeta \omega_n t } \cos\left(\omega_d t +\phi_0\right) \right)$$
Collecting term $ e^{-\zeta \omega_n t } $:
$$\dot{x}(t) = A e^{-\zeta \omega_n t } \left((-\zeta \omega_n) \sin\left(\omega_d t +\phi_0\right) + \omega_d \cos\left(\omega_d t +\phi_0\right) \right)$$
substituting and simplifying:
$$\dot{x}(t=0) = A \cdot 1 \cdot \left((-\zeta \omega_n) \sin\left(\phi_0\right) + \omega_d \cos\left( \phi_0\right) \right)$$
$$0 = A \cdot 1 \cdot \left((-\zeta \omega_n) \sin\left(\phi_0\right) + \omega_d \cos\left( \phi_0\right) \right)$$
$$\zeta \omega_n \sin\left(\phi_0\right) = \omega_d \cos\left( \phi_0\right) $$
$$\frac{\sin\left(\phi_0\right)}{\cos\left( \phi_0\right)} = \frac{\omega_d}{\zeta \omega_n } $$
therefore
$$\tan\phi_0= \frac{\sqrt{1-\zeta^2}}{\zeta } $$
Interpretation
The two equations are
$$\begin{cases}
X_0 = A \sin\left(\phi_0\right)\\
\tan\phi_0= \frac{\sqrt{1-\zeta^2}}{\zeta }
\end{cases}$$
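As a quick numeric sanity check (a sketch with arbitrary illustrative values, not part of the original derivation), one can confirm that choosing $\phi_0$ and $A$ from these two equations reproduces the boundary conditions $x(0)=X_0$ and $\dot{x}(0)=0$:

```python
import numpy as np

# Arbitrary illustrative values (assumptions, not from the problem):
X0, omega_n, zeta = 1.0, 2.0 * np.pi, 0.3
omega_d = np.sqrt(1.0 - zeta**2) * omega_n

# Solve the two boundary-condition equations for phi0 and A
phi0 = np.arctan2(np.sqrt(1.0 - zeta**2), zeta)  # tan(phi0) = sqrt(1-zeta^2)/zeta
A = X0 / np.sin(phi0)                            # X0 = A sin(phi0)

def x(t):
    """Underdamped free response with the fitted constants."""
    return A * np.exp(-zeta * omega_n * t) * np.sin(omega_d * t + phi0)

# x(0) recovers X0, and a finite-difference slope at t=0 is ~0
dt = 1e-8
print(x(0.0), (x(dt) - x(0.0)) / dt)
```

Repeating this for several values of $\zeta$ shows $\phi_0$ sliding from $\pi/2$ (undamped) towards smaller angles as the damping ratio grows, which is exactly the point of the answer.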
This is the actual correct solution. The assumption that $\phi_0=\frac{\pi}{2}$ is valid only for the case of the undamped free response. Whenever there is a damping ratio there is an angle $\phi_0$.
This can be observed in the following diagrams (I'll get around to making them) for a $\zeta$ close to zero and close to 1.
The inflection point (the curvature changes for different values of $\zeta$) shifts, indicating that the phase angle changes.
In the case of
$\zeta \rightarrow 0$ the initial change in displacement is almost horizontal (i.e. the velocity)
$\zeta \rightarrow 1$ the initial change in displacement is very steep | {
"domain": "engineering.stackexchange",
"id": 4502,
"tags": "mechanical-engineering, vibration, homework"
} |
Is there any research on models that make predictions by also taking into account the previous predictions? | Question: With the recent revelation of severe limitations in some AI domains, such as self-driving cars, I notice that neural networks behave with the same sort of errors as in simpler models, i.e. they may be ~100% accurate on test data, but, if you throw in a test sample that is slightly different from anything it's been trained on, it can throw the neural network off completely. This seems to be the case with self-driving cars, where neural networks are miss-classifying modified/grafitied Stop Signs, unable to cope with rain or snowflakes, or birds appearing on the road, etc. Something it's never seen before in a unique climate may cause it to make completely unpredictable predictions. These specific examples may be circumvented by training on modified Stop Signs, or with rain and birds, but that avoids the point: that NN's seem very limited when it comes to generalizing to an environment that is completely unique to its training samples. And this makes sense of course given the way NNs train.
The current solution seems to be to manually find out these new things that confuse the network and label them as additional training data. But that isn't an AI at all. That isn't "true" generalization.
I think part of the problem to blame is the term "AI" in and of itself. When all we're doing is finding a global minimum to a theoretical ideal function at some perfect point before over-fitting our training data, it's obvious that the neural network cannot generalize anymore than what is possible within its training set.
I thought one way that might be possible to get around this is: rather than being static "one unique calculation at a time", neural networks could remember the last one or two predictions they made, and then their current prediction, and use the result of that to then make a more accurate prediction. In other words, a very basic form of short-term memory.
By doing this, perhaps, the neural network could see that the raindrops or snowflakes aren't static objects, but are simply moving noise. It could determine this by looking at its last couple of predictions and see how those objects move. Certainly, this would require immense additional computation overhead, but I'm just looking to the future in terms of when processing power increases how NNs might evolve further. Similar to how neural networks were already defined many decades ago, but they were not widely adopted due to the lack of computational power, could this be the same case with something like short-term memory? That we lack the practical computational power for it but that perhaps we could theoretically implement it somehow for some time in the future when we do have it.
Of course, this short-term memory thing would only be useful when a classification is related to the prior classifications, like with self-driving cars. It's important to know what was observed a few seconds ago in real life when driving. Likewise, it could be important for a neural network to know what was predicted a few live classifications ago. This might also have a use in object detection: perhaps, a representation could be learned for a moving figure in the distance. Speed could now become a representation in the hidden layers and be used in assistance for the identification of distant objects, something not possible when using a set of single static weights.
Of course, this whole thing would involve somehow getting around the problem of training weights on a live model for the most recent sample. Or, alternatively, perhaps the weights could still remain static but we'd use two or three different models of weights to represent time somehow.
Nevertheless, I can't help but see a short-term memory of some form as being a requirement, if AI is to not be "so stupid", when it observes something unique, and if it's to ever classify things based on time and recent observations.
I'm curious if there's any research papers or other sources that explore any aspect of incorporating time using multiple recent classifications or something else that could be considered a short-term memory of sorts to help reach a more general form of generalization that the neural network doesn't necessarily see in its training, i.e. making it able to avoid noise using time as a feature to help it do so, or using time from multiple recent classifications as a way to estimate speed and use that as a feature?
I'd appreciate the answer to include some specific experiment or methodology as to how this sort of thing might be added to neural networks (or list it as a source), or if this is not an area of active research, why not.
Answer: What you're describing is called a recurrent neural network. There are a large number of designs in this family that all have the ability to remember recent inputs and use them in the processing of future inputs. The "Long Short Term Memory" or LSTM architecture was one of the most successful in this family.
These are actually very widely used in things like self-driving cars too, so, on their own, they are not enough to overcome the brittleness of current models. | {
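As a toy sketch of the recurrent idea (random, untrained weights; purely illustrative, not any real self-driving architecture): a hidden state is carried forward so each output depends on recent inputs as well as the current one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy recurrent cell: the hidden state h is the "short-term memory" that
# links each prediction to the inputs that came before it.
# All weights here are random placeholders, not a trained model.
W_xh = rng.normal(size=(4, 3))   # input -> hidden
W_hh = rng.normal(size=(4, 4))   # hidden -> hidden (the recurrence)
W_hy = rng.normal(size=(2, 4))   # hidden -> output

def rnn_step(x, h):
    h_new = np.tanh(W_xh @ x + W_hh @ h)  # new memory mixes input and old memory
    y = W_hy @ h_new                      # prediction depends on the whole history
    return y, h_new

h = np.zeros(4)
for t in range(5):                # a short input sequence, e.g. video frames
    x_t = rng.normal(size=3)      # placeholder features for frame t
    y_t, h = rnn_step(x_t, h)     # y_t depends on x_0 .. x_t via h
```

LSTMs add gating to this basic cell so the memory can persist over longer spans without vanishing.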
"domain": "ai.stackexchange",
"id": 1010,
"tags": "reference-request, recurrent-neural-networks, adversarial-ml, generalization, model-request"
} |
Missing dependency roscpp? | Question:
"
[rosbuild] Building package veltrobot_movement
Failed to invoke /opt/ros/cturtle/ros/bin/rospack deps-manifests veltrobot_movement
[rospack] couldn't find dependency [roscpp] of [veltrobot_movement]
[rospack] missing dependency
"
How can I fix this?
http://www.ros.org/wiki/cturtle/Installation/Ubuntu I used this tutorial to install.
Originally posted by GeniusGeeko on ROS Answers with karma: 299 on 2011-02-20
Post score: 0
Original comments
Comment by GeniusGeeko on 2011-02-20:
^ Restarted terminal and got. However when trying to build the veltrobot, now I'm getting missing dependency openni. When i run ~/ni/setup.sh , I get the roscpp error but not the openni error.
Comment by GeniusGeeko on 2011-02-20:
[rosmake ] Packages requested are: ['roscpp']
[ rosmake ] Logging to directory/home/geniusgeeko/.ros/rosmake/rosmake_output-20110220-212109
[ rosmake ] Expanded args ['roscpp'] to:
['roscpp']
[ rosmake ] Generating Install Script using rosdep then executing. This may take a minute, you will be prompted for permissions. . .
executing this script:
set -o errexit
#No Packages to install
[ rosmake ] rosdep successfully installed all system dependencies
[rosmake-0] Starting >>> roslang [ make ]
[rosmake-0] Finished <<< roslang ROS_NOBUILD in package roslang
No Makefile in package roslang
[rosmake-1] Starting >>> roslib [ make ]
[rosmake-1] Finished <<< roslib ROS_NOBUILD in package roslib
[rosmake-0] Starting >>> xmlrpcpp [ make ]
[rosmake-0] Finished <<< xmlrpcpp ROS_NOBUILD in package xmlrpcpp
[rosmake-1] Starting >>> rosconsole [ make ]
[rosmake-1] Finished <<< rosconsole ROS_NOBUILD in package rosconsole
[rosmake-1] Starting >>> roscpp [ make ]
[rosmake-1] Finished <<< roscpp ROS_NOBUILD in package roscpp
[rosmake-1] Starting >>> rosout [ make ]
[rosmake-1] Finished <<< rosout ROS_NOBUILD in package rosout
[ rosmake ] Results:
[ rosmake ] Built 8 packages with 0 failures.
[ rosmake ] Summary output to directory
[ rosmake ] /home/geniusgeeko/.ros/rosmake/rosmake_output-20110220-212109
Comment by Eric Perko on 2011-02-20:
Can you include the output of 'rosmake --rosdep-install roscpp'?
Comment by GeniusGeeko on 2011-02-20:
"rosmake veltrobot_teleop veltrobot_movement robot_state_publisher rviz"
http://www.ros.org/wiki/cturtle/Installation/Ubuntu I used that tutorial to install.
Comment by Eric Perko on 2011-02-20:
Can you list the exact command you are trying to use to build veltrobot_movement?
Comment by Eric Perko on 2011-02-20:
How did you install ROS? Debs? Did you setup your .bashrc properly?
Answer:
Conflicting packages... the problem was that when the terminal started it would source cturtle/setup.bash and then ~/ni/setup.sh. The setup.sh would point ROS_ROOT at the unstable install and drop the veltrop packages from the ROS_PACKAGE_PATH. Adding the veltrop package back to the path would in turn drop the OpenNI package path. So running ~/ni/setup.sh switched everything to the unstable install, which produced the roscpp error due to conflicting paths. Thanks for the help.
Originally posted by GeniusGeeko with karma: 299 on 2011-02-20
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 4812,
"tags": "roscpp"
} |
Autoware NDT_Matching with ouster-128 | Question:
Hello,
I want to run Autoware NDT_Matching with ouster-128
I am getting a transform error
For frame [/os1_lidar]: No transform to fixed frame [world]. TF error: [, when looking up transform from frame [os1_lidar] to frame [world]]
I have tested many parameters,
voxel_grid_filter
ndt_matching
but it can’t transform [/os1_lidar] -> [world] correctly.
Environment: Ubuntu 16.04, ROS Kinetic, Autoware 1.12.0
Originally posted by ken86202 on ROS Answers with karma: 11 on 2020-08-10
Post score: 1
Original comments
Comment by Patrick N. on 2020-08-10:
Did you define a transform from map->world? NDT matching should create a transform relative to map frame.
Comment by ken86202 on 2020-08-10:
Yes
I have loaded tf_local.launch (map->world) ,and I have error
For frame [/os1_lidar]: No transform to fixed frame [world]. TF error: [Lookup would require extrapolation into the past. Requested time 1550.048495240 but the earliest data is at time 1597062760.654010145, when looking up transform from frame [os1_lidar] to frame [world]]
Comment by Patrick N. on 2020-08-10:
Are you replaying recorded data?
Comment by ken86202 on 2020-08-10:
yes I play rosbag
Comment by Patrick N. on 2020-08-10:
Are you using the simulation tab on the gui or setting the use_sim_time rosparam. Seems like some part of your TF tree is using system time and another part using something else.
Answer:
By default, Autoware.AI does not understand the frame /os1_lidar because it assumes that the lidar's frame is /velodyne. There are a few ways to correct this:
Change your Ouster driver to publish the point clouds with the frame velodyne. (easiest)
Change the code / launch files / configuration files to expect the frame os1_lidar. (hardest)
Change the localizer parameter in ndt_matching to os1_lidar and publish a static transform between os1_lidar and velodyne with parameters 0 0 0 0 0 0. (this is untested but may work)
Originally posted by Josh Whitley with karma: 1766 on 2020-08-13
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 35394,
"tags": "ros, localization, navigation, ros-kinetic, transform"
} |
envelope rms implementation by Matlab | Question: I am trying to implement the moving RMS by Matlab.
x = randn(50, 1);
xRMS = sqrt(movmean(x.^2, 21));
xRMSref = envelope(x, 21, 'rms');
plot(xRMSref,'DisplayName','xRMSref');hold on;plot(xRMS,'DisplayName','xRMS');hold off;
legend()
Why my estimation differs from Matlab's?
Answer: That is because envelope centers the signal before computing the standard deviation and afterwards corrects for the mean. Try this instead:
x = randn(50, 1);
xRMS = sqrt(movmean(x.^2, 21));
xRMS_centered = sqrt(movmean((x-mean(x)).^2, 21)) + mean(x);
xRMSref = envelope(x, 21, 'rms');
plot(xRMSref,'x-','DisplayName','xRMSref');hold on;plot(xRMS,'o-','DisplayName','xRMS');plot(xRMS_centered,'s-','DisplayName','xRMS centered');hold off;
legend()
*edit: since you were asking why envelope behaves this way: I think it's because it tries to return an envelope of the signal. It actually comes with two output arguments, a lower and an upper envelope with the idea that the signal should be between the two. Clearly if that's what we want, we need to account for the mean. See example in the following figure:
If you didn't center, you wouldn't capture the local variation around the mean accordingly. | {
"domain": "dsp.stackexchange",
"id": 9369,
"tags": "matlab, envelope"
} |
What is the systematic way to determine the spin of lowest-lying states of baryons like $uuu$, $ssd$, $uds$ etc? | Question: I learned that for the lowest lying states,
$uuu$, $sss$, $ddd$ can only have $J=\frac{3}{2}$
$uus$, $ddu$, $ssu$ etc can only have $J=\frac{1}{2},\frac{3}{2}$
$uds$ can have $J=\frac{1}{2}, \frac{1}{2},\frac{3}{2}$
where $J$ is the spin.
I also learned that the above results are deduced from requiring the color-flavor-spin-orbital wavefunction of the state to be antisymmetric (by Pauli's exclusion principle). However, I still find myself confused about how to deduce the allowed $J$ values for each of the three cases.
What is the systematic way to deduce the spin of a given lowest lying baryon state?
Answer: You should be able to answer your question directly by directly inspecting the baryon wavefunctions in your textbook. They are the unique solution to the constraints you mention, so, since they are color singlets, they must be color antisymmetric: consequently they are spin-flavor-space symmetric. Lowest mass baryons are in an S state, parity +, so, symmetric: your problem reduces to mapping out all flavor-spin symmetric states.
You are studying flavor SU(3), and (by inspection of the evident Young tableaux) three triplets (quarks) of SU(3) combine to a Symmetric decuplet 10; an Antisymmetric singlet 1; and two octets 8 of mixed symmetry. The symmetric combination of three spin 1/2 doublets is the spin 3/2 quartet, while the doublets are always of mixed symmetry.
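A quick dimension count (simple arithmetic, shown here as a sketch) confirms that these decompositions account for all the states:

```python
# Flavor SU(3): three quark triplets give 3 x 3 x 3 = 27 states,
# split into a symmetric decuplet, two mixed-symmetry octets,
# and an antisymmetric singlet.
assert 3**3 == 10 + 8 + 8 + 1

# Spin SU(2): three spin-1/2 doublets give 2 x 2 x 2 = 8 states,
# split into a symmetric spin-3/2 quartet and two mixed-symmetry
# spin-1/2 doublets.
assert 2**3 == 4 + 2 + 2

print("dimension counts consistent")
```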
Review them here.
Within a spin multiplet or flavor multiplet, raising and lowering spin, Isospin, U-spin, and V-spin operators interchanging quark spins and flavors will keep you inside the multiplet and so will not change its symmetry. So, as always, you start with the extreme multiplet states, and you ladder yourself around the combined multiplet.
uuu is flavor symmetric, so it must be in the flavor decuplet, and so it must also be spin symmetric (for total symmetry), and so must be in the spin quartet: spin 3/2.
It is the $\Delta^{++}$ in the diagram, and the flavor ladder operators can move it to its spin 3/2 decuplet confreres $\Delta^-$ and $\Omega^-$. Note the flavor ladder operators will also move it to $\Delta^+,\Sigma^{*+},\Sigma^{*-}$, in your second rung, also spin 3/2; and, further, $\Sigma^{*0}$ in your 3rd rung, also spin 3/2. All spin 3/2 states are now accounted for, and everything else remaining must be orthogonal to them, and so must be spin 1/2. For example, lowering the $S^z$ of the $\Delta^+$ state, we evidently get
$$
|\Delta^+_\uparrow\rangle= \frac{1}{3} [ | u_\uparrow d_\downarrow u_\uparrow \rangle + | u_\uparrow u_\uparrow d_\downarrow \rangle +| d_\downarrow u_\uparrow u_\uparrow \rangle \\ + | u_\uparrow u_\downarrow d_\uparrow\rangle +| u_\uparrow d_\uparrow u_\downarrow\rangle +| u_\downarrow d_\uparrow u_\uparrow\rangle
+| d_\uparrow u_\downarrow u_\uparrow\rangle +| d_\uparrow u_\uparrow u_\downarrow\rangle +| u_\downarrow u_\uparrow d_\uparrow\rangle ].
$$
We are left with mixed-symmetry octets and the flavor antisymmetric singlet, all necessarily spin 1/2, since they must be orthogonal to the spin 3/2 states. I frankly don't have a slick way to separate the octets, but it is straightforward to verify the orthogonality of the proton wavefunction, below, to the $\Delta^+_\uparrow$ of the previous paragraph,
$$
|p_\uparrow\rangle= \frac{1}{\sqrt {18}} [ 2| u_\uparrow d_\downarrow u_\uparrow \rangle + 2| u_\uparrow u_\uparrow d_\downarrow \rangle +2| d_\downarrow u_\uparrow u_\uparrow \rangle \\ - | u_\uparrow u_\downarrow d_\uparrow\rangle -| u_\uparrow d_\uparrow u_\downarrow\rangle -| u_\downarrow d_\uparrow u_\uparrow\rangle
-| d_\uparrow u_\downarrow u_\uparrow\rangle -| d_\uparrow u_\uparrow u_\downarrow\rangle -| u_\downarrow u_\uparrow d_\uparrow\rangle ].
$$
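The orthogonality claim can also be checked mechanically. A minimal sketch (coefficients copied from the two wavefunctions above; the overall normalizations $1/3$ and $1/\sqrt{18}$ are dropped since they don't affect whether the inner product vanishes):

```python
from collections import Counter

# Each 3-quark spin-flavor ket is a tuple like ('u+', 'd-', 'u+'),
# where +/- denote spin up/down. Values are the (unnormalized)
# integer coefficients from the Delta+ and proton wavefunctions.
delta_terms = Counter({
    ('u+','d-','u+'): 1, ('u+','u+','d-'): 1, ('d-','u+','u+'): 1,
    ('u+','u-','d+'): 1, ('u+','d+','u-'): 1, ('u-','d+','u+'): 1,
    ('d+','u-','u+'): 1, ('d+','u+','u-'): 1, ('u-','u+','d+'): 1,
})
proton_terms = Counter({
    ('u+','d-','u+'): 2, ('u+','u+','d-'): 2, ('d-','u+','u+'): 2,
    ('u+','u-','d+'): -1, ('u+','d+','u-'): -1, ('u-','d+','u+'): -1,
    ('d+','u-','u+'): -1, ('d+','u+','u-'): -1, ('u-','u+','d+'): -1,
})

# Inner product over the orthonormal basis kets: 3*(1*2) + 6*(1*(-1)) = 0
inner = sum(delta_terms[k] * proton_terms[k] for k in proton_terms)
print(inner)  # 0 -> the two states are orthogonal
```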
This one is fully symmetric, even though its flavor and spin parts are of mixed symmetry, individually! You now get to its confreres in the spin 1/2 flavor octet through flavor and spin ladder operators. In particular, you may also further descend to the octet spin 1/2 uds, called $\Sigma^0$, on your 3rd rung.
Finally, a state flavor-orthogonal to it in the same flavor octet is the iso-singlet spin-1/2 $\Lambda$ baryon,
$$
|\Lambda_\uparrow\rangle= \frac{1}{\sqrt{12}}[u_\uparrow d_\downarrow s_\uparrow -u_\downarrow d_\uparrow s_\uparrow -d_\uparrow u_\downarrow s_\uparrow +d_\downarrow u_\uparrow s_\uparrow + \hbox{perms}].
$$
Marvel at its full symmetry, despite its flavor and spin mixed-symmetries.
This has now populated your three rungs.
In summary, the states discussed above are a spin quartet of flavor decuplet, and a spin doublet of flavor octet, 56 states in all, (fitting into the 56 symmetric representation of SU(6), but that is another story, yet...)
You may now further inspect the rarer different parity P-wave (L=1) states of higher energy, but I doubt you are intrigued by them at this stage. | {
"domain": "physics.stackexchange",
"id": 58108,
"tags": "particle-physics, quantum-spin, pauli-exclusion-principle, baryons, color-charge"
} |
Is there any alternative characterization of sparsity of a signal in compressed sensing | Question: The starting assumption for compressed sensing (CS) is that the underlying signal is sparse in some basis, e.g., there are a maximum of non-zero Fourier-coefficients for an $s$-sparse signal. And real life experiences do show that the signals under consideration are often sparse.
The question is - given a signal, before sending out the compressively-sampled bits to the receiver and let her recover to the best of her abilities, is there a way to tell what its sparsity is, and if it is a suitable candidate for compressed sensing in the first place?
Alternatively, is there any additional/alternative characterization of sparsity that can tell us quickly whether CS will be useful or not. One can trivially see that the sender could do exactly what the receiver will do with some randomly chosen set of measurements, and then try to figure out the answer. But is there any alternate way to resolve this question ?
My suspicion is that something like this must have been studied, but I couldn't find a good pointer.
Note : I had posted this question in Mathoverflow, a few weeks back, but didn't get any answer. Hence the cross-post.
Answer: Indeed, there are ways in which sparsity, or information content, may be estimated at the acquisition device. The details, practicality, and actual usefulness of doing so are debatable and heavily dependent upon the context in which it is applied. In the case of imaging, one could determine areas of an image which are more or less compressible in a predetermined basis. For example, see "Saliency-Based Compressive Sampling for Image Signals" by Yu et al. In this case, the additional complexity requirements placed on the acquisition device provide marginal gains.
With regards to your questions about making determinations as to the usefulness of Compressed Sensing on a given signal at the time of acquisition: If the signal in question adheres to any kind of model known a priori, Compressed Sensing is possible. Accurate recovery is simply dependent upon the ratio between the number of measurements taken and the degree to which the sampled signal adheres to your model. If it is a bad model, you won't get past the phase transition. If it is a good model, then you will be able to calculate an accurate reconstruction of the original signal. Additionally, Compressed Sensing measurements are, in general, future-proofed. If you have a given number of measurements for a signal which are insufficient in number to accurately recover the original signal using the model you have today, then it is still possible to devise a better model tomorrow for which these measurements are sufficient for accurate recovery.
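As a rough illustration of the kind of check a sender could run (a sketch; the basis, test signals, and sparsity level here are arbitrary choices, not from the referenced papers): measure what fraction of the signal's energy the largest transform coefficients capture.

```python
import numpy as np

def energy_fraction(signal, s):
    """Fraction of total energy captured by the s largest DFT coefficients."""
    mags = np.sort(np.abs(np.fft.fft(signal)))[::-1]
    return np.sum(mags[:s] ** 2) / np.sum(mags ** 2)

n = 256
t = np.arange(n)
# Two on-grid tones: exactly 4 nonzero DFT bins, so nearly s-sparse in Fourier
sparse = np.cos(2 * np.pi * 10 * t / n) + 0.5 * np.cos(2 * np.pi * 40 * t / n)
# White noise: energy spread across all bins, not sparse in any fixed basis
noise = np.random.default_rng(1).normal(size=n)

print(energy_fraction(sparse, 4))  # close to 1 -> promising CS candidate
print(energy_fraction(noise, 4))   # small -> poor CS candidate
```

A fraction near 1 suggests the signal is compressible in that basis and hence a reasonable CS candidate; a small fraction suggests it is not, at least for that basis.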
Additional Note (edit):
The acquisition approach mentioned in your question sounded quite near to adaptive Compressed Sensing, so I thought the following might be of interest for the readers of this question. Recent results by Arias-Castro, Candes, and Davenport have shown that adaptive measurement strategies cannot, in theory, offer any significant gains over non-adaptive (i.e. blind) Compressed Sensing. I refer readers to their work, "On the Fundamental Limits of Adaptive Sensing" which should be appearing in the ITIT soon. | {
"domain": "dsp.stackexchange",
"id": 480,
"tags": "compressive-sensing"
} |
Inverter + Direct drive vs. Belt drive motor in washing machines? | Question: What should I choose from today.
I am looking at washing machines, and I see that LG has this Inverter Direct Drive (6 motion cleaning) technology in their machines. I see that it's not just a year-old technology, but I am also wondering: how long do these last?
So can someone tell me how does direct drive perform nowadays, especially in LG products;
If it's more reliable, then how much cheaper is fixing a belt drive?
Is there more to go wrong with that technology?
I see some manufacturers use only inverter technology. What does that mean to the average consumer? Should I choose that over non-inverter?
Answer: There are tons of advantages and disadvantages, but I will mention only the relevant cons and pros.
Indirect (belt) drive usually produces less noise in comparison to direct drive (continue power transmission vs discrete power transmission like gears and chains), but you know engineers had done a lot to reduce the noise in household equipment, unfortunately not yet in industrial sector.
Belts are far more prone to relaxation, and a relaxed belt can transmit less torque, as you might have noticed in the years before a belt gets replaced. Again, this is less observable for home and kitchen equipment.
Belt drives dampen vibrations, while direct drives produce more vibration, especially when the rpm goes up. But in washing machines the direction of rotation changes from time to time, and direct drive offers more stability.
The maintenance cost of belt drives is lower than that for direct drive, they are easier to disengage for repairing.
Belt drives apply greater load on the main shaft of the machine, it has a direct effect on the life time of bearings, shaft misalignment and as a result, oil leakage and more vibrations.
Direct drives require less space in comparison to belt drives, allowing smaller machines.
Belts are sensitive to the surrounding temperature, humidity, oil ..., so the probability of failure is higher plus the micro slip.
Inverters are a sort of speed and power control over electric motors, long story short, they make the rotation of induction motors smoother in comparison to classic motors without inverters, of course they cost the clients more, but it also regulates power consumption. | {
"domain": "engineering.stackexchange",
"id": 2429,
"tags": "mechanical-engineering, electrical-engineering, motors"
} |
Error using DRCSim (Gazebo) on hydro - Unable to read file[.../atlas.urdf] | Question:
Hi, I'm using DRCSim 2.6.6 and gazebo 1.8 on ROS Hydro. When I roslaunch atlas.launch this error appears:
failed to start local process: /opt/ros/hydro/lib/stereo_image_proc/stereo_image_proc __name:=stereo_proc __log:=/home/gustavo/.ros/log/eca8b864-fb95-11e2-b1fd-001e65d8adfa/multisense_sl-camera-stereo_proc-6.log
local launch of stereo_image_proc/stereo_image_proc failed
[multisense_sl/camera/stereo_proc-6] process has died [pid -1, exit code 127, cmd /opt/ros/hydro/lib/stereo_image_proc/stereo_image_proc __name:=stereo_proc __log:=/home/gustavo/.ros/log/eca8b864-fb95-11e2-b1fd-001e65d8adfa/multisense_sl-camera-stereo_proc-6.log].
log file: /home/gustavo/.ros/log/eca8b864-fb95-11e2-b1fd-001e65d8adfa/multisense_sl-camera-stereo_proc-6*.log
Error [parser.cc:636] Unable to read file[/usr/local/share/drcsim-2.6/gazebo_models/atlas_description/atlas/atlas.urdf]
Error [parser.cc:710] Error reading element <world>
Error [parser.cc:369] Unable to read element <sdf>
Error [Server.cc:284] Unable to read sdf file[atlas.world]
Error [ConnectionManager.cc:123] Connection Manager is not running
[gazebo-2] process has finished cleanly
This appears after a stereo_image_proc error. So, is this only a Gazebo issue, or is it related to the previous error?
Originally posted by gustavo.velascoh on Gazebo Answers with karma: 20 on 2013-08-02
Post score: 0
Answer:
I rebuilt Gazebo and DRCSim and now it works, but this warning keeps showing:
Warning: The notion of a group name for collision tags is not supported by URDF.
at line 411 in /tmp/buildd/ros-hydro-urdfdom-0.2.8-1precise-20130721-0020/urdf_parser/src/link.cpp
Originally posted by gustavo.velascoh with karma: 20 on 2013-08-04
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by nkoenig on 2013-08-08:
You can ignore that warning message. | {
"domain": "robotics.stackexchange",
"id": 3415,
"tags": "ros"
} |
Determining the maximum value that can be obtained from a loot | Question: A thief finds much more loot than his bag can fit. We have to find the most valuable combination of items assuming that any fraction of a loot item can be put into his bag.
For example, the thief has a bag which can fit at most 50 Kg of items. He has 3 items to choose from: the first item has a total value of $60 for 20 Kg, the second item has a total value of $100 for 50 Kg and the last item has a total value of $120 for 30 Kg.
So, if the thief takes the most of the item that costs more per unit Kg, he/she can make a better profit out of his/her thievery. In that case, he/she would take 30 Kgs of the third item, and 20 Kgs of the first item, resulting in a total of 50 Kgs (full capacity) with the total value of $180.
// THIS IS AN EXAMPLE OF THE FRACTIONAL KNAPSACK PROBLEM
#include<cstdio>
#include<iostream>
#include<vector>
using namespace std;
double get_max_index(vector<int>, vector<int>);
double get_max_value(vector<int>, vector<int>, int);
int main() {
int num_items, bag_capacity;
cout << "Enter the number of items: " << endl;
cin >> num_items;
cout << "Enter the total capacity that the bag can support: " << endl;
cin >> bag_capacity;
vector<int> values;
vector<int> weights;
for (int i = 0; i < num_items; i++)
{
int buff_val = 0, buff_wgt = 0;
cout << "Enter the value and weight of Item " << i + 1 << ": " << endl;
cin >> buff_val >> buff_wgt;
values.push_back(buff_val);
weights.push_back(buff_wgt);
}
cout.precision(10);
cout << "The maximum loot value that can be acquired is " << fixed << get_max_value(values, weights, bag_capacity) << "." << endl; // Always display 'precision' number of digits
return 0;
}
double get_max_index(vector<int> vals, vector<int> wgts){
int max_index = 0;
double max_val_per_wgt = 0;
for (int i = 0; i < wgts.size(); i++)
{
if (wgts[i] != 0 && (double) vals[i] / wgts[i] > max_val_per_wgt)
{
max_val_per_wgt = (double) vals[i] / wgts[i];
max_index = i;
}
}
return max_index;
}
double get_max_value(vector<int> vals, vector<int> wgts, int capacity){
double max_val = 0.0;
for(int i = 0; i < wgts.size(); i++) {
if (capacity == 0)
{
return max_val; // There's no space left in the bag to carry
}
int max_value_index = get_max_index(vals, wgts); // See which item has the best value per weight index
double taken = capacity > wgts[max_value_index] ? wgts[max_value_index] : capacity; // get the minimum of the item's weight and capacity left in the bag
max_val += taken * (double) vals[max_value_index] / wgts[max_value_index]; // calculate value for the item's weight taken
capacity -= taken; // reduce capacity of the bag by the amount of weight of item taken
wgts[max_value_index] -= taken; // reduce the item's weight by the amount taken
}
return max_val;
}
Please review this code in terms of complexity or in other areas that might result in this code failing. Also, looking forward to criticism and if you decide to post criticism, kindly include relevant details that a not-so-experienced programmer might find helpful in solving the problem.
Answer: The algorithm
Your algorithm needs O(n) space (for storing all items), and O(n * k) time (for selecting the best ones) (n = items to choose from, k = items chosen, bounded by n).
By choosing as you go, you can get that down to O(k') space and O(n * k') time (k' = maximum items chosen at any time, at least k, bounded by n).
Take a look at std::push_heap, std::tuple and lambdas for implementing the new algorithm.
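For illustration only, here is the same greedy, heap-driven selection sketched in Python (using `heapq`, Python's analogue of the suggested `std::push_heap`; this is a sketch of the algorithm, not a drop-in replacement for the C++ code):

```python
import heapq

def fractional_knapsack(items, capacity):
    """Greedy fractional knapsack: take items in order of value per weight.

    items: iterable of (value, weight) pairs; zero-weight items are skipped.
    A max-heap keyed on value/weight lets us stop as soon as the bag is full
    instead of rescanning all items on every pick.
    """
    heap = [(-v / w, v, w) for v, w in items if w > 0]
    heapq.heapify(heap)
    total = 0.0
    while heap and capacity > 0:
        neg_density, v, w = heapq.heappop(heap)
        take = min(w, capacity)       # take the whole item, or what fits
        total += take * (v / w)       # fractional value for the weight taken
        capacity -= take
    return total

# The example from the question: 50 Kg bag, items ($60, 20 Kg),
# ($100, 50 Kg), ($120, 30 Kg)
print(fractional_knapsack([(60, 20), (100, 50), (120, 30)], 50))  # 180.0
```

Heapifying is O(n) and each pop is O(log n), so selecting k items costs O(n + k log n) instead of the O(n * k) rescan in the original code.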
Avoid casting
Casting is error-prone, as it circumvents the protections of the type-system. Thus, use the most restricted cast you can, and don't cast at all if reasonably possible. In your case, why not multiply with 1.0 instead?
Floating-point is hard
You are calculating the specific value (value per weight) of your items for comparison purposes. Luckily, a double has enough precision that you are extremely unlikely to suffer from rounding-errors when dividing two 32 bit numbers. Still, instead of comparing 1.0 * a_value / a_weight < 1.0 * b_value / b_weight you could compare 1LL * a_value * b_weight < 1LL * b_value * a_weight, avoiding division and floating-point.
All those useless copies
While copying small trivial types is generally the right approach, a std::vector is neither small nor trivial; Copying it is rather expensive. If you only need to read it, use a constant reference or preferably a C++2a std::span for increased flexibility.
Gracefully handle all kinds of invalid input
No need to assume malice, you are assured your load of garbage anyway.
Just the code
I don't see where you use anything from <cstdio>, so don't include it.
Never import a namespace wholesale which isn't designed for it. Especially std is huge and ever-changing, which can silently change the meaning of your code even between minor revisions of your toolchain, let alone using a different one.
It cannot be guaranteed to break noisily.
return 0; is implicit for main(). Make of that what you will. | {
"domain": "codereview.stackexchange",
"id": 35180,
"tags": "c++, algorithm"
} |
How to tell where a meteorite came from | Question: I am told in a lecture that some particular meteorite is from Vesta. How can we know that a particular meteorite is from there?
Answer: Meteorites have distinct compositions and so meteorites with similar compositions can be grouped into families. There are, for example the iron meteorites, meteorites with lots of carbon and so on.
One family is called the HED meteorites. They don't contain chondrules and show evidence of igneous processing (the rocks have been melted). They are differentiated. This suggests that they didn't form from the raw materials in the solar system, but formed on a larger body, and then were ejected into space. They look very like igneous rocks on Earth.
It is suspected that these come from Vesta. They don't come from the Moon or Mars (we know about the composition of these worlds and they don't match). Vesta is big enough to be differentiated, and in the right place to allow for pieces of rock ejected by impacts to be perturbed into Earth-crossing orbits. Observations by the Dawn probe are consistent with Vesta being the source of HED meteorites (By contrast, Ceres has a different composition and so is not a likely source), and there is a large impact crater (the Rheasilvia crater) that could have ejected a huge amount of rock into orbit around the sun. Some of the debris from this impact would still be falling to Earth, even after a billion years.
So Vesta fits the requirements. That isn't quite proof that these HED meteorites did come from Vesta, but it is a reasonable belief. | {
"domain": "astronomy.stackexchange",
"id": 5377,
"tags": "asteroids"
} |
Relation of Angular and Linear velocity with radius of circular path | Question: Linear/tangential velocity in a circular path increases with the increase in radius and decreases with the decrease in radius. Hence, the angular velocity remains the same no matter what the change in radius is ($\omega = v/r$). However, when we talk about the conservation of angular momentum, we say that since the momentum is conserved, as we increase the radius the linear velocity must decrease to keep it constant (because $L = mvr$), which implies that it is inversely related to the radius, which in turn implies that angular velocity is inversely proportional to the square of the radius (when an ice skater brings his arms inwards, he rotates with greater angular velocity). But it is clear from what's stated above that angular velocity must remain the same regardless of what the radius is, isn't it?
Answer: It seems to me that you mix up some things. What you are relating to is the angular momentum of a single point particle,
$$
\vec{l} = mrv\vec{e}_z = mr^2\omega \vec{e}_z,
$$
which I have written here directly depending on the angular velocity $\omega$. So you see that a point particle has a fixed angular velocity $\omega$ as long as $l$, $r$ and $m$ are fixed. To change it, a torque has to be applied. Actually, it is defined as the change of angular momentum ($D = \dot{l}$).
When you are referring to the ice skater, it is important that this deals with the rotation of a rigid body, i.e. a system of individual point masses. The total angular momentum of this system is obtained by integrating (or for a discrete set of particles, summed) over all infinitesimal contributions of the point particles. This obviously depends on the geometry of the object. The information about the object is contained in the so-called tensor of inertia,
$$
\vec{L} = I \cdot \vec{\omega}
$$
To cut a long story short, the ice-skater increases $\omega$ when contracting herself because there is no torque on the system as a whole. Let's assume the ice-skater is a cylinder, for which one can find $I = \frac{m}{2}r^2$, so that $L = \frac{m}{2} r^2 \omega$. When the ice-skater is reducing her radius, we find an $\omega$-gain of
$$
\frac{\omega_2}{\omega_1} = \frac{r_1^2}{r_2^2}.
$$ | {
"domain": "physics.stackexchange",
"id": 35073,
"tags": "angular-momentum, velocity"
} |
Simple balanced trees with O(1) concat? | Question: In Purely Functional Worst Case Constant Time Catenable Sorted Lists, Brodal et al. present purely functional balanced trees with O(1) concatenate and O(lg n) insert, delete, and find. The data structure is somewhat complicated.
Is there a simpler balanced search tree with O(1) concatenate, functional or not?
Answer: You can trivially make a data structure with O(1) amortized concatenation time, by just reinserting everything from one tree into the other on concatenation (which has O(n log n) cost, exactly the same as was used in constructing that tree in the first place, so the overall time is still O(n log n)), but this is cheating.
For worst-case O(1) time, the authors claim it was an open problem for any data structure, so I don't think you're going to find an easy answer. | {
"domain": "cstheory.stackexchange",
"id": 113,
"tags": "functional-programming, ds.data-structures"
} |
Why is the slope of pressure and volume almost zero below critical point for liquefaction of gas? | Question:
The image shows the Andrews isotherms regarding liquefaction of gases.
Why is the $PV$ curve almost horizontal from point $B$ to $C$?
I want to understand why a small increment in pressure results in a large change in volume in the above-mentioned zone. What is the atomic reason that leads to this?
Thank you in advance.
Answer: Inside the gray zone you have liquid in equilibrium with vapor. At any given temperature, this equilibrium exists at only one pressure. If you hold temperature constant and change the volume (move along the x-axis) you will cause some liquid to evaporate or some vapor to condense, but provided you change the volume slowly enough that equilibrium can more or less be maintained, the pressure stays essentially the same. | {
"domain": "physics.stackexchange",
"id": 75765,
"tags": "thermodynamics, phase-transition, ideal-gas, gas, liquid-state"
} |
Why Front part of a body undergoing rolling pushes the surface a "bit more"? | Question: Original Post : here
On the accepted answer , it was said that the Normal Force is more on the right side of the centre of mass which provides an anti-torque to the rotation of the body which slows down the rolling.
I also found some similar explanations on "Why a rolling Body Slows Down" in the book "Concepts of Physics by HC Verma"
In the second picture , you can see that it is written that the Normal Force is shifted Right of the center of mass because the front part pushes the surface a bit more . Here it is :
In fact, when the sphere rolls on the table, both the sphere and the surface deform near the contact. The contact is not at a single point as we normally assume; rather, there is an area of contact. The front part pushes the table a bit more strongly than the back part. As a result the normal force doesn't pass through the center; it is shifted towards the right. This force then has an anticlockwise torque. The net torque causes an angular deceleration.
But it is not explicitly explained (neither in the book, nor in the answer of the above-mentioned post) why the front side pushes it a "bit more" than the back side.
Why does this happen?
Answer:
But it is not explicitly explained(neither in the book , nor in the
answer of the above mentioned post) why the front side pushes it a
"bit more" than the back side.
It is due to the viscoelastic behavior of the contacting materials.
For purely elastic materials the relationship between stress and strain is linear so that the loading and unloading (compressing and uncompressing) forces are equal. See the diagram at the left below.
Viscoelastic materials behave like elastic materials in that both eventually recover from deformation when the load is removed. See the diagram to the right below. However, the viscous behavior of a viscoelastic material is such that the stress (force) during unloading is less than that during loading for the same amount of deformation, giving the material a strain-rate-dependent, time-dependent response. The area in red between the loading and unloading curves represents the hysteresis heat loss. In contrast with ideal elastic behavior, the deformation of a viscoelastic material does not recover right after the load is removed. In other words, there is a time delay for the material strain to fully recover, which is not shown in the diagram to the right.
In terms of say a tire rolling, the above means the forces acting on the leading portion of the tire (in the direction of motion) in contact with the road under compression (loading) are greater than the forces acting on the trailing portion of the tire in contact with the road under decompression (unloading). The overall result is the difference between the compression and decompression forces results in a net torque counter to the rotation of the tire.
Hope this helps. | {
"domain": "physics.stackexchange",
"id": 68129,
"tags": "forces, rotational-dynamics, rotational-kinematics"
} |
Is there a difference in handwritten nabla $\vec{\nabla}$ with an overset arrow and typeset nabla $\nabla$? | Question: According to some physicist at KIT it is usual to write the following when using pen and paper:
whereas in typeset texts you write $\nabla$.
Is that true? Are there sources for this convention?
Answer:
Yes, there are sometimes different conventions for indicating vectors in hand-writing and printing.
Yes, overset arrows in handwriting and boldface in printing is one of those conventions.
No, it is not the only convention.
Yes, you should familiarize yourself with the most common conventions in your sub-discipline.
Yes, you should read the section on notation in the introduction to each book before proceeding. (And if writing a book you should write a section on notation.)
Yes, it gets more complicated still if you want to visually distinguish more than just two types of values (say scalars, three-vectors and four-vectors). | {
"domain": "physics.stackexchange",
"id": 15680,
"tags": "vectors, notation, conventions, differentiation"
} |
Sound frequency of dropping bomb | Question: Everyone has seen cartoons of bombs being dropped, accompanied by a whistling sound as they drop. This sound gets lower in frequency as the bomb nears the ground.
I've been lucky enough to not be near falling bombs, but I assume this sound is based on reality.
Why does the frequency drop? Or does it only drop when it is fallling at an oblique angle away from you, and is produced by doppler shift?
I would have thought that most bombs would fall pretty much straight down (after decelerating horizontally), and therefore they would always be coming slightly closer to me (if I'm on the ground), and thus the frequency should increase.
Answer: I can't claim any experimental experience in this area (fortunately :-) but I thought it was interesting enough to be worth a bit of Googling. The results suggest there is a difference between shells and bombs.
There is an extensive collection of eye witness accounts of WW2 at http://www.bbc.co.uk/history/ww2peopleswar/categories/, and searching this suggests that falling bombs make little if any sound. I couldn't find any of the eye witness accounts that mentioned a whistling sound.
However if you Google for stories from, for example, the current troubles in Syria there are lots of reports of the whistling sounds shells make. Chapter 5 of The Art of Noises describes the stereotypical whistling sound falling in tone, and as this dates from the years before Hollywood it's presumably relatively uncontaminated. The author attributes this to fact that the shell velocity is highest immediately after firing and falls during flight due to air resistance.
It's probably relevant that shells are generally fired at greater than the speed of sound so you wouldn't hear them approaching. You'd only hear them after they passed you, and of course the sound of those shells would be red shifted. | {
"domain": "physics.stackexchange",
"id": 10405,
"tags": "acoustics, frequency, doppler-effect"
} |
Square spiral matrix | Question: I've written a Python script to generate the closest square spiral matrix to a given number.
I'm new to code reviews as a practice, and I'm interested in ways to improve. Please suggest improvements to the code where you see fit, particularly in regards to:
Algorithm: Is there a faster/more elegant way to generate the matrix?
Style: I've tried to adhere to PEP8.
"""
Outputs the closest square spiral matrix of an input number
"""
from math import sqrt
def is_even_square(num):
"""True if an integer is even as well as a perfect square"""
return sqrt(num).is_integer() and (num % 2 == 0)
def find_nearest_square(num):
"""Returns the nearest even perfect square to a given integer"""
for i in range(num):
if is_even_square(num - i):
return num - i
elif is_even_square(num + i):
return num + i
def find_lower_squares(num):
"""Returns a list of even perfect squares less than a given integer"""
squares = []
for i in range(num, 3, -1):
if is_even_square(i): squares.append(i)
return squares
def nth_row(num, n):
"""Returns the nth row of the square spiral matrix"""
edge = int(sqrt(num))
squares = find_lower_squares(num)
if n == 0:
return list(range(num, num - edge, -1))
elif n >= edge - 1:
return list(range(num - 3*edge + 3, num - 2*edge + 3))
elif n < edge // 2:
return ([squares[1] + n] + nth_row(squares[1],n-1)
+ [num - edge - n + 1])
else:
return ([num - 3*edge + 4 + n - edge] + nth_row(squares[1],n-1)
+ [num - 2*edge + 1 - n + edge])
def generate_square_spiral(num):
"""Generates a square spiral matrix from a given integer"""
edge = int(sqrt(num))
square_spiral = [[None for x in range(edge)] for y in range(edge)]
for row in range(edge): square_spiral[row] = nth_row(num, row)
return square_spiral
def main ():
num = None
while not num:
try:
num = int(input('Input number: '))
except ValueError:
print('Invalid Number')
nearest_square = find_nearest_square(num)
matrix = generate_square_spiral(nearest_square)
for row in range(len(matrix[0])):
for col in range(len(matrix)):
if matrix[row][col] < 10:
print(' ',matrix[row][col],' ',sep='',end='')
elif matrix[row][col] < 100:
print(' ',matrix[row][col],' ',sep='',end='')
else:
print(matrix[row][col],' ',sep='',end='')
print(2*"\n",end='')
if __name__ == '__main__':
main()
Answer: Your code is overall clean and well written.
You should use list comprehension:
def find_lower_squares(num):
"""Returns a list of even perfect squares less than a given integer"""
squares = []
for i in range(num, 3, -1):
if is_even_square(i): squares.append(i)
return squares
Should become:
def find_lower_squares(num):
"""Returns a list of even perfect squares less than a given integer"""
return [i for i in range(num, 3, -1) if is_even_square(i)]
Scientific/mathematical code like this greatly benefits from automated testing, let me show you an example:
import doctest
def find_lower_squares(num):
"""
Returns a list of even perfect squares less than a given integer
>>> find_lower_squares(40)
[36, 16, 4]
"""
return [i for i in range(num, 3, -1) if is_even_square(i)]
doctest.testmod()
This gives double benefits:
Changes to the code can be made without being worried: if you break something, the error message will show up immediately
Code readability and documentation increase a lot, the reader can see some examples of the function being used and will understand it faster and better.
Avoid None-important assignment
In mid-level languages such as C you must declare arrays first and then fill them. In high-level languages such as Python you should avoid such low-level behaviour and insert the values into the arrays directly:
def generate_square_spiral(num):
"""
Generates a square spiral matrix from a given integer
>>> generate_square_spiral(5)
[[5, 4], [2, 3]]
>>> generate_square_spiral(17)
[[17, 16, 15, 14], [5, 4, 3, 13], [7, 1, 2, 12], [8, 9, 10, 11]]
"""
edge = int(sqrt(num))
square_spiral = [nth_row(num, row) for row in range(edge)]
return square_spiral
Also we could remove the unnecessary assignment:
edge = int(sqrt(num))
return [nth_row(num, row) for row in range(edge)] | {
"domain": "codereview.stackexchange",
"id": 12382,
"tags": "python, python-3.x, matrix"
} |
How to simulate plane Poiseuille flow using Molecular Dynamics? | Question: I want to simulate 2D plane Poiseuille flow using molecular dynamics (velocity Verlet algorithm) but am not able to understand exactly how to do this. The boundary conditions are fine; I am confused about the forces to use in the Verlet algorithm. I may use Lennard-Jones but am not sure if it is okay to use. Please help me with the algorithm only and suggest suitable references. Thanks.
Answer: Probably the best reference is Nonequilibrium molecular dynamics: theory, algorithms and applications by Billy D Todd and Peter J Daivis, Cambridge University Press (2017). Chapter 9 discusses how to set up a simulation of this kind. They recommend applying a constant external force per atom in the flow direction, confining the system between parallel walls consisting of atoms (for example, Lennard-Jones atoms) and applying periodic boundary conditions in other directions. The fluid atoms would interact with each other, for example also with LJ forces, and with the wall atoms, once more with LJ interactions. You can choose to make the fluid-fluid, and fluid-wall, interactions of different strength.
There is a small subtlety in this choice of external force: it implies that the pressure gradient, which is proportional to the number density multiplied by the force per atom, is not constant across the cavity (because the number density varies, near the walls). However they argue that this is still OK.
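To make those ingredients concrete, here is a deliberately tiny velocity-Verlet sketch in Python: 2D Lennard-Jones particles with periodic boundaries and a constant body force along x standing in for the pressure gradient. All parameters are illustrative reduced units, and the confining walls and thermostat are omitted:

```python
# Illustrative reduced LJ units (not from the book)
N_SIDE, SPACING = 4, 1.5          # 4x4 square lattice of particles
BOX = N_SIDE * SPACING            # periodic box length in both directions
DT, MASS, G = 0.005, 1.0, 0.1     # time step, mass, body force per unit mass
EPS, SIG, RCUT = 1.0, 1.0, 2.5    # Lennard-Jones parameters and cutoff

def forces(pos):
    """LJ pair forces with the minimum-image convention, plus the body force in x."""
    f = [[MASS * G, 0.0] for _ in pos]
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            dx = pos[i][0] - pos[j][0]
            dy = pos[i][1] - pos[j][1]
            dx -= BOX * round(dx / BOX)
            dy -= BOX * round(dy / BOX)
            r2 = dx * dx + dy * dy
            if r2 < RCUT * RCUT:
                s6 = (SIG * SIG / r2) ** 3
                fmag = 24.0 * EPS * (2.0 * s6 * s6 - s6) / r2
                f[i][0] += fmag * dx; f[i][1] += fmag * dy
                f[j][0] -= fmag * dx; f[j][1] -= fmag * dy
    return f

pos = [[(i + 0.5) * SPACING, (j + 0.5) * SPACING]
       for i in range(N_SIDE) for j in range(N_SIDE)]
vel = [[0.0, 0.0] for _ in pos]
f = forces(pos)
for _ in range(200):                       # velocity-Verlet integration
    for p, v, fi in zip(pos, vel, f):
        p[0] = (p[0] + v[0] * DT + 0.5 * fi[0] / MASS * DT * DT) % BOX
        p[1] = (p[1] + v[1] * DT + 0.5 * fi[1] / MASS * DT * DT) % BOX
    f_new = forces(pos)
    for v, fi, fn in zip(vel, f, f_new):
        v[0] += 0.5 * (fi[0] + fn[0]) / MASS * DT
        v[1] += 0.5 * (fi[1] + fn[1]) / MASS * DT
    f = f_new

mean_vx = sum(v[0] for v in vel) / len(vel)
print(mean_vx)   # close to G * t = 0.1: the flow accelerates along x
```

Without a thermostat the drift just keeps growing; in a real Poiseuille setup the wall interactions and the thermostat would balance the body force into a steady parabolic profile.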
There is more subtlety associated with the choice to either fix the wall atoms or allow them to vibrate, connected to each other by springs, as they would in a real solid. I've seen both methods used: I think they generally prefer to allow these atoms to move, otherwise the collisions of fluid atoms with them can be a bit unphysical (specular reflection or rebounding), but this may not worry you too much. Finally, you'll need to thermostat the system, otherwise it will heat up. Many people apply the thermostat to all the atoms, but I think Todd and Daivis prefer the more physical approach of just thermostatting the wall atoms, so the heat is conducted away through the walls. Again, this may not be a critical issue for you. | {
"domain": "physics.stackexchange",
"id": 56782,
"tags": "molecular-dynamics"
} |
ROS Answers SE migration: ROS on Windows | Question:
Hi all,
I would like to run a very thin client on Windows. It is supposed to be only a message sender. I found this link on the ROS web site: http://www.ros.org/wiki/cturtle/Installation/Windows
It asks me to install roslib and rospy on my Windows client. The question is how this is possible. Is there a binary file that I can install on Windows, or do I have to install them from source? They have dependencies on other packages. What should I do with them?
Bests,
Shams
Originally posted by Shams on ROS Answers with karma: 1 on 2012-04-25
Post score: 0
Original comments
Comment by joq on 2012-04-25:
Now that Fuerte is released, this should be easier, but I don't know how to do it.
Answer:
Update
It is possible to install and use ROS Lunar on Windows 10 thanks to WSL:
http://wiki.ros.org/Installation/Windows
This is the best way to use ROS on Windows right now, though it is not perfect.
Please see the more recent Fuerte-era ROS Windows instructions. ROS on Windows is a work in progress, and you should always use the most recent release.
http://www.ros.org/wiki/fuerte/Installation/Windows
Originally posted by kwc with karma: 12244 on 2012-04-25
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by gvdhoorn on 2017-08-07:
I believe it's important to recognise that ROSonWSL is not the same as having a native Win32/64 ROS. WSL does not have the same level of access to Windows resources, so 'driver' components wanting to access hw (usb, gpus for gpgpu fi) will not work (easily). | {
"domain": "robotics.stackexchange",
"id": 9130,
"tags": "ros, roslib, windows, rospy"
} |
Find maximal difference between two consecutive numbers | Question: You are given $n$ real (unsorted) numbers $x_1, x_2, \ldots, x_n$. We want to compute the maximal difference between two consecutive numbers in sorted order. Explain how to do it in $O(n)$ time when you are allowed to use additions, subtractions, multiplications, divisions and the "floor" function. Notice that sorting immediately leads to $O(n \log n)$ time.
Please help !
Answer: Determine the minimum $a$ and maximum $b$ in a first pass.
Create $n+1$ buckets, each of width $\frac{b-a}{n+1}$, with the first beginning at $a$. We have $n+1$ buckets and only $n$ points, so by the pigeonhole principle, there must be at least one empty bucket. Since we know that the first and last buckets contain at least one point, we know furthermore that there must be an empty bucket with a nonempty bucket somewhere to its left and another nonempty bucket somewhere to its right, implying that the solution must be at least $\frac{b-a}{n+1}$.
This means that no two points within the same bucket can provide the solution, since they must be strictly less than $\frac{b-a}{n+1}$ apart from each other. So, any solution will consist of a pair of points from different buckets.
A valid solution must consist of the rightmost point from some bucket $i$ and the leftmost point from some bucket $j > i$ with all buckets in between (which may be no buckets at all, when $j=i+1$) empty. So it is sufficient to know, for each bucket, its leftmost and rightmost points.
To compute these, iterate through all the points again, updating the leftmost and rightmost positions of the current point's bucket whenever necessary. (For the first point in a bucket, both of these positions will need to be updated.) A point's bucket can be computed by subtracting $a$ and dividing by $\frac{b-a}{n+1}$.
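Putting the passes together (minimum/maximum, bucketing, and the final bucket scan described next), a sketch in Python; `max_gap` is a made-up name:

```python
import math

def max_gap(xs):
    """Largest difference between consecutive values of xs in sorted order, in O(n)."""
    n = len(xs)
    if n < 2:
        return 0.0
    a, b = min(xs), max(xs)            # first pass
    if a == b:
        return 0.0
    width = (b - a) / (n + 1)          # n + 1 buckets over [a, b]
    lo = [None] * (n + 1)              # leftmost point per bucket
    hi = [None] * (n + 1)              # rightmost point per bucket
    for x in xs:                       # second pass
        # clamp so that x == b falls into the last bucket
        k = min(int(math.floor((x - a) / width)), n)
        if lo[k] is None or x < lo[k]:
            lo[k] = x
        if hi[k] is None or x > hi[k]:
            hi[k] = x
    best, prev = 0.0, hi[0]            # third pass, over the buckets
    for k in range(1, n + 1):
        if lo[k] is not None:
            best = max(best, lo[k] - prev)
            prev = hi[k]
    return best

print(max_gap([1.0, 3.0, 9.0, 10.0]))  # 6.0 (between 3 and 9)
```

Within a bucket no pair can beat $\frac{b-a}{n+1}$, so tracking only each bucket's extremes is enough, exactly as argued above.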
Finally, iterate through all the buckets, keeping track of the rightmost point in the last nonempty bucket seen, and comparing it with the leftmost point in the current bucket: Update the maximum whenever this exceeds the current maximum. | {
"domain": "cs.stackexchange",
"id": 17248,
"tags": "algorithms"
} |
Problem with my launch file | Question:
I have created a client (named kouna_to_asistola_client.cpp, located in the turtlebot2 pkg) that moves the turtlebot in Gazebo. It works with the rosrun command. I want to make it work with the roslaunch command, but when I try to include the node in the launch file I get the following error:
[ERROR] [1333630193.884536832]: Failed to call service for setmodelstate
[kouna_to_asistola-5] process has died [pid 26078, exit code 1].
log files: /home/megalicious/.ros/log/d6c8e168-7f1d-11e1-bb54-14dae9034270/kouna_to_asistola-5*.log
this is my client:
#include "ros/ros.h"
#include "gazebo_msgs/SetModelState.h"
#include "gazebo_msgs/GetModelState.h"
#include "gazebo_msgs/GetPhysicsProperties.h"
int main(int argc, char** argv)
{
ros::init(argc, argv, "kouna_to_asistola");
ros::NodeHandle n;
for (double t=0.1; t<=20000; t+=0.001)
{
ros::ServiceClient gmscl=n.serviceClient<gazebo_msgs::GetModelState>("/gazebo/get_model_state");
gazebo_msgs::GetModelState getmodelstate;
getmodelstate.request.model_name ="turtlebot";
gmscl.call(getmodelstate);
ros::ServiceClient gphspro=n.serviceClient<gazebo_msgs::GetPhysicsProperties>("/gazebo/get_physics_properties");
gazebo_msgs::GetPhysicsProperties getphysicsproperties;
gphspro.call(getphysicsproperties);
geometry_msgs::Pose pose;
geometry_msgs::Twist twist;
twist.linear.x = 0.020;
twist.linear.y = 0.020;
twist.linear.z = 0.0;
twist.angular.x = 0.0;
twist.angular.y = 0.0;
twist.angular.z = 0.0;
pose.position.x = getmodelstate.response.pose.position.x + 0.05;
pose.position.y = getmodelstate.response.pose.position.y + 0.05;
pose.position.z = 0.0;
pose.orientation.x = 0.0;
pose.orientation.y = 0.0;
pose.orientation.z = 0.0;
ros::ServiceClient client = n.serviceClient<gazebo_msgs::SetModelState>("/gazebo/set_model_state");
gazebo_msgs::SetModelState setmodelstate;
gazebo_msgs::ModelState modelstate;
modelstate.model_name ="turtlebot";
modelstate.pose = pose;
modelstate.twist = twist;
setmodelstate.request.model_state=modelstate;
if (client.call(setmodelstate))
{
ROS_INFO("BRILLIANT AGAIN!!!");
ROS_INFO("%f",getphysicsproperties.response.time_step);
ROS_INFO("%f",modelstate.pose.position.x);
ROS_INFO("%f",modelstate.pose.position.y);
ROS_INFO("%f",modelstate.twist.linear.x);
ROS_INFO("%f",getmodelstate.response.pose.position.y);
}
else
{
ROS_ERROR("Failed to call service for setmodelstate ");
return 1;
}
}
}
Originally posted by Penny on ROS Answers with karma: 41 on 2012-04-05
Post score: 0
Answer:
One potential problem I can see is that you are not waiting for the service servers to come up first. Secondly, you might not want to use service calls for fast pose updates; rather, I recommend using latched ROS topics or writing a Gazebo plugin to accomplish this. Here is an example plugin template. Nevertheless, below is your code with some minor fixes.
#include "ros/ros.h"
#include "gazebo_msgs/SetModelState.h"
#include "gazebo_msgs/GetModelState.h"
#include "gazebo_msgs/GetPhysicsProperties.h"
int main(int argc, char** argv)
{
ros::init(argc, argv, "kouna_to_asistola");
ros::NodeHandle n;
ros::ServiceClient gmscl=n.serviceClient<gazebo_msgs::GetModelState>("/gazebo/get_model_state");
ros::ServiceClient gphspro=n.serviceClient<gazebo_msgs::GetPhysicsProperties>("/gazebo/get_physics_properties");
ros::ServiceClient client = n.serviceClient<gazebo_msgs::SetModelState>("/gazebo/set_model_state");
gmscl.waitForExistence();
gphspro.waitForExistence();
client.waitForExistence();
for (double t=0.1; t<=20000; t+=0.001)
{
gazebo_msgs::GetModelState getmodelstate;
getmodelstate.request.model_name ="turtlebot";
gmscl.call(getmodelstate);
gazebo_msgs::GetPhysicsProperties getphysicsproperties;
gphspro.call(getphysicsproperties);
geometry_msgs::Pose pose;
geometry_msgs::Twist twist;
twist.linear.x = 0.020;
twist.linear.y = 0.020;
twist.linear.z = 0.0;
twist.angular.x = 0.0;
twist.angular.y = 0.0;
twist.angular.z = 0.0;
pose.position.x = getmodelstate.response.pose.position.x + 0.05;
pose.position.y = getmodelstate.response.pose.position.y + 0.05;
pose.position.z = 0.0;
pose.orientation.x = 0.0;
pose.orientation.y = 0.0;
pose.orientation.z = 0.0;
gazebo_msgs::SetModelState setmodelstate;
gazebo_msgs::ModelState modelstate;
modelstate.model_name ="turtlebot";
modelstate.pose = pose;
modelstate.twist = twist;
setmodelstate.request.model_state=modelstate;
if (client.call(setmodelstate))
{
ROS_INFO("BRILLIANT AGAIN!!!");
ROS_INFO("%f",getphysicsproperties.response.time_step);
ROS_INFO("%f",modelstate.pose.position.x);
ROS_INFO("%f",modelstate.pose.position.y);
ROS_INFO("%f",modelstate.twist.linear.x);
ROS_INFO("%f",getmodelstate.response.pose.position.y);
}
else
{
ROS_ERROR("Failed to call service for setmodelstate ");
return 1;
}
}
}
Originally posted by hsu with karma: 5780 on 2012-04-07
This answer was ACCEPTED on the original site
Post score: 4 | {
"domain": "robotics.stackexchange",
"id": 8875,
"tags": "gazebo, turtlebot"
} |
Time Evolving Matrix Equation to find Eigenvalues | Question: So I am looking at this paper (Instability and control of a periodically-driven Bose-Einstein condensate) and I am interested in solving equation 8, which reads
$$i\frac{d}{dt}\begin{bmatrix} u(t)\\v(t) \end{bmatrix}=L(q,t)\begin{bmatrix} u(t)\\v(t) \end{bmatrix}$$
It is then stated
To find the corresponding Floquet
states, we numerically evolve Eq. 8 over one period of
driving, using the 2x2 identity matrix as the initial state.
The result of this procedure is the single-period propagator
U. The eigenstates of U are then the excitation Floquet
states, while its eigenvalues are related to the excitation
quasienergies via $\lambda_i=\exp[-i\epsilon_i T]$.
I am confused as to how to do this. I have tried just solving for u(t) and v(t) through coupled differential equations, but from there I am not sure if I just say that I can solve the earlier equation
we now introduce a perturbation $\alpha_n(t) = \alpha_n^{(0)}(t)\,(1+u(t)\exp[iqn]+v^*(t)\exp[-iqn])$
For $\alpha(t)$ and say that $\alpha_n(t)=\alpha_n(0)\exp[-i\lambda_n(t)T]$, since when I numerically solve the matrix equation I get a number (for $n=0$) for $\alpha(T)$ that I can just invert and solve for.
I appreciate any advice on how to perform the time evolution and find the U matrix!
Answer: The single-period propagator $U$ is defined as follows:
$$ \begin{pmatrix} u(T) \\ v(T) \end{pmatrix} = U \begin{pmatrix} u(0) \\ v(0) \end{pmatrix} , $$
the eigenvalues of $U$ are related to the Floquet quasienergies as stated in the paper.
To determine the matrix $U$, take the differential equation
$$ \mathrm i \frac{\mathrm d}{\mathrm dt} \begin{pmatrix} u_1(t) & u_2(t) \\ v_1(t) & v_2(t) \end{pmatrix} = \mathcal L(q,t) \begin{pmatrix} u_1(t) & u_2(t) \\ v_1(t) & v_2(t) \end{pmatrix} . $$
Solve it numerically (for fixed $q$ and other parameters) with the initial conditions $u_1(0) = v_2(0) = 1$, $u_2(0) = v_1(0) = 0$.*
Then,
$$ U = \begin{pmatrix} u_1(T) & u_2(T) \\ v_1(T) & v_2(T) \end{pmatrix} .$$
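Here is a minimal numerical sketch of that procedure in plain Python (no libraries): evolve the 2×2 identity with fixed-step RK4 over one period, then read the quasienergies off the eigenvalues of $U$. A constant diagonal generator stands in for the paper's $\mathcal L(q,t)$ so the answer is known in advance, namely $\epsilon = \pm\omega$; the frequency, period and step count are all illustrative:

```python
import cmath

w, T = 0.7, 2.0                      # illustrative frequency and period
L = [[w, 0.0], [0.0, -w]]            # stand-in for the paper's L(q, t)

def rhs(t, M):
    """i dM/dt = L M, so dM/dt = -i L M (both columns evolved at once)."""
    return [[-1j * sum(L[i][k] * M[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def axpy(a, X, Y):
    """Elementwise a*X + Y for 2x2 matrices."""
    return [[a * X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

# Fixed-step RK4 over one period, starting from the 2x2 identity
steps = 2000
dt = T / steps
U, t = [[1.0, 0.0], [0.0, 1.0]], 0.0
for _ in range(steps):
    k1 = rhs(t, U)
    k2 = rhs(t + dt / 2, axpy(dt / 2, k1, U))
    k3 = rhs(t + dt / 2, axpy(dt / 2, k2, U))
    k4 = rhs(t + dt, axpy(dt, k3, U))
    incr = [[k1[i][j] + 2 * k2[i][j] + 2 * k3[i][j] + k4[i][j]
             for j in range(2)] for i in range(2)]
    U = axpy(dt / 6, incr, U)
    t += dt

# Eigenvalues of the 2x2 propagator from its characteristic polynomial,
# then quasienergies from lambda_i = exp(-i * eps_i * T)
tr = U[0][0] + U[1][1]
det = U[0][0] * U[1][1] - U[0][1] * U[1][0]
disc = cmath.sqrt(tr * tr - 4 * det)
eigs = [(tr + disc) / 2, (tr - disc) / 2]
quasi = sorted((1j * cmath.log(lam) / T).real for lam in eigs)
print(quasi)   # approximately [-0.7, 0.7] for this constant generator
```

For the actual problem one would evaluate $\mathcal L(q,t)$ inside `rhs` at each substep time; everything else stays the same.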
* Which amounts to solving equation (8) once for the initial condition $(1,0)^T$ and once for $(0,1)^T$. | {
"domain": "physics.stackexchange",
"id": 44699,
"tags": "quantum-mechanics, computational-physics, bose-einstein-condensate"
} |
Is energy really quantised? | Question: I'm currently doing an introductory course to quantum mechanics, and came across an assumption that Planck used in solving the UV catastrophe. From what I understand, he essentially stated that the change in energy cannot be smaller than $hf$. So generally $\Delta E = nhf$, where $n$ is a positive integer. This makes sense, but I never understood how energy is then truly quantised, as can't light take an infinite number of possible frequencies? (Not at the same time, but just generally.) Maybe I'm just misunderstanding the statement or making it overcomplicated; in any case, please do shed some light.
Answer: The crucial insight made by quantum mechanics is that electromagnetic waves in the cavity can be described by simple harmonic oscillators. If the angular frequency of the mode of oscillation is $\omega $ (FIXED) , then the energy associated with this mode is given by
$$E_n=\hbar \omega \left(n+\frac{1}{2}\right)\ \ \ \ \ n=0,1,2,\cdots $$
which are quantized.
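A trivial numerical illustration of that ladder, in units where $\hbar = 1$ (the two frequencies below are made up):

```python
def levels(omega, n_max):
    """Energies E_n = omega * (n + 1/2) of one mode, in units with hbar = 1."""
    return [omega * (n + 0.5) for n in range(n_max)]

# Two different cavity modes: each one has its own evenly spaced ladder.
# The spacing is the mode's own omega; within a mode, energy can change
# only by whole multiples of that quantum, even though omega itself can
# take a continuum of values from mode to mode.
for omega in (1.0, 2.5):
    ladder = levels(omega, 5)
    gaps = [b - a for a, b in zip(ladder, ladder[1:])]
    print(omega, gaps)
```

This is exactly the resolution of the question: the frequency varies continuously across modes, but the energy of any single mode is quantised in steps of $\hbar\omega$.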
NOTE that we are looking at a particular mode of a given frequency, not the whole spectrum. | {
"domain": "physics.stackexchange",
"id": 80118,
"tags": "quantum-mechanics, statistical-mechanics, quantum-electrodynamics, thermal-radiation, discrete"
} |
Wireframe animation on canvas is slow | Question: I'm recently working on a project that involves animation in the background. It works well on desktop but performance drops drastically on mobile. I'm using Paper.js to import a svg and animate it. This is the demo.
paper.project.importSVG(svg, function(item) {
var moveSpeed = 70;
var movementRadius = 15;
var boundingRec = item.bounds;
var lines = [];
var circles = [];
/*
Arrange lines and circles into different array
*/
$.each(item.getItems({
recursive: true,
}), function(index, item) {
if (item instanceof paper.Shape && item.type == 'circle') {
item.data.connectedSegments = [];
item.data.originalPosition = item.position;
circles.push(item);
}
if (item instanceof paper.Path) {
lines.push(item);
}
});
/*
Loop through all paths
Checks if any segment points is within circles
Anchors the point to the circle if within
*/
$.each(lines, function(pathIndex, path) {
$.each(path.segments, function(segmentIndex, segment) {
$.each(circles, function(circleIndex, circle) {
if (circle.contains(segment.point)) {
circle.data.connectedSegments.push( segment.point );
return false;
}
});
});
});
/*
Animate the circles
*/
$.each(circles, function(circleIndex, circle) {
var originalPosition = circle.data.originalPosition;
var radius = circle.radius * movementRadius;
var destination = originalPosition.add( paper.Point.random().multiply(radius) );
circle.onFrame = function() {
while (!destination.isInside(boundingRec)) {
destination = originalPosition.add( paper.Point.random().multiply(radius) );
}
var vector = destination.subtract(circle.position);
circle.position = circle.position.add(vector.divide(moveSpeed));
// move connected segments based on circle
for (var i = 0; i < circle.data.connectedSegments.length; i++) {
circle.data.connectedSegments[i].set({
x: circle.position.x,
y: circle.position.y,
});
}
if (vector.length < 5) {
destination = originalPosition.add( paper.Point.random().multiply(radius) );
}
}
});
});
A single SVG animation alone is causing the CPU usage to go up to 90% on mid-tier mobile devices, to the point that the interface becomes unusable. Any advice or help is much appreciated.
Answer: SVG for animation is Bad :(
SVG is a very difficult medium to use for animation as it can incur huge overheads when you use seemingly common-sense structure.
I must also point out that many of the SVG frameworks such as Paper.js add to the problem with poor code and an apparent indifference to the need to create performant interfaces.
To review your code
As this is a review I must review your code.
There is no need to use jQuery as the standard DOM APIs do it all, and much faster.
Use constant declarations for variables that don't change, e.g. var lines = []; should be const lines = []; and var moveSpeed = 70; should be const moveSpeed = 70;
You have a random search for each point to test if it is inside the bounds. If a point is outside the bounds by a distance greater than the radius, this loop may run forever trying to find a random point that is inside the bounds.
It is a non-deterministic search, with a worst-case complexity of O(Infinity) (something that computers just do not do well LOL)
while (!destination.isInside(boundingRec)) {
destination = originalPosition.add(paper.Point.random().multiply(radius));
}
A much better deterministic approach is to test against the bounds and, if the point is not in bounds, find the closest point that is and use that. This reduces the worst-case complexity to O(1), which computers do very well. (see example code)
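A minimal sketch of that deterministic fix (illustrative only, with names of my own; the answer's own JavaScript does the equivalent with if/else bounds checks):

```python
def clamp(value, lo, hi):
    """Snap an out-of-bounds coordinate to the nearest in-bounds value: O(1)."""
    return lo if value < lo else hi if value > hi else value

# e.g. keep a destination x inside [radius, width - radius]
x = clamp(700.0, 2.5, 642.0 - 2.5)  # 639.5
```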
Apart from that your code is well written.
Paper.js
I did first write this answer assuming that the content was all SVG, but on a second look I see that you are rendering SVG to a canvas via paper.js.
I personally think paper.js is a slow and poorly coded framework. Its core (low-level) functions are bloated with overheads that far exceed the time needed to perform the basic function's purpose.
Rather than list the miles of overhead you add by using paper.js, I just compared your code to a rewrite without frameworks, using the canvas only and avoiding SVG as an image source.
I then compared the run time via chrome's performance recorder in dev tools.
The code using paper.js took 6.89ms to render a frame.
The rewrite took 0.53ms to do the same.
Canvas size
I don't know how you are sizing the canvas for the handheld devices, but make sure that it matches the screen resolution, and do not use a large canvas that you then scale to fit, as you can seriously kill performance that way.
The canvas must not be larger than the following, or you use too much RAM and end up rendering pixels that are never seen.
canvas.width = innerWidth;
canvas.height = innerHeight;
Rewrite
So I will just go over the rewrite.
For your code there are 5 basic parts
Define the points and lines
Move the points
Render the lines
Render the circles
Animate and present the content
Define the points
As we are not going to use the SVG, we need to define the points in JavaScript.
I have extracted the circles (AKA verts)
I am not going to process the data you have; I just assume that lines run between a vert and the 6 closest verts. Thus we define the verts and create a function to find the lines.
const numberLines = 6; // Number of lines per circle
const verts = [
{id :1 , x: 30.7, y: 229.2 },
{id :2 , x: 214.4, y: 219.6},
{id :3 , x: 278.4, y: 186.4},
{id :4 , x: 382.5, y: 132.5},
{id :5 , x: 346.8, y: 82 },
{id :6 , x: 387.9, y: 6.7 },
{id :7 , x: 451.8, y: 60.8 },
{id :8 , x: 537.0, y: 119.9},
{id :9 , x: 545.1, y: 119.9},
{id :30 , x: 403.5, y: 122.1},
{id :10 , x: 416.3, y: 130 },
{id :11 , x: 402.6, y: 221.4},
{id :12 , x: 409.9, y: 266.4},
{id :13 , x: 437.1, y: 266.8},
{id :14 , x: 478.1, y: 269.6},
{id :15 , x: 242.6, y: 306.1},
{id :16 , x: 364.0, y: 267 },
{id :17 , x: 379.1, y: 310.7},
{id :18 , x: 451.2, y: 398.9},
{id :19 , x: 529.6, y: 377.9},
{id :20 , x: 644.8, y: 478.3},
{id :21 , x: 328.3, y: 324.5},
{id :22 , x: 314.4, y: 364.3},
{id :23 , x: 110.2, y: 327.8},
{id :24 , x: 299.1, y: 219.6},
{id :25 , x: 130.4, y: 218.1},
{id :26 , x: 307.4, y: 298.4},
{id :27 , x: 431.3, y: 360.1},
{id :28 , x: 551.7, y: 414.4},
{id :29 , x: 382.5, y: 239.7},
];
const line = (p1, p2) => ({p1, p2});
var lines = new Map(); // is var as this is replaced with an array after finding near verts
function findClosestVertInDist(vert,min, max, result = {}) {
const x = vert.x, y = vert.y;
result.minDist = max;
result.closest = undefined;
for (const v of verts) {
const dx = v.x - x;
const dy = v.y - y;
const dist = (dx * dx + dy * dy) ** 0.5;
if (dist > min && dist < result.minDist) {
result.minDist = dist;
result.closest = v;
}
}
return result;
}
// this is a brute force solution.
function createLines() {
var hash;
lines.clear(); // lines is a Map, so clear() rather than length = 0
const mod2Id = verts.length; // to get unique hash for a line
const closeVert = {}
for (const v of verts) {
closeVert.minDist = 0;
for (let i = 0; i < numberLines; i++) {
findClosestVertInDist(v, closeVert.minDist, Infinity, closeVert);
if(closeVert.closest) { // if you have less than 6 verts you need this test
if (v.id < closeVert.closest.id) {
hash = closeVert.closest.id * mod2Id + v.id;
} else {
hash = closeVert.closest.id + v.id * mod2Id;
}
lines.set(hash,line(v,closeVert.closest));
} else {
i--;
}
}
}
lines = [...lines.values()]; // Don't need the map any more, so replace it with an array of lines
verts.forEach(v => { // verts don't need the id but need an origin, so add
// the relevant data
v.ox = v.x; // the verts origin
v.oy = v.y;
v.dx = v.x; // the destination to move to
v.dy = v.y;
v.moveSpeed = Math.random() * (moveSpeedMax - moveSpeedMin) + moveSpeedMin;
v.move = 1; // unit value how far vert has moved to new point,
// 0 is at start, 1 is at destination
delete v.id; // remove the id
});
}
createLines();
After this is run you get a data structure very similar to the SVG. A set of points verts and a set of lines lines that reference points.
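The unordered-pair hash used in createLines (the larger id times the vert count plus the smaller id) can be checked with a tiny sketch; this is illustrative only and not part of the original answer:

```python
def pair_key(a, b, n):
    """Order-independent key for a pair of ids in [1, n]."""
    lo, hi = (a, b) if a < b else (b, a)
    return hi * n + lo

n = 30  # number of verts
dedup = {}
for a, b in [(3, 7), (7, 3), (2, 9)]:
    dedup[pair_key(a, b, n)] = (a, b)  # (7, 3) maps to the same key as (3, 7)

# only two distinct lines survive the de-duplication
```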
Rendering
The canvas 2D API is very easy to use and has functions to draw lines and circles. It can render content as paths (very similar to the SVG path element), uses the GPU, and is just as fast as (if not faster than, on some browsers) the SVG renderer.
So to create the element and render to it we need the following
// Script is placed after the canvas element via load event,
// the 2D API called context or abbreviated as ctx
// You may want to query the DOM with document.getElementById("canvas") but not needed
const ctx = canvas.getContext("2d"); // context id must be lowercase "2d"
canvas.width = innerWidth;
canvas.height = innerHeight;
Math.PI2 = Math.PI * 2; // create a 360 radians constant
// Define the styles
const lineStyle = {
lineWidth : 1,
strokeStyle : "#FFFFFF88",
}
const circleStyle = {
fillStyle : "cyan",
}
const circleRadius = 2.5;
const moveDist = 70;
// min and max vert speeds so points don't all change direction at once.
const moveSpeedMax = 1 / 120; // unit speed (at 60fps this moves to destination in two seconds)
const moveSpeedMin = 1 / 240; // unit speed (at 60fps this moves to destination in four seconds)
function drawLines(ctx, lines, style) { // ctx where to draw, lines what to draw
Object.assign(ctx, style);
// start a new 2D path
ctx.beginPath();
for (const line of lines) {
ctx.moveTo(line.p1.x, line.p1.y);
ctx.lineTo(line.p2.x, line.p2.y);
}
// the path has been defined so render it in one go.
ctx.stroke();
}
function drawCircles(ctx, verts, radius, style) { // ctx where to draw, verts what to draw
// radius (say no more)
// and style
Object.assign(ctx, style);
ctx.beginPath();
for (const vert of verts) {
// to prevent arcs connecting you need to move to the arc start point
ctx.moveTo(vert.x + radius, vert.y);
ctx.arc(vert.x, vert.y, radius, 0, Math.PI2);
}
// the path has been defined so render it in one go.
ctx.fill();
}
<!-- replaces the SVG element -->
<canvas id="canvas" width = "642" height = "481"></canvas>
Animation
As we are animating the content we need to make sure that what we render is presented correctly and in sync with the display. All browsers provide a special callback event via requestAnimationFrame that lets you make changes to the DOM that will only be presented on the next display refresh.
requestAnimationFrame(update); // starts the animation
function update(){
ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
animateCircles(verts, ctx.canvas); // animate the verts
drawLines(ctx, lines, lineStyle);
drawCircles(ctx, verts, circleRadius, circleStyle);
// All done request the next frame
requestAnimationFrame(update);
}
function animateCircles(verts, canvas){
for(const vert of verts){
vert.move += vert.moveSpeed;
if (vert.move >= 1) { // point at dest so find a new random point
// using polar coords to randomly move a point
const dir = Math.random() * Math.PI2;
const dist = Math.random() * moveDist;
vert.ox = vert.dx; // set new origin
vert.oy = vert.dy;
let x = vert.ox + Math.cos(dir) * dist;
let y = vert.oy + Math.sin(dir) * dist;
// bounds check
if (x < circleRadius) { x = circleRadius }
else if (x >= canvas.width - circleRadius) { x = canvas.width - circleRadius }
if (y < circleRadius) { y = circleRadius }
else if (y >= canvas.height - circleRadius) { y = canvas.height - circleRadius }
// point is in bounds and within dist of origin so set its new destination
vert.dx = x;
vert.dy = y;
vert.move = 0; // reset unit dist moved.
}
vert.x = (vert.dx - vert.ox) * vert.move + vert.ox;
vert.y = (vert.dy - vert.oy) * vert.move + vert.oy;
}
}
So to make your code work using the canvas, I have replaced the onFrame events with a single function that handles all the circles in one pass, and added a better bounds check that uses the canvas size to check that circles are inside.
Put it all together
So now, putting all the above into a working snippet, we have sidestepped the SVG elements and improved the code by handling the arrays of verts and lines as single entities.
We have also reduced the workload and RAM needs of the page, as we have only one layer (the canvas) and one composite operation (which can also be avoided on some browsers)
const ctx = canvas.getContext("2d");
canvas.width = innerWidth;
canvas.height = innerHeight;
Math.PI2 = Math.PI * 2;
const lineStyle = {
lineWidth : 1,
strokeStyle : "#FF000055",
};
const circleStyle = {
fillStyle : "blue",
};
const circleRadius = 2.5;
const moveDist = 70;
const moveSpeedMax = 1 / 120;
const moveSpeedMin = 1 / 240;
const numberLines = 6;
const verts = [
{id :1 , x: 30.7, y: 229.2 },
{id :2 , x: 214.4, y: 219.6},
{id :3 , x: 278.4, y: 186.4},
{id :4 , x: 382.5, y: 132.5},
{id :5 , x: 346.8, y: 82 },
{id :6 , x: 387.9, y: 6.7 },
{id :7 , x: 451.8, y: 60.8 },
{id :8 , x: 537.0, y: 119.9},
{id :9 , x: 545.1, y: 119.9},
{id :30 , x: 403.5, y: 122.1},
{id :10 , x: 416.3, y: 130 },
{id :11 , x: 402.6, y: 221.4},
{id :12 , x: 409.9, y: 266.4},
{id :13 , x: 437.1, y: 266.8},
{id :14 , x: 478.1, y: 269.6},
{id :15 , x: 242.6, y: 306.1},
{id :16 , x: 364.0, y: 267 },
{id :17 , x: 379.1, y: 310.7},
{id :18 , x: 451.2, y: 398.9},
{id :19 , x: 529.6, y: 377.9},
{id :20 , x: 644.8, y: 478.3},
{id :21 , x: 328.3, y: 324.5},
{id :22 , x: 314.4, y: 364.3},
{id :23 , x: 110.2, y: 327.8},
{id :24 , x: 299.1, y: 219.6},
{id :25 , x: 130.4, y: 218.1},
{id :26 , x: 307.4, y: 298.4},
{id :27 , x: 431.3, y: 360.1},
{id :28 , x: 551.7, y: 414.4},
{id :29 , x: 382.5, y: 239.7},
];
const line = (p1, p2) => ({p1, p2});
var lines = new Map();
function findClosestVertInDist(vert,min, max, result = {}) {
const x = vert.x, y = vert.y;
result.minDist = max;
result.closest = undefined;
for (const v of verts) {
const dx = v.x - x;
const dy = v.y - y;
const dist = (dx * dx + dy * dy) ** 0.5;
if(dist > min && dist < result.minDist) {
result.minDist = dist;
result.closest = v;
}
}
return result;
}
function createLines() {
var hash;
lines.clear(); // lines is a Map, so clear() rather than length = 0
const mod2Id = verts.length;
const closeVert = {}
for (const v of verts) {
closeVert.minDist = 0;
for (let i = 0; i < numberLines; i++) {
findClosestVertInDist(v, closeVert.minDist, Infinity, closeVert);
if (closeVert.closest) {
if (v.id < closeVert.closest.id) {
hash = closeVert.closest.id * mod2Id + v.id;
} else {
hash = closeVert.closest.id + v.id * mod2Id;
}
lines.set(hash,line(v,closeVert.closest));
} else {
i--;
}
}
}
lines = [...lines.values()];
verts.forEach(v => {
v.ox = v.x;
v.oy = v.y;
v.dx = v.x;
v.dy = v.y;
v.moveSpeed = Math.random() * (moveSpeedMax - moveSpeedMin) + moveSpeedMin;
v.move = 1;
delete v.id;
});
}
createLines();
function drawLines(ctx, lines, style) {
Object.assign(ctx, style);
ctx.beginPath();
for (const line of lines) {
ctx.moveTo(line.p1.x, line.p1.y);
ctx.lineTo(line.p2.x, line.p2.y);
}
ctx.stroke();
}
function drawCircles(ctx, verts, radius, style) {
Object.assign(ctx, style);
ctx.beginPath();
for (const vert of verts) {
ctx.moveTo(vert.x + radius, vert.y);
ctx.arc(vert.x, vert.y, radius, 0, Math.PI2);
}
ctx.fill();
}
requestAnimationFrame(update); // starts the animation
function update(){
// to check is resized
if (canvas.width !== innerWidth || canvas.height !== innerHeight) {
canvas.width = innerWidth;
canvas.height = innerHeight;
} else { // resize clears the canvas so I use else here
ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
}
animateCircles(verts, ctx.canvas);
drawLines(ctx, lines, lineStyle);
drawCircles(ctx, verts, circleRadius, circleStyle);
requestAnimationFrame(update);
}
function animateCircles(verts, canvas){
for(const vert of verts){
vert.move += vert.moveSpeed;
if (vert.move >= 1) {
const dir = Math.random() * Math.PI2;
const dist = Math.random() * moveDist;
vert.ox = vert.dx;
vert.oy = vert.dy;
let x = vert.ox + Math.cos(dir) * dist;
let y = vert.oy + Math.sin(dir) * dist;
if (x < circleRadius) { x = circleRadius }
else if (x >= canvas.width - circleRadius) { x = canvas.width - circleRadius }
if (y < circleRadius) { y = circleRadius }
else if (y >= canvas.height - circleRadius) { y = canvas.height - circleRadius }
vert.dx = x;
vert.dy = y;
vert.move = 0;
}
vert.x = (vert.dx - vert.ox) * vert.move + vert.ox;
vert.y = (vert.dy - vert.oy) * vert.move + vert.oy;
}
}
<!-- replaces the SVG element -->
<canvas id="canvas" width = "642" height = "481" style="position:absolute;top:0px;left:0px"></canvas>
This should now run on even the lowliest of devices that have a GPU and support the canvas. Remember that you must size the canvas to the screen via its width and height properties, NOT via its style width and height properties.
"domain": "codereview.stackexchange",
"id": 32799,
"tags": "javascript, performance, animation, canvas, paper.js"
} |
Beginner temperature converter | Question: I'm a beginner to Java and programming in general and since I'm on a holiday to the US, I thought it would be a fun idea to write a program for converting Fahrenheit to Celsius and the other way around. I'm using WindowBuilder for the gui part so I suppose my code could use some improvement.
The main thing that's bothering me is that my code doesn't look very OO, I'm just using one class and my layout and logic isn't separated at all.
Thank you very much for any hints you could give me!
import java.awt.Component;
import java.awt.EventQueue;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import javax.swing.JButton;
import javax.swing.JCheckBox;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.JOptionPane;
import javax.swing.JPanel;
import javax.swing.JTextField;
import javax.swing.border.EmptyBorder;
public class Converter extends JFrame implements ActionListener
{
private static final long serialVersionUID = 1L;
private JPanel contentPane = new JPanel();
private JTextField textFieldCelsius = new JTextField();
private JTextField textFieldFahrenheit = new JTextField();
private JCheckBox checkBoxCelsiusToFahrenheit = new JCheckBox(" °C to °F");
private JCheckBox checkBoxFahrenheitToCelsius = new JCheckBox(" °F to °C");
private JLabel lblCelsius = new JLabel(" °C");
private JLabel lblFahrenheit = new JLabel(" °F");
private JButton btnClear = new JButton("Clear");
private JButton btnConvert = new JButton("Convert");
private Object[] myObjects =
{ textFieldCelsius, textFieldFahrenheit, checkBoxCelsiusToFahrenheit,
checkBoxFahrenheitToCelsius, lblCelsius, lblFahrenheit, btnClear,
btnConvert };
/**
* Launch the application.
*/
public static void main(String[] args)
{
EventQueue.invokeLater(new Runnable()
{
public void run()
{
try
{
Converter frame = new Converter();
frame.setVisible(true);
}
catch (Exception e)
{
e.printStackTrace();
}
}
});
}
/**
* Create the frame.
*/
public Converter()
{
setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
setBounds(100, 100, 450, 300);
setLocationRelativeTo(null);
setResizable(false);
contentPane.setBorder(new EmptyBorder(5, 5, 5, 5));
setContentPane(contentPane);
contentPane.setLayout(null);
positionComponents();
btnConvert.addActionListener(this);
btnClear.addActionListener(this);
for (Object object2 : myObjects)
{
contentPane.add((Component) object2);
}
}
public void positionComponents()
{
lblCelsius.setBounds(305, 96, 70, 15);
checkBoxCelsiusToFahrenheit.setBounds(37, 8, 129, 23);
checkBoxFahrenheitToCelsius.setBounds(254, 8, 129, 23);
textFieldCelsius.setBounds(156, 94, 114, 19);
textFieldFahrenheit.setBounds(156, 121, 114, 19);
btnConvert.setBounds(254, 223, 117, 25);
btnClear.setBounds(37, 223, 117, 25);
lblFahrenheit.setBounds(305, 123, 70, 15);
textFieldCelsius.setColumns(10);
textFieldFahrenheit.setColumns(10);
}
public void clear()
{
textFieldCelsius.setText("");
textFieldFahrenheit.setText("");
checkBoxCelsiusToFahrenheit.setSelected(false);
checkBoxFahrenheitToCelsius.setSelected(false);
}
public void convertToFahrenheit()
{
String text = textFieldCelsius.getText();
int textInt = Integer.parseInt(text);
int result = textInt * 9 / 5 + 32;
String resultString = Integer.toString(result);
textFieldFahrenheit.setText(resultString);
}
public void convertToCelsius()
{
String text = textFieldFahrenheit.getText();
int textInt = Integer.parseInt(text);
int result = (textInt - 32) * 5 / 9;
String resultString = Integer.toString(result);
textFieldCelsius.setText(resultString);
}
@Override
public void actionPerformed(ActionEvent e)
{
if (e.getSource() == btnConvert)
{
try
{
if (checkBoxCelsiusToFahrenheit.isSelected())
{
convertToFahrenheit();
}
else if (checkBoxFahrenheitToCelsius.isSelected())
{
convertToCelsius();
}
else if (!checkBoxCelsiusToFahrenheit.isSelected()
&& !checkBoxFahrenheitToCelsius.isSelected())
{
JOptionPane.showMessageDialog(null,
"Please select a checkbox!");
}
}
catch (Exception e2)
{
JOptionPane.showMessageDialog(null, "Invalid input!");
}
}
else if (e.getSource() == btnClear)
{
clear();
}
}
}
Answer:
The main thing that's bothering me is that my code doesn't look very OO, I'm just using one class and my layout and logic isn't separated at all.
Yes, I think that's the crux of the issues identified too. :)
Formatting
With the exception of your myObjects declaration (perhaps it can start from the same line too?), your code's spacing is readable and quite consistent, which is good. Java's brace convention is colloquially known as the 'Egyptian' style, which is different from what you are using. Your main() method's braces look a little... out of place though. Perhaps that's due to the Markdown formatting?
UX/UI (a brief touch)
From a UX standpoint, it doesn't make much sense to use checkboxes for mutually exclusive options. I would have used radiobuttons instead. - Mat's Mug
Radio buttons are preferred for either-or options as they ensure only one state can be enabled. In your case, the user will not be able to check both checkboxes.
You should also use a layout manager to position your Swing UI components, instead of setting bounds explicitly inside positionComponents(). A layout manager tends to reduce the code you need to write, does away with 'magic numbers', and is more flexible when you want to resize your windows.
Using methods effectively
Currently, your conversion methods act on your textboxes directly, which makes unit testing non-trivial. Furthermore, this couples the UI elements with the calculations, which is not ideal. Instead, you can think of your conversion methods more like a function that takes an input and returns an output:
private static double convertToCelsius(double fahrenheit) {
// calculate and return value
}
I'm not sure why you intended to lose precision by using ints in your question; you're probably looking for the double type here. Anyways, your UI code can then call this method as such:
public void convertToCelsius() {
double result = convertToCelsius(Double.parseDouble(textFieldFahrenheit.getText()));
textFieldCelsius.setText(Double.toString(result));
}
You can also see that I have cut down the number of temporary variables, as I often find that the more temporary variables a method has, the harder it is to read (literally). | {
"domain": "codereview.stackexchange",
"id": 14791,
"tags": "java, beginner, swing"
} |
DNA adaptation in human life | Question: Does our DNA adapt by human lifetime? Or do we have the same genetic information from birth to death?
I mean: What is usually called "evolution" means "natural selection" like this:
http://www.youtube.com/watch?v=hOfRN0KihOU&list=UUsXVk37bltHxD1rDPwtNM8Q
-stronger animals have more descendants, so they make bigger percent of strong animals.
But does evolution work by some primitive genetics-engineering too? Thank you.
Answer: Lewontin's recipe
A very nice way to consider natural selection is through the lense of Lewontin's recipe. Evolution of a given trait (tail length for example) through natural selection occurs whenever the three following conditions are met:
1. There is variation for this trait in the population (this variation is ultimately caused by mutations).
2. Part (or the totality) of this variation is heritable. Heritability is the part of the variance in a trait that is explained by genes; it can be measured as a parent-offspring correlation.
3. The variation of this trait is correlated with fitness. Loosely speaking, fitness is an index of both fecundity and survival.
You can take two minutes to think about how logical it is that changes in the frequency of variants (natural selection) occur whenever these three conditions are met. You can imagine any objects you want (colored pencils, for example) and simulate these three steps:
Variation
different pencils have different colors
Heritability
simulate that your pencils reproduce (asexual reproduction is conceptually easier to understand) and that offspring have a similar (or even the same) color to their parents
Correlation between trait and fitness
Pencils of different colors have different fitness.
For example, each red pencil creates two red pencils in the next generation while blue pencils do not manage to replicate at all.
You can quickly see how the red pencils become more frequent in your pencil population through time. If you wait long enough, your pencil population will be composed entirely of red pencils, and you'll need to simulate a mutation in order to create a blue pencil. I think this example shows how different the roles of mutations and natural selection are in evolution.
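The pencil thought experiment can be run as a tiny simulation (an illustrative sketch of my own; the fitness values are made up):

```python
population = ["red"] * 5 + ["blue"] * 5

def next_generation(pop):
    offspring = []
    for pencil in pop:
        if pencil == "red":
            offspring.extend(["red", "red"])  # red fitness: 2 offspring each
        # blue pencils fail to replicate at all (fitness 0)
    return offspring

for _ in range(3):
    population = next_generation(population)

# the population is now entirely red; only a new mutation could
# reintroduce a blue pencil
```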
Genetic Adaptation During Lifetime of a multicellular organism
For a beginner in evolutionary biology, saying that a multicellular individual's genome does not change during its lifetime might be considered satisfying. In reality it is slightly more complicated. Two elements, mostly influential very early in the lifetime of an individual, have to be considered.
While most mutations occur during reproduction, some mutations also occur during body growth, or stated differently, during cell division (mitosis). In consequence, some pairs of your cells share exactly the same genome while other pairs of cells have slightly different genomes.
Moreover, once some mutations have occurred during the development of an individual, these mutations might influence the fitness of the cells, and therefore some alleles (variants of a gene) might rise in frequency while others decrease in frequency (natural selection). It is important to understand that this process of natural selection selects for cells that have a higher fitness, and therefore does not necessarily select for the cells that allow the individual to increase its own fitness.
I don't fully understand what you mean when saying: "But does evolution work by some primitive genetics-engineering too?" Could you try to rephrase this question?
Please, let me know if I answered your question!
Note:
Here is a good way to understand the difference between "gene" and "allele". A gene might be called the "eye color" gene, for example, while the three alleles of the "eye color" gene might be called "blue eyes", "brown eyes" and "green eyes". In this sense, the alleles are the different variants of a gene. Mutations increase the number of alleles and natural selection reduces the number of alleles by selecting for the allele that causes its holder to have the greatest fitness. If you understood that, it is already good!
Note: Natural selection does not necessarily reduce the number of alleles; several alleles might be kept (polymorphism) under certain "types" of natural selection (frequency-dependent selection, overdominance (heterozygote advantage), fitness that varies in space and/or time, selection acting on different levels). The "type" of natural selection that reduces the number of alleles over time is called directional selection. | {
"domain": "biology.stackexchange",
"id": 1981,
"tags": "evolution, genetics, natural-selection"
} |
Gaps in derivation of thermodynamic property equations | Question: If $h=h(T, P)$.
Does $ dh = c_pdT + \left[v - T\left(\frac{\partial v}{\partial T}\right)_P \right]dP \Rightarrow h_2 - h_1 = \int_{T_1}^{T_2} c_pdT + \int_{P_1}^{P_2}\left[v - T\left(\frac{\partial v}{\partial T} \right)_P\right]dP $ ?
If so, how?
I apologize for this, but I just haven't been able to find an appropriate justification for this operative behavior in any of the Calculus, Differential Equations and Thermodynamics books in my possession. I'm particularly bugged by the "integration of differentials" and how, in my view, it seems to break the symmetry of the first equation in the statement.
Thanks in advance!
Answer: What's happening here is a line integral in $(T,P)$-space:
$$L(C)=\int_{C}dh$$
where $C$ is some curve that starts at $(T_1,P_1)$ and ends at $(T_2,P_2)$. In the most general case, the result $L$ of this line integral can be dependent on the curve $C$ that is integrated over.
However, assuming that the function $h(T,P)$ is differentiable, this means that $dh$ is an exact differential. One of the nice properties of an exact differential is that the result of a line integral is path-independent: any curve that starts at $(T_1,P_1)$ and ends at $(T_2,P_2)$ will have the same line integral, and in particular, that result is merely the difference in $h$ between the start and end points:
$$L(C)=h(T_2,P_2)-h(T_1,P_1)\equiv h_2-h_1$$
So, given that it doesn't matter which path we take in $(T,P)$-space, it makes the most sense to choose an "easy" one: first move "horizontally" from $T_1$ to $T_2$ while keeping $P$ constant, then move "vertically" from $P_1$ to $P_2$ while keeping $T$ constant.
Line integrals along the coordinate axes are easy to write; for example, for a path $C_0$ along the $T$-axis, from $T_1$ to $T_2$ with $P$ being kept constant at $P_0$, the line integral is:
$$\int_{C_0} dh=\int_{T_1}^{T_2}\frac{\partial h}{\partial T}\bigg\vert_{P=P_0}dT$$
and similarly for an integral along the $P$-axis. So, in the end, the result of our line integral is the sum of the integral along the "horizontal" segment and the integral along the "vertical" segment:
$$\int_C dh=h_2-h_1=\int_{T_1}^{T_2}\frac{\partial h}{\partial T}\bigg\vert_{P=P_1} dT+\int_{P_1}^{P_2}\frac{\partial h}{\partial P}\bigg\vert_{T=T_2}dP$$
As a quick sanity check, for an ideal gas $v=RT/P$ we have $\left(\frac{\partial v}{\partial T}\right)_P=\frac{R}{P}$, so the bracketed term $v-T\left(\frac{\partial v}{\partial T}\right)_P$ vanishes, the pressure integral drops out, and $h_2-h_1=\int_{T_1}^{T_2}c_p\,dT$: the familiar result that ideal-gas enthalpy depends on temperature alone. | {
"domain": "physics.stackexchange",
"id": 71570,
"tags": "thermodynamics, integration"
} |
Convert exponential to normal distribution | Question: For the distribution shown below, I want to convert the exponential distribution to a normal distribution. I want to do this is as part of data pre-processing so that the classifier can better interpret the feature (named ipc here).
The regular log transformation does not work here because of the (x-axis) spread.
How can I transform this data to a normal distribution?
A related answer has been pointed out in the comment but I am looking for some Python code excerpt as well.
Thanks
Answer: The following code works:
import scipy.stats  # note: 'import scipy' alone does not make scipy.stats available
import numpy as np
ey = np.random.exponential(size=100)
cdfy = scipy.stats.expon.cdf(np.sort(ey))
invcdf = scipy.stats.norm.ppf(cdfy) # a normal distribution
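One caveat (my own note, not from the original answer): np.sort returns the values in sorted order, so the transformed column no longer lines up row-by-row with the original data. A rank-based (empirical-CDF) variant keeps each sample in its original position:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
ey = rng.exponential(size=100)

ranks = stats.rankdata(ey)      # 1 .. n, one rank per original row
u = ranks / (len(ey) + 1)       # map into (0, 1), avoiding the endpoints
gaussian = stats.norm.ppf(u)    # approximately N(0, 1), order preserved
```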
Hope this helps | {
"domain": "datascience.stackexchange",
"id": 5405,
"tags": "machine-learning, preprocessing, normalization"
} |
Installing the fanuc_driver on fanuc crx robot | Question:
Hello, I'm working on setting up the ROS Fanuc driver on a CRX-10iA/L robot that has an R-30iB Mini Plus controller
Robot software is V9.40P22
I used Roboguide to compile the binaries from the ROS driver, following the tutorial at this link:
http://wiki.ros.org/fanuc/Tutorials/hydro/Configuration
I can see the version of the robot software isn't in Roboguide; the closest I can find is V9.40P/23. My first question is: will this cause a problem?
Next, I loaded the programs onto the controller according to the tutorial, and I can see them in the program select screen:
Karel:
TP:
But if I run the TP programs I get these errors:
ROS state:
Ros :
From the ROS troubleshooting guide, I can see it's a configuration error. I re-did the setup step by step but still get these errors, and I want to know where exactly the issue is and whether there is any remedy.
Originally posted by nizar00 on ROS Answers with karma: 17 on 2022-04-13
Post score: 0
Answer:
Did you update the configuration of the Karel programs according to the values shown in the KAREL and TPE Programs: KAREL Programs section?
The tutorial in that section states:
The data in these two tables will need to be entered into the configuration structures of the respective programs.
and then provides instructions on how to do that.
If you did, please show a picture of the Karel Vars of both programs, specifically the contents of the cfg_ variables.
I can see the version of the robot software isn't in robo guide the closest I can find is v9.40p/23, my first question is will this cause a problem ?
No, that will not be a problem. The controller would have notified you if it was a problem (by refusing to load the .pc files completely).
Originally posted by gvdhoorn with karma: 86574 on 2022-04-14
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by nizar00 on 2022-04-14:
is this correct ?
Comment by gvdhoorn on 2022-04-14:
That looks OK.
You just seem to have skipped/missed the Complete the Configuration step:
As the last step, complete the configuration by setting the checked entry in each of the configuration structures to TRUE.
Comment by nizar00 on 2022-04-14:
thank you, the ROS TP program worked; ros relay and ros state are showing some errors, but that's probably not related to this topic. I will write another question if the errors keep persisting
"domain": "robotics.stackexchange",
"id": 37576,
"tags": "ros, driver, ros-melodic, fanuc"
} |
Fuerte source install | Question:
What is the currently preferred method for a complete source build of ROS Fuerte?
This URL does not seem to work.
http://packages.ros.org/cgi-bin/gen_rosinstall.py?rosdistro=fuerte&variant=desktop-full&overlay=no
Originally posted by I Heart Robotics on ROS Answers with karma: 403 on 2012-02-11
Post score: 2
Original comments
Comment by tfoote on 2012-02-11:
It's now in two systems. There's not good documentation yet. I'll be writing some soon. I'll post a link here when I write it up.
Comment by Eric Perko on 2012-02-27:
I'd like to do a source install also... @tfoote have you written up this documentation yet?
Answer:
The catkin guide. As Tully says, it's still in the works, but it's easy enough to follow for building. There is a working rosinstall file you can build off; just add your stacks to the bottom of it:
https://raw.github.com/willowgarage/catkin/master/test/test.rosinstall
To start your rosinstall, make sure you have rosinstall updated to the latest version and run it with the --catkin option:
rosinstall --catkin ~/catkin https://raw.github.com/willowgarage/catkin/master/test/test.rosinstall
Originally posted by Daniel Stonier with karma: 3170 on 2012-02-27
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by kwc on 2012-02-27:
@snorri : can you edit your answer to use this link instead (more up-to-date)? http://ros.org/doc/api/catkin/html/
Comment by tfoote on 2012-04-10:
Official guide is up at http://www.ros.org/wiki/electric/Installation/Ubuntu/Source
Comment by Eric Perko on 2012-04-10:
I think you were meaning: http://www.ros.org/wiki/fuerte/Installation/Ubuntu/Source | {
"domain": "robotics.stackexchange",
"id": 8192,
"tags": "ros, ros-fuerte, source"
} |
How do I design a neural network that breaks a 5-letter word into its corresponding syllables? | Question: I am going to design a neural network which will be able to break a 5-letter word into its corresponding syllables (hybrid syllables, I mean it will not strictly adhere to grammatical syllable rules but will be based on some training sets I provide).
Example: "train" -> "tra"-"in".
I think of implementing it in terms of some feedforward neural network with 1 input layer, 1 hidden layer and 1 output layer.
There will be 5 input nodes in the form of decimals (1/26 ≈ 0.038 for 'A'; 2/26 ≈ 0.077 for 'B', etc.)
The output layer consists of 4 nodes, one for each gap between two adjacent characters in the word, and it fires as follows:
For "train" ("tra"-"in"), the input is (0.769, 0.692, 0.038, 0.346, 0.538) and the output would be (0, 0, 1, 0).
For "boric" ("bo"-"ri"-"c"), the input is something else, and the output is (0, 1, 0, 1).
Is it possible to implement the neural network in the way I am doing? If possible, then how will I decide the number of hidden layers and nodes in each layer?
In the book I am reading, the XOR gate problem and its implementation using a hidden layer are given. In XOR, we could decide the number of nodes and hidden layers required by seeing the linear separability of XOR using two lines. But here I think such an analysis can't be made.
So, how do I proceed? Or is it a trial and error process?
Answer: I would highly recommend modeling the way letters are presented to the model differently. While the problem is perhaps more natural for a convolutional or recurrent neural network, there's no problem with trying to run this on a feedforward network. However, the way you encode letters as input will be very confusing for the network and will make learning very hard. I'd recommend using a one-hot encoding, or even a binary encoding, for the letters. If this is for more than playing around, I'd also add some extra information (e.g., encode whether the letter is in "aeiou" as a separate bit).
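As a concrete illustration of that encoding suggestion, here is a minimal sketch in pure Python. The layout (26 one-hot letter indicators plus one vowel bit per character, flattened) is hypothetical, chosen only for the example:

```python
def encode_word(word):
    """One-hot encode a 5-letter word: 27 features per character
    (26 letter indicators + 1 "is a vowel" bit), flattened to length 5 * 27 = 135."""
    features = []
    for ch in word.lower():
        bits = [0.0] * 27
        bits[ord(ch) - ord('a')] = 1.0  # one-hot letter position
        if ch in 'aeiou':
            bits[26] = 1.0  # redundant vowel hint that can help a small model
        features.extend(bits)
    return features

x = encode_word("train")  # 135 inputs instead of 5 decimals like 0.769
```

Each input feature is now 0 or 1, so the network no longer has to untangle an arbitrary ordinal scale like 0.038 vs. 0.769.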
As for the hidden layers, try playing around a bit. Two systematic approaches are to start very simple and make the model more complicated, or to start complicated and make the model simpler (or just regularize a lot). Look at the performance on the training set and on a separate validation set during training. If the model keeps improving on the training data but starts to deteriorate on the validation data, you're probably overfitting. In that case you should either make the model simpler (fewer nodes, fewer layers) or regularize (start with L2 regularization on the weights). If the model doesn't perform well even on the training data, you may wish to make it more complex.
Once you've tried the feedforward network, do give CNNs or RNNs a try for this task.
"domain": "ai.stackexchange",
"id": 188,
"tags": "neural-networks, hyperparameter-optimization, hyper-parameters, feedforward-neural-networks, network-design"
} |
Is there any method to parse a QASM file in Qiskit? | Question: I want to parse a QASM file in order to analyze the quantum gates it contains.
Qiskit provides the quantum circuit attribute quantum_circuit.data.operation, which summarizes the gate information. However, even though it returns the gate information, the order of the gates is changed.
For example, the original qasm file looks like this:
h q[0];
x q[1];
When using the data.operation attribute, the gates are extracted in a different order, as below:
x q[1];
h q[0];
The final result of the quantum circuit is the same, but I want to parse the qasm file in the same order as the original.
Is there any qiskit API or any method?
Thanks.
Answer: Qiskit comes with QasmParser, an OpenQASM 2.0 parser. It can be used directly as follows:
from qiskit.qasm.qasmparser import QasmParser
import qiskit.qasm.node as qn
filename = '/path/to/qasm_file.qasm'
with open(filename) as file:
data = file.read()
with QasmParser(filename) as q_parser:
q_parser.parse_debug(False)
program = q_parser.parse(data)
for node in program.children:
print(node.to_string(indent = False))
if isinstance(node, qn.Gate):
print(node.name)
elif isinstance(node, qn.CustomUnitary):
print(node.name)
elif isinstance(node, qn.Barrier):
print(node.children)
elif isinstance(node, qn.Measure):
print(node.children)
# ... ... ...
It can also be used indirectly through the Qasm.parse() method.
"domain": "quantumcomputing.stackexchange",
"id": 4483,
"tags": "qiskit, quantum-gate, qasm"
} |
Epistasis Involving Multiple Loci | Question: The following problem is from Schaum's Outlines on Genetics, 5th edition, by Elrod and Stansfield. I'm having some trouble solving it. It is found under a section entitled "Interactions with Three or More Factors" in the problems at the end of the chapter on Epistasis.
Problem 4.30 (pg. 112): If a pure-white onion strain is crossed to a pure-yellow strain, the
F$_2$ ratio is 12 white : 3 red : 1 yellow. If another pure-white
onion is crossed to a pure-red onion, the F$_2$ ratio is 9 red : 3
yellow : 4 white. (a) What percentage of the white F$_2$ from the
second mating would be homozygous for the yellow allele? (b) If the
white F$_2$ (homozygous for the yellow allele) of part (a) is crossed
to the pure-white parent of the first mating mentioned at the
beginning of this problem, determine the F$_1$ and F$_2$ phenotypic
expectations.
In the first crossing, it looks like the type of interaction is one of dominant epistasis between two loci. So, we could call the alleles for the first locus W and w and the alleles for the second locus A and a. Then W-A- and W-aa would have a white phenotype, wwA- would have a red phenotype, and wwaa would have a yellow phenotype. This would be consistent with the ratios provided in the problem.
In order to explain the 9 red : 3 yellow : 4 white, a third locus can be considered with alleles B and b. Proceeding as above, where we had the locus for W or w epistatic with respect to the locus for A and a, we could have the same locus for W or w epistatic with respect to the locus for B and b. This time, wwB- and wwbb would be white, W-B- would be red, and W-bb would be yellow. This would be consistent with the 9 red : 3 yellow : 4 white ratio.
This is where I get stuck. If the locus for W and w is epistatic in a dominant way with respect to one locus and in a recessive way with respect to another locus, then all onions would be white. Therefore, I must have done something wrong. Am I on the right track?
This isn't really a homework problem, despite the "homework" tag.
Here are the answers provided in the text for the problem:
Problem 4.30 (pg. 115): (a) 25% (b) F$_1$: all white; F$_2$ : 52
white : 9 red : 3 yellow
Answer: After reading Satwik Pasani's comment to the original problem, I tried using the hypostatic locus as the common locus in the epistatic relationships, instead of the locus causing the onions to be white. The following solution fits the answers supplied in the book.
Three loci can be used to determine the colors of the onion, the alleles of which can be denoted by C, D, and E. Suppose that C and D are each epistatic to E, such that the phenotypes of the onion are given as:
cc---- white
--D--- white
C-ddE- red
C-ddee yellow
Then the first crossing in the problem can be given by CCDDEE x CCddee. The second crossing could be ccddee x CCddEE. | {
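Under this model, the F$_2$ expectation in part (b) comes from selfing the all-white trihybrid F$_1$ CcDdEe. A quick brute-force enumeration of the 64 equally likely genotype combinations (Python, purely illustrative; the phenotype rules are exactly the four lines above) reproduces the book's 52 white : 9 red : 3 yellow ratio:

```python
from itertools import product
from collections import Counter

def phenotype(c, d, e):
    # c, d, e are allele pairs for the three loci, e.g. ('C', 'c')
    if 'C' not in c:        # cc---- -> white
        return 'white'
    if 'D' in d:            # --D--- -> white
        return 'white'
    if 'E' in e:            # C-ddE- -> red
        return 'red'
    return 'yellow'         # C-ddee -> yellow

# Self a CcDdEe trihybrid: each locus yields 4 equally likely allele pairs.
counts = Counter(
    phenotype(c, d, e)
    for c in product('Cc', repeat=2)
    for d in product('Dd', repeat=2)
    for e in product('Ee', repeat=2)
)
print(dict(counts))  # {'white': 52, 'red': 9, 'yellow': 3}
```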
"domain": "biology.stackexchange",
"id": 2358,
"tags": "genetics, homework"
} |
Image pixelwise operation function with multiple inputs in C++ | Question: This is a follow-up question for Tests for the operators of image template class in C++ and A recursive_transform template function for the multiple parameters cases in C++. I appreciated G. Sliepen's answer. I am attempting to extend the mentioned element-wise operations in the TinyDIP::Image class. In other words, not only +, -, * and /, but also other customized calculations can be specified easily with the pixelwiseOperation template function implemented here. For example:
There are four images, and each pixel value in these images is set to 4, 3, 2 and 1, respectively.
auto img1 = TinyDIP::Image<GrayScale>(10, 10, 4);
auto img2 = TinyDIP::Image<GrayScale>(10, 10, 3);
auto img3 = TinyDIP::Image<GrayScale>(10, 10, 2);
auto img4 = TinyDIP::Image<GrayScale>(10, 10, 1);
If we want to perform the element-wise calculation "two times img1, plus img2, minus the result of img3 times img4", this task could be done with the following code:
auto output = TinyDIP::pixelwiseOperation(
[](auto&& pixel_in_img1, auto&& pixel_in_img2, auto&& pixel_in_img3, auto&& pixel_in_img4)
{
return 2 * pixel_in_img1 + pixel_in_img2 - pixel_in_img3 * pixel_in_img4;
},
img1, img2, img3, img4
);
The result can be printed with output.print();:
9 9 9 9 9 9 9 9 9 9
9 9 9 9 9 9 9 9 9 9
9 9 9 9 9 9 9 9 9 9
9 9 9 9 9 9 9 9 9 9
9 9 9 9 9 9 9 9 9 9
9 9 9 9 9 9 9 9 9 9
9 9 9 9 9 9 9 9 9 9
9 9 9 9 9 9 9 9 9 9
9 9 9 9 9 9 9 9 9 9
9 9 9 9 9 9 9 9 9 9
The experimental implementation
pixelwiseOperation template function implementation: based on recursive_transform
template<typename Op, class InputT, class... Args>
constexpr static Image<InputT> pixelwiseOperation(Op op, const Image<InputT>& input1, const Args&... inputs)
{
Image<InputT> output(
recursive_transform<1>(
[&](auto&& element1, auto&&... elements)
{
auto result = op(element1, elements...);
return static_cast<InputT>(std::clamp(
result,
static_cast<decltype(result)>(std::numeric_limits<InputT>::min()),
static_cast<decltype(result)>(std::numeric_limits<InputT>::max())));
},
(input1.getImageData()),
(inputs.getImageData())...),
input1.getWidth(),
input1.getHeight());
return output;
}
image_operations.h: The file contains pixelwiseOperation template function and other image processing functions
/* Developed by Jimmy Hu */
#ifndef ImageOperations_H
#define ImageOperations_H
#include <string>
#include "base_types.h"
#include "image.h"
// Reference: https://stackoverflow.com/a/26065433/6667035
#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif
namespace TinyDIP
{
// Forward Declaration class Image
template <typename ElementT>
class Image;
template<typename T>
T normalDistribution1D(const T x, const T standard_deviation)
{
return std::exp(-x * x / (2 * standard_deviation * standard_deviation));
}
template<typename T>
T normalDistribution2D(const T xlocation, const T ylocation, const T standard_deviation)
{
return std::exp(-(xlocation * xlocation + ylocation * ylocation) / (2 * standard_deviation * standard_deviation)) / (2 * M_PI * standard_deviation * standard_deviation);
}
template<class InputT1, class InputT2>
constexpr static auto cubicPolate(const InputT1 v0, const InputT1 v1, const InputT1 v2, const InputT1 v3, const InputT2 frac)
{
auto A = (v3-v2)-(v0-v1);
auto B = (v0-v1)-A;
auto C = v2-v0;
auto D = v1;
return D + frac * (C + frac * (B + frac * A));
}
template<class InputT = float, class ElementT>
constexpr static auto bicubicPolate(const ElementT* const ndata, const InputT fracx, const InputT fracy)
{
auto x1 = cubicPolate( ndata[0], ndata[1], ndata[2], ndata[3], fracx );
auto x2 = cubicPolate( ndata[4], ndata[5], ndata[6], ndata[7], fracx );
auto x3 = cubicPolate( ndata[8], ndata[9], ndata[10], ndata[11], fracx );
auto x4 = cubicPolate( ndata[12], ndata[13], ndata[14], ndata[15], fracx );
return std::clamp(
cubicPolate( x1, x2, x3, x4, fracy ),
static_cast<InputT>(std::numeric_limits<ElementT>::min()),
static_cast<InputT>(std::numeric_limits<ElementT>::max()));
}
template<class FloatingType = float, class ElementT>
Image<ElementT> copyResizeBicubic(Image<ElementT>& image, size_t width, size_t height)
{
auto output = Image<ElementT>(width, height);
// get used to the C++ way of casting
auto ratiox = static_cast<FloatingType>(image.getWidth()) / static_cast<FloatingType>(width);
auto ratioy = static_cast<FloatingType>(image.getHeight()) / static_cast<FloatingType>(height);
for (size_t y = 0; y < height; ++y)
{
for (size_t x = 0; x < width; ++x)
{
FloatingType xMappingToOrigin = static_cast<FloatingType>(x) * ratiox;
FloatingType yMappingToOrigin = static_cast<FloatingType>(y) * ratioy;
FloatingType xMappingToOriginFloor = std::floor(xMappingToOrigin);
FloatingType yMappingToOriginFloor = std::floor(yMappingToOrigin);
FloatingType xMappingToOriginFrac = xMappingToOrigin - xMappingToOriginFloor;
FloatingType yMappingToOriginFrac = yMappingToOrigin - yMappingToOriginFloor;
ElementT ndata[4 * 4];
for (int ndatay = -1; ndatay <= 2; ++ndatay)
{
for (int ndatax = -1; ndatax <= 2; ++ndatax)
{
ndata[(ndatay + 1) * 4 + (ndatax + 1)] = image.at(
std::clamp(xMappingToOriginFloor + ndatax, static_cast<FloatingType>(0), image.getWidth() - static_cast<FloatingType>(1)),
std::clamp(yMappingToOriginFloor + ndatay, static_cast<FloatingType>(0), image.getHeight() - static_cast<FloatingType>(1)));
}
}
output.at(x, y) = bicubicPolate(ndata, xMappingToOriginFrac, yMappingToOriginFrac);
}
}
return output;
}
// multiple standard deviations
template<class InputT>
constexpr static Image<InputT> gaussianFigure2D(
const size_t xsize, const size_t ysize,
const size_t centerx, const size_t centery,
const InputT standard_deviation_x, const InputT standard_deviation_y)
{
auto output = TinyDIP::Image<InputT>(xsize, ysize);
auto row_vector_x = TinyDIP::Image<InputT>(xsize, 1);
for (size_t x = 0; x < xsize; ++x)
{
row_vector_x.at(x, 0) = normalDistribution1D(static_cast<InputT>(x) - static_cast<InputT>(centerx), standard_deviation_x);
}
auto row_vector_y = TinyDIP::Image<InputT>(ysize, 1);
for (size_t y = 0; y < ysize; ++y)
{
row_vector_y.at(y, 0) = normalDistribution1D(static_cast<InputT>(y) - static_cast<InputT>(centery), standard_deviation_y);
}
for (size_t y = 0; y < ysize; ++y)
{
for (size_t x = 0; x < xsize; ++x)
{
output.at(x, y) = row_vector_x.at(x, 0) * row_vector_y.at(y, 0);
}
}
return output;
}
// single standard deviation
template<class InputT>
constexpr static Image<InputT> gaussianFigure2D(
const size_t xsize, const size_t ysize,
const size_t centerx, const size_t centery,
const InputT standard_deviation)
{
return gaussianFigure2D(xsize, ysize, centerx, centery, standard_deviation, standard_deviation);
}
template<typename Op, class InputT, class... Args>
constexpr static Image<InputT> pixelwiseOperation(Op op, const Image<InputT>& input1, const Args&... inputs)
{
Image<InputT> output(
recursive_transform<1>(
[&](auto&& element1, auto&&... elements)
{
auto result = op(element1, elements...);
return static_cast<InputT>(std::clamp(
result,
static_cast<decltype(result)>(std::numeric_limits<InputT>::min()),
static_cast<decltype(result)>(std::numeric_limits<InputT>::max())));
},
(input1.getImageData()),
(inputs.getImageData())...),
input1.getWidth(),
input1.getHeight());
return output;
}
template<class InputT>
constexpr static Image<InputT> plus(const Image<InputT>& input1)
{
return input1;
}
template<class InputT, class... Args>
constexpr static Image<InputT> plus(const Image<InputT>& input1, const Args&... inputs)
{
return TinyDIP::pixelwiseOperation(std::plus<>{}, input1, plus(inputs...));
}
template<class InputT>
constexpr static Image<InputT> subtract(const Image<InputT>& input1, const Image<InputT>& input2)
{
assert(input1.getWidth() == input2.getWidth());
assert(input1.getHeight() == input2.getHeight());
return TinyDIP::pixelwiseOperation(std::minus<>{}, input1, input2);
}
template<class InputT = RGB>
requires (std::same_as<InputT, RGB>)
constexpr static Image<InputT> subtract(Image<InputT>& input1, Image<InputT>& input2)
{
assert(input1.getWidth() == input2.getWidth());
assert(input1.getHeight() == input2.getHeight());
Image<InputT> output(input1.getWidth(), input1.getHeight());
for (std::size_t y = 0; y < input1.getHeight(); ++y)
{
for (std::size_t x = 0; x < input1.getWidth(); ++x)
{
for(std::size_t channel_index = 0; channel_index < 3; ++channel_index)
{
output.at(x, y).channels[channel_index] =
std::clamp(
input1.at(x, y).channels[channel_index] -
input2.at(x, y).channels[channel_index],
0,
255);
}
}
}
return output;
}
}
#endif
image.h: The file contains the definition of Image class.
/* Developed by Jimmy Hu */
#ifndef Image_H
#define Image_H
#include <algorithm>
#include <array>
#include <cassert>
#include <chrono>
#include <complex>
#include <concepts>
#include <functional>
#include <iostream>
#include <iterator>
#include <list>
#include <numeric>
#include <string>
#include <type_traits>
#include <variant>
#include <vector>
#include "image_operations.h"
namespace TinyDIP
{
template <typename ElementT>
class Image
{
public:
Image() = default;
Image(const std::size_t width, const std::size_t height):
width(width),
height(height),
image_data(width * height) { }
Image(const std::size_t width, const std::size_t height, const ElementT initVal):
width(width),
height(height),
image_data(width * height, initVal) {}
Image(const std::vector<ElementT>& input, std::size_t newWidth, std::size_t newHeight):
width(newWidth),
height(newHeight)
{
assert(input.size() == newWidth * newHeight);
this->image_data = input; // Deep copy
}
Image(const std::vector<std::vector<ElementT>>& input)
{
this->height = input.size();
this->width = input[0].size();
for (auto& rows : input)
{
this->image_data.insert(this->image_data.end(), std::begin(rows), std::end(rows)); // flatten each row
}
return;
}
constexpr ElementT& at(const unsigned int x, const unsigned int y)
{
checkBoundary(x, y);
return this->image_data[y * width + x];
}
constexpr ElementT const& at(const unsigned int x, const unsigned int y) const
{
checkBoundary(x, y);
return this->image_data[y * width + x];
}
constexpr std::size_t getWidth() const
{
return this->width;
}
constexpr std::size_t getHeight() const
{
return this->height;
}
std::vector<ElementT> const& getImageData() const { return this->image_data; } // expose the internal data
void print()
{
for (std::size_t y = 0; y < this->height; ++y)
{
for (std::size_t x = 0; x < this->width; ++x)
{
// Ref: https://isocpp.org/wiki/faq/input-output#print-char-or-ptr-as-number
std::cout << +this->at(x, y) << "\t";
}
std::cout << "\n";
}
std::cout << "\n";
return;
}
// Enable this function if ElementT = RGB
void print() requires(std::same_as<ElementT, RGB>)
{
for (std::size_t y = 0; y < this->height; ++y)
{
for (std::size_t x = 0; x < this->width; ++x)
{
std::cout << "( ";
for (std::size_t channel_index = 0; channel_index < 3; ++channel_index)
{
// Ref: https://isocpp.org/wiki/faq/input-output#print-char-or-ptr-as-number
std::cout << +this->at(x, y).channels[channel_index] << "\t";
}
std::cout << ")\t";
}
std::cout << "\n";
}
std::cout << "\n";
return;
}
Image<ElementT>& operator+=(const Image<ElementT>& rhs)
{
assert(rhs.width == this->width);
assert(rhs.height == this->height);
std::transform(image_data.cbegin(), image_data.cend(), rhs.image_data.cbegin(),
image_data.begin(), std::plus<>{});
return *this;
}
Image<ElementT>& operator-=(const Image<ElementT>& rhs)
{
assert(rhs.width == this->width);
assert(rhs.height == this->height);
std::transform(image_data.cbegin(), image_data.cend(), rhs.image_data.cbegin(),
image_data.begin(), std::minus<>{});
return *this;
}
Image<ElementT>& operator*=(const Image<ElementT>& rhs)
{
assert(rhs.width == this->width);
assert(rhs.height == this->height);
std::transform(image_data.cbegin(), image_data.cend(), rhs.image_data.cbegin(),
image_data.begin(), std::multiplies<>{});
return *this;
}
Image<ElementT>& operator/=(const Image<ElementT>& rhs)
{
assert(rhs.width == this->width);
assert(rhs.height == this->height);
std::transform(image_data.cbegin(), image_data.cend(), rhs.image_data.cbegin(),
image_data.begin(), std::divides<>{});
return *this;
}
Image<ElementT>& operator=(Image<ElementT> const& input) = default; // Copy Assign
Image<ElementT>& operator=(Image<ElementT>&& other) = default; // Move Assign
Image(const Image<ElementT> &input) = default; // Copy Constructor
Image(Image<ElementT> &&input) = default; // Move Constructor
private:
size_t width;
size_t height;
std::vector<ElementT> image_data;
void checkBoundary(const size_t x, const size_t y)
{
assert(x < width);
assert(y < height);
}
};
}
#endif
base_types.h: The base types declaration
/* Developed by Jimmy Hu */
#ifndef BASE_H
#define BASE_H
#define _USE_MATH_DEFINES
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include <string>
#include <utility>
constexpr int MAX_PATH = 256;
#define FILE_ROOT_PATH "./"
using BYTE = unsigned char;
struct RGB
{
unsigned char channels[3];
};
using GrayScale = BYTE;
struct HSV
{
double channels[3]; // Range: 0 <= H < 360, 0 <= S <= 1, 0 <= V <= 255
};
struct BMPIMAGE
{
char FILENAME[MAX_PATH];
unsigned int XSIZE;
unsigned int YSIZE;
unsigned char FILLINGBYTE;
unsigned char *IMAGE_DATA;
};
#endif
basic_functions.h: The file contains the definition of recursive_transform
/* Developed by Jimmy Hu */
#ifndef BasicFunctions_H
#define BasicFunctions_H
#include <algorithm>
#include <array>
#include <cassert>
#include <chrono>
#include <complex>
#include <concepts>
#include <deque>
#include <execution>
#include <exception>
#include <functional>
#include <iostream>
#include <iterator>
#include <list>
#include <map>
#include <mutex>
#include <numeric>
#include <optional>
#include <ranges>
#include <stdexcept>
#include <string>
#include <tuple>
#include <type_traits>
#include <utility>
#include <variant>
#include <vector>
namespace TinyDIP
{
// recursive_variadic_invoke_result_t implementation
template<std::size_t, typename, typename, typename...>
struct recursive_variadic_invoke_result { };
template<typename F, class...Ts1, template<class...>class Container1, typename... Ts>
struct recursive_variadic_invoke_result<1, F, Container1<Ts1...>, Ts...>
{
using type = Container1<std::invoke_result_t<F,
std::ranges::range_value_t<Container1<Ts1...>>,
std::ranges::range_value_t<Ts>...>>;
};
template<std::size_t unwrap_level, typename F, class...Ts1, template<class...>class Container1, typename... Ts>
requires ( std::ranges::input_range<Container1<Ts1...>> &&
requires { typename recursive_variadic_invoke_result<
unwrap_level - 1,
F,
std::ranges::range_value_t<Container1<Ts1...>>,
std::ranges::range_value_t<Ts>...>::type; }) // The rest arguments are ranges
struct recursive_variadic_invoke_result<unwrap_level, F, Container1<Ts1...>, Ts...>
{
using type = Container1<
typename recursive_variadic_invoke_result<
unwrap_level - 1,
F,
std::ranges::range_value_t<Container1<Ts1...>>,
std::ranges::range_value_t<Ts>...
>::type>;
};
template<std::size_t unwrap_level, typename F, typename T1, typename... Ts>
using recursive_variadic_invoke_result_t = typename recursive_variadic_invoke_result<unwrap_level, F, T1, Ts...>::type;
template<typename OutputIt, typename NAryOperation, typename InputIt, typename... InputIts>
OutputIt transform(OutputIt d_first, NAryOperation op, InputIt first, InputIt last, InputIts... rest) {
while (first != last) {
*d_first++ = op(*first++, (*rest++)...);
}
return d_first;
}
// recursive_transform for the multiple parameters cases (the version with unwrap_level)
template<std::size_t unwrap_level = 1, class F, class Arg1, class... Args>
constexpr auto recursive_transform(const F& f, const Arg1& arg1, const Args&... args)
{
if constexpr (unwrap_level > 0)
{
recursive_variadic_invoke_result_t<unwrap_level, F, Arg1, Args...> output{};
transform(
std::inserter(output, std::ranges::end(output)),
[&f](auto&& element1, auto&&... elements) { return recursive_transform<unwrap_level - 1>(f, element1, elements...); },
std::ranges::cbegin(arg1),
std::ranges::cend(arg1),
std::ranges::cbegin(args)...
);
return output;
}
else
{
return f(arg1, args...);
}
}
}
#endif
The testing code
/* Developed by Jimmy Hu */
#include "image.h"
#include "basic_functions.h"
int main()
{
auto img1 = TinyDIP::Image<GrayScale>(10, 10, 4);
auto img2 = TinyDIP::Image<GrayScale>(10, 10, 3);
auto img3 = TinyDIP::Image<GrayScale>(10, 10, 2);
auto img4 = TinyDIP::Image<GrayScale>(10, 10, 1);
auto output = TinyDIP::pixelwiseOperation(
[](auto&& pixel_in_img1, auto&& pixel_in_img2, auto&& pixel_in_img3, auto&& pixel_in_img4)
{
return 2 * pixel_in_img1 + pixel_in_img2 - pixel_in_img3 * pixel_in_img4;
},
img1, img2, img3, img4
);
output.print();
return 0;
}
Test platform
MacOS: g++-11 (Homebrew GCC 11.1.0_1) 11.1.0
All suggestions are welcome.
The summary information:
Which question is it a follow-up to?
Tests for the operators of image template class in C++ and
A recursive_transform template function for the multiple parameters cases in C++
What changes have been made in the code since the last question?
I am attempting to extend the element-wise operations mentioned in this post.
Why a new review is being asked for?
If there is any possible improvement, please let me know.
Answer: #ifndef M_PI
#define M_PI 3.14159265358979323846
#endif
Don't use #define; since you marked this as C++20, use the constants supplied in the standard library (e.g. std::numbers::pi from the <numbers> header).
constexpr int MAX_PATH = 256;
#define FILE_ROOT_PATH "./"
Use the path type, not fixed-size arrays of characters, for filename manipulation. Don't use #define.
using BYTE = unsigned char;
There is a std::byte type now. If that is not what you want, you ought to use uint8_t to avoid confusion and be consistent with standard code.
struct RGB
{
unsigned char channels[3];
};
Come on, you just defined BYTE on the previous line! Why don't you use it?
struct BMPIMAGE
{
char FILENAME[MAX_PATH];
unsigned int XSIZE;
unsigned int YSIZE;
unsigned char FILLINGBYTE;
unsigned char *IMAGE_DATA;
};
Use std::filesystem::path rather than a fixed array of characters. Use int32_t or whatever, rather than the implementation-dependent unsigned int. Prefer signed types unless you really need that one extra bit of range, or are doing bitwise operations.
You defined BYTE and probably mean that here, but don't use it. Use uint8_t or std::byte for these.
You might consider using a 2-D point class rather than separate height and width fields. You may find it is common to have x and y things always being used together, for positions and sizes. | {
"domain": "codereview.stackexchange",
"id": 41809,
"tags": "c++, image, template, lambda, c++20"
} |
What is this salt-water worm? | Question: We found this near Fort Phoenix in Buzzards Bay in Massachusetts. It gets long when it's swimming (about 8 inches) and contracts when disturbed (2 inches).
Its head is shaped a bit like a diamond (like a cobra?). Its body looks fairly translucent and is ribbon-shaped.
Here's a short video: https://vimeo.com/240322590
Answer: It is possibly a milky ribbon worm (Cerebratulus lacteus).
source: intertidal-novascotia.blogspot.com
They can be found anywhere and everywhere along the Atlantic coastline in healthy abundance - http://intertidal-novascotia.blogspot.com/2012/05/cerebratulus-lacteus-milky-ribbon-worm.html | {
"domain": "biology.stackexchange",
"id": 7915,
"tags": "species-identification"
} |
What causes straight fatty molecules to form a lattice (e.g. saturated fats)? | Question: What causes multiple chains of saturated fats to pack together and form a solid? It is said that because the chains are straight, they form a lattice and line up more easily.
But if I had a bunch of straight sticks and drop them into a pile, they don't magically form a lattice where everything lines up. So is there something about the fatty chains that causes them to line up and pack closely together into a lattice?
Answer: Nature abhors a vacuum, and your sticks aren't sticky.
When you have a jumble of sticks, there's lots of voids and spaces between them - the pile of sticks takes up much more room than the bundle. For wooden sticks, this is no problem, as air can easily fill in the voids between them. With saturated fats, this isn't possible. There really isn't anything that can fill the voids. The only way to fill them is to line up the chains in an ordered lattice.
Also, when the chains are in an ordered lattice they're making a large number of van der Waals interactions between them. Because the chains are straight, the best way to maximize these favorable attractive interactions is to pack the chains together in an ordered array. The van der Waals forces are comparatively small, but over a long chain it adds up. (I haven't ever seen this done, but I'm guessing that if you coated your wooden sticks with something sticky like honey, and shook them around a bit, your wooden sticks would also want to line up with each other and stick together in an ordered array.)
Temperature, of course, can overcome both effects. The additional kinetic energy in the molecules allows them to overcome the attractive interactions between the molecules, and higher temperatures mean that the molecules are moving faster, pushing away other molecules and effectively filling in the voids to some extent. When you get to a high enough temperature, the fat melts.
In unsaturated fats, the chains are kinked. This means that it's much harder to line them up to form all those van der Waals interactions, and there are intrinsically voids in even the most ideal packing. Thus it's much easier to break those interactions and add the additional voids needed for a liquid state, so the solid to liquid transition happens at a lower temperature.
One thing to keep in mind is that solid/liquid transitions are not absolute. Where the solid-to-liquid phase transition happens is based on the relative stability of the solid versus the liquid. It isn't that the saturated fat can form a solid and the unsaturated fat can't; it's just that solid saturated fat is more stabilized relative to liquid saturated fat than solid unsaturated fat is relative to liquid unsaturated fat. As the difference in stability is much greater for saturated fat, it takes a higher temperature to melt the saturated fat than the unsaturated one.
"domain": "chemistry.stackexchange",
"id": 5144,
"tags": "intermolecular-forces, fats"
} |
Gravitational force and potential in infinite slab | Question: Let's say that we have an infinite slab of height $2h$ and mass density $\rho$. Let's define $x,y$ as the axes parallel to the slab and $z$ as the perpendicular one, with $z=0$ at the middle of the slab. I want to calculate the gravitational potential and the force as a function of $z$ inside the slab.
Due to the symmetry of the system, Gauss's theorem works wonders. I get $\vec{g} = -4\pi G \rho z \hat{k}$, with $\hat{k}$ the unit vector along $z$. Integrating, I get $\Phi(z) = 2\pi G \rho z^2$.
However the provided solution is $\vec{g} = -8\pi G \rho z \hat{k}$ and $\Phi(z) = 4\pi G \rho z^2$, and I cannot guess where the factor of 2 comes from.
What did I miss in the derivation?
Edit: My derivation is:
Starting with Gauss
$\iint_{\partial V}\vec{g}\cdot d\vec{A} = -4 \pi G M$
Due to symmetry we have only force in $z$, so let $V$ be a cylinder of base $A$ and height $2z$ centered on $z=0$, then:
$g \cdot 2A = -4\pi G(\rho \cdot 2 z A)$
$g = -4\pi G\rho z$
Integrating to get the potential, $\nabla \Phi = -\vec{g}$
$\Phi(z) = 2\pi G\rho z^2$
Answer: I think that your answer is correct and the provided answer is wrong. Start with Gauss's law for gravity, $-\nabla\cdot\vec g = 4\pi G\rho$, or equivalently Poisson's equation for gravity, $\nabla^2\phi = 4\pi G\rho$.
From the symmetry of the problem, $\phi$ has no $x$ or $y$ dependence:
$$\nabla^2\phi=\frac{\partial^2\phi}{\partial z^2}=4\pi G\rho$$
Integrate twice:
$$\phi = 2\pi G\rho z^2 + C_1 z + C_2$$
Since the problem is symmetric between $z$ and $-z$, $C_1$ must be zero, and we can make $C_2$ zero by choice of gauge. Then:
$$\vec g = -\nabla\phi = -\frac{\partial\phi}{\partial z}\hat k = -4\pi G\rho z\hat k$$
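As a numerical cross-check (my own sketch, with illustrative units $G=\rho=h=1$): superpose the fields of thin infinite sheets, each of which pulls with $2\pi G\rho\,dz'$ toward itself:

```python
import numpy as np

G = rho = h = 1.0                       # illustrative units, not physical values
N = 200_000
zp = np.linspace(-h, h, N)              # positions of the thin sheets
dz = 2 * h / N                          # thickness of each sheet

def g_numeric(z):
    # each infinite sheet pulls with 2*pi*G*rho*dz toward itself
    return np.sum(-2 * np.pi * G * rho * dz * np.sign(z - zp))

for z in (0.25, 0.9):
    print(z, g_numeric(z), -4 * np.pi * G * rho * z)
```

The two columns agree to within the discretization error, confirming $\vec g = -4\pi G\rho z\hat k$ inside the slab.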
in accord with your solution. | {
"domain": "physics.stackexchange",
"id": 25989,
"tags": "homework-and-exercises, gravity, newtonian-gravity, symmetry"
} |
Why are red blood cells considered to be cells? | Question: Wikipedia states that a cell is
the basic structural, functional and biological unit of all known living organisms. Cells are the smallest unit of life that can replicate independently.
It then goes on to state that
All cells (except red blood cells which lack a cell nucleus and most organelles to accommodate maximum space for hemoglobin) possess DNA.
Then why are red blood cells still considered cells, while they can't replicate? Is the definition on Wikipedia just a bad definition? Or are red blood cells wrongly considered cells, but remain so for historical reasons? Or are they considered cells for some other reason, such as this answer which states that red blood cells do contain a nucleus at some point?
Answer: A very good question, and it is most likely because of the last option. It had a nucleus for part of its life. After the RBC jettisons its nucleus, it still remains very metabolically active for approximately 3 months. It maintains its cell membrane integrity, it metabolizes glucose, it interacts constantly with its environment, numerous cellular functions and structure remain intact... It is extremely specialized for a primary purpose, and no longer requires the nucleus to provide more proteins. It has limited capacity to heal from injury, so it has a limited life span.
Speculation: I wonder if it might lose the nucleus early on so that when it is destroyed in the spleen at the end of its life as RBCs are, the spleen macrophages are not overwhelmed with additional processing of nucleic acids? Macrophage type cells are already working hard in there to clear infectious agents and some immune cells from the blood. | {
"domain": "biology.stackexchange",
"id": 2233,
"tags": "cell-biology, hematology, red-blood-cell"
} |
How can I prove stability of a biquad filter with non-zero initial conditions | Question: Ok, so the situation is that I have a DFII biquad with some filter coefficients:
\begin{align}
w[n] &= x[n] - a_1*w[n-1] - a_2*w[n-2]\\
y[n] &= b_0*w[n] + b_1*w[n-1] + b_2*w[n-2]
\end{align}
While the filter is running, I change the coefficients and I believe my filter is going unstable. I want to build in a check that it is going to be stable. Currently, I simply check that the poles lie inside the unit circle by checking my $a_n$ coefficients. But it doesn't work. I believe that the problem is that I have non-zero values in my states (i.e. $w[-1]$ and $w[-2]$) and my stability check doesn't even consider this. I cannot reset my states while the filter is running without producing an audible click.
The questions: How do I check for stability of a filter with non-zero initial conditions?
I have a couple thoughts for solutions:
Simply take the $\mathcal Z$-transform of the impulse response (of the feedback portion), which looks something like this:
$$
h[n] =
\begin{cases}
0 & n < 0\\
1 - a_1*K_1 - a_2*K_2 & n = 0\\
0 - a_1*h[0] - a_2*K_1 & n = 1\\
- a_1*h[n-1] - a_2*h[n-2] & n \ge 2
\end{cases}
$$
Solve the difference equation à la differential equations:
$$
h[n] + a_1*h[n-1] + a_2*h[n-2] = 0, n \ge 2
$$
Or prove that the sequence $h[n]$ (from above) converges
$$ \sum_{n=0}^{\infty}\lvert h[n]\rvert < \infty $$
Am I on the right track? I can't seem to figure out the solution for any of these methods.
Answer: I found a solution that I believe is the proper math I was looking for, although Hilmar's answer is also quite accurate and helpful. I found information in my old DSP textbook about the "Unilateral Z-Transform", which is briefly defined here: https://ccrma.stanford.edu/~jos/filters/Z_Transform.html. The important part is the properties, some of which are described here: http://web.eecs.umich.edu/~aey/eecs451/lectures/zir.pdf. The important property is the delay property:
$$
x[n-1] \overset{\mathcal{UZ}}{\iff} z^{-1}\mathcal{X}^+(z)+x[-1]
$$
This was critical for my needs because it can be used to serve initial conditions. The next thing the book shows me is the application of this property for a two-sample delay:
$$
w[n] = y[n-1] = x[n-2] \\
\mathcal{W}^+(z) = x[-2] + x[-1]z^{-1} + z^{-2}\mathcal{X}^+(z)
$$
Applying this to the general form a 2nd order feedback filter I get
$$
y[n] - a_1y[n-1] - a_2y[n-2] = x[n] \\
h[n] - a_1h[n-1] - a_2h[n-2] = \delta[n] \\
\mathcal{H}^+(z) - a_1 \left( z^{-1}\mathcal{H}^+(z) + h[-1] \right) - a_2\left( z^{-2}\mathcal{H}^+(z) + z^{-1}h[-1] + h[-2] \right) = 1 \\
\mathcal{H}^+(z) \left(1 - a_1z^{-1} - a_2z^{-2} \right) = 1 - \left(a_1h[-1] + a_2h[-2] + a_2h[-1]z^{-1} \right)
$$
This gives me a formula with two parts, which my textbook refers to as the zero-state response + zero-input response
$$
\mathcal{H}^+(z) = \frac{1}{1 - a_1z^{-1} - a_2z^{-2}} - \frac{a_1h[-1] + a_2h[-2] + a_2h[-1]z^{-1}}{1 - a_1z^{-1} - a_2z^{-2}}
$$
I guess that means the general equation for any causal biquad with initial conditions is
$$
\mathcal{H}^+(z) = \frac{b_0 + b_1z^{-1} + b_2z^{-2}}{ 1 - a_1z^{-1} - a_2z^{-2}} - \frac{a_1h[-1] + a_2h[-2] + a_2h[-1]z^{-1}}{1 - a_1z^{-1} - a_2z^{-2}}
$$
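The pole check implied by this result can be sketched as follows (my own code, using the sign convention of the equations above, where the denominator is $1 - a_1z^{-1} - a_2z^{-2}$; the example coefficients are made up):

```python
import numpy as np

def poles_stable(a1, a2):
    # poles of 1 - a1*z^-1 - a2*z^-2 are the roots of z^2 - a1*z - a2
    return bool(np.all(np.abs(np.roots([1.0, -a1, -a2])) < 1.0))

def zero_input_response(a1, a2, y_m1, y_m2, n=200):
    # y[n] = a1*y[n-1] + a2*y[n-2], driven only by initial conditions y[-1], y[-2]
    out = np.empty(n)
    for i in range(n):
        y = a1 * y_m1 + a2 * y_m2
        out[i] = y
        y_m1, y_m2 = y, y_m1
    return out

print(poles_stable(0.5, -0.25))                        # a stable example
print(zero_input_response(0.5, -0.25, 1.0, 1.0)[-1])   # transient decays toward zero
```

With poles inside the unit circle, the zero-input response decays regardless of the initial state — consistent with the conclusion that the feedback coefficients alone determine stability, while initial conditions only scale the transient.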
The result is that the feedback coefficients are the only things that can cause instability. However, maybe the initial conditions cause audio to go outside of a (stable) bound. | {
"domain": "dsp.stackexchange",
"id": 1245,
"tags": "infinite-impulse-response, z-transform, biquad"
} |
Is it possible to set xbox controller rumble using JoyFeedback? | Question:
I see when I use rxgraph while running the joy_node for an xbox controller that it does not subscribe to JoyFeedback or JoyFeedbackArray. Is there any other way to set the rumble for the xbox controller?
Originally posted by kb4722 on ROS Answers with karma: 21 on 2013-07-16
Post score: 2
Original comments
Comment by lucasw on 2013-10-24:
I'm considering buying a Logitech Rumble Gamepad F510, I'd really like to know if ROS can set the rumble there also (or if rumble works or it from Linux at all). The p3joy page mentions supporting the JoyFeedbackArray ( http://mirror.umd.edu/roswiki/ps3joy.html?distro=hydro ).
Comment by Cyril Jourdan on 2015-03-30:
Did you manage to get the xbox controller rumble working from ROS ?
Answer:
I made this feedback node a while ago (and haven't tried it recently):
https://github.com/lucasw/joy_feedback_ros
It uses a custom message type rather than JoyFeedback.
Originally posted by lucasw with karma: 8729 on 2015-06-15
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 14935,
"tags": "ros, joy-node, joy, joystick"
} |
Is Bluetooth Low Energy jamming possible with an SDR like the HackRF on GNURadio? | Question: It seems like BLE jamming is technically possible by just jamming the 3 advertising channels 37, 38 and 39 https://github.com/lws803/BLE-jammer and it has been proven to work on an STM32 + 3x NRF24l01 transceivers. Can the HackRF switch between these 3 frequencies fast enough to produce a consistent jamming effect?
Disclaimer: I understand that this is illegal and I would like to know about this for educational purposes only.
Answer: I am not revealing any big secrets here on jamming and anti-jamming techniques, nor would I condone creating any such interference. What I am about to say is quite simplistic and well known, but knowing more details in how jamming can take place and being more educated on it in general can help good actors in minimizing vulnerabilities in future designs.
Yes this is completely feasible but I would do this much more simply:
Create two sinusoidal tones, one at 15 MHz and one at 39 MHz.
Upconvert these tones using a carrier at 2441 MHz.
This will produce two symmetric sidebands for each tone, with tones at:
2402 MHz
2426 MHz
2456 MHz
2480 MHz
So there is one additional channel rather than just the targeted three channels in the hopping approach, but still significantly better and more effective than trying to jam the entire spectrum from 2402 to 2480 MHz, and conveniently the suppressed carrier that is at 2441 MHz is in between channels.
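The mixing arithmetic can be sanity-checked in a few lines (a sketch; all frequencies in MHz):

```python
carrier = 2441            # MHz, chosen to sit between BLE channels
tones = [15, 39]          # MHz, baseband tones before upconversion
sidebands = sorted(carrier + s * t for t in tones for s in (+1, -1))
print(sidebands)          # [2402, 2426, 2456, 2480]
```

Advertising channels 37, 38 and 39 sit at 2402, 2426 and 2480 MHz, so three of the four mixing products land exactly on them.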
These tones themselves can be modulated if necessary to spread the energy over the channel bandwidth, since single tone jammers can be easily defeated with tone excision. By spreading the bandwidth, it becomes a simple SNR competition and it can't be defeated given sufficient energy, as the duration of the signal occupation will be continuous in time.
If no sources were readily available, below shows a feasible implementation for comparison to alternative approaches. A custom 2 layer circuit board layout would be sufficient and is \$65 from Express PCB , (the sensitive RF areas should be minimized in pcb track lengths, and for this application there would be no issue implementing this design on such a pcb with attention to proper RF layout techniques in those areas)- so a total cost of \$127 not including the low frequency noise source or DC supplies and DC control voltages. For single tones the low frequency noise source can be replaced with the same circuit as shown for the 51 MHz LO since the crystal oscillator (XO) can also be programmed for a 12 MHz output frequency. The noise source (modulated and filtered 12 MHz waveform) would be ideal for jamming purposes. The optional "carrier nulling" shown is a small DC offset (coupled in with a resistor) that can be adjusted to null any residual carrier feedthrough leakage at 2441 MHz if desired. Not shown in the spectrum above, but this would also result in suppressed carriers from the first upconverter stage in the output at 2413 MHz and 2467 MHz (also conveniently in between channels). The nulling control for this is in having a small DC offset control in the originating noise source. The nulling is feasible (and effective!) since the IF ports of both mixers have a response to DC. | {
"domain": "dsp.stackexchange",
"id": 9465,
"tags": "gnuradio, spread-spectrum"
} |
Django post function : split method is a good way? | Question: I'm working on my Django project and I'm trying to develop this one with 'Class Based View' (CBV) method in order to keep the code maintainable.
I have a class named FreepubHome with some methods and a very important post method. I would like to split this method into three methods and I don't know how I can do that.
This post method let to:
fill and submit the form
create a token
send an e-mail
So I would like to get a post function which let to fill and submit my form, call the token method and the sending method.
It's very important for me to understand the process, the methodology and how I can do that in order to simplify others methods that are very important.
This is my class :
from django.template.loader import get_template
from django.core.mail import EmailMultiAlternatives
from django.views.generic import CreateView
import hashlib
from .models import Publication, Document, Download
class FreepubHomeView(CreateView):
""" Render the home page """
template_name = 'freepub/index.html'
form_class = CustomerForm
def get_context_data(self, **kwargs):
kwargs['document_list'] = Document.objects.all().order_by('publication__category__name')
return super(FreepubHomeView, self).get_context_data(**kwargs)
@staticmethod
def create_token(self, arg1, arg2, datetime):
# Create token based on some arguments
plain = arg1 + arg2 + str(datetime.now())
token = hashlib.sha1(plain.encode('utf-8')).hexdigest()
return token
@staticmethod
def increment(model):
model.nb_download = model.nb_download + 1
return model.save()
def post(self, request, *args, **kwargs):
form = self.form_class()
document_choice = request.POST.getlist('DocumentChoice')
if request.method == 'POST':
form = self.form_class(request.POST)
for checkbox in document_choice:
document_edqm_id = Document.objects.get(id=checkbox).edqm_id
publication_title = Document.objects.get(id=checkbox).publication.title
email = request.POST['email']
token = self.create_token(email, document_edqm_id, datetime)
Download.objects.create(email=email, pub_id=checkbox, token=token)
document_link = Document.objects.get(id=checkbox).upload #gives media/file
document_link2 = Download.objects.get(token = token).pub_id #gives document id
print(document_link)
print(document_link2)
context = {'document_link': document_link,
'publication_title': publication_title}
if form.is_valid():
message = get_template('freepub/message.txt').render(context)
html_message = get_template('freepub/message.html').render(context)
subject = 'EDQM HelpDesk and Publications registration'
mail = EmailMultiAlternatives(subject, message, 'freepub@edqm.eu', [email])
mail.attach_alternative(html_message, "text/html")
#mail.attach_file(document_link.path) # Add attachement
mail.send(fail_silently=False)
print('Email envoyé à ' + email)
messages.success(request, str(
publication_title) + '\n' + 'You will receive an e-mail with your access to ' + document_edqm_id)
# Update number of downloads for document and publication
document = Document.objects.get(id=checkbox)
document = self.increment(document)
publication_id = Document.objects.get(id=checkbox).publication.id
publication = Publication.objects.get(id=publication_id)
publication = self.increment(publication)
else:
print('form invalid')
return HttpResponseRedirect(self.get_success_url())
def get_success_url(self):
return reverse('freepub-home')
These are the classes in models.py: (Not for review)
class Document(models.Model):
FORMAT_CHOICES = (
('pdf', 'PDF'),
('epub', 'ePUB'),
)
LANGUAGE_CHOICES = (
('FR', 'FR'),
('EN', 'EN'),
)
edqm_id = models.CharField(max_length=12, verbose_name=_('publication ID'), unique=True, default='')
language = models.CharField(max_length=2, verbose_name=_('language'), choices=LANGUAGE_CHOICES, null=False)
format = models.CharField(max_length=10, verbose_name=_('format'), choices=FORMAT_CHOICES, null=False)
title = models.CharField(max_length=512, verbose_name=_('document title'), null=False)
publication = models.ForeignKey(Publication, verbose_name=_('publication title'), null=False, related_name='documents')
upload = models.FileField(upload_to='media/', validators=[validate_file_extension])
creation_date = models.DateTimeField(auto_now_add=True, verbose_name=_('creation date'), null=False)
modification_date = models.DateTimeField(auto_now=True, verbose_name=_('modification date'), null=False)
nb_download = models.IntegerField(verbose_name=_('number of download'), default=0)
class Meta:
verbose_name = _('document')
verbose_name_plural = _('document')
def __str__(self):
return f"{self.edqm_id} : {self.title}"
class Download(models.Model):
email = models.CharField(max_length=150, verbose_name=_('e-mail'), null=False)
pub = models.ForeignKey(Document, verbose_name=_('document'), null=False)
token = models.CharField(max_length=40, verbose_name=_('download token'), unique=True, null=False)
download_date = models.DateTimeField(auto_now_add=True, verbose_name=_('download date'), null=False)
expiration_date = models.DateTimeField(auto_now=True, verbose_name=_('expiration date'), null=False)
nb_download = models.IntegerField(verbose_name=_('usage'), default=0)
class Meta:
verbose_name = _('download')
verbose_name_plural = _('download')
def __str__(self):
return f"{self.email} : {self.token}"
Answer: (I don't know Django and so some aspects of my review may be wrong)
I first removed most of your functions, as things like increment aren't really that helpful.
It also leaves your code with everything in one place, so that when you try to improve it you can see everything.
I then used guard clauses to reduce the amount of indentation post needs.
Take the following change for example: with it you know immediately that you only perform actions on POST requests, whereas with your code it would take longer to see that.
if request.method != 'POST':
return HttpResponseRedirect(self.SUCCESSFUL_URL)
Assuming Document.objects.get(id=checkbox) doesn't have any side effects, then I'd just make it a variable.
I would reduce the amount of variables you have. Most of your lines of code were just variables that are used once.
# Original
context = {'publication_title': Document.objects.get(id=checkbox).publication.title}
# With (3)
context = {'publication_title': document.publication.title}
With (3) all you have to do is add document to your variable, and it removes a line of code. And so it improves readability at the expense of having to write document a couple more times.
I'd hope Django objects support += and so you can change increment to use it.
model.nb_download += 1
I'd make a function email that takes a couple of arguments but performs all EmailMultiAlternatives and get_template handling.
from django.template.loader import get_template
from django.core.mail import EmailMultiAlternatives
from django.views.generic import CreateView
import hashlib
from .models import Publication, Document, Download
def gen_token(*values):
plain = ''.join([str(i) for i in values] + [str(datetime.now())])
return hashlib.sha1(plain.encode('utf-8')).hexdigest()
class FreepubHomeView(CreateView):
""" Render the home page """
template_name = 'freepub/index.html'
form_class = CustomerForm
SUCCESSFUL_URL = reverse('freepub-home')
def get_context_data(self, **kwargs):
kwargs['document_list'] = Document.objects.all().order_by('publication__category__name')
return super(FreepubHomeView, self).get_context_data(**kwargs)
def email(self, email, upload, title, edqm_id):
context = {
'document_link': upload,
'publication_title': title
}
subject = 'EDQM HelpDesk and Publications registration'
message = get_template('freepub/message.txt').render(context)
mail = EmailMultiAlternatives(subject, message, 'freepub@edqm.eu', [email])
html_message = get_template('freepub/message.html').render(context)
mail.attach_alternative(html_message, "text/html")
#mail.attach_file(document.upload.path) # Add attachement
mail.send(fail_silently=False)
print('Email envoyé à ' + email)
messages.success(request, str(title) + '\n' + 'You will receive an e-mail with your access to ' + edqm_id)
def post(self, request, *args, **kwargs):
if request.method != 'POST':
return HttpResponseRedirect(self.SUCCESSFUL_URL)
form = self.form_class(request.POST)
email = request.POST['email']
for checkbox in request.POST.getlist('DocumentChoice'):
document = Document.objects.get(id=checkbox)
token = gen_token(email, document.edqm_id)
Download.objects.create(email=email, pub_id=checkbox, token=token)
if not form.is_valid():
print('form invalid')
continue
self.email(email, document.upload, document.publication.title, document.edqm_id)
document.nb_download += 1
document.save()
publication = Publication.objects.get(id=document.publication.id)
publication.nb_download += 1
publication.save()
return HttpResponseRedirect(self.SUCCESSFUL_URL) | {
"domain": "codereview.stackexchange",
"id": 31967,
"tags": "python, object-oriented, python-3.x, django"
} |
Which maintainability metric has the strongest empirical evidence? | Question: There are lots of code metrics that claim to measure maintainability, e.g. CK and Li & Henry metric suits. However, it seems elusive to me to find a meta-study that compares different metrics to decide their current empirical status.
In contrast, here's a quite brilliant such study about fault prediction: https://romisatriawahono.net/lecture/rm/survey/software%20engineering/Software%20Fault%20Defect%20Prediction/Radjenovic%20-%20Software%20fault%20prediction%20metrics%20-%202013.pdf
I'm looking for something similar for maintainability/changeability prediction.
To be clear, there are multiple meta-studies on maintainability metrics, but they don't seem to compare their evidence, more like listing what's been researched about and what not.
Answer: This answer is based on the study Empirical evidence on the link between object-oriented measures and external quality attributes: a systematic literature review (PDF) by Ronald Jabangwe, Juurgen Borstler, Darja Smite, Claes Wohlin, 2014. They selected 99 papers to review in total.
"Vote counting" here means to get a point (or two) for each study that shows a strong connection between the specific metric and maintainability.
Description of important metrics
WMC = Weighted method for classes.
Weighted methods for Class (WMC) was originally proposed by C&K as the sum of all the complexities of the methods in the class [3].
LOC = Lines of code (the granularity — class, function, library, or whole code-base — is not specified).
RFC = Response for class
This is the size of the Response set of a class. The Response set for a class is defined by C&K as 'a set of methods that can potentially be executed in response to a message received by an object of that class' [3].
CBO = Coupling between objects
Coupling between objects (CBO) is a count of the number of classes that are coupled to a particular class i.e. where the methods of one class call the methods or access the variables of the other [3].
DIT = Depth of inheritance tree
Depth of Inheritance Tree (DIT) is a count of the classes that a particular class inherits from [3].
NOC = Number of children
Number of Children (NOC) is defined by C&K as the number of immediate subclasses of a class [3].
Quotes from the study
There are insufficient numbers of studies on maintainability to draw conclusions. Nevertheless, Fig. 7 [picture above] shows that there is a potential link between maintainability, and measures that quantify complexity and cohesion properties.
Results from our systematic review suggest that inheritance measures have a weak link with reliability and maintainability across studies, particularly the two inheritance measures DIT and NOC.
The study also notes that code metrics must be regarded in an organizational context:
team structure and team strategy can vary across development settings, and studies show that such organizational characteristics have an impact on quality | {
"domain": "cs.stackexchange",
"id": 19233,
"tags": "empirical-research, metrics"
} |
How to compute the chance of failing to detect a gene given the detection limit of a protocol | Question: In Shapiro et al., when discussing loss of molecules as a source of error in single-cell sequencing, it is written that:
Another source of error is losses, which can be severe. The detection limit of published protocols is $5$–$10$ molecules of mRNA. If, as seems likely, the limit of detection is primarily determined by losses during sample preparation, this would indicate that $80$–$90\%$ of mRNA was lost. Or, to put it the other way around, a $90\%$ loss leads to an approximately $50\%$ chance of failing to detect a gene that is expressed at a level of seven mRNA molecules (from the binomial distribution).
How is this probability computed using the binomial distribution? I thought that $90\%$ loss corresponds to $5$ detected molecules, and I assume that $k=7$ for the binomial calculation, but I am unable to go further.
Answer: A 90% loss can be rephrased as a 10% chance of detecting any given molecule. So what we want to find is the probability of detecting 0 molecules, when we start with 7 and have a 10% probability of success. One can do that in R as follows:
> pbinom(0, 7, 0.1)
0.4782969
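Since pbinom(0, k, p) is just (1 − p)^k, the quoted probabilities reduce to one line each (a Python sketch, p = 0.1):

```python
p = 0.1                      # per-molecule detection probability after 90% loss
print((1 - p) ** 7)          # P(detect nothing from 7 molecules) ≈ 0.478
for k in (4, 3, 2, 1):
    # probability of detecting at least one of k starting molecules
    print(k, round(1 - (1 - p) ** k, 2))
```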
So ~50%, as they stated. I suspect that part of the confusion arises from the fact that the detection limit is due to lowly-expressed genes/transcripts being heavily affected by this loss. So the probability of detecting at least one molecule out of 4 original molecules (assuming 90% mRNA loss) is 34%, for 3 molecules it's 27%, for 2 it's 19% and for 1 it's 10%. I think the threshold of 5 is mentioned more because it's a nice round number than because there's anything particularly different in detecting a gene with 4 vs. 5 molecules (34 vs. 41% probability). | {
"domain": "bioinformatics.stackexchange",
"id": 344,
"tags": "rna-seq, rna, single-cell"
} |
What is the meaning of 'clutch fill'? | Question: What is the meaning of clutch fill?
See for example this paper.
Answer: The paper you referenced defines what the term means.
To ensure precise synchronization [when changing the drive path from one clutch to another one], before clutch engagement, it is necessary to actuate the oncoming clutch to a position where the clutch packs are in contact. This process is called clutch fill... | {
"domain": "engineering.stackexchange",
"id": 694,
"tags": "mechanical-engineering, fluid-mechanics, automotive-engineering, terminology"
} |
Do (classic) experiments of Compton scattering involve bound electrons? | Question: In Compton scattering, all theoretical derivations I've seen consider the case of a photon interacting with a free electron. However, in experiments of Compton scattering I see that photons are directed at an aluminum target (such as here) or a graphite target (such as here).
This makes it seem that the experiments are not dealing with free electrons but rather bound electrons. So then I would ask, how come experiments are dealing with bound electrons if the effect is meant to apply to free electrons? Is there anything I'm missing or misunderstanding?
Edit: Apparently my question has been marked as duplicate of How can a photon collide with an electron?, but I don't understand what possible similarity that question has with mine. I'm just trying to clarify the distinction/contrast between the idealized treatments of Compton scattering vs tests of Compton scattering in the real world.
The justification (which others in the answers have adequately provided) for the difference between theory and experiment is usually absent from many online sources, so I thought it would be interesting to have something here that clarifies this.
My question has nothing to do with the cited post, which asks how photons can collide to begin with, nor do I think any responses there even answer my question. (If I'm mistaken, you can point out a specific quote, and I will stand corrected). I'm not asking how they can collide.
Answer: In practice, it's difficult to keep a sufficiently dense collection of free electrons together to do the experiment. So, if you want to approximate pure Compton scattering, you use energetic gamma rays with a target that contains no elements with tightly bound electrons. The electrons are thus almost free, effectively.
In other situations, Compton-like inelastic scattering of less energetic photons from tightly bound electrons occurs, but the kinematics and cross section are modified.
As for Compton versus photoelectric interactions, they can be difficult to distinguish. They differ in that for the photoelectric effect the photon is completely absorbed, for the Compton effect it isn't. The easy thing to detect is the recoiling electron, but whether a scattered photon emerges or not can be difficult to determine.
We sometimes surround photoelectric x-ray detectors with "active shields". An active shield is itself a gamma ray detector.
A photon that scatters in the detector has a high probability of scattering into the shield and being detected. So, we reject apparent x-ray detections that are accompanied by a detection in the shield. This also rejects photons that leak through the shield, as long as they scatter at least once on their way through. | {
"domain": "physics.stackexchange",
"id": 91991,
"tags": "photons, experimental-physics, electrons, scattering"
} |
Is it possible to know that an object is at a certain location without observing any of its interactions? | Question: Let's say there is a cube in deep space. There is no light or force being applied to the cube. Is it possible to know if the cube exists in this scenario?
Sorry if the question seems too simplistic. I was just curious if scientists had non-invasive ways of knowing an object exists.
Answer: Bodies radiate EM waves as a function of their temperature. So, if the cube is above 0 K, its presence can theoretically be detected. | {
"domain": "physics.stackexchange",
"id": 72396,
"tags": "experimental-physics, observers"
} |
Computational modelling of sand pile | Question: I'm currently working on a computational model of the way sand acts when being pushed around, say, by a bulldozer. Specifically, I'm trying to determine the dynamic equations which govern the way that sand will move under gravity from an arbitrary intial configuration (say, a narrow tower) toward a position of equilibrium (the stationary pile formed after the tower collapses). That way I can apply the material shifts from the dozer while applying these equations at regular timesteps to achieve what is hopefully a physically representative model.
I'm modelling the sand using a heightmap (a 2D grid where each cell holds the corresponding height of the sand at that point), and I'm currently treating height like "concentration" and using particle diffusion equations in 2D:
$$\frac{\partial C(x, t)}{\partial t}=D\cdot\nabla^2C(x, t)$$
The discrete-time equivalent is then generating a Gaussian diffusion kernel at each timestep that I convolve with the heightmap to get the next state, but this model tends towards a completely flat equilibrium. I was wondering what the appropriate equations would be to use for this model? Even a pointer towards some relevant resources would be hugely useful. I know that DEM is an alternative and for now I'm trying to avoid it.
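For what it's worth, the flat equilibrium is easy to reproduce with any smoothing kernel (a minimal numpy sketch with a made-up profile):

```python
import numpy as np

h = np.array([0.0, 0.0, 5.0, 0.0, 0.0])   # a narrow tower on the heightmap
k = np.array([0.25, 0.5, 0.25])           # small Gaussian-like diffusion kernel
for _ in range(200):
    h = np.convolve(h, k, mode="same")    # one diffusion timestep
print(h.std())                            # ~0: the pile smears out flat
```

Diffusion has no notion of a maximum stable slope (angle of repose), so some extra constraint is needed to keep a pile shape.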
Thanks!
Edit: From research I understand that the dynamics of granular materials are very complex - I was wondering if there was a simplified modelling approach that necessarily makes a few basic assumptions but enables the kind of modelling that I'm after.
Edit: I used @KGM's answer and it worked great. Computing the necessary Lipschitz function isn't trivial but can be optimised quite well afterwards.
Answer: When working with wet sand I don't see any way to solve this easily.
When working with dry sand, consider that up to a certain steepness $m$, piles are practically completely stable, at least if left untouched. It is only when a pile gets steeper that it "separates" into a part moving around and a pile below standing still, with the moving part smoothing out until the whole pile no longer exceeds the maximum steepness.
This behaviour can be mathematically formulated as follows:
Let $m$ be the max. steepness your pile can have.
Denote by $Ls_m(X)$ the space of 2D functions that are lipschitz-continuous with constant $m$ on the metric space X with values in $\mathbb{R}$.
Denote by $\max F$ the pointwise maximum of the functions in the set $F$.
Then one can formulate the following equation:
Define first:
$h_s(\vec{x},t):=\max \{f\in Ls_m(\mathbb{R}^2)\,|\,\forall\vec{x}'\in\mathbb{R}^2:f(\vec{x}')\leq h(\vec{x}',t)\}$
This is the stable part of the pile, furthermore it is in $Ls_m(\mathbb{R}^2)$ when considering only the position coordinates.
On the contrary, define:
$h_i(\vec{x},t):=h(\vec{x},t)-h_s(\vec{x},t)$
for the unstable remainder.
Then one could assume:
$\frac{d}{dt} h(\vec{x},t)=c\,\Delta h_i(\vec{x},t)$
with a constant $c$ telling you how fast the unstable part smooths out, and $m$ telling you the maximum steepness of a stable pile (usually a little less than $1$).
So $h_i(\vec{x},t)$ crumbles, while the stable part stays the same.
Now computing $h_s(\vec{x},t)$ in two dimensions can be done by increasing the function at all points where it can be increased without violating the Lipschitz condition (while staying below $h$), until there are none left. This is quite expensive, but not as expensive as simulating the particles.
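A minimal sketch of that computation (my own, not from the answer): an equivalent characterisation is $h_s(\vec{x}) = \min_{\vec{y}}\left(h(\vec{y}) + m\,d(\vec{x},\vec{y})\right)$, which on a uniform 1D grid is computed exactly by one forward and one backward sweep (a 1D distance transform); in 2D the analogous sweeps can be iterated to a fixed point.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Sketch: h_s(x) = min_y ( h(y) + m*|x - y| ) is the largest m-Lipschitz
// function below h. Two sweeps over a uniform grid of spacing dx suffice
// in 1D; the function name and signature are my own.
std::vector<double> lipschitz_envelope(const std::vector<double>& h,
                                       double m, double dx) {
    std::vector<double> hs(h);
    for (std::size_t i = 1; i < hs.size(); ++i)        // forward sweep
        hs[i] = std::min(hs[i], hs[i - 1] + m * dx);
    for (std::size_t i = hs.size(); i >= 2; --i)       // backward sweep
        hs[i - 2] = std::min(hs[i - 2], hs[i - 1] + m * dx);
    return hs;
}
```

The unstable remainder is then $h_i = h - h_s$, which the relaxation step diffuses.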
As a first simple model this might suffice. | {
"domain": "physics.stackexchange",
"id": 96060,
"tags": "computational-physics, granulated-materials"
} |
Hashtable Implementation - C | Question: HashTable implementation
The Specifics:
It uses a simple hash function: it receives a string and adds the ASCII values of each character. This can cause problems if the string is long, but for this implementation the string passed to the hash function will have no more than 5 characters.
It has a load factor of 0.5; if reached, the hashtable grows to double its size, rounded to the nearest prime number. To reduce collisions the hashtable sizes are only prime numbers.
The hashtable is made of an array of "buckets" where each "bucket" is a struct storing the user information, in this case, name and nick.
The hashtable keeps track of its size and number of elements.
The hashtable uses linear probing to deal with collisions.
Implemented Methods
Create
Insert
Delete a specific item
Contains
get_Item
Display
This is my first attempt at making a hashtable from scratch; here is the code:
#define initial_size 5
typedef struct user{
char nick[6];
char name[26];
bool occupied;
}user;
typedef struct hashtable{
int size;
int elements;
user **buckets;
}hashtable;
hashtable * create() {
hashtable *htable = (hashtable*)malloc(sizeof(hashtable));
htable->size = initial_size;
htable->elements = 0;
htable->buckets = malloc(initial_size * sizeof(htable->buckets));
for(int i=0; i <initial_size; i++){
htable->buckets[i] = malloc(sizeof(user));
htable->buckets[i]->occupied = false;
}
return htable;
}
int hash(char *string) {
int hashVal = 0;
for( int i = 0; i < strlen(string);i++){
hashVal += (int)string[i];
}
return hashVal;
}
bool load_factor(hashtable *ht){
if(ht->elements >= ht->size/2){
return true;
}
return false;
}
bool isPrime(int num){
if(num==2)
return true;
if(num % 2==0)
return false;
for(int i=3; i*i<num; i+=2){
if(num%i==0)
return false;
}
return true;
}
int Prime_num(int old_size){
for(int i = old_size; i < old_size * 2 +10; i++){
if(isPrime(i) && i >= 2*old_size){
return i;
}
}
}
void insert(hashtable *HashTable, char *name, char *nick){
int hash_value = hash(nick);
int new_position = hash_value % HashTable->size;
if (new_position < 0) new_position += HashTable->size;
int position = new_position;
while (HashTable->buckets[position]->occupied && position != new_position - 1) {
position++;
position %= HashTable->size;
}
strcpy(HashTable->buckets[position]->name, name);
strcpy(HashTable->buckets[position]->nick, nick);
HashTable->buckets[position]->occupied = true;
HashTable->elements++;
}
hashtable *resize_HashTable(hashtable *HashTable){
user *tmp_array = malloc(sizeof(user) * HashTable->size);
int tmp_array_pos = 0;
for(int i = 0; i < HashTable->size; i++){
if(HashTable->buckets[i]->occupied){
strcpy(tmp_array[tmp_array_pos].nick, HashTable->buckets[i]->nick);
strcpy(tmp_array[tmp_array_pos].name, HashTable->buckets[i]->name);
tmp_array_pos++;
}
}
int new_size = Prime_num(HashTable->size);
HashTable = realloc(HashTable, new_size * sizeof(user));
for(int i = 0; i < new_size; i++){
HashTable->buckets[i] = malloc(sizeof(user));
HashTable->buckets[i]->occupied = false;
}
HashTable->size = new_size;
HashTable->elements = 0;
for(int i = 0; i < tmp_array_pos; i++){
insert(HashTable, tmp_array[i].name, tmp_array[i].nick);
}
return HashTable;
}
int contains(hashtable *HashTable, char *nick){
int hash_value = hash(nick);
int position = (hash_value % HashTable->size);
int fixed_p = position;
while(position < HashTable->size){
if(strcmp(HashTable->buckets[position]->nick, nick) == 0)
return position;
else
position++;
if(position == fixed_p)
return -1;
if(position == HashTable->size)
position = 0;
}
}
user *get_item(hashtable *HashTable, char *nick){
user *item = malloc(sizeof(user));
int position = contains(HashTable, nick);
if(position != -1){
item = HashTable->buckets[position];
}
return item;
}
void delete(hashtable *HashTable, char *nick){
int position = contains(HashTable, nick);
if(position != -1){
HashTable->buckets[position]->occupied = false;
}
}
void display(hashtable *HashTable){
for(int i = 0; i<HashTable->size; i++){
printf("i: %d ", i);
if(HashTable->buckets[i]->occupied)
printf("nick: %s, name: %s\n",HashTable->buckets[i]->nick, HashTable->buckets[i]->name);
else
printf("nick: -, name: -\n");
}
}
Note: I posted this some hours ago, but then I noticed the implementation had some big flaws in the resize method, so I deleted the post. The problem should be fixed now.
Latest edit: Since my implementation got "popular" and the presented one has flaws, I will share a link to my GitHub containing the updated version.
Updated code version: https://github.com/MiguelDordio/Data-Structures-Implementations/blob/master/hashtable.c
Answer: I spotted some bugs:
The functions malloc and realloc might fail if there is not enough memory, and in this case they would return NULL, and since the code does not check the result, this would lead to undefined behaviour. It's necessary to check the results of these functions.
Update: a comment was skeptical about this recommendation, noting that on some operating systems, trying to write through a null pointer will result in a segmentation fault, and for some programs this is acceptable behaviour. This might indeed be fine in some cases, but when writing a reusable component like a hash table library, it is important to do better than that. The problems are: (i) Writing through a null pointer does not result in a segmentation fault in all situations, for example the code might be running on an embedded controller using an operating system that does not protect memory at address zero, or on hardware that lacks memory protection at all. The world does not consist only of Windows and Linux. (ii) In the presence of an optimizing compiler, there is no guarantee that you will merely get a segmentation fault. Writing through a null pointer is undefined behaviour, and might lead to data corruption or a security vulnerability instead of (or as well as) a crash. See this article by Harry Lee for some examples. (iii) In many cases it's not acceptable to crash, for example in safety-critical software. There are techniques for coping with memory exhaustion such as refusing requests until memory pressure abates, or garbage collection.
If the hash table is full, and you call insert with a key that hashes to zero (modulo the size of the table) then the while loop will never terminate and the program will hang.
If the hash table is full, and you call insert with a key that hashes to something other than zero (modulo the size of the table) then the new key will overwrite an old key without any kind of warning or error.
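A sketch of a bounded probe (function name hypothetical, not the poster's code) that fixes both failure modes at once: the loop visits each slot at most once, so a full table is reported instead of spinning forever or silently overwriting.

```c
#include <stdbool.h>

/* Probe at most `size` slots starting at the hashed position; return the
   first free index, or -1 when the table is full. */
int find_free_slot(const bool *occupied, int size, int start) {
    for (int i = 0; i < size; i++) {
        int pos = (start + i) % size;
        if (!occupied[pos])
            return pos;
    }
    return -1; /* table full: the caller must resize before inserting */
}
```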
In Prime_num, if old_size is 57, then the function does not find a suitable new size, and so control runs off the end of the function and nothing is returned. This is because the next prime number after 2*old_size (114) is 127, which is more than old_size * 2 + 10 (124). (If you compiled with warnings turned on then your compiler would have complained about this.)
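A sketch of a fix (names hypothetical, not the poster's code): search upward from 2*old_size with no arbitrary cap, so a prime is always found and control never falls off the end. Note also that the original isPrime has a second, unmentioned bug: the loop condition i*i<num should be i*i<=num, otherwise squares of odd primes such as 9 and 25 are reported as prime.

```c
/* i*i <= num closes the perfect-square hole in the original isPrime. */
int is_prime(int num) {
    if (num < 2) return 0;
    if (num == 2) return 1;
    if (num % 2 == 0) return 0;
    for (int i = 3; i * i <= num; i += 2)
        if (num % i == 0) return 0;
    return 1;
}

/* Unbounded upward search: always returns a prime >= 2*old_size. */
int next_prime(int old_size) {
    int candidate = 2 * old_size;
    while (!is_prime(candidate))
        candidate++;
    return candidate;
}
```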
In get_item, there is no way for the caller to tell if the key was found or not. If it was found, then a bucket structure from the hash table is returned. But if it was not found, then a newly allocated bucket structure is returned. There is no way for the caller to tell these apart—the newly allocated bucket structure could contain any data whatsoever.
In get_item, if the key is found, then a bucket structure is allocated and neither freed nor returned to the caller. This is a memory leak.
In resize_HashTable, the tmp_array is allocated but never freed. This is a memory leak.
In resize_HashTable, the original bucket structures are not freed. This is a memory leak.
In contains, there is no check against the occupied flag. This means that if a key is not present in the table then it will be compared against all the keys in the table, including keys that were deleted or never initialized.
The algorithm in delete is incorrect: if the deleted key is in the middle of a chain of keys with identical hashes, then deleting it leaves all the following keys in the chain unfindable.
This is a common mistake! In The Art of Computer Programming, Knuth comments, "Many computer programmers have great faith in algorithms, and they are surprised to find that the obvious way to delete records from a hash table doesn't work." (Vol. III, p. 533.)
If you want to get this right, Knuth gives an algorithm (pp. 533–4) for deletion in a open hash table with linear probing.
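Knuth's algorithm aside, the simpler tombstone alternative can be sketched like this (all names hypothetical, not the poster's code): each bucket carries a three-state flag, and lookups probe past DELETED slots but stop at never-used EMPTY ones, so the rest of a collision chain stays findable.

```c
#include <string.h>

enum slot_state { EMPTY, OCCUPIED, DELETED };

/* Returns the index of `key`, or -1 if absent. `states` and `keys` are
   parallel arrays of length `size`; `start` is hash(key) % size. */
int probe_find(const enum slot_state *states, const char *const *keys,
               int size, int start, const char *key) {
    for (int i = 0; i < size; i++) {
        int pos = (start + i) % size;
        if (states[pos] == EMPTY)
            return -1;                 /* end of chain: key not present */
        if (states[pos] == OCCUPIED && strcmp(keys[pos], key) == 0)
            return pos;
        /* DELETED: tombstone, keep probing */
    }
    return -1;                         /* whole table scanned */
}
```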
But a simpler approach is to mark the deleted key with a flag meaning "there was a key here but it was deleted" which you later remove when growing or shrinking the table. | {
"domain": "codereview.stackexchange",
"id": 30699,
"tags": "c, hash-map"
} |
Global Costmap Implementation | Question:
Greetings,
I would like to ask if there is a way to create a global costmap for the navigation package from a map that doesn't comply with the 3-color standard ROS expects. This means having a map with extra information, such as elevation and terrain type. Ideally I would like to create the global costmap only once in order to calculate a single global path.
Originally posted by AndreasLydakis on ROS Answers with karma: 140 on 2014-12-05
Post score: 1
Answer:
The costmap_2d package only accepts a nav_msgs::OccupancyGrid type of map into the static_map layer of the costmap. I'm assuming the static_map layer is what you are interested in, since you want to 'create' a global costmap and then be able to calculate a single global path for the robot to traverse. However, you must note that nav_msgs::OccupancyGrid is a 2D representation of a grid map (the 3 colors you mentioned: occupied, free, unknown).
One thing you could do is set the voxel grid parameter in your costmap configuration to true, so that your data (pointclouds, LIDAR, etc.) is represented in 3D; just set up your observation sources to whatever types of sensors you use, like in line 38. I am actually not sure myself whether the path planner will plan paths around elevated obstacles (for example, a hanging chandelier), since you only specify the 2D footprint of your robot (width and length) but no height. But now that I think about it, I believe all the 3D obstacle data gets squashed into a 2D layer anyway (can someone verify this please?).
Another option would be using an octomap, which is a 3D occupancy grid. Please note, you won't be able to simply input the 3D occupancy grid into the costmap static layer through the /map topic, since the topic sends nav_msgs::OccupancyGrid messages, as I said above. However, I have found a question here which basically states that the octomap_server can already publish a 2D projection of your 3D occupancy grid, so you can use it with the map_server or the costmap static layer.
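For reference, switching to the voxel representation is a small configuration change. The sketch below uses standard costmap_2d parameter names (map_type, z_voxels, z_resolution, observation_sources), but the values, frame, and topic names are only illustrative placeholders for your own setup:

```yaml
# Illustrative only: adapt topics and sensor blocks to your robot.
global_costmap:
  map_type: voxel          # use the 3D voxel grid instead of the 2D costmap
  z_voxels: 10             # number of voxels in each vertical column
  z_resolution: 0.2        # height of one voxel, in meters
  observation_sources: laser point_cloud
  laser:
    data_type: LaserScan
    topic: /scan
    marking: true
    clearing: true
  point_cloud:
    data_type: PointCloud2
    topic: /points
    marking: true
    clearing: true
```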
Hope this helps
Originally posted by l0g1x with karma: 1526 on 2014-12-06
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 20252,
"tags": "ros, navigation, costmap, 2d-mapping"
} |
Symmetries in QM and QFT --- operator transformation laws | Question: In quantum mechanics, we implement transformations by operators $U$ that map the state $|\psi\rangle$ to the state $U|\psi\rangle$. Alternatively, we could transfer the action of $U$ onto our operators:
$$ O\mapsto U^\dagger OU $$
These operators $U$ ought to constitute a representation of the transformation group of interest. The question we then ask is: which representation shall we choose? I have learnt that we pin down the matrices $U$ that we desire by asking that the position operator, or the position eigenstate, transforms in the appropriately geometric way: if $R$ is some rotation matrix in 3D, then we wish that
$$ U(R)^\dagger \vec{x} U(R) = R\vec{x} $$
I have multiple questions regarding this procedure:
1) It is often argued that $U$ is unitary since 'probabilities must be conserved'. But it is clear from my studies that operators representing Lorentz boosts are not unitary. How are these two facts reconciled?
2) On a related note, we say that a transformation is a symmetry of the system if the Hamiltonian is preserved under the transformation. This makes sense to me in that it is the Hamiltonian that defines the system; if the Hamiltonian is unchanged, then the old solutions to Schrodinger's equation are still solutions. However, it is clear that whatever transformation operator implements a Galilean boost (in non-relativistic QM) or Lorentz boost (in relativistic QFT) is not going to leave the Hamiltonian unchanged. That doesn't mean that boosts aren't symmetries of our system, however. What's going on here? Why exactly is $U^\dagger H U = H$ the condition for $U$ to be a 'symmetry'?
3) Demanding that the operators $U$ satisfy the property (choosing translation for concreteness)
$$ U^\dagger \vec{x} U = \vec{x} + \vec{a}$$
doesn't immediately tell us how the transformation will affect other operators, such as the momentum or angular momentum operators. For the case of a translation by $a$, we find that
$$ U = \exp\left( -i\frac{\vec{a}\cdot\vec{p}}{\hbar}\right) $$
satisfies the above equation (work infinitesimally). This then tells us that $U^\dagger \vec{p} U = \vec{p}$. This is what we want --- translations in space do not affect the momentum. But we haven't told our operators $U$ that they ought to satisfy this property, so the fact that they do seems something of a coincidence. That is, requiring only that conjugating the position operator with $U(R)$ rotates it seems to force every vector operator to be rotated under conjugation with $U(R)$. Is there something fundamental underlying this?
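Working this out with the Baker–Campbell–Hausdorff identity $e^{A}Be^{-A}=B+[A,B]+\tfrac{1}{2!}[A,[A,B]]+\dots$ (with $A=i\vec{a}\cdot\vec{p}/\hbar$, so $U^\dagger = e^{A}$) makes both statements explicit:

$$ U^\dagger x_j U = x_j + \frac{i}{\hbar}[\vec{a}\cdot\vec{p},\, x_j] + \dots = x_j + \frac{i}{\hbar}a_k(-i\hbar\,\delta_{kj}) = x_j + a_j, \qquad U^\dagger p_j U = p_j + \frac{i}{\hbar}[\vec{a}\cdot\vec{p},\, p_j] = p_j, $$

since $[p_k, x_j] = -i\hbar\,\delta_{kj}$ is a c-number (so all higher commutators vanish) and $[\vec{a}\cdot\vec{p},\, p_j]=0$.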
4) In QFT, we wish to implement Lorentz boosts with operators
$$ S = \exp \left( -i\frac{\omega_{\mu \nu} M^{\mu \nu}}{2\hbar}\right) $$
where $\delta^\mu_\nu + \omega^\mu{}_\nu$ is the infinitesimal Lorentz transformation we are representing. Now for $T$ representing a different, not necessarily infinitesimal, Lorentz transformation $\Lambda$, we have
$$T^{-1} S T = T^{-1} \left(1 - i\frac{\omega_{\mu \nu} M^{\mu \nu}}{2\hbar} \right) T$$
But the condition that the matrices $S$ and $T$ must constitute a representation of the Lorentz group implies that the combination on the left hand side ought to equal
$$\exp \left( -i\frac{\Omega_{\mu \nu} M^{\mu \nu}}{2\hbar}\right) $$
where $\Omega_{\mu \nu}$ defines the composite transformation:
$$ \delta^\mu_\nu + \Omega^\mu{}_\nu \equiv (\Lambda^{-1})^\mu{}_\rho(\delta^\rho_\sigma + \omega^\rho{}_\sigma)\Lambda^\sigma{}_\nu $$
Comparing these equations then gives
$$T^{-1} M^{\mu \nu} T = \Lambda^\mu{}_\rho \Lambda^\nu{}_\sigma M^{\rho \sigma}$$
using the raising and lowering properties of the Minkowski metric. My question is similar to that in 3) --- the object $M^{\mu \nu}$ is just a collection of 6 operators, and it is not at all obvious that those indices ought to be tensor indices. And yet somehow --- just by asking for the matrices $S$ to constitute a representation --- we've managed to reproduce the tensor transformation law (admittedly we have a $T^{-1}$ here rather than a $T^\dagger$ --- this relates to my first question). In other words, we've got the correct geometric action of the Lorentz transformation on the operators $M^{\mu \nu}$, even though we never asked for it!
As you can tell, I'm very confused by the whole situation. I apologise if the above discussion sounds a bit muddled, and I would appreciate any help that can be given!
Answer: Here is my answer to some of your questions - this is based purely on my understanding of these concepts and could be wrong.
(1) Whenever Lorentz transformations are a symmetry of any quantum system, they must necessarily be represented by unitary linear transformations on the quantum Hilbert space of the system. Operators representing Lorentz boosts on a relativistic quantum system are therefore unitary as pointed out in one of the links mentioned in the comments.
(2) The condition $ U^{\dagger}HU = H $ cannot, in general, be a necessary and sufficient condition for a $U$ to be a symmetry transformation in relativistic quantum mechanics. One way of seeing this is to consider a one-particle eigenstate of a free Klein-Gordon field $ |p\rangle $ and observe that under a Lorentz transformation, the expectation value $ \langle p|H|p\rangle $ (which gives the energy of this particle) should transform like the $0^{th}$ component of the energy-momentum four-vector: $$ \langle p|U^{\dagger}(\Lambda)HU(\Lambda)|p\rangle = {\Lambda^{0}}_{\mu}\langle p|P^{\mu}|p\rangle $$ and not be left invariant, as $ U^{\dagger}HU = H $ would imply. At the same time, a Lorentz transformation is a symmetry of this system precisely because similar transformation laws are valid for all the four-momentum components - which is equivalent to the scalar $ P_{\mu}P^{\mu} $ remaining invariant under a Lorentz transformation.
The point to note here is therefore that the requirement that a transformation be a symmetry does not, in general, translate into commutation with the Hamiltonian. This happens in the special case of time-independent symmetry transformations (e.g., spatial translations and rotations in non-relativistic QM). The most general criterion is that the action be invariant (up to a constant) under the concerned transformation.
(3) Refer to any good quantum mechanics text (e.g., Sakurai, Modern Quantum Mechanics, chapters 2 and 4); that should answer your question.
(4) The result obtained here (that the generators of the Lorentz group should transform like tensors) is not very surprising. An elementary (though not very general) argument is the following: for any observable which is classically a tensor (like the momentum 4-vector), there must exist a corresponding self-adjoint operator on the quantum mechanical Hilbert space. The expectation value of this operator with respect to any state should necessarily transform like the tensor observable itself. This is a consequence of the basic paradigm of quantum mechanics (expectation values represent physically measurable quantities). Therefore one must have $$ T^{-1}S_{\mu\nu}T={\Lambda_{\mu}}^{\rho} {\Lambda_{\nu}}^{\sigma} S_{\rho\sigma} $$ quite generally for any tensor operator $ S_{\mu\nu} $.
PS : As mentioned at the beginning, this answer is based on my understanding of the concepts of 'symmetry' and 'lorentz covariance' - needless to say, I would appreciate constructive criticism. | {
"domain": "physics.stackexchange",
"id": 20304,
"tags": "quantum-field-theory, symmetry, lie-algebra"
} |
Exertion from swinging on a playground swing | Question: I've read about how by tilting one's body one changes one's center of mass while swinging on a playground swing, and thereby increases the energy of the swing.
But I would like to get a better sense of why one can get into a swing of ~1.5 meters above the ground on the back and front peaks of the swing (elevating an adult body weight that height each time), and do that repeatedly for, say, 10 minutes, and yet not at all feel fatigued--whereas if you were trying to jump up from the ground to that height you would be fatigued after just a few jumps.
I get the sense that the answer is related to the idea that you are "loading" the swing gradually with potential energy on each swing, and that each iteration of that requires only a little effort, but I'm still sort of surprised at how much sustained energy you can generate with so little feeling of exertion (it feels like one can keep swinging high with little effort for an hour, easily), and I thought there might be some subtleties to this that I don't understand.
Answer: Consider an idealized setting where there is essentially no friction or air resistance. Suppose someone gave you a quick push while you are on the swing. You will keep on swinging without having to expend any energy yourself. A rock sitting on the swing would behave in exactly the same way; it wouldn't have to "pump" to keep this idealized swing going up & down and back & forth once it was already moving. (As for how this phenomenon can occur, gravitational potential energy and conservation of mechanical energy would be a good starting search terms.)
Now, starting from rest at the bottom of this idealized swing, you would indeed need to do a bit of pumping to get started on your own. But you would only need to exert an amount of energy equal to your maximum change in potential energy desired (i.e., roughly $mgh$), and you would only need to exert this amount of energy once (well, a little at a time until you reach your desired height, but you wouldn't have to keep exerting this amount of energy each period).
Okay, the actual real world. There are dissipative forces at play (air resistance, friction). Once you are already moving at your desired height/amplitude, the pumping that you do is only to counteract these dissipative forces. For, without these pesky forces, you wouldn't need to pump at all.
So, you aren't pumping/expending energy to make the swing up/down or back/forth, but rather to overcome the work done by dissipative forces. | {
"domain": "physics.stackexchange",
"id": 11126,
"tags": "energy, work, kinematics"
} |
Generic Vector Class using smart_pointers | Question: I have decided to rewrite what I did here, following the suggestions to use smart pointers. I will rewrite the other data structures as well using smart pointers where appropriate.
I just want to see how my code stands now; I am sure there are still areas I need to improve or fix. I again want to thank this community for their effort in evaluating my code. I really appreciate it, and I believe it is slowly but surely taking my coding skills to the next level.
Here is my header file:
#ifndef Vector_h
#define Vector_h
template <class T>
class Vector {
private:
static constexpr int initial_capacity = 100;
// Instance variables
int capacity = 0;
int size = 0;
std::unique_ptr<T[]> data = nullptr;
void deepCopy(const Vector<T> &source) {
capacity = source.size + initial_capacity;
data = std::make_unique<T[]>(capacity);
for (int i = 0; i < source.size; i++) {
data[i] = source.data[i];
}
size = source.size;
}
void expandCapacity() {
auto oldData = std::move(data);
capacity *= 2;
data = std::make_unique<T[]>(capacity);
for (int i = 0; i < size; i++) {
data[i] = oldData[i];
}
}
public:
// Constructors
Vector(); // empty constructor
Vector(int n, const T &value); // constructor
Vector(Vector<T> const &vec); // copy constructor
Vector<T>& operator=(Vector<T> const &rhs); // assignment operator
// Rule of 5
Vector(Vector<T> &&move) noexcept; // move constructor
Vector& operator=(Vector<T> &&move) noexcept; // move assignment operator
~Vector(); // destructor
// Overload operators
T& operator[](int index);
T const& operator[](int index) const;
bool operator==(const Vector<T>&) const;
Vector<T>& operator+=(const Vector<T> &other) {
Vector<T> newValue(size + other.size);
std::copy(this->data, this->data + this->size, newValue.data);
std::copy(other.data, other.data + other.size, newValue.data + this->size);
newValue.swap(*this);
}
friend Vector<T>& operator+(Vector<T> &source1, Vector<T> &source2) {
int n = source1.getSize() + source2.getSize();
Vector<T> newSource(n,0);
for (int i = 0; i < source1.size; i++) {
newSource[i] = source1[i];
}
for (int i = 0; i < source2.size; i++) {
newSource[i + source1.getSize()] = source2[i];
}
return newSource;
}
friend std::ostream& operator<<(std::ostream &str, Vector<T> &data) {
data.display(str);
return str;
}
// Member functions
void swap(Vector<T> &other) noexcept;
void display(std::ostream &str) const;
int getSize() const { return size; }
int getCapacity() const { return capacity; }
bool empty() const { return size == 0; }
void clear() { size = 0; }
T get(int index) const;
void set(int index, const T &value);
void set(int index, T &&value);
void insert(int index, const T &value);
void insert(int index, T &&value);
void remove(int index);
void push_back(const T &value);
void pop_back();
};
template <class T>
Vector<T>::Vector() : capacity(initial_capacity), size(0), data{ new T[capacity] } {}
template <class T>
Vector<T>::Vector(int n, const T &value) {
capacity = (n > initial_capacity) ? n : initial_capacity;
data = std::make_unique<T[]>(capacity);
size = n;
for (int i = 0; i < n; i++) {
data[i] = value;
}
}
template <class T>
Vector<T>::Vector(Vector<T> const &vec) {
deepCopy(vec);
}
template <class T>
Vector<T>::Vector(Vector<T> &&move) noexcept {
move.swap(*this);
}
template <class T>
Vector<T>& Vector<T>::operator=(Vector<T> const &rhs) {
Vector<T> copy(rhs);
swap(copy);
return *this;
}
template <class T>
Vector<T>& Vector<T>::operator=(Vector<T> &&move) noexcept {
move.swap(*this);
return *this;
}
template <class T>
Vector<T>::~Vector() {
while (!empty()) {
pop_back();
}
}
template <class T>
T& Vector<T>::operator[](int index) {
return data[index];
}
template <class T>
T const& Vector<T>::operator[](int index) const {
return data[index];
}
template <class T>
bool Vector<T>::operator==(const Vector<T> &rhs) const {
if (getSize() != rhs.getSize()) {
return false;
}
for (int i = 0; i < getSize(); i++) {
if (data[i] != rhs[i]) {
return false;
}
}
return true;
}
template <class T>
void Vector<T>::swap(Vector<T> &other) noexcept {
using std::swap;
swap(capacity, other.capacity);
swap(size, other.size);
swap(data, other.data);
}
template <class T>
void Vector<T>::display(std::ostream &str) const {
for (int i = 0; i < size; i++) {
str << data[i] << "\t";
}
str << "\n";
}
template <class T>
T Vector<T>::get(int index) const {
if (index < 0 || index >= size) {
throw std::out_of_range("[]: index out of range.");
}
return data[index];
}
template <class T>
void Vector<T>::set(int index, const T& value) {
if (index < 0 || index >= size) {
throw std::invalid_argument("set: index out of range");
}
data[index] = value;
}
template <class T>
void Vector<T>::set(int index, T&& value) {
if (index < 0 || index >= size) {
throw std::invalid_argument("set: index out of range");
}
data[index] = std::move(value);
}
template <class T>
void Vector<T>::insert(int index, const T& value) {
if (size == capacity) {
expandCapacity();
}
for (int i = size; i > index; i--) {
data[i] = data[i - 1];
}
data[index] = value;
size++;
}
template <class T>
void Vector<T>::insert(int index, T&& value) {
if (size == capacity) {
expandCapacity();
}
if (index < 0 || index >= size) {
throw std::invalid_argument("insert: index out of range");
}
for (int i = size; i > index; i--) {
data[i] = data[i - 1];
}
data[index] = std::move(value);
size++;
}
template <class T>
void Vector<T>::remove(int index) {
if (index < 0 || index >= size) {
throw std::invalid_argument("insert: index out of range");
}
for (int i = index; i < size - 1; i++) {
data[i] = data[i + 1];
}
size--;
}
template<class T>
void Vector<T>::push_back(const T& value) {
insert(size, value);
}
template<class T>
void Vector<T>::pop_back() {
remove(size - 1);
}
#endif /* Vector_h */
Here is the main.cpp file:
#include <algorithm>
#include <initializer_list>
#include <iostream>
#include <cassert>
#include <ostream>
#include "Vector.h"
int main() {
///////////////////////////////////////////////////////////////////////
///////////////////////////// VECTOR //////////////////////////////////
///////////////////////////////////////////////////////////////////////
Vector<int> nullVector; // Declare an empty Vector
assert(nullVector.getSize() == 0); // Make sure its size is 0
assert(nullVector.empty()); // Make sure the vector is empty
assert(nullVector.getCapacity() == 100); // Make sure its capacity is greater than 0
Vector<int> source(20, 0); // Declare a 20-element zero Vector
assert(source.getSize() == 20); // Make sure its size is 20
for (int i = 0; i < source.getSize(); i++) {
source.set(i, i);
assert(source.get(i) == i); // Make sure the i-th element has value i
}
source.remove(15); // Remove the 15th element
assert(source[15] == 16); // Make sure the 15th element has value 16
source.insert(15, 15); // Insert value 15 at the index 15
assert(source[15] == 15); // Make sure the 15th element has value 15
source.pop_back(); // Remove the last element
assert(source.getSize() == 19); // Make sure its size is 19
source.push_back(19); // Insert value 20 at the bottom
assert(source.getSize() == 20); // Make sure its size is 20
assert(source.get(19) == 19); // Make sure the 19th element has value 19
Vector<int> copyVector(source); // Declare a Vector equal to source
for (int i = 0; i < source.getSize(); i++) {
assert(copyVector[i] == source[i]); // Make sure copyVector equal to source
}
std::cout << "source: \n" << source; // Print out source
std::cout << "copyVector: \n" << copyVector; // Print out copyVector
//Vector<int> newSource = source + copyVector; // Concatenate source and copyVector
//std::cout << "newSource: \n" << newSource; // Print out source + copyVector
source.clear(); // Clear source
assert(source.getSize() == 0); // Make sure its size is 0
std::cout << "Vector unit test succeeded." << std::endl;
std::cin.get();
}
Answer: #include every necessary header
or your code won't compile and is technically unfit for review here. So don't forget <memory> and <ostream> in your .h.
Memory management
That's what's most disappointing in an otherwise quite well written code. Memory management is the core of a vector class. Using smart pointers is a good practice, of course, but only solves part of the problem: it prevents memory leaks, but not other mismanagements of this rare resource.
Why would you initialize all your vectors with room for at least 100 elements? Your own example vectors contain only 20 elements, and it isn't rare at all to find smaller vectors. They're the most basic, most often used container in C++, so you can't waste that much memory in empty vectors. If you think I'm doing too much about this, just consider this:
std::vector<std::vector<std::vector<float>>> temperatures; // occupies 3*sizeof(void*) bytes
Vector<Vector<Vector<float>>> temperatures; // occupies more than 100^3 * sizeof(float)
I'm not saying it's good to have nested vectors, I'm just saying it's something you should expect.
In the same way, don't add this initial_capacity to the capacity of the vector you're copying (in deepCopy), it's a waste of memory. For instance, let's say you want to compute the number of unique elements in your vector, you could write a function like this one:
int number_of_unique_elements(Vector v);
where the vector is taken by value because you'll have to reorder it when looking for duplicates and don't want to modify the original.
Constructors should be more coherent
I find a bit weird to assign 0 as a default value to the capacity variable:
template <class T>
class Vector {
// ...
int capacity = 0;
// ...
};
only to set it at initial_capacity in your default constructor:
template <class T>
Vector<T>::Vector() : capacity(initial_capacity), size(0), data{ new T[capacity] }
It is misleading and even a bit worrisome, since the 0 initialization is followed by expandCapacity, which multiplies the previous capacity by 2, and doubling zero still yields zero.
Use <algorithm> whenever you can
There are more than a few functions in which you use hand-written for loops where std::copy would do the job. Just don't.
Separate memory management and operations on data
Your interface doesn't include meaningful memory management methods (reserve, shrink_to_fit, resize) but is cluttered by external, and sometimes obscure operations.
operator<< has nothing to do here. Provide a way to access your data, and enjoy the power of algorithms:
Vector<int> data;
// fill data
std::copy(data.begin(), data.end(), std::ostream_iterator<int>(cout, ", "));
operator+ is worse, because it has no obvious meaning. It could mean concatenation as well as element-wise addition. Concatenation isn't a problem from outside either:
Vector<float> result{src_vec1};
result.reserve(result.size() + src_vec2.size());
result.insert(result.end(), src_vec2.begin(), src_vec2.end());
Providing iterators is the best way to offer an access into the vector's data. If you don't want to, or at least not now, just provide a pointer to your data.
Respect conventions
get and set aren't part of the vector vocabulary. Use at instead, and make it return a reference to the element!
Use specific exceptions
std::invalid_argument isn't very explicit, all the more when std::out_of_range is available.
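A minimal sketch of what such an at could look like (the class and member names here are my own, not the reviewed code): it returns a reference, so it can be both read and assigned, and it throws std::out_of_range for bad indices.

```cpp
#include <cassert>
#include <stdexcept>

template <class T>
class Checked {
public:
    explicit Checked(int n) : size_(n), data_(new T[n]()) {}
    ~Checked() { delete[] data_; }
    Checked(const Checked&) = delete;            // copying omitted for brevity
    Checked& operator=(const Checked&) = delete;
    // Bounds-checked access: returns a reference, throws on bad indices.
    T& at(int i) {
        if (i < 0 || i >= size_) throw std::out_of_range("index out of range");
        return data_[i];
    }
private:
    int size_;
    T* data_;
};
```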
Conclusion
Code reviews are more about what is wrong, but your code is quite good. I believe that you should aim higher though, and think more deeply about memory management. There are two main aspects about this:
how much memory to allocate: how much memory in an empty vector? how much more memory in a full vector? You've already thought of this even if you have to refine your approach.
when do I initialize the allocated memory? A new[] expression not only allocates memory for n objects but also initializes them. That isn't necessarily optimal. You may rather allocate uninitialized storage and construct elements only when necessary. You may even leverage this to construct new elements directly inside your vector from their constructors' arguments.
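A bare-bones sketch of that idea (illustrative only; copying and exception safety are omitted): allocate raw, uninitialized storage up front with operator new, then placement-new elements only as they are added, and destroy only what was actually constructed.

```cpp
#include <cassert>
#include <new>
#include <string>

template <class T>
class RawBuffer {
public:
    explicit RawBuffer(int capacity)
        : data_(static_cast<T*>(::operator new(sizeof(T) * capacity))), size_(0) {}
    ~RawBuffer() {
        for (int i = 0; i < size_; ++i) data_[i].~T(); // destroy only what was built
        ::operator delete(data_);
    }
    RawBuffer(const RawBuffer&) = delete;              // copying omitted for brevity
    RawBuffer& operator=(const RawBuffer&) = delete;
    void push_back(const T& value) {
        ::new (data_ + size_) T(value);                // construct in place
        ++size_;
    }
    T& operator[](int i) { return data_[i]; }
    int size() const { return size_; }
private:
    T* data_;
    int size_;
};
```

Unlike new T[capacity], no default constructors run at allocation time, which matters for element types with expensive construction.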
There are also some other optimizations you could consider, but they can be more complex to implement.
The first that comes to my mind is to allow for custom allocation functions: like every std::container, add Allocator to your template arguments and rely on the allocator interface to allocate memory.
Another one is to provide a Vector<bool> specialization.
Still another one would be to implement a "small Vector<char>" optimization: memory for small Vector<char>s could be allocated on the stack rather than on the heap.
"domain": "codereview.stackexchange",
"id": 31580,
"tags": "c++, pointers, vectors"
} |
Simple Random Password Generator in Python | Question: I've made a password generator that works fine. It takes alphabetic, numeric, and punctuation characters and randomly generates an 8-16 character password.
import sys
import string
import random
def PasswordGenerator():
Name = input("What is your name? ")
if Name.isalpha():
GeneratePassword = input(Name + " would you like to generate a random password? ")
YesOptions = ["Yes", "yes", "Y", "y"]
NoOptions = ["No", "no", "N", "n"]
PasswordCharacters = string.ascii_letters + string.digits + string.punctuation
while GeneratePassword in YesOptions:
Password = "".join(random.choice(PasswordCharacters) for i in range(random.randint(8, 16)))
print(Password)
GeneratePasswordAgain = input("Would you like to generate another random password? ")
while GeneratePasswordAgain in YesOptions:
Password = "".join(random.choice(PasswordCharacters) for i in range(random.randint(8, 16)))
print(Password)
GeneratePasswordAgain = input("Would you like to generate another random password? ")
break
while GeneratePasswordAgain in NoOptions:
print("Good bye!")
sys.exit()
break
while GeneratePasswordAgain not in YesOptions or NoOptions:
print("Not a valid response! Try again.")
GeneratePasswordAgain = input("Would you like to generate another random password? ")
break
while GeneratePassword in NoOptions:
print("Good bye!")
sys.exit()
while GeneratePassword not in YesOptions or NoOptions:
print("Not a valid response! Try again.")
PasswordGenerator()
while GeneratePassword in YesOptions:
Password = "".join(random.choice(PasswordCharacters) for i in range(random.randint(8, 16)))
print(Password)
GeneratePasswordAgain = input("Would you like to generate another random password? ")
while GeneratePasswordAgain in YesOptions:
Password = "".join(random.choice(PasswordCharacters) for i in range(random.randint(8, 16)))
print(Password)
break
GeneratePasswordAgain = input("Would you like to generate another random password? ")
while GeneratePasswordAgain in NoOptions:
print("Good bye!")
sys.exit()
break
while GeneratePasswordAgain not in YesOptions or NoOptions:
print("Not a valid response! Try again.")
GeneratePasswordAgain = input("Would you like to generate another random password? ")
break
while GeneratePassword in NoOptions:
print("Good bye!")
sys.exit()
while Name is not Name.isalpha:
print("Not a valid response! Try again.")
PasswordGenerator()
PasswordGenerator()
Answer: I found a bug. If the initial response to "Do you want to..." is neither yes-like nor no-like, it prompts for the username again. Also, you have large sections of dead code that will never execute.
Avoid duplicating code.
Several prompts are exactly the same as each other, even with the same logic after them. Refactor the code so that each possibility only occurs once.
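For example, all the repeated yes/no prompting could go through one helper. This is only a sketch; the function name and the read_input parameter (which makes the helper testable without a real terminal) are my own additions, not code from the question.

```python
def ask_yes_no(prompt, read_input=input):
    """Keep prompting until the reply is yes-like or no-like; return True for yes."""
    while True:
        reply = read_input(prompt).strip().lower()
        if reply in ("yes", "y"):
            return True
        if reply in ("no", "n"):
            return False
        print("Not a valid response! Try again.")
```

With this, the main flow collapses to a single loop such as `while ask_yes_no("Generate another? "): print(password())`, with no duplicated branches.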
Avoid duplicating variables
The GeneratePassword and GeneratePasswordAgain variables serve the same purpose (to get user input and determine whether to continue) and should be unified.
Keep the code reusable.
Avoid calls like sys.exit() that prevent this snippet of code from being used elsewhere.
Use the language features properly.
You know how to use while loops -- so why are you making multiple unnecessary recursive calls, each of which performs the same task as a while loop?
Separate logic and user-interface. Separate the code into modules.
Write the function that generates the password and does nothing else. Something like
def RandomPassword(passwordCharacters, minLetters, maxLetters):
    return "".join(random.choice(passwordCharacters) for i in range(random.randint(minLetters, maxLetters)))

def DefaultRandomPassword():
    passwordCharacters = string.ascii_letters + string.digits + string.punctuation
    return RandomPassword(passwordCharacters, 8, 16)
Write another function that takes a prompt, displays it, waits for the user to type something, and then does it again if it was not a valid yes-no answer. Then return either yes or no. Call this function multiple times. | {
"domain": "codereview.stackexchange",
"id": 28494,
"tags": "python, python-3.x, random"
} |
Combining Two CSV's in Jupyter Notebook | Question:
I want to combine two CSV files based on Column1, so that when combined, each element of Column1 of both CSVs matches row by row. Please also suggest how to reorder Column1 of one CSV according to the other.
In Jupyter Notebook
Thank You!
Answer: You can try the code below to merge the two files:
import pandas as pd
df1 = pd.read_csv('first.csv')
df2 = pd.read_csv('second.csv')
# The default inner merge keeps only rows whose Column1 values appear in both
# files, and the result follows df1's row order, so df2 is effectively
# reordered to match df1.
df = df1.merge(df2, on='Column1') | {
"domain": "datascience.stackexchange",
"id": 8299,
"tags": "dataset, data-mining, data, data-cleaning, data-science-model"
} |
What is the connection between the non-reversibility of the decay of unstable nuclei (as Uranium, Plutonium) and the 2nd principle of thermodynamics? | Question: The 2nd principle of the thermodynamics says that if a system (e.g. an ideal gas) is left undisturbed, its number of microscopic states only increases. This is a statement of irreversibility of the process that the system undergoes. For instance, if we have two separated chambers, one with cold gas and one with hot gas of the same type, and bring the chambers into contact (by some pipe), the temperature of the two chambers will become equal after some time. It will not happen that the cold gas gets colder and the hot gas hotter.
In nuclear decay, one particle (or more) gets out of an unstable nucleus instead of remaining in it forever. And the process is irreversible: the particle (or particles) can be sent back (by some mirror), but the unstable nucleus won't be restored as it was before the decay, because while part of the emitted wave returns to the nucleus trying to restore the parent nucleus, this nucleus keeps on emitting.
The question is, WHAT pushes a particle (e.g. an alpha particle) out of the parent nucleus? WHY doesn't the alpha remain forever in the parent nucleus, or, more generally, inside the volume delimited by the potential barrier?
Is the irreversibility of the nuclear decay connected with the 2nd principle of the thermodynamics? Or, is there some similarity between them?
Does the configuration of daughter nucleus + emitted particle represent a system with MORE states? (This idea seems implausible because, quantum-mechanically, this pair is described by ONE SINGLE composite quantum state as long as decoherence is avoided - e.g. by keeping the system in vacuum.)
Alternatively, do the decay and the 2nd principle of the thermodynamics stem from a common, more fundamental principle?
Answer: It is true that classical thermodynamic equations emerge from statistical mechanics. And that the increase in entropy depends on the increase in the number of microstates.
Decays also increase the number of microstates. They are irreversible because decay releases energy, and the thermodynamic system cannot deliver enough energy and the right combination of particles to get back to the original state, just as it cannot return to any particular original microstate. If a uranium nucleus breaks up, there is a probability that the right fragments with the correct energy collide and bind together again, if the correct quantized energy is supplied to the fragments by fortuitous collisions, but that probability is very, very small.
The question is, WHAT pushes a particle (e.g. an alpha particle) out of the parent nucleus? WHY doesn't the alpha remain forever in the parent nucleus?
Nuclear decay happens because nuclei are bound by the strong force, but there is the repulsive force of the protons, which is only balanced by the neutrons along the diagonal in this plot of isotopes. The higher the number of protons, the more neutrons are proportionately needed for binding the isotope. Too many neutrons give the instability of the neutron (it decays when free) a chance to appear as a probability of decay. Decay and fission release binding energy, because the system is no longer quantum-mechanically bound and it breaks into fragments, creating more microstates.
Is the irreversibility of the nuclear decay connected with the 2nd principle of thermodynamics? Or, is there some similarity between them? Does the configuration of daughter nucleus + emitted particle represent a system with MORE states? (Quantum-mechanically this system is described by ONE SINGLE quantum state.)
This system was described by one quantum mechanical state function before it decayed. After it decayed it is no longer in a single quantum state once the fragments interact in the heat bath of the environment.
Or, do the decay and the 2nd principle of the thermodynamics stem from a common, more fundamental principle?
The decay happens because the system has a quantum mechanical probability of decaying, a half life. It is computable with quantum mechanical models, not thermodynamic models (i.e. statistical mechanics). Potentials, energy levels and the Pauli exclusion principle enter: the whole artillery. Thermodynamics is an emergent phenomenon from the underlying quantum mechanical framework, certainly for materials with nuclear decays but also in general, as atoms and molecules are also quantum mechanical entities.
Edit after rereading next day
When one continues studies in disciplines that depend on physics, one should keep in mind that in describing natural phenomena the appropriate framework should be used, and that there exists a hierarchy of physical frameworks, starting from the microscopic range of elementary particles and going to nuclei, to atoms/molecules, to solid/liquid/gas states. Each framework has its region of validity, models and computational tools.
Mathematically, as one rises in the hierarchy, at the confluence of two frameworks the larger-scale framework emerges. It is a many-body result of the fact that everything is composed of elementary particles and their bindings. Thus thermodynamics is an emergent theory, and the second law is a law for large dimensions with respect to the quantum mechanical framework on which it is founded in nature. It emerges from the probabilistic nature of quantum mechanics.
This became very clear with the black body problem and its solution, that thermodynamics with classical statistical mechanics was inadequate to describe the situation.
In cosmic dimensions, the force of gravity is postulated, and the classical theories described the motions on earth and of planets etc. very well; the present view is that the highest framework is General Relativity, which in the limiting case turns into the Newtonian gravitational theory. So in this case, Newton's laws are dependent on the laws of General Relativity, from the large frame to the lower one. Thermodynamics is not such a case. | {
"domain": "physics.stackexchange",
"id": 17472,
"tags": "thermodynamics, statistical-mechanics, entropy, radiation, reversibility"
} |
Is there an optical filter which exclusively passes right or left circularly polarized light? | Question: Is there a circular analog to the linear polarizer filter, which can be configured to pass only right (left) and block left (right) circularly polarized light?
Answer: You have just described the 3D glasses handed out in movie theatres.
So, the answer is "yes." :-) . To be more helpful, here's a quote from the Wikipedia page
As shown in the figure, the analyzing filters are constructed of a quarter-wave plate (QWP) and a linearly polarized filter (LPF). The QWP always transforms circularly polarized light into linearly polarized light. However, the angle of polarization of the linearly polarized light produced by a QWP depends on the handedness of the circularly polarized light entering the QWP. In the illustration, the left-handed circularly polarized light entering the analyzing filter is transformed by the QWP into linearly polarized light which has its direction of polarization along the transmission axis of the LPF. Therefore, in this case the light passes through the LPF. In contrast, right-handed circularly polarized light would have been transformed into linearly polarized light that had its direction of polarization along the absorbing axis of the LPF, which is at right angles to the transmission axis, and it would have therefore been blocked. | {
"domain": "physics.stackexchange",
"id": 29289,
"tags": "optics"
} |
Introductory Physics Video Courseware Recommendations | Question: I'm looking for something to supplement my Physics II class. Last year I started using these video lectures to supplement my Calculus class and it helped tremendously.
I also turned to this educational software when I was stuck in Physics I.
I can't seem to turn up anything similar that deals with Physics II.
I'm pretty much looking for something that I can watch repeatedly until it sinks in. Perhaps you know of something?
Answer: Walter Lewin's lectures (dead link).
I have no idea of the copyright status of the following transcodes on YouTube. So I'll let them worry about it. It was originally MIT OpenCourseware, FWIW. Prof. Lewin has moved much of the original content there, so seems okay for the SE User Agreement.
Lectures by Walter Lewin: An Introduction to a Framework of Beauty
1.Introduction | 8.01 Classical Mechanics, Fall 1999
2.Introduction | 8.02 Electricity and Magnetism, Spring 2002
3.Introduction | 8.03 Physics III: Vibrations and Waves, Fall 2004
CourseWork
8.01 Homework, Exams, Solutions & Lecture Notes
8.02 Homework, Exams, Solutions & Lecture Notes
8.03 Homework, Exams, Solutions & Lecture Notes | {
"domain": "physics.stackexchange",
"id": 609,
"tags": "electromagnetism, classical-mechanics, waves, resource-recommendations, education"
} |
Michaelis-Menten rate law for enzyme which catalyzes two reactions: steady state? | Question: Suppose an enzyme $\ce{E}$ can catalyze two reactions:
\begin{align}
\ce{S1 + E &<=> S1E -> P1 + E} \tag{R1} \\
\ce{S2 + E &<=> S2E -> P2 + E} \tag{R2}
\end{align}
I want to derive a rate law. Can I assume that
\begin{align}
\frac{d[\ce{S1E}]}{dt} &= 0 \tag{1}\\
\frac{d[\ce{S2E}]}{dt} &= 0 \tag{2}
\end{align}
like in the derivation of the Michaelis-Menten rate law?
Answer:
In the steady-state treatment, the intermediate concentration [ES] is assumed to remain at a small, constant value. So in this case that holds only if k2 >> k1 (and similarly for the second reaction). ES is then a reactive intermediate, and there is no stable equilibrium between S, E and P.
\begin{align}
\frac{d[\ce{S1E}]}{dt} &= \ce{k1}[\ce{S1}][\ce{E}] - k_{-1} [\ce{S1E}] - \ce{k2}[\ce{S1E}] = 0\\
\end{align}
Therefore:
\begin{align}
[\ce{S1E}] &= \frac{\ce{k1}}{k_{-1}+\ce{k2}}[\ce{S1}][\ce{E}] = K_a[\ce{S1}][\ce{E}]
\\
\end{align}
Similarly for the second reaction:
\begin{align}
[\ce{S2E}] &= \frac{\ce{k3}}{k_{-3}+\ce{k4}}[\ce{S2}][\ce{E}] = K_b[\ce{S2}][\ce{E}]
\\
\end{align}
The enzyme E is involved in both reactions and its total concentration (bound and unbound) is constant. This total concentration of enzyme $[E]_{0}$ is equivalent to the concentration of the free enzyme before adding the substrates. The concentration of the free enzyme at a certain time t is [E]:
\begin{align}
[E]_{0} &= [E] + [\ce{S1E}] + [\ce{S2E}]
\\
\end{align}
If you substitute $[\ce{S1E}]$ and $[\ce{S2E}]$ with the previous expressions, then:
\begin{align}
[E] &= \frac{[E]_{0}}{1 + K_a[\ce{S1}]+K_b[\ce{S2}]}
\\
\end{align}
Eventually:
\begin{align}
\frac{d[P_{1}]}{dt} = k_2[\ce{S1E}] = k_2 K_a[\ce{S1}][\ce{E}] = K_p[\ce{S1}][\ce{E}] = K_p[\ce{S1}]\frac{[E]_{0}}{1 + K_a[\ce{S1}]+K_b[\ce{S2}]}
\\
\end{align}
The rate of production of $P_2$ follows similarly. | {
"domain": "chemistry.stackexchange",
"id": 12412,
"tags": "kinetics, enzyme-kinetics"
} |
Azure cache getoradd (without locking) | Question: Inspired by the non-locking implementation proposed in this post, the following code is an attempt to do (roughly) the same using the Azure cache. As I'm totally green on the Azure cache, I'd appreciate input on best practices or pitfalls. I should add that this method is assumed to be the only interaction with the Azure cache in the application.
public class AzureCacheWrapper
{
private readonly DataCache cache;
public AzureCacheWrapper(DataCacheFactory cacheFactory)
{
this.cache = cacheFactory.GetDefaultCache();
}
public T GetOrAdd<T>(string key, Func<T> factoryMethod)
{
var factoryMethodAsLazy = new Lazy<T>(factoryMethod);
var cachedFactory = cache.Get(key) as Lazy<T>;
if (cachedFactory == null)
{
try
{
cache.Add(key, factoryMethodAsLazy);
cachedFactory = factoryMethodAsLazy;
}
catch (DataCacheException ex)
{
if (ex.ErrorCode != DataCacheErrorCode.KeyAlreadyExists)
{
throw;
}
// We know for sure that the key exists at this point:
// Two concurrent callers tried to get it, and this thread
// happened to be deadlock victim.
cachedFactory = (Lazy<T>)cache.Get(key);
}
}
return cachedFactory.Value;
}
}
Answer: I do not have an Azure account, so I cannot test this, but does this work in general? The linked question uses a MemoryCache which is in-memory and probably references the added objects directly. As Azure cache is a distributed system, that will probably not work and the values need to be serialized in some way to be passed to the remote cache.
As I was writing this, I stumbled upon this SO question which clearly shows that Lazy<T> is serializable and thus can probably be stored in Azure, but the value will be evaluated on serialization. Basically, using Lazy<T> does not give you any advantage here, because of the aforementioned side-effect of putting it in the cache.
// We know for sure that the key exists at this point:
// Two concurrent callers tried to get it, and this thread
// happened to be deadlock victim.
That state is probably not true, because the cache item with that key might have expired between the exception and the call to Get. Granted, that this is relatively unlikely and heavily depends on how small the cache timeout is, but it could happen. This scenario could be avoided if you retried the whole method upon receiving a DataCacheErrorCode.KeyAlreadyExists error code. | {
"domain": "codereview.stackexchange",
"id": 9003,
"tags": "c#, cache, lazy, azure"
} |
Local Map (navigation stack) not showing obstacle | Question:
Hi,
Can someone shed some light on how to display laserscan obstacles for use in the Navigation Stack (AMCL+MOVE_BASE, Ubuntu 18.04, Melodic)?
I spent lots of time changing configurations and searching for answers, but I can't figure out how to take into account obstacles detected by the laserscan, as in this image:
https://drive.google.com/open?id=1WxaY8cegTQQEh0aSPu24EqrngDP1EraU
In the image there is an obstacle (pointed in RED) as detected by laserscan but is not marked as obstacle by local map.
However, if I change the common costmap file and remove the "plugins" section, going back to pre-Hydro parameters, the obstacle is shown as in the image below.
https://drive.google.com/open?id=1rubBdsOHd7ypbb_39yodAKZWbizF9VXn
Here are my relevant config files:
costmap_common_params.yaml
obstacle_range: 2.5
raytrace_range: 3.0
footprint: [[-0.2,-0.2],[-0.2,0.2], [0.2, 0.2], [0.2,-0.2]]
inflation_radius: 0.30 # max. distance from an obstacle at which costs are incurred for planning paths.
cost_scaling_factor: 50 # exponential rate at which the obstacle cost drops off (default 10)
observation_sources: scan
scan: {data_type: LaserScan, sensor_frame: base_link, topic: /zed/scan, marking: true, clearing: true, min_obstacle_height: 0.0, max_obstacle_height: 0.50}
plugins:
- {name: static_map, type: "costmap_2d::StaticLayer"}
- {name: obstacle_2d_layer, type: "costmap_2d::ObstacleLayer"}
- {name: inflation, type: "costmap_2d::InflationLayer"}
global_costmap_params.yaml
global_costmap:
global_frame: map
robot_base_frame: base_link
update_frequency: 1.0
#static_map: true
publish_frequency: 0.5
transform_tolerance: 1.0
#resolution: 0.010
local_costmap_params.yaml
local_costmap:
global_frame: odom # odom DGPP
robot_base_frame: base_link
update_frequency: 2.0
publish_frequency: 2.0
#static_map: false
rolling_window: true
width: 3.0
height: 3.0
resolution: 0.05
transform_tolerance: 1.0
As mentioned, if I just comment out the plugins item with the three entries in the costmap_common file, it shows the obstacle. It also shows some errors (not important) in the terminal, but shows the obstacle as in the second image above.
I am using TrajectoryPlannerROS (not DWA).
Can someone shed some light on how to display laserscan obstacles for use in the Navigation Stack (AMCL+MOVE_BASE)?
Thank you,
Originally posted by dpetrini on ROS Answers with karma: 23 on 2020-05-14
Post score: 0
Answer:
I will answer my own question:
After continuing to look for answers and trying many combinations, I discovered that it is all about syntax.
It seems different versions (using the "plugins:" directive or not) require the parameters to be written in the proper manner.
The solution to my issue above was to write the costmap_common_params.yaml file as below:
obstacle_range: 2.5
raytrace_range: 3.0
footprint: [[-0.2,-0.2],[-0.2,0.2], [0.2, 0.2], [0.2,-0.2]]
inflation:
  inflation_radius: 0.3
  cost_scaling_factor: 50 # exponential rate at which the obstacle cost drops off (default 10)
obstacle_2d_layer:
  observation_sources: scan
  scan: {data_type: LaserScan, sensor_frame: zed_left_camera_frame, topic: /zed/scan, marking: true, clearing: true, min_obstacle_height: 0.0, max_obstacle_height: 0.5}
plugins:
- {name: static_map, type: "costmap_2d::StaticLayer"}
- {name: obstacle_2d_layer, type: "costmap_2d::ObstacleLayer"}
- {name: inflation, type: "costmap_2d::InflationLayer"}
As you can see, the "inflation" and "obstacle_2d_layer" keys (matching the "name" entries inside plugins) need to be written like that, with their parameters nested under them.
With this change, obstacles showed up like a charm.
Originally posted by dpetrini with karma: 23 on 2020-05-14
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 34955,
"tags": "ros, navigation, ros-melodic, 2dcostmap, base-local-planner"
} |
Quickselect algorithm in Swift | Question: I recently answered a question
on Stack Overflow about finding the k-largest element in an array, and
present my implementation of the Quickselect algorithm in Swift for review.
This is essentially the iterative version described in
Wikipedia: Quickselect, only that the
partitioning does not move the pivot element to the front of the second partition.
That saves some swap operations but requires an additional check when updating the lower bound.
The language is Swift 3, compiled with Xcode 8.1.
extension Array where Element: Comparable {
public func kSmallest(_ k: Int) -> Element {
precondition(1 <= k && k <= count, "k must be in the range 1...count")
var a = self // A mutable copy.
var low = startIndex
var high = endIndex
while high - low > 1 {
// Choose random pivot element:
let pivotElement = a[low + Int(arc4random_uniform(UInt32(high - low)))]
// Partition elements such that:
// a[i] < pivotElement for low <= i < pivotIndex,
// a[i] >= pivotElement for pivotIndex <= i < high.
var pivotIndex = low
while a[pivotIndex] < pivotElement {
pivotIndex += 1
}
for i in pivotIndex+1 ..< high {
if a[i] < pivotElement {
swap(&a[pivotIndex], &a[i])
pivotIndex += 1
}
}
if k <= pivotIndex {
// k-smallest element is in the first partition:
high = pivotIndex
} else if k == pivotIndex + 1 {
// Pivot element is the k-smallest:
return pivotElement
} else {
// k-smallest element is in the second partition
low = pivotIndex
if a[low] == pivotElement {
low += 1
}
}
}
// Only single candidate left:
return a[low]
}
public func kLargest(_ k: Int) -> Element {
return kSmallest(count + 1 - k)
}
}
Examples:
let a = [ 9, 7, 6, 3, 4, 2, 5, 1, 8 ]
for k in 1 ... a.count {
print(a.kSmallest(k))
}
// 1 2 3 4 5 6 7 8 9
let b = [ "b", "a", "c", "h" ]
print(b.kLargest(2))
// "c"
Feedback on all aspects of the code is welcome, such as (but not limited to):
Possible improvements (speed, clarity, swiftyness, ...).
Naming. In particular: what would be a better name for kSmallest(_ k: Int)
in consideration of the
Swift API Design Guidelines?
The check if a[low] == pivotElement looks artificial, but without that
an infinite loop can occur, e.g. for an array with all elements equal.
Is there a better solution?
Remark: To make this code compile with Swift 4 (or later), just replace the line
swap(&a[pivotIndex], &a[i])
with
a.swapAt(pivotIndex, i)
Answer: Let's start by updating this line to Swift 4.2 :
let pivotElement = a[Int.random(in: low..<high)]
In my tests, Int.random(in:) is way faster than Int(arc4random_uniform), and the comparisons won't take that into consideration.
Efficiency
1st improvement
This is only a small change to the algorithm, but it still gives a performance gain: rearrange the conditions from the most probable to the least probable, so the common cases are checked first:
if k <= pivotIndex {
// k-smallest element is in the first partition:
high = pivotIndex
} else if k > pivotIndex + 1 {
// k-smallest element is in the second partition
low = pivotIndex
if a[low] == pivotElement {
low += 1
}
} else {
// Pivot element is the k-smallest:
return pivotElement
}
The first two conditions are equiprobable. The least probable case is left for last.
Benchmarks
The benchmarking code is the following:
let a = Array(1...10_000).shuffled()
var timings: [Double] = []
for k in 1 ... a.count {
let start = mach_absolute_time()
_ = a.kSmallest(k)
let end = mach_absolute_time()
timings.append(Double(end - start)/Double(1e3))
}
let average: Double = timings.reduce(0, +) / Double(timings.count)
print(average, "us")
var timings2: [Double] = []
for k in 1 ... a.count {
let start = mach_absolute_time()
_ = a.kSmallest2(k)
let end = mach_absolute_time()
timings2.append(Double(end - start)/Double(1e3))
}
let average2: Double = timings2.reduce(0, +) / Double(timings2.count)
print(average2, "us")
It prints the average time for looking up one kth smallest element.
kSmallest is the original, kSmallest2 is the new one. They both operate on the same array a to ensure fairness.
kSmallest2 is up to 7μs faster per lookup. The fluctuation is due to the randomness of the arrangement of the elements of the array. This translates into up to ~70ms of total execution time gained for a 10,000-element array:
kSmallest 1.215636265 s (total time)
kSmallest2 1.138085315 s (total time)
In the worst case, in my tests, kSmallest2 may rarely be 2μs slower per lookup, and it is to be blamed on the randomness of choosing a pivot. Comparisons should probabilistically favor the second version.
2nd improvement
The following improvement concerns arrays with duplicates, and avoids unnecessary loops:
while a[low] == pivotElement, k - low > 1 {
low += 1
}
Instead of hopping by one index alone:
if a[low] == pivotElement {
low += 1
}
Benchmarks
The following code was used:
//As suggested by Tim Vermeulen
let a = (0..<100).flatMap { Array(repeating: $0, count: Int.random(in: 10..<30)) }
.shuffled()
var timings1: [Double] = []
for k in 1 ... a.count {
let start = mach_absolute_time()
_ = a.kSmallest(k)
let end = mach_absolute_time()
timings1.append(Double(end - start)/Double(1e6))
}
let average1: Double = timings1.reduce(0, +) / Double(timings1.count)
print("kSmallest", average1, "ms")
var timings2: [Double] = []
for k in 1 ... a.count {
let start = mach_absolute_time()
_ = a.kSmallest2(k)
let end = mach_absolute_time()
timings2.append(Double(end - start)/Double(1e6))
}
let average2: Double = timings2.reduce(0, +) / Double(timings2.count)
print("kSmallest2", average2, "ms")
var timings3: [Double] = []
for k in 1 ... a.count {
let start = mach_absolute_time()
_ = a.kSmallest3(k)
let end = mach_absolute_time()
timings3.append(Double(end - start)/Double(1e6))
}
let average3: Double = timings3.reduce(0, +) / Double(timings3.count)
print("kSmallest3", average3, "ms")
kSmallest3 has both the 1st and 2nd improvements.
Here are the results:
kSmallest 0.0272 ms
kSmallest2 0.0267 ms
kSmallest3 0.0236 ms
In an array with a high number of duplicates, the original code is now ~13% faster by implementing both improvements. That percentage will grow with the richness in duplicates, and a higher array count. If the array has unique elements, kSmallest2 is naturally the fastest since it'll be avoiding unnecessary checks.
3rd improvement (a fix?)
There are unnecessary loops when the random index is that of a pivot element which is already in its rightful place/order. These elements aren't swapped by the code, since they are already well placed. They are the ones that fall into the k > pivotIndex + 1 case with the low index equal to pivotIndex. An endless loop may occur if Int.random(in: low..<high) always returns low + 1.
The following code prevents such an (admittedly unlikely) endless loop:
var orderedLows: Set<Int> = [] //This will contain the indexes of elements that are already well ordered
while high - low > 1 {
// Choose random pivot element:
var randomIndex: Int
repeat {
randomIndex = Int.random(in: low..<high)
} while orderedLows.contains(randomIndex) &&
!orderedLows.isSuperset(of: Array<Int>(low..<high))
let pivotElement = a[randomIndex]
...
} else if k > pivotIndex + 1 {
// k-smallest element is in the second partition
if low == pivotIndex
{
orderedLows.insert(randomIndex)
}
low = pivotIndex
while a[low] == pivotElement, k - low > 1 {
low += 1
}
}
Benchmarks
The following array was operated on with the same benchmarking code as in the second improvment:
let a = (0..<100).flatMap { Array(repeating: $0, count: Int.random(in: 10..<30)) }
.shuffled()
And here are the results:
kSmallest 0.0662 ms
kSmallest2 0.0639 ms
kSmallest4 0.0575 ms
kSmallest4 being the version with all three improvements, and it is faster than all previous versions. The a array was purposefully chosen to be rich in duplicates to heighten the possibilty of elements that are already in the correct order. If not, kSmallest4 doesn't show any flagrant improvement.
Naming
1) pivotIndex is confusing: one would expect it to be the index of pivotElement, but it's not.
2) a isn't very descriptive. Its name neither conveys its mutability nor that it is a copy of the initial array.
3) Such an algorithm, née FIND, is commonly known as quickSelect (more names could be found here). Personally, I would prefer nthSmallest(_ n: Int), for the following reasons:
n instead of k since the latter is usually used in constant naming
nth instead of n denotes ordinality
Since the comparison predicate isn't provided, nthSmallest(_ n: Int) would be preferable to nthElement(_ n: Int) since it means that we'll be comparing elements in an ascending order.
Readability/Alternative approach
If readability and conciseness are paramount, here is an alternative that uses the partition(by:) method applied to the array slice mutableArrayCopy[low..<high]:
public func nthSmallest(_ n: Int) -> Element {
precondition(count > 0, "No elements to choose from")
precondition(1 <= n && n <= count, "n must be in the range 1...count")
var low = startIndex
var high = endIndex
var mutableArrayCopy = self
while high - low > 1 {
let randomIndex = Int.random(in: low..<high)
let randomElement = mutableArrayCopy[randomIndex]
//pivot will be the index returned by partition
let pivot = mutableArrayCopy[low..<high].partition { $0 >= randomElement }
if n < pivot + 1 {
high = pivot
} else if n > pivot + 1 {
low = pivot
//Avoids infinite loops when an array has duplicates
while mutableArrayCopy[low] == randomElement, n - low > 1 {
low += 1
}
} else {
return randomElement
}
}
// Only single candidate left:
return mutableArrayCopy[low]
}
In my tests, the more duplicates in the array, the more this version is on a par with (if not better than) the original code, but it is a bit slower when the array has unique elements.
Here are some tests:
[1, 3, 2, 4, 7, 8, 5, 6, 9, 10].nthSmallest(6) //6
[10, 20, 30, 40, 50, 60, 20, 30, 20, 10].nthSmallest(4) //20
["a", "a", "a", "a", "a"].nthSmallest(3) //"a"
A recursive rewrite of the original code seems more readable, but at the cost of execution time, since each time the function is called, a mutable copy of the array is created.
"domain": "codereview.stackexchange",
"id": 32649,
"tags": "algorithm, swift, swift3"
} |
What is machine learning? | Question: What is the definition of machine learning? What are the advantages of machine learning?
Answer: What is machine learning?
Machine learning (ML) has been defined by multiple people in similar (or related) ways.
Tom Mitchell, in his book Machine Learning (1997), defines an ML algorithm/program (or machine learner) as follows.
A computer program is said to learn from experience $E$ with respect to some class of tasks $T$ and performance measure $P$, if its performance at tasks in $T$, as measured by $P$, improves with experience $E$.
This is a quite reasonable definition, given that it describes algorithms such as gradient descent, Q-learning, etc.
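Mitchell's $(T, P, E)$ framing can be made concrete with a minimal sketch in plain Python (an illustration of my own, not taken from any of the cited books): the task $T$ is fitting a line $y = wx$, the performance measure $P$ is mean squared error, and the experience $E$ is a list of $(x, y)$ pairs. Performance on $T$, as measured by $P$, improves as gradient descent processes $E$.

```python
# Minimal "learner" in Mitchell's (T, P, E) terms -- an illustration only.
# Task T: predict y from x with a line y = w*x.
# Performance P: mean squared error.  Experience E: a list of (x, y) pairs.

def mse(w, data):
    """Performance measure P: mean squared error of y = w*x on the data."""
    return sum((y - w * x) ** 2 for x, y in data) / len(data)

def train(data, steps=200, lr=0.01):
    """Improve w by gradient descent on the experience E."""
    w = 0.0
    for _ in range(steps):
        grad = sum(-2 * x * (y - w * x) for x, y in data) / len(data)
        w -= lr * grad
    return w

experience = [(x, 3.0 * x) for x in range(1, 6)]  # true relation: y = 3x
w_after = train(experience)
print(mse(0.0, experience), mse(w_after, experience))  # P improves with E
```

With this data and learning rate, each update is a contraction toward the true slope, so the error measured by $P$ shrinks as more of $E$ is processed, exactly as the definition demands.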
In his book Machine Learning: A Probabilistic Perspective (2012), Kevin P. Murphy defines the machine learning field/area as follows.
a set of methods that can automatically detect patterns in data, and then use the uncovered patterns to predict future data, or to perform other kinds of decision making under uncertainty (such as planning how to collect more data!)
Without referring to algorithms or the field, Shai Shalev-Shwartz and Shai Ben-David define machine learning as follows.
The term machine learning refers to the automated detection of meaningful patterns in data.
In all these definitions, the core concept is data or experience. So, any algorithm that automatically detects patterns in data (of any form, such as textual, numerical, or categorical) to solve some task/problem (which often involves more data) is a (machine) learning algorithm.
The tricky part of this definition, which often causes a lot of misconceptions about what ML is or can do, is probably automatically: this does not mean that the learning algorithm is completely autonomous or independent from the human, given that the human, in most cases, still needs to define a performance measure (and other parameters, including the learning algorithm itself) that guides the learning algorithm towards a set of solutions to the problem being solved.
As a field, ML could be defined as the study and application of ML algorithms (as defined by Mitchell's definition).
Sub-categories
Murphy and many others often divide machine learning into three main sub-categories
supervised learning (or predictive), where the goal is to learn a mapping from inputs $\textbf{x}$ to outputs $y$, given a labeled set of input-output pairs
unsupervised learning (or descriptive), where the goal is to find "interesting patterns" in the data
reinforcement learning, which is useful for learning how to act or behave when given occasional reward or punishment signals
However, there are many other possible sub-categories (or taxonomies) of machine learning techniques, such as
deep learning (i.e. the use of neural networks to approximate functions and related learning algorithms, such as gradient descent) or
probabilistic machine learning (machine learning techniques that provide uncertainty estimation)
weakly supervised learning (i.e. SL where labeling information may not be completely accurate)
online learning (i.e. learning from a single data point at a time rather than from a dataset of multiple data points)
These sub-categories can also be combined. For example, deep learning can be performed online or offline.
Related fields
There is also a related field known as computational (or statistical) learning theory, which is concerned with the theory of learning (from a computational and statistical point of view). So, in this field, we are interested in questions like "How many samples do we need to approximately compute this function with a certain error?".
Of course, given that machine learning is a set of algorithms and techniques that are data- or experience-driven, one may wonder what the difference between machine learning and statistics is. In fact, in many cases, they are very similar and ML adopts many statistical concepts, and you may even read on the web that machine learning is just glorified statistics. ML and statistics often tackle the same problem, but from a different perspective or with slightly different approaches (and the terminology may slightly change from one field to the other). If you are interested in a more detailed explanation of their difference, you could read Statistics versus machine learning (2018) by Danilo Bzdok et al.
What is machine learning good for?
ML can potentially be used to (at least partially) automate tasks that involve data and pattern recognition, which were previously performed only by humans (e.g. translation from one human language, such as English, to another, such as Italian). However, machine learning cannot automate all tasks: for example, it cannot infer causal relations from the data (which often must be done by humans), unless you include causal inference as part of machine learning. If you are interested in causal inference, you could take a look at the paper Causal Inference by Judea Pearl (Turing Award for his work in causal inference!). | {
"domain": "ai.stackexchange",
"id": 1894,
"tags": "machine-learning, terminology, definitions"
} |
Notation confusion for differential volume | Question: Can anyone help me with an explanation of the following notation? I am a bit confused.
Let's say we have some type of integral and in the end we write different differentials, such as:
$$d\vec r ,\quad d^3\vec r,\quad dxdydz,\quad dV.$$
How are these related to each other? Can we express the first one in components, or the second one?
Any explanation would help. I am asking because I am looking at this thread:
Speed distribution in 1 dimension
And I don't understand it. First of all, PDFs (like the normal distribution, for example) have no differential part like there is for the velocity in the above link. And then, what is the difference between the Maxwell velocity distribution and the Maxwell speed distribution?
What I know (please correct me if I am wrong):
$dP(x)=f(x)dx$, which physically means that we are looking for the probability that $x$ lies in the interval between $x$ and $x + dx$.
Then for the velocity/speed (I don't know which of the 2 terms to use) distribution (in 3D), in analogy with the above equation we would have:
$dP(\vec v)=f(\vec v)\,dv_x\, dv_y\, dv_z$. Is $dv_x dv_y dv_z = d\vec v$ or $d^3 \vec v$? I am confused by the notations.
Answer: As a warning, there are no absolute rules about notation, and you should always consult the particular source you are reading and make sure you understand their conventions. If their conventions are not spelled out clearly, you should find another source.
Having said that, generally you can expect:
The 3-dimensional (scalar) volume element is $dV = d^3 \vec{r} = dx dy dz$. All three symbols are different notation for the same thing, with the main difference that $dx dy dz$ commits to using a Cartesian coordinate system. You will use this measure to integrate over a volume. A typical example would be to take a function $f(\vec{r})$ and an integral over some region $\Omega$ (for instance, $\Omega$ the interior of a sphere)
\begin{equation}
\int_\Omega dV f(\vec{r}) = \int_\Omega d^3 \vec{r} f(\vec{r}) = \int dx \int dy \int dz f(x,y,z)
\end{equation}
Note in the last line I've intentionally expanded the volume integral into three integrals over Cartesian coordinates and expressed the arguments of $f$ as 3 coordinate values instead of one vector value.
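As a numerical sanity check of the claim that $dV = dx\,dy\,dz$ (a sketch of my own, not part of the original answer), one can approximate $\int_\Omega dV$ for $\Omega$ the unit ball by summing Cartesian cells of size $h^3$; the result should approach the known volume $\tfrac{4}{3}\pi \approx 4.19$.

```python
import math

def ball_volume_riemann(n=60):
    """Approximate the volume integral over the unit ball as a Riemann sum
    of Cartesian cells, i.e. literally summing dx*dy*dz over the region."""
    h = 2.0 / n            # cell edge; the grid covers [-1, 1]^3
    dV = h ** 3            # the Cartesian volume element dx dy dz
    total = 0.0
    for i in range(n):
        x = -1.0 + (i + 0.5) * h        # cell midpoints
        for j in range(n):
            y = -1.0 + (j + 0.5) * h
            for k in range(n):
                z = -1.0 + (k + 0.5) * h
                if x * x + y * y + z * z <= 1.0:   # is the cell inside Omega?
                    total += dV
    return total

print(ball_volume_riemann(), 4 * math.pi / 3)   # both close to 4.19
```

Refining the grid (larger `n`) drives the sum toward the exact value, which is the operational meaning of the measure $dV$.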
A vector-valued line element is given by $d\vec{r}$. This is a measure factor for a line integral. A line integral is an integral of a vector function over a one dimensional path $\Gamma$ through a larger space. For example, perhaps you are computing the work done when you integrate the force $\vec{F}$ along some path $\Gamma$ that goes from point $A$ to point $B$, parameterized by $\lambda$. A typical example of a line integral would be
\begin{equation}
\int_\Gamma d\vec{r} \cdot \vec{F}(\vec{r}) = \int d\lambda \left(\frac{d\vec{r}}{d\lambda} \cdot \vec{F}(\vec{r})\right) = \int d \lambda \left(\frac{dx}{d\lambda} F_x + \frac{dy}{d\lambda}F_y + \frac{dz}{d\lambda}F_z\right)
\end{equation}
where the path $\Gamma$ is described by the function $\vec{r}(\lambda)=x(\lambda) \hat{e}_x + y(\lambda) \hat{e}_y + z(\lambda) \hat{e}_z$, and $\hat{e}_i$ is a unit vector in the $i$-th direction. | {
"domain": "physics.stackexchange",
"id": 78251,
"tags": "classical-mechanics, statistical-mechanics, conventions, notation, volume"
} |
What is the range of values of the expected percentile ranking? | Question: I'm currently reading
Hu, Koren, Volinsky: Collaborative Filtering for Implicit Feedback Datasets
One thing that confuses me is the "expected percentile ranking", an function the authors define to evaluate the goodness of their recommendations. They define it in the Evaluation methodology on page 6 as:
$$\overline{\text{rank}} = \frac{\sum_{u,i} r^t_{ui} \text{rank}_{ui}}{\sum_{u,i} r^t_{ui}}$$
where $u$ is a user, $i$ is an item (e.g. a TV show), $r_{ui} \in [0, \infty)$ is how much user $u$ watched show $i$, and $\text{rank}_{ui} \in [0, 1]$ is the percentile rank of item $i$ for user $u$. For example, it is 0 if item $i$ has the highest $r$ value for user $u$, and 1 if it has the lowest $r$ value.
I'm not super sure if I understood it correctly.
The authors write that lower values of $\overline{\text{rank}}$ are more desirable and for random predictions would lead to an expected value of $\overline{\text{rank}}$ of 0.5.
Examples
Assume there is only one item. In this case $\text{rank} = 0$. Makes sense, as there cannot be any predictions.
Assume there is only one user and two items with $r_{1,1} = 1$ and $r_{1,2} = 2$. Then:
$$\overline{\text{rank}} = \frac{1 \cdot \text{rank}_{1, 1} + 2 \cdot \text{rank}_{1, 2}}{1+2}$$
This means $\overline{\text{rank}} \in \{2/3, 1/3\}$.
If there is only a single user and all $|I|$ values of $r_{ui}$ are the same, then $\overline{\text{rank}} = \sum_{ui} \text{rank}_{ui} = \frac{|I|}{2}$
Questions
Is my understanding of the metric correct? Especially my last example and the statement by the authors that $\overline{\text{rank}} \geq 50\%$ indicates an algorithm no better than random seem off.
What is $t$?
Answer:
What is $t$?
It means observed $r_{ui}$ in the one-week test set (page 6-left).
Is my understanding of the metric correct?
The first two examples are correct. Assuming the user-item relation $r_{ui}^t$ is a constant $a$ for all items in the test set, and predicted ranks are uniform across $[0, 1]$, the third one would be:
$$\overline{\text{rank}} = \frac{\sum_{u,i} r^t_{ui} \text{rank}_{ui}}{\sum_{u,i} r^t_{ui}}=\frac{\sum_{u,i} a \text{ rank}_{ui}}{\sum_{u,i} a}=\frac{1}{|I|}\sum_{u,i} \text{ rank}_{ui}=\frac{1}{|I|}\frac{|I|}{2}=\frac{1}{2}$$
This makes sense. Items are identical to the user, therefore no model can do better than random guessing, since there is no observed preference to help the model favor one item over the other. Of course, another assumption here is that training (4 weeks) and test (next week) sets are from the same distribution. | {
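These formulas are easy to check numerically. The sketch below is my own addition; it assumes, as in the question's second example, that with two items the possible percentile ranks are $0$ (top) and $1$ (bottom). It reproduces the $1/3$ and $2/3$ values and the $\approx 0.5$ random-guessing baseline.

```python
import random

def expected_percentile_rank(r, ranks):
    """rank-bar = sum(r_ui * rank_ui) / sum(r_ui)."""
    return sum(ri * ki for ri, ki in zip(r, ranks)) / sum(r)

# One user, two items with r = [1, 2]; the two possible percentile
# ranks are 0 (ranked first) and 1 (ranked last).
good = expected_percentile_rank([1, 2], [1.0, 0.0])  # item 2 ranked first
bad = expected_percentile_rank([1, 2], [0.0, 1.0])   # item 1 ranked first
print(good, bad)                                     # 1/3 and 2/3

# Random guessing: ranks uniform on [0, 1] give rank-bar close to 0.5.
random.seed(0)
r = [random.random() for _ in range(10_000)]
ranks = [random.random() for _ in range(10_000)]
print(expected_percentile_rank(r, ranks))            # close to 0.5
```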
"domain": "datascience.stackexchange",
"id": 4928,
"tags": "recommender-system, mathematics"
} |
Is a static Schwarzschild observer geodesic? | Question: I know the answer to this question is no: a static observer is defined to be following the flow of the Killing vector field $\xi = \partial_t$, with appropriate normalization of the 4-velocity, such that $$\dot{t}=\left( 1-\frac{R_s}{r} \right)^{-1/2} = \text{const}, \quad \dot{r}=0, \quad \dot{\theta}=0, \quad \dot{\phi}=0$$ Here the overdot denotes the derivative with respect to proper time. These world lines are not geodesics: if they were freely falling, they would fall toward the center and the spatial coordinates would be functions of (proper) time; they need some kind of thrust to stay at the same point in space.
But: from Carroll, the explicit Schwarzschild geodesic equations are
which reduce to 4 identities $0=0$, considering the above equations for $\dot{t}, \dot{r}, \dot{\theta}, \dot{\phi}$. Where is the problem?
Answer: You are correct in saying that, with your definition of static observer, the curve on which the observer moves is not a geodesic curve. As physically expected, a geodesic trajectory in Schwarzschild universe requires motion, around the center of attraction or directed into it, in a way that is similar to Newton's classical gravity (but with much more complicated equations).
I think you are making a trivial error. In your definition of the four-velocity, you say that $\dot t$ is the only non-vanishing component, and that it is constant at all values of the parameter that you use to describe the motion. The dot is the derivative with respect to said parameter.
This means that, in your equations,
$$
\frac{dt}{d\lambda}=\left(1-\frac{R_s}{r}\right)^{-1/2},
$$
and all other terms are zero. As you can see, the second (radial) equation of motion is non-vanishing (unless you sit at $r=2GM$, apparently; but you can't have $r=2GM$ in Schwarzschild coordinates, as the metric is singular there, and you have to switch to another coordinate system). So, in general, the second equation is not satisfied and the curve is not a geodesic.
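To make this explicit numerically (my own illustration, with $G=c=1$): plugging the static four-velocity into the radial geodesic equation leaves the residual $\Gamma^{r}{}_{tt}\,\dot t^{\,2} = R_s/(2r^2)$, which is nonzero at every $r$, so a real force is needed to hold the observer in place.

```python
def radial_geodesic_residual(r, rs):
    """Gamma^r_tt * tdot**2 for the static observer
    (rdot = thetadot = phidot = 0), with G = c = 1.

    In Schwarzschild coordinates Gamma^r_tt = (1 - rs/r) * rs / (2 r^2),
    and the normalized static four-velocity has tdot**2 = 1 / (1 - rs/r).
    """
    f = 1.0 - rs / r
    gamma_r_tt = f * rs / (2.0 * r ** 2)
    tdot_sq = 1.0 / f
    return gamma_r_tt * tdot_sq      # simplifies to rs / (2 r^2) != 0

rs, r = 1.0, 10.0
print(radial_geodesic_residual(r, rs), rs / (2 * r ** 2))  # equal, nonzero
```

With $R_s = 2GM$ the residual is $GM/r^2$, the familiar Newtonian-looking acceleration the static observer's thrust must supply.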
"domain": "physics.stackexchange",
"id": 42099,
"tags": "general-relativity, black-holes, geodesics"
} |
Question about mass measurement and Archimedes law | Question: Say we have a container of water full to the brim and some object with volume $V$. We measure the mass of the container with the object by its side to obtain $m$. Now we put the object inside the container and it sinks, and we measure the mass of the container with the object in it to obtain $m_1$. Now, since the container is full to the brim, when we put the object inside, it will push out a volume $V$ of water, but also because of the buoyant force the objects weight will be less for the mass of the displaced water so we can conclude that $m-m_1=2\rho gV$ where $\rho$ stands for water density. Is this correct?
What happens if the object floats? Will then the measured mass be $m-\rho g \Delta V$ where $\Delta V$ is the volume of the object submerged/volume of displaced (pushed out) water?
Answer: It's not correct, there's no "2" in the first expression, if I understand the setup. It sounds like you are saying that you put a beaker filled to the top on a scale, and zero the scale. Then you put an object next to the beaker, and the scale reading gives you the weight of the object. Then you put the object in the beaker, discard the overflow, and measure the weight again. If that's what you mean, that second weight is less than the first one only by the weight of the overflow that was discarded. The buoyant force on the object is an internal force to that system, so will come in an action-reaction pair that has no effect on the scale reading.
Thus the difference between the object sinking and floating is only in the weight of the discarded overflow. If the object sinks, the overflow is the weight of water that fits in the object's volume (no "2"). If the object floats, the overflow is the weight of the object itself. So the scale will read zero if the object floats and the overflow is discarded. If the overflow is not discarded, there is no difference in the weight when the object is placed in the beaker rather than next to it, regardless of whether the object sinks or floats; either way the scale will read the weight of the object.
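To put numbers on this bookkeeping (my own worked example, not part of the original answer): take water density $1\,\mathrm{g/cm^3}$, a $100\,\mathrm{cm^3}$ object, and a scale zeroed with the brim-full beaker alone.

```python
RHO_WATER = 1.0   # g/cm^3

def scale_reading(mass_g, volume_cm3, discard_overflow=True):
    """Scale reading (in grams) after the object is put into a brim-full
    beaker, with the scale zeroed on the full beaker alone."""
    sinks = mass_g > RHO_WATER * volume_cm3
    # Overflow: the object's own volume of water if it sinks,
    # or its own weight of water if it floats (Archimedes).
    overflow_g = RHO_WATER * volume_cm3 if sinks else mass_g
    return mass_g - overflow_g if discard_overflow else mass_g

print(scale_reading(250.0, 100.0))                          # sinks: 150.0
print(scale_reading(60.0, 100.0))                           # floats: 0.0
print(scale_reading(60.0, 100.0, discard_overflow=False))   # kept: 60.0
```

Note the absence of any factor of 2: the sinking object lowers the reading only by the 100 g of discarded water, and the floating object's reading is zero because the discarded overflow weighs exactly as much as the object.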
"domain": "physics.stackexchange",
"id": 35966,
"tags": "classical-mechanics, fluid-dynamics, fluid-statics"
} |
Unclear about proof for unique MST given graph G with distinct weights | Question: http://homepages.math.uic.edu/~leon/cs-mcs401-s08/handouts/mst.pdf
I have some trouble understanding the proof above.
I understand that we assume two MSTs, T and T', and an edge e that is the cheapest edge of G located in T. Then the weight of this edge is larger than any weight on T', given that T' contains (x,y), by definition of MST.
My question is: why do we assume that T' passes through (x,y)? Wouldn't it be natural to assume that T' is completely disjoint from T?
Answer: The edge $e$ is not necessarily the cheapest edge of $G$ that is in $T$. Rather, it is the cheapest edge in the symmetric difference $T \triangle T'$. It could belong to either $T$ or $T'$ (but not both!), but since both cases are the same, we assume that it belongs to $T$ without loss of generality.
We don't need to separately consider the case that $T$ is disjoint from $T'$ and the case that they share some edges. All we need to know is that they are different and so their symmetric difference is not empty. | {
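A tiny concrete case (my own illustration) shows why disjointness need not be assumed: two spanning trees of a weighted 4-cycle can share most of their edges and still have a nonempty symmetric difference, and $e$ is just the cheapest edge in that difference.

```python
# A 4-cycle a-b-c-d-a with distinct edge weights.
w = {('a', 'b'): 1, ('b', 'c'): 2, ('c', 'd'): 3, ('d', 'a'): 4}

T1 = {('a', 'b'), ('b', 'c'), ('c', 'd')}   # one spanning tree
T2 = {('a', 'b'), ('b', 'c'), ('d', 'a')}   # a different spanning tree

diff = T1 ^ T2                  # symmetric difference T (triangle) T'
e = min(diff, key=w.get)        # cheapest edge in the symmetric difference

print(sorted(diff))   # nonempty even though the trees share two edges
print(e, w[e])        # here e happens to lie in T1, not in T2
```

Here `e` belongs to exactly one of the trees, which is all the proof needs; whether the trees overlap elsewhere is irrelevant.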
"domain": "cs.stackexchange",
"id": 3828,
"tags": "graphs, proof-techniques, spanning-trees"
} |
MVC Repository Insert Using async | Question: This is practically my first time using async in my code. I read about dos-and-donts but I wanted to get some feedback on what I've written.
I'm using a repository pattern and I'm inserting asynchronously into the database.
An explanation of the models involved: I have one DomainModel and three tblModels (tblModel1, tblModel2, tblModel3). tblModel2 and tblModel3 each hold an ID from tblModel1. The DomainModel is made up of all three of the tblModels. I'm simplifying the domain-to-database mapping here because I'm focusing on feedback about the asynchronous calls.
Repository
public async Task<int> InsertAsync(tblModel1 tm1)
{
tm1 = db.tblModel1.Add(tm1);
await db.SaveChangesAsync();
return tm1.ID;
}
public async Task<int> InsertAsync(tblModel2 tm2)
{
tm2 = db.tblModel2.Add(tm2);
await db.SaveChangesAsync();
return tm2.ID;
}
public async Task<int> InsertAsync(tblModel3 tm3)
{
tm3 = db.tblModel3.Add(tm3);
await db.SaveChangesAsync();
return tm3.ID;
}
Manager
public async Task<bool> InsertAsync(DomainModel M)
{
try
{
await Task.Run(async () => {
var tm1ID = await repo.InsertAsync(M.tm1);
var tm2ID = await repo.InsertAsync(AddID(M.tm2,tm1ID));
var tm3ID = await repo.InsertAsync(AddID(M.tm3,tm1ID));
});
}
catch (Exception ex)
{
return false;
}
return true;
}
The method AddID(tblModel, int) is to get the point across that I'm adding the ID from tblModel1 after it's inserted into tblModel2 and tblModel3. In actuality, I do this but I map the DomainModel to completely new tblModels and during that process I add the IDs to tblModel2 and tblModel3. I believe the code above is representative of what I'm doing.
Please let me know if there is a better way for me to optimize these asynchronous inserts into the database. Thank you!
Answer: The best thing you could do is let EF do what it's good at... You don't need to save every single time you add an entity - let EF track the changes in its DbSets repositories and then save the changes in the
DbContext unit of work.
As a more general point - why aren't you saving the whole DomainModel at once? I'd expect to just be able to do:
context.DomainModels.Add(someDomainModel);
await context.SaveChangesAsync();
I've hinted at it but I'll say it outright - you get very little to no benefit in writing Repository classes wrapping the db context. It's just not worth it.
I'm not going to comment on the bad naming because I'm guessing that this is actually just an example of your working code. | {
"domain": "codereview.stackexchange",
"id": 14390,
"tags": "c#, asp.net, entity-framework, asynchronous, repository"
} |
How to use this strange operator with double factorials of the photon number operator? | Question: In a few quantum physics papers I saw an operator proportional to this one:
$$\hat{N}=\frac{(\hat{n}-1)!!}{\hat{n}!!},$$
where $\hat{n}=\hat{a}^{\dagger}\hat{a}$ and $!!$ is the double factorial.
Any idea on how to apply such an operator on e.g. a Fock state $|n\rangle$ or a coherent state?
Answer: $$\frac{(\hat{n}-1)!!}{\hat{n}!!}|n\rangle =\frac{(n-1)!!}{n!!}|n\rangle$$
Regarding coherent states, it is enough to expand them in terms of states with definite $n$ and use linearity.
All that immediately arises from the general spectral theory: If $\psi_a$ is an eigenvector of a self-adjoint operator $A$ with eigenvalue $a\in \mathbb R$ and $f$ is any (measurable, real or complex valued) function over $\mathbb R$, by definition
$$f(A) \psi_a := f(a) \psi_a\:.$$ | {
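As a quick numerical sketch of this spectral rule (my own, in Python; the double factorial uses the usual convention $(-1)!! = 0!! = 1$): on a Fock state $|n\rangle$ the operator just multiplies by the number $(n-1)!!/n!!$, and on a coherent state each Fock coefficient $c_n$ is multiplied by that same number.

```python
import math

def dfact(n):
    """Double factorial, with the usual convention (-1)!! = 0!! = 1."""
    return 1 if n <= 0 else n * dfact(n - 2)

def f(n):
    """Eigenvalue of (n-1)!! / n!! on the Fock state |n>."""
    return dfact(n - 1) / dfact(n)

print(f(4))   # 3!!/4!! = 3/8
print(f(5))   # 4!!/5!! = 8/15

# Coherent state |alpha>: c_n = exp(-|alpha|^2/2) * alpha^n / sqrt(n!).
# By the spectral theorem the operator simply rescales each coefficient.
alpha, nmax = 1.5, 20
c = [math.exp(-abs(alpha) ** 2 / 2) * alpha ** n / math.sqrt(math.factorial(n))
     for n in range(nmax)]
new_c = [f(n) * cn for n, cn in enumerate(c)]   # coefficients of N-hat|alpha>
```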
"domain": "physics.stackexchange",
"id": 35482,
"tags": "quantum-mechanics, operators, hilbert-space, notation"
} |
Divergence of a Vector Field - Surprising Result | Question: I'm following the text Introduction to Electrodynamics by Griffiths, and I came across the following in an in-text problem:
Sketch the vector function v = $\frac{\boldsymbol{\hat{\mathbf{r}}}}{r^2}$, and compute its divergence. The answer may surprise you... can you explain it?
Well, the answer did surprise me, for the sketch of the function is indeed indicating a diverging field (like field lines from a point positive charge), yet the math claims the divergence to be zero. What's going wrong?
This is the solution I have, from a manual, which also doesn't make sense to me:
The answer is that $\nabla·v = 0$ everywhere except at the origin, but at the
origin our calculation is no good, since $r = 0$, and the expression for $v$ blows up. In fact, $\nabla·v$ is infinite at that one point, and zero elsewhere.
Could someone please help me understand the situation? Any help would be appreciated, thanks!
P.S. I understand that this has been asked earlier on Physics SE, but I didn't understand the answers. The one with the most upvotes said:
Pretty sure the question is about $\frac{\hat{r}}{r^2}$, i.e. the electric field around a point charge. Naively the divergence is zero, but properly taking into account the singularity at the origin gives a delta-distribution. (Answer by @genneth)
What's the delta distribution in conversation?
Answer: The "problem" is there because one assumes that the charge is a point at $r=0$.
To see how one might get around the "problem" in a non-rigorous way, suppose instead one assumes a uniform charge density $\rho$.
In a sphere of volume $V$, the total charge is $\displaystyle\int_{\rm V} \rho\,dV$.
The electric field due to a point charge $q$ is $\vec E = \dfrac {1}{4\pi \epsilon_0}\dfrac {q}{r^2} \hat r = k \dfrac {q}{r^2} \hat r $
In our case the charge is not a point charge but distributed over a volume $V$ and so $\vec E = k \dfrac {\int_{\rm V} \rho\,dV}{r^2} \hat r $.
The electric flux through the surface of a sphere of radius $R$ is $\displaystyle \int_{\rm S}\vec E \cdot d\vec s = k \dfrac {\int_{\rm V} \rho\,dV}{R^2}\,4\pi R^2=\int _{\rm V}4 \pi k \rho\, dV= \int _{\rm V}\nabla\cdot \vec E \,dV$.
So the divergence of the electric field is $4 \pi k\rho = \dfrac {\rho}{\epsilon _0}$ in a more familiar form.
Now consider what a point charge implies.
As the radius of the sphere decreases towards zero, the charge density must tend towards infinity (for a fixed total charge).
To get around this "problem" a function is "loosely" defined as having the property that its integral is $1$ over any region containing $r=0$ and zero over any region excluding the origin.
It is called a delta function, $\delta (r)$, and it has infinite height and zero width but a finite area of $1$ at $r=0$.
So now the divergence of the electric field from a point charge $q$ is given by $\nabla \cdot \vec E= \dfrac{q}{\epsilon_0} \,\delta (r)$.
At $r=0$ the divergence of the electric field is infinite, in such a way that its volume integral is $\dfrac{q}{\epsilon_0}$, and the divergence is zero everywhere else, as you have found.
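Both halves of this picture can be checked numerically (my own sketch, not part of the original answer): a central-difference divergence of $\hat r / r^2$ vanishes at any point away from the origin, while the flux through a sphere of any radius $R$ is $(1/R^2)\cdot 4\pi R^2 = 4\pi$, independent of $R$, which is exactly the finite "area" carried by the delta function at the origin.

```python
import math

def v(x, y, z):
    """The field r-hat / r^2: component i equals x_i / r^3."""
    r3 = (x * x + y * y + z * z) ** 1.5
    return (x / r3, y / r3, z / r3)

def divergence(x, y, z, h=1e-5):
    """Central finite-difference divergence of v at (x, y, z)."""
    return ((v(x + h, y, z)[0] - v(x - h, y, z)[0])
            + (v(x, y + h, z)[1] - v(x, y - h, z)[1])
            + (v(x, y, z + h)[2] - v(x, y, z - h)[2])) / (2 * h)

print(divergence(1.0, 1.0, 1.0))   # ~0 away from the origin
# Flux through a sphere of radius R: (1/R^2) * 4*pi*R^2 = 4*pi for every R,
# so all of the "divergence" is concentrated at r = 0.
print(4 * math.pi)
```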
"domain": "physics.stackexchange",
"id": 60649,
"tags": "homework-and-exercises, vectors, vector-fields"
} |
Software defined radio panadapter | Question: I am trying to make a panadapter for a software defined radio and I am a little bit stuck and hope someone can help me.
I have an IQ signal from my radio. (Actually I am using an IQ wav file from the internet, recorded at 44800, 2 channels, 16-bit.)
If I use HDSDR the spectrum looks like this:
The display from my app looks like this:
I am doing something wrong and I do not know exactly what (I am new to this and I am not very sure I completely understand the concepts behind it...). I will try to describe what I am doing and hope you can help me fix it:
I open the file and start reading from it in a byte buffer (1024 bytes at a time)
I convert every 2 bytes into shorts (ByteOrder.LITTLE_ENDIAN)
I apply HanningWindow to the buffer
I split the buffer into 2 buffers, one per channel (even indices, %2==0, as left and odd indices, %2==1, as right)
I create an array of Complex numbers, taking left[i] as the real part and right[i] as the imaginary part
I do FFT on the complex array (wavenumber table with size 512)
I do an FFT shift (from [1,2,..,n/2,n/2+1,..,n-1,n] to [n,n-1,..,n/2+1,1,2,..,n/2] )
I plot the result
Do I need to process the IQ signal somehow beforehand (demodulate it?)
Am I reading the file in a wrong way? (I know that the file starts with a header and everything, but after that the data part starts.)
How can I do a phase/amplitude correction to remove the unwanted image present in a non-ideal IQ recording made by a sound card?
Thank you,
Bogdan
Answer: There are a number of things that could be going wrong; you haven't provided enough information to definitively diagnose the problem. (A link to the wav file you obtained would be useful, so we can try it out ourselves.) Here are some observations:
You're assuming the file is uncompressed 16-bit little-endian samples, and ignoring the presence of the header. This could be wrong, either because the format is not what you think, or the header is not a multiple of 4 bytes long. Instead, use an audio-editing program such as Audacity to read the file and write it out again as a raw file where you have specified the parameters. That, or use a WAV-reading library in your program.
Your lengths don't match up. If you read 1024 bytes, then you have 512 shorts, or 256 complex-shorts — but you say your FFT size is 512, not 256. Either your program is not as you say or it might be reading uninitialized memory.
Your output appears to have four almost-copies of the spectrum. Two mirrored copies are explainable if you are misreading the input such that the I and Q components are not aligned properly — a symmetric spectrum is to be expected if one of I or Q is zero, constant, or otherwise not properly aligned with the input.
The other copy might be due to an error in your FFT shift code or graphics code, or because you are trying to display both components of the complex FFT output separately rather than taking the magnitude.
I would suggest adjusting the scaling of your color shading so that it is more similar to the HDSDR output, and increasing the FFT size (bin count). (You may also need to take the log of the magnitude.) This will make the result look more like HDSDR's, which will help you match up parts of the spectrum.
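For reference, here is a minimal end-to-end sketch of the processing chain being described (my own Python illustration with synthetic IQ samples rather than the asker's Java and wav file, and with a naive DFT standing in for a real FFT so it stays self-contained): window N complex samples, transform, shift DC to the center, and take magnitudes. A complex tone at bin +20 should land 20 bins to the right of center after the shift.

```python
import cmath, math

def dft(x):
    """Naive DFT -- O(N^2), fine for a demo; use a real FFT in practice."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def fft_shift(X):
    """Move bin 0 (DC) to the middle: negative freqs left, positive right."""
    N = len(X)
    return X[N // 2:] + X[:N // 2]

N, tone_bin = 256, 20
# Synthetic IQ: one complex tone at +tone_bin (a stand-in for file data;
# with a wav file, I would come from even samples and Q from odd ones).
iq = [cmath.exp(2j * math.pi * tone_bin * n / N) for n in range(N)]
hann = [0.5 - 0.5 * math.cos(2 * math.pi * n / N) for n in range(N)]
spectrum = fft_shift(dft([h * s for h, s in zip(hann, iq)]))
mags = [abs(X) for X in spectrum]          # plot log10 of these for display

peak = max(range(N), key=mags.__getitem__)
print(peak, N // 2 + tone_bin)   # peak sits tone_bin bins right of center
```

If the I and Q streams are misaligned or swapped at the reading stage, a mirrored peak appears on the other side of center, which matches the symmetric copies described above.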
Do I need to process the IQ signal before somehow (demodulate it ?)
No. The sorts of processing that could be done to refine a panadapter display would be done after the FFT. The only things to do before would be things like IQ imbalance correction, which are just as appropriate for a signal being demodulated and played as audio (or whatever sort of information) as one going to the panadapter. | {
"domain": "dsp.stackexchange",
"id": 8133,
"tags": "fft, quadrature, software-defined-radio"
} |
Diophantine equations and P=NP | Question: It was proven that the problem of determining whether a given Diophantine equation has a solution is undecidable (and therefore has no polynomial time algorithm). But we can check proof certificates (that is, solutions to the equation) in linear time by plugging in our solution and evaluating. So the problem is in NP. Why does this not imply P $\neq$ NP?
Answer: Every problem in NP is decidable. Since determining whether Diophantine equations have solutions is undecidable, it cannot be in NP. Therefore, although the solution to the equation is a certificate, the size of this certificate is not bounded by any polynomial in the size of the input. | {
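A concrete way to see the certificate-size issue (my own illustration; the Pell equation below is decidable, unlike the general problem, but it already shows the growth): $x^2 - 61y^2 = 1$ is a tiny input, yet its smallest positive solution is enormous, and minimal solutions of such equations can grow exponentially in the input length.

```python
# Smallest positive solution of x^2 - 61*y^2 = 1 (a classical Pell example):
x, y = 1766319049, 226153980

assert x * x - 61 * y * y == 1    # checking the certificate is easy...
# ...but the certificate is already 10 digits for a 2-digit coefficient,
# and in general nothing bounds certificate size by a polynomial in the
# input length -- so "solutions are checkable" alone does not put a
# problem in NP.
print(x, y)
```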
"domain": "cs.stackexchange",
"id": 3677,
"tags": "complexity-theory, computability"
} |
Trying to understand quantum physics double slit experiment | Question: I have watched many videos and read many articles and it says that a particle is acting like a wave. So why we try to understand on which hole the particle went through?
Clearly a wave will pass through both slits and that's why we get the striped pattern.
A short (not extended) theory would be:
Each particle contains information about the whole object. So light that passes through 2 slits contains particles that are related to each other.
In the same way we people are made from particles, and each of them contains information and is connected with the others; that's why I am me and you are you.
So if all my particles pass through those slits, in the end they will produce me. But if your particles pass through those slits, they will produce you and not me. So it makes sense that these particles contain information about the whole object and are connected to each other.
Similarly, it is pre-defined that the light will produce a striped pattern; therefore each particle is a wave, but it contains information about the whole object and is connected with the other particles of that object.
So what is weird about quantum mechanics?
Answer: I will try to explain my perspective on the matter.
The "strangeness" in quantum mechanics is that sometimes light appears to behave like a particle, and sometimes like a wave.
In the photoelectric effect or Compton scattering, a photon behaves like a particle. Roughly speaking, we can treat it like a billiard ball that collides with other billiard balls, using the normal collision rules. We can predict what light will do in these cases if we treat it like a particle.
In the double-slit experiment, light behaves like a wave; it passes through both slits and makes an interference pattern. In this case, we can predict what light will do if we treat it like a wave.
But is light really a wave or a particle? Why should it behave like different things in different situations, and how do we tell in advance which one it's going to behave like?
There is an additional experiment that further complicates matters. Suppose an experimenter forces light to behave like a particle. The experimenter sends only one particle at a time towards the slits, and waits for the particle to arrive on the other side before sending the next one.
And the interference pattern still occurs! So now it's not just that a group of particles can behave like a wave, but a single particle that behaves like a wave, even though the experimenter told it to behave like a particle. So if it is behaving like a particle, we should definitely be able to tell which slit it went through - it can only go through one if it's behaving like a particle.
So the experimenter puts a detector on the slits to see which one it went through. Now we know for sure that the particle is only going through one slit at a time - and the interference pattern disappears.
Now it looks like even in the same experiment, light might behave as a wave, or as a particle; it might make an interference pattern, or not. Even if you just say "particles are also waves", that doesn't let you predict in advance what the particle will do. So it's not just a matter of saying "light is a wave"; we have to find a single description that covers both the particle behavior and the wave behavior, and that will tell us in advance what the particle will do.
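As a side note, the wave description does make a quantitative prediction for the two-slit pattern. A minimal sketch of the idealized far-field intensity (the wavelength and slit separation below are illustrative assumptions, not from the answer):

```python
import numpy as np

# Idealized two-slit interference (far field, equal slits, no single-slit envelope):
#   I(theta) ~ cos^2(pi * d * sin(theta) / wavelength)
# Parameters are illustrative assumptions, not from the experiment described above.
wavelength = 532e-9          # green light, metres
d = 50e-6                    # slit separation, metres

theta = np.linspace(-0.02, 0.02, 2001)               # viewing angle, radians
intensity = np.cos(np.pi * d * np.sin(theta) / wavelength) ** 2

# Bright fringes sit where d * sin(theta) = m * wavelength for integer m.
theta_first_max = np.arcsin(wavelength / d)           # m = 1
print(f"first off-centre maximum at {np.degrees(theta_first_max):.3f} degrees")
```

The same formula predicts where the detector-on-the-slits version loses its fringes: with which-slit information the cos² term is replaced by a structureless sum of the two single-slit contributions.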
I think that's what people find confusing about quantum mechanics. | {
"domain": "physics.stackexchange",
"id": 58302,
"tags": "quantum-mechanics, double-slit-experiment"
} |
Is information about the speed of light hidden in its spectrum? | Question: Can the speed of light in the vacuum (c) be inferred from the spectrum of light?
If that is not the case, is it possible to tell from light's spectrum that it has entered a different medium? E.g., can the correct fraction of c be inferred from the spectrum then?
Answer: This is not the case. The spectrum of light refers to the frequency content of the oscillations of light at any given point: you select a point, look at the electric field oscillations there, and decompose it into a superposition of waves of different frequencies. Thus the analysis is local, and the spectrum is a property of the source of light, and does not typically change upon transmission through a medium (unless, of course, there is absorption).
There is also nothing in the spectrum of a given source that will let you infer the speed of light in vacuum. Different sources have different spectra, but their light will always travel at speed $c$ in the vacuum.
On the other hand, light of different frequencies will experience different phase shifts upon transmission through a dispersive medium with a different refraction index for each frequency. This is not directly testable in an experiment as spectrometers only measure the intensity at each frequency, but you can retrieve the phase shift information in an interference scheme.
Thus, you need to match your phase-shifted light with an unperturbed copy, and measure the intensity.
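A rough numerical sketch of that idea (all numbers below are assumed for illustration): the detector sees fringe intensity, not phase, and the phase is what carries the refractive-index information.

```python
import numpy as np

# Extra phase picked up by light of frequency f crossing a medium of length L
# and refractive index n, relative to the same path in vacuum:
#   delta_phi = 2*pi*f*L*(n - 1)/c
# Toy values only -- illustrative, not from the answer.
c = 299_792_458.0            # speed of light in vacuum, m/s

def phase_shift(f, L, n):
    return 2 * np.pi * f * L * (n - 1) / c

def fringe_intensity(delta_phi):
    # Unit-amplitude two-arm interferometer: the detector measures this
    # intensity, and the phase must be inferred from the fringe pattern.
    return 0.5 * (1 + np.cos(delta_phi))

f = 5e14                     # ~visible light, Hz
L = 1e-3                     # 1 mm sample
n = 1.5                      # glass-like index
dphi = phase_shift(f, L, n)
print(f"phase shift: {dphi:.1f} rad, fringe intensity: {fringe_intensity(dphi):.3f}")
```

From the recovered phase one gets n at each frequency, and hence the speed in the medium c/n — always in terms of a c that is assumed known.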
If instead of simple detectors for the output rays you put in spectrometers (for an exceptionally well aligned interferometer) then you can read off the phase shift from the interference patterns in the spectra. This then gives you information about the refractive index, and from that you can infer the speed of light in the medium at a given frequency. (Note, though, that this is in terms of $c$, which is always assumed known.) | {
"domain": "physics.stackexchange",
"id": 9793,
"tags": "optics, electromagnetic-radiation, speed-of-light, frequency, wavelength"
} |
How many times can a recommender system recommend the same item to a user? | Question: I'm working on a hybrid music recommender system project; my goal is to create recommendation playlists in accordance with users' tastes.
I already implemented the first part, which uses a collaborative filtering algorithm, and I am now working on the content-based filtering part.
In order to make my recommender system accurate, I read dozens of research papers about evaluating recommender systems. There are many variables taken into account in those evaluations (such as coverage, etc.).
Some of those papers talk about user confidence in recommender systems, defining it as one of the most important parameters to take into account. I read that if, out of 10 recommended items, the user likes 3 or 4 of them, that's enough to gain their confidence.
There is one point that I couldn't find in all those papers:
How many times can we recommend the same item to a user?
To explain my question: during the playlist generation process there is a risk that the same song appears in two different playlists, and I'm wondering what the impact on the user's confidence in my recommender system would be in this case.
Answer: I don't think it matters if you recommend the same song in multiple playlists, just that your recommendations were accurate.
For example, imagine a user has two playlists: a rock playlist and a classical playlist. If you recommended Crazy Train - Ozzy Osbourne in both playlists, you could theoretically have a "successful recommendation" in the rock playlist and a bad recommendation in the classical playlist.
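One way to make the "same song across playlists" question concrete is a repeat cap. This is a hypothetical sketch (the function names and the cap value are assumptions, not from the answer) of filling playlists from ranked candidates while limiting how often any one song is reused:

```python
from collections import defaultdict

def build_playlists(candidate_lists, max_repeats=2):
    """Fill playlists from ranked candidate lists, capping how many times
    any single song may be reused across playlists. The cap is a tunable
    design choice, exactly the kind of knob an A/B test could optimize."""
    uses = defaultdict(int)
    playlists = []
    for candidates in candidate_lists:
        playlist = []
        for song in candidates:
            if uses[song] < max_repeats:
                playlist.append(song)
                uses[song] += 1
        playlists.append(playlist)
    return playlists

rock = ["Crazy Train", "Song A", "Song B"]
classical = ["Crazy Train", "Song C"]
chill = ["Crazy Train", "Song D"]
print(build_playlists([rock, classical, chill], max_repeats=2))
```

With a cap of 2, "Crazy Train" lands in the first two playlists but is dropped from the third.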
The number of times a song appears is a design decision/variable - there is no right answer. You could potentially optimize it in the future using A/B testing (if I limit a song to being recommended x times, do those users then value my recommendations more than the general population?). | {
"domain": "datascience.stackexchange",
"id": 5088,
"tags": "machine-learning, recommender-system"
} |
nodelets problem | Question:
Hi. I'm trying to create a custom pkg with nodelets.
I read already manual http://wiki.ros.org/nodelet/Tutorials/Porting%20nodes%20to%20nodelets
And for more understanding I use the pkg: https://github.com/tysik/obstacle_detector
And I got this error:
[ERROR] [1569303210.735895568]:Loader::load Failed to load nodelet [/test_nodelet1] of type [safety_zones/TestNodelet1] even after refreshing the cache: MultiLibraryClassLoader: Could not create object of class type safety_zones::TestNodelet1Nodelet as no factory exists for it. Make sure that the library exists and was explicitly loaded through MultiLibraryClassLoader::loadLibrary()
[ERROR] [1569303210.735951283]:Loader::load The error before refreshing the cache was: MultiLibraryClassLoader: Could not create object of class type safety_zones::TestNodelet1Nodelet as no factory exists for it. Make sure that the library exists and was explicitly loaded through MultiLibraryClassLoader::loadLibrary()
Any help, please. Thank you!
CMakeList.txt
cmake_minimum_required(VERSION 2.8.3)
project(safety_zones)
## Compile as C++11, supported in ROS Kinetic and newer
# add_compile_options(-std=c++11)
set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
set(CMAKE_CXX_EXTENSIONS OFF)
## Find catkin macros and libraries
## if COMPONENTS list like find_package(catkin REQUIRED COMPONENTS xyz)
## is used, also find other catkin packages
find_package(catkin REQUIRED COMPONENTS
roscpp
roslaunch
sensor_msgs
std_msgs
std_srvs
pluginlib
nodelet
)
###################################
## catkin specific configuration ##
###################################
## The catkin_package macro generates cmake config files for your package
## Declare things to be passed to dependent projects
## INCLUDE_DIRS: uncomment this if your package contains header files
## LIBRARIES: libraries you create in this project that dependent projects also need
## CATKIN_DEPENDS: catkin_packages dependent projects also need
## DEPENDS: system dependencies of this project that dependent projects also need
catkin_package(
INCLUDE_DIRS include
LIBRARIES test_nodelet1 test_nodelet2 ${PROJECT_NAME}_nodelets
CATKIN_DEPENDS
roscpp
roslaunch
sensor_msgs
std_msgs
std_srvs
pluginlib
nodelet
# DEPENDS system_lib
)
###########
## Build ##
###########
## Specify additional locations of header files
## Your package locations should be listed before other locations
include_directories(
include
${catkin_INCLUDE_DIRS}
)
## Declare a C++ library
add_library(test_nodelet1 src/test_nodelet1.cpp)
target_link_libraries(test_nodelet1 ${catkin_LIBRARIES})
add_dependencies(test_nodelet1 ${catkin_EXPORTED_TARGETS})
add_library(test_nodelet2 src/test_nodelet2.cpp)
target_link_libraries(test_nodelet2 ${catkin_LIBRARIES})
add_dependencies(test_nodelet2 ${catkin_EXPORTED_TARGETS})
#
# Build nodes
#
add_executable(test_nodelet1_node src/nodes/test_nodelet1_node.cpp)
target_link_libraries(test_nodelet1_node test_nodelet1)
add_executable(test_nodelet2_node src/nodes/test_nodelet2_node.cpp)
target_link_libraries(test_nodelet2_node test_nodelet2)
#
# Build nodelets
#
add_library(${PROJECT_NAME}_nodelets
src/nodelets/test_nodelet1_nodelet.cpp
src/nodelets/test_nodelet2_nodelet.cpp)
target_link_libraries(${PROJECT_NAME}_nodelets test_nodelet1 test_nodelet2)
#
# Install libraries
#
install(TARGETS test_nodelet1 test_nodelet2 ${PROJECT_NAME}_nodelets
ARCHIVE DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION}
LIBRARY DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION}
RUNTIME DESTINATION ${CATKIN_GLOBAL_BIN_DESTINATION})
#
# Install nodes
#
install(TARGETS test_nodelet1_node test_nodelet2_node
RUNTIME DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION})
#
# Install header files
#
install(DIRECTORY include/${PROJECT_NAME}/
DESTINATION ${CATKIN_PACKAGE_INCLUDE_DESTINATION})
#
# Install nodelet #and rviz plugins description
#
install(FILES nodelet_plugins.xml
DESTINATION ${CATKIN_PACKAGE_SHARE_DESTINATION})
package.xml
<?xml version="1.0"?>
<package format="2">
<name>safety_zones</name>
<version>0.0.0</version>
<description>The safety_zones package</description>
<!-- One maintainer tag required, multiple allowed, one person per tag -->
<!-- Example: -->
<!-- <maintainer email="jane.doe@example.com">Jane Doe</maintainer> -->
<maintainer email=""></maintainer>
<license>TODO</license>
<buildtool_depend>catkin</buildtool_depend>
<build_depend>roscpp</build_depend>
<build_depend>roslaunch</build_depend>
<build_depend>sensor_msgs</build_depend>
<build_depend>std_msgs</build_depend>
<build_depend>std_srvs</build_depend>
<build_export_depend>roscpp</build_export_depend>
<build_export_depend>roslaunch</build_export_depend>
<build_export_depend>sensor_msgs</build_export_depend>
<build_export_depend>std_msgs</build_export_depend>
<build_export_depend>std_srvs</build_export_depend>
<exec_depend>roscpp</exec_depend>
<exec_depend>roslaunch</exec_depend>
<exec_depend>sensor_msgs</exec_depend>
<exec_depend>std_msgs</exec_depend>
<exec_depend>std_srvs</exec_depend>
<exec_depend>pluginlib</exec_depend>
<build_depend>pluginlib</build_depend>
<build_depend>nodelet</build_depend>
<exec_depend>nodelet</exec_depend>
<!-- The export tag contains other, unspecified, tags -->
<export>
<nodelet plugin="${prefix}/nodelet_plugins.xml" />
</export>
</package>
nodelet_plugins.xml
<library path="lib/libsafety_zones">
<class name="safety_zones/TestNodelet1"
type="safety_zones::TestNodelet1Nodelet"
base_class_type="nodelet::Nodelet">
<description>
TestNodelet1
</description>
</class>
<class name="safety_zones/TestNodelet2"
type="safety_zones::TestNodelet2Nodelet"
base_class_type="nodelet::Nodelet">
<description>
TestNodelet2
</description>
</class>
</library>
test_nodelet1.h
#pragma once
#include <ros/ros.h>
#include <sensor_msgs/LaserScan.h>
namespace safety_zones
{
class TestNodelet1
{
public:
TestNodelet1(ros::NodeHandle& nh, ros::NodeHandle& nh_local);
~TestNodelet1();
private:
ros::Subscriber scan_sub_;
ros::Publisher scan_pub_;
void scanCallback(const sensor_msgs::LaserScan::ConstPtr &msg);
};
} // namespace safety_zones
test_nodelet1.cpp
#include <safety_zones/test_nodelet1.h>
using namespace safety_zones;
TestNodelet1::TestNodelet1(ros::NodeHandle& nh, ros::NodeHandle& nh_local)
{
ROS_INFO_STREAM("init TestNodelet1");
scan_sub_ = nh.subscribe<sensor_msgs::LaserScan>("/scan_back", 1, &TestNodelet1::scanCallback, this);
scan_pub_ = nh.advertise<sensor_msgs::LaserScan>("/scan_test", 1);
}
TestNodelet1::~TestNodelet1()
{
}
void TestNodelet1::scanCallback(const sensor_msgs::LaserScan::ConstPtr &msg)
{
ROS_INFO_STREAM("first_callback");
scan_pub_.publish(msg);
}
test_nodelet1_nodelet.cpp
#include <memory>
#include <nodelet/nodelet.h>
#include "safety_zones/test_nodelet1.h"
namespace safety_zones
{
class TestNodelet1Nodelet : public nodelet::Nodelet
{
public:
virtual void onInit()
{
ROS_INFO_STREAM("111");
ros::NodeHandle nh = getNodeHandle();
ros::NodeHandle nh_local = getPrivateNodeHandle();
try
{
NODELET_INFO("[TestNodelet1]: Initializing nodelet");
test_nodelet1_ = std::shared_ptr<TestNodelet1>(new TestNodelet1(nh, nh_local));
}
catch (const char *s)
{
NODELET_FATAL_STREAM("[TestNodelet1]: " << s);
}
catch (...)
{
NODELET_FATAL_STREAM("[TestNodelet1]: Unexpected error");
}
}
// virtual ~TestNodelet1Nodelet()
// {
// NODELET_INFO("[TestNodelet1]: Shutdown");
// }
private:
std::shared_ptr<TestNodelet1> test_nodelet1_;
};
} // namespace safety_zones
#include <pluginlib/class_list_macros.h>
PLUGINLIB_EXPORT_CLASS(safety_zones::TestNodelet1Nodelet, nodelet::Nodelet)
my launch file
<!-- Demonstation of obstacle detector -->
<launch>
<node name="nodelet_manager" pkg="nodelet" type="nodelet" args="manager" output="screen">
<param name="num_worker_threads" value="20"/>
</node>
<node name="test_nodelet1" pkg="nodelet" type="nodelet" args="load safety_zones/TestNodelet1 nodelet_manager" output="screen">
</node>
<!--
<node name="test_nodelet2" pkg="nodelet" type="nodelet" args="load safety_zones/TestNodelet2 nodelet_manager">
</node> -->
</launch>
<!-- -->
Originally posted by listenreality on ROS Answers with karma: 3 on 2019-09-24
Post score: 0
Original comments
Comment by gvdhoorn on 2019-09-24:
I would first check whether things work without forcing C++17 in your own package.
Plugin loading works by being able to introspection and by assuming certain functions/methods exist in your .so. But that requires ABI between the host and the library.
ROS Melodic is not compiled with C++17 enabled, so it may be that by forcing it in your nodelet, you're breaking ABI causing the plugin loader unable to load (or: find) your nodelet.
Everything could be fine ABI-wise, but it's easy to check and if it is the cause, prevents us from checking all sorts of other things.
Comment by listenreality on 2019-09-24:
@gvdhoorn so, I changed it to C++11, and still got the error
Answer:
Are you sure about this? In your CMakeLists.txt you declare the library with add_library(${PROJECT_NAME}_nodelets ...), so in nodelet_plugins.xml you should change libsafety_zones to libsafety_zones_nodelets, or remove the _nodelets suffix in the CMakeLists.txt.
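Concretely, since `add_library(${PROJECT_NAME}_nodelets ...)` with `project(safety_zones)` produces `libsafety_zones_nodelets.so`, the library path in the plugin description would need to become:

```xml
<!-- nodelet_plugins.xml: the path must name the library actually built,
     i.e. the add_library() target ${PROJECT_NAME}_nodelets -->
<library path="lib/libsafety_zones_nodelets">
```

(The `<class>` entries inside the `<library>` element stay as they are.)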
Originally posted by Delb with karma: 3907 on 2019-09-24
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by gvdhoorn on 2019-09-24:
Ah, @Delb's comment is probably the answer here.
I would still be careful with forcing C++17 in a package that basically requires ABI compatibility with the host that is going to load it, but not listing the correct library name is going to be an even bigger problem.
Comment by listenreality on 2019-09-24:
@Delb Thanks, that helped! | {
"domain": "robotics.stackexchange",
"id": 33813,
"tags": "ros, ros-melodic, nodelets"
} |
Problem about Choosing a Suitable Origin | Question:
The part I'm not getting is how we can choose "a suitable choice of origin" such that $\textbf{a}$ can be written as a vector in the direction of the magnetic field; specifically, here's my thought process: if we view $\dot{\textbf{x}}(t)$ as a function of time and likewise view ${\textbf{x}}(t)$ as a function of time, then we have:
$$\textbf{a}=\dot{\textbf{x}}(0)-\alpha\textbf{x}(0)\times{\textbf{n}}$$
Then I thought I should choose the origin such that $\textbf{x}(0)=\textbf{0}$ so that $\textbf{a}$ is simply the initial velocity of the particle, but the question never states that the particle is initially travelling along the magnetic field, so I'm not sure what to do about this.
Answer: Assuming you have proven the part that
$$
\dot {\mathbf x} = \alpha \mathbf x \times \mathbf n + \mathbf g t + \mathbf a,
$$
let's change the origin $\mathbf x_1=\mathbf x - \mathbf d$:
$$
\dot{\mathbf x}_1 = \alpha (\mathbf x_1 +\mathbf d)\times \mathbf n + \mathbf g t + \mathbf a = \alpha \mathbf x_1\times \mathbf n + \mathbf g t + (\mathbf a + \alpha \mathbf d \times \mathbf n),
$$
Vector $\alpha \mathbf d \times \mathbf n$ can be any vector perpendicular to $\mathbf n$. Thus, if $\mathbf a$ has a component perpendicular to $\mathbf n$, we can remove it with the proper $\mathbf d$. Indeed, select $\mathbf d = \alpha^{-1}\mathbf a \times \mathbf n$, then:
$$
\mathbf a_1 = \mathbf a+\alpha(\alpha^{-1}\mathbf a \times \mathbf n)\times\mathbf n = \mathbf a - \mathbf a (\mathbf n\cdot \mathbf n)+\mathbf n(\mathbf a\cdot \mathbf n) = (\mathbf a\cdot \mathbf n)\mathbf n
$$
clearly a vector collinear with $\mathbf n$. | {
"domain": "physics.stackexchange",
"id": 78950,
"tags": "classical-mechanics, coordinate-systems, inertial-frames"
} |
Number Wizard game | Question: This is a number guesser that guesses a random number between the values that you set. The user has to use the up or down arrow keys to say whether the number they thought of is higher or lower than the number generated. This continues until the number the user is thinking of is generated, and then the user presses Enter to say that their number has been guessed.
How can I make my code shorter? I feel like I have a lot of unnecessary code. I mainly want to shorten the Update() area, but any and all code shortcuts that could help me would be great.
NOTE: I had to use the newMax and newMin because the max and min values weren't updating correctly when I called them on ↑ and ↓ arrow presses.
using UnityEngine;
using System.Collections;
public class NumberWizard : MonoBehaviour {
int max;
int min;
int guess;
bool canGuess = false;
bool setMax = false;
bool setMin = false;
// Use this for initialization
void Start () {
print ("Welcome to Number Wizard");
print("Set max number!");
setMax = true;
}
void StartGame () {
print ("Pick a number in your head, but don't tell me!");
print ("The highest number you can pick is " + max);
print ("The lowest number you can pick is " + min);
max = max +1;
guess = Random.Range (min, max);
print ("Is the number higher or lower than " + guess + "?");
}
// Update is called once per frame
void Update () {
if (min >= max && !canGuess) min = max - 1;
if (max > 0 && min < 0 && !canGuess) min = 0;
if (max < 0) max = 0;
int newMax = max;
int newMin = min;
if (Input.GetKeyDown(KeyCode.UpArrow)){
if (canGuess){
min = guess;
NextGuess ();
} else if (!canGuess && setMax){
max += 100;
newMax += 100;
print ("Max: " + newMax);
} else if (!canGuess && setMin){
min += 1;
newMin += 1;
print ("Min: " + newMin);
}
} else if (Input.GetKeyDown(KeyCode.DownArrow)){
if (canGuess){
max = guess;
NextGuess ();
} else if (!canGuess && setMax){
max -= 100;
newMax -= 100;
if (newMax < 0) newMax = 0;
print ("Max: " + newMax);
} else if (!canGuess && setMin){
min -= 1;
newMin -= 1;
if (newMin < 0) newMin = 0;
print ("Min: " + newMin);
}
} else if (Input.GetKeyDown(KeyCode.Return)){
if (canGuess){
print ("I guessed correct!");
Start ();
min = 0;
max = 0;
canGuess = false;
} else if (!canGuess && setMax){
if (max > 0){
print("Set min number!");
setMax = false;
setMin = true;
} else if (max == 0)print("Choose a max number!");
}else if (!canGuess && setMin){
StartGame ();
setMin = false;
canGuess = true;
}
}
}
void NextGuess () {
guess = (max + min) / 2;
print ("Is the number equal to, higher or lower than " + guess + "?");
}
}
Answer: Architecture:
You should really try to abstract out the environment-specific aspects (the fact that you're using Unity).
I would suggest creating a standalone class Game and implementing all the business logic in there.
Your class (NumberWizard) would then become a thin wrapper that is only responsible for instantiating the Game class
and passing commands to it.
Doing so would contribute a lot to your solution's testability and re-usability.
Business logic and readability:
You could try to give a descriptive name to your logical conditions:
!canGuess && setMax
would become var isInSetMaxState = !canGuess && setMax,
which would then be used in the if statements. Doing so would make it easier to see what each if statement is for.
Your game has 2 major types of actions:
Increase/Decrease
Confirm(essentially change state)
It also has 3 states:
SetMax
SetMin
Guess
I would try to use them more explicitly in your code. It would make your code more readable and easier to understand.
something like the following:
private enum GameState { SetMax, SetMin, Guess, Win };
private enum Action { None, Increase, Decrease, Confirm };

private GameState _state = GameState.SetMax;
private Action _action = Action.None;
...
public void Update()
{
    _action = GetAction();
    switch (_action)
    {
        case Action.Confirm:
            switch (_state)
            {
                case GameState.SetMax:
                    _state = GameState.SetMin;
                    break;
                case GameState.SetMin:
                    _state = GameState.Guess;
                    break;
                case GameState.Guess:
                    _state = GameState.Win;
                    break;
            }
            break;
        case Action.Increase:
            switch (_state)
            {
                case GameState.SetMax:
                    // increase max logic
                    break;
                case GameState.SetMin:
                    // increase min logic
                    break;
                case GameState.Guess:
                    // guess in the upper range [guess, max]
                    break;
            }
            break;
        case Action.Decrease:
            // same as Increase, but decreasing
            break;
    }
} | {
"domain": "codereview.stackexchange",
"id": 13706,
"tags": "c#, game, unity3d"
} |
How is spooky action at a distance measured? | Question: My recent background reading: https://www.science.org/content/article/more-evidence-support-quantum-theory-s-spooky-action-distance
I have been trying to understand how experiments actually measure the state of both entangled electrons. The way it is typically explained is that one measures the first entangled electron, and according to the Heisenberg Uncertainty Principle its state is now resolved. Since it is entangled with the other electron, the other one is also implied to share the same state. But how does one simultaneously measure the other entangled electron to determine that its state is indeed the same as the other electron's? What is the mechanism by which that measurement is made, which allows one to confirm that the other electron actually shares the same state at that instant in time?
Answer: In general, for entangled spin pairs magnetic fields are used.
But it's important to note that the states are not the same, as you have stated, but are actually anticorrelated.
So for two entangled electrons, measuring spin along a particular axis, one of them may have a spin "up" and its entangled partner will have a spin "down", the anticorrelated value. Magnetic fields can allow us to measure this.
The Stern-Gerlach$^1$ setup to measure how particles behave in a magnetic field is summarized here. Electrons (and other particles with spin) have a magnetic dipole moment and how they interact with magnetic fields tells us about their spin orientation. Imagine the electron as being a tiny magnetic dipole or "bar magnet", and then imagine what happens if this little magnet is moving in a larger surrounding magnetic field, and how the forces on this magnet would operate depending on the orientation of this magnet (or its direction of spin). Think about one of the electrons moving through one setup and the other through a similar setup in the opposite direction a certain distance away.
Even though both setups may have a spacelike separation, both measurements will come up anticorrelated. And if one is measured before the other, the same will apply, even though both electrons are initially in a superposition of both the up and down state prior to measurement. The initial conclusion was that measuring one state instantaneously "causes" the entangled partner to then be in the anticorrelated state, though care must be taken since correlation and causation are not the same thing.
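To make "anticorrelated" concrete for same-axis measurements, here is a toy sampler of the statistics only (a sketch, not a model of the underlying mechanism, and a single shared axis says nothing about Bell-type tests):

```python
import random

def measure_singlet_pair():
    # Same-axis spin measurement on a singlet pair: each side alone looks
    # like a fair coin, but the two outcomes are perfectly anticorrelated.
    a = random.choice([+1, -1])
    return a, -a

pairs = [measure_singlet_pair() for _ in range(10_000)]
assert all(a == -b for (a, b) in pairs)      # perfect anticorrelation, every pair
up_fraction = sum(a == +1 for a, _ in pairs) / len(pairs)
print(f"fraction of 'up' on one side: {up_fraction:.2f}")  # close to 0.5
```

Each side's marginal statistics are a fair coin; only comparing the two records reveals the correlation, which is why no signal can be sent this way.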
$^1$Note the part that talks about single electrons since the original experiment used silver atoms with an unpaired outer electron, and the results described the spin of this electron. | {
"domain": "physics.stackexchange",
"id": 87650,
"tags": "quantum-entanglement, heisenberg-uncertainty-principle"
} |
Flickering kinect camera image using OpenNI | Question:
Hi,
I have been running an OpenNI Kinect node for approximately two hours and noticed that the camera image started to flicker, resulting in a point cloud with black points.
When looking on the camera image in Rviz it flickered as I said.
Does anyone else experience this problem and know what it is related to?
Nicklas
Originally posted by Nicklas on ROS Answers with karma: 21 on 2012-04-03
Post score: 2
Original comments
Comment by Ben_S on 2012-04-03:
Yes, when i have the Kinect running for a longer time (>1 hour), I sometimes experience the same phenomenon. A driver restart solves this iirc. Nevertheless could this be quite frustrating when doing some long-term exploration/mapping...
Comment by Yuto Inagaki on 2013-12-07:
I had same phenomenon too, when I launch openni.launch for Kinect which is in openni_launch after some minuties(20~30minutes). Is this solved?
Answer:
Sounds like a bug. Are you running openni or openni_kinect?
The openni package is deprecated, so try openni_kinect instead.
If you already use openni_kinect, open a defect ticket. Although the defect may be in hardware, the driver might be able to perform some recovery action (assuming we can reproduce the problem).
Originally posted by joq with karma: 25443 on 2012-04-04
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 8852,
"tags": "openni-kinect, camera"
} |
Git pre-commit hook to check that changes to certain files only happen on the master branch | Question: As a beginner in Ruby, I am trying to optimize this script, which is a client-side pre-commit hook for git.
Was hoping I could get a review to ensure I am following all ruby idioms, and to ensure I have the most elegant solution.
#!/usr/bin/env ruby
#
# Pre-commit hook created to help ensure dns changes are being
# committed to the correct branch.
#
# Hash of zone files to look for when a commit is attempted.
zone_files = {
  development: 'some_zone_file1',
  prod: 'some_zone_file2'
}
# Execute git diff to determine files includes as part of commit.
out_data = %x{git diff --cached --name-only}
pri_branch = "master"
# Iterate through the zone files, and check if any are found in the output.
# Only one needs to be found. No need to find all.
zone_files.keys.each do |key|
if out_data.include?zone_files[key]
current_branch = %x{git branch}.match(/\* (.+?)\n/m)
if current_branch[1].include?pri_branch
break
else
puts "Warning: You have triggered a git pre-commit hook."
puts "Warning: Your current branch is: #{current_branch[1]}."
puts "Warning: For dns, ensure you commit on branch: #{pri_branch}"
puts "Warning: Include '-n' to bypass this check."
puts "Exiting..."
exit 1
end
end
end
Answer:
Use fewer abbreviations. It's certainly not bad in this case, but small stuff like production instead of prod (unless perhaps it's a naming convention for your production environment/server), and primary_branch (I guess?) instead of pri_branch. Or, at any rate, be consistent: You have development spelled out, but you shorten production?
Use parentheses, or at the very least spaces, in method calls - and include? is just a method call. So the expression out_data.include?zone_files[key] looks weird right now.
When iterating a hash, the block will be passed the elements as [key, value] arrays. If you add a bit of array destructuring, you get key and value separately.
zone_files.each do |environment, file|
# ...
end
But zone_files doesn't actually need to be a hash. Of course you can use one, and it may be worth it to label to two zone files. But it could just be an array. And you might want to make it constant while you're at it.
Get the current branch once. Right now it's being parsed for each of the zone files, since it's inside the loop.
And check the branch once - if it's the pri_branch then you can skip everything else.
You can use a heredoc for the warning
Here's my take
# Array of zone files to look for when a commit is attempted.
ZONE_FILES = %w(some_zone_file1 some_zone_file2).freeze
# The branch we're allowed to commit zone files on
PRIMARY_BRANCH = "master".freeze
# get the current branch
current_branch = %x{git branch}.match(/\* (.+?)\n/m)[1]
# stop here if we're committing to the primary branch
exit(0) if current_branch == PRIMARY_BRANCH
# execute git diff to determine files includes as part of commit.
commit_files = %x{git diff --cached --name-only}
# find out if any of the zone files are among the files
# being committed
committing_zone_file = ZONE_FILES.detect { |zone_file| commit_files.include?(zone_file) }
# and if we're committing a zone file, complain to the user
if committing_zone_file
puts <<-EOT
Error: You have triggered a git pre-commit hook.
Your current branch is: #{current_branch}.
For DNS, ensure you commit on branch: #{PRIMARY_BRANCH}
Include '-n' to bypass this check.
Exiting...
EOT
exit(1)
end | {
"domain": "codereview.stackexchange",
"id": 8435,
"tags": "beginner, ruby, git"
} |
ROS_PACKAGE_PATH for catkin beginners | Question:
Section 1.2.2 of the beginner-level tutorial "8. Using rqt_console and roslaunch" says:
If roscd fails, remember to set the ROS_PACKAGE_PATH variable in your terminal. Then the commands will look like this:
$ export ROS_PACKAGE_PATH=~/<distro>_workspace/sandbox:$ROS_PACKAGE_PATH
I followed the catkin-based tutorial and found out that this doesn't work. I use ros-groovy on Ubuntu 12.10. I tried
export ROS_PACKAGE_PATH=~/catkin_ws/src:$ROS_PACKAGE_PATH
which solved the issue. I guess whats given in the tutorial is based on rosbuild. Perhaps a mention on catkin based command would help beginners.
Originally posted by Maheshwar Venkat on ROS Answers with karma: 5 on 2013-04-10
Post score: 0
Answer:
Did you ever source ~/catkin_ws/devel/setup.bash (or source ~/catkin_ws/install/setup.bash)? That step should set your ROS_PACKAGE_PATH.
Originally posted by William with karma: 17335 on 2013-04-10
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by Maheshwar Venkat on 2013-04-10:
does it remain set even if i restart my computer or something ?
Comment by William on 2013-04-10:
No, you have to source the setup.bash file each time you open a terminal. If you are going to be working with the same workspace for a long time you can put the source /path/to/setup.bash command in your ~/.bashrc file which will run each time you open a terminal.
Comment by Maheshwar Venkat on 2013-04-10:
Resolved and understood. Thanks! | {
"domain": "robotics.stackexchange",
"id": 13763,
"tags": "ros, catkin, begginer-tutorials, ros-package-path, environment-variables"
} |