anchor stringlengths 0 150 | positive stringlengths 0 96k | source dict |
|---|---|---|
Increase in kinetic energy of a system when there is no external force | Question: If a man starts running on a boat with an acceleration $a$ with respect to the boat, there is no external force that acts on the Boat+Man system (assuming friction due to water to be zero and neglecting air drag). But the velocity of system increases. What causes the increase in kinetic energy even if there is no external force?
I have not neglected friction between the man and the boat, but within the boat+man system it's not an external force. Gravity does no work, since the displacement of the boat is perpendicular to the gravitational force.
Answer: It does not matter that there is no external force acting on the system. The kinetic energy comes from the man running on the boat: he is turning the chemical energy in his muscles into kinetic energy. If we have an isolated system (i.e. one with no external forces and where nothing leaves or enters the system), we require total energy to be conserved within that system, not kinetic energy alone, which I think is what you have assumed.
Let us take an analogy. You have a car stationary on a road with no other forces present except the friction between the road and the car. The car can start its engine and drive off, increasing the kinetic energy of the system. Here the kinetic energy comes from the car's fuel which is burnt in its engine. In this case the road will in fact move (like the boat moves) but since it is attached to the earth the motion of the road is far less noticeable than that of the boat. | {
"domain": "physics.stackexchange",
"id": 24812,
"tags": "energy, energy-conservation, work"
} |
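A quick numerical sketch of the point above. All masses and the running speed are made-up values for illustration, and the water is assumed frictionless: the total momentum of the man+boat system stays zero, while kinetic energy appears out of the man's internal (chemical) energy.

```python
# Hypothetical numbers: a 70 kg man starts running at 3 m/s relative to a
# 130 kg boat on frictionless water. Momentum conservation fixes both
# ground-frame velocities; kinetic energy is created internally.
m_man, m_boat = 70.0, 130.0   # kg (made-up values)
u = 3.0                       # m/s, man's speed relative to the boat

# From m_man*v_man + m_boat*v_boat = 0 and v_man = v_boat + u:
v_boat = -m_man * u / (m_man + m_boat)   # ~ -1.05 m/s
v_man = v_boat + u                       # ~  1.95 m/s

momentum = m_man * v_man + m_boat * v_boat               # stays zero
kinetic_energy = 0.5 * m_man * v_man**2 + 0.5 * m_boat * v_boat**2

print(v_man, v_boat, momentum, kinetic_energy)
```

Kinetic energy ends up positive (about 205 J here) even though momentum never changes, which is exactly the answer's point.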
gmapping seems to ignore odometry data | Question:
Hi,
I'm new to ROS and just managed to get the Kinect fake laserscan and a driver node for my robot hardware working. Now I tried slam_gmapping to create a map of my room, but the map creation quality is far from the maps you can see on YouTube, e.g. http://www.youtube.com/watch?feature=player_embedded&v=yQfvObAAxZA#!
It seems to me that gmapping ignores the published odometry data and frames, because the estimated position jumps the whole time. Even if the robot rotates on the spot, the estimated position can jump by around 30-40 cm. The curious thing is that my odometry is quite good, and the laser scans would fit the available map perfectly without any jumping.
Is there a way to tell gmapping to stick to the odometry and make only small corrections?
Thanks and regards
jgdo -
Update:
thanks for your reply! I'm not using the turtlebot but a self-made robot with a Kinect and a notebook, so I don't have any calibration software for it. What does the turtlebot calibration do? I adjusted the odometry on my robot by manually driving and rotating, and it seems to work well: I can drive around, and when I come back to the starting place the computed coordinates are near zero.
here is a bagfile containing scan, tf, odometry and the map.
As you can see, at the beginning gmapping tries to match the new scans to the existing part of the map instead of extending it, which causes the estimated position to jump. To get a good map it is necessary to drive backwards into a corner so the Kinect can see two opposite walls at the same time.
I hope this data is enough for you; if not, I can make a longer example, but the behavior is the same.
Thanks again and regards - jgdo -
Originally posted by jgdo on ROS Answers with karma: 56 on 2012-06-06
Post score: 1
Original comments
Comment by allenh1 on 2012-06-06:
would you mind posting a bag file of your data?
Answer:
From my experience with the turtlebot gmapping, the odometry needs to be calibrated (I'm sure you've already done that in the tutorial, but I have trouble here). I suggest manually editing the turtlebot.launch file to change the gyro parameter to about 2.38 [that's what works best for me]. Then, run the calibration file. If you can post a bag file, I can try to play with your data.
Hope this helps!
-Hunter A.
With your bag, this is the map I made. It seems to me that your linear translation odometry is flawed (unless this is the environment).
I'm going to look at your bag file further. In the meantime, here is the turtlebot calibration information. I know it's not a turtlebot, but you might be able to use this file for now.
Ok. Found the issue. If you are using a gyro sensor, then that's what is messing up your map. The EKF uses the gyro and the odometry to get pose. When one of them is bad, the whole thing is bad. Consider adjusting your gyro sensor, or just completely removing it from the robot_pose_ekf configuration. If you're not using a gyro, just let me know.
Edit the file in /etc/ros/electric/turtlebot.launch:
$ sudo nano /etc/ros/electric/turtlebot.launch
You need to multiply the parameters that currently exist (most likely 1.0 for both) by the output numbers. Then, save the file (control + o) and restart the turtlebot services.
$ sudo service turtlebot stop
$ sudo service turtlebot start
(If you have not configured it to startup automatically, then just reboot your system).
Originally posted by allenh1 with karma: 3055 on 2012-06-06
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by jgdo on 2012-06-08:
I have tried the calibration and it said "Multiply the 'turtlebot_node/odom_angular_scale_correction' parameter with 1.007505". What do I have to do with this value now?
My robot doesn't have a gyro yet, so it can't mess up anything (or can it?)
Comment by jgdo on 2012-06-08:
How exactly did you get that map, especially the right one? As you can see in the bag file, I got pretty bad results.
Comment by allenh1 on 2012-06-08:
I used gmapping. Because I have a fast computer, I changed the parameters for mapping: rosrun gmapping slam_gmapping _particles:=300 _linearUpdate:=0.01 _angularUpdate:=0.001
Comment by allenh1 on 2012-06-08:
If you want me to map the whole thing, I can definitely do that for you. Just increase the size of your bag file. Do a couple loops.
Comment by jgdo on 2012-06-08:
Ok, it has to do with the number of particles you use. If I set it to 300 I get a good map; at 100, a bad one (loop closing doesn't work then). But I wonder how fast your computer is: I have an i3 2100 at 3.1 GHz and I had to slow the whole playback down to r = 0.05 to get good results.
Comment by jgdo on 2012-06-08:
Btw, I have a self-made motor controller connected to the notebook, not a Roomba, so I don't use the turtlebot driver at all but a custom driver node. Is it right that you just have to multiply delta(angle) by odom_angular_scale_correction when doing the odometry calculation?
Comment by allenh1 on 2012-06-09:
That's a little out of my knowledge area... I'll investigate the answer to that. I am running a Core i7 at 3.1 GHz.
Comment by allenh1 on 2012-06-11:
Ok. I have confirmation with respect to the scale_correction parameters. You multiply the deltas by them before the update. That's it! All you need to do!
"domain": "robotics.stackexchange",
"id": 9700,
"tags": "navigation, odometry, gmapping"
} |
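To illustrate the last comment (multiplying the deltas by the calibration factor before the update), here is a minimal odometry-integration sketch. The function name, the midpoint-integration scheme, and the reuse of the 1.007505 factor quoted earlier in the thread are illustrative assumptions, not the actual turtlebot driver code.

```python
import math

def odom_update(pose, d_linear, d_angular,
                linear_scale=1.0, angular_scale=1.007505):
    """Scale the raw wheel deltas by the calibration factors,
    then integrate them into the (x, y, heading) pose.
    The scale values here are hypothetical examples."""
    x, y, th = pose
    d_linear *= linear_scale
    d_angular *= angular_scale
    # Midpoint integration: translate along the average heading of the step.
    x += d_linear * math.cos(th + d_angular / 2.0)
    y += d_linear * math.sin(th + d_angular / 2.0)
    th += d_angular
    return (x, y, th)

pose = (0.0, 0.0, 0.0)
pose = odom_update(pose, 1.0, 0.0, angular_scale=1.0)  # drive 1 m straight
print(pose)
```

The key point from the comment thread is only the order of operations: apply the correction to each delta, then integrate, rather than scaling the accumulated pose afterwards.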
Can Potential energy be the same as instantaneous kinetic energy? | Question: I often wonder about kinetic energy and potential energy. In physics, if a rock is on top of a hill, it has potential energy: kinetic energy that doesn't exist yet. Is it then safe to think of potential energy as instantaneous kinetic energy? If we were to graph the kinetic energy over time, it would be a single point (flat line?), and when the rock tumbles down the hill the kinetic energy changes as it approaches the bottom. Thus, at an infinitesimal point in time, are potential energy and kinetic energy the same?
Answer: Potential energy is a property of position in space; where you put $E_p=0$ is entirely up to you. Kinetic energy is a property of a moving object, as you know from the formula
$$E_k=\frac{mv^2}{2}$$
So, if an object is sitting/resting at the top of the hill and you have chosen the ground to be
$E_p=0$,
the object has $0$ kinetic energy because it is not moving. However, because it has the "potential" to move, its potential energy is $$E_p=mgh$$
where $m$ is the mass of the object, $g$ is the gravitational acceleration, and $h$ is the height from the ground ($E_p$=0).
You can find more here: http://hyperphysics.phy-astr.gsu.edu/hbase/pegrav.html
This image describes how the energies behave in the scenario you described (a falling rock). | {
"domain": "physics.stackexchange",
"id": 40204,
"tags": "kinematics"
} |
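The distinction can be made concrete with a free-fall sketch (no drag; the mass and height are illustrative values): at every instant, potential and kinetic energy are distinct quantities whose sum is constant, and they are equal only at the single moment the rock passes half its initial height, not at every instant.

```python
g = 9.81          # m/s^2
m, h = 2.0, 10.0  # kg, m (illustrative values)

E_total = m * g * h  # energy at the moment of release (all potential)
for t in [0.0, 0.5, 1.0]:          # times before impact
    y = h - 0.5 * g * t**2         # height above the ground
    v = g * t                      # speed
    E_p = m * g * y
    E_k = 0.5 * m * v**2
    assert abs((E_p + E_k) - E_total) < 1e-9   # the sum is conserved
    # E_p equals E_k only at the one instant when y == h/2.
```

So "potential energy is instantaneous kinetic energy" is not quite right: the two curves cross once, and only the total stays flat.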
Guessing game in Java - Take 3 | Question: I already have two versions of this code reviewed (thanks @Bobby).
They can be found here and here.
The question is still the same.
The goal is maintainability and following best practices.
Jar.java
package com.tn.jar;
import java.util.Random;
public class Jar {
    private String itemName;
    private int numberOfItems;
    private int numberToGuess;
    private int numberOfGuesses;
    public Jar() {
        this.itemName = "Default Name";
        this.numberOfItems = new Random().nextInt(10) + 1;
        this.numberToGuess = new Random().nextInt(this.numberOfItems) + 1;
        this.numberOfGuesses = 0;
    }
    public Jar(String itemName, int numberOfItems) {
        this.itemName = itemName;
        this.numberOfItems = numberOfItems;
        this.numberToGuess = new Random().nextInt(numberOfItems) + 1;
        this.numberOfGuesses = 0;
    }
    public String getItemName() {
        return itemName;
    }
    public int getNumberOfItems() {
        return numberOfItems;
    }
    public int getNumberToGuess() {
        return numberToGuess;
    }
    public int getNumberOfGuesses() {
        return numberOfGuesses;
    }
    public void incrementNumberOfGuesses() {
        numberOfGuesses++;
    }
}
Player.java
package com.tn.jar;
public interface Player {
    void playGameAsPlayer();
}
Admin.java
package com.tn.jar;
public interface Admin {
    void playGameAsAdmin();
}
Game.java
/*
*
*/
package com.tn.jar;
import java.util.InputMismatchException;
import java.util.Random;
import java.util.Scanner;
/**
* The Class Game.
*/
public class Game implements Admin, Player {
    /** The jar. */
    Jar jar;
    /* (non-Javadoc)
     * @see com.tn.jar.Player#playGameAsPlayer()
     */
    @Override
    public void playGameAsPlayer() {
        Prompter.printTitle("Player");
        jar = new Jar();
        startGame();
    }
    /* (non-Javadoc)
     * @see com.tn.jar.Admin#playGameAsAdmin()
     */
    @Override
    public void playGameAsAdmin() {
        Prompter.printTitle("Administrator Setup");
        String itemName = Prompter.promptForString("Name of items in the jar: ");
        int numberOfItems = Prompter.promptForInt("Maximum of lentils in the jar: ");
        jar = new Jar(itemName, numberOfItems);
        startGame();
    };
    /**
     * Start game.
     */
    private void startGame() {
        printGameExplanation();
        Prompter.areYouReady();
        startGuessingLoop();
        printResult();
    }
    /**
     * Start guess loop.
     * Here we accept input from user,
     * and keeps looping until the answer is correct
     */
    private void startGuessingLoop() {
        do {
            jar.incrementNumberOfGuesses();
        } while(Prompter.promptForInt("\nGuess: ") != jar.getNumberToGuess());
    }
    /**
     * Game explanation.
     */
    private void printGameExplanation() {
        System.out.printf("%nYour goal is to guess how many lentils are in the jar. Your guess should be between 1 and %d%n%n",
                jar.getNumberOfItems());
    }
    /**
     * Prints the result.
     */
    private void printResult() {
        System.out.printf("%nCongratulations - you guessed that there are %d" +
                " lentils in the jar! It took you %d" +
                " guess(es) to get it right.%n", jar.getNumberToGuess(), jar.getNumberOfGuesses());
    }
    /**
     * The Class Prompter.
     */
    static class Prompter {
        /** The scanner. */
        private static Scanner scanner = new Scanner(System.in);
        /**
         * Are you ready.
         */
        public static void areYouReady() {
            do {
                System.out.print("Ready? (press ENTER to start guessing): ");
            } while (scanner.nextLine().length( ) > 0);
        }
        /**
         * Prompt for input.
         *
         * @param question the question
         * @return the string
         */
        public static String promptForString(String question) {
            System.out.print(question);
            String result = scanner.nextLine();
            return result;
        }
        /**
         * Prompt for int.
         *
         * @param question The question you want to ask
         * @return result as an int
         */
        public static int promptForInt(String question) {
            System.out.print(question);
            int result = 0;
            boolean success = false;
            while(!success) {
                try {
                    result = scanner.nextInt();
                    success = true;
                } catch(InputMismatchException e) {
                    System.out.print(question);
                    scanner.nextLine();
                }
            }
            return result;
        }
        /**
         * Prints the title.
         *
         * @param title the title
         */
        public static void printTitle(String title) {
            System.out.printf("%n%s%n=========================%n%n", title.toUpperCase());
        }
    }
    /**
     * The main method.
     *
     * @param args the arguments
     */
    public static void main(String[] args) {
        new Game().playGameAsAdmin();
    }
}
Answer: Overall your code looks okay, nothing much to comment on it as a whole. I'd like to add the following comments though:
The variables in your Jar class should be final where possible, I would change them to the following:
private final String itemName;
private final int numberOfItems;
private final int numberToGuess;
private int numberOfGuesses = 0;
Note that this also initializes numberOfGuesses to zero here instead of in the constructor.
Use a single constructor and call the other constructor to avoid code duplication:
public Jar() {
    this("Default Name", new Random().nextInt(10) + 1);
}
public Jar(String itemName, int numberOfItems) {
    this.itemName = itemName;
    this.numberOfItems = numberOfItems;
    this.numberToGuess = new Random().nextInt(numberOfItems) + 1;
}
Also, normally I would create all random numbers with one Random instance, so that you could set a single seed in your program if you want to observe the same behavior. In this case that would be less easy, but it could still be achieved with a variable like private final Random random = new Random();.
You should use as few static variables as possible, meaning that your static Prompter class really should be a Prompter instance and that the Scanner variable should be local to that instance; ideally you should be able to pass along a scanner, for example by doing Prompter prompter = new Prompter(new Scanner(System.in));.
In the Prompter#areYouReady method you could use while (!scanner.nextLine().isEmpty()) as loop condition.
In the Prompter#promptForInt method you could break; out of the loop, with the following code:
public static int promptForInt(String question) {
    System.out.print(question);
    int result = 0;
    while (true) {
        try {
            result = scanner.nextInt();
            break;
        } catch (InputMismatchException e) {
            System.out.print(question);
            scanner.nextLine();
        }
    }
    return result;
}
You could remove the dependency on System.out in your Prompter class by adding a dependency on a PrintStream, now your Prompter instance creation could look like: Prompter prompter = new Prompter(new Scanner(System.in), System.out);.
While your styling is nowhere bad, it still has room for improvement. You inconsistently sometimes have two lines of white-space between methods; this should be only one. Your while statement lacks breathing space: it should be of the form while (condition) { }, not while(condition) { }, and in one case you have unnecessary spacing in a method call, see the scanner.nextLine().length( ) call. | {
"domain": "codereview.stackexchange",
"id": 20922,
"tags": "java, object-oriented, number-guessing-game"
} |
DNA replication and combination | Question:
"Each gamete is genetically unique because the DNA of the parent cell is shuffled before the cell divides. This helps ensure that the new organisms formed as a result of sexual reproduction are also unique."
Then why do we say that the DNA of the parent influences the characteristics of the child while the DNA of the child is formed as a combination of shuffled up nitrogenous and phosphate bases?
Answer: To explain it briefly:
Let's take a human as an example: you are diploid, so you have 23 pairs of chromosomes (46 in total) plus the sex chromosomes, which I will exclude from this explanation.
Your gamete is haploid and therefore has only one chromosome of each pair. So for every chromosome pair there are 2 possible chromosomes, and in total you have $2^{23}$ possible ways to arrange the chromosomes. The gamete from the sexual mating partner likewise has $2^{23}$ possibilities. So in total a new diploid organism has $2^{23} \times 2^{23} = 2^{46}$ possibilities. Hence: unique.
However, it is always the chromosomes of the parents. So yes, parents influence the children because the genetic information of the children can be found in one of the two parents.
Furthermore, there is recombination, but you already asked another question about that, so I will not expand on it here.
There are relevant articles in Wikipedia on genetic recombination and meiosis that I would recommend. | {
"domain": "biology.stackexchange",
"id": 8137,
"tags": "reproduction"
} |
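The counting argument above, with the exponents restored, can be checked directly (this sketch covers independent assortment only and ignores crossing over, which raises the number further):

```python
pairs = 23                                 # human autosome-style pairs
gametes_per_parent = 2 ** pairs            # one chromosome chosen per pair
combinations_per_child = gametes_per_parent ** 2   # == 2 ** 46

print(gametes_per_parent)       # 8388608
print(combinations_per_child)   # 70368744177664, roughly 7e13
```

So even before recombination, the chance of two siblings receiving the same chromosome combination is about 1 in 70 trillion, yet every chromosome still comes from one of the two parents.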
Does the turtlebot need wifi access to build a map? | Question:
Does the turtlebot actually need a wifi connection to operate? I want to build a map of a large area in my office building. The building has wifi coverage, but it is spotty and sometimes the connection gets dropped.
I would like to drive the turtlebot around with a wireless joystick, and build a map using the gmapping tutorial. My question is, if the turtlebot's netbook loses the wireless connection along the way, will this cause problems with the processes?
Basically, I want to:
bringup the turtlebot
start the gmapping app:
roslaunch turtlebot_navigation gmapping_demo.launch
Check from the workstation that the dashboard is ok (and then close it).
Start the joystick telop and drive around and build the map.
I am worried that if the wireless connection is dropped while I am driving it around to build the map, that ROS_HOSTNAME and ROS_MASTER_URI will no longer resolve on the turtlebot and the map app will fail.
Please let me know if there are any tricks to building a map like this, or if I am going about it incorrectly. I tried a short mapping experiment using keyboard teleoperation from the workstation, and that failed when the connection was dropped. So I am trying to understand the requirements better.
Thanks!
Originally posted by ceverett on ROS Answers with karma: 332 on 2012-02-09
Post score: 0
Answer:
If you run all of the processes on the turtlebot itself, you won't have any problems. It may be helpful to use gnu screen so your nodes don't die when the turtlebot loses its network connection and the SSH sessions disconnect.
Additionally, if the ROS_MASTER_URI is set to localhost, it does not matter if you actually have a network connection.
Originally posted by Dan Lazewatsky with karma: 9115 on 2012-02-09
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 8177,
"tags": "ros, navigation, wireless, turtlebot, gmapping"
} |
What is the "Spandrels" debate about? | Question: In a previous question, as a side issue, I asked for a clarification about the "Spandrels" paper. Not being a biologist, it was the first time I had encountered it.
Subsequently I tried to collect some information, because I read that the paper is debated and deemed "wrong" by at least part of the community.
Despite my efforts I have not fully grasped what the issue is:
is the problem the incorrect reference to the architectural element? (i.e., is the paper wrong because it uses a not fully apt comparison?)
are some (or all) of the claims in it wrong? If yes, are there any suggested readings to understand why?
I ask specifically because the first reference I encountered points to Dawkins, and I would like to understand whether this paper is a mere pawn in the big "gradualists versus punctuationists" debate or whether it has merits of its own.
Answer: Short version
The paper has nothing to do with punctuated equilibrium vs. gradualism (see also here), which was published a few years earlier. This debate is much different (but equally important, I think).
The authors of the Spandrels paper criticized evolutionary biologists for taking a very narrow view of the evolutionary process. They claimed (wrongly) that evolutionary biologists believed every single trait of an organism was an adaptation that resulted from natural selection: the trait would not be present unless it was an adaptation. The authors claimed (rightly) that traits cannot be considered independently; instead, traits must be considered in light of the evolutionary history and developmental biology of the organism. To your specific questions:
The architectural metaphor was wrong. Does it make their entire argument wrong? No. They did not understand architecture. That does not mean they did not understand evolutionary biology.
Are some of their claims wrong? I think yes and no, but the full answer requires more nuance than can really be given here, so read the long answer below if interested. I don't know of a single reference that supports/rebuts the Spandrels paper, but consider Why Evolution is True by Jerry Coyne, The Greatest Show on Earth by Richard Dawkins, and Endless Forms Most Beautiful by Sean Carroll. None of these address directly the charges lodged by the Spandrels paper, but they do show how evolutionary biology is integrative of many ideas instead of isolating each trait without considering the biology of the entire organism. (I have no commercial interest in any of these books. I simply support the ideas presented.)
Long version from my perspective.
In 1979, Stephen J. Gould and Richard C. Lewontin (hereafter, G&L) published a paper, "The Spandrels of San Marco and the Panglossian Paradigm: A Critique of the Adaptationist Programme", in the Proceedings of the Royal Society of London, Series B. The title of the paper mixes two metaphors. The first is the character Dr. Pangloss from Voltaire's "Candide." Dr. Pangloss is a bit of an optimist, who says,
It is demonstrable ... that things cannot be otherwise than as they are; for as all things have been created for some end, they must necessarily be created for the best end. Observe, for instance, the nose is formed for spectacles, therefore we wear spectacles. The legs are visibly designed for stockings, accordingly we wear stockings. ... [A]nd they, who assert that everything is right, do not express themselves correctly; they should say that everything is best." - Dr. Pangloss
Dr. Pangloss is saying that everything has a purpose, and each thing is designed specifically for that purpose. G&L extend this metaphor to evolutionary biologists. G&L claim that evolutionary biologists (at least to the time of their publication) tended to make claims similar to Dr. Pangloss: every trait that could be found on an organism results from natural selection and adaptation. G&L write,
An adaptationist programme has dominated evolutionary thought in England and the United States during the past forty years. It is based on faith in the power of natural selection as an optimizing agent. It proceeds by breaking an organism into unitary "traits" and proposing an adaptive story for each considered separately. - G&L
Hence, the Panglossian paradigm. Every trait has an adaptive purpose and is necessarily the "best" that it can be. G&L then argue that evolutionary biologists of the time failed to consider the entire organism when assigning adaptive value to each trait separately.
We criticize this approach and attempt to reassert a competing notion ... that organisms must be analyzed as integrated wholes, with bauplane so constrained by phyletic heritage, pathways of development, and general architecture that the constraints themselves become more interesting and more important in delimiting pathways of change than the selective force that may mediate change when it occurs. - G&L
G&L are saying that the many organismal traits are functions of evolutionary history, developmental pathways, and other structural constraints. G&L argue that most evolutionary biologists of the time did not consider the entire organism when ascribing adaptive value to any one trait. In G&L's Panglossian metaphoric terms, evolutionary biologists had decided that the adaptive value of the nose was to support glasses, but they had failed to consider that the nose may have evolved for other reasons (e.g., smell) and that the supportive value of the nose was an evolutionary by-product instead of an evolutionary cause.
This brings us to the second metaphor, the architectural spandrels. Spandrels are triangular spaces that form when two arches meet, or when an arch meets a rectangle. According to G&L, spandrels are a necessary by-product of the architectural joining of the two shapes. The spandrel itself serves no purpose. In evolutionary terms, spandrels have no adaptive significance that resulted from natural selection. Instead, they are an evolutionary by-product that resulted from selection on other traits (arches and rectangles).
While the architectural metaphor apparently fails, their primary argument does not change: traits cannot be considered individually. The traits of an organism must be evaluated as one part of the entire organism, together with its developmental biology and its evolutionary history.
Here are two examples to represent their argument, written for a general audience. The first example is based on the human hand. Our hand has five fingers. Each finger except the thumb has three bones (called phalanges), which you can see easily when you curl your finger. The thumb has only two phalanges. What is the adaptive value of having four fingers with three bones each, and a thumb with two bones? You might say that such a structure imparts excellent manual dexterity, from grasping to manipulating objects. But, would we not have that same dexterity if we had four fingers, or six? Why does the thumb have only two bones and not three? How could that be adaptive? These specific traits may not be the direct result of natural selection.
The structure of our hand is explained by our evolutionary history. Our primate ancestors had five fingers. So did our earliest mammalian ancestors, as did our earliest reptilian ancestors. However, some of the earliest amphibian ancestors, such as Acanthostega, had more than five digits (toes, fingers). Somewhere in early amphibian evolutionary history, our ancestral lineage converged on five digits, and most descendant vertebrates retain that five digit pattern. Even those vertebrates today without five digits usually display five digits during development, such as the ostrich (scroll down to image of blue-stained bones). If we assigned an adaptive purpose to the human hand independently of its evolutionary history, we might very well be wrong. Science doesn't like wrong.
Think about the five toes of your foot. Each toe has three bones except for the big toe, which has two bones. Manual dexterity? No. Evolutionary history? Yes.
The second example is the groove in your upper lip below your nose. This groove is called the philtrum. Can you think of an adaptive function for the philtrum? You might think that it helps to direct odors towards our nostrils, or hypothesize some other adaptive purpose. However, it appears that the philtrum is a by-product of the development of our embryonic face (YouTube video). The trait of our philtrum may not have adaptive significance. It is a by-product our face formation when the two sides meet in the middle.
G&L's publication produced quite an uproar. It's easy to find criticisms of the paper. Some have criticized the architectural metaphor. A bad metaphor does not necessarily make a bad scientific argument. Others have criticized the scientific rigor of their arguments. This blog entry, associated with the respected ecology journal Oikos, dislikes the paper and identifies several problems. The Oikos blog also links to several other sites that either support or dislike the G&L article. Most evolutionary biologists disliked the claim that they did not see the "whole organism."
Rightfully so, I think. I do not know of one evolutionary biologist (myself included) who has ever considered a particular trait without considering the broader evolutionary history of an organism. Indeed, considerable effort is spent by evolutionary biologists on determining how the evolutionary history of an organism influences its traits. The same applies for evolutionary developmental biology and other biological processes. For example, Martin et al. (1993) related evolutionary rate to body size, metabolic rate, and generation time of various animals, a very integrative (not isolating) approach.
To be fair, I think the G&L paper needs to be considered in the context of its time. Remember that it was published in 1979. Reread the first G&L quote above: they say, "during the past forty years." Forty years before the paper was published falls within the "modern evolutionary synthesis." Without going into details (ask another question on Biology.SE!), the modern evolutionary synthesis provided the genetic mechanism (and so much more) that explained how Darwin's concept of natural selection could actually work. Once the relationship between Mendelian genetic inheritance and natural selection was proposed, many evolutionary biologists tested this connection. It is not surprising to me that G&L perceived this strong "bias" towards a selectionist (Panglossian) paradigm, because many biologists of the time were trying to understand how Mendelian genetics contributed to natural selection.
At the time of the modern synthesis, the structure of DNA was not yet known. The structure was finally realized by Watson and Crick (and Franklin!) in 1953, only 26 years before G&L's paper. Today we think nothing about the DNA structure but molecular genetics has advanced so much since G&L's publication that we have a much greater understanding of evolution and natural selection at the genetic level. Today, you can't study the evolution of a trait without considering its underlying genetic architecture.
In my opinion, one of the most, and perhaps the most, important developments in the field of evolutionary biology is the field of evolutionary developmental biology (evo-devo). This field did not really mature as a discipline until about 2000, when the Proceedings of the National Academy of Sciences in the US devoted an entire issue to the topic. Developmental biology was a discipline that G&L had argued was important to consider. I'd bet that even they had no idea how important the evo-devo discipline would become to understanding evolutionary change. Still, the relationship between evolution and development was recognized as long ago as the mid-1800s by Ernst Haeckel. Evolutionary biologists were not necessarily considering traits in isolation from evolutionary history or developmental biology.
In the end, G&L used broad brush strokes to paint evolutionary biologists into a small canvas, claiming that biologists had a narrow evolutionary viewpoint. The evolutionary biologists responded (then and now) that their perspective was wide, even when focused on a particular trait.
The G&L paper continues to prompt considerable debate, which I think is the value of their paper. Every young graduate student should be required to read the paper. Even jaded scientists need a refreshing jolt to an established way of thinking. Our thought processes will always benefit when we are reminded that evolutionary biology is a broad discipline that must embrace many other disciplines to arrive at a full understanding of the history of life on Earth. | {
"domain": "biology.stackexchange",
"id": 2791,
"tags": "evolution"
} |
Modeling rubber foams | Question: Are there any good papers/texts on the subject of modeling the dynamics of rubber foams? So far I haven't found any good papers/texts that cover this particular subject and I've done some searching.
I'm interested in how the internal geometry (i.e. distribution of empty space) affects the behavior of the material.
Answer: In continuum mechanics terms:
There is Coussy, Olivier: Mechanics of porous continua, Wiley 1995. I do not have access to it at the moment, but I remember it having a chapter about viscoelastic porous continua, which might be worth checking out. | {
"domain": "physics.stackexchange",
"id": 31924,
"tags": "resource-recommendations, elasticity"
} |
pcl::PointCloud vs. sensor_msgs::PointCloud | Question:
Obviously they are not the same. Which one should a developer be using?
Originally posted by ubuntuslave on ROS Answers with karma: 347 on 2011-07-20
Post score: 1
Answer:
pcl::PointCloud for "working".
sensor_msgs::PointCloud is a message for sending. You can create a pcl::PointCloud publisher that will automatically send out the right data.
Originally posted by dornhege with karma: 31395 on 2011-07-20
This answer was ACCEPTED on the original site
Post score: 4
Original comments
Comment by tfoote on 2011-07-26:
Documentation of how to send pcl::PointClouds natively can be found at http://www.ros.org/wiki/pcl_ros | {
"domain": "robotics.stackexchange",
"id": 6214,
"tags": "pointcloud"
} |
Path of sound in ocean where sound speed varies linearly with depth | Question: In an ocean, described with $x-y$ Cartesian coordinates where $y=0$ on the sea bed, the sound speed varies linearly with $y$:
$$v(y)=c+by$$
A textbook says that, if a sound wave is emitted at the sea bed, at an angle $\theta$ upward, then the sound wave will travel a circular path, of radius $$R=\frac{c}{b\sin\theta}$$
However, my calculations do not agree with this statement.
Assume the path of sound is described by $y=f(x)$.
By Snell's law, $$\frac{\sin \theta_{\text{incident}}}{v(y)}=\text{constant}=k$$
Since $\tan\theta_{\text{incident}}=f'$,
$$kv=\frac{f'}{\sqrt{1+f'^2}}\implies f'^2=\frac{(kv)^2}{1-(kv)^2}\implies 1+f'^2=\frac1{1-(kv)^2}$$
Differentiating both sides w.r.t. $x$,
$$2f'f''= \frac{2k^2v}{(1-(kv)^2)^2}\frac{dv}{dx}= \frac{2k^2v}{(1-(kv)^2)^2}\frac{dv}{dy}\frac{dy}{dx}= \frac{2k^2vb}{(1-(kv)^2)^2}f'$$
Thus, $$f''=\frac{k^2bv}{(1-(kv)^2)^2}$$
Calculating the radius of curvature:
$$R=\frac{(1+f'^2)^{3/2}}{f''}=\frac{\sqrt{1-(kv)^2}}{k^2bv}\ne \text{a constant}$$
Therefore the path cannot be a circle.
Are my calculations correct?
If yes, why is there a discrepancy between the textbook’s statement and my calculations?
Answer: I believe that your statement of Snell's Law is incorrect, because you only take into account an incident angle. Furthermore, your $\theta$ is the angle that the tangent makes with the x-axis, which is implied when you say $\tan(\theta)=f'$; but the refractive index varies with $y$, which means that, for the purposes of this law, the incident angle would be $\phi=\pi/2-\theta$. I think it would be more accurate to write something like this
$$ n(y)\sin(\phi) = n(y+\Delta y)\sin(\phi + \Delta \phi) $$
Of course, this is only true as $\Delta y \rightarrow 0$. I offer an alternative way to solve this problem. Snell's law is a consequence of Fermat's Principle:
the optical length of the path followed by light between two fixed points, A and B, is an extremum. The optical length is defined as the physical length multiplied by the refractive index of the material.
Wikipedia does a good job of explaining how to apply this principle, so you should be able to easily adapt it for the case of sound waves
$$L(x, y, y')= \frac{1}{v(y)}\sqrt{1+y'^2}$$
Then you can apply the Euler-Lagrange equation
$$ \frac{\partial L}{\partial y} - \frac{d}{dx}\frac{\partial L}{\partial {y'}} = 0 $$
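One way to carry this calculation through explicitly (a sketch; here $\phi=\pi/2-\theta$ is the incident angle, so $\sin\phi=1/\sqrt{1+y'^2}$): since $L$ has no explicit $x$-dependence, the Beltrami identity gives
$$L-y'\frac{\partial L}{\partial y'}=\frac{1}{v(y)\sqrt{1+y'^2}}=\frac{\sin\phi}{v}=\text{const}\equiv k$$
Differentiating $v\sqrt{1+y'^2}=1/k$ with respect to $x$, and using $dv/dy=b$,
$$by'\sqrt{1+y'^2}+\frac{vy'y''}{\sqrt{1+y'^2}}=0\implies y''=-\frac{b(1+y'^2)}{v}$$
so the curvature of the path is
$$\kappa=\frac{|y''|}{(1+y'^2)^{3/2}}=\frac{b}{v\sqrt{1+y'^2}}=kb=\text{const}$$
and the ray really is a circle, of radius $1/(kb)=v/(b\sin\phi)$.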
If you go through this calculation, you'll identify the formula for the curvature, whose inverse results in
$$ R = \frac{v(y)}{b\sin(\theta)}$$
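As a quick numerical sanity check (a sketch; the values c = 1500 m/s, b = 0.5 s⁻¹ and the 30° launch angle are illustrative assumptions, not part of the original problem), one can integrate the ray step by step using Snell's invariant $k=\sin\phi/v$ and confirm that the traced points lie on a single circle of radius $1/(kb)$:

```python
import numpy as np

c, b = 1500.0, 0.5            # assumed sound speed at the bed (m/s) and gradient (1/s)
phi0 = np.radians(30.0)       # assumed launch angle, measured from the vertical
k = np.sin(phi0) / c          # Snell invariant: sin(phi)/v is constant along the ray
R = 1.0 / (k * b)             # predicted radius of the circular path

# integrate by arc length s: dx/ds = sin(phi), dy/ds = cos(phi), dphi/ds = k*b
ds, x, y, phi = 1.0, 0.0, 0.0, phi0
pts = []
for _ in range(2000):
    pts.append((x, y))
    x += np.sin(phi) * ds
    y += np.cos(phi) * ds
    phi += k * b * ds

# centre of the predicted circle sits at distance R, normal to the initial direction
pts = np.array(pts)
center = np.array([R * np.cos(phi0), -R * np.sin(phi0)])
radii = np.linalg.norm(pts - center, axis=1)
print(radii.std() / R)        # tiny -> the path is, numerically, a circle
```

The spread of the distances from the predicted centre stays at the level of the integration error, consistent with a circular path.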
Here, I already accounted for the fact that $\phi$ and $\theta$ differ by $\pi/2$. Note that when $y=0$ and $\theta=\theta_0$, this equation results in the equation your textbook gave you. The radius of curvature need not be constant unless it is explicitly stated that it is a uniform circular motion. | {
"domain": "physics.stackexchange",
"id": 60493,
"tags": "homework-and-exercises, waves, refraction"
} |
Python editing lists and converting to a dictionary | Question: After validation my errors list returns
[(False, {u'first_name': u'First name is too short'}), (False, {u'last_name': u'Last name is too short'}), (False, {u'confirm_password': u'Password is too short'}), (False, {u'email': u'Please enter a valid email'})]
I use a for loop to create a new list and add only index 1 of every element; I am trying to get rid of every False. I am looking for a way to improve my code.
Here is my code
errors = []
errors.append(self.validate_length(first_name, 'first_name', 2, "First name is too short"))
errors.append(self.validate_length(last_name, 'last_name', 2, "Last name is too short"))
errors.append(self.password_match(password, confirm_password))
errors.append(self.validate_email(email_address))
error = []
print errors
for elements in range(0, len(errors)):
try:
errors[elements][1]
error.append(errors[elements][1])
except:
pass
Answer: I have a few complaints:
You're using a for loop to iterate over the indexes of a list instead of directly iterating over the list, but you're not actually using the indexes for anything. I'd suggest
for element in errors:
instead, then using element instead of errors[elements]
The line errors[elements][1] doesn't actually appear to do anything, except throw an IndexError if the element is of length 1 or shorter. If this is the intended behaviour, it could be made clearer using a check such as if len(errors[elements]) < 2: ..., perhaps avoiding exceptions entirely.
An except which catches every type of error there is is rarely a good idea - it's unclear and risks swallowing exceptions you actually want to be warned about in the future. If I understand your code correctly, except IndexError would do the same thing, but be slightly clearer.
All in all, however, it appears you're using exceptions for a case where they aren't needed, which tends to make code harder to grok overall. I'd probably rewrite the loop to something along the lines of
for element in errors:
if len(element) >= 2:
error.append(element[1])
though a list comprehension such as error = [element[1] for element in errors if len(element) >= 2] could work too. | {
"domain": "codereview.stackexchange",
"id": 20589,
"tags": "python"
} |
One scaler for all features or one scaler per feature? | Question: I have a time series with more than 30 features. For preprocessing with scikit learn do you usually use one scaler per feature or one scaler for all features that should be standardized/normalized?
Answer: Sklearn's scalers work per feature/column (and that's what you want).
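A minimal numpy sketch of what this per-column behaviour means (the same arithmetic StandardScaler applies with default settings; the data here is made up):

```python
import numpy as np

X = np.array([[1.0, 100.0],
              [2.0, 200.0],
              [3.0, 300.0]])   # two features on very different scales

# standardize each column (feature) independently
X_scaled = (X - X.mean(axis=0)) / X.std(axis=0)

print(X_scaled.mean(axis=0))   # ~[0, 0]: each feature centred on its own mean
print(X_scaled.std(axis=0))    # [1, 1]: each feature scaled by its own std
```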
Imagine if it did not. Then you would shift your mean and std in a weird way, determined by the distribution of the whole set. | {
"domain": "datascience.stackexchange",
"id": 6601,
"tags": "scikit-learn, data-cleaning, preprocessing, feature-scaling"
} |
A box stops moving suddenly | Question:
Hi everyone,
I’m trying to move an object on the ground without any friction in GAZEBO.
I created the default box model on the ground, and set both of mu and mu2 to zero. I left the default values in the other model parameters.
I set the gravity to (-0.1, 0, -9.8), and started the simulation.
Although there was no friction force between the box and ground and no deceleration of the box, the box stopped moving suddenly in about 1 second in simulation time.
I implemented the ft_sensor on the model, it showed zero force and torque after the stop.
It seemed that any calculation about the force was not conducted.
What happened? Can anyone help me fix this problem?
Originally posted by tatsS on Gazebo Answers with karma: 3 on 2017-04-26
Post score: 0
Answer:
The final friction is a result of the friction parameters for both bodies in contact. In your case, you should also change the friction for the ground plane.
Also, note that bodies moving very slowly are "automatically disabled" on ODE. You can turn off that behavior by setting the <allow_auto_disable> flag to false. Or you can use a different physics engine, such as Bullet.
Try this world for example:
<?xml version="1.0" ?>
<sdf version="1.5">
<world name="default">
<physics type="ode">
<gravity>-0.1 0 -9.8</gravity>
</physics>
<!-- A global light source -->
<include>
<uri>model://sun</uri>
</include>
<!-- A ground plane -->
<model name="ground_plane">
<static>true</static>
<link name="link">
<collision name="collision">
<geometry>
<plane>
<normal>0 0 1</normal>
<size>100 100</size>
</plane>
</geometry>
<surface>
<friction>
<ode>
<mu>0</mu>
<mu2>0</mu2>
</ode>
</friction>
</surface>
</collision>
<visual name="visual">
<geometry>
<plane>
<normal>0 0 1</normal>
<size>100 100</size>
</plane>
</geometry>
</visual>
</link>
</model>
<!-- box -->
<model name="box">
<allow_auto_disable>false</allow_auto_disable>
<pose>0 0 0.5 0 0 0</pose>
<link name="link">
<collision name="collision">
<geometry>
<box>
<size>1 1 1</size>
</box>
</geometry>
<surface>
<friction>
<ode>
<mu>0</mu>
<mu2>0</mu2>
</ode>
</friction>
</surface>
</collision>
<visual name="visual">
<geometry>
<box>
<size>1 1 1</size>
</box>
</geometry>
</visual>
</link>
</model>
</world>
</sdf>
Originally posted by chapulina with karma: 7504 on 2017-04-26
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by tatsS on 2017-04-26:
@chapulina Thank you for your reply! I tried your code, and the box did not stop. However, it still stopped when I changed gravity to (-0.1, 0, -9.8).
What is the difference? Gazebo can not solve the slow movement accurately?
Comment by winston on 2017-04-27:
Why the gravity is set to (-1,0,-9.8) instead of (0,0,-9.8)?
Comment by tatsS on 2017-04-27:
@winston It is because I apply a force in the x-direction to the box.
Adding initial small velocity in the x-direction to the box instead of gravity=(-1, 0, -9.8) may also give the same results.
Comment by chapulina on 2017-04-27:
Ah ok, I got what's happening. The box is moving very slowly, so ODE auto-disables it. See the updated answer.
Comment by tatsS on 2017-04-27:
@chapulina Your updated answer completely solves my problem! I didn't know the ODE auto-disable feature. Thank you so much! | {
"domain": "robotics.stackexchange",
"id": 4092,
"tags": "gazebo"
} |
Install, configure, and maintain content for Ubuntu webserver | Question: I made a script for my Ubuntu server(s) running on Google Cloud Platform. The idea behind it is I can create a new instance and paste this script in Google Cloud so it runs it on the new instance and every time it reboots. The script works, so no problem there.
The idea is that it installs all the required software to run my website the first time it boots up and then clones my git repository to /var/www/html/. Every other time it boots it only updates the software and then pulls the changes from the git repository.
My questions:
From what I understand, it's not very wise to put your git password in the startup script but I know no other way to automate this fully. Can this be done in another way while keeping it fully automated?
Cloning the git repository gave an error about the folder not being empty so I just deleted it upfront. It works now but I am not sure if that's the way to go ...
Here is my startup script:
file="/var/www/check.txt"
if [ -e $file ]
then
sudo apt-get update
sudo git -C /var/www/html pull https://username:password@bitbucket.org/username/repository.git
else
sudo apt-get update
sudo apt-get install apache2 libapache2-mod-php5 php5-mcrypt php5-mysql git -y
sudo cat <<EOF > /etc/apache2/mods-enabled/dir.conf
<IfModule mod_dir.c>
DirectoryIndex index.php index.cgi index.pl index.html index.xhtml index.htm
</IfModule>
# vim: syntax=apache ts=4 sw=4 sts=4 sr noet
EOF
sudo rm -rf /var/www/html
sudo git clone https://username:password@bitbucket.org/username/repository.git /var/www/html/ /var/www/html/
sudo cat <<EOF > /var/www/check.txt
aanwezig!
EOF
fi
Answer: Readability
The script is hard to read because code blocks are not indented,
and there are no blank lines to separate cohesive units.
Consider this alternative writing style of the same script:
file="/var/www/check.txt"

if [ -e $file ]
then
    sudo apt-get update
    sudo git -C /var/www/html pull https://username:password@bitbucket.org/username/repository.git
else
    sudo apt-get update
    sudo apt-get install apache2 libapache2-mod-php5 php5-mcrypt php5-mysql git -y

    sudo cat <<EOF > /etc/apache2/mods-enabled/dir.conf
<IfModule mod_dir.c>
DirectoryIndex index.php index.cgi index.pl index.html index.xhtml index.htm
</IfModule>
# vim: syntax=apache ts=4 sw=4 sts=4 sr noet
EOF

    sudo rm -rf /var/www/html
    sudo git clone https://username:password@bitbucket.org/username/repository.git /var/www/html/ /var/www/html/

    sudo cat <<EOF > /var/www/check.txt
aanwezig!
EOF
fi
I know, this is still not so great, due to the here-documents interrupting the indented blocks.
We'll improve that in another step.
sudo and here-documents
The scope of sudo is strictly the specified command.
For example in sudo date > /tmp/out,
the date command will be executed as root,
but the redirection is a different matter,
it will be executed by the shell, as the current user, not root.
Try this yourself, the owner of /tmp/out will be the current user, not root.
This means that the sudo cat <<EOF ... commands don't work as you may think they do. If you want to create a file as root from a here-document,
you can write it like this:
cat <<EOF | sudo tee /etc/apache2/mods-enabled/dir.conf
<IfModule mod_dir.c>
DirectoryIndex index.php index.cgi index.pl index.html index.xhtml index.htm
</IfModule>
# vim: syntax=apache ts=4 sw=4 sts=4 sr noet
EOF
The fact that you haven't noticed the problem suggests that maybe the script is already running as root,
so you don't need any of the sudo.
Here-documents and formatting
Since here-documents break the indentation,
I recommend extracting them to helper functions, for example:
print_mod_dir() {
    cat <<EOF
<IfModule mod_dir.c>
DirectoryIndex index.php index.cgi index.pl index.html index.xhtml index.htm
</IfModule>
# vim: syntax=apache ts=4 sw=4 sts=4 sr noet
EOF
}
print_mod_dir | sudo tee /etc/apache2/mods-enabled/dir.conf
Although the here-document disrupts the indentation of the function,
that's ok, because the entire function body is about the here-document,
so it's less intrusive than it would be elsewhere.
The rest of the code can be nicely readable, for example:
file="/var/www/check.txt"

sudo apt-get update

if [ -e $file ]
then
    sudo git -C $docroot pull $repo
else
    sudo apt-get update
    sudo apt-get install apache2 libapache2-mod-php5 php5-mcrypt php5-mysql git -y

    print_mod_dir | sudo tee /etc/apache2/mods-enabled/dir.conf

    sudo rm -rf $docroot
    sudo git clone $repo $docroot $docroot
    sudo touch $file
fi
I replaced some repeated values with variables as @RubberDuck suggested in his review.
I also replaced the second here-document creating a file with a single line,
because as far as this script is concerned,
it's enough if the file exists,
and touch is the perfect tool for that.
Naming
What is file?
What is /var/www/check.txt?
Neither the variable name nor the file name tells the reader anything useful.
It would be better to use names that make your intention obvious to readers. | {
"domain": "codereview.stackexchange",
"id": 26269,
"tags": "bash, linux, git, installer"
} |
How to negate this one? | Question: How can I negate the following sentence:
For all words x from L with |x| >= n, there exists a decomposition x = uvw with |uv| <= n and |v| >= 1, such that for all i >= 0 it holds that u(v)^i w is in L.
Answer: The negation is: There exists a word $x \in L$ satisfying $|x| \geq n$ such that for all decompositions $x = uvw$ with $|uv| \leq n$ and $|v| \geq 1$ there exists $i \geq 0$ such that $uv^iw \notin L$. | {
"domain": "cs.stackexchange",
"id": 9017,
"tags": "logic, notation"
} |
Reconstruction of a Ricker Wavelet using inverse discrete fourier transform - signal cut in a half? | Question: I am new here and new to DSP, so maybe my question is really basic.
I have the formula for the Ricker wavelet (Mexican Hat) in frequency-domain and I wish to do an inverse Fourier transform to recover my original signal in time-domain. I am using python numpy.fft module for this.
For some reason, instead of a Ricker wavelet (https://wiki.seg.org/wiki/Dictionary:Ricker_wavelet), I am obtaining a divided version of the signal, like it is aliased or cut in half or lagged (yes, I'm confused).
Do I have to change the order of my time vector accordingly to the frequency vector ? What is the reason for this ? Or is it something else that I am missing ?
My goal is to retrieve a Ricker wavelet centered in zero (or even lagged), but I don't know why my results are like these and how to justify flipping or slicing my time vector.
Please find below my code which also generate the plots. Please let me know if you need any further information.
Thanks in advance,
Luis
import matplotlib.pyplot as plt
import numpy as np
# Dummy signal length
nsamples = 338
dt = 1.6199375667655787e-10
freq = np.fft.fftfreq(nsamples, d=dt)  # 'trace' was undefined; nsamples is the record length
# peak angular frequency
omega_p = 2*np.pi*250e6
#Using only the positive frequencies for the Ricker Wavelet calculation
omega = 2*np.pi*freq[0:169]
# Ricker Wavelet in Frequency Domain
S_desired = (2/np.sqrt(np.pi))*((omega**2)/(omega_p**3))*np.exp(-(omega**2)/(omega_p**2))
# Appending the Ricker Wavelet values
S_flip = np.flip(S_desired).copy()
S = np.concatenate((S_desired,S_flip))
S_desired_time = np.fft.ifft(S)
time = np.arange(0, nsamples*dt, dt)
plt.plot(freq,np.abs(S),'r',label='Power spectrum Ricker wavelet')
plt.xlabel('Frequency [Hz]')
plt.figure()
plt.plot(time,S_desired_time,label='IFFT of the Ricker Wavelet')
plt.xlabel('time [s]')
Answer: For the centring on t=0: the frequency domain doesn't have any absolute time information, so it's up to you to define where your t=0 is (after all, when you do an fft of a time signal, you just supply the samples, and the dt is only used to set the frequency scale).
As for the shape: you have to remember that the discrete FT of a signal with a finite number of samples is the same as that of the signal repeated periodically:
And so, between the signal you get and the one you want there is a shift of half your sample length.
The easiest solution to visualise your signal is just to shift the first and last part of your signal, by using for example:
S_desired_time = np.fft.ifftshift(np.fft.ifft(S))
plt.plot(time-time[-1]/2,S_desired_time.real) # to centre on 0
(note that the output of an ifft is complex, so if you want just the real part you should plot signal.real -- plt.plot() should do it by default, but it doesn't hurt to specify it)
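A small self-contained illustration of this wrap-around and re-centring (a toy Gaussian spectrum stands in for the Ricker formula here, so the numbers are made up):

```python
import numpy as np

N = 64
f = np.fft.fftfreq(N)             # standard FFT ordering: 0, positive..., negative...
S = np.exp(-(f / 0.1) ** 2)       # a real, even toy spectrum

s = np.fft.ifft(S).real           # the pulse peaks at n = 0, tails wrap to n = N-1
centred = np.fft.ifftshift(s)     # swap the halves: one pulse in the middle

print(np.argmax(s))               # 0  -> looks "cut in half" when plotted
print(np.argmax(centred))         # 32 -> a single pulse centred in the window
```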
Another way is to remember that phase shifting a time signal is the same a multiplying the Fourier transform with an exponential:
$$\mathscr{F}\big\{x(t-t_0)\big\}=X(f)e^{-j2\pi f t_0}$$
You can add this lines to you code:
timeshift=1/2
f = np.linspace(0, nsamples-1, nsamples)
phaseshift = (np.exp((-2*np.pi*1j*f*timeshift+2*np.pi*1j*timeshift*nsamples)))
S=S*phaseshift
S_desired_time = (np.fft.ifft(S))
As abs(phaseshift)=1, the magnitude of your frequency graph is the same (plt.plot(freq,np.abs(S))), but the real part is different:
plt.plot(freq,S.real):
blue: no delay.
green: with delay.
and in the time domain (I didn't center on 0):
more information:
How to do FFT fractional time delay (SOLVED)
https://antongrin.github.io/Seismika/Chapters/SignalProcessing/Ricker_wavelet.html | {
"domain": "dsp.stackexchange",
"id": 11696,
"tags": "discrete-signals, fourier-transform, python, wavelet, aliasing"
} |
Magnetized Sphere | Question: I'm trying to solve the magnetostatic problem of a magnetized sphere using the expansion of $\frac{1}{|\textbf{r}-\textbf{r}'|}$ in terms of Legrendre polynomials. For simplicity I assume $\textbf{M}\left(\textbf{r}\right)=M_{S}\hat{z}$
inside the sphere and $0$ outside, or in spherical coordinates
\begin{equation}
\left(\begin{array}{c}
M_{r}\\
M_{\theta}\\
M_{\phi}
\end{array}\right)=\left(\begin{array}{ccc}
\sin\theta\cos\phi & \sin\theta\sin\phi & \cos\theta\\
\cos\theta\cos\phi & \cos\theta\sin\phi & -\sin\theta\\
-\sin\phi & \cos\phi & 0
\end{array}\right)\left(\begin{array}{c}
M_{x}\\
M_{y}\\
M_{z}
\end{array}\right)\rightarrow\begin{array}{c}
M_{r}=M_{S}\cos\theta\\
M_{\theta}=-M_{S}\sin\theta
\end{array}
\end{equation}
The quantity $\nabla_{\textbf{r}}\cdot\textbf{M}\left(\textbf{r}\right)$
is going to be only nonzero across the surface of the magnetic material. More specifically,
we have
\begin{align}
\nabla_{\textbf{r}}\cdot\textbf{M}\left(\textbf{r}\right) & =\hat{r}\cdot\hat{r}\frac{\partial M_{r}\left(\textbf{r}\right)}{\partial r}\nonumber \\
& =-M_{S}\cos\theta\delta\left(r-R\right)
\end{align}
This yields the magnetic field expression
\begin{align}
\textbf{H}\left(\textbf{r}\right) & =\nabla_{\textbf{r}}\int_{V_{\infty}}d\textbf{r}'\frac{1}{4\pi\left|\textbf{r}-\textbf{r}'\right|}\left[-M_{S}\cos\theta'\delta\left(r'-R\right)\right]
\end{align}
Now the idea is to use
\begin{align}
\frac{1}{\left|\textbf{r}-\textbf{r}'\right|} & =\sum_{l=0}^{\infty}\frac{r_{<}^{l}}{r_{>}^{l+1}}P_{l}\left(\cos\theta\right)\\
\nonumber \\
r_{<} & =\begin{cases}
r & r<r'\\
r' & r\geq r'
\end{cases}\qquad r_{>}=\begin{cases}
r' & r<r'\\
r & r\geq r'
\end{cases}
\end{align}
where $P_{l}\left(\cos\theta\right)$ is the Legendre polynomials
of order $l=0,1,2,3$, and $\theta$ is the angle between $\textbf{r}$
and $\textbf{r}'$. We can rewrite that as
\begin{align}
\frac{1}{\left|\textbf{r}-\textbf{r}'\right|} & =\sum_{l=0}^{\infty}\frac{r^{l}}{\left(r'\right)^{l+1}}P_{l}\left(\cos\theta\right)\qquad r<r'\\
\nonumber \\
\frac{1}{\left|\textbf{r}-\textbf{r}'\right|} & =\sum_{l=0}^{\infty}\frac{\left(r'\right)^{l}}{r^{l+1}}P_{l}\left(\cos\theta\right)\qquad r>r'\\
\nonumber
\end{align}
To solve the integral, we assume $\textbf{r}\parallel\hat{z}$, so
that we have $\theta=\theta'$
\begin{align}
\textbf{H}_{dem}\left(z\right) & =-\frac{M_{S}}{4\pi}\nabla_{\textbf{r}}\sum_{l=0}^{\infty}\int_{0}^{\infty}r'^{2}dr'\int_{0}^{\pi}\sin\theta'd\theta'\int_{0}^{2\pi}d\phi'\frac{r_{<}^{l}}{r_{>}^{l+1}}P_{l}\left(\cos\theta'\right)\cos\theta'\delta\left(r'-R\right)\nonumber \\
& =-\frac{M_{S}}{2}\nabla_{\textbf{r}}\sum_{l=0}^{\infty}\int_{0}^{\infty}r'^{2}dr'\frac{r_{<}^{l}}{r_{>}^{l+1}}\delta\left(r'-R\right)\overset{=\frac{2}{2l+1}\delta_{l1}}{\overbrace{\int_{0}^{\pi}d\theta'\sin\theta'\cos\theta'P_{l}\left(\cos\theta'\right)}}\nonumber \\
& =-\frac{M_{S}}{3}\nabla_{\textbf{r}}\int_{0}^{\infty}r'^{2}dr'\frac{r_{<}}{r_{>}^{2}}\delta\left(r'-R\right)
\end{align}
For $r<R$, we obtain
\begin{align}
\textbf{H}_{dem}\left(z\right) & =-\frac{M_{S}}{3}\nabla_{\textbf{r}}R^{2}\frac{z}{R^{2}}\nonumber \\
\textbf{H}_{dem}\left(\textbf{r}\right) & =-\frac{M_{S}}{3}\nabla_{\textbf{r}}r\cos\theta\nonumber \\
& =-\frac{M_{S}}{3}\left[\hat{r}\cos\theta-\hat{\theta}\sin\theta\right]\nonumber \\
& =-\frac{M_{S}}{3}\hat{z}
\end{align}
which agrees with the expected result. On the other hand, for $r>R$, we obtain
\begin{align*}
\textbf{H}_{dem}\left(z\right) & =-\frac{M_{S}}{3}\nabla_{\textbf{r}}R^{2}\frac{R}{z^{2}}\\
& =-\frac{M_{S}}{3}R^{3}\nabla_{\textbf{r}}\frac{1}{r^{2}\cos^{2}\theta}
\end{align*}
which doesn't agree with the correct result. Any comments where I might be doing something wrong?
EDIT --
Assuming there is no prime on the divergence of the magnetization, we have
\begin{align}
\textbf{H}_{dem}\left(\textbf{r}\right) & =-M_{S}{\nabla}_{\textbf{r}}\cos\theta\int_{V_{\infty}}d\textbf{r}'\frac{1}{4\pi\left|\textbf{r}-\textbf{r}'\right|}\delta\left(r'-R\right)
\end{align}
and finally
\begin{align}
\textbf{H}_{dem}\left(\textbf{r}\right) &=-M_{S}R^{2}{\nabla}_{\textbf{r}}\cos\theta\int d\theta'd\phi'\frac{\sin\theta'}{4\pi\sqrt{\left|\textbf{r}\right|^{2}+R^{2}-2R\left|\textbf{r}\right|\cos\gamma}}
\end{align}
where $\gamma$ is the angle between $\textbf{r}$ and $\textbf{r}'$.
For $r>R$ we can use
\begin{equation}
\frac{1}{\sqrt{\left|\textbf{r}\right|^{2}+R^{2}-2R\left|\textbf{r}\right|\cos\gamma}} =\sum_{l=0}^{\infty}\frac{R^{l}}{r^{l+1}}P_{l}\left(\cos\gamma\right)
\end{equation}
which gives
\begin{align*}
\textbf{H}_{dem}\left(\textbf{r}\right) & =-\frac{M_{S}}{4\pi}2\pi R^{2}\nabla_{\textbf{r}}\cos\theta\sum_{l=0}^{\infty}\frac{R^{l}}{r^{l+1}}\int_{0}^{\pi}d\theta'\sin\theta'P_{l}\left(\cos\gamma\right)
\end{align*}.
How should I proceede from here to obtain the expression you showed? First I have to make $\gamma$ map into $\theta'$, although I cannot see how this should give something like
\begin{equation}
\propto \sum_{l=0}^{\infty}\frac{R^{l}}{r^{l+1}}\overset{=\frac{2}{2l+1}\delta_{l1}}{\overbrace{\int_{0}^{\pi}d\theta'\sin\theta'\cos\theta'P_{l}\left(\cos\theta'\right)}} = \frac{2R}{3r^2}
\end{equation}
Answer: The $\theta$ in $$\begin{align}
\nabla_{\textbf{r}}\cdot\textbf{M}\left(\textbf{r}\right) & =\hat{r}\cdot\hat{r}\frac{\partial M_{r}\left(\textbf{r}\right)}{\partial r}\nonumber \\
& =-M_{S}\cos\theta\delta\left(r-R\right)
\end{align}$$
refers to a fixed direction with respect to the $z$ axis. So, when you put it in the integral, it should not be primed.
Then, you are somehow setting $r_>$ and $r_<$ equal to $z$ at the end of resolving your integrals. They should just be $r$.
So your $r<R$ expression is correct because the $r$ and the $\cos\theta$ give you the $z$ that you had erroneously put to begin with.
But for the $r>R$ region, you should get:
$$ \textbf{H}_{dem}\left(z\right) =-\frac{M_{S}}{3}\nabla_{\textbf{r}}\left(R^{2}\frac{R}{r^{2}}\cos\theta \right) = 2\frac{M_S R^3}{3}\frac{\cos\theta}{r^3}\hat{\mathbf{r}} + \frac{M_S R^3}{3}\frac{\sin\theta}{r^3}\hat{\boldsymbol{\theta}} ,$$
using $$\hat{\mathbf{r}}\cos\theta -\hat{\mathbf{z}} = \sin\theta \hat{\boldsymbol{\theta}}, $$
this becomes:
$$ \textbf{H}_{dem}\left(z\right) = M_S R^3\frac{\cos\theta}{r^3}\hat{\mathbf{r}} - \frac{M_S R^3}{3r^3}\hat{\mathbf{z}} .$$
You can easily prove this is a special case of the general expression for a dipole:
$$ \mathbf{B}(r>R) = \frac{\mu_0}{4\pi} \left ( -\frac{\mathbf{m}}{r^3} + \frac{3 (\mathbf{m}\cdot \mathbf{r})\mathbf{r}}{r^5} \right ), $$
with $\mathbf{B} = \mu_0 \mathbf{H}$ and $\mathbf{m} = \frac{4\pi}{3} R^3 \mathbf{M}$. | {
"domain": "physics.stackexchange",
"id": 72853,
"tags": "electromagnetism, magnetic-fields, greens-functions, magnetic-moment"
} |
What does general relativity say about the relative velocities of objects that are far away from one another? | Question: What does general relativity say about the relative velocities of objects that are far away from one another? In particular:--
Can distant galaxies be moving away from us at speeds faster than $c$? Can cosmological redshifts be analyzed as Doppler shifts? Can I apply a Lorentz transformation in general relativity?
Answer:
What does general relativity say about the relative velocities of objects that are far away from one another?
Nothing. General relativity doesn't provide a uniquely defined way of measuring the velocity of objects that are far away from one another. For example, there is no well defined value for the velocity of one galaxy relative to another at cosmological distances. You can say it's some big number, but it's equally valid to say that they're both at rest, and the space between them is expanding. Neither verbal description is preferred over the other in GR. Only local velocities are uniquely defined in GR, not global ones.
Confusion on this point is at the root of many other problems in understanding GR:
Question: How can distant galaxies be moving away from us at more than the speed of light?
Answer: They don't have any well-defined velocity relative to us. The relativistic speed limit of c is a local one, not a global one, precisely because velocity isn't globally well defined.
Question: Does the edge of the observable universe occur at the place where the Hubble velocity relative to us equals c, so that the redshift approaches infinity?
Answer: No, because that velocity isn't uniquely defined. For one fairly popular definition of the velocity (based on distances measured by rulers at rest with respect to the Hubble flow), we can actually observe galaxies that are moving away from us at >c, and that always have been moving away from us at >c.[Davis 2004]
Question: A distant galaxy is moving away from us at 99% of the speed of light. That means it has a huge amount of kinetic energy, which is equivalent to a huge amount of mass. Does that mean that its gravitational attraction to our own galaxy is greatly enhanced?
Answer: No, because we could equally well describe it as being at rest relative to us. In addition, general relativity doesn't describe gravity as a force, it describes it as curvature of spacetime.
Question: How do I apply a Lorentz transformation in general relativity?
Answer: General relativity doesn't have global Lorentz transformations, and one way to see that it can't have them is that such a transformation would involve the relative velocities of distant objects. Such velocities are not uniquely defined.
Question: How much of a cosmological redshift is kinematic, and how much is gravitational?
Answer: The amount of kinematic redshift depends on the distant galaxy's velocity relative to us. That velocity isn't uniquely well defined, so you can say that the redshift is 100% kinematic, 100% gravitational, or anything in between.
Let's take a closer look at the final point, about kinematic versus gravitational redshifts. Suppose that a photon is observed after having traveled to earth from a distant galaxy G, and is found to be red-shifted. Alice, who likes expansion, will explain this by saying that while the photon was in flight, the space it occupied expanded, lengthening its wavelength. Betty, who dislikes expansion, wants to interpret it as a kinematic red shift, arising from the motion of galaxy G relative to the Milky Way Galaxy, M. If Alice and Betty's disagreement is to be decided as a matter of absolute truth, then we need some objective method for resolving an observed redshift into two terms, one kinematic and one gravitational. But this is only possible for a stationary spacetime, and cosmological spacetimes are not stationary. As an extreme example, suppose that Betty, in galaxy M, receives a photon without realizing that she lives in a closed universe, and the photon has made a circuit of the cosmos, having been emitted from her own galaxy in the distant past. If she insists on interpreting this as a kinematic red shift, then she must conclude that her galaxy M is moving at some extremely high velocity relative to itself. This is in fact not an impossible interpretation, if we say that M's high velocity is relative to itself in the past. An observer who sets up a frame of reference with its origin fixed at galaxy G will happily confirm that M has been accelerating over the eons. What this demonstrates is that we can split up a cosmological red shift into kinematic and gravitational parts in any way we like, depending on our choice of coordinate system.
For those with a more technical background in abstract math, the following description may be helpful. (The answer by knzhou does a nice job of explaining this in nontechnical terms.) Spacetime in GR is described as a semi-Riemannian space. A velocity vector is a vector in the tangent space at a particular point. Velocity vectors at different points belong to different tangent spaces, so they aren't directly comparable. To compare them, you need to parallel transport them to the same spot. If the spacetime is (approximately) flat, then you can do this, and you can say, for example, that the sun's velocity vector minus Vega's velocity vector is a certain value. But if the spacetime is not even approximately flat (e.g., at cosmological scales), then parallel transport is path-dependent, so the comparison becomes completely ambiguous.
Related: Why is the observable universe so big?
References
Davis and Lineweaver, Publications of the Astronomical Society of Australia, 21 (2004) 97, msowww.anu.edu.au/~charley/papers/DavisLineweaver04.pdf | {
"domain": "physics.stackexchange",
"id": 48398,
"tags": "general-relativity, cosmology"
} |
Nested IF code in GUI Controls | Question: I'm writing GUI controls, and there are many places with nested ifs checking for some result.
function TMyObject.GetCursor: TCursor;
begin
if CanDragX then
begin
if CanDragY then
Result := crSizeAll
else
Result := crSizeWE;
end
else if CanDragY then
Result := crSizeNS
else
if CanClick then
Result := crHandPoint
else
Result := crArrow;
end;
How would you format/rewrite this code?
Answer: I would do this:
function TMyObject.GetCursor: TCursor;
begin
if CanDragX and CanDragY then
Result := crSizeAll
else if CanDragX then
Result := crSizeWE
else if CanDragY then
Result := crSizeNS
else if CanClick then
Result := crHandPoint
else
Result := crArrow;
end;
But really it is a matter of style and what is most readable to you (and the person who will maintain it). | {
"domain": "codereview.stackexchange",
"id": 1957,
"tags": "delphi"
} |
Identifying irreps of $SU(2)$ | Question: How does one verify that, the representations of $SU(2)$ corresponding to $j=1/2$ or $j=1$ is irreducible? I think showing the irreducibility (taking the representative matrices into a block-diagonal form is not always trivial.) Is there a theorem which can tell us whether a $SU(2)$-representation is irreducible or not?
Answer: Schur's lemma asserts that a representation is irreducible if its commutant is trivial, that is, it only contains multiples of the identity.
Let $G$ be your group and let $\pi$ be any representation of $G$ over the representation vector space $V$ (usually a Hilbert space). Then $\pi$ is a group homomorphism between $G$ and $\text{GL}_n(\mathbb C)$. The commutant of the representation is the set
$$\pi(G)' = \{S\in\text{GL}_n(\mathbb C)\ |\ ST = TS\quad\forall T\in \pi(G)\}$$
i.e. all the elements in $\text{GL}_n(\mathbb C)$ that commute with every element in the image of the representation. If $\pi$ is irreducible, by the above mentioned Schur's lemma, this commutant reduces to
$$\pi(G)' = \{\lambda\cdot 1_{\text{GL}_n(\mathbb C)}\ |\ \lambda\in\mathbb C\}$$
which means that the only elements that commute with the image of the representation are just multiples of the identity matrix.
The centre of the representation $\pi$ is the intersection $\pi(G)\cap\pi'(G)$, which contains all the elements of the representation $\pi$ that commute with $\pi$ itself. By Schur's lemma, such operators are scalars, i.e. just numbers. An example of such an element is the Casimir operator, which in the case of the group $SU(2)$ defines the total angular momentum, and takes the form
$$J^2 = j(j+1)\cdot 1_{\text{GL}_n(\mathbb C)}.$$
Since there is no way of linking two irreducible representations that have a different representation of the Casimir operator (i.e. a different value of $j$) by an intertwiner, every value of $j$ gives you a different (i.e. unitarily inequivalent) irreducible representation.
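This commutant test is easy to run numerically for small matrix representations. A NumPy sketch (the generators here are the Pauli matrices of the $j=1/2$ representation, and the vectorisation identity $\operatorname{vec}(TS-ST) = (I\otimes T - T^{\mathsf T}\otimes I)\operatorname{vec}(S)$ is the standard one):

```python
import numpy as np

# Generators of the j = 1/2 representation (Pauli matrices).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def commutant_dimension(mats):
    """dim{S : [S, T] = 0 for every T in mats}, via the vectorisation
    identity vec(TS - ST) = (I (x) T - T^T (x) I) vec(S)."""
    n = mats[0].shape[0]
    eye = np.eye(n)
    M = np.vstack([np.kron(eye, T) - np.kron(T.T, eye) for T in mats])
    return n * n - np.linalg.matrix_rank(M)   # null-space dimension

# Irreducible: the commutant is just the scalar multiples of the identity.
print(commutant_dimension([sx, sy, sz]))      # 1

# Reducible (direct sum of two copies of j = 1/2): a 4-dimensional commutant.
z = np.zeros((2, 2))
doubled = [np.block([[T, z], [z, T]]) for T in (sx, sy, sz)]
print(commutant_dimension(doubled))           # 4
```

A commutant dimension of 1 certifies irreducibility; anything larger signals a reducible representation.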
For the second question, the above argument shows that a test for the irreducibility of a representation is just to check that its commutant is trivial. | {
"domain": "physics.stackexchange",
"id": 89322,
"tags": "mathematical-physics, group-theory, group-representations, representation-theory"
} |
Minimum-weight feedback edge set in undirected graph - how to find it? Is it NP hard problem? | Question: Let G = (V,E) be an undirected graph. A set F ⊆ E of edges is called a feedback-edge set if every cycle of G has at least one edge in F. Suppose that G is a weighted undirected graph with positive edge weights. Design an efficient algorithm to find a minimum-weight feedback-edge set (MWFES).
There is a large discussion about this problem in another question on the programmers forum: https://stackoverflow.com/questions/10791689/how-to-find-feedback-edge-set-in-undirected-graph The conclusion of that discussion is that the MWFES consists of those edges that remain in the graph after removal of the edges that belong to the maximum-weight spanning tree (which is the minimum-weight spanning tree on the graph with reversed weights; such spanning trees can be found by Kruskal's algorithm in polynomial time).
I have found a simple counterexample to this strategy, see the picture. The counterexample covers the case when one edge can belong to two cycles, and it can be more beneficial to choose this common edge instead of choosing two edges with minimum weights. So, in this example the MWFES clearly consists of the one edge with weight 7, but the minimum spanning tree has edges 7+100+101, and that leaves edges 5 and 6 for the MWFES; a MWFES with those two edges is not a minimal one.
So, are there non-NP hard algorithms for finding MWFES?
p.s. One comment in the cited question goes like this: "Note that you can easily find minimal (not minimum) solutions". So, what is the distinction between minimum and minimal in such problems? Is there such a distinction at all?
Answer: If the weight function is non-negative, then the set of edges not contained in a maximum weight spanning tree is indeed a MWFES. But if the weight function is arbitrary, a MWFES will contain all the non-positive weight edges and then a MWFES can be computed for the remaining edges.
Here the proof of the non-negative case. Let $\mathcal{T}$ be a maximum weight spanning tree in G and $w$ the weight function. It can be shown first that there exists a MWFES without edges from $\mathcal{T}$, and then, to conclude, that a FES without edges from $\mathcal{T}$ must contain all the edges not in $\mathcal{T}$.
The proof of the first statement: First of all a FES always exists (all the edges from G are a FES). Let $\mathcal{F}$ be a MWFES in graph G, and suppose that it contains an edge $e\in\mathcal{T}$. We will show that there exists a FES not containing $e$ and with weight at most the weight of $\mathcal{F}$. Let $\mathcal{C}_e$ denote the set of cycles $C$ in G such that $C \cap \mathcal{F} = \{e\}$ (i.e. cycles containing $e$ and not any other edge from $\mathcal{F}$). Notice that we can assume $|\mathcal{C}_e| = 1$: if it is zero, then $\mathcal{F}\setminus \{e\}$ is a FES with weight at most the weight of $\mathcal{F}$; if it is at least $2$, then let $C_1, C_2 \in \mathcal{C}_e$, $C_1 \neq C_2$. $C_1 \bigtriangleup C_2$ is a cycle (or a family of disjoint cycles) not containing $e$, and by definition of $\mathcal{C}_e$, not containing any edge from $\mathcal{F}$, which is a contradiction because $\mathcal{F}$ is a FES. The cycle in $\mathcal{C}_e$ must contain an edge $e'\notin \mathcal{T}$ such that $w(e') \le w(e)$ (otherwise $\mathcal{T} \cup \{e'\} \setminus \{e\}$ for any $e' \notin \mathcal{T}$ in the cycle would be a spanning tree with strictly larger weight). Then $\mathcal{F} \cup \{e'\} \setminus \{e\}$ is also a FES with weight at most the weight of $\mathcal{F}$, and hence a MWFES. Iterating this argument, we can show that there exists a MWFES without edges from $\mathcal{T}$.
Finally, each fundamental cycle (cycle in $\mathcal{T} \cup \{e\}$ for each $e \notin \mathcal{T}$) must be covered by an edge from the FES, so a FES without edges from $\mathcal{T}$ must contain all the edges $e\notin \mathcal{T}$. This shows that the set of edges not in $\mathcal{T}$ are a MWFES.
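The constructive version of this argument (for non-negative weights) is just Kruskal's algorithm run on descending weights: every edge that would close a cycle in the growing maximum-weight forest goes into the feedback edge set. A sketch:

```python
def min_weight_feedback_edge_set(n, edges):
    """Kruskal on descending weights: edges that close a cycle form the
    minimum-weight feedback edge set (valid for non-negative weights).
    edges: list of (u, v, w) on vertices 0..n-1."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    fes = []
    for u, v, w in sorted(edges, key=lambda e: -e[2]):
        ru, rv = find(u), find(v)
        if ru == rv:
            fes.append((u, v, w))  # would close a cycle in the max-weight forest
        else:
            parent[ru] = rv
    return fes

# Triangle with weights 3, 2, 1: the cheapest edge is the whole FES.
print(min_weight_feedback_edge_set(3, [(0, 1, 3), (1, 2, 2), (0, 2, 1)]))  # [(0, 2, 1)]
```

Sorting dominates, so this runs in $O(|E|\log|E|)$ time.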
I hope it is clear enough. Best regards! | {
"domain": "cstheory.stackexchange",
"id": 4261,
"tags": "graph-theory, graph-algorithms, np-hardness"
} |
Handling null arguments in a factory class | Question: I have a Factory class that gives out instances of ResponseImpl class. It accepts one Destination class and up to four Source classes. At least one of the four Source classes should be not null.
So, instead of asking calling program to pass nulls in the arguments to Factory class, I have overloaded the method to accept at least one up to four Source classes. But, something tells me that this way of doing it is not a good idea. Although this works, I feel like this can be handled better.
Could you let me know how to make the following piece of code better?
public static Response initResponse(final Destination destination, final Source1 source1) {
return initResponse(destination, source1, null, null, null);
}
public static Response initResponse(final Destination destination, final Source1 source1, final Source2 source2) {
return initResponse(destination, source1, source2, null, null);
}
public static Response initResponse(final Destination destination, final Source1 source1, final Source2 source2, final Source3 source3) {
return initResponse(destination, source1, source2, source3, null);
}
public static Response initResponse(final Destination destination, final Source1 source1, final Source2 source2, final Source3 source3, final Source4 source4) {
try {
if(source1 == null && source2 == null && source3 == null && source4 == null) {
throw new ABODataException("Atleast one source has to be not null");
}
return ResponseImplManager.getInstance(destination, source1, source2, source3, source4);
} catch (Exception e) {
throw new ABODataException("Unable to instantiate Response:"+e.getMessage());
}
}
Answer: It's a bit problematic to refactor when the parameters are from four different types of classes. I have a couple of alternative approaches for you though, pick one if you like one.
One option would be to make your classes Source1, Source2, etc. implement one common interface. Perhaps this would even simplify your ResponseImplManager.getInstance? If they would be of the same interface (or shared superclass would also be possible of course), then you could use a method like this:
public static Response initResponse(final Destination destination,
final SourceInterface... sources) {
// sources is treated as an array here
}
Or you could send a list of sources
public static Response initResponse(final Destination destination,
final List<SourceInterface> sources)
Another option would be to use a Builder pattern
Whichever way you go in, I got one more suggestion. Use the two-parameter constructor for Exceptions to provide "Caused by" information, this will give you a more detailed stack trace
} catch (Exception e) {
throw new ABODataException("Unable to instantiate Response", e);
// or use this:
throw new ABODataException("Unable to instantiate Response: " + e.getMessage(), e);
} | {
"domain": "codereview.stackexchange",
"id": 5909,
"tags": "java, design-patterns"
} |
What is the simplest possible topological Bloch function? | Question: Kohmoto (1985) pointed out in Topological Invariant and the Quantization of the Hall Conductance how TKNN's calcuation of Hall conducance is related to topology, in which topologically nontriviality is said to be equivalent to impossiblility choosing a global phase of Bloch function $u_k (r)$ in Brillouin zone. As shown in the Figure, we can choose two distinct gauges in sector I and II, and the curvature is the loop integral of phase mismatch on boundary $\partial H$.
What is the simplest possible Bloch function that is
topologically nontrivial, and
an eigenstate of Bloch Hamiltonian?
Bloch Hamiltonian: $H(k_x,k_y) = \frac{1}{2m}(-i\partial + {\bf k}+e{\bf A}(x,y))^2 + U(x,y)$ where $U$ is lattice periodic.
Answer: Surprisingly, according to Immanuel Bloch's group (no relation to F. Bloch!), the simplest topological Bloch function is the 1D staggered lattice. The topological invariant is the Zak phase, the Berry phase accrued by walking across the Brillouin zone. The article will explain it better than I can: Direct Measurement of the Zak phase in Topological Bloch Bands | {
"domain": "physics.stackexchange",
"id": 5892,
"tags": "topology, electronic-band-theory"
} |
Multiple TCP connections using rosserial_server with WiFi Arduino | Question:
Dear all,
I am currently running an Arduino Uno* with an ATWINC1500 WiFi breakout from Adafruit and can successfully publish topics over WiFi to a master using rosrun rosserial_python serial_node.py tcp. This has been achieved using code modified from this previous ROS answer supplied by @ahendrix.
The trouble I run into is running two of these devices at the same time. They both have unique IP addresses and are connected to the network, running differently named topics, and individually can be successfully connected and publish topics.
The first device can be connected using rosrun rosserial_python serial_node.py tcp. Simply starting a second device does not lead to it being connected. If I run a second serial_node.py with a different node name to avoid conflicts (by using rosrun rosserial_python serial_node.py tcp __name:=OtherNodeName), I get an error "socket.error: [Errno 98] Address already in use".
As the socket can't be given as an argument to serial_node.py, I tried using rosrun rosserial_server socket_node as rosserial_server in theory can handle multiple connections. Using default port 11411, I find the node displays:
Listening for rosserial TCP connection on port 11411
but never finds any devices, unlike rosserial_python serial_node.py which was successful.
Am I missing something for rosserial_server to be able to identify and connect to these devices?
Is there another way of connecting multiple TCP devices such as these arduinos with wifi simultaneously?
Thank you for your time,
Andy
UPDATE:
Having followed some very good advice from gvdhoorn, I investigated the packet traffic using Wireshark. Turns out for rosserial_server socket_node the connection between the arduino and the master must not complete (there are a few messages back and forth, but finishes before the "normal" regular published traffic commences as seen when using rosserial_python serial_node.py tcp).
By adding a short delay after nh.advertise(), the master and the arduino are able to make a connection (though socket_node reports some errors to begin with), and start publishing. This also allows for multiple arduinos to be connected! However, the connection then drops out after a minute or so from what I observe as TCP out-of-order issues.
I thought this might be because of the delay(1000) in the main loop, so I replaced it with a millis()-based timer so that nh.spinOnce() would not get held up. This didn't improve the situation; in fact it might have made the drop-out occur earlier, but I haven't done any actual timings to confirm that.
There are occurrences of TCP out-of-order when using serial_node.py, but they seem to resolve themselves very quickly and not terminate the connection between the arduino and the master when using identical code. I left this running for a few hours with no termination of connection.
Better questions:
What is it about socket_node that leads to traffic being treated differently compared to serial_node.py and lead to the connection being terminated?
What could be done to improve the stability of TCP connections when using rosserial_server socket_node, in particular when using arduinos?
Is there a more elegant way of automatically reestablishing a connection if it is lost, beyond using if (!nh.connected()){reconnect_function} ?
Thanks.
Below is the original code on the arduino for anyone who is interested:
#include <SPI.h>
#include <WiFi101.h>
#include <ros.h>
#include <std_msgs/String.h>
//////////////////////
// WiFi Definitions //
//////////////////////
#include "arduino_secrets.h"
const char* ssid = SECRET_SSID;     // network name from arduino_secrets.h
const char* password = SECRET_PASS; // network password
IPAddress server(192, 168, 0, 100); // ip of your ROS server
IPAddress ip; //Storage local IP address
int status = WL_IDLE_STATUS;
WiFiClient client;
class WiFiHardware {
public:
WiFiHardware() {};
void init() {
// do your initialization here. this probably includes TCP server/client setup
client.connect(server, 11411);
}
// read a byte from the serial port. -1 = failure
int read() {
// implement this method so that it reads a byte from the TCP connection and returns it
// you may return -1 is there is an error; for example if the TCP connection is not open
return client.read(); //will return -1 when it will works
}
// write data to the connection to ROS
void write(uint8_t* data, int length) {
// implement this so that it takes the arguments and writes or prints them to the TCP connection
for(int i=0; i<length; i++)
client.write(data[i]);
}
// returns milliseconds since start of program
unsigned long time() {
return millis(); // easy; did this one for you
}
};
//ROS applicable
ros::NodeHandle_<WiFiHardware> nh;
std_msgs::String msg;
ros::Publisher string("outString", &msg);
char hello[13] = "Hello World!";
void setupWiFi()
{
WiFi.begin(ssid, password);
//Print to serial to find out IP address and debugging
Serial.print("\nConnecting to "); Serial.println(ssid);
uint8_t i = 0;
while (WiFi.status() != WL_CONNECTED && i++ < 20) delay(500);
if(i == 21){
Serial.print("Could not connect to"); Serial.println(ssid);
while(1) delay(500);
}
Serial.print("Ready! Use ");
ip = WiFi.localIP();
Serial.print(ip);
Serial.println(" to access client");
}
void setup() {
//Configure pins for adafruit ATWINC1500 breakout
WiFi.setPins(8,7,4);
Serial.begin(9600);
setupWiFi();
delay(2000);
nh.initNode();
nh.advertise(string);
}
void loop() {
msg.data = hello;
string.publish(&msg);
nh.spinOnce();
delay(1000);
}
*Actually a Teensy 3.2 because the Uno does not have enough memory, however it emulates and behaves like an Uno
Originally posted by Andy West on ROS Answers with karma: 81 on 2017-10-26
Post score: 2
Original comments
Comment by gvdhoorn on 2017-10-26:
I'm not a rosserial expert, but just to get a feeling for what is going on at the network level, I'd run Wireshark while trying to get things to connect. That should at least tell you whether your clients are trying to connect, how, where and in what way. It should also show server activity.
Comment by gvdhoorn on 2017-10-26:
Note that using TCP/IP over a wireless link is not a very good choice for anything interactive. Whenever I used rosserial over wireless I've used the UDP variant. Trouble is though that it currently lacks some of the functionality that the TCP implementation does have.
Comment by Andy West on 2017-10-26:
I've investigated using Wireshark. For rosserial_python serial_node.py it appears to run nicely, however for rosserial_server socket_node there are some errors being thrown around. By adding delay(100); just after nh.advertise(); the arduinos manage to connect! But dropout randomly later.
Comment by ahendrix on 2017-10-27:
The "socket.error: [Errno 98] Address already in use" suggest that both copies of serial_node.py are trying to listen on the same port number. I'm not sure if a single node is designed to handle multiple clients or not, you might want to look for documentation about this.
Comment by ahendrix on 2017-10-27:
If a single node is supposed to handle multiple clients, then this behavior is a bug. Otherwise you may want to try running the second node and its Arduino on a different port number.
Comment by ahendrix on 2017-10-27:
(Failing to handle multiple clients is still probably a bug for most software that has a listening socket, but if the docs don't say it's supported you may have a hard time getting this "bug" fixed unless you're willing to submit the fix yourself)
Comment by subarashi on 2020-09-14:
Hello dear community. I need your help. I am facing problems with rosserial_server socket_node.
can you please take a look to my post? Thank you so much
RosSerial_Server
Answer:
The result of this investigation is that it may have been an intermittent bug. Having reinstalled rosserial_server just to be pedantic in checking everything, the continual dropping out of devices using rosserial_server socket.launch has been greatly reduced (drop out ~ 1 per hour compared to ~1 per minute).
The rosserial_server socket_node will accept multiple clients successfully as it should (unlike serial_node.py as ahendrix points out).
The only advice I can offer to others is to include a reconnect function as mentioned in the question. As rosserial_server is in development, if I can track down where this bug might have occurred, I will report it.
Thank you all for your help.
Originally posted by Andy West with karma: 81 on 2017-10-31
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by prnthp on 2017-11-05:
I don't know how much this applies to your problem but I use multiple rosserial_server nodes instead (one for each client) by adding args="_port:=11412", etc. to the .launch file.
Comment by Andy West on 2017-11-05:
That's a useful tip! Never though of setting separate ports to allow for separate nodes when using rosserial_server. Thanks prnthp.
Comment by ELLEO on 2018-02-22:
can you guys elaborate how to do this port modification? can i make the port number arguments changable and connect to several ports like that? please guide | {
"domain": "robotics.stackexchange",
"id": 29195,
"tags": "ros, arduino, wifi, rosserial, multiple-machines"
} |
Why doesn't the cyclopropyl methyl carbocation stabilise itself by ring expansion? | Question: I have been taught that ring expansion stabilises smaller cyclic compounds to a great extent. So why does the cyclopropyl methyl carbocation show this type of resonance
rather than expanding its ring
thereby decreasing its angle strain; further, it would be stabilized by hyperconjugation and inductive effects.
Answer: Unfortunately, the premise of your question is wrong.
There is no cyclopropylcarbinyl or cyclobutyl cation. They're the same thing. Unlike cations which you might be more familiar with, for example, t-butyl cation, the cation you're looking at here, also called homoallyl, is a non-classical cation.
The open form and the two structures you drew in the second line are not distinct. They're just resonance forms of each other.
You ask does this ion undergo ring expansion? Well, yes and no. Yes, in that there is a contribution from the 4-member ring to the resonance structure. No, in that you can't expand something that doesn't exist. | {
"domain": "chemistry.stackexchange",
"id": 10755,
"tags": "organic-chemistry, stability, carbocation"
} |
Solubility in different pressure conditions | Question: I am working on my scholarship exam practice. I believe this exam assumes high school + first year university knowledge. And I'm not quite sure what I did wrong. Could you please have a look?
The solubility of oxygen in 1.0 L water is 28 mL at 25 °C and 1.0 atm.
How much oxygen can be dissolved in 1.0 L of water at 25 °C and 4.0 atm?
The answer remains 28 mL, but I thought that when the pressure goes up 4 times while other factors are held constant, the solubility would do the same. Please point out where I went wrong and what topics I should focus on specifically.
Answer: This is a confusing question because, while solubilities can be reported in mL/L, there can be ambiguity when choosing a pressure during conversion to this unit, for instance using the following equation to convert from molarity $c$ to volume/volume units:
$$ \rho = \frac{cRT}{p}$$
In this online data page, for instance, in some columns the solubility is reported in mL/L, converted to this unit using a pressure of $\pu{1 atm}$ throughout (the source of the data cannot be verified), even as the partial pressure of oxygen $p_{O_2}$ is increased above $\pu{1 atm}$.
In the OP the volume refers presumably to an equivalent volume of oxygen gas at the (partial) pressure of the gas above the liquid.
Using the numbers from the OP, assuming the gas is ideal, then
$ c=\frac{101325\times28\times 10^{-6}}{8.3145\times298.15} \pu{M} =\pu{ 1.1 \times 10^{-3}M}$
when $p = \pu{1 atm}$.
On the other hand, if $p = \pu{4 atm}$
$ c=\frac{4\times101325\times28\times 10^{-6}}{8.3145\times298.15} \pu{M} =\pu{ 4.6\times 10^{-3} M}$
so the solubility is the same ($\pu{28 mL/L}$) if described in terms of volume at the given pressure, but $\times 4$ greater when regarded as a molar concentration.
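The two conversions are easy to check in a few lines (ideal-gas assumption as above; the gas volume is measured at the prevailing pressure):

```python
R, T, atm = 8.3145, 298.15, 101325.0   # J/(mol K), K, Pa
V_gas = 28e-6                          # m^3 of O2 per litre of water, at pressure p

for p_atm in (1.0, 4.0):
    c = p_atm * atm * V_gas / (R * T)  # mol of O2 dissolved per litre of water
    print(f"{p_atm:.0f} atm: {c * 1e3:.2f} mM")
# 1 atm: 1.14 mM
# 4 atm: 4.58 mM
```

The printed values match the two molarities computed above: the volume-per-litre figure stays at 28 mL/L, while the molar concentration scales with the pressure.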
Note by the way that according to a number of sources the solubility at $\pu{25 ^\circ C}$ is $\pu{258 \mu M}$ (~$\pu{8.2 mm Hg}$) at $p_{O_2}=\pu{1 atm}$, and $\pu{1.0 mM}$ at $p_{O_2}=\pu{4 atm}$. | {
"domain": "chemistry.stackexchange",
"id": 11987,
"tags": "physical-chemistry, solubility"
} |
Short (user)script to store current URL and close tab in qutebrowser | Question: #!/bin/sh
#Path where to store saved urls.
URL_FILE="$QUTE_CONFIG_DIR/saved_urls"
#Arguments are appended to the url (ie for comments)
printf "$QUTE_URL $*\n" >> "$URL_FILE"
printf "tab-close" >> "$QUTE_FIFO"
I often have lot of tabs open, so thought it'd be useful to make a script to do this.
I don't feel very comfortable hardcoding the URL_FILE path, since a file with that name could already exist in that directory, which feels like bad practice, but I'm not sure what to do instead. I'm also concerned about potential corruption of the file where I'm saving the URLs, but making this automatically back up seems like too much effort for such a simple script.
Also, I made it so additional arguments are appended as "comments" at the end of the line for each URL. The simple syntax I used has the side effect that running it with no arguments leaves a trailing space at the end of each line. It could be fixed, but it would make the code uglier; I'm not sure what the best practice is here.
Answer:
printf "$QUTE_URL $*\n"
This will want to consume (nonexistent) arguments if the expansion contains a % format specifier, such as %c, %d or %f.
Instead use printf '%s\n' "$QUTE_URL $*" or echo "$QUTE_URL $*". | {
"domain": "codereview.stackexchange",
"id": 41905,
"tags": "sh"
} |
Case where anti-dependency doesn't need pipeline stalling | Question: While exploring the various types of data hazards in a pipeline, I came across a statement in my book which said that an anti-dependency may not lead to stall cycles.
But I couldn't find an example of this. Can anyone help me with it, please?
Thanks in advance!
Answer: You can eliminate anti-dependences (WAR dependences) with register renaming, so if you're doing renaming there will be no stalls from anti-dependences. The other case is if you already know the write isn't going to interfere with the read (for example you know all the instructions with reads in them have already done their register reads).
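A toy sketch of why renaming removes the hazard (illustrative Python; instructions are written as (destination, sources) over architectural registers R*, which get mapped to fresh physical registers):

```python
def rename(instrs):
    """Map architectural registers to fresh physical registers (simplified
    Tomasulo-style renaming). WAR hazards disappear because every write
    is directed to a brand-new physical register."""
    table = {}                  # architectural name -> current physical register
    fresh = iter(range(100))    # supply of unused physical registers
    out = []
    for dst, srcs in instrs:
        psrcs = []
        for s in srcs:
            if s not in table:              # first read: allocate a register
                table[s] = next(fresh)
            psrcs.append(table[s])
        table[dst] = next(fresh)            # every write gets a fresh register
        out.append((table[dst], psrcs))
    return out

# I1 reads R1; I2 writes R1 -> anti-dependence (WAR) of I2 on I1 through R1
prog = [("R2", ["R1", "R3"]),   # I1: R2 = R1 + R3
        ("R1", ["R4", "R5"])]   # I2: R1 = R4 * R5
print(rename(prog))             # [(2, [0, 1]), (5, [3, 4])]
```

After renaming, I2 writes physical register 5 while I1 still reads physical register 0, so no ordering constraint (and hence no stall) remains between them.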
Note that the traditional "5-stage" pipeline is doing the equivalent of register renaming using its bypass registers, so in that case you never need to stall. | {
"domain": "cs.stackexchange",
"id": 13874,
"tags": "computer-architecture, cpu-pipelines"
} |
Hinged rods concepts | Question: Let's say we have two objects, a rod and a wall, which are connected by a hinge.
What I'm struggling to understand is the direction in which reaction forces at a hinge act.
Do they always act perpendicular to the surface they're in contact with? Why or why not? Could someone please help clarify.
Answer: If you push on an object, it can respond in one of two ways (or a combination of them):
It can move (accelerate) away
It can push back
If you push on the end of the rod that is attached to the hinge in any direction, the rod doesn't move away. The hinge holds that end in place. Therefore the hinge is able to develop forces to resist the push (reaction forces) in any direction. | {
"domain": "physics.stackexchange",
"id": 98941,
"tags": "newtonian-mechanics"
} |
Pulling Docker Machine IP | Question: I am working with MongoDB through Docker, and I have a terrible bash command to pull the Docker Machine IP so I can sanely connect locally.
export dockerip="$(docker-machine ls | awk '{print $5}' | sed -n '2p' | sed 's/tcp:\/\///' | sed 's/\:2376//')"
mongo --host "$dockerip" --port 27017
How can I optimise the docker-machine line?
Answer: You can reduce the number of commands in it.
First of all, it's easy to combine these two commands:
awk '{print $5}' | sed -n '2p'
This is exactly the same:
awk 'NR == 2 {print $5}'
It's also easy to combine the two other sed commands:
sed 's/tcp:\/\///' | sed 's/\:2376//'
Like this:
sed -e 's/tcp:\/\///' -e 's/\:2376//'
By doing so you reduce the number of processes to run,
making the pipeline of commands more efficient.
These last sed commands can be written a bit simpler by using a different separator character, and avoiding unnecessary escapes:
sed -e 's?tcp://??' -e 's/:2376//'
And the last term might be better more generalized:
sed -e 's?tcp://??' -e 's/:.*//'
Putting this all together:
export dockerip="$(docker-machine ls | awk 'NR == 2 {print $5}' | sed -e 's?tcp://??' -e 's/:.*//')"
You could actually still go further,
and completely eliminate sed by moving those operations into awk.
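For instance (a sketch; the `docker-machine ls` output below is a stand-in sample so the one-liner can be tried on its own):

```shell
# Sample output stands in for `docker-machine ls`; both substitutions move into awk:
printf 'NAME ACTIVE DRIVER STATE URL SWARM\ndefault - virtualbox Running tcp://192.168.99.100:2376\n' |
    awk 'NR == 2 { sub("tcp://", "", $5); sub(":.*", "", $5); print $5 }'
# -> 192.168.99.100
```

With the real command this becomes `docker-machine ls | awk 'NR == 2 { sub("tcp://", "", $5); sub(":.*", "", $5); print $5 }'`, a single process after the pipe.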
But this is already much more efficient than the original,
simple, and probably good enough. | {
"domain": "codereview.stackexchange",
"id": 16378,
"tags": "bash, collections, mongodb"
} |
Why the coefficient of volume expansion is three times the linear coefficient expansion? | Question: In volume expansion $\beta\approx3\alpha$, let's suppose we have a rectangular solid with height $h_o$, width $w_o$, and length $l_o$, then $V_o=h_ow_ol_o$
Now, after heating, each side increases by a factor of $\alpha\Delta T$,
$V=h_o(1+\alpha\Delta T)w_o(1+\alpha\Delta T)l_o(1+\alpha\Delta T)$
$\Delta V= V-V_o=h_ow_ol_o(1+\alpha\Delta T)^3-V_o$
We will substitute first equation, since $V_o=h_ow_ol_o$
$\Delta V=V_o(1+\alpha\Delta T)^3-V_o$
$\Delta V= V_o((1+\alpha\Delta T)^3-1)$
$\Delta V= V_o\big((1+\alpha\Delta T)(1+\alpha\Delta T)(1+\alpha\Delta T)-1\big)$
$\Delta V=V_o(3\alpha\Delta T+3(\alpha \Delta T)^2+(\alpha \Delta T)^3)$
Now, the last step is the part that I don't get: the last two terms are dropped because, if $\alpha$ is small, then $\alpha^2$ is even smaller and $\alpha^3$ smaller still; thus we get rid of them and are left with $\Delta V=3\alpha V_o \Delta T$.
I don't understand why are we making this assumption here? What if $\alpha$ was big, the last two terms won't really tend to zero... it just doesn't make sense to me in which to why we cancel them out.
Please excuse my ignorance, I would really appreciate it if someone would kindly explain it in a way that it would make sense to me.
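For a sense of scale, the dropped terms can simply be evaluated numerically (the value of $\alpha$ below is illustrative, typical of a solid):

```python
# alpha * dT for a typical solid: alpha ~ 2e-5 per K (illustrative), dT = 100 K
x = 2e-5 * 100                  # = 2e-3, dimensionless
exact = (1 + x)**3 - 1          # true fractional volume change
approx = 3 * x                  # linearised result, beta = 3 * alpha
print(exact, approx, (exact - approx) / exact)  # relative error ~ 0.2%
```

Even with a temperature change of 100 K the dropped terms shift the answer by only about 0.2%; the approximation would only break down if $\alpha\Delta T$ itself approached 1, far beyond what real materials reach before melting.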
Answer: The more advanced way (and more precise way) of treating this is to express the linear expansion as $$\frac{dL}{L}=\alpha dT$$ where dL and dT are differentials. Then by the product rule,
$$dV=d(hwl)=wldh+hldw+hwdl=hwl\left(\frac{dh}{h}+\frac{dw}{w}+\frac{dl}{l}\right)=hwl(3\alpha dT)$$So,$$\frac{dV}{V}=3\alpha dT$$ | {
"domain": "physics.stackexchange",
"id": 46778,
"tags": "thermodynamics"
} |
Replace mercury with silicone oil in diffusion pump | Question: We have obtained an old mercury-vapor diffusion pump for high vacuum. However, we do not want to operate it with mercury in our lab due to health concerns (which is presumably the reason it was scrapped by the original owner).
Instead, we plan to fill it with a silicone oil of the DC-704 type or similar. The pump has thermal regulation and a large baffle to stop oil vapor counter-propagation.
Has anybody tried replacing mercury with oil? What are possible problems with this approach?
Answer: I did some internet searching and did not find anyone that had done a direct swap out. Granted, probably the majority of these replacements occurred before the existence of the internet ;-). As I am sure you are already aware (but to be thorough), modern diffusion pump designs do use synthetic working fluids.
It will probably take some fine tuning of the temperatures of the heater and cooler, but I don't see any reason why it wouldn't work. If you have means of measuring the vacuum level, you could run through some different temperatures to empirically find its new ideal operating conditions.
Worst case, the pumping efficiency will be less because the nozzle diameter was designed for higher density particles. I am not a vacuum physicist however, so who knows, it may work even better than before ;-)
Another possible issue is that it may have more backstreaming of the oil into the vacuum chamber resulting in unacceptable levels of oil contamination. Maybe the baffle you mentioned will take care of this; hard to say without some experimentation.
Just as long as no oxygen reaches the oil during operation, it won't fail catastrophically, so give it a try and let us know how it went!
Reference:
Good overview of diffusion pump working fluids
Some good history or mercury as a diffusion pump working fluid | {
"domain": "engineering.stackexchange",
"id": 771,
"tags": "pumps, safety, vacuum, vacuum-pumps"
} |
$2$ ebits $+$ $1$ bit $ = 2$ bits? | Question: The Set-Up
Let's say Alice and Bob share $k$ ebits, i.e., they each hold one qubit of each of the $k$ Bell states $\frac{\vert 00\rangle+\vert 11\rangle}{\sqrt{2}}$. Now, Alice wants to send $2n$ bits of classical information to Bob; however, she wants to accomplish this by sending as few classical bits as possible. They don't have a quantum channel established between them that Alice can use to implement superdense coding. Is there a way for Alice and Bob to come up with a strategy that lets them utilize the shared ebits to send $2n$ bits of classical information by sending $<2n$ bits?
The Question
I can think of a strategy that, on average, allows Alice to successfully relay $2$ bits of classical information per each classical bit she sends to Bob (using $2$ ebits on average). However, I'm interested in knowing if there are more efficient protocols for such communication (or if such communication is even possible). The spirit of the question is to know whether shared entanglement can be used to do something akin to superdense coding without using a quantum channel.
My Attempt
The relatively simple strategy that I can think of is as follows:
Let's say Alice wants to send two bits of classical information $a,b$. Alice performs the following sub-protocol every minute (and positively completes the sub-protocol well within a minute) until she has sent a classical bit to Bob.
If $a=0$:
She measures her share of an ebit in $Z$ basis.
If the outcome of the measurement is equal to $b$, she sends the classical bit $a$ to Bob.
If the outcome of the measurement is not equal to $b$, end of the sub-protocol.
If $a=1$:
She measures her share of an ebit in $X$ basis.
If the outcome of the measurement is equal to $b$, she sends the classical bit $a$.
If the outcome of the measurement is not equal to $b$, end of the sub-protocol.
Alice and Bob have agreed on the order in which ebits will be used up, thus, if Bob receives the message from Alice during the $n^{\rm th}$ minute since they start the procedure (say, Alice uses $1$ extra classical bit to ping Bob that she's starting so Bob can keep the clock), Bob knows that he should use the $n^{\rm th}$ ebit to receive the message. Once Bob has picked the right ebit, if the classical bit he received was $0$, he measures his share of the ebit in $Z$ basis, otherwise he measures it in $X$ basis -- and voila, he recovers the second bit of classical information, namely, $b$.
Of course, it would take Alice $2$ tries, on average, to get the $b$ that she wants to send as the result of her measurement of her share of ebit. Thus, all in all, we have $2n$ ebits $+$ $n$ bits $=$ $2n$ bits.
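The two-tries-on-average count can be sanity-checked with a short classical simulation; it assumes only that each measurement of a fresh Bell-state half (in either the $Z$ or the $X$ basis) yields an independent, uniformly random bit:

```python
import random

def ebits_used(b, rng):
    """Count the ebits Alice consumes until her measurement outcome equals b.

    Measuring one half of (|00> + |11>)/sqrt(2) in either the Z or the X
    basis gives 0 or 1 with probability 1/2, independently for each ebit.
    """
    used = 1
    while rng.randint(0, 1) != b:
        used += 1
    return used

rng = random.Random(7)
n = 100_000
avg = sum(ebits_used(rng.randint(0, 1), rng) for _ in range(n)) / n
print(round(avg, 2))  # close to 2.0: two ebits per 2-bit message on average
```

So the average cost matches the geometric-distribution estimate of $2$ ebits per message.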
Answer: The protocol you describe is correct, but the resource estimation is wrong. Furthermore, something like superdense coding with purely classical bits is prohibited by the No-Signaling Principle.
This protocol sends more than $n$ cbits
Bob has a communication channel $\mathcal{C}$, and he conditions his decisions on the information he can learn from $\mathcal{C}$, whether he received a message or not. The communication protocol between Alice and Bob is initialized with them sharing a sequence of Bell states $|\Phi_i\rangle = (|00\rangle + |11\rangle)/\sqrt{2}$ ($i$ just indicates which timestep corresponds to which Bell state).
When communication begins, both parties set $i=0$. Then, Alice behaves as you described, and Bob behaves as follows:
Define a variable $z\in\{0, 1, \emptyset\}$
While $b$ is unknown:
1. Check $\mathcal{C}$:
a. If no signal was received: $z \leftarrow\emptyset$
b. If a signal $a \in \{0, 1\}$ was received, $z \leftarrow a$
2. Apply the following controlled operation on $|\Phi_i\rangle$:
a. If $z=\emptyset$, discard $|\Phi_i\rangle$.
b. If $z \in \{0, 1\}$ apply $\text{controlled-}H$ (controlled on $z$) and perform computational basis measurement.
3. increment $i \leftarrow i+1$
Suppose each 2-bit message is equally likely. Then there's a 50% chance nothing arrives (Bob learns $\emptyset$) and a 25% chance each of Bob receiving $a=0$ or $a=1$. Whether or not he receives anything, each check of $\mathcal{C}$ therefore counts as
$$
-\frac{1}{2} \log_2\left(\frac{1}{2}\right) -2 \cdot \frac{1}{4} \log_2\left(\frac{1}{4}\right) = 1.5\text{ cbits}
$$
of information. Since the protocol requires two steps on average, it involves transmission of $3$ cbits of information on average. From a channel capacity perspective, this is less efficient than having just sent $a$ in one timestep and $b$ in another.
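That $1.5$ cbits figure is just the Shannon entropy of the three outcomes Bob can observe per timestep; a quick numerical check:

```python
from math import log2

# Per timestep Bob observes one of three outcomes:
# "no signal" with probability 1/2, a=0 with 1/4, a=1 with 1/4.
probs = [0.5, 0.25, 0.25]
h = -sum(p * log2(p) for p in probs)
print(h)  # 1.5 bits of information per check of the channel
```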
As @DaftWullie wrote, this kind of protocol would be like $n$ steps of a purely classical scheme to transmit $x \in \{0, 1\}^n$ where Alice only sends $x_i$ if $x_i=1$. If Bob doesn't receive anything, he records $0$, and so paradoxically (for uniformly distributed $x$) he can learn $n$ bits of information $x=x_1x_2\dots x_n$ having only received $n/2$ cbits. The resolution to this paradox is that Bob does receive information every time he checks the channel, even if nothing arrived.
A deterministic protocol for $2 \text{ ebits } + 1 \text{ cbit } \rightarrow 2 \text{ cbits }$ cannot exist
This follows from the No-Signaling Principle, which states that the use of shared ebits alone cannot allow communication of information.
Alice has a two-bit message $m \in \{0,1\}^2$ ($m$ is uniformly random), access to a shared entangled state $|\psi\rangle$ (e.g. $((|00\rangle + |11\rangle)/\sqrt{2})^{\otimes n}$), and an operation $A$ that acts on $m$ and her half of $|\psi\rangle$ to output $x$. She applies $A$ to $m$ and $|\psi\rangle$ (so that $|\psi\rangle$ becomes $|\psi'\rangle$), and then she uses a communication channel $\mathcal{C}$ to transmit $x$.
Bob has an operation $B$ that acts on a single bit $x$ and his half of $|\psi'\rangle$, and his goal is to output a two-bit message $m'\in\{0,1\}^2$ that matches what Alice meant to send ($m'=m$).
We can prove that this is impossible by contradiction. Say $B$ does exist, implying that a single bit $x$ can be combined with $|\psi'\rangle$ to output $m'$. Now discard $\mathcal{C}$ completely - there is now no communication between the parties - and instead feed $B$ a uniformly random cbit $x'$ along with $|\psi'\rangle$. Alice still applies $A$ to $|\psi\rangle$ to prepare the exact same shared entangled state with Bob. But 50% of the time, Bob gets lucky and randomly guesses $x'=x$, inputs it to $B$, and outputs $m'=m$. The probability of guessing $m'=m$ should be 25% (4 possible messages $m$), but the existence of $B$ means Bob has at least a 50% chance of guessing $m$ correctly. This implies that signaling occurred using the shared entanglement, and is in direct contradiction with the No-Signaling Principle. Therefore, $B$ cannot exist and there is no way to accomplish $2 \text{ ebits } + 1 \text{ cbit } \rightarrow 2 \text{ cbits }$. | {
"domain": "quantumcomputing.stackexchange",
"id": 3997,
"tags": "information-theory, communication, superdense-coding"
} |
String trim functions with destination size limitation | Question:
3 string trimming functions:
Trim whitespace from the beginning of a string.
Trim whitespace from the end of a string.
Trim whitespace from both ends of a string.
Looking to improve code for:
Correctness (tested code - but so far no holes) - especially overlapping src/dest cases.
Design
Portability (3 tried: MS, *nix, embedded)
Efficiency
Note: the (unsigned char) cast is used because isspace(x) is not defined when x < 0, unless x == EOF.
/* String trim functions
*
* Result is sized limited.
* Overlapping source and destination is OK.
* Pointer parameters are assumed to point to valid strings spaces.
*/
#include <ctype.h>
#include <stddef.h>
#include <string.h>
// Return string beginning with first non-white-space in string `src`
// Return NULL when destination is too small, `dest` unchanged.
char *trim_begin(char *dest, size_t dest_size, const char *src) {
while (isspace((unsigned char) *src)) {
src++;
}
size_t src_size = strlen(src) + 1;
if (src_size > dest_size) {
return NULL;
}
return memmove(dest, src, src_size);
}
// Return string beginning string `src` but not including any of its trailing
// white-spaces.
// Return NULL when destination is too small, `dest` unchanged.
char *trim_end(char *dest, size_t dest_size, const char *src) {
size_t len = strlen(src);
while (len > 0 && isspace((unsigned char) src[len - 1])) {
len--;
}
size_t src_size = len + 1;
if (src_size > dest_size) {
return NULL;
}
memmove(dest, src, len);
dest[len] = 0;
return dest;
}
// Return string beginning with first non-white-space in string `src` but not
// including any of its trailing white-spaces.
// Return NULL when destination is too small, `dest` unchanged.
char *trim(char *dest, size_t dest_size, const char *src) {
while (isspace((unsigned char) *src)) {
src++;
}
return trim_end(dest, dest_size, src);
}
Answer: Looks good to me.
A couple of (very) minor points: you should say in the comments that src mustn't be NULL; you don't need src_size in trim_end if you use the test 'if (len>=dest_size)'. | {
"domain": "codereview.stackexchange",
"id": 13426,
"tags": "c, strings"
} |
Fade-in / fade-out effects for several buttons | Question: The function of this code is to show/hide divs by way of fadeIn/fadeOut, starting with an empty div (home) and fading in one of the other divs (work, cms, contact) on click, then fading out the last div and fading in the next on click, and then fading out any div and fading in 'panel' (an empty div) when you click on home.
<script type='text/javascript'>
$(function(){
$('.panel').hide();
$('.work_button').click(function(){
$('#cms,#contact').fadeOut(function(){
$('#work').fadeIn();
});
});
$('.cms_button').click(function(){
$('#work,#contact').fadeOut(function(){
$('#cms').fadeIn();
});
});
$('.contact_button').click(function(){
$('#cms,#work').fadeOut(function(){
$('#contact').fadeIn();
});
});
$('.home_button').click(function(){
$('.panel:visible').fadeOut();
});
});
</script>
<div class="menu">
<ul class="menu">
<li class="home_button">home</li>
<li class="work_button">work</li>
<li class="cms_button">cms</li>
<li class="contact_button">contact</li>
</ul>
</div>
<div class="panel" id="work">
<p>...</p>
</div>
<div class="panel" id="cms">
<p>...</p>
</div>
<div class="panel" id="contact">
<p>...</p>
</div>
Answer: I have no problem setting this up in html - it makes sense in this case.
<div class="menu">
<ul class="menu">
<li class="home_button">home</li>
<li class="work_button"><a href="#work" data-fadeOut="#cms,#contact">work</a></li>
<li class="cms_button"><a href="#cms" data-fadeOut="#work,#contact">cms</a></li>
<li class="contact_button"><a href="#contact" data-fadeOut="#cms,#work">contact</a></li>
</ul>
</div>
Set a target on each one
$(function() {
$('li a', '.menu').click(function(e) {
e.preventDefault();
var target = $($(this).attr('href'));
$($(this).data('fadeOut')).fadeOut(function(){
target.fadeIn();
});
})
$('.panel').hide();
$('.home_button').click(function(){
$('.panel:visible').fadeOut();
});
})
You get the nifty coincidental benefit here of this working ok even without javascript, since <a href="#id"></a> also happens to be the syntax for 'scroll to element'
Also, you should not name functions with an initial capital letter. Because javascript has no native way of determining which functions are meant to be run in a functional style and which are meant to be used as constructors with the new keyword, it is a very well known convention to use upper case names only for functions which are meant to be invoked with the new keyword. Learn more about javascript capitalization conventions here.
Edit: Totally misread your code. I believe my rewrite now reflects what you're trying to do | {
"domain": "codereview.stackexchange",
"id": 1558,
"tags": "javascript, jquery, animation"
} |
KNN RandomizedSearchCV typerror | Question: While trying to study a binary classification problem with KNN and trying to tune the parameters of the model, I'm getting a TypeError that I don't quite understand. Is a parameter missing or something?
TypeError: __init__() takes exactly 1 positional argument (0 given)
Here is my code:
import pandas as pd
import numpy as np
from sklearn import model_selection
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RandomizedSearchCV
# generate example data
X = pd.DataFrame({
'a': np.linspace(0, 10, 500),
'b': np.random.randint(0, 10, size=500),
})
y = np.random.randint(0, 2, size=500)
# set search parameters
n_neighbors = [int(x) for x in np.linspace(start = 1, stop = 100, num = 50)]
weights = ['uniform','distance']
metric = ['euclidean','manhattan','chebyshev','seuclidean','minkowski']
random_grid = {
'n_neighbors': n_neighbors,
'weights': weights,
'metric': metric,
}
# run search
knn = KNeighborsClassifier()
knn_random = RandomizedSearchCV(estimator = knn, random_state = 42,n_jobs = -1,param_distributions = random_grid,n_iter = 100, cv=3,verbose = 2)
knn_random.fit(X,y)
knn_random.best_params_
Full error:
_RemoteTraceback Traceback (most recent call last)
_RemoteTraceback:
"""
Traceback (most recent call last):
File "C:\Users\dungeon\Anaconda3\lib\site-packages\sklearn\externals\joblib\externals\loky\process_executor.py", line 418, in _process_worker
r = call_item()
File "C:\Users\dungeon\Anaconda3\lib\site-packages\sklearn\externals\joblib\externals\loky\process_executor.py", line 272, in __call__
return self.fn(*self.args, **self.kwargs)
File "C:\Users\dungeon\Anaconda3\lib\site-packages\sklearn\externals\joblib\_parallel_backends.py", line 567, in __call__
return self.func(*args, **kwargs)
File "C:\Users\dungeon\Anaconda3\lib\site-packages\sklearn\externals\joblib\parallel.py", line 225, in __call__
for func, args, kwargs in self.items]
File "C:\Users\dungeon\Anaconda3\lib\site-packages\sklearn\externals\joblib\parallel.py", line 225, in <listcomp>
for func, args, kwargs in self.items]
File "C:\Users\dungeon\Anaconda3\lib\site-packages\sklearn\model_selection\_validation.py", line 528, in _fit_and_score
estimator.fit(X_train, y_train, **fit_params)
File "C:\Users\dungeon\Anaconda3\lib\site-packages\sklearn\neighbors\base.py", line 916, in fit
return self._fit(X)
File "C:\Users\dungeon\Anaconda3\lib\site-packages\sklearn\neighbors\base.py", line 254, in _fit
**self.effective_metric_params_)
File "sklearn\neighbors\binary_tree.pxi", line 1071, in sklearn.neighbors.ball_tree.BinaryTree.__init__
File "sklearn\neighbors\dist_metrics.pyx", line 286, in sklearn.neighbors.dist_metrics.DistanceMetric.get_metric
File "sklearn\neighbors\dist_metrics.pyx", line 443, in sklearn.neighbors.dist_metrics.SEuclideanDistance.__init__
TypeError: __init__() takes exactly 1 positional argument (0 given)
"""
The above exception was the direct cause of the following exception:
TypeError Traceback (most recent call last)
<ipython-input-2-b5a9f7ea82d0> in <module>
29 knn_random = RandomizedSearchCV(estimator = knn, random_state = 42,n_jobs = -1,param_distributions = random_grid,n_iter = 100, cv=3,verbose = 2)
30
---> 31 knn_random.fit(X,y)
32 knn_random.best_params_
~\Anaconda3\lib\site-packages\sklearn\model_selection\_search.py in fit(self, X, y, groups, **fit_params)
720 return results_container[0]
721
--> 722 self._run_search(evaluate_candidates)
723
724 results = results_container[0]
~\Anaconda3\lib\site-packages\sklearn\model_selection\_search.py in _run_search(self, evaluate_candidates)
1513 evaluate_candidates(ParameterSampler(
1514 self.param_distributions, self.n_iter,
-> 1515 random_state=self.random_state))
~\Anaconda3\lib\site-packages\sklearn\model_selection\_search.py in evaluate_candidates(candidate_params)
709 for parameters, (train, test)
710 in product(candidate_params,
--> 711 cv.split(X, y, groups)))
712
713 all_candidate_params.extend(candidate_params)
~\Anaconda3\lib\site-packages\sklearn\externals\joblib\parallel.py in __call__(self, iterable)
928
929 with self._backend.retrieval_context():
--> 930 self.retrieve()
931 # Make sure that we get a last message telling us we are done
932 elapsed_time = time.time() - self._start_time
~\Anaconda3\lib\site-packages\sklearn\externals\joblib\parallel.py in retrieve(self)
831 try:
832 if getattr(self._backend, 'supports_timeout', False):
--> 833 self._output.extend(job.get(timeout=self.timeout))
834 else:
835 self._output.extend(job.get())
~\Anaconda3\lib\site-packages\sklearn\externals\joblib\_parallel_backends.py in wrap_future_result(future, timeout)
519 AsyncResults.get from multiprocessing."""
520 try:
--> 521 return future.result(timeout=timeout)
522 except LokyTimeoutError:
523 raise TimeoutError()
~\Anaconda3\lib\concurrent\futures\_base.py in result(self, timeout)
430 raise CancelledError()
431 elif self._state == FINISHED:
--> 432 return self.__get_result()
433 else:
434 raise TimeoutError()
~\Anaconda3\lib\concurrent\futures\_base.py in __get_result(self)
382 def __get_result(self):
383 if self._exception:
--> 384 raise self._exception
385 else:
386 return self._result
TypeError: __init__() takes exactly 1 positional argument (0 given)
Answer: The problem is with the metric seuclidean. The SEuclideanDistance constructor requires a parameter (see the DistanceMetric documentation). This parameter is not given, hence the error about the missing argument.
It should in principle be possible to give the parameter in the searchgrid, but there are several known issues with RandomizedSearchCV that make this impossible (or at least harder than necessary).
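For example, the search completes without error once seuclidean (whose constructor needs its V argument) is left out of the grid — a minimal sketch with synthetic data, assuming a recent scikit-learn:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import RandomizedSearchCV

rng = np.random.RandomState(42)
X = rng.rand(200, 2)
y = rng.randint(0, 2, size=200)

param_distributions = {
    'n_neighbors': list(range(1, 31)),
    'weights': ['uniform', 'distance'],
    # 'seuclidean' omitted: its DistanceMetric constructor requires a
    # variance vector V, which this search grid has no clean way to supply.
    'metric': ['euclidean', 'manhattan', 'chebyshev', 'minkowski'],
}

search = RandomizedSearchCV(
    KNeighborsClassifier(),
    param_distributions,
    n_iter=10,
    cv=3,
    random_state=42,
)
search.fit(X, y)
print(search.best_params_)
```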
So until these issues are fixed I would suggest removing seuclidean from the list of search parameters, or using GridSearchCV. Which option is the best choice depends on the details of what you are trying to achieve. | {
"domain": "datascience.stackexchange",
"id": 5520,
"tags": "machine-learning, python, classification, hyperparameter, k-nn"
} |
How does membrane potential vary between intracellular membranes and the cellular membrane? | Question: Question
Does each type of membrane have a different membrane potential? I'm especially interested in answers that can cite academic papers that have attempted to measure membrane potentials.
Discussion
I've asked about the composition of membranes before, and although I received some information, I didn't get all the information I was after. This isn't a problem with our community but rather with the field at large: the popular thinking is membranes are membranes are membranes (mostly due to the difficulties in studying membrane biophysics experimentally).
This is how wikipedia defines membrane potential:
Membrane potential (also transmembrane potential or membrane voltage)
is the difference in electric potential between the interior and the
exterior of a biological cell. - Wikipedia
This isn't strictly true. Intracellular membranes also have membrane potentials as one can imagine, and there is some unverified information regarding compartmental pH values. This is why I am interested to find out if there have been studies attempting to quantify this across the cell membrane, and across different subcellular membranes.
Answer: Yes, various intracellular membranes do have potential differences, but as you can imagine they are more difficult to measure experimentally, so in general data on this is scarce.
Summary
Mitochondrial membrane: 150mV-180mV with negativity on the matrix side. Seth et al 2011
Endoplasmic reticulum membrane: 75-95mV with negativity in the ER. Qin et al 2011, Worley et al 1994
Golgi: No notable membrane potential. Schapiro & Grinstein 2000
Lysosomal: 20mV with more negativity on the cytosolic side. Koivusalo et al 2011
Discussion
The mitochondrial membrane potential is probably the best studied case. This is the potential difference over the inner mitochondrial membrane, between the mitochondrial matrix and the intermembrane space; it is about 150mV (more negative on the matrix side). There is also a potential difference between the mitochondrial cristae (folds of the inner membrane) and the matrix which is used to drive ATP synthesis; it is probably larger than 150mV, but this is still poorly understood I think. The outer membrane is permeable (has pores) so there should be no potential difference between the intermembrane space and the cytosol.
For the endoplasmic reticulum (ER), there is little data available I think, but there are some estimates based on measurements of ion transport, which gives a value of about 75 to 95 mV (more negative inside the ER). Probably this depends a lot on the situation, as the ER is involved in regulating ion homeostasis (notably Ca2+).
For the Golgi apparatus, this study found that no potential difference vs. the cytosol, based on the movements on H+ and counterions.
Lysosomal membrane potential has been measured directly, giving values of about 20mV (more positive inside).
For the nucleus I don't know of any data, but I would assume there is no potential difference here, given that the nuclear membrane has large pores. | {
"domain": "biology.stackexchange",
"id": 4816,
"tags": "biochemistry, biophysics, cell-membrane, literature"
} |
What is the difference between mounted and unmounted filters? | Question: I'd like to know more about filter wheels and filters, but don't know the difference between these types. Can anyone explain (ideally with images)?
Answer: Unmounted filters are just the bare filters which are designed to be put into slots in filter wheels (which have square or round recessed slots in them to hold filters). Mounted filters have a metal rim around them which holds (and slightly protects) the edge of the filters and typically have screw threads on them to enable them to be screwed into eyepieces.
The image below shows one of Las Cumbres Observatory's 14-position filter wheels, which holds 2 inch / 50 mm round filters. This wheel is holding both mounted filters (in the right/top-right quadrant) and unmounted filters in the remainder of the wheel. Here is a closeup of one of the mounted filters: | {
"domain": "astronomy.stackexchange",
"id": 5903,
"tags": "photography"
} |
Present syntax rules in a more succinct way | Question: I am summarizing syntax rules for a small language:
\begin{eqnarray*}
e_C &::=& \epsilon \mid constant \\
\textit{prefix-op} &::=& - \\
\textit{infix-op} &::=& + \mid - \mid * \\
e_E &::=& e_C \mid \textit{prefix-op} \; e_E \mid e_E \; \textit{infix-op} \; e_E \mid \textit{function}_E (e_{E,0}, e_{E,1},\ldots) \mid \textit{specialE} \\
e_V &::=& e_C \mid \textit{prefix-op} \; e_V \mid e_V \; \textit{infix-op} \; e_V \mid \textit{function}_V (e_{V,0}, e_{V,1}, \ldots) \mid \textit{specialV}
\end{eqnarray*}
The expressions $e_E$ and $e_V$ have something common: $e_C$. Some of their operators look same: $\textit{prefix-op}$ and $\textit{infix-op}$. Their functions $\textit{function}_E$ and $\textit{function}_V$ are 2 different sets, and $\textit{specialE}$ and $\textit{specialV}$ are totally different.
I am wondering if it is still possible to present this syntax in a more succinct, more compact, conventional way... Could anyone help?
Answer: Van Wijngaarden grammars are a way to handle this problem. It would be something like:
KIND : e ; v .
e KIND : e c ; prefix-op, e KIND ; e KIND, infix-op, e KIND ; function KIND (e KIND, ",", e KIND, "," ... ) ; special KIND .
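To see how the two-level rule works, here is a tiny illustrative sketch (hypothetical, not the output of any real van Wijngaarden tool) that substitutes each value of the metanotion KIND into the hyper-rule to recover the two one-level rule sets:

```python
# Substituting each allowed value of the metanotion KIND into the
# hyper-rule yields one ordinary production per expression kind.
HYPER_RULE = ("e KIND : e c ; prefix-op, e KIND ; e KIND, infix-op, e KIND ; "
              "function KIND ( e KIND, ... ) ; special KIND .")

for kind in ("e", "v"):
    print(HYPER_RULE.replace("KIND", kind))
```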
(There is no "..." in the formalism, so you'd either have to extend the grammar syntax or express it in another way). | {
"domain": "cs.stackexchange",
"id": 780,
"tags": "formal-languages, formal-grammars, semantics"
} |
Magnetic analog for how charge distributions can be approximated as point charges? | Question: I know that far away from a distribution of charges, the fields can be approximated using Gauss' Law, and the fields become very similar to that of a point charge. Is there an analog for this in magnetics?
For example, how would this work for a loop of current? Could it be approximated as a "point magnet"? That doesn't seem right to me, so I'm guessing it has no analog.
But if a ball was magnetized, could it be approximated in such a fashion? I guess I am asking, at a sufficient distance from this magnetized ball, what do the nature of the fields caused by the equivalent currents due to its magnetization become "similar to"?
Answer: The loop of current can indeed be considered as a small bar magnet with the same magnetic moment as the current loop. Unlike in electrostatics, magnetic monopoles do not exist, so the leading far-field term of any localized current distribution is not a monopole ($1/r^2$) term but the dipole term, $\vec{B}(\vec{r}) = \frac{\mu_0}{4\pi}\,\frac{3(\vec{m}\cdot\hat{r})\hat{r} - \vec{m}}{r^3}$, which falls off as $1/r^3$. The magnetized ball is a magnetic dipole, and hence it will have the same effects at a large distance as any other magnetic dipole, including a current-carrying loop. | {
"domain": "physics.stackexchange",
"id": 73217,
"tags": "electromagnetism, electrostatics, magnetic-fields, magnetostatics"
} |
Does anyone know how to install ROS without a graphics card? | Question:
I need everything except graphics dependent packages. How do I install ROS this way? More than just the bare bones but without graphics?
Originally posted by GraceC on ROS Answers with karma: 1 on 2012-10-18
Post score: 0
Answer:
You can install individual ros stacks (or by package in new style groovy or later). They will pull in all their required dependencies if you use apt to install them.
Also even if you don't have a graphics card you can still usually install the graphics libraries. They just won't be very useful, and will require a little more disk space.
Originally posted by tfoote with karma: 58457 on 2013-01-14
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 11429,
"tags": "ros"
} |
Energy in Electromagnetic Waves | Question: Looking at diagrams of electromagnetic waves, it would appear to me that at certain times the waves have zero amplitude, and consequently zero energy. Indeed, substituting the sinusoidal terms into the Poynting vector equation, it would seem that at certain times the energy disappears. Why is this not the case?
Answer: At certain positions in the waves, the EM field is zero and thus zero energy is stored at those positions. But at other positions, the EM field is at a maximum, and those points are local maxima of energy. That pattern of oscillation between zero energy and maximum energy moves in the direction of propagation of the wave but never changes - in particular, the maximum value of the EM field (the amplitude) stays constant, and there is no time at which the EM field is zero everywhere.
As for your conclusion from the definition of the Poynting vector that the energy disappears at certain times: it's not correct, but I couldn't tell you why without seeing how you did it. What I can do is show the calculation for an electromagnetic plane wave, defined by
$$\vec{E}(z,t) = E_0\hat{x}\sin(kz - \omega t)$$
The corresponding magnetic field is
$$\vec{B}(z,t) = \frac{1}{c}\hat{k}\times\vec{E}(z,t) = \frac{E_0}{c}\hat{y}\sin(kz - \omega t)$$
since I'm setting the direction of propagation as $\hat{k} = \hat{z}$. Check that this satisfies Maxwell's equations if you want. The energy density is
$$\begin{align}
u(z,t)
&= \frac{\epsilon_0}{2}E(z,t)^2 + \frac{1}{2\mu_0}B(z,t)^2 \\
&= \frac{\epsilon_0}{2}\biggl(E_0\hat{x}\sin(kz - \omega t)\biggr)^2 + \frac{1}{2\mu_0}\biggl(\frac{E_0}{c}\hat{y}\sin(kz - \omega t)\biggr)^2 \\
&= \epsilon_0 E_0^2\sin^2(kz - \omega t)
\end{align}$$
using $\frac{1}{c^2} = \epsilon_0\mu_0$. This energy density does vary from point to point, but at any fixed time, if you take the average over one cycle, a length $\frac{2\pi}{k}$, you get
$$\frac{k}{2\pi}\int_0^{2\pi/k} u(z,t)\,\mathrm{d}z
= \frac{k}{2\pi}\int_0^{2\pi/k} \epsilon_0 E_0^2\sin^2(kz - \omega t)\,\mathrm{d}z
= \frac{k}{2\pi}\,\epsilon_0 E_0^2\,\frac{\pi}{k}
= \frac{\epsilon_0 E_0^2}{2}$$
which does not depend on time. So the average energy density is constant, it does not ever go to zero. | {
"domain": "physics.stackexchange",
"id": 19755,
"tags": "electromagnetic-radiation"
} |
Torque required by a motor: using a counterweight to reduce? | Question: I have a motor and gearbox that is rated for a certain amount of torque in newton meters. Its primary load $M_1$ is connected by a rigid rod (assumed massless) length $L_1$. So I reckon the (max) torque required is $L_1 {\times} M_1 {\times} g$, achieved at the rotation angle that's directly perpendicular / furthest out from the motor+gearbox.
However that's a bit outside of my motor+gearbox's specs. So my plan was to put a "counterweight" directly opposed on the other side to oppose this torque, and therefore get my requirements within range. The counterweight is just a mass $M_2$ attached a distance $L_2$ in the same axis of the first load, but going on the other side of the motor+gearbox
Check the image below (not reproduced here): load $M_1$ on arm $L_1$ and counterweight $M_2$ on arm $L_2$, on opposite sides of the motor+gearbox.
Questions:
Does this scheme physically work, as in lower my (max) torque required on the motor+gearbox to $M_1 {\times} L_1 {\times} g - M_2 {\times} L_2 {\times} g$?
If so this seems pretty sweet, but what other physics quantities might I be sacrificing in this setup? Speed? Forces on the shaft? Any other practical considerations I should think about?
Answer: Torque $\tau$ is related to angular acceleration $\alpha$ by the equation:
$\tau = I\alpha$
where $I$ is the moment of inertia of the rotating body. The moment of inertia in the case of a point mass $m$ is:
$I = mr^2$
The moment of inertia opposing acceleration by the motor's torque will therefore increase due to the additional mass $M_2$ by the amount:
$I_{incr} = M_2r_2^2$
Practically, this means that while you will now be able to get the motor to lift $M_1$ because it is counterbalanced by $M_2$, applying an equivalent torque to lower $M_1$ from any position from which it can be lowered will now result in less angular acceleration. Your system will be more 'sluggish', i.e. it will be less responsive.
Yes, your torque formulation is correct.
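As a numeric illustration of the trade-off, with hypothetical masses and arm lengths (none of these values come from the question):

```python
# The counterweight lowers the peak gravitational torque the motor must
# supply, but raises the moment of inertia it must accelerate.
g = 9.81            # m/s^2
M1, L1 = 2.0, 0.5   # primary load: 2 kg on a 0.5 m arm
M2, L2 = 1.5, 0.4   # counterweight: 1.5 kg on a 0.4 m arm

tau_no_cw = M1 * L1 * g              # peak torque without counterweight
tau_cw = (M1 * L1 - M2 * L2) * g     # peak torque with counterweight
extra_inertia = M2 * L2 ** 2         # added moment of inertia (point mass)

print(f"{tau_no_cw:.2f} N*m -> {tau_cw:.2f} N*m, extra I = {extra_inertia:.2f} kg*m^2")
```

With these numbers, the peak torque drops from about 9.81 N·m to about 3.92 N·m, at the cost of an extra 0.24 kg·m² of inertia.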
Further about 'sluggish':
In the figure below we analyse motion of $M_1$, in the right hemisphere. The analysis applies for the mirror image case in the left hemisphere. The torque required to lift $M_1$ is proportional to $cos\theta$, where $\theta$ is the angle between the vertical (mass weight) and the direction of action of the torque. Without $M_2$, the motor would not be able to lift $M_1$ between some points $A$ and $C$, since $cos\theta$ would be high, close to $1$ near $\theta = 0^o$ (horizontal shaft position). The addition of $M_2$ will facilitate upward motion of $M_1$ between $A$ and $C$, a positive thing.
The negative effect of adding $M_2$ is that all acceleration to lower $M_1$ (red arrows in figure) will necessarily require a longer acceleration time, since $\alpha = \tau / I$ and $I$ is now larger. Changes in rotational direction will also take longer because of the higher angular momentum to be overcome. There will be a larger time lag between changing torque direction in the motor (by electrical control) and the masses changing direction. This delayed response is called sluggishness. | {
"domain": "physics.stackexchange",
"id": 49291,
"tags": "newtonian-mechanics, torque"
} |
Smallest possible controlled chain reaction-based nuclear fission reactor? | Question: I think it could be a reactor utilizing californium-242 (or, at least, weapon-grade U-235), cooled and moderated by heavy water.
Essentially, it would be similar to an atomic bomb, but - of course - it would be optimized to stay around the equilibrium state.
The result would probably be a very strong neutron source.
I think it could be used for various things, mainly in space applications.
Have any cost/size estimates of this ever been made?
Answer: The RM-1 Russian submarine reactor had a core of less than one cubic metre. It had about 100kg fuel load, which was 90% enriched (i.e. 90kg) Uranium 235. This was liquid-metal cooled [specifically a "eutectic lead-bismuth alloy (44.5 wt% lead, 55.5 wt% bismuth)" - source as below, p40], so didn't need a moderator.
Submarine 901 had in its starboard reactor just 30.6 kg of Uranium 235; this was at 20% enrichment, so a total fuel load of 153 kg.
These were controllable chain-reaction based reactors.
Source:
NKS-138 Russian Nuclear Power Plants for Marine Applications
Ole Reistad, Norwegian Radiation Protection Authority, Norway
Povl L. Ølgaard, Risø National Laboratory, Denmark
Published by Nordic Nuclear Safety Research, April 2006
ISBN: 87-7893-200-9
http://www.nks.org/scripts/getdocument.php?file=111010111120029 | {
"domain": "engineering.stackexchange",
"id": 2032,
"tags": "nuclear-technology"
} |
Is this a valid formulation of three dimensional Quantum Mechanics? | Question: I've learned 1D quantum mechanics so far. I was thinking of generalising the ideas into three dimensions. This is what seems most natural to me:
The wave-function is no longer a linear vector, but a three dimensional matrix. The wave function matrix should satisfy in the position basis:
$$\iiint|\psi (x,y,z)|^2 dx dy dz=1$$
The wave function can be converted to the momentum basis using a Fourier transform. The Fourier transform is no longer just a 2D infinite matrix. I think it should now be a 6 dimensional infinite matrix. My reasoning is: a three dimensional finite square matrix has $n^3$ elements. The Fourier transform assigns to each of these elements a three dimensional matrix with $n^3$ elements. So the total number of elements in the Fourier transform matrix is: $n^3\cdot n^3=n^6$. This implies six dimensions.
The Fourier transform should be:
$$\psi (p_x,p_y,p_z)=h^{-3/2}\iiint\psi(x,y,z) e^{i\left(\frac{2\pi x p_x}{h}+\frac{2\pi y p_y}{h}+\frac{2\pi z p_z}{h}\right)}dx dy dz$$
Similar to the Fourier transform, the Hamiltonian should be another 6 dimensional infinite matrix. The look of Schrodinger's equation seems to remain unchanged, as in it can still be written as:
$$i\frac{h}{2\pi}\frac{d}{dt} |\psi \rangle=H|\psi \rangle$$
But the wave function and the hamiltonian should respectively be interpreted as 3 dimensional and 6 dimensional matrices.
Is this the correct generalisation to three dimensions?
Answer: It is not "wrong" per se, in the sense that you are going to get to the right expressions, but referring to the wavefunction as a matrix is not really the right way to think about it. Calling the wavefunction a matrix suggests that it has become an operator, when it is still just a vector in Hilbert space. The fact that you can split up the "index" of the vector into 3 parts does not really change the fact that you only have one index.
To give a concrete example of what I mean, if I have a matrix $M_{ij}$ I can multiply it by a vector $v_i$ to get a new vector, say $u_i$
$$
u_i = \sum_j M_{ij}v_j
$$
However, if you were to multiply your "matrix" wavefunction, $\psi(x,y,z)$ by a "vector" function $f(z)$
$$
\int dz\; \psi(x,y,z)f(z)
$$
then, as far as quantum mechanics is concerned, you have done something really weird. You should always be getting integrals over all three spatial dimensions; for example, in the Fourier transform you have
$$
\tilde{\psi}(p_x,p_y,p_z) = \int dx dy dz\; e^{\imath(p_x x + p_y y + p_z z)} \psi(x,y,z)
$$
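This point can be illustrated numerically with NumPy's discrete transform as a stand-in for the continuous one (a sketch; the grid size and the random test state are arbitrary): transforming all three axes at once is the same as flattening the state into a single column vector, applying one big unitary matrix, and reshaping back.

```python
import numpy as np

n = 8
rng = np.random.default_rng(0)
psi = rng.normal(size=(n, n, n)) + 1j * rng.normal(size=(n, n, n))
psi /= np.linalg.norm(psi)                      # normalise like the integral of |psi|^2 = 1

# 3D DFT = applying the 1D DFT along each axis in turn ...
phi = np.fft.fftn(psi, norm="ortho")

# ... which is one n^3 x n^3 unitary acting on the flattened state vector.
F = np.fft.fft(np.eye(n), norm="ortho")         # 1D unitary DFT matrix
U = np.kron(np.kron(F, F), F)                   # n^3 x n^3 operator
phi_flat = U @ psi.reshape(-1)

assert np.allclose(phi.reshape(-1), phi_flat)   # same result
assert np.isclose(np.linalg.norm(phi), 1.0)     # norm preserved (unitary)
print("3D FFT == one unitary matrix on the flattened state; norm preserved")
```

The "three indices" of the wavefunction behave exactly like one long vector index, and the transform is one ordinary (if large) matrix.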
That is, you don't really have 3 indices; you have 1 vector-valued index. In the same way, the Hamiltonian (and all other operators) continues to correspond to a matrix. | {
"domain": "physics.stackexchange",
"id": 82993,
"tags": "quantum-mechanics, hilbert-space, wavefunction, schroedinger-equation"
} |
Which Chandrasekhar Limit do I use? 1.39 or 1.44? | Question: Different sources online say that the Chandrasekhar Limit is either 1.40 or 1.39, or 1.44 solar masses. Why the discrepancy? I heard it might have to do with the composition of the white dwarf, but, correct me if I'm wrong, don't pretty much all white dwarf stars have the same composition?
Answer: The "classic" Chandrasekhar mass is given by
$$ M_{\rm Ch} = 1.445 \left(\frac{\mu_e}{2}\right)^{-2}\ M_{\odot},$$
where $\mu_e$ is the number of mass units per electron ($\mu_e =2$ for ionised carbon, oxygen or helium; $\mu_e = 1$ for hydrogen, $\mu_e= 56/26$ for iron (56)).
This assumes a white dwarf star of uniform, ionised composition and an equation of state given by that for ideal fermions (electrons in this case). It also ignores rotation and it uses Newtonian gravity. When defined in this way, the Chandrasekhar mass is obviously composition dependent - but most white dwarfs are expected to be formed from something with $\mu_e =2$.
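Plugging a few compositions into the formula above shows the composition dependence directly (a quick sketch; the $\mu_e$ values are the ones quoted above):

```python
# Classic Chandrasekhar mass: M_Ch = 1.445 * (mu_e / 2)**-2 solar masses
compositions = {
    "hydrogen (mu_e = 1)": 1.0,
    "He / C / O (mu_e = 2)": 2.0,
    "iron-56 (mu_e = 56/26)": 56 / 26,
}

for name, mu_e in compositions.items():
    m_ch = 1.445 * (mu_e / 2) ** -2
    print(f"{name}: M_Ch = {m_ch:.3f} M_sun")
```

For $\mu_e = 2$ this reproduces the quoted $1.445\,M_{\odot}$, while iron's $\mu_e = 56/26$ lowers it to about $1.25\,M_{\odot}$ (and hydrogen would raise it to $5.78\,M_{\odot}$, which is why hydrogen-dominated interiors never arise in practice for white dwarfs).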
In practice, a white dwarf can never become this massive. There are (at least) 4 effects that can lower the de facto Chandrasekhar mass (if it is taken to mean the maximum mass of a stable white dwarf) and at least one effect that can increase it.
The effects that reduce it:
Electrostatic interactions: The electrons and ions do not form an ideal Fermi gas because of Coulomb interactions. The net result is to make a white dwarf slightly more compressible and the maximum mass about 2% lower. The correction is composition dependent; it is stronger for white dwarfs made of material with a larger atomic number.
General Relativity: Massive white dwarfs are strongly affected by General Relativity. White dwarfs will become unstable at a finite density when the hydrostatic equilibrium is treated with GR. This finite density is reached at about $1.38-1.39 M_{\odot}$.
Inverse beta decay: At high densities the electron Fermi energy becomes high enough to initiate inverse beta decay reactions. Electrons are captured by protons in the nuclei and this renders the star unstable. For a carbon white dwarf this will also happen at about $1.39 M_{\odot}$; for oxygen the threshold is lower, at $\sim 1.37M_{\odot}$.
Pycnonuclear reactions: At high densities, even at low temperatures, fusion reactions can be initiated by quantum tunnelling. This changes the composition of the white dwarf and can change $\mu_e$ or lower the density threshold for inverse beta decay, making the star unstable.
The effect that can increase the maximum mass for stability is rotation. Some authors have claimed that the limit can be increased to as high as $2.6M_{\odot}$ in certain circumstances, though these are usually referred to as Super-Chandrasekhar mass (e.g. Das & Mukhopadhyay 2013) and are not stable. A stability analysis of rotating white dwarfs in GR found that the maximum stable mass was only increased to around $1.47 M_{\odot}$ for a Carbon white dwarf (Boshkayev et al. 2012), but these would be rotating faster than any observed white dwarfs. | {
"domain": "astronomy.stackexchange",
"id": 3259,
"tags": "white-dwarf"
} |
Where does the Solar System end? | Question: The Sun is roughly 4 light-years away from the closest star system, the Alpha Centauri system. The planets in our Solar System, however, aren't even close to that far away from the Sun. Where does our Solar System end? Is the edge considered to be the orbit of Neptune, the Kuiper Belt, the Oort Cloud, or something else?
Note: this question on Physics SE is similar, but the answers posted here go in different directions.
Answer: According to the Case Western Reserve University webpage The Edge of the Solar System (2006) an important consideration is that
The whole concept of an "edge" is somewhat inaccurate as far as the solar system is concerned, for there is no physical boundary to it - there is no wall past which there's a sign that says, "Solar System Ends Here." There are, however, specific regions of space that include outlying members of our solar system, and a region beyond-which the Sun can no longer hold any influence.
The last part of that definition appears to be a viable definition of the edge of the solar system. Specifically,
A valid boundary region for the "edge" of the solar system is the heliopause. This is the region of space where the sun's solar wind meets that of other stars. It is a fluctuating boundary that is estimated to be approximately 17.6 billion miles (120 A.U.) away. Note that this is within the Oort Cloud.
Though the article above is a bit dated, the notion of the heliopause has remained of interest to scientists, particularly how far away it is - hence the interest in the continuing Voyager missions, whose website states that the mission has 3 phases:
Termination Shock
Passage through the termination shock ended the termination shock phase and began the heliosheath exploration phase. Voyager 1 crossed the termination shock at 94 AU in December 2004 and Voyager 2 crossed at 84 AU in August 2007.
(AU = Astronomical Unit = mean Earth-sun distance = 150,000,000km)
Heliosheath
the spacecraft has been operating in the heliosheath environment which is still dominated by the Sun's magnetic field and particles contained in the solar wind.
As of September 2013, Voyager 1 was at a distance of 18.7 Billion Kilometers (125.3 AU) from the sun and Voyager 2 at a distance of 15.3 Billion kilometers (102.6 AU).
A very important thing to note from the Voyager page is that
The thickness of the heliosheath is uncertain and could be tens of AU thick taking several years to traverse.
Interstellar space, which NASA's Voyager page has defined as
Passage through the heliopause begins the interstellar exploration phase with the spacecraft operating in an interstellar wind dominated environment.
The Voyager mission page provides the following diagram of the parameters listed above
It is a bit complicated, as we do not know the full extent of the dynamics out there. A recent observation, reported in the article A big surprise from the edge of the Solar System, reveals that the edge may be blurred by
a strange realm of frothy magnetic bubbles,
Which is suggested in the article could be a mixing of solar and interstellar winds and magnetic fields, stating:
On one hand, the bubbles would seem to be a very porous shield, allowing many cosmic rays through the gaps. On the other hand, cosmic rays could get trapped inside the bubbles, which would make the froth a very good shield indeed. | {
"domain": "astronomy.stackexchange",
"id": 5564,
"tags": "the-sun, solar-system, oort-cloud, kuiper-belt"
} |
Half-atwood machine with accelerating pulley | Question: This is a follow-up to my previous question, in which I am now trying to calculate the acceleration of the cart (as before, the block surfaces are frictionless). The mass $m_2$ is attached to $M$ via a frictionless track that keeps it fixed onto the side of $M$ but allows it to move vertically with respect to $M$.
To do this, I first need to find the tension on the string.
I came up with the system of equations:
$$T=m_{1}\left( a-a_{M} \right)$$
$$T-m_{2}\text{g}=-m_{2}a$$
Where the acceleration of $m_1$ is $a - a_M$ (where $a$ is the magnitude of the acceleration of $m_2$) since $ m_1 $ moves right while $M$ moves left. However, is this the right way to approach it — would the accelerations even cancel each other since the surface is frictionless and the movement of $M$ cannot "pull" $m_1$ along?
Also, since the tension created is responsible for the acceleration of the $M + m_2$ system, isn't this equation also valid?: $$T = a_M\left(M + m_2 \right)$$
Clearly the solutions to both are not the same, so one (or possibly both) of the above is incorrectly accounting for the forces acting on the mass.
Furthermore, once this tension is found how do you account for the normal force between $M$ and $m_2$ that also affects the acceleration?
Alternatively, is it possible to solve this using conservation of momentum or by using center of mass?
Answer: Actually, due to the lack of friction, you could say $m_1$ would remain right there and $M$ would move underneath it. But you could say it the other way round: $M$ is still and $m_1$ moves to the left with acceleration $\frac{m_1a_M-T}{m_1}$. Now, $m_2$ would have an acceleration of $\frac{T-m_2g}{m_2}$. Both these accelerations are the same. Solve for the acceleration by adding the 2 equations (this will eliminate $T$) | {
"domain": "physics.stackexchange",
"id": 25524,
"tags": "homework-and-exercises, newtonian-mechanics, classical-mechanics, kinematics, acceleration"
} |
boundary conditions for liquid with surface tension | Question: So, one uses equations of motion to describe liquids (e.g. the Navier–Stokes equations). These are equations for $\vec{v}(\vec{r},t)$ with boundary conditions on the surface $S$ of the liquid (e.g. $\vec{v}(\vec{r}\in S,t) = \vec{0}$).
How should one incorporate surface tension $\sigma$ into these equations/boundary conditions? It seems only the boundary conditions must change, and $\Delta p = \sigma (1/R_1 + 1/R_2)$ is the first thing that comes to mind, but how does one get $1/R$ from $\vec{v}(\vec{r},t)$?
Answer: You don't want $1/R$ (although technically it means the same) but rather the full curvature term: $\Delta p=\sigma \kappa$. In fact you will get a source term in the Navier-Stokes equations that looks like this:
$$\sigma \kappa \delta(n) \mathbf{n} $$
where $\delta(n)$ is the Dirac Delta function that only has a value at the interface and $\mathbf{n}$ is the interface normal. The curvature $\kappa$ can be written as the divergence of the unit interface normal: $$\kappa=\nabla \cdot \mathbf{\frac{n}{|n|}} $$
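A quick numerical check of this curvature expression (a sketch; the grid resolution and circle radius are arbitrary): for a circular interface of radius $R$ in 2D represented by a signed-distance level set, $\kappa = \nabla \cdot (\mathbf{n}/|\mathbf{n}|)$ sampled on the interface should come out as $1/R$.

```python
import numpy as np

R = 1.0                       # circle radius; expect kappa = 1/R on the interface
N = 400
x = np.linspace(-2, 2, N)
X, Y = np.meshgrid(x, x, indexing="ij")
phi = np.sqrt(X**2 + Y**2) - R        # signed-distance level set, zero on the circle

# Interface normal n = grad(phi); curvature kappa = div(n / |n|)
gx, gy = np.gradient(phi, x, x)
norm = np.sqrt(gx**2 + gy**2) + 1e-12
nx, ny = gx / norm, gy / norm
kappa = np.gradient(nx, x, axis=0) + np.gradient(ny, x, axis=1)

# Sample kappa in a thin band around the zero level set:
on_interface = np.abs(phi) < 0.01
print("mean curvature on interface:", kappa[on_interface].mean())  # ~ 1/R
```

In a fluid solver this same divergence-of-the-unit-normal evaluation is what feeds the $\sigma \kappa \delta(n) \mathbf{n}$ source term.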
Apart from the source term you indeed also have boundary conditions on the interface which are basically the standard free slip condition and a jump for the normal stress coming again from the Laplace pressure. There is a good explanation of these in the first part of the seminal work on fluid-fluid CFD by Brackbill.
If you are interested in the curvature itself, I think Slides 22-28 of this course on wetting are probably also a good source to take a look at for more background. | {
"domain": "physics.stackexchange",
"id": 7626,
"tags": "surface-tension, navier-stokes"
} |
position of null carriers in N=256 subcarrier OFDM | Question: Can anyone tell me the position of the null subcarriers in $N=256$ subcarrier OFDM? Since the edges are assigned nulls to prevent interference from an adjacent symbol, I am assuming that subcarriers $-127$ to $-100$, then $0$, and then $100$ to $128$ are null. Is that a correct assumption? Please respond!
Answer: Positioning of null carriers in an OFDM signal is a protocol-dependent piece of information. Specifications will often call for some null carriers around the edge of the band (to allow for rolloff in the antialiasing filters in the front end). Another common place you'll find a null carrier is at the band center. This may be found in specifications that are targeted at low-cost (i.e. zero-IF) receiver technology. Zero-IF receivers' baseband DC offsets can cause large spurious responses in the tuned band center, hence the use of a null carrier there.
So, the short answer to your question is that it's unanswerable: it's dependent on the specifics of the system you're working with. | {
"domain": "dsp.stackexchange",
"id": 2616,
"tags": "matlab, signal-analysis, ofdm"
} |
Select the best feature selection method for classification | Question: I am trying to make predictions (using Weka) on a tabular dataset. It is a categorical dataset which is encoded with a label encoder.
I got a good result for SVM and Logistic Regression, namely the accuracy is around 85%.
The dataset is high-dimensional and I would like to fine-tune my accuracy.
So, I am thinking about the feature selection method. I found different feature selection techniques, such as CfsSubsetEval, Classifier Attribute eval, classifier subset eval, Cv attribute eval, Gain ratio attribute eval, Info gain attribute eval, OneRattribute eval, principal component, relief f attribute eval, Symmetric uncertainty, Wrapper subset eval.
I would like to know which one would be the best for the dataset that shows good accuracy with Logistic Regression or SVM?
Answer: I don't think that there is a single feature selection method that works best with a specific algorithm; what they do is select the best features based on various criteria. These features can be useful or not to the algorithm that does the classification, regardless of what this algorithm is.
Without knowing anything about your data or their distribution, you can simply try a lot of those methods to see which produces the best results, and see if these generalize with the test set.
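To make one of the listed criteria concrete, here is information gain (the idea behind "Info gain attribute eval") computed by hand on a tiny, made-up, label-encoded dataset. This is a sketch of the criterion, not Weka's implementation:

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def info_gain(feature, labels):
    """H(class) - H(class | feature): higher means a more informative feature."""
    n = len(labels)
    cond = 0.0
    for value in set(feature):
        subset = [l for f, l in zip(feature, labels) if f == value]
        cond += len(subset) / n * entropy(subset)
    return entropy(labels) - cond

# Toy label-encoded data: f1 predicts the class perfectly, f2 is independent noise.
y  = [0, 0, 1, 1, 0, 0, 1, 1]
f1 = [0, 0, 1, 1, 0, 0, 1, 1]     # identical to y -> gain = H(y) = 1 bit
f2 = [0, 1, 0, 1, 0, 1, 0, 1]     # independent of y -> gain = 0

print("gain(f1) =", info_gain(f1, y))
print("gain(f2) =", info_gain(f2, y))
```

Ranking features by such a score and keeping the top ones is essentially what the Weka evaluator does; whether the kept features actually help SVM or Logistic Regression still has to be checked on a held-out test set.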
Also, SVM itself can be used for feature selection, since it finds the optimal coefficient for each feature. I don't know if you can access those coefficients through Weka (sorry, not familiar with the software), but if you could they can be an indicator of how important each feature is. | {
"domain": "datascience.stackexchange",
"id": 11225,
"tags": "classification, feature-selection, weka"
} |
Why is Heisenberg's uncertainty principle not an experimental error since it is the error created by photons striking on elementary particles? | Question: Why is Heisenberg's uncertainty principle not an experimental error since it is the error created by photons striking on elementary particles?
Answer:
it is the error created by photons striking on elementary particles
It's not. Heisenberg's uncertainty principle actually has nothing to do with any particular experiment, or any particular interaction. It's a purely mathematical statement about waves.
Its true meaning is explained in detail on the Wikipedia page, but the gist is that if you have a wave, you can express it as a function of position, $\psi(x)$, or of momentum, $\phi(p)$. These two functions are Fourier transforms of each other. You can then calculate the variance of each function, $\sigma_x^2$ and $\sigma_p^2$ respectively, using formulas given on Wikipedia, and you will find that these two quantities obey the relationship
$$\sigma_x \sigma_p \ge \frac{\hbar}{2}$$
Since $\sigma$ is a measure of how tightly concentrated the wave is around one particular point, this tells you that a wave which is tightly concentrated in position must be fairly spread out in momentum, and vice versa. (For the proper definitions of "concentrated" and "spread out," of course.)
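This inequality can be checked numerically. Below is a sketch (in units where $\hbar = 1$, with an arbitrary Gaussian width) that builds a Gaussian wavefunction, computes $\sigma_x$ from $|\psi(x)|^2$ and $\sigma_p$ from the FFT-based momentum distribution, and confirms $\sigma_x \sigma_p \approx \hbar/2$; the Gaussian is the state that saturates the bound.

```python
import numpy as np

hbar = 1.0
sigma = 0.7                          # arbitrary Gaussian width
N, L = 4096, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

psi = np.exp(-x**2 / (4 * sigma**2))          # Gaussian wavepacket
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)   # normalise

# Momentum-space amplitude via FFT; p = hbar * k
p = hbar * 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(N, dx))
phi = np.fft.fftshift(np.fft.fft(psi)) * dx / np.sqrt(2 * np.pi * hbar)
phi /= np.sqrt(np.sum(np.abs(phi)**2) * (p[1] - p[0]))  # renormalise on the p grid

def std(grid, density, dg):
    mean = np.sum(grid * density) * dg
    return np.sqrt(np.sum((grid - mean)**2 * density) * dg)

sx = std(x, np.abs(psi)**2, dx)
sp = std(p, np.abs(phi)**2, p[1] - p[0])
print("sigma_x * sigma_p =", sx * sp, ">= hbar/2 =", hbar / 2)
```

No photons or measurements appear anywhere in this calculation: the bound is a property of the wave and its Fourier transform alone.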
The only way this connects to measurement is that, if you make a series of position measurements on objects with the same quantum state, the variance of those measurements will tend to the variance of the wavefunction. And similarly for momentum. So with a large number of measurements of both position and momentum, if you compute their variances, you'll find that they have to satisfy that inequality. In a sense, it's a statement about the particle's state before it gets hit with a photon (or something else), not some effect of the photon hitting it. | {
"domain": "physics.stackexchange",
"id": 15505,
"tags": "quantum-mechanics, heisenberg-uncertainty-principle, measurement-problem"
} |
Mathematically Inclined Signal and Systems / Signal Processing Book Recommendations | Question: I'm an electronics engineering student with a high inclination to analysis and pure mathematics. I was just wondering if there was any book (or any resource) that treats signals and systems and signal processing with a lot of mathematical rigour (actually doing proper complex analysis, using functional analysis and linear algebra rigorously to explain convolution, Fourier, Laplace and z transforms, for example).
I'm very disappointed with the books I've read (Oppenheim, Lathi and related) because they actually throw a lot of the beauty of analysis and algebra away, focusing on the computational side.
Thanks a lot
Answer: Oppenheim and Willsky's Signals and Systems or Lathi's Linear Systems and Signals are intended for Sophomores who have only a single semester of differential equations under their belts, so it is a bit unfair to criticize them for leaving out the functional analysis and the conformal mapping. At the sophomore level my favorite book is Siebert's Circuits, Signals, and Systems. It won't give you the mathematical rigor you desire either, but you can see his great love for the mathematics and he has these wonderful, witty, footnotes that provide a great historical perspective.
There is a great book (which I love, but do not recommend to you) by the (applied) mathematician Richard Hamming (of the "Hamming window", "Hamming code", "Hamming distance", "Hamming bound" and "Hamming problem") called Digital Filters. In it he makes a number of snarky comments like:
Since we are interested mainly in using mathematics, we are obliged in our turn to be ambiguous with respect to mathematical rigor. Those who believe that mathematical rigor justifies the use of mathematics in applications are referred to Lighthill and Papoulis for rigor; those who believe that it is the usefulness in practice that justifies the mathematics are referred to the rest of this book. (1998 Dover edition, page 72.)
So in addition to the book by Papoulis that @Matt_L recommended I will add Hamming's (and Siebert's!) recommendation of M.J. Lighthill, Fourier Analysis and Generalized Functions, Cambridge University Press, 1958. | {
"domain": "dsp.stackexchange",
"id": 935,
"tags": "discrete-signals, fourier-transform, continuous-signals, dsp-core, self-study"
} |
Fail: ABORTED: No motion plan found. No execution attempted | Question:
I have used the MoveIt API, but with the function "group.setPoseTarget(pose1)" it gives "Fail: ABORTED: No motion plan found. No execution attempted" and the RViz terminal gives
[ INFO] [1469775045.492605776]: LBKPIECE1: Created 1 (1 start + 0 goal) states in 1 cells (1 start (1 on boundary) + 0 goal (0 on boundary))
[ INFO] [1469775045.492659516]: No solution found after 5.050528 seconds
[ INFO] [1469775045.528370905]: Unable to solve the planning problem
[ INFO] [1469775050.530937767]: Combined planning and execution request received for MoveGroup action. Forwarding to planning and execution pipeline.
[ INFO] [1469775051.139609983]: Planning attempt 1 of at most 1
[ INFO] [1469775051.169520219]: LBKPIECE1: Starting planning with 1 states already in datastructure
[ERROR] [1469775056.175594376]: LBKPIECE1: Unable to sample any valid states for goal tree
[ INFO] [1469775056.175806507]: LBKPIECE1: Created 1 (1 start + 0 goal) states in 1 cells (1 start (1 on boundary) + 0 goal (0 on boundary))
[ INFO] [1469775056.175905203]: No solution found after 5.006600 seconds
[ INFO] [1469775056.185580711]: Unable to solve the planning problem
Can anyone tell me how to solve this problem?
Originally posted by FaredEtman on ROS Answers with karma: 1 on 2016-07-30
Post score: 0
Original comments
Comment by bhavyadoshi26 on 2016-12-21:
Have you solved it?
Comment by zichenxiaoxu on 2019-02-20:
Do you solve this problem? I have the same error, but I don't know how to solve it
Answer:
This can have several causes.
pose1 might simply be unreachable by the arm -> use a pose of which you know that it is reachable
pose1 is reachable but the inverse kinematics plugin failed to find a joint configuration that results in the end-effector being near that pose -> use a different IK or different parameters (trac_ik or a specialized ikfast perform good)
The planner failed to find a path to the goal -> is there a path by which you can move from the current configuration to the goal configuration (within the specified software limits)
if there are paths the planner should be able to find, you might want to try a different planner -> e.g. group.setPlannerId("RRTConnectkConfigDefault")
the planner might find the path with more time available -> group.setPlanningTime(<insert-time-limit-here>)
Originally posted by v4hn with karma: 2950 on 2016-12-21
This answer was ACCEPTED on the original site
Post score: 10
Original comments
Comment by tengfei on 2017-12-02:
Hi, I have a question that before I set the pose goal, how can I know whether the pgose goal is reachable? Is there any funtion in moveit to solve this problem?
Comment by nmelchert on 2018-09-10:
Is there a Python functionality for that?
Comment by RobertZickler on 2021-07-08:
@tengfei and @nmelchert have a look at this answer
So in short:
plan = group.plan()
if not plan.joint_trajectory.points:
# Error | {
"domain": "robotics.stackexchange",
"id": 25407,
"tags": "moveit"
} |
Does the act of organizing information (e.g. categorization) reduce entropy? | Question: I am fascinated by the relationship between entropy and life. From the Wikipedia article of that name, to the science fiction series "Three Body Problem" characterizing human-like lifeforms across the universe as simply 'low entropy beings'. However, entropy can be a tricky concept to grasp my mind around (I am an earth scientist, not pure physicist, by training). So I come to ask: does the act of organizing information (e.g. categorizing or identifying patterns) reduce entropy?
My understanding is yes, because 'organizing information' is creating information about information, and the creation of information (or 'knowledge'?) reduces the randomness of (or at least uncertainty about) the states of things.
This question could seem ambiguous or nonsensical since entropy is used in a thermodynamic sense and an information science sense, and I'm asking about changes in information but am not necessarily asking about information entropy only. Citing sci-fi might not help, but I think this has a sound basis even if it's far out: consider the concept of humanoids being 'low entropy beings'. Humans in particular and life in general do seem to reduce entropy locally, at the least by simply keeping a warm body and thereby working to prevent the entropy of one's body from increasing. In that context - thermodynamic entropy in the physics of life - does the act of gaining knowledge (organizing information) reduce entropy? Again I'd think yes, in the simplest sense because when I have information organized I can spend more of my energy on actually doing work efficiently.
Answer: Sensing and learning information clearly reduces entropy in the observer, who pays a corresponding or bigger physical entropy cost to perform these actions.
Imagine a system (a robot, human, alien) observing another system (a rock, say). This paper shows that the sensor device cannot be at equilibrium in order to detect things. Indeed, it needs to produce some entropy to work, because otherwise it will just fluctuate uselessly. Now, the information gained by this process may reduce entropy in the memory of the observer - a probability estimate representing whether there is a rock there or not goes from a uniform distribution to a tighter distribution (or, you can imagine a discrete RockIsThere bit being set to a 0 or 1 value, for a more rigid design). Having a memory can also increase the ability of the sensor to acquire information (with some trade-offs).
So when the observer knows something, it has reduced the entropy of its internal probability distributions (epistemic entropy, what Jaynes was talking about), or we can say it has literally physically reduced the entropy of the RockIsThere bit by setting it to a definite value. In the latter case there was the Landauer cost $k_B T \ln(2)$ J that had to be paid in the form of waste heat - the local entropy reduction had to be balanced with increasing global entropy. I suspect one can show that even the epistemic entropy reduction calculation will require a Landauer cost, since it erases past uncertain knowledge with more certain knowledge, but things can get messy if one has extra memory to store old results in (essentially it acts as a cold heat bath one can dissipate entropy into "for free" until it fills up).
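For a sense of scale, the Landauer cost mentioned above is tiny at everyday temperatures (a one-liner; 300 K is an assumed ambient temperature):

```python
import math

k_B = 1.380649e-23      # Boltzmann constant, J/K (exact in the 2019 SI)
T = 300.0               # assumed room temperature in kelvin

E_landauer = k_B * T * math.log(2)
print(f"Landauer cost at {T:.0f} K: {E_landauer:.3e} J per bit erased")
```

This works out to roughly $3 \times 10^{-21}$ J per bit, far below what any practical memory dissipates, so the bound constrains principle rather than present-day engineering.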
Categorisation is trickier to analyse. It basically corresponds to searching for short descriptions that efficiently describe large amounts of data. It turns out that one can erase unwanted information more cheaply if one has knowledge of its internal structure, so the observer that figures out that X,Y,Z are actually one thing A can now (in principle) free up memory at a lower cost than was originally used to write X,Y,Z into their memory - the entropy "win" will be this compression efficiency minus the entropy cost of coming up with A (which might be small if the observer is a reversible computer; for such an agent interacting with the world is the main entropy cost).
I would not say organising information decreases entropy in the large - there is a lot of computation and writing/erasing memory going on - but locally inside the organising agent memory it seems like it does. | {
"domain": "physics.stackexchange",
"id": 47699,
"tags": "entropy, everyday-life"
} |
Can I solubilize lemon essence in water somehow? | Question: I have a water based solution (95% water, the rest are salts) to which I want to add a lemon fragrance. I bought "water based lemon essence", but upon putting a mere drop in about 500 mL, the solution turned from crystal clear to milky off-white.
I researched and found that lemon essence is apparently one of many essential oils (citral, citronellal, cintronellol). None of which are water-soluble. That's why the result is an emulsion, and it's not translucent. (It mixed very well and it's very stable, that's not my problem)
I was wondering then if it's possible to give lemon fragrance to a water solution without turning it into a milky emulsion.
Answer: Some hints are provided by Wikipedia:Lemon liqueur:
To produce the Lemon liqueur requires sugar, water, lemon zest,
mixing in organic solutes, liquor, and time to mature. Lemon zest is soaked in high proof neutral spirits to extract from it the lemon oil (an essential oil). The extraction is then diluted with simple syrup.
In other words, lemon zest is drawn into water by mixing in water-soluble organic compounds, in this case sugar and (ethyl) alcohol. If you want it to be nonalcoholic, maybe just the sugar will do.
During late autumn I make a cranberry jam by boiling the cranberries with oranges, sugar and water. In the presence of the sugar and organic material from the oranges and cranberries and with the application of heat, orange zest is extracted into the jam. The jam is too dark and thick to see whether the zest remains in solution when the jam is cooled, but the extraction of the zest is evident by taste. | {
"domain": "chemistry.stackexchange",
"id": 15169,
"tags": "organic-chemistry, solubility"
} |
Does center of buoyancy and center of gravity coincide for an object of a random shape but uniform density submerged under water? | Question: If there is an object of a random shape that is submerged completely under water, would the center of gravity of that object and the center of buoyancy coincide? Note that the object has uniform density; only the shape is irregular.
I was reading about this topic when this question popped up in my head. I was reading some answers on the internet (on how the center of gravity and the center of buoyancy differ) on Quora, where people had written that they would not coincide in case the object has non-uniform density. However, will they still coincide if the density is uniform but the shape is irregular?
Answer: (Quick answer: Yes)
The definition of the center of buoyancy $B$ is that it is the center of mass of the displaced fluid. Hence, to find it, one looks at the undisturbed fluid (i.e. before the object is immersed) and focuses on the volume that will be occupied by the object. This will be an object-shaped volume of fluid, and its center of mass is $B$.
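This can be checked directly on an arbitrary voxelised shape (a sketch; the random voxel blob stands in for the "random shape", and the density weightings are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random "blob": an arbitrary set of occupied voxel coordinates (the submerged shape).
occupied = np.argwhere(rng.random((20, 20, 20)) < 0.3).astype(float)

# Center of buoyancy B = centroid of the displaced (uniform-density) water,
# i.e. the plain centroid of the occupied voxels:
B = occupied.mean(axis=0)

# Center of gravity with UNIFORM body density: the same centroid.
G_uniform = np.average(occupied, axis=0, weights=np.ones(len(occupied)))

# Center of gravity with NON-uniform density (heavier toward larger x):
G_lopsided = np.average(occupied, axis=0, weights=1.0 + occupied[:, 0])

print("B           :", B)
print("G (uniform) :", G_uniform)      # coincides with B
print("G (lopsided):", G_lopsided)     # generally does not
```

However irregular the shape, the uniform-density center of gravity and the center of buoyancy land on the same point; only a non-uniform density separates them.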
In your example, the fluid is water, which is of uniform density, and the object is submerged. For any two identically-shaped bodies having different but uniform densities, the centers of mass are located at the same place (relative to the shape's boundary). This means the object-shaped volume of fluid has the same center of mass ($B$) as the body, and so the two coincide. | {
"domain": "physics.stackexchange",
"id": 80905,
"tags": "fluid-statics, buoyancy"
} |
Tiny dark-colored bug ID in Amman-Jordan | Question: I live in Amman-Jordan and I keep finding this bug species on my walls. I have seen four so far on the walls of another room, a little bit bigger than the first one.
Click on all photos to open full-sized image:
With cm ruler:
I also have a video, but don't know how to post it.
Answer: These are very likely flour beetles (Tribolium spp) of some kind. These small (~3mm) reddish-brown beetles are some of the most widespread and common pests of stored food products worldwide [source].
Left: red flour beetle; Right: confused flour beetle. Photo credits: Rebecca Baldwin (left); James Castner (right)
The photo is too blurry to be able to differentiate between the most likely candidates: red flour beetle (T. castaneum) and confused flour beetle (T. confusum).
You can learn more about differentiating these insects here and here, and you can find more general information about them via UF/IFAS.
Distribution: Both species are essentially found worldwide; This UF/IFAS page provides a bit more nuanced explanation of their distribution:
The red flour beetle is of Indo-Australian origin (Smith and Whitman) and is found in temperate areas, but will survive the winter in protected places, especially where there is central heat (Tripathi et al. 2001). In the United States, it is found primarily in the southern states. The confused flour beetle, originally of African origin, has a different distribution in that it occurs worldwide in cooler climates. In the United States it is more abundant in the northern states (Smith and Whitman).
Extermination tips are available via epetsupply.com:
Prevention is key!
Keep your home clean and free of food/crumbs
Sticky traps and pheromone traps
If I had to guess, I'd guess from your poor images that you have red flour beetles, but honestly it doesn't much matter from a pest-control standpoint.
The red and confused flour beetles live in the same environment and compete for resources (Ryan et al. 1970, Willis and Roth 1950). The red flour beetle may fly, especially before a storm, but the confused flour beetle does not fly. [source: UF/IFAS]
UPDATE:
Alternatively, now that I've looked at your picture with such a small specimen, it could be possible this is another "grain beetle": Ahasverus advena, the foreign grain beetle. From Wikipedia:
Source: Wikipedia
Description:
The foreign grain beetle is approximately 2 mm (1⁄12 in) in length. It can be distinguished chiefly by slight projections or knobs on each front corner of the pronotum, and its club-shaped antennae. The larvae are worm-like, cream-colored and often reach a length of 3 mm before pupating into darker adults. Males and females are identical in appearance both as larvae and adults. The adult is usually reddish brown, or sometimes black.
Distribution
The foreign grain beetle is found in tropical and temperate regions [and is distributed all throughout the world -- see Spanish translation of Wikipedia page]. | {
"domain": "biology.stackexchange",
"id": 9659,
"tags": "species-identification, zoology, entomology, pest-control"
} |
How To Find Direction of Normal Contact Force? | Question: I am a little bit confused. While doing problems I see that the direction of the normal force differs. In some cases it is perpendicular to the contact surface and points toward the object on which it acts. In other cases I see it is perpendicular but in the opposite direction...
In The Above Image If You Draw The FBD of man you can see the direction of normal is different from the below image if we draw the fbd of B.
How then will I determine whether the direction of the normal force is upwards or downwards?
Answer: A normal force acts at right angles to a surface.
When considering the direction of a normal force one should consider what would happen if it is not there.
For your second example here are the FBDs with the coloured forces (other than black) being Newton third law pairs.
For block $B$ if the normal force due to $A$ was not there block $B$ would accelerate downwards due to the gravitational force on it.
This means that the normal force on $B$ due to $A$ must be upwards and hence (N3L) the normal force on $A$ due to $B$ must be downwards.
In The Above Image If You Draw The FBD of man you can see the direction of normal is different from the below image if we draw the fbd of B.
I do not agree with this statement.
Your first example is slightly more complicated and depends on whether or not the person is anchored to the platform.
If the person is not anchored then the only direction possible for the normal force on the person due to the platform is upwards and hence (N3L) the normal force on the platform due to the person is downwards.
So the normal force on the person is in the same direction as that on block $B$ due to block $A$.
If the person is anchored to the platform then, without any further information, the direction of the normal force on the person due to the platform is indeterminate so you could put either up or down but then making sure that the direction of the normal force on the platform due to the person is shown to be in the opposite direction. | {
"domain": "physics.stackexchange",
"id": 92530,
"tags": "newtonian-mechanics, forces, mass, free-body-diagram, string"
} |
Are ergs commonly used in astrophysics? If so, is there a specific reason for it? | Question: I was reading the recent LIGO paper and one passage stuck out to me:
The system reached a peak gravitational-wave luminosity of $3.6^{+0.5}_{−0.4}× 10^{56}\:\mathrm{erg/s}$, equivalent to $200^{+30}_{−20}M_⊙c^2/\mathrm s$.
Note, in particular, that the radiated power is quoted in $\mathrm{erg/s}$, which seems pretty weird to me, since it is not an SI unit, and it seems to offer very little advantage over the use of joules. Is this a common usage in astrophysics? If so, is there some specific reason for this? (For example, papers using CGS electromagnetic units naturally use ergs for energy measurements, though the use of CGS for anything that actually requires you to write down numbers with units seems to be on a steep decline. This is not the case here, though.) Or is this just some bit of culture that's hard to live down nowadays?
Answer: Yes, you commonly see luminosity quoted with two numbers: an occasional frequency range in keV/MeV/GeV and then a luminosity in erg/s. The use of two different energy units in one sentence is its own sort of eyebrow-raiser. Here is a bevy of recent examples, to prove that it's neither a sign of age nor a particularity of the LIGO group: [1], [2], [3], [4], [5], [6], [7], [8].
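Incidentally, the two figures quoted in the paper above can be cross-checked in a few lines (a sketch with rounded constants):

```python
# Cross-check of the LIGO figures: 3.6e56 erg/s vs 200 solar rest masses
# per second. 1 erg = 1e-7 J, so erg/s -> W is a bare factor of 1e7.
M_SUN = 1.989e30   # kg (rounded)
C = 2.998e8        # m/s (rounded)

L_watt = 3.6e56 * 1e-7         # quoted luminosity in SI: 3.6e49 W
L_solar = 200 * M_SUN * C**2   # 200 solar masses of energy per second, in W

# the two quoted figures agree to within about 1%
assert abs(L_watt - L_solar) / L_watt < 0.05
```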
This course from nrao.edu states that "Most astrophysical theory is done in cgs units, but radio observations are usually reported in mks units since engineers use mks." (Here "mks" is an abbreviated SI system.) Maoz's Astrophysics in a Nutshell also seems to use cgs quoting erg/s, and this textbook claims that it just happens to be the most common in astrophysics.
What's really interesting is that I cannot find any astrophysical journals which require it, but two astrophysical standards organizations forbid it: IAU and AAS. So CGS is persisting in common practice despite institutional pressure rather than because of it.
CGS has persisted in astrophysics and continuum mechanics. The exact reasons are not 100% clear: however in both cases it is nice that it is within a few factors of 10 from the equivalent SI units, which allows you to quickly communicate to the engineers on the team what a wattage is if you have a power output in erg/s, so probably that is a huge factor. This reminds me of the eV unit where it is a compromise between the theorist's requirement that we never use Coulombs and the engineer's requirement that we use volts.
In the latter case of continuum mechanics, it's probably just because cgs gave nice names to viscosity properties, "poise" and "stokes". In the astrophysical case I am less certain, but it seems like it's probably just because occasionally theorists need the Maxwell equations and yet nobody really wants to be bothered by the permeability and permittivity of free space, these being highly unnecessary concepts created by the ubiquity of volts and amps. | {
"domain": "physics.stackexchange",
"id": 98013,
"tags": "soft-question, astrophysics, units, conventions"
} |
Stack implementation using vectors and templates with no overflow version 1.3 | Question: Below is the code for stack along with one or two extra options. Kindly let me know if there are any concerns/critical issues. Below are the links for previous versions.
Version 1.0
Version 1.1
Version 1.2
//*************Version 1.3: Beginner's Implementation of Stack ***************//
#include <iostream>
#include <fstream>
#include <stdexcept>
template <class T>
class Mystack
{
private:
T *input;
int top;
int capacity;
public:
Mystack();
Mystack(const Mystack<T> &source);
Mystack<T> & operator=(Mystack<T> source);
~Mystack();
void push(T const& x);
void pop();
T& topElement() const;
bool isEmpty() const;
void print(std::ostream &os) const;
void swap(Mystack<T> &source);
friend std::ostream& operator<<(std::ostream& s, Mystack<T> const& d) //operator<< overloading
{
d.print(s);
return s;
}
};
template <class T>
Mystack<T>::Mystack() //default constructor
{
top = -1;
capacity = 5;
input = new T[capacity];
}
template <class T>
Mystack<T>::Mystack(const Mystack<T> &source) //copy constructor
{
input = new T[source.capacity];
top = source.top;
capacity = source.capacity;
for (int i = 0; i <= source.top; i++)
{
input[i] = source.input[i];
}
}
/**COPY AND SWAP IDIOM**/
template <class T>
void Mystack<T>::swap(Mystack<T>& source) //swaps the content of 'source' to the object that calls it
{
std::swap(input, source.input);
std::swap(top, source.top);
std::swap(capacity, source.capacity);
}
template <class T>
Mystack<T> & Mystack<T>::operator=(Mystack<T> source) // assignment operator overload
{ //^^^^^ passing by value hence copy constructor gets invoked,
// thus no headache of copying, no code duplication, exception safe
this->swap(source);
return *this; //temporary holder 'source' will get destroyed, 'source' scope ends and its destructor is called,
} //avoiding memory leak
template <class T>
Mystack<T>::~Mystack() // destructor
{
delete[] input;
}
template <class T>
void Mystack<T>::push(T const& x) //Passing x by Const Reference
{ // Valus of x cannot be changed now in the function!
if (top + 1 == capacity)
{
T *vec = new T[capacity * 2];
for (int i = 0; i <= top; i++)
{
vec[i] = std::move(input[i]);
}
std::swap(input, vec);
capacity *= 2;
delete [] vec; // Avoiding Memory Leak.
}
input[++top] = x;
}
template <class T>
void Mystack<T>::pop() //pop the element from the top of stack
{
if (isEmpty())
{
throw std::out_of_range("Stack Underflow");
}
/*else
{
std::cout << "The popped element is" << input[top--];
}*/
top--;
}
template <class T>
bool Mystack<T>::isEmpty() const //const: none of the class members can be modified in this function
{
return top == -1;
}
template <class T>
T& Mystack<T>::topElement() const // returns top element of the stack
{
if (isEmpty())
{
throw std::out_of_range("No Element to Display");
}
else
{
//std::cout << "The top element is : " << input[top];
return input[top];
}
}
template <class T>
void Mystack<T>::print(std::ostream &os) const //a more of a general print function, can be used to write to a file
{
for (int i = 0; i <= top; i++)
{
os << input[i] << " ";
}
}
int main()
{
Mystack<int> intstack, inttemp;
Mystack<float> floatstack, floattemp;
Mystack <int> temp(intstack);
Mystack<char> charstack, chartemp;
int type_choice;
std::ofstream some_file("testfile.txt"); // creation of file
int int_elem;
float float_elem;
char char_elem;
int ch = 1;
std::cout << "Enter the type of stack" << std::endl;
std::cout << "1. int ";
std::cout << "2. float ";
std::cout << "3. Char" << std::endl;
std::cin >> type_choice;
std::cout << "\n1. Push ";
std::cout << "2. Top ";
std::cout << "3. IsEmpty ";
std::cout << "4. Pop ";
    std::cout << "5. Write Data to a file ";
    std::cout << "6. Copy Constructor Use ";
    std::cout << "7. Assignment Operator Use ";
std::cout << "8. Print ";
std::cout << "9. Exit" << std::endl;
if (type_choice == 1)
{
int ch = 1;
while (ch > 0)
{
std::cout << "Enter the choice" << std::endl;
std::cin >> ch;
switch (ch)
{
case 1:
std::cout << "Number to be pushed" << std::endl;
std::cin >> int_elem;
intstack.push(int_elem);
break;
case 2:
try
{
std::cout << "Top Element is: " << intstack.topElement();
}
catch (std::out_of_range &oor)
{
std::cerr << "Out of Range error:" << oor.what() << std::endl;
}
break;
case 3:
std::cout << "Check Empty" << std::endl;
if (intstack.isEmpty())
std::cout << "Stack is Empty";
else
std::cout << "Stack is not Empty";
break;
case 4:
std::cout << "Pop the element" << std::endl;
try
{
intstack.pop();
}
catch (const std::out_of_range &oor)
{
std::cerr << "Out of Range error: " << oor.what() << '\n';
}
break;
case 5:
std::cout << "Stack data to file" << std::endl;
intstack.print(some_file); //printing data to a file
break;
case 6:
{
Mystack<int> s4(intstack); // copy constructor called
s4.print(std::cout);
break;
}
case 7:
inttemp = intstack; // assignment operator overload
inttemp.print(std::cout);
break;
case 8:
std::cout << intstack; // << operator overloading
break;
case 9:
exit(0);
default:
std::cout << "Enter a valid input" << std::endl;
break;
}
}
}
else if (type_choice == 2)
{
int ch = 1;
while (ch > 0)
{
std::cout << "Enter the choice" << std::endl;
std::cin >> ch;
switch (ch)
{
case 1:
std::cout << "Number to be pushed" << std::endl;
std::cin >> float_elem;
floatstack.push(float_elem);
break;
case 2:
try
{
std::cout << "Top Element" << floatstack.topElement();
}
catch (std::out_of_range &oor)
{
std::cerr << "Out of Range error:" << oor.what() << std::endl;
}
break;
case 3:
std::cout << "Check Empty" << std::endl;
if (floatstack.isEmpty())
std::cout << "Stack is Empty";
else
std::cout << "Stack is not Empty";
break;
case 4:
std::cout << "Pop the element" << std::endl;
try
{
floatstack.pop();
}
catch (const std::out_of_range &oor)
{
std::cerr << "Out of Range error: " << oor.what() << '\n';
}
break;
case 5:
std::cout << "Stack data to file" << std::endl;
floatstack.print(some_file); //data to file
break;
case 6:
{
Mystack<float> s5 = floatstack; // copy constructor called
s5.print(std::cout);
break;
}
case 7:
floattemp = floatstack; // assignment operator overload
floattemp.print(std::cout);
break;
case 8:
std::cout << floatstack; //operator << overloading
break;
case 9:
exit(0);
default:
std::cout << "Enter a valid input" << std::endl;
break;
}
}
}
else if (type_choice == 3)
{
int ch = 1;
while (ch > 0)
{
std::cout << "Enter the choice" << std::endl;
std::cin >> ch;
switch (ch)
{
case 1:
std::cout << "Number to be pushed" << std::endl;
std::cin >> char_elem;
charstack.push(char_elem);
break;
case 2:
try
{
std::cout << "Top Element" << charstack.topElement();
}
catch (std::out_of_range &oor)
{
std::cerr << "Out of Range error:" << oor.what() << std::endl;
}
break;
case 3:
std::cout << "Check Empty" << std::endl;
if (charstack.isEmpty())
std::cout << "Stack is Empty";
else
std::cout << "Stack is not Empty";
break;
case 4:
std::cout << "Pop the element" << std::endl;
try
{
charstack.pop();
}
catch (const std::out_of_range &oor)
{
std::cerr << "Out of Range error: " << oor.what() << '\n';
}
break;
case 5:
std::cout << "Stack data to file" << std::endl;
charstack.print(some_file); //printing stack data to file
break;
case 6:
{
Mystack<char> s6 = charstack; // copy constructor called
s6.print(std::cout);
break;
}
case 7:
chartemp = charstack;
chartemp.print(std::cout);
break;
case 8:
std::cout << charstack; //operator << overloading
break;
case 9:
exit(0);
default:
std::cout << "Enter a valid input" << std::endl;
break;
}
}
}
else
std::cout << "Invalid Choice";
std::cin.get();
}
Answer: Don't like this comment:
{ // Valus of x cannot be changed now in the function!
I think that is obvious.
But you are not modifying x in the function, so it is not a problem, and you make a copy when you put it on the stack.
Don't like the useless else
else
{
//std::cout << "The popped element is" << input[top];
--top;
}
The true part of the branch never returns. So the else is redundant. I like to keep validation logic (test that the pre-conditions are met) separate from the business logic of the class.
Apart from that I cannot spot anything wrong.
Couple of additions (I mention in the first review).
Using placement new/delete so that you don't construct objects that are not needed.
You could add move semantics to the class
Constructor
assignment
push. | {
"domain": "codereview.stackexchange",
"id": 11606,
"tags": "c++, algorithm, stack"
} |
Is it possible to launch non ros applications with roslaunch? | Question:
Hi,
I would like to know whether it's possible to launch non ros applications with roslaunch? And if so, how can it achieved?
Thanks for the answer.
Originally posted by l4ncelot on ROS Answers with karma: 826 on 2017-10-04
Post score: 0
Answer:
Yes, that is possible, but you will have to take care of some details.
roslaunch (and rosrun is the same) will add a number of extra arguments to the command line it generates when starting ROS nodes, and those will also be passed to 'non ros applications'. Examples of those extra command line arguments (CLAs) are the node name (as __name:=..) and the location of the log file the node is expected to use (as __log:=..).
Ordinary programs typically do not use such arguments and their CLA parsers will get confused.
Three approaches I've seen:
write a bash script that wraps the 'non-ros binary': this script takes in all CLAs, strips out the ROS-specific ones and then invokes the target binary with the remaining arguments. Advantage: simple to create. Disadvantage: it is not really convenient to manipulate strings (ie: command lines) with bash, and these scripts often assume a fixed order of CLAs, which is not guaranteed, leading to brittle implementations.
write a small Python wrapper that can use rospy.myargv(..) to remove ROS-specific CLAs and then use subprocess to start the target binary (while passing on the remaining CLAs).
same as the previous, but in C++ and using ros::removeROSArgs(..).
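A rospy-free sketch of the stripping step that options 1 and 2 perform (ROS remapping arguments all have the form `name:=value`, so filtering on `:=` removes them; the names here are illustrative):

```python
# Strip ROS-added remapping arguments (e.g. __name:=foo, __log:=/tmp/foo.log)
# before handing the remaining CLAs to the target binary.
def strip_ros_args(argv):
    return [a for a in argv if ':=' not in a]

cla = ['wrapper.py', 'nc', 'localhost', '12345',
       '__name:=nc_node', '__log:=/tmp/nc.log']
print(strip_ros_args(cla))  # ['wrapper.py', 'nc', 'localhost', '12345']
```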
I'm not aware of publicly available implementations of any of the above options, but they could already exist. Writing them yourself should not be too hard though.
Note: to use either the rospy or roscpp CLA parsing functions the wrappers do not have to be ROS nodes themselves.
Edit: as roslaunch can only start binaries that are located in packages, the wrappers would have to live in a ROS package themselves. But that is hardly a constraint.
Edit2: and obviously, if your target binary does not expect any arguments at all, then all of the above is unnecessary, as the args that roslaunch adds will not be processed anyway (but you would still need some way to start your target binary 'from' a ROS package).
Edit3: (very) minimal Python2 example:
#!/usr/bin/env python
import sys, subprocess, rospy
sys.exit(subprocess.call(rospy.myargv(argv=sys.argv)[1:]))
when used with a launch file that contains something like:
<node name=".." type=".." pkg=".." args="nc localhost 12345" />
will make nc try to connect to localhost:12345 with roslaunch waiting for it to exit (as it would with 'regular' ROS binaries).
Originally posted by gvdhoorn with karma: 86574 on 2017-10-04
This answer was ACCEPTED on the original site
Post score: 9
Original comments
Comment by gvdhoorn on 2017-10-04:
Related (prior) questions and answers: #q9161 and #q51474.
Comment by l4ncelot on 2017-10-04:
Cool, thanks a lot. | {
"domain": "robotics.stackexchange",
"id": 29002,
"tags": "roslaunch"
} |
Spherical aberration: circle of least confusion and diffraction limited spot size | Question: I need an estimate of the relative sizes of the circle of least confusion (spherical aberration) and the diffraction-limited spot size for typical real lenses. I've seen plenty of discussions, but none address what to expect of a real multi-element photographic lens, a simple single-element off-the-shelf lens, or an aspheric singlet. Most discussions target wavefront error, but it's not clear to me how that relates to spot size. In short: what is the size of the circle of least confusion in typical real lenses?
To narrow the question, data or information on a double-gauss design would be most helpful.
I might be able to figure this out myself if I had typical aberration coefficients for photographic and simple lenses, but I can't find any published information on the size of those coefficients. If you know where to find these data, please tell me where.
Rules of thumb, estimates based on experience, or educated guesses are welcome. (3x the diffraction spot radius? 10x the diffraction spot radius? ... )
Answer: I joined "stack" recently and see that you entered this question a long time ago. I realize that you may have found out all about it in the meantime, and I apologize if what I say is well known to you now or perhaps no longer of interest. But still:
"Handbook of military infrared technology" contains a number of so-called "blur spot charts" displaying the diameter of the "disc of least confusion" for a number of simple lenses depending on bending, index of refraction and f/no. The same is done for curved mirrors and other simple components. These blur spots are obtained by raytracing, making "spot diagrams" and using these to obtain some measure of spot size. It seems that this book can be downloaded free in PDF format. You can find it if you Google the title.
The book "A system of optical design" by Arthur Cox contains in the last section more than a hundred different designs of 12 different lens types taken from the patent literature. Full design data are given, together with computed aberration curves of longitudinal spherical aberration, sine condition (coma), s- and t- surfaces (astigmatism and field curvature), distortion and Petzval sum. I think one could derive approximate point image diameters from these curves, but spot diagrams can be obtained if one has access to a computer program for raytracing. Amazon has two copies, but they are very expensive, so it would be better to try a library.
Kidger and Wynne have written a paper called "The design of double Gauss systems using digital computers" in Applied Optics; maybe they present some spot diameters there.
"Telescope Optics", a book by Rutten and Venrooij, contains spot diagrams of a number of telescope objectives with full design data.
"Star testing Astronomical Telescopes" by H. R. Suiter contains a number of computed pictures of diffraction point spread functions for various aberration types. These are computed from wave-aberrations by algorithms for diffraction calculations.
I don't feel that one can rely on the design data from the patent literature to give the best designs; rather, they are representative of the type.
There is also the question of what quality criterion is most suitable, and this may depend on the application. From a spot diagram one can compute e.g. the max diameter, the rms radius, the full width at half maximum or the encircled energy, all dependent on focus setting and the spectral content of the light.
The Strehl ratio is the max intensity of the spot relative to the intensity of the perfect spot. This is found by a diffraction calculation. For visual instruments used in daylight, this criterion has good correlation with subjective experience of image sharpness.
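For scale, the diffraction-limited spot size the question asks about can be estimated from the standard Airy-disk formula, d ≈ 2.44·λ·N (the example values here are assumed, not from the question):

```python
# Diffraction-limited (Airy) spot diameter to the first minimum:
# d = 2.44 * wavelength * f-number.
wavelength = 550e-9   # m, green light (assumed)
f_number = 8.0        # assumed f/8

d_airy = 2.44 * wavelength * f_number   # about 1.07e-5 m, i.e. ~10.7 um
```

Blur-spot diameters from the charts and spot diagrams above can then be compared directly against this number.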
The optical transfer function is often used for photographic lenses. This must be computed from the wave-aberrations of the lens found by raytracing. This is connected to the "resolving power" sometimes given for the lens. | {
"domain": "physics.stackexchange",
"id": 35569,
"tags": "optics, diffraction, lenses"
} |
What is the difference between Pytorch's DataParallel and DistributedDataParallel? | Question: I am going through this imagenet example.
And, in line 88, the module DistributedDataParallel is used. When I searched for it in the docs, I didn't find anything. However, I found the documentation for DataParallel.
So, would like to know what is the difference between the DataParallel and DistributedDataParallel modules.
Answer: As the Distributed GPUs functionality is only a couple of days old [at the time of writing, in the v0.2 release of PyTorch], there is still no documentation regarding it. So, I had to go through the source code's docstrings to figure out the difference. The docstring of the DistributedDataParallel module is as follows:
Implements distributed data parallelism at the module level.
This container parallelizes the application of the given module by
splitting the input across the specified devices by chunking in the batch
dimension. The module is replicated on each machine and each device, and
each such replica handles a portion of the input. During the backwards
pass, gradients from each node are averaged.
The batch size should be larger than the number of GPUs used locally. It
should also be an integer multiple of the number of GPUs so that each chunk
is the same size (so that each GPU processes the same number of samples).
And the docstring for the dataparallel is as follows:
Implements data parallelism at the module level.
This container parallelizes the application of the given module by
splitting the input across the specified devices by chunking in the batch
dimension. In the forward pass, the module is replicated on each device,
and each replica handles a portion of the input. During the backwards
pass, gradients from each replica are summed into the original module.
The batch size should be larger than the number of GPUs used. It should
also be an integer multiple of the number of GPUs so that each chunk is the
same size (so that each GPU processes the same number of samples).
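The difference in the backward pass described by the two docstrings can be caricatured without torch at all — this toy sketch is not real PyTorch code, just an illustration of summing versus averaging replica gradients:

```python
# Toy illustration: DataParallel sums per-replica gradients into the
# original module, while DistributedDataParallel averages the gradients
# across replicas/nodes.
replica_grads = [[0.25, 0.5],   # gradients computed on replica 0
                 [0.75, 1.0]]   # gradients computed on replica 1

dp_grad = [sum(g) for g in zip(*replica_grads)]                        # summed
ddp_grad = [sum(g) / len(replica_grads) for g in zip(*replica_grads)]  # averaged
```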
This reply in the PyTorch forums was also helpful in understanding the difference between the two. | {
"domain": "datascience.stackexchange",
"id": 5708,
"tags": "gpu, distributed, pytorch"
} |
Why Don't Electrons "Try" to Flow in an Open Circuit? | Question: I understand that electrons can't cross an open switch, but why don't they at least move towards the open switch when a battery is connected? Doesn't the voltage cause motion towards the positive terminal?
Answer: An open switch can be thought of as a capacitor with a very small capacitance, with air as the dielectric. As such, opposite-sign charges can reside on the two sides of the switch and there will be a potential difference across it. Thus there will be a time when current flows along the wires until the capacitor (switch) is fully charged, i.e., the potential difference across the switch is equal to that across the terminals of the voltage source.
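As a rough order-of-magnitude sketch of that transient (all values here are assumed, not from the question):

```python
# How much charge moves before the open switch, modelled as a tiny
# capacitor, is fully charged. Assumed values: a picofarad-scale gap
# capacitance and a 9 V battery.
C_gap = 1e-12         # F
V = 9.0               # V
e_charge = 1.602e-19  # C, elementary charge

Q = C_gap * V               # 9e-12 C end up on the contacts
n_electrons = Q / e_charge  # about 5.6e7 electrons: a tiny, brief transient
```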
As air is not a perfect insulator, a minute current can flow across the switch contacts and if the potential difference across the switch contacts is large enough so that the electric field between the contacts exceeds 30 kV/cm, the air will break down (become a conductor) and a significant current will flow through the air gap.
Even with low voltages, arcing can occur as switches are opened and closed as the air breaks down, and this can cause radio interference. At high voltages the effect can be quite spectacular, particularly at night, as the video High Voltage 345,000 volt switch closing at night! shows. In the video the switch is being closed, and when the air gap is small enough the electric field between the switch contacts is large enough for the air to break down and a significant current to pass through it. | {
"domain": "physics.stackexchange",
"id": 100622,
"tags": "electromagnetism, electrostatics, electric-circuits"
} |
What final states can proton-proton collision generate? | Question: What final states can proton-proton collision generate?
Where to find listing of possible final states of a particle interaction, for example proton colliding with proton?
Answer: All collections of particles with the total energy and momentum equal to the initial one, total $B=2$, total electric charge $Q=2$, total lepton charge $L=0$, and total spin conserved (or at least integer-valued) occur as final states of the proton-proton collisions with a nonzero probability.
If one includes extremely unlikely processes linked to grand unification etc., even the conditions about the conservation of $B,L$ may be relaxed.
Of course, this is just a very qualitative, nearly vacuous statement: everything that isn't banned (by conservation laws i.e. by symmetries) will happen with a nonzero probability. In reality, it matters what the probabilities are. Some of them are vastly greater than others. In most collisions, one only creates the same protons, a couple of pions, maybe kaons, photons, sometimes antiprotons, muons – directly or from decays of mesons. Only rarely, one produces heavy particles such as Z-bosons and W-bosons. Even more rarely, one produces Higgs bosons etc. Only hundreds of Higgs bosons have been produced in the 400 trillion collisions (per detector) at the LHC. The number of Z-bosons (or top quarks) is much higher but still a tiny fraction of those 400 trillion collisions.
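The bookkeeping in the first paragraph can be illustrated with a toy checker (a hand-rolled table of quantum numbers, not a physics library):

```python
# Candidate final states must conserve electric charge Q, baryon number B
# and lepton number L of the initial p + p state.
QBL = {  # (Q, B, L) per particle
    'p':   (1, 1, 0),
    'n':   (0, 1, 0),
    'pi+': (1, 0, 0),
    'pi0': (0, 0, 0),
}

def conserved(initial, final):
    total = lambda state: tuple(sum(QBL[p][i] for p in state) for i in range(3))
    return total(initial) == total(final)

assert conserved(['p', 'p'], ['p', 'p', 'pi0'])      # allowed
assert conserved(['p', 'p'], ['p', 'n', 'pi+'])      # allowed
assert not conserved(['p', 'p'], ['p', 'p', 'pi+'])  # violates Q
```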
It's meaningless to give a longer answer to this question because it implicitly contains all of experimental particle physics. One is interested not only about the "types" of particles in the final state but also about their energies and directions of motion (including the relative ones). Quantum field theory calculates all these probabilities (or cross sections); in some sense, it doesn't calculate anything else. So a full answer to this question also includes the explanation of all things that may be calculated from quantum field theory – all of particle physics. | {
"domain": "physics.stackexchange",
"id": 1864,
"tags": "particle-physics"
} |
Are comets known to exist in other star systems? | Question: Are comets a feature unique to our Solar System? Or, are comets/cometary clouds detected around discovered/observed extra-solar systems too? If they were detected elsewhere, how do such cometary clouds affect discovery by perturbation of planets in that system ?
Answer: It is unlikely that comets are a feature unique to our Solar System. Since comets are simply remnants of star and planet formation, anywhere stars and planets have formed would be fertile ground to expect comets.
Their individual masses are relatively very small compared to discovered planets. For example, Halley's Comet has a mass of roughly $2.2\times10^{14} kg$ compared to roughly $6\times10^{24} kg$ for the Earth. That's a factor of 30 billion times smaller... so it is also unlikely that the same techniques used to discover Earth-sized or larger planets would find comets, too.
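Checking that quoted ratio:

```python
# Earth's mass over Halley's mass, using the figures quoted above.
m_halley = 2.2e14   # kg
m_earth = 6.0e24    # kg

ratio = m_earth / m_halley   # about 2.7e10, i.e. tens of billions
```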
However, although they're not likely to be detected, given their prevalence in our planetary system, and given that they form from natural processes, and given observational evidence of other planetary systems, it is not unreasonable to infer their existence in other systems. | {
"domain": "physics.stackexchange",
"id": 17435,
"tags": "astronomy, comets"
} |
controlling two models separately | Question:
hi all, I found this simulation tutorial at: VehicleSimulator
Am using ROS melodic.
Unfortunately only one of the worlds worked for me (world_test.launch).
Anyway, what I'm really interested in is creating another identical car model but controlling it separately from the one already spawned.
After following this tutorial, I managed to get the second model, but I cannot control it (velocity and, if possible later on, PID values).
So any advice on steps to follow so as to achieve this will be greatly appreciated.
Thanks in advance.
Originally posted by lxg on ROS Answers with karma: 11 on 2020-03-06
Post score: 0
Original comments
Comment by fvd on 2020-03-07:
How did you control the first robot? How are you trying to control the second one? Are you making sure that the controllers have different names or namespaces?
Comment by lxg on 2020-03-09:
hello. the first one is launched using the world_test.launch file from the repository provided in the VehicleSimulator link in the question. Am kind of lost as to which ones am supposed to give different names since its written in c++ and more familiar with python.
So, all i need to do is give the controllers different names or namespaces?
Or is there something else?
Thanks by the way.
Comment by fvd on 2020-03-09:
Looking at rostopic list after you start up the first robot, where do the topics for the controller action (often of type follow_joint_trajectory) appear? If you start up the second one, do they use a different topic? You can use either a different name or namespace the controller. The latter is usually simpler, but you need to connect whatever uses the controller's action to that topic, so you end up writing it out either way.
Comment by lxg on 2020-03-13:
thank you very much. indeed this was the issue. after making sure that they have different names and connected it all worked.
can you convert this to an answer so that i can accept it?
Comment by lxg on 2020-03-13:
so now i can easily control their velocities (linear and angular separately). Does this apply to other controllers such as the PID, btw? or do i have to write different config.yaml, robot description files etc. ? thanks.
Comment by fvd on 2020-03-16:
I'm not sure I understand the question. It would be best if you could post a separate thread and link it here. Comments aren't meant for new questions.
Answer:
Summarizing the comments: You need to make sure that the controllers run in different namespaces (or have different names), so that they can be addressed separately.
Looking at rostopic list after you start up the first robot, you can check where the topics for the controller action (often of type follow_joint_trajectory) appear. If you start up the second one, do they use a different topic? You can use either a different name or namespace the controller. The latter is usually simpler, but you need to connect whatever uses the controller's action to that topic, so you end up writing it out either way.
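As a sketch of what that might look like in a launch file (the package name `my_vehicle_sim` and the included `one_car.launch` are hypothetical placeholders, not files from the VehicleSimulator repository):

```xml
<launch>
  <!-- First car: robot_description, spawner and controllers all under /car1 -->
  <group ns="car1">
    <param name="tf_prefix" value="car1" />
    <include file="$(find my_vehicle_sim)/launch/one_car.launch">
      <arg name="model_name" value="car1" />
    </include>
  </group>

  <!-- Second car: identical, but under /car2, so its controller topics
       (e.g. /car2/cmd_vel, /car2/..._controller/command) do not collide -->
  <group ns="car2">
    <param name="tf_prefix" value="car2" />
    <include file="$(find my_vehicle_sim)/launch/one_car.launch">
      <arg name="model_name" value="car2" />
    </include>
  </group>
</launch>
```

Each robot then exposes its own copy of the controller topics under its namespace, and whatever node commands a car just publishes into the matching namespace.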
Originally posted by fvd with karma: 2180 on 2020-03-13
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 34552,
"tags": "ros, ros-melodic, ubuntu, ubuntu-bionic"
} |
Is quantum computing just pie in the sky? | Question: I have a computer science degree. I work in IT, and have done so for many years. In that period "classical" computers have advanced by leaps and bounds. I now have a terabyte disk drive in my bedroom drawer amongst my socks, my phone has phenomenal processing power, and computers have revolutionized our lives.
But as far as I know, quantum computing hasn't done anything. Moreover it looks like it's going to stay that way. Quantum computing has been around now for the thick end of forty years, and real computing has left it in the dust. See the timeline on Wikipedia, and ask yourself where's the parallel adder? Where's the equivalent of Atlas, or the MU5? I went to Manchester University, see the history on the Manchester Computers article on Wikipedia. Quantum computers don't show similar progress. Au contraire, it looks like they haven't even got off the ground. You won't be buying one in PC World any time soon.
Will you ever be able to? Is it all hype and hot air? Is quantum computing just pie in the sky? Is it all just jam-tomorrow woo peddled by quantum quacks to a gullible public? If not, why not?
Answer: I'll be trying to approach this from a neutral point of view. Your question is sort of "opinion-based", but yet, there are a few important points to be made. Theoretically, there's no convincing argument (yet) as to why quantum computers aren't practically realizable. But, do check out: How Quantum Computers Fail: Quantum Codes, Correlations in Physical Systems, and Noise Accumulation - Gil Kalai, and the related blog post by Scott Aaronson where he provides some convincing arguments against Kalai's claims. Also, read James Wotton's answer to the related QCSE post: Is Gil Kalai's argument against topological quantum computers sound?
Math Overflow has a great summary: On Mathematical Arguments Against Quantum Computing.
However, yes, of course, there are engineering problems.
Problems (adapted from arXiv:cs/0602096):
Sensitivity to interaction with the environment: Quantum computers are extremely sensitive to interaction with the surroundings since
any interaction (or measurement) leads to a collapse of the state function. This
phenomenon is called decoherence. It is extremely difficult to isolate a quantum system, especially an engineered one for a computation, without it getting entangled with the environment. The larger the number of qubits the harder is it to maintain the coherence.
[Further reading: Wikipedia: Quantum decoherence]
Unreliable quantum gate actions: Quantum computation on qubits is accomplished by operating upon them with an array of transformations that are implemented in principle using small gates. It is imperative that no phase errors be introduced in these transformations. But practical schemes are likely to introduce such errors. It is also possible that the quantum register is already entangled with the environment even before the beginning of the computation. Furthermore, uncertainty in initial phase
makes calibration by rotation operation inadequate. In addition, one must consider the relative lack of precision in the classical control that
implements the matrix transformations. This lack of precision cannot be completely compensated for by the quantum algorithm.
Errors and their correction: Classical error correction employs redundancy. The simplest way is to store the information multiple times, and, if these copies are later found to disagree, just take a majority vote; e.g., suppose we copy a bit three times. Suppose further that a noisy error corrupts the three-bit state so that one bit is equal to zero but the other two are equal to one. If we assume that noisy errors are independent and occur with some probability $p$, it is most likely that the error is a single-bit error and the transmitted message is three ones. It is possible that a double-bit error occurs and the transmitted message is equal to three zeros, but this outcome is less likely than the above outcome. Copying quantum information is not possible due to the no-cloning theorem. This theorem seems to present an obstacle to formulating a theory of quantum error correction. But it is possible to spread the information of one qubit onto a highly entangled state of several (physical) qubits. Peter Shor first discovered this method of formulating a quantum error correcting code by storing the information of one qubit onto a highly entangled state of nine qubits. However, quantum error correcting codes protect quantum information against errors of only some limited forms. Also, they are efficient only for errors in a small number of qubits. Moreover, the number of qubits needed to correct errors doesn't normally scale well with the number of qubits in which errors actually occur.
[Further reading: Wikipedia: Quantum error correction]
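The classical three-bit majority-vote code described above is easy to simulate; this sketch (illustrative, not from the cited sources) shows the decoded error rate dropping from $p$ to roughly $3p^2 - 2p^3$:

```python
import random

def send_with_repetition(bit, p, rng):
    """Encode a bit as three copies, flip each copy independently with
    probability p, then decode by majority vote."""
    copies = [bit ^ (rng.random() < p) for _ in range(3)]
    return 1 if sum(copies) >= 2 else 0

rng = random.Random(42)
p = 0.1
trials = 20000
errors = sum(send_with_repetition(0, p, rng) != 0 for _ in range(trials))
rate = errors / trials
print(f"raw flip probability: {p}, decoded error rate: {rate:.4f}")
# Theory: decoding fails only on 2- or 3-bit errors, i.e. 3*p**2 - 2*p**3 = 0.028
```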
Constraints on state preparation: State preparation is the essential first step to be considered before the beginning of any quantum computation. In most schemes, the qubits need to be in a particular superposition state for the quantum computation to proceed correctly. But creating arbitrary states precisely can be exponentially hard (in both time and resource (gate) complexity).
Quantum information, uncertainty, and entropy of quantum gates:
Classical information is easy to obtain by means of interaction with the system. On the other hand, the impossibility of cloning means that any specific unknown state cannot be determined. This means that unless the system has specifically been prepared, our ability to control it remains limited. The average information of a system is given by its entropy. The determination of entropy would depend on the statistics obeyed by the object.
A requirement for low temperatures: Several quantum computing architectures
like superconducting quantum computing require extremely low temperatures (close to absolute zero) for functioning.
Progress:
Around a decade and a half ago, the decoherence times of the so-called "quantum computers" were less than 1 nanosecond. Now, the IBM Quantum Experience 16-qubit version, which you can access online, has a decoherence time of ~100 μs (see: Demonstration of Envariance and Parity Learning on the IBM 16 Qubit Processor (Davide Ferrari & Michele Amoretti, 2018)). A decoherence time of 100 μs is sufficient to run simple quantum algorithms already! You can check it out yourself on the 5-qubit and 16-qubit quantum computers which IBM has made accessible online. I think Google has been able to achieve even better decoherence times with their superconducting chips (having an equivalent number of qubits).
There has been a lot of improvement in the area of quantum error correction in the past decade (with schemes now requiring far fewer qubits in total). See Quantum Error Correction for Quantum Memories (Terhal, 2015) for a brief review.
Also, quoting: Wikipedia: Quantum error correction - Experimental realization:
There have been several experimental realizations of CSS-based codes.
The first demonstration was with NMR qubits. Subsequently,
demonstrations have been made with linear optics, trapped ions, and
superconducting (transmon) qubits. Other error-correcting codes have
also been implemented, such as one aimed at correcting for photon
loss, the dominant error source in photonic qubit schemes.
Preparation of arbitrary quantum states is still a major problem. But now at least we know the exact gate decomposition for any unitary evolution (Quantum-state preparation with universal gate decompositions (Plesch & Brukner, 2011)), albeit the number of gates doesn't usually scale well with the number of qubits. There have been further improvements like in High-fidelity quantum state preparation using neighboring optimal control (Yuchen Peng & Frank Gaitan, 2017). For some other recently developed high-precision methods of state preparation see The preparation of states in quantum mechanics (Fröhlich, 2016) and
Preparation of quantum state (Ali et al.,2017).
One of the main constraints we still have is the number of qubits (note that this issue is intrinsically related to the difficulty of maintaining coherence for long periods of time; cf. Schrödinger's cat and the difficulty of macroscopic superposition state and the excellent answer therein). None of the present-day quantum computers is sufficient to show any considerable improvement in "capability" compared to classical computers. The largest number factorized by a quantum computer to date is 291311 (High-fidelity adiabatic quantum computation using the intrinsic Hamiltonian of a spin system: Application to the experimental factorization of 291311 (Li et al., 2017)). Brute-force factorization of 291311 takes at most ~270 divisions, each of which takes ~10 ns on modern CPUs. That's ~3 μs in total, implying that the factorization will be several orders of magnitude faster on your laptop. The practical improvement in time complexity won't be noticeable unless and until the number of qubits increases by at least 10 times or so (I'm not considering the D-Wave machines, which have over 1000 qubits, as they use a different mechanism known as quantum annealing, which is effective only for a few narrow classes of problems). But, arguably, even the number of qubits is on a steady rise. Recently Google announced a 72-qubit machine (
Google AI blog: A Preview of Bristlecone, Google’s New Quantum Processor) and Intel announced a 49-qubit chip (IEEE Spectrum:: CES 2018: Intel's 49-Qubit Chip Shoots for Quantum Supremacy). Compare that to the 2000s when we only used to have a single-digit number of qubits!
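The brute-force figure quoted above is easy to check; a minimal trial-division sketch (illustrative, not from the cited paper):

```python
def trial_divide(n):
    """Factor n by trial division, counting the divisions performed."""
    divisions = 1  # the single check against 2
    if n % 2 == 0:
        return (2, n // 2), divisions
    d = 3
    while d * d <= n:
        divisions += 1
        if n % d == 0:
            return (d, n // d), divisions
        d += 2
    return (n, 1), divisions  # no divisor found: n is prime

factors, count = trial_divide(291311)
print(factors, count)  # (523, 557) 262
```

So 291311 = 523 × 557 falls out after a couple of hundred divisions, consistent with the ~270 bound in the text.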
Several quantum computing architectures have been developed in the past couple of decades for which near-absolute-zero temperatures are not necessary, for example, optical quantum computers, trapped-ion quantum computers, diamond-based quantum computers, etc. Cf. Why do optical quantum computers not have to be kept near absolute zero while superconducting quantum computers do?
Conclusion:
Whether we will ever have efficient quantum computers that can visibly outperform classical computers in certain areas is something only time will tell. However, looking at the considerable progress we have been making, it probably wouldn't be too wrong to say that in a couple of decades we should have sufficiently powerful quantum computers. On the theoretical side though, we don't yet know if classical algorithms (can) exist which will match quantum algorithms in terms of time complexity. See my previous answer about this issue. From a completely theoretical perspective, it would also be extremely interesting if someone could prove that all BQP problems lie in BPP or P!
I personally believe that in the coming decades we will be using a combination of quantum computing techniques and classical computing techniques (i.e. either your PC will have both classical and quantum hardware components, or quantum computing will be entirely cloud-based and you'll access it online from classical computers). Because remember that quantum computers are efficient only for a very narrow range of problems. It would be pretty resource-intensive and unwise to do an addition like 2+3 using a quantum computer (see How does a quantum computer do basic math at the hardware level?).
Now, coming to your point of whether national funds are unnecessarily being wasted on trying to build quantum computers. My answer is NO! Even if we fail to build legitimate and efficient quantum computers, we will still have gained a lot in terms of engineering progress and scientific progress. Already research in photonics and superconductors has increased manyfold and we are beginning to understand a lot of physical phenomena better than ever before. Moreover, quantum information theory and quantum cryptography have led to the discovery of a few neat mathematical results and techniques which may be useful in a lot of other areas too (cf. Physics SE: Mathematically challenging areas in Quantum information theory and quantum cryptography). We will also have understood a lot more about some of the hardest problems in theoretical computer science by that time (even if we fail to build a "quantum computer").
Sources and References:
Difficulties in the Implementation of Quantum Computers (Ponnath, 2006)
Wikipedia: Quantum computing
Wikipedia: Quantum error correction
Addendum:
After a bit of searching, I found a very nice article which outlines almost all of Scott Aaronson's counter-arguments against the quantum computing skepticism. I very highly recommend going through all the points given in there. It's actually part 14 of the lecture notes put up by Aaronson on his website. They were used for the course PHYS771 at the University of Waterloo. The lecture notes are based on his popular textbook Quantum Computing Since Democritus. | {
"domain": "quantumcomputing.stackexchange",
"id": 256,
"tags": "classical-computing, applications, history"
} |
A one-dimensional periodic structure is the simplest type of photonic crystal and any such one-dimensional system has a band-gap? | Question: My textbook says the following:
A one-dimensional periodic structure, such as a multilayer film (a Bragg mirror), is the simplest type of photonic crystal, and Lord Rayleigh showed that any such one-dimensional system has a band-gap.
I have the following questions:
What is meant here by a "one-dimensional system"?
Why must any such one-dimensional system have a band-gap (what is the physics that necessitates this)?
I would greatly appreciate it if people could please take the time to clarify this.
EDIT:
Found more information here.
Answer: An example of such a system given by Lord Rayleigh [1] is a string loaded with masses at regular intervals. The "one dimension" in this case is the dimension along the length of the string. An analogous structure is a series of films with different material properties, again varying at regular intervals in one dimension. For example, see the image below (from here), which illustrates a repeating pattern of films with different indices of refraction. The one dimension is the $x$-axis here.
Lord Rayleigh's result was basically that certain ranges of wavelengths of incident light (or wavelengths of mechanical waves in the loaded string example) are totally reflected by such a structure. In the above image you can see that the grey curve shows a wave of a certain wavelength passing through the periodic structure, whereas the blue curve shows a different wavelength decaying as it enters.
One way you could think about this result is that there is an interplay between the periodicity of the one-dimensional structure and that of the incident wave. For certain wavelengths this interplay leads to a wave that decays away within the structure.
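This behaviour can be reproduced numerically with the standard characteristic-matrix (transfer-matrix) method. The sketch below is my own illustration, not from Rayleigh's paper; the indices 2.5/1.5 and the 8-pair quarter-wave stack are arbitrary choices. It shows near-total reflection at the design wavelength (inside the gap) and mostly transmission away from it:

```python
import cmath
import math

def reflectance(layers, wavelength):
    """Normal-incidence reflectance of a multilayer stack in air, using
    2x2 characteristic matrices. layers = [(refractive_index, thickness), ...]"""
    M = [[1, 0], [0, 1]]  # total characteristic matrix, product over layers
    for n, d in layers:
        delta = 2 * math.pi * n * d / wavelength  # phase thickness of the layer
        L = [[cmath.cos(delta), 1j * cmath.sin(delta) / n],
             [1j * n * cmath.sin(delta), cmath.cos(delta)]]
        M = [[M[0][0] * L[0][0] + M[0][1] * L[1][0],
              M[0][0] * L[0][1] + M[0][1] * L[1][1]],
             [M[1][0] * L[0][0] + M[1][1] * L[1][0],
              M[1][0] * L[0][1] + M[1][1] * L[1][1]]]
    n_in, n_out = 1.0, 1.0  # air on both sides
    B = M[0][0] + M[0][1] * n_out
    C = M[1][0] + M[1][1] * n_out
    r = (n_in * B - C) / (n_in * B + C)
    return abs(r) ** 2

# Quarter-wave stack tuned to wavelength 1.0: 8 pairs of high/low-index films.
lam0 = 1.0
stack = [(n, lam0 / (4 * n)) for _ in range(8) for n in (2.5, 1.5)]

in_gap = reflectance(stack, lam0)       # centre of the band gap: near-total reflection
out_gap = reflectance(stack, 2 * lam0)  # outside the gap: mostly transmitted
print(f"R(gap centre) = {in_gap:.4f}, R(outside gap) = {out_gap:.4f}")
```

Adding more film pairs pushes the in-gap reflectance ever closer to 1, which is the numerical counterpart of the wave decaying exponentially inside the structure.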
[1] On the Maintenance of Vibrations by Forces of Double Frequency, and on the Propagation of Waves Through a Medium Endowed with a Periodic Structure. Lord Rayleigh, Phil. Mag., S.5, vol.24, no.147, August 1887, pp.145-59 | {
"domain": "physics.stackexchange",
"id": 58023,
"tags": "optics, material-science, crystals, electronic-band-theory, photonics"
} |
How do latch transforms work? | Question:
I am getting this error after publishing latched transforms from camera_link to base_link.
The code is at the end of the errors...
[ERROR] [1380316527.758532146, 2.527000000]: Error getting latest time from frame 'camera_link' to frame 'base_link': Could not find a connection between 'base_link' and 'NO_PARENT' because they are not part of the same tree.Tf has two or more unconnected trees. (Error code: 2)
[ERROR] [1380316527.758625055, 2.527000000]: Error getting latest time from frame 'hokuyo_frame' to frame 'base_link': Could not find a connection between 'base_link' and 'NO_PARENT' because they are not part of the same tree.Tf has two or more unconnected trees. (Error code: 2)
[ERROR] [1380316527.767405664, 2.528000000]: Error getting latest time from frame 'camera_frame' to frame 'base_link': Could not find a connection between 'base_link' and 'NO_PARENT' because they are not part of the same tree.Tf has two or more unconnected trees. (Error code: 2)
[ERROR] [1380316527.767563302, 2.528000000]: Error getting latest time from frame 'camera_link' to frame 'base_link': Could not find a connection between 'base_link' and 'NO_PARENT' because they are not part of the same tree.Tf has two or more unconnected trees. (Error code: 2)
[ERROR] [1380316527.767687690, 2.528000000]: Error getting latest time from frame 'hokuyo_frame' to frame 'base_link': Could not find a connection between 'base_link' and 'NO_PARENT' because they are not part of the same tree.Tf has two or more unconnected trees. (Error code: 2)
My code is as follows:
tf2_ros::StaticTransformBroadcaster static_broadcaster;
geometry_msgs::TransformStamped msg;
msg.header.stamp = ros::Time::now();
// Identity rotation, shared by all three transforms below
msg.transform.rotation.x = 0.0;
msg.transform.rotation.y = 0.0;
msg.transform.rotation.z = 0.0;
msg.transform.rotation.w = 1.0;
// base_link -> camera_link
msg.header.frame_id = "base_link";
msg.transform.translation.x = 0;
msg.transform.translation.y = 0;
msg.transform.translation.z = 0.1;
msg.child_frame_id = "camera_link";
static_broadcaster.sendTransform(msg);
// camera_link -> hokuyo_frame
msg.header.frame_id = "camera_link";
msg.transform.translation.x = 0;
msg.transform.translation.y = 0;
msg.transform.translation.z = 0.2;
msg.child_frame_id = "hokuyo_frame";
static_broadcaster.sendTransform(msg);
// camera_link -> camera_frame
msg.header.frame_id = "camera_link";
msg.transform.translation.x = 0;
msg.transform.translation.y = 0;
msg.transform.translation.z = 0.3;
msg.child_frame_id = "camera_frame";
static_broadcaster.sendTransform(msg);
Originally posted by rnunziata on ROS Answers with karma: 713 on 2013-09-27
Post score: 0
Original comments
Comment by BennyRe on 2013-09-27:
Did you try broadcasting these frames from a launch file?
Answer:
In part this was caused by not bringing up the node doing the latches before rqt_gui, which was bringing up rviz by default. Apparently this makes a difference. Now I am left with one error, which I will place under a different question.
Originally posted by rnunziata with karma: 713 on 2013-09-27
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 15687,
"tags": "ros"
} |
Would a cross correlation between vibrations in the X and Y directions help determine vibrations in the Z direction? | Question: I have a 2D vibration sensor that can only measure in the X and Y directions, and it is mounted on top of a motor that is fixed into place, as shown in a rough sketch below.
Would I be able to work out the vibrations in the Z direction based on a cross-correlation of just the vibration in the X and Y directions?
Answer: Think it through.
Put a penny on a table. Slide it away from you and back -- that's x. Now slide it left and right -- that's y. Now pick it up, without moving it forward, back, right or left -- that's z.
If you're sensing x and y, will those change if you move the penny straight up?
If your sensor senses x and y, will it see z in any way, shape, or form?
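A quick numerical illustration of the same point (synthetic signals, not the questioner's data, and plain zero-lag correlation standing in for cross-correlation): even when x and y are strongly correlated with each other, that correlation tells you nothing about an independent z.

```python
import math
import random

def pearson(a, b):
    """Plain zero-lag Pearson correlation between two equal-length signals."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    sa = math.sqrt(sum((u - ma) ** 2 for u in a))
    sb = math.sqrt(sum((v - mb) ** 2 for v in b))
    return cov / (sa * sb)

rng = random.Random(0)
N = 5000
# x and y share a common vibration mode; z is an independent one.
common = [rng.gauss(0, 1) for _ in range(N)]
x = [c + 0.3 * rng.gauss(0, 1) for c in common]
y = [c + 0.3 * rng.gauss(0, 1) for c in common]
z = [rng.gauss(0, 1) for _ in range(N)]

print(f"corr(x, y) = {pearson(x, y):+.3f}")  # strong: shared mode
print(f"corr(x, z) = {pearson(x, z):+.3f}")  # ~0: nothing in x or y predicts z
```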
"domain": "engineering.stackexchange",
"id": 2811,
"tags": "mechanical-engineering, vibration"
} |
Geo-Referenced Mapping interface for ROS/RVIZ | Question:
Hi Guys,
I found this video on Youtube of a QuadCopter mapping a building on a Georeferenced Map:
http://www.youtube.com/watch?v=G_vtm46eGtU
The interface looks to be RVIZ, with the map laid across the ground plane.
Does anyone know how to import a georeferenced map, such as a GeoTIFF, into RViz to get this same effect?
I am looking for a mapping interface that can take geo-referenced maps of satellite imagery and plot the robot's path and sensor information on top of them, just like the video. I would also like to select GPS waypoints on the map and send those to the robot to navigate to.
This needs to be done without connection to the internet.
The only other things I have found are:
rosworldwind - http://www.ros.org/wiki/rosworldwind: World Wind is not supported very well in Ubuntu
ground_station - http://www.ros.org/wiki/ground_station: Related to QuadCopters, and is no longer being updated
gpsd_viewer: http://www.ros.org/wiki/gpsd_viewer: Does not zoom in very well and is no longer being updated
marble_plugin: http://www.ros.org/wiki/marble_plugin: No high resolution Satellite images.
Google Earth/Maps: Google Earth and its API have a lot of limitations, and their license strictly forbids use with autonomous vehicles. See section 10.2.C at https://developers.google.com/maps/terms
osm_cartography: It displays map features from Open Street Maps into rviz. OSM though does not provide satellite imagery unfortunately.
Looking for your guys thoughts/ideas.
Thank you
UPDATE
Some packages to maybe look into for implementation into rqt are: OSSIM (http://trac.osgeo.org/ossim/wiki) a QT geo app. Here is an example of their viewer: http://trac.osgeo.org/ossim/wiki/OssimPlanet . Also some others: http://www.osgeo.org/
Originally posted by Raptor on ROS Answers with karma: 377 on 2013-03-06
Post score: 8
Answer:
Maybe qGIS would be an option (http://www.qgis.org) too. It's not the easiest to use, but pretty powerful if you are familiar with GIS techniques and other GIS software (it uses GRASS internally to implement many of its geographic processing tools). It's been around a while and is pretty mature and stable. It's similar in purpose to ArcGIS, and maybe OSSIM (but probably with more of a vector map emphasis than imaging); it's a general GIS toolbox, not just a viewer.
Originally posted by ReedHedges with karma: 821 on 2013-04-09
This answer was ACCEPTED on the original site
Post score: 4
Original comments
Comment by strike_eagle_iii on 2019-10-10:
How did you use qgis? Did you have to write your own rviz plugin? | {
"domain": "robotics.stackexchange",
"id": 13223,
"tags": "navigation, mapping, gps, rviz, rqt"
} |
Are there any known implementations of a functional Heap's Algorithm? | Question:
TL;DR: Is an implementation of Heap's Algorithm adhering to the principles of functional programming possible, and are any implementations of it known? And by "adhering to the principles of functional programming", I mean mainly not relying on the mutation of an object to carry out the algorithm.
I decided to write a function that generates permutations of a list
(permutations [1 2 3])
=> [[1 2 3] [2 1 3] [3 1 2] [1 3 2] [2 3 1] [3 2 1]]
And after some quick searching, somewhat arbitrarily picked Heap's Algorithm.
The problem is, I'm writing in Clojure, trying to adhere to proper FP principles as closely as possible, and the entire algorithm seems to hinge on mutating a "global" array while recursing. I worked on this for a couple of hours, and could not find a way of sanely getting the results passed "back up the stack", since the recursive calls just kind of float between mutations:
(defn- swap-v [v i1 i2]
  (let [x (get v i1)]
    (-> v
        (assoc i1 (get v i2))
        (assoc i2 x))))

(defn permutate [coll]
  (let [coll-atom (atom coll)
        result-atom (atom [coll])]
    ((fn rec [n]
       (when (> n 1)
         (let [n-even? (zero? (rem n 2))
               swap-pos #(if n-even? % 0)]
           (doseq [i (range (dec n))]
             ; Recursive call
             (rec (dec n))
             ; Mutate the "array" by swapping two elements
             (swap! coll-atom swap-v (swap-pos i) (dec n))
             ; Then mutate the results array so they can be returned later
             (swap! result-atom conj @coll-atom))
           ; ... And another recursive call down here
           (rec (dec n)))))
     (count @coll-atom))
    @result-atom)) ; Dereference the atom and return the results
I tried changing the doseq to a reduction, then I had the results returned nicely, but seemingly nothing to do with them.
I ended up giving up on a FP approach, wrote the above code, and tried to find an example of an FP approach to the algorithm to see where I had gone wrong. To my surprise, I wasn't able to find any. The closest I could find was a Haskell implementation, but honestly, I have no clue what most of the code is doing; it's pretty obscure. I see unsafe in a few places though, so I think they snuck some mutation in there.
Is this possible?
Note, I'm not looking for a review of the code here, since I'm aware this is very imperative-styled, and I already has a review request open.
Answer: I'm really not very familiar with clojure. I think this is probably the longest clojure program I've written so far. But I guess it provides some sort of answer.
On the whole, functional enumerations of permutations are going to suffer from the O(n) cost of creating a new permutation vector on each iteration. Pretty well all of the common imperative solutions make some attempt to avoid this cost, and many of them can produce amortised O(1) complexity (for a single permutation). Heap's algorithm is interesting in an imperative environment largely because it guarantees to perform a single swap on each iteration, which is clearly a minimal total number of mutations. That's particularly useful if the goal is to perform some sort of aggregate computation over the permutation which can be incrementally computed from the previous value and the modifications.
Heap's algorithm is not the only algorithm which performs just a single swap to produce the next permutation. The possibly even more famous bell-ringers' algorithm (often called the Steinhaus-Johnson-Trotter algorithm) produces sequences in which consecutive permutations differ only by a swap of two adjacent elements. This could be even more valuable for updating aggregate computations, but it is more complicated to figure out which two adjacent elements to swap at each iteration. (Indeed, although tables of plain changes for up to seven bells were produced several centuries ago, the precise algorithm used to produce these tables has not, as far as I know, been recorded. However, the results line up with the algorithm proposed independently in the 1960s by the three mathematicians after whom it is named.)
The plain changes algorithm was motivated in part by the desire to keep the various bell ringers' attention on the changes; if a bell stays in the same position in the change for too long, its ringer may become bored and lose their place in the sequence. (In fact, current change ringing sequences try even harder to avoid leaving a bell in the same position for too long, so they no longer use the STJ algorithm.)
For many combinatorial problems, though, the opposite criterion is desired. For example, it may be useful to generate the permutations in lexicographic order, which means that the first item retains its value for (n-1)! iterations. Lexicographic ordering can still be performed in amortised O(1), and it involves O(1) mutations at each iteration, but the number of mutations may be as great as n-1 (or n, if the permutation sequence is circular, since the last permutation in lexicographical order is the reverse of the first one).
If the vector to be permuted may have repeated elements, and only unique permutations are desired, then lexicographic order is far and away the easiest solution. Furthermore, it is very easy to describe the next-permutation algorithm for lexicographic ordering:
Start with the vector sorted from left to right.
At each iteration:
Find the shortest suffix which is not monotonically non-increasing. (In other words, find the last element for which some subsequent element is greater.) If the entire vector is monotonically non-increasing, then it is the last permutation and the process is done.
Swap the suffix's first element with the rightmost element after it that is greater than it (this is the smallest such element, since the rest of the suffix is non-increasing), and then reverse the remainder of the suffix. (This can be done with about k/2 swaps in total, where k is the length of the suffix, but doing it in two steps is conceptually simpler.)
I think that algorithm would be quite simple to implement in functional style.
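Those two steps translate almost line for line into code; here is a sketch (mine, not the answerer's) of the lexicographic next-permutation step, returning a new list each time:

```python
def next_permutation(seq):
    """Return the next permutation of seq in lexicographic order,
    or None if seq is already the last (non-increasing) one."""
    a = list(seq)
    # Step 1: find the last index i whose suffix is not non-increasing.
    i = len(a) - 2
    while i >= 0 and a[i] >= a[i + 1]:
        i -= 1
    if i < 0:
        return None  # entire sequence is non-increasing: done
    # Step 2: swap a[i] with the rightmost element greater than it,
    # then reverse the (still non-increasing) tail to sort it ascending.
    j = len(a) - 1
    while a[j] <= a[i]:
        j -= 1
    a[i], a[j] = a[j], a[i]
    a[i + 1:] = reversed(a[i + 1:])
    return a

# With repeated elements, only the distinct permutations are produced:
perms, cur = [], [1, 1, 2]
while cur is not None:
    perms.append(cur)
    cur = next_permutation(cur)
print(perms)  # [[1, 1, 2], [1, 2, 1], [2, 1, 1]]
```

Starting from a sorted vector and iterating until None enumerates every distinct permutation exactly once, which is why lexicographic order handles repeated elements for free.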
But let's get back to Heap's algorithm. Heap developed his algorithm in 1963, about the same time as Trotter and Johnson independently published the change ringing algorithm. (Steinhaus had published his version of the algorithm in 1958, but since he was writing in Polish it was relatively unknown until it was translated into English in 1963.) Heap's paper was a simple description of his algorithm, without formal proof, and it probably would have remained in the dusty archives of computer science had it not been rediscovered by Robert Sedgewick a decade later. In 1977, Sedgewick wrote a long survey on permutation algorithms, in which he devoted quite a bit of attention to Heap's algorithm, including producing an optimised implementation in a kind of virtual machine code. He concluded that for typical hardware architectures (in 1977), "Heap's method will run faster than any other known method."
A couple of years later, he presented a public lecture on permutation algorithms, during which he repeated this claim, and Heap's algorithm became the permutation algorithm of choice for programmers with a performance fetish. Unfortunately, the presentation slides (which are rather more easily found online than any of the other sources for Heap's algorithm) had a minor error in the algorithm pseudocode (not present in the 1977 paper, which presents a number of variant implementations, all impeccable). And because the presentation slides are rather more accessible than any of the academic references, that particular error has plagued implementations of Heap's algorithm ever since. Ironically, the error causes the algorithm to both run more slowly and to generate an incorrect sequence in which consecutive permutations sometimes do not differ by a single swap.
I have a certain affinity to Heap's algorithm because a few years ago, a question appeared on StackOverflow asking for help with that algorithm; the poster had carefully implemented the algorithm based on pseudocode in Wikipedia, and discovered that their code did not work as expected. In the course of reviewing the code, I soon realised that the poster's code was an entirely accurate implementation of the Wikipedia write-up, and in the course of trying to validate Wikipedia, I tracked down the history of the error which I summarised in the preceding paragraph. A Wikipedian noticed the answer and fixed the Wikipedia page. (Constant vigilance is necessary, however; every couple of months some aspiring computer science expert compares the pseudocode on Wikipedia either with Sedgewick's buggy slide or with one of the many buggy implementations floating about the internet, and "corrects" the Wikipedia page.)
The beauty of Heap's algorithm is the simplicity of enumerating the swaps. The algorithm is similar to a lexicographic algorithm, in reverse, in that it always does a complete permutation cycle of a prefix of the elements before changing the next element. So the higher index of the swapped elements follows a kind of factorial variant on the ruler function, in which element i appears every i! iterations (except for the iterations which are multiples of a larger factorial). The lower index follows an only slightly more complex pattern: for prefixes with an odd number of elements, the lower index is always 0; for prefixes with an even number of elements, the lower index starts at 0 and is sequentially incremented. So, for example, in the case of a four element permutation sequence, the swaps are:
lower:  0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 2 0 0 0 0 0
higher: 1 2 1 2 1 3 1 2 1 2 1 3 1 2 1 2 1 3 1 2 1 2 1
(Note that the element with index 3 appears as the larger index every 3! == 6 iterations, and the element with index 2 appears as the larger index every 2! == 2 iterations, except the ones which have already been taken by index 3.)
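The swap pattern just described is easy to turn into a short program. Here is a minimal Python sketch of the same recursion, added purely for comparison (the function names are my own, and this is separate from the Clojure program discussed next):

```python
def heap_swaps(n):
    """Yield the (lower, higher) index pairs of Heap's algorithm for n elements."""
    if n < 2:
        return
    for i in range(n):
        if i > 0:
            # even-length prefix: the lower index increments with i;
            # odd-length prefix: the lower index is always 0
            yield (i - 1, n - 1) if n % 2 == 0 else (0, n - 1)
        yield from heap_swaps(n - 1)

def heap_permutations(seq):
    """Lazily produce all permutations, each a single swap away from the last."""
    v = list(seq)
    yield tuple(v)
    for i, j in heap_swaps(len(v)):
        v[i], v[j] = v[j], v[i]
        yield tuple(v)
```

Running `heap_swaps(4)` reproduces exactly the 23 index pairs tabulated above, and `heap_permutations('abcd')` yields the same 24 permutations as the Clojure program, in the same order.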
This sequence of pairs of indices is very easy to produce recursively, but one really wants it as a sequence. There is a non-recursive algorithm which effectively involves maintaining an explicit stack, but since the published algorithms tend to mutate this stack, I chose to use the recursive algorithm. In many Schemes, one would use some kind of continuation to turn the recursive algorithm into a sequence, but since clojure doesn't have continuations (as far as I could see), I did it with nested calls to mapcat, the same approach as is taken by clojure's tree-seq standard library function.
Once you have the list of pairs of indices to swap, they can be repetitively (and lazily) applied to permutations by using reductions, which generates the result of successively applying a sequence of values to a seed value, using an arbitrary binary function. You can see that in the last line, where reductions is applied to the sequence generated by swaps using the swap-v function to turn each successive permutation into the next one.
I hope that's enough narrative to explain the (possibly non-idiomatic) clojure program which I came up with:
(letfn [(even-swaps [n]
(if (= n 2) [[0 1]]
(drop 1 (mapcat (fn [i] (cons [(- i 1) (- n 1)]
(odd-swaps (- n 1))))
(range n)))))
(odd-swaps [n]
(drop 1 (mapcat (fn [i] (cons [0 (- n 1)]
(even-swaps (- n 1))))
(range n))))
(swaps [n]
(cond
(< n 2) '()
(even? n) (even-swaps n)
:else (odd-swaps n)))
(swap-v [v [i j]] (assoc v i (v j) j (v i)))]
(defn heap [sq]
(let [v (vec sq)]
(reductions swap-v v (swaps (count v))))))
Sample output:
user => (run! prn (heap '(a b c d)))
[a b c d]
[b a c d]
[c a b d]
[a c b d]
[b c a d]
[c b a d]
[d b a c]
[b d a c]
[a d b c]
[d a b c]
[b a d c]
[a b d c]
[a c d b]
[c a d b]
[d a c b]
[a d c b]
[c d a b]
[d c a b]
[d c b a]
[c d b a]
[b d c a]
[d b c a]
[c b d a]
[b c d a] | {
"domain": "cs.stackexchange",
"id": 11558,
"tags": "functional-programming, permutations"
} |
How to plot in MATLAB the PSD of two signals with different bandwidths | Question: I would like to plot the power spectral density (PSD) of this signal
$$y(t) = x(t) + i(t)$$
where both $x(t)$ and $i(t)$ are binary phase shift keying (BPSK) signals with baseband bandwidths $W_x/2$ and $W_i/2$, respectively.
Assuming rectangular pulse shaping, each signal will have a $\operatorname{sinc}^2$-shaped PSD, which is centered around $f_x$ for $x(t)$, and around $f_i$ for $i(t)$.
Assuming that $f_i-\frac{W_i}{2} > f_x-\frac{W_x}{2}$ and $f_i+\frac{W_i}{2} < f_x+\frac{W_x}{2}$, how can I generate the signals $x(t)$ and $i(t)$ in MATLAB, and compare the PSD of $y(t)$ with the PSD of each of $x(t)$ and $i(t)$?
Parameters
N=100;
%Bandwidth
W_x = 200*10^6;
W_i = 50*10^6;
%sampling time
T_x = 1/(2*W_x);
T_i = 1/(2*W_i);
%time axis (these are of different lengths!)
t_x = T_x.*(0:N-1);
t_i = T_i.*(0:N-1);
%carrier frequencies
fx = 100*10^6;
fi = 150*10^6;
I am stuck here. How can I continue from here?
Answer: The spectrum of a BPSK signal has a sinc function envelope. That's not bandlimited and falls off very slowly with frequency, so you can't easily sample it without getting a significant amount of aliasing unless you choose a VERY high sample rate.
If you just want to see qualitatively what's happening, the code below should work. If you want better than that, you need to define exactly what level of precision is required and adjust signal length and sample rate accordingly.
%% PSD of a BPSK signal
N=100;
%Bandwidth
W_x = 200*10^6;
W_i = 50*10^6;
%sampling time
T_x = 1/(2*W_x);
T_i = 1/(2*W_i);
%time axis (these are of different lengths!)
t_x = T_x.*(0:N-1);
t_i = T_i.*(0:N-1);
%carrier frequencies
fx = 100*10^6;
fi = 150*10^6;
%% choose reasonable parameters
n0 = 5e6; % signal length in samples
fs = 5e9; % sample rate in Hz
t = (0:n0-1)'/fs; % time axis
% create the modulation signals
Lx = fs/W_x; % divider
Li = fs/W_i;
nx = n0/Lx; % length of modulation signal X
ni = n0/Li;
% binary random sequence +1, -1
modx = sign(rand(nx,1) -0.5);
modi = sign(rand(ni,1)-0.5);
% upsample from modulation rate to analysis rate
y = ones(Lx,1)*(modx'); modxUp = y(:);
y = ones(Li,1)*(modi'); modiUp = y(:);
% build the time domain signals
x0 = cos(2*pi*fx*t).*modxUp;
i0 = cos(2*pi*fi*t).*modiUp;
xall = [x0 i0 x0+i0];
% spectral analysis
nfft = 2^14;
psd = pwelch(xall,hanning(nfft));
% plot it
clf;
freqAxis = (0:nfft/2)'*fs./nfft;
plot(freqAxis,10*log10(psd));
xlabel('Frequency in Hz');
grid('on');
ylabel('level in dB');
legend('X','I','X+I');
set(gca,'ylim',[-60 25]);
set(gca,'xlim',[0 fs/4]); | {
"domain": "dsp.stackexchange",
"id": 11412,
"tags": "matlab, digital-communications, power-spectral-density"
} |
Intersection of two incomplete DFAs | Question: Let's assume that I have the following two automata, A1 and A2:
I have to do the intersection of the two, so I did this :
My question is related to the "∅". If I have two incomplete automata without dead states, and for a given state one automaton has a transition on some symbol but the other doesn't, what should I do?
For example, if I am in state 13 in the transition table and I read "b", should I put ∅, or 4 and make it non-final?
Answer: To answer the question, there are two different ways to solve this.
By adding a dead state whenever you have an incomplete automaton. By doing so, you will no longer have to worry about what to do if one automaton has a transition from a specific state but not the other.
You can still do the intersection of the two without adding dead states.
In this example, if you are focusing on states 1 and 3 with the symbol 'b':
- From A1, you have no transition: δ(1,b) = ∅
- From A2, you have a transition: δ(3,b) = 4
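This rule amounts to a product construction that only creates a transition when both component DFAs define one. A minimal Python sketch, where the transition tables are toy stand-ins (the actual A1 and A2 are only given as images):

```python
from collections import deque

def product_dfa(delta1, start1, delta2, start2):
    """Intersection of two possibly-partial DFAs via product construction.

    deltaN maps (state, symbol) -> state; a missing key means the transition
    is undefined.  The product gets a transition on a symbol only when BOTH
    components have one -- otherwise the entry is the empty set, i.e. omitted.
    """
    start = (start1, start2)
    delta, seen, todo = {}, {start}, deque([start])
    while todo:
        p, q = todo.popleft()
        # symbols defined from p in delta1 AND from q in delta2
        common = ({s for (st, s) in delta1 if st == p}
                  & {s for (st, s) in delta2 if st == q})
        for s in common:
            nxt = (delta1[(p, s)], delta2[(q, s)])
            delta[((p, q), s)] = nxt
            if nxt not in seen:
                seen.add(nxt)
                todo.append(nxt)
    return start, delta
```

For instance, with `delta1 = {(1, 'a'): 2}` (no 'b' transition from state 1) and `delta2 = {(3, 'a'): 4, (3, 'b'): 4}`, the product state (1, 3) gets an 'a'-transition to (2, 4) and no 'b'-transition at all, which is exactly the ∅ entry in the table.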
Since one has a transition but the other doesn't, the resulting automaton will have the transition: δ(13,b) = ∅. Therefore, it is indeed acceptable to put ∅. | {
"domain": "cs.stackexchange",
"id": 13069,
"tags": "automata, finite-automata"
} |
What are Grassmann-Plucker relations? | Question: In Duality, matroids, qubits, twistors and surreal numbers (recently submitted!) they
show that via the Grassmann-Plucker relations, the various apparent unrelated concepts, such as duality, matroids, qubits, twistors and surreal numbers are, in fact, deeply connected.
The paper includes many interesting topics which I am not well versed in (Grassmannian, Plucker Embedding, Hopf Map, Matroids, etc) & am wondering if it might be possible that someone could explain Grassmann-Plucker relations & how they are used in a quantum context?
Answer: (This answer is given from the point of view of the theory of quantization, in which quantum systems are described by means of a quantization map of a classical phase space into a quantum space. The Plucker embedding will be described as a special case of such a map. This case has many applications in quantum computation).
Classical dynamics takes place on manifolds. For example, the dynamics of a particle moving on a straight line is completely determined by its initial position $x$ and momentum $p$ (or equivalently velocity). The set of initial parameters needed to determine the dynamics, i.e., to solve the equations of motion, is known as the system's phase space. In the above case it is $\mathbb{R}^2$, i.e., the two-dimensional vector space of all possible values of the position and the momentum. The dynamics is generated by functions on the phase space called Hamiltonians $H(p, q)$, through Hamilton's equations of motion:
$$ \frac{dx}{dt} = \frac{\partial H}{\partial p}$$
$$ \frac{dp}{dt} = -\frac{\partial H}{\partial x}$$
($t$ = time). The Hamiltonian corresponding to free motion is given by
$$ H(p, q) = \frac{1}{2m} p^2$$
Any reasonable function on the phase space can serve as a classical Hamiltonian. For example, the function:
$$ H(p, q) = p^2 + x^2$$
is the classical Hamiltonian of the Harmonic oscillator (which is the basis of continuous variable models.)
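To see Hamilton's equations in action for this Hamiltonian, here is a small numerical sketch (the step size and the choice of integrator are mine, purely for illustration). For $H(p, x) = p^2 + x^2$ the equations give $\dot{x} = 2p$ and $\dot{p} = -2x$, and a symplectic Euler step keeps the trajectory close to a level curve of $H$:

```python
def symplectic_euler_sho(x0, p0, dt=1e-3, steps=10_000):
    # Hamilton's equations for H = p^2 + x^2:
    #   dx/dt =  dH/dp =  2p
    #   dp/dt = -dH/dx = -2x
    x, p = x0, p0
    for _ in range(steps):
        p -= 2.0 * x * dt   # update the momentum first ...
        x += 2.0 * p * dt   # ... then the position with the new momentum
    return x, p
```

Starting from $(x, p) = (1, 0)$, the energy $p^2 + x^2$ stays within a fraction of a percent of its initial value over the whole run, reflecting the fact that $H$ is conserved along the exact flow.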
Phase spaces (i.e., sets of initial data) need not be vector spaces. This can happen, for example, if the particle's position is confined: a particle in a box, say, or a particle with rotational degrees of freedom, in which the angular position is confined to lie on a sphere. In all the above cases the particle's momentum is not confined and can assume any value, so the phase space has infinite volume, as it has unbounded directions.
Quantum systems (including all quantum systems used in quantum computing, such as qubits, qudits, continuous variable models, toric codes, etc.) can be described by a classical system + a procedure of quantization, in which the phase space geometrical manifold is traded by a quantum system Hilbert space and the Hamiltonian functions are traded by operators on the Hilbert space. There is no unique procedure applicable to all kinds of systems, different quantization procedures often give slightly different results, but nevertheless, I'll describe one of these procedures which is certainly applicable at least when the quantum Hilbert spaces are finite dimensional (such as the case of qudits).
First, let me remark that a Hilbert space does not describe the quantum mechanical set of pure states because in quantum mechanics there is no relevance to the overall magnitude and phase of a state vector; the pure states are described by rays; thus, we are talking about a projective Hilbert space which is a Hilbert space with an equivalence relation:
$$|\Psi\rangle \sim c |\Psi\rangle, \quad c \in \mathbb{C}, \quad c \ne 0 $$
When the Hilbert space is finite dimensional, the projective Hilbert spaces are called projective vector spaces or simply projective spaces; for example, the projective vector space corresponding to an $n$-dimensional complex vector space is called a complex projective space and denoted by $P(\mathbb{C}^n) \cong \mathbb{C}P^{n-1}$ (its dimension is $n-1$, one dimension less due to the equivalence relation).
The quantization procedure in this case reduces to an embedding of a classical phase space $M$ into a quantum space $Q$ of states, which is an appropriate projective vector space:
$$M \overset{i}{\rightarrow} Q = \mathbb{C}P^{n-1}$$
In each quantization method, there is a recipe of how given a classical Hamiltonian function, one can construct a corresponding quantum operator (at least for a certain class of functions).
When the Hilbert spaces are finite dimensional, such as in the qudit case, the corresponding phase spaces have finite volume.
One of the most amazing things in the above quantization procedure is that in the case of a qudit, the complex projective space is also the classical phase space. Please see Ashtekar and Schilling.
This does not mean that quantum mechanics is equivalent to classical mechanics. It only means that the space of classical pure states is the same as the space of quantum pure states. The difference lies in the process of measurement.
Let me remark that the qudit is a representative case where the dimension of the quantum Hilbert space is finite; in this case the volume of the classical phase space is also finite. This is a general principle.
The above complete correspondence breaks in cases other than a single qudit. For example, for a set of two $n$-dimensional qudits, the phase space is $M = \mathbb{C}P^{n-1} \times \mathbb{C}P^{n-1}$ while the quantum space is $Q = \mathbb{C}P^{n^2-1}$ (the projectivization of the $n^2$-dimensional joint Hilbert space). The quantization map $M \overset{i}{\rightarrow} Q$ in this case is a special case of the Segre embedding mentioned in AHusain's answer.
Another case with a finite-dimensional Hilbert space is that of fermions. A set of $k$ fermions living in an $n \ge k$ dimensional Hilbert space can assume only certain entangled state vectors which are fully antisymmetric (because fermions cannot be in the same state); for example, a set of two fermions ($k=2$) on an $n=4$-dimensional vector space can assume only the following state vectors
$$ |\Psi\rangle = c_{12} v_1\wedge v_2 + c_{13} v_1\wedge v_3 + c_{14} v_1\wedge v_4 + c_{23} v_2\wedge v_3 + c_{24} v_2\wedge v_4 + c_{34} v_3\wedge v_4 $$
(The wedge $\wedge$ is the antisymmetric tensor product: $v_i \wedge v_j = v_i \otimes v_j - v_j \otimes v_i$)
The complex dimension of this vector space is $6$ and of the corresponding projective vector space is $5$ (the real dimension is 10).
The classical phase space of the above set of fermions can be obtained as follows: take a fixed fermionic state, for example:
$$ |\Psi\rangle = v_1\wedge v_2 $$
The vectors $v_i$ are 4 dimensional; the phase space is the orbit of the action of the unitary group $U(4)$ on this fixed vector:
$$ g \cdot |\Psi\rangle = gv_1\wedge gv_2, \quad g \in U(4) $$
Now, if $g$ acts only within the two-dimensional subspace spanned by $v_3$ and $v_4$, it clearly does not change the fermion state; and if $g$ acts only within the two-dimensional subspace spanned by $v_1$ and $v_2$, it also does not change the fermion state, because it only changes the basis. Thus there is a subgroup $U(2) \times U(2)$ which does not change the initial state, and the phase space in this case is given by:
$$Gr(2, 4) = \frac{U(4)}{U(2) \times U(2)}$$
This manifold is called the complex Grassmann manifold. The dimension in our case is: $4^2-2^2-2^2 = 8$. The quantization map, i.e., the embedding:
$$ Gr(2, 4) \overset{i}{\rightarrow} \mathbb{C}P^{5}$$
is called the Plucker embedding (this term is applicable in the general case, for arbitrary $k$ and $n$). It is clear from comparing the dimensions ($8 < 10$) that not every state in the projective Hilbert space can be obtained from a point of the Grassmannian, i.e., from a unitary rotation of a fixed initial state. Thus, if we take a general element in the projective space $\mathbb{C}P^{5}$, there will be certain relations it must satisfy in order to be a unitary rotation of a fixed element; these are called the "Plucker relations"
In our example there is a single Plucker relation:
$$c_{12} c_{34} -c_{13} c_{24} + c_{14} c_{23} = 0$$
(These relations are necessarily homogeneous because both manifolds are projective. Please see for example the following article by Smirnov, where the Plucker embedding is explained in some detail, the above equation appears in example 2.11).
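The relation is easy to verify numerically: for a decomposable bivector $v \wedge w$ the coefficients are $c_{ij} = v_i w_j - v_j w_i$, and the quadric vanishes identically. A small Python check, added for illustration (indices run 0-3 here rather than 1-4):

```python
import random

def plucker_coordinates(v, w):
    # c_ij = v_i w_j - v_j w_i for the decomposable bivector v ∧ w
    return {(i, j): v[i] * w[j] - v[j] * w[i]
            for i in range(4) for j in range(i + 1, 4)}

v = [random.uniform(-1, 1) for _ in range(4)]
w = [random.uniform(-1, 1) for _ in range(4)]
c = plucker_coordinates(v, w)
quadric = c[(0, 1)] * c[(2, 3)] - c[(0, 2)] * c[(1, 3)] + c[(0, 3)] * c[(1, 2)]
# quadric vanishes (up to floating-point round-off) for every choice of v and w
```

Conversely, a generic point of $\mathbb{C}P^5$ violates the relation, which is what distinguishes the 8-real-dimensional Grassmannian inside the 10-real-dimensional projective space.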
One use of the Grassmann manifold is in the solution of the Schrödinger equation for fermions. Instead of looking for the ground state in the entire Hilbert space, we can formulate a variational problem running only over vectors belonging to the Grassmannian. This procedure, known as the Hartree-Fock method, results in an approximate solution. (This point was also mentioned in AHusain's answer.)
Please see the following article by Karle and Pachos analyzing the geometry of the Grassmannian $Gr(2,4)$ from the holonomic quantum computation point of view.
The Grassmann manifold appears also as the ground state manifold of stabilizer codes, please see for example the following article by Zheng and Brun. | {
"domain": "quantumcomputing.stackexchange",
"id": 430,
"tags": "resource-request"
} |
How would you find the mass in grams, if you only know the number of particles of a portion of the formula? | Question: If you were given a question like below:
What is the mass in grams of a sample of $\ce{Fe2(SO4)3}$ that
contains $3.59 \times 10^{23}$ sulfate ions, $\ce{SO4^{2−}}$ ? The molar mass
of $\ce{Fe2(SO4)3}$ is $399.91 \, \text{g}/\text{mol}$.
How would you go about solving a question like that?
I'm confused about how to use only the quantity of sulfate ions to find the mass in grams of the whole sample, given it's grams per mole.
Answer: The formula $\ce{Fe2(SO4)3}$ tells us that 3 sulfate ions, $\ce{SO4^{2-}}$, exist for every formula unit of $\ce{Fe2(SO4)3}$.
$\frac{\ce{1 Fe2(SO4)3 formula unit}}{\ce{3 SO4^{2-} ions}} * 3.59*10^{23}\ \ce{SO4^{2-} ions}$ can be used to find the number of formula units of ferric sulfate.
You can then use $\frac{1\ \ce{mole}}{6.022*10^{23}\ \ce{formula units}}$ to find the moles of ferric sulfate.
You can then use the given information to solve for grams of ferric sulfate.
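Putting the three steps together numerically (a sketch using the numbers from the question; the variable names are mine):

```python
AVOGADRO = 6.022e23                        # formula units per mole

sulfate_ions = 3.59e23
formula_units = sulfate_ions / 3           # 3 sulfate ions per Fe2(SO4)3
moles = formula_units / AVOGADRO           # formula units -> moles
grams = moles * 399.91                     # moles -> grams via the molar mass
```

which gives roughly 79.5 g of ferric sulfate.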
Write a comment if there's any ambiguity. | {
"domain": "chemistry.stackexchange",
"id": 2299,
"tags": "molecules, ions, mole"
} |
Canonical Commutator Relation from Translation operator | Question: In the book of Sakurai & Napolitano, the Canonical Commutation relation is derived using the unitary translation operator:
I agree with the derivation in the book until I reach this step, where Sakurai simply approximates the translation operator on the RHS by the identity (1.206):
$[\hat{x}, \hat{T}(\delta x)] = \delta x \hat{T}(\delta x) \approx \delta x \hat{I}$
I also understand the rest of the reasoning from there, but this approximation step does not sit right with me. As far as I understand it, the approximation means taking the limit as $\delta x \rightarrow 0$, but then the limit should be taken over the whole equation, and all displacements $\delta x$ should tend to zero, right?
Any help would be much appreciated.
Answer: Looking back on this, while the derivation in Sakurai is not rigorous, I see it can be fixed with a quick Taylor expansion. Recalling:
$\hat{T}(\delta \vec{r})=e^{-i\hat{\vec{K}}\cdot\delta\vec{r}}$
so in this case where $\delta\vec{r}=\delta x$, we have $\hat{T}(\delta x) = \hat{I}-i\hat{K}_x\delta x + O(\delta x^2)$
hence yielding:
$[\hat{x}, \hat{T}(\delta x)] = [\hat{x}, \hat{I}-i\hat{K}_x\delta x + O(\delta x^2)] = -i\delta x[\hat{x}, \hat{K}_x] + O(\delta x^2) = \delta x \hat{T}(\delta x)$
The $\delta x$ cancel on both sides and we can now safely take a limiting process for $\delta x \rightarrow 0$ yielding:
$\lim_{\delta x \rightarrow 0} [\hat{x}, \hat{K}_x] =[\hat{x}, \hat{K}_x] = \lim_{\delta x \rightarrow 0} i(\hat{I}+ O(\delta x))= i\hat{I} .$ | {
"domain": "physics.stackexchange",
"id": 98204,
"tags": "quantum-mechanics, operators, momentum, commutator"
} |
File system manipulation helper | Question: What do you think about this file system manipulation helper? There is an utility class Folder which I can use to define directory structure of my app:
static class Folders
{
public static Folder Bin =>
new Folder(Assembly.GetExecutingAssembly());
public static Folder App => Bin.Up();
public static Folder Docs => App.Down("Docs");
public static Folder Temp => App.Down("Temp");
}
Then I can do some manipulations in an easy way:
static void Main(string[] args)
{
Folders.Temp.Create();
Folders.Bin.Run("scan.exe", "-no-ui");
Folders.Temp.CopyTo(Folders.Docs, f => f.EndsWith(".pdf"));
Folders.Temp.Empty();
}
Here is the library class Folder used above:
public class Folder
{
readonly string _path;
public Folder(Assembly assembly)
: this(Path.GetDirectoryName(assembly.Location))
{
}
public Folder(string path)
{
_path = path;
}
public override string ToString() => _path;
public static implicit operator string(Folder folder) =>
folder.ToString();
public IEnumerable<Folder> Folders =>
Directory.GetDirectories(this, "*", SearchOption.TopDirectoryOnly)
.Select(p => new Folder(p));
public IEnumerable<Folder> AllFolders =>
Directory.GetDirectories(this, "*", SearchOption.AllDirectories)
.Select(p => new Folder(p));
public IEnumerable<string> Files =>
Directory.GetFiles(this, "*.*", SearchOption.TopDirectoryOnly);
public IEnumerable<string> AllFiles =>
Directory.GetFiles(this, "*.*", SearchOption.AllDirectories);
public Folder Up() =>
new Folder(
new DirectoryInfo(this)
.Parent.FullName);
public Folder Down(string folderName) =>
new Folder(
Path.Combine(
this,
folderName));
public void Create() =>
Directory.CreateDirectory(this);
public void Empty()
{
var directoryInfo = new DirectoryInfo(this);
if (!directoryInfo.Exists)
return;
foreach (var file in AllFiles)
File.Delete(file);
foreach (var folder in Folders)
Directory.Delete(folder, true);
}
public void CopyTo(Folder destination) =>
CopyTo(destination, file => true);
public void CopyTo(Folder destination, Func<string, bool> filter)
{
//Create directories
foreach (string directoryPath in AllFolders)
Directory.CreateDirectory(directoryPath.Replace(this, destination));
//Copy all the files & replaces any files with the same name
foreach (string filePath in AllFiles.Where(filter))
File.Copy(filePath, filePath.Replace(this, destination), true);
}
public void Run(string exe, string args = "")
{
string defaultCurrentDirectory = Environment.CurrentDirectory;
Environment.CurrentDirectory = this;
try
{
var process = new Process();
process.StartInfo.FileName = exe;
process.StartInfo.Arguments = args;
process.Start();
process.WaitForExit();
if (process.ExitCode != 0)
throw new InvalidOperationException(
$"{exe} process failed with exit code {process.ExitCode}.");
}
finally
{
Environment.CurrentDirectory = defaultCurrentDirectory;
}
}
}
Answer: I'm always a fan of using interfaces wherever possible so that unit tests can mock out my component easily. So let's create a couple of interfaces:
public interface IFolder
{
IEnumerable<IFolder> Folders { get; }
IEnumerable<IFolder> AllFolders { get; }
IEnumerable<string> Files { get; }
IEnumerable<string> AllFiles { get; }
IFolder Up();
IFolder Down(string folderName);
void Create();
void Empty();
void CopyTo(IFolder destination);
void CopyTo(IFolder destination, Func<string, bool> filter);
void Run(string exe, string args = "");
}
and
internal interface IFolders
{
IFolder Bin { get; }
IFolder App { get; }
IFolder Docs { get; }
IFolder Temp { get; }
}
I also like to re-use constants. And I added another implicit operator to go with your new constructor:
public class Folder : IFolder
{
private const string DirectoryWildcard = "*";
private const string FileWildcard = "*.*";
private readonly string _Path;
public Folder(Assembly assembly)
: this(Path.GetDirectoryName(assembly.Location))
{
}
public Folder(string path)
{
this._Path = path;
}
public override string ToString() => this._Path;
public static implicit operator string(Folder folder) => folder.ToString();
public static implicit operator Folder(string path) => new Folder(path);
public static implicit operator Folder(Assembly assembly) => new Folder(assembly);
public IEnumerable<IFolder> Folders => Directory
.GetDirectories(this, DirectoryWildcard, SearchOption.TopDirectoryOnly)
.Select(path => new Folder(path));
public IEnumerable<IFolder> AllFolders => Directory
.GetDirectories(this, DirectoryWildcard, SearchOption.AllDirectories)
.Select(path => new Folder(path));
public IEnumerable<string> Files => Directory.GetFiles(this, FileWildcard, SearchOption.TopDirectoryOnly);
public IEnumerable<string> AllFiles => Directory.GetFiles(this, FileWildcard, SearchOption.AllDirectories);
public IFolder Up() => new Folder(new DirectoryInfo(this).Parent.FullName);
public IFolder Down(string folderName) => new Folder(Path.Combine(this, folderName));
public void Create() => Directory.CreateDirectory(this);
public void Empty()
{
var directoryInfo = new DirectoryInfo(this);
if (!directoryInfo.Exists)
{
return;
}
foreach (var file in AllFiles)
{
File.Delete(file);
}
foreach (var folder in Folders)
{
Directory.Delete(folder.ToString(), true);
}
}
public void CopyTo(IFolder destination) => this.CopyTo(destination, file => true);
public void CopyTo(IFolder destination, Func<string, bool> filter)
{
// Create directories
foreach (var folder in this.AllFolders)
{
Directory.CreateDirectory(folder.ToString().Replace(this, destination.ToString()));
}
// Copy all the files & replaces any files with the same name
foreach (var filePath in this.AllFiles.Where(filter))
{
File.Copy(filePath, filePath.Replace(this, destination.ToString()), true);
}
}
public void Run(string exe, string args = "")
{
var defaultCurrentDirectory = Environment.CurrentDirectory;
Environment.CurrentDirectory = this;
try
{
var process = new Process { StartInfo = { FileName = exe, Arguments = args } };
process.Start();
process.WaitForExit();
if (process.ExitCode != 0)
{
throw new InvalidOperationException(
$"{exe} process failed with exit code {process.ExitCode}.");
}
}
finally
{
Environment.CurrentDirectory = defaultCurrentDirectory;
}
}
}
And finally, the Folders.cs implementation:
internal class Folders : IFolders
{
private readonly Assembly _Assembly;
public Folders(Assembly assembly = null)
{
this._Assembly = assembly ?? Assembly.GetExecutingAssembly();
}
public static IFolders Default => new Folders();
public IFolder Bin => new Folder(this._Assembly);
public IFolder App => this.Bin.Up();
public IFolder Docs => this.App.Down("Docs");
public IFolder Temp => this.App.Down("Temp");
}
Now that it's not static, you'll either have to create a new one with a particular assembly, or Folders.Default will have the current assembly as per original design. | {
"domain": "codereview.stackexchange",
"id": 18186,
"tags": "c#, file-system"
} |
Using sed regular expression to extract domain name from file | Question: I'm learning regex with sed to extract the last field from a file named "test". The method I'm trying gives the desired output.
Please suggest if the method I'm trying is an effective way of doing it. Also, when should we use the "-e" option with sed? (Please give an example; I couldn't find any.)
~# ] cat test
example.com. 4 IN NS b.iana-servers.net.
50times.com. 21556 IN NS ns1.50times.com.
example.com. 4 IN NS a.iana-servers.net.
~# ] cat test | sed -r 's/^[[:alnum:]]*.[[:alnum:]]*.?[a-z]*.[[:blank:]]+[0-9]+[[:blank:]]+IN[[:blank:]]+[A-Z]+[[:blank:]]+//g' | sed -r 's/\.*.$//'
b.iana-servers.net
ns1.50times.com
a.iana-servers.net
Answer: When processing tabular data in columns, awk is often a more appropriate tool to use. The equivalent command would be
awk '{ sub(/\.$/, "", $NF); print $NF }' test
… which I think is more readable.
Explanation:
NF is the number of fields: for this text, 5.
$NF is the content of the last (5th) field.
sub(/\.$/, "", $NF) strips the trailing dot from the last field.
{ commands } executes the commands for every line in the file. | {
"domain": "codereview.stackexchange",
"id": 14742,
"tags": "beginner, regex, linux, sed"
} |
Smart Mirror utilising Python APIs | Question: I am making an object-oriented Python project for a smart mirror running on a Raspberry Pi. The code receives input from APIs, formats the data and displays it on the mirror. I would like to know what I can improve on. Is my code up to industry standards?
Code:
from Tkinter import *
import locale
import threading
import time
import requests
import feedparser
import json
import traceback
import urllib2
import praw
from PIL import Image, ImageTk
from contextlib import contextmanager
#Font Variables
font_type = 'Helvetica'
font_colour = "White"
xlarge_text_size = 48
large_text_size = 30
medium_text_size = 20
small_text_size = 12
xsmall_text_size = 8
#News Variables
NEWS_COUNTRY_CODE = 'au'
#Weather Variables
READ_API_KEY = 'D71A7607GOWJSZ6D'
CHANNEL_ID = 502804
#Reddit Variables
SUBREDDIT_SELECTION = 'technology'
class Clock(Frame):
def __init__(self, parent, *args, **kwargs):
Frame.__init__(self, parent, bg='black')
#Time Label
self.time1 = ''
self.timeLbl = Label(self, font=(font_type, xlarge_text_size), fg=font_colour, bg="black")
self.timeLbl.pack(side=TOP, anchor=E)
#Day Of the Week label
self.weekday1 = ''
self.weekdayLbl = Label(self, font=(font_type,medium_text_size), fg=font_colour, bg="black")
self.weekdayLbl.pack(side=TOP,anchor=E)
#Date Label
self.date1 = ''
self.dateLbl = Label(self,font=(font_type,medium_text_size), fg=font_colour, bg="black")
self.dateLbl.pack(side=TOP,anchor=E)
self.tick()
def tick(self):
#Set Clock
time2 = time.strftime('%H:%M')
if time2 != self.time1:
            self.time1 = time2
self.timeLbl.config(text=time2)
self.timeLbl.after(200, self.tick)
# Set Day of the Week
weekday2 = time.strftime('%A')
if weekday2 != self.weekday1:
self.weekday1 = weekday2
self.weekdayLbl.config(text=weekday2)
# Set date
date2 = time.strftime("%d %b, %Y")
if date2 != self.date1:
self.date1 = date2
self.dateLbl.config(text=date2)
class Weather(Frame):
def __init__(self, parent, *args, **kwargs):
Frame.__init__(self, parent, bg='black')
self.temperature = ''
self.humidity = ''
self.uv = ''
self.apparenttemp = ''
self.icon = ''
self.degreeFrm = Frame(self, bg="black")
self.degreeFrm.pack(side=TOP, anchor=W)
self.temperatureLbl = Label(self.degreeFrm, font=('Helvetica', xlarge_text_size), fg="white", bg="black")
self.temperatureLbl.pack(side=LEFT, anchor=N)
self.uvLbl = Label(self, font=('Helvetica', large_text_size), fg="white", bg="black")
self.uvLbl.pack(side=TOP, anchor=W)
self.humidityLbl = Label(self, font=('Helvetica', medium_text_size),fg="white",bg="black")
self.humidityLbl.pack(side=TOP, anchor=W)
self.apparenttempLbl = Label(self, font=('Helvetica', medium_text_size), fg="white", bg="black")
self.apparenttempLbl.pack(side=TOP, anchor=W)
self.get_local_weather()
def get_local_weather(self):
try:
degree_sign = u'\N{DEGREE SIGN}'
tempval = ''
humidval = ''
uvval = ''
apptempval = ''
conn = urllib2.urlopen("http://api.thingspeak.com/channels/%s/feeds/last.json?api_key=%s" \
% (CHANNEL_ID, READ_API_KEY))
response = conn.read()
data = json.loads(response)
conn.close()
tempval = "%.2f%s" % (float(str(data['field1'])), degree_sign)
humidval = "%s%.2f%s" % ("Humidity ", float(str(data['field2'])), "%")
uvval = "%s%s" % ("UV Level ", int(data['field3']))
apptempval = "%s%.2f%s" % ("Feel's like ", float(str(data['field4'])), degree_sign)
if self.temperature != None:
self.temperature = tempval
self.temperatureLbl.config(text=tempval)
if self.humidity != None:
self.humidity = humidval
self.humidityLbl.config(text=humidval)
if self.uv != None:
self.uv = uvval
self.uvLbl.config(text=uvval)
if self.apparenttemp != None:
self.apparenttemp = apptempval
self.apparenttempLbl.config(text=apptempval)
except Exception as e:
traceback.print_exc()
print "Error: %s. Cannot get weather." % e
self.after(500, self.get_local_weather)
class News(Frame):
def __init__(self, parent, *args, **kwargs):
Frame.__init__(self, parent, *args, **kwargs)
self.config(bg='black')
self.title = 'News'
self.newsLbl = Label(self, text=self.title, font=('Helvetica', medium_text_size), fg="white", bg="black")
self.newsLbl.pack(side=TOP, anchor=W)
self.headlinesContainer = Frame(self, bg="black")
self.headlinesContainer.pack(side=TOP)
self.get_headlines()
def get_headlines(self):
try:
for widget in self.headlinesContainer.winfo_children():
widget.destroy()
if NEWS_COUNTRY_CODE == None:
headlines_url = "https://news.google.com/news?ned=au&output=rss"
else:
headlines_url = "https://news.google.com/news?ned=%s&output=rss" % NEWS_COUNTRY_CODE
feed = feedparser.parse(headlines_url)
for post in feed.entries[0:5]:
headline = NewsHeadline(self.headlinesContainer, post.title)
headline.pack(side=TOP, anchor=W)
except Exception as e:
traceback.print_exc()
print "Error: %s. Cannot get news." % e
self.after(600000, self.get_headlines)
class NewsHeadline(Frame):
def __init__(self, parent, event_name=""):
Frame.__init__(self, parent, bg='black')
image = Image.open("assets/Newspaper.png")
image = image.resize((25, 25), Image.ANTIALIAS)
image = image.convert('RGB')
photo = ImageTk.PhotoImage(image)
self.iconLbl = Label(self, bg='black', image=photo)
self.iconLbl.image = photo
self.iconLbl.pack(side=LEFT, anchor=N)
self.eventName = event_name
self.eventNameLbl = Label(self, text=self.eventName, font=('Helvetica', small_text_size), fg="white", bg="black")
self.eventNameLbl.pack(side=LEFT, anchor=N)
class Reddit(Frame):
def __init__(self, parent, *args, **kwargs):
Frame.__init__(self, parent, *args, **kwargs)
# Reddit Title Label
self.title = 'Reddit Top 1:'
self.redditLbl = Label(self, text=self.title, font=(font_type, medium_text_size), fg=font_colour, bg="black")
self.redditLbl.pack(side=TOP, anchor=W)
# Reddit article label
self.postContainer= Frame(self, bg="black")
self.postContainer.pack(side=TOP)
self.get_reddit_post()
def get_reddit_post(self):
try:
reddit = praw.Reddit(client_id='someinfo',
client_secret='someinfo', password='someinfo',
user_agent='someinfo', username='someinfor')
subreddit = reddit.subreddit(SUBREDDIT_SELECTION)
top_subreddit = subreddit.hot(limit=3)
for submission in top_subreddit:
if not submission.stickied:
top_post = Reddit(self.postContainer,"%s" % (submission.title))
top_post.pack(side=TOP, anchor =W)
except Exception as f:
traceback.print_exc()
print "Error: %s. This is a BIG REDDIT ERROR." % f
class FullscreenWindow:
def __init__(self):
self.tk = Tk()
self.tk.configure(background='black')
self.topFrame = Frame(self.tk, background = 'black')
self.bottomFrame = Frame(self.tk, background = 'black')
self.topFrame.pack(side = TOP, fill=BOTH, expand = YES)
self.bottomFrame.pack(side = BOTTOM, fill=BOTH, expand = YES)
self.state = False
self.tk.bind("<Return>", self.toggle_fullscreen)
self.tk.bind("<Escape>", self.end_fullscreen)
# clock
self.clock = Clock(self.topFrame)
self.clock.pack(side=RIGHT, anchor=N, padx=100, pady=60)
# weather
self.weather = Weather(self.topFrame)
self.weather.pack(side=LEFT, anchor=N, padx=100, pady=60)
# news
self.news = News(self.bottomFrame)
self.news.pack(side=LEFT, anchor=S, padx=100, pady=60)
# reddit
self.reddit = Reddit(self.bottomFrame)
self.reddit.pack(side = RIGHT, anchor=S, padx=100, pady=60)
def toggle_fullscreen(self, event=None):
self.state = not self.state # Just toggling the boolean
self.tk.attributes("-fullscreen", self.state)
return "break"
def end_fullscreen(self, event=None):
self.state = False
self.tk.attributes("-fullscreen", False)
return "break"
if __name__ == '__main__':
w = FullscreenWindow()
w.tk.mainloop()
Answer: Remove unused imports
You import threading, requests, contextmanager, and locale, but never use them.
Don't use wildcard imports.
Use import Tkinter as tk and then prefix tk classes and functions with tk. (eg: tk.Label(...), etc). PEP8 specifically recommends against wildcard imports. Even though many tkinter tutorials do it, the valid reasons spelled out by PEP8 still apply.
Use tkinter's font objects
If you're going to use custom fonts, create font objects and use them. The benefit of doing so is that it becomes trivial to change the fonts later (either later in coding time, or later in runtime).
For example:
from tkFont import Font
FONT = {
'xlarge': Font(family="Helvetica", size=48),
'large': Font(family="Helvetica", size=30),
'medium': Font(family="Helvetica", size=20),
'small': Font(family="Helvetica", size=12),
'xsmall': Font(family="Helvetica", size=8),
}
...
self.timeLbl = Label(..., font=FONT['xlarge'], ...)
self.weekdayLbl = Label(..., font=FONT['medium'], ...)
...
If you want the user to be able to make the font bigger or smaller at runtime, it's trivial to do so because you only have to modify the font rather than modify every widget that uses the font.
Use 'after' wisely
Your function tick is called every 200ms, but what it displays changes only once a minute. I can understand wanting it to be fairly accurate, but if you're off by a few seconds, does it really matter? At the very least, have it run once a second. That will still use considerably less CPU time than calling it 5 times a second.
Likewise, is it really necessary to update the weather data twice a second? Why not once every minute or every 5 minutes? Weather doesn't typically fluctuate much in such a short period of time.
Use more whitespace.
PEP8 gives good guidelines. For example, add two blank lines between each class.
Separate widget creation from widget layout
In my experience, GUI code is much easier to maintain over time when layout code is grouped together.
For example, instead of this:
self.degreeFrm = Frame(self, bg="black")
self.degreeFrm.pack(side=TOP, anchor=W)
self.temperatureLbl = Label(self.degreeFrm, font=('Helvetica', xlarge_text_size), fg="white", bg="black")
self.temperatureLbl.pack(side=LEFT, anchor=N)
self.uvLbl = Label(self, font=('Helvetica', large_text_size), fg="white", bg="black")
self.uvLbl.pack(side=TOP, anchor=W)
self.humidityLbl = Label(self, font=('Helvetica', medium_text_size),fg="white",bg="black")
self.humidityLbl.pack(side=TOP, anchor=W)
self.apparenttempLbl = Label(self, font=('Helvetica', medium_text_size), fg="white", bg="black")
self.apparenttempLbl.pack(side=TOP, anchor=W)
... I recommend doing it like this:
self.degreeFrm = Frame(self, bg="black")
self.temperatureLbl = Label(self.degreeFrm, font=('Helvetica', xlarge_text_size), fg="white", bg="black")
self.temperatureLbl.pack(side=LEFT, anchor=N)
self.uvLbl = Label(self, font=('Helvetica', large_text_size), fg="white", bg="black")
self.humidityLbl = Label(self, font=('Helvetica', medium_text_size),fg="white",bg="black")
self.apparenttempLbl = Label(self, font=('Helvetica', medium_text_size), fg="white", bg="black")
self.degreeFrm.pack(side=TOP, anchor=W)
self.uvLbl.pack(side=TOP, anchor=W)
self.humidityLbl.pack(side=TOP, anchor=W)
self.apparenttempLbl.pack(side=TOP, anchor=W)
In my opinion, this makes it much easier to see which widgets are grouped together in self and which are not. Plus, layout code is often interdependent -- if you change the way you layout one widget, you may have to change others in the same parent. Having them grouped makes this much easier.
Separate fetching data and displaying data
Consider get_local_weather: it has code both to fetch the data and to display the data. I recommend breaking that into two functions. This will make it easier to test your code. For example, you can write a test for fetching the data without requiring that the UI actually be created, and you can test the UI with some test data without having to actually fetch it.
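As a GUI-free illustration of that split (Python 3 syntax here; the field names follow the ThingSpeak response used in the question, everything else is hypothetical):

```python
import json

def parse_weather(payload):
    """Turn the raw JSON payload into a plain dictionary of numbers."""
    data = json.loads(payload)
    return {
        "temperature": float(data["field1"]),
        "humidity": float(data["field2"]),
    }

def format_weather(weather):
    """Turn the dictionary into display strings -- no widgets involved."""
    degree_sign = u"\N{DEGREE SIGN}"
    return {
        "temperature": "%.2f%s" % (weather["temperature"], degree_sign),
        "humidity": "Humidity %.2f%%" % weather["humidity"],
    }

# Both halves can now be tested with canned data: no network and no Tk needed.
sample = '{"field1": "21.5", "field2": "63.0"}'
print(format_weather(parse_weather(sample))["humidity"])  # Humidity 63.00%
```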
Create two functions: one that fetches the data and returns a dictionary, and then write a second function that takes the dictionary and updates the display:
def get_local_weather(self):
data = self.fetch_data()
self.update_ui(data) | {
"domain": "codereview.stackexchange",
"id": 30827,
"tags": "python, object-oriented, python-2.x, tkinter"
} |
Using a function to find the average for number of letters and words | Question: This is my code:
#include <stdio.h>
#include <ctype.h>
int average_of_let_wor();
int main(void)
{
double answer;
answer = average_of_let_wor();
printf("%.2lf", answer);
return 0;
}
int average_of_let_wor()
{
double numberOfLetters = 0;
double numberOfWords = 0;
int userInput;
double answer;
printf("please enter your input:\n");
while ((userInput = getchar()) != EOF)
{
if (userInput == ' ' || userInput == '\t' || userInput == '\n')
{
numberOfWords++;
continue;
}
else
numberOfLetters++;
}
answer = numberOfLetters/numberOfWords;
return answer;
}
Answer: First of all, be careful with types: to compute an average, you will need a floating point number. However, the numbers of letters and words are countable and can therefore use the type unsigned int, which is generally faster than floating point arithmetic.
Moreover, to detect a "space" character, you can use the standard function isspace from the header ctype.h.
Also, the variable answer is useless. You can get rid of it and simply return the answer on the same line you compute it.
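For a quick sanity check of the counting logic, here is the same algorithm in Python (note it shares the C version's behaviour of only counting a word when it is followed by whitespace, so input must end with a newline or space for the last word to count):

```python
def average_letters_per_word(text):
    letters = words = 0
    for ch in text:
        if ch.isspace():              # plays the role of isspace() from ctype.h
            words += 1
        else:
            letters += 1
    return float(letters) / words     # float division, like the (double) casts

assert average_letters_per_word("ab cde\n") == 2.5   # 5 letters, 2 separators
```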
Therefore, your function average_of_let_wor can be reduced to:
double average_of_let_wor()
{
unsigned int numberOfLetters = 0;
unsigned int numberOfWords = 0;
int userInput;
printf("please enter your input:\n");
while ((userInput = getchar()) != EOF)
{
if (isspace(userInput))
{
numberOfWords++;
}
else
{
numberOfLetters++;
}
}
return (double)numberOfLetters / (double)numberOfWords;
} | {
"domain": "codereview.stackexchange",
"id": 3118,
"tags": "c"
} |
Alkali atom - photon interaction in zero magnetic field | Question: An alkali atom has a single outer electron that interacts with incoming photons of the right wavelength (for alkalies it's in the visible & IR range). If there is an external magnetic field, the electron has a well defined quantization axis and the the incoming light can be separated into three components: linearly polarized and left/right-handed circular polarized parts. The interaction strength between the different atomic energy levels (levels described by different quantum numbers) and each of the polarization components is relatively straightforward to calculate (with the appropriate Clebsch-Gordan coefficients and 6-j symbols).
What is then the situation if there is no external magnetic field to provide the quantization axis? Is "polarization" meaningful from the atoms point of view? How one sets out to calculate the interaction strength for the different polarizations in the laboratory frame?
Answer: Basically, if there is no magnetic field, you are free to choose any quantization axis.
In theory, the quantization axis choice can be arbitrary, with various axis choices corresponding to different "coordinate choices" in the Hilbert space. However, when you have a magnetic field, the states corresponding to an axis different from the magnetic field direction are not eigenstates of the energy, and choosing the "right" axis makes the maths simpler. When you have no field, the different states are degenerate, and every choice is as simple as the others. | {
"domain": "physics.stackexchange",
"id": 56,
"tags": "quantum-mechanics, electromagnetism, quantum-optics"
} |
Accessing data of different layers in Costmap2DROS | Question:
I would like to access the data of the internal master_grid of the obstacle_layer. I am working in another layer and right now I am reading the values out of the layered_costmap_, which works fine like that:
costmap_2d::Costmap2D* costmap = layered_costmap_->getCostmap();
Also, if I would be within the obstacle_layer, I would access the data of its own master_grid like that:
unsigned char* master_array = master_grid.getCharMap();
But how would I access the master_grid of the obstacle_layer from another layer? Thanks!
UPDATE:
I think I found a working piece of code which answers like 95 % of my question. So the code searches through all the layers of the COSTMAP (actually it is a layered costmap of type costmap_2d::Costmap2DROS) and operates on the one which matches a predefined string (layer_sear_string_). But can somebody help me how to find the name of the layered costmap global_planner when running move_base?
std::vector<boost::shared_ptr<costmap_2d::Layer> >* plugins = COSTMAP->getLayeredCostmap()->getPlugins();
for (std::vector<boost::shared_ptr<costmap_2d::Layer> >::iterator pluginp = plugins->begin(); pluginp != plugins->end(); ++pluginp) {
boost::shared_ptr<costmap_2d::Layer> plugin = *pluginp;
if(plugin->getName().find(layer_search_string_)!=std::string::npos) {
boost::shared_ptr<costmap_2d::ObstacleLayer> costmap;
costmap = boost::static_pointer_cast<costmap_2d::ObstacleLayer>(plugin);
unsigned char* grid = costmap->getCharMap();
// do sth with it
}
}
Code found here.
Originally posted by Luke_ROS on ROS Answers with karma: 116 on 2014-06-03
Post score: 1
Original comments
Comment by David Lu on 2014-06-13:
Can you clarify your updated question? Which name are you looking for?
Comment by Luke_ROS on 2014-06-17:
When running move_base you get a global_planner and a local_planner. Both of them should be of type costmap_2d::Costmap2DROS, right? So I am looking for a way to access the top layered costmap of the global_planner from within a layer. (In order to finally access the master_grid of another layer).
Comment by David Lu on 2014-06-17:
To clarify terminology: In move_base, there is a global planner and a local planner, which operate on the global costmap and local costmap respectively (both of which are Costmap2DROS). I believe you are looking for a way to access the layered costmap object of the global costmap...
Comment by David Lu on 2014-06-17:
...so that you can access an individual layer's private costmap.
(There is no such thing as a "top layered costmap" and the layered costmap has a master costmap, but the individual layers aren't referred to as master costmaps.)
Comment by Luke_ROS on 2014-06-17:
Yes. That's spot on. Sorry. I got confused by the different terminologies of the paper and the actual code.
Answer:
Within any layer, you can access the layered costmap object via
protected:
LayeredCostmap* layered_costmap_;
You can use that object to get a particular layer with the code you linked above.
Originally posted by David Lu with karma: 10932 on 2014-06-17
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by Luke_ROS on 2014-06-26:
Thanks a lot. Your answer led me on the right path again. I simply needed to replace COSTMAP->getLayeredCostmap() with the layered_costmap_ since we still operate within the same Costmap2DROS structure. Problem solved. Many things learned.
Comment by aswin on 2015-01-09:
@Luke_ROS Did you get the above to work? I don't see a getCharMap() in costmap_2d::ObstacleLayer
Comment by aswin on 2015-01-09:
Updated here. Just need to replace ObstacleLayer with costmap_2d::CostmapLayer | {
"domain": "robotics.stackexchange",
"id": 18147,
"tags": "navigation, c++, costmap, move-base, costmap-2d"
} |
Can a fibre preserve the shape of the light being transported through it? | Question: Can some fibres retain the shape of the light being transported through them?
If I emit radiation towards a sample (in the shape of an "S") and this is reflected back into a fibre, will the same image come out of the other side of the fibre? Will I obtain an "S" shaped beam on the other side?
Answer: A conventional fiber cannot do this because the modes inside the fiber determine the distribution of energy of transported light. Thus the output distribution is entirely independent of the input distribution (single mode fiber) or scrambled (multimode fiber).
For imaging, so-called "coherent fiber bundles" are used. These are an array of tightly packed multimode fibers. Each fiber acts as a pixel and relays the intensity arriving on it to the backside. If you look at the back face with a camera, you will see a honeycomb pattern (the thin gaps between cores) superimposed over whatever the fiber face sees.
Note that the number of fibers is typically limited (1000-10,000s), so do not expect an HD image.
See: https://opg.optica.org/ol/abstract.cfm?uri=ol-36-16-3212 | {
"domain": "physics.stackexchange",
"id": 91338,
"tags": "optics, electromagnetic-radiation, geometric-optics, dispersion, fiber-optics"
} |
Earth is rotating | Question:
Possible Duplicate:
Why does the atmosphere rotate along with the earth?
If I take off from land in a helicopter, rise straight above the Earth's surface to a certain height, stay there for a few minutes or hours, and then come down:
Why do I come down to the same place where I took off?
If the Earth is rotating, I should land in a different place, right? Because I have not moved horizontally; I only moved vertically and came straight back down.
I got this thought because I was wondering why we spend so many hours on flights to reach a country in the west if we start from an eastern country.
Maybe there are a lot of scientific reasons behind this which I am not aware of; excuse me if it sounds silly. I thought this would be the best place to ask.
Answer: The helicopter in your example would have some velocity given to it by the Earth. I believe atmospheric drag would play a significant role in this, but let's ignore that for now.
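To put a rough number on that Earth-given velocity (equatorial figure, using the sidereal rotation period):

```python
import math

R_equator = 6.378e6     # m, equatorial radius of the Earth
T_sidereal = 86164.0    # s, one full rotation of the Earth

v = 2 * math.pi * R_equator / T_sidereal
print("%.0f m/s" % v)   # roughly 465 m/s eastward at the equator
```

The helicopter (and the air around it) starts with this tangential speed and, absent a force to remove it, keeps it while hovering.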
You may have heard the process of an orbit described as continuous free fall, where you fall "towards" the other body just as fast as you move along the orbit. If this hypothetical helicopter lifted off, it would just be orbiting the planet! | {
"domain": "physics.stackexchange",
"id": 5408,
"tags": "rotational-dynamics, atmospheric-science, earth"
} |
Are the electrons at the centre of the Sun degenerate or not? | Question: Trying to find an answer to this question, I came across two different methods of determining whether electrons at the center of the sun are degenerate or not.
The first method, used here, calculates both the critical number density and the actual number density of electrons at the center of the Sun, and compares them together.
The result was that the actual number density is lower than the critical, so the author concluded that electrons at the center of the Sun can be treated as ideal gas (non-degenerate). This paper also mentions that in order to completely ignore the wave nature of some particles, the separation between these particles must be much larger than the de Broglie wavelength.
The second method, used here, calculates both the de Broglie wavelength of each electron, and the mean separation/spacing between electrons, and compares the two numbers.
The two numbers were almost equal in the conditions at the center of the Sun, and so the author concluded that electron gas at the center is actually "mildly degenerate".
So now, for purposes in which great accuracy isn't required, can electrons in the center of the Sun be treated as non-degenerate ? Or the deviation will still be too large to ignore ?
EDIT: In this paper, I found this: "the de Broglie wavelength of the ions, is only about twice the average separation. Therefore, to a good approximation we expect the ions to behave as an ideal classical gas."
And so I am not sure what ratio of the de Broglie wavelength to the average separation is considered a good approximation.
Answer: Checking for electron degeneracy is a matter of comparing the Fermi kinetic energy with $kT$.
If $E_F/kT \gg 1$, then you may assume the electrons are degenerate.
The central density of the Sun is around $\rho=1.6\times 10^5$ kg/m$^3$ and the number of atomic mass units per electron is around $\mu_e =1.5$.
The number density of electrons is therefore $n_e =\rho/\mu_e m_u = 6.4\times 10^{31}$ m$^{-3}$.
The Fermi momentum is $p_F = (3n_e/8\pi)^{1/3} h = 1.3\times 10^{-23}$ kg m s$^{-1}$. As $p_F \ll m_e c$ then the electrons are non-relativistic and so $E_F \simeq p_F^{2}/2m_e = 9.3\times 10^{-17}$ J.
As the temperature in the solar core is $T =1.57\times 10^7$ K, then $E_F/kT = 0.43$. This ratio is clearly too small for the electrons even to be considered as partially degenerate. (For example, the ratio is more like 1000 in a typical electron-degenerate white dwarf star, and about 20 at the centre of a partially electron-degenerate brown dwarf).
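The arithmetic above is easy to verify numerically (SI units; constants rounded to four figures):

```python
h   = 6.626e-34    # Planck constant, J s
m_e = 9.109e-31    # electron mass, kg
m_u = 1.661e-27    # atomic mass unit, kg
k   = 1.381e-23    # Boltzmann constant, J/K
pi  = 3.141592653589793

rho, mu_e, T = 1.6e5, 1.5, 1.57e7      # solar-core values used above

n_e = rho / (mu_e * m_u)               # ~6.4e31 m^-3
p_F = (3 * n_e / (8 * pi)) ** (1.0 / 3.0) * h   # ~1.3e-23 kg m/s
E_F = p_F ** 2 / (2 * m_e)             # ~9.3e-17 J

print(E_F / (k * T))                   # ~0.43, i.e. far from degenerate
```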
I think this concurs with a treatment based on the de Broglie wavelength. The root of this method is the uncertainty principle in 3D. Degeneracy will be important when
$$(\Delta p \Delta x)^3 \simeq (\hbar/2)^3,$$
where $\Delta x$ is the electron separation and $\Delta p$ is a mean difference in electron momenta.
If we let $\lambda \simeq h/\Delta p$, then we see that degeneracy is important when $\Delta x \simeq \lambda/4\pi$. i.e. Serious degeneracy sets in when the de Broglie wavelength is an order of magnitude greater than the electron separation.
OK, but the ratio isn't zero either, so there will be a small correction to the perfect gas calculation of the pressure. To work this out properly you would have to do a numerical integration to find the pressure due to a very mildly degenerate gas.
To see whether it is worth bothering, you could simply see what the ratio of ideal degeneracy pressure at this electron number density is to the perfect gas pressure in the core of the Sun.
Roughly:
$$ \frac{P_{deg}}{P} = \frac{h^2}{20m_e} \left(\frac{3}{\pi}\right)^{1/3} n_{e}^{5/3} \frac{1}{(n_i +n_e) kT},$$
where $n_i$ is the number density of ions in the gas. If we say $n_i \simeq n_e$ (it's actually a bit smaller because of the helium nuclei present), then put the other numbers in, we find that $P_{deg}/P \sim 0.09$. Thus, I would conclude that if you want to calculate the pressure more accurately than 10 per cent, then you need to take account of the very partial degeneracy of the electrons in the solar core (and its exact composition).
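Plugging numbers into this ratio (same rounded constants and core conditions as before, with $n_i \simeq n_e$):

```python
h, m_e, m_u, k = 6.626e-34, 9.109e-31, 1.661e-27, 1.381e-23   # SI units
pi = 3.141592653589793
rho, mu_e, T = 1.6e5, 1.5, 1.57e7    # solar-core values used above

n_e = rho / (mu_e * m_u)             # electron number density
P_deg_over_P = ((h ** 2 / (20 * m_e)) * (3 / pi) ** (1.0 / 3.0)
                * n_e ** (5.0 / 3.0) / ((n_e + n_e) * k * T))  # n_i ~ n_e

print(P_deg_over_P)                  # ~0.09, as quoted above
```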
A MUCH more formal treatment (see for example Chapter 2 of Clayton, D. 1983, Principles of Stellar Evolution and Nucleosynthesis, Univ. of Chicago Press), shows that the electron pressure (the ions are non-degenerate and can be treated as a perfect gas) can be written (if the electrons are non-relativistic)
$$ P_{e} = n_e kT \left(\frac{2F_{3/2}}{3F_{1/2}} \right),$$
where the term in the brackets gives the ratio by which the electron gas pressure departs from the perfect gas law, and where
$$F_n(\alpha) = \int_{0}^{\infty} \frac{u^n}{\exp(\alpha + u) + 1}\ du,$$
with $u = E_k/kT$ and $\alpha= -\mu/kT$, where $\mu$ is the chemical potential given by inverting
$$ n_e = \frac{4\pi}{h^3}(2m_e kT)^{3/2} F_{1/2}(\alpha)$$
NB: $\mu \rightarrow E_F$ when $\alpha \ll -1$.
These expressions must be evaluated numerically or taken from tables (e.g. Table 2.3 in Clayton 1983). However, $P_e/n_e kT$ is $\geq 1$ for all values of $\alpha$. So any degeneracy always increases the pressure over that of a perfect gas. The image below (from Clayton 1983) shows how $P_e/n_e kT$ varies with $\alpha$. Clayton says that "the gas pressure is essentially that of a non-degenerate gas for $\alpha>2$".
So putting in some numbers for the Sun, we find $F_{1/2}(\alpha) = 0.19$ and from Table 2.3 of Clayton, we obtain $\alpha \simeq 1.45$. This in turn means that $2F_{3/2}/3 \simeq 0.20$. So the electron pressure is a factor of $\simeq 1.05$ greater than the perfect gas pressure law at the same density and temperature. | {
"domain": "physics.stackexchange",
"id": 23231,
"tags": "thermodynamics, astrophysics, electrons, sun, stars"
} |
Is a language of some deciders decidable? | Question: Is
$$L = \{ \langle M \rangle \mid M = (\{Q_1, Q_2, . . . , Q_{100}\}, \{0, 1\}, \{0, 1, \_\}, δ, Q_1, Q_2, Q_3) \text{ is a decider}\}$$
decidable?
I know
$$HALT_{TM}= \{ \langle M \rangle \mid M \text{ is a decider}\}$$
is not decidable, but in the case we are given a specific Turing Machine that has 100 states, alphabet $\{0,1\}$, tape alphabet $\{0, 1, \_\}$ (where _ is space), a transition function $\delta$, a start state $Q_1$, an accept state $Q_2$ and a reject state $Q_3$.
Checking that M is in the correct format is easy but is it possible to build a Turing Machine that decides $L$? I assume I can use the fact that $HALT_{TM}$ is undecidable to show that $L$ also is, but I am not sure how to proceed.
And I don't see how I could reduce $L$ to $A_{TM}$ to prove by contradiction that it is not decidable?
How should I proceed?
Answer: I don't fully understand the details of your question (especially the notation that you've used).
However, the language you're asking about appears to be finite (something like the language of descriptions of all Turing machines that halt for all inputs and have at most 100 states), and every finite language is decidable. You could, in principle, produce a list of all the strings in your language, and then just design a Turing machine that accepts exactly that list of strings.
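To get a feel for how "finite" this is, one can bound the number of candidate machines. Ignoring encoding details and the convention that the accept/reject states need no outgoing transitions (so this is a crude upper bound), a deterministic transition function maps each (state, tape symbol) pair to one (state, written symbol, head move) triple:

```python
states, tape_symbols, moves = 100, 3, 2

# every (state, symbol) pair gets one of states*tape_symbols*moves triples
upper_bound = (states * tape_symbols * moves) ** (states * tape_symbols)

print(len(str(upper_bound)))   # an 834-digit number: astronomically large, but finite
```

So $L$ is a subset of a finite set of descriptions, and every finite language is decidable.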
Note also that reducing $L$ to the halting problem wouldn't prove that $L$ is undecidable, since every recursively enumerable language reduces to the halting problem. You'd just be showing "If I could solve the halting problem, I could solve $L$, too." That's a bit like saying, "If I was the world's strongest man, I could lift an apple." Maybe you don't need to be so strong to do such a simple thing? If you want to prove that $L$ is undecidable by reductions, you need to reduce an undecidable language to $L$: then, you're saying, "If I could decide $L$, I'd be able to decide this undecidable language, and I know I can't do that." | {
"domain": "cs.stackexchange",
"id": 7706,
"tags": "formal-languages, turing-machines, reductions, undecidability, halting-problem"
} |
Why enthalpy change at constant volume is being stated as change in internal energy? | Question: My textbook, NCERT Chemistry page-167 (PDF), states that the change in enthalpy at constant volume is given by:
$$\Delta H = \Delta U = q_V$$
Whereas I think that it should be:
$$\Delta H = \Delta U+V\Delta P$$
So:
Which equation is correct one?
if my equation isn't correct, where might I be going wrong?
I ask this because some people whom I have asked about this have said that the one given by the book is correct (though they didn't justify why).
Answer: The statement made in the book is that if $P$ is constant, then (equation 6.8)
$$\Delta H = \Delta\big(U + PV) = \Delta U + P\Delta V$$
From there, if the volume is also constant, equation 6.8 becomes
$$\Delta H = \Delta U = q_V$$
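For reference, the three cases all follow directly from the definition $H \equiv U + PV$, since $\Delta(PV) = P\,\Delta V$ when only $V$ varies and $\Delta(PV) = V\,\Delta P$ when only $P$ varies:

$$
\Delta H = \Delta U + \Delta(PV) =
\begin{cases}
\Delta U + P\,\Delta V, & \text{constant } P,\\
\Delta U + V\,\Delta P, & \text{constant } V,\\
\Delta U = q_V, & \text{constant } P \text{ and } V.
\end{cases}
$$

The questioner's formula is thus the pure constant-volume case; the book's statement additionally holds the pressure fixed.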
The point being made in the passage is that if both pressure and volume are constant, then there is not an appreciable difference between thinking about $U$ and thinking about $H$. This would be the case in a solid or liquid exposed to the atmosphere (or some other source of constant pressure). | {
"domain": "physics.stackexchange",
"id": 65222,
"tags": "thermodynamics, textbook-erratum"
} |
Raindrops in Java | Question: Problem Statement:
Write a program that converts a number to a string, the contents of
which depends on the number's prime factors.
If the number contains 3 as a prime factor, output 'Pling'.
If the number contains 5 as a prime factor, output 'Plang'.
If the number contains 7 as a prime factor, output 'Plong'.
If the number does not contain 3, 5, or 7 as a prime factor, just pass the number's digits straight through.
Code:
public class Raindrops {
private Raindrops() {}
public static String convert(int number) {
// Pre-condition.
if (number < 0) {
throw new IllegalArgumentException("Input cannot be negative.");
}
String result = "";
for (Raindrop drop : Raindrop.values()) {
if (drop.hasPrimeFactor(number)) {
result += drop.toString();
}
}
//if (number % 3 == 0)
//result += "Pling";
//if (number % 5 == 0)
//result += "Plang";
//if (number % 7 == 0)
//result += "Plong";
if (result.isEmpty()) {
result = "" + number;
}
checkPostCondition(result, number);
return result;
}
private static void checkPostCondition(String result, int number) {
assert(result.contains("Pling") ||
result.contains("Plang") ||
result.contains("Plong") ||
result.contains("" + number));
}
private enum Raindrop {
Pling(3),
Plang(5),
Plong(7);
private final int primeFactor;
private Raindrop(int primeFactor) {
this.primeFactor = primeFactor;
}
public boolean hasPrimeFactor(int number) {
return number % primeFactor == 0;
}
}
}
Test Suite:
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;
import java.util.Arrays;
import java.util.Collection;
import static org.junit.Assert.assertEquals;
@RunWith(Parameterized.class)
public class RaindropsTest {
private int input;
private String expectedOutput;
@Parameters
public static Collection<Object[]> data() {
return Arrays.asList(new Object[][]{
// Non-primes
{1, "1"},
{52, "52"},
{12121, "12121"},
// Numbers with 3 as a prime factor
{3, "Pling"},
{6, "Pling"},
{9, "Pling"},
// Numbers with 5 as a prime factor
{5, "Plang"},
{10, "Plang"},
{25, "Plang"},
// Numbers with 7 as a prime factor
{7, "Plong"},
{14, "Plong"},
{49, "Plong"},
// Numbers with multiple activating prime factors
{15, "PlingPlang"},
{21, "PlingPlong"},
{35, "PlangPlong"},
{105, "PlingPlangPlong"},
});
}
public RaindropsTest(int input, String expectedOutput) {
this.input = input;
this.expectedOutput = expectedOutput;
}
@Test
public void test() {
assertEquals(expectedOutput, Raindrops.convert(input));
}
}
Notes:
Although the solution was quite simple (see my commented code), I am trying to push myself and experiment with program correctness and flexibility, hence the final solution may seem over-engineered.
Reference
Answer: Post Condition
The post condition would pass even if the result is something like "PlingPlongNUMBER". (If I understood correctly, then the result should be either NUMBER or any combination of Pling/Plong/Plang, but never both.) Therefore, I suggest a post condition like the following (not tested!):
assert(((result.contains("Pling") || result.contains("Plang") || result.contains("Plong")) && !result.contains("" + number)) ||
(result.contains("" + number) && (!result.contains("Pling") && !result.contains("Plang") && !result.contains("Plong"))));
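The exclusive-or intent is easy to model and test outside Java — a quick sketch of the same condition (Python used purely for brevity):

```python
def post_condition_holds(result, number):
    """True iff the result contains drop words or the number, but not both."""
    has_word = any(w in result for w in ("Pling", "Plang", "Plong"))
    has_number = str(number) in result
    return has_word != has_number   # exactly one of the two, never both

assert post_condition_holds("Pling", 3)
assert post_condition_holds("PlingPlangPlong", 105)
assert post_condition_holds("52", 52)
assert not post_condition_holds("PlingPlong105", 105)  # rejected, unlike the original assert
```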
OOP changes
Since you are exercising OOP (and don't mind some over-engineering ;)) , you could consider adding another RainDrop, for dealing with the case that no prime factor matches (e.g. RainDrop(-1)). This would require the following modifications
Rename hasPrimeFactor to something reflect more accurately what it does (e.g. processNumber).
For the enum -1, processNumber would check that the parameter is NOT divisible by either 3, 5 or 7, and return the number as a string if that is the case.
Possible performance improvement: you might want to build a cache for divisibility by 3, 5 and 7, in case you are worried that modulo is calculated twice for each dividend. (E.g. the cache table for 3 can be a HashMap that, for each already-seen number, tells whether it is divisible by 3 or not. Same for 5 and 7.) Caveat: I did not verify that doing the lookup in the HashMap (let alone lookup + storage!) is faster than doing the modulo division, so you might actually lose performance in this way. The idea is rather to experiment with a way of caching, in case you were doing a really expensive operation.
As I said above, I'm not saying that this suggested change is necessarily better than the code you have now (in fact it is a bit more over-engineered). It is rather a way of exploring how to further OOP-fy your code.
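The caching idea from the last bullet, sketched in Python for brevity (illustrative only — for a cheap modulo the lookup will usually cost more than it saves, which is exactly the experiment being proposed):

```python
from functools import lru_cache

@lru_cache(maxsize=None)          # memoizes (number, prime) -> bool
def divisible(number, prime):
    return number % prime == 0

for n in (15, 15, 21):            # the repeated dividend hits the cache
    divisible(n, 3)

info = divisible.cache_info()
print(info.hits, info.misses)     # 1 hit (the second 15), 2 misses
```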
Test Code
Let me first say, that I find it a really positive thing that you write your code with testing in mind, and add unit tests to cover (almost) all the cases. That said, some remarks:
Corner cases: I suggest adding tests for 0 and -1 as well. (Maybe -1 is tricky, since you expect an exception, all the same it is worth the effort.)
Comment about // Non-primes: while technically correct (the input numbers are non-primes), the point is that the numbers are not divisible by 3, 5 or 7, not that they are non-primes. So, I suggest updating the comment accordingly. | {
"domain": "codereview.stackexchange",
"id": 20100,
"tags": "java, performance, object-oriented, programming-challenge, unit-testing"
} |
Using B+Tree to implement index, when the index-key size and the data-block size are of the same order | Question: I want to implement an index using a B+Tree as the underlying data structure. The index will have to support key sizes on the order of my block size, which means I cannot store the whole key as a pivot in the B+Tree inner nodes, since the branching factor would be too small; this results in too many IOs for any read/write operation, as the B+Tree becomes significantly taller. Moreover, I wish to preserve the keys' order, which means that compression methods that do not preserve order cannot work for me without adjustments. I am looking for papers that discuss this and related issues. In addition, if you have tackled this problem before, I would be glad to accept any suggestions and ideas. Although I wish to use a B+Tree, I am also open to any other data structure which might be more appropriate for this task.
Answer: Note: In what follows, I'm going to use the term "B-tree" to refer to the general idea of B-trees regardless of the variant, and "B+-trees" to refer specifically to B+-trees.
You've correctly identified a real-world complication of using B-trees to index strings: B-trees are page-structured files, but strings are of arbitrary length. This is glossed over in most tutorials and textbooks, but it's a real issue.
Most theoretical presentations of B-trees talk in terms of fixed fanouts, but in practice, the fanout of a node is partly determined by the sizes of the keys. If the keys are physically smaller, you can store more pointers in the node.
For this reason, many (probably most) database systems impose an upper limit on the size of a key that can be stored in a node. Say you're using 64kB pages/blocks, then you might require that no key can take up more than 8kB in a node. This gives you a minimum fanout of 8 for a B+-tree inner node (remember that if there are n pointers out of a node, you only need to store n-1 keys).
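The fanout bound above can be sketched as back-of-the-envelope arithmetic (the exact figure depends on pointer and header overhead, which is ignored here):

```python
PAGE_SIZE = 64 * 1024      # 64 kB page, as in the example above
MAX_KEY_SIZE = 8 * 1024    # 8 kB cap per key

# A node with n child pointers stores n - 1 keys, so ignoring pointer and
# header overhead the worst-case fanout n satisfies
# (n - 1) * MAX_KEY_SIZE <= PAGE_SIZE:
n = PAGE_SIZE // MAX_KEY_SIZE + 1   # 9 with zero overhead

# Reserving roughly one key slot's worth of space for the pointers and the
# page header gives the guaranteed "minimum fanout of 8" from the text:
min_fanout = n - 1
```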
The simplest (and likely the most common) realisation of this idea means that the index is only "fast" on the first 8kB of a string, and the index lookup devolves to some other algorithm (e.g. linear or binary search) if there are too many records whose keys are distinct, but not distinguished by their first 8kB.
For a general-purpose DBMS, this is probably the right thing to do. The meaning of a query is still correct because the "real" keys are present in the records, and the system doesn't spend a lot of complexity on an uncommon case.
Even relatively inexperienced database designers know what B-tree string indexes are good for, and indexing a whole XML document as a string is not that. If someone ever does it, whether on purpose or by accident, the DBMS will still return the correct answer, but it will just degenerate to a less-efficient algorithm, and probably also signal to the database administrator that there are a significant number of inefficient queries occurring and perhaps they should take a look.
This is why the problem hasn't historically received a lot of attention.
OK, but let's assume you're not doing that. Your database is not general-purpose, and you have an excellent reason to want to index long strings. What to do?
If the key size is really roughly the same as the page size, this suggests a simple solution: make the page size bigger. Page size doesn't matter as much as it once did, since virtual address spaces are so much bigger than file sizes. Plus, you get a read-ahead bonus on modern operating systems if you read part of a file sequentially. So simply using bigger pages may not be as bad as you think if you need a quick and dirty solution.
But even if virtual memory space is essentially free, RAM and cache are not, so it's worth trying to be smart about it.
A few observations:
You don't need to store "real" keys in a B+-tree internal node. You only need a value which is guaranteed to be "between" all of the keys in two child subtrees.
If you have long keys, you don't want to be comparing whole keys all the time, and indeed you probably aren't. The further down you go in the tree, the more likely it is that all of the keys in the subtree share a (long?) prefix. You don't want to store, or compare, that prefix in every place where a key needs to be stored.
You see, most of the time, you'll find yourself in one of two situations: either the set of keys in the index that you're considering don't have a long common prefix, in which case you should be able to compare the first portion of the keys only, or they do have a long common prefix, in which case you shouldn't need to compare that prefix. Moreover, that prefix should only need to be stored once for all keys in the subtree.
This suggests that what you probably want is something more like a trie. Perhaps unsurprisingly, there are data structures that do this, such as prefix B-trees and B-tries (which are basically burst tries stored on disk).
I recommend you read those papers, but I'll try to give you some ideas about the design space here.
Suppose that all keys under a given node have a common prefix. Then within a node, you only need to store the "distinguishing" parts.
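One concrete form of storing only the "distinguishing" parts is the shortest-separator trick used by prefix B-trees: the pivot in an inner node only has to separate two subtrees, so a short prefix of the right key usually suffices. A minimal sketch (illustrative only; real implementations work on raw bytes and handle more edge cases):

```python
def shortest_separator(left, right):
    """Return the shortest string s with left < s <= right.

    Storing s instead of the full right key shrinks inner nodes, which
    is exactly what raises the fanout for long-key indexes.
    """
    assert left < right
    for i in range(1, len(right) + 1):
        candidate = right[:i]          # a prefix, so candidate <= right
        if left < candidate:
            return candidate
    return right
```

For example, between "abcdef" and "abdxyz" the three-byte pivot "abd" is enough to route searches correctly.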
Think about the "internal" representation of a B+-tree node. The way that B+-trees are usually presented is that internal nodes are an array of n-1 keys and an array of n pointers, and you use binary search on the keys to find which pointer to traverse.
Arrays-with-binary-search are only one possibility, and you could in theory pack any search data structure that you want within a node. There's no reason why you couldn't use something more like a trie for the node representation.
If you do find yourself with a highly lopsided node in your B+-tree, say where one key differs in its first character but all of the others share an 8kB prefix, you could use that to inform the balancing policy. Perhaps that one key could be moved into a sibling node? Even if that results in the sibling node splitting, that could be preferable to the alternative.
You also may have to live with using a different balance condition for your tree as a whole, such as the fanout for each node depending on key distribution, or the distance from the root not being the same for all leaves.
Consider the possibility, for example, of a B+-tree node with only one child pointer, with the rest of the node simply serving the purpose of storing "common prefix" for all child keys. You could think of this as inserting an extra "level" in the node where needed, or you could think of that node and its child as being one logical node that just happens to be larger than a page in size so that more key material can be stored. Is this a distinction without a difference?
If you're worried about complexity... well, yes, you would like $O(\log n)$ I/O operations where $n$ is the number of nodes, but if keys are huge, there is no way (compression notwithstanding) to get around doing $O(\frac{s}{p})$ I/O operations where $s$ is the size of the key and $p$ is the page size. We usually don't consider that when analysing B-trees, but that may dominate in your application.
Good luck! | {
"domain": "cs.stackexchange",
"id": 19030,
"tags": "data-structures, reference-request, database-theory, data-compression, storage"
} |
How to handle the size difference of highway network or residual network in cnn? | Question: For highway network, it looks like this:
For residual network, it looks like this:
Pictures are from What is the name of this neural network architecture with layers that are also connected to non-neighbouring layers?
My question is: how do I handle the size difference between different layers in a CNN to build a highway network or residual network?
For example, I am working on a text classification problem. By using the embedding, I have the input size as follows:
input.shape =[batch_size, embedding_dim, max_length]
I also has a CNN layer as follows:
Conv1d(in_channels= embedding_dim, out_channels=hidden_dim, kernel_size=n)
So that the size of the output of Conv1d is [batch_size, hidden_dim, max_length-n+1].
Here is the question: the input size of the CNN layer is different from the output size. How do I handle the size difference so that a highway network or residual network can be built?
Thank you.
Answer: You can just use padding='same'. As noted from the documentation:
When padding="same" and strides=1, the output has the same size as the input.
Note that strides defaults to 1, and if kernel_size=1, the output also has the same shape as the input.
I looked at two different implementations and can confirm this:
The implementation of Dive into Deep Learning shows that the Residual block implementation is:
class Residual(tf.keras.Model):  #@save
    """The Residual block of ResNet."""
    def __init__(self, num_channels, use_1x1conv=False, strides=1):
        super().__init__()
        self.conv1 = tf.keras.layers.Conv2D(num_channels, padding='same',
                                            kernel_size=3, strides=strides)
        self.conv2 = tf.keras.layers.Conv2D(num_channels, kernel_size=3,
                                            padding='same')
        self.conv3 = None
        if use_1x1conv:
            self.conv3 = tf.keras.layers.Conv2D(num_channels, kernel_size=1,
                                                strides=strides)
        self.bn1 = tf.keras.layers.BatchNormalization()
        self.bn2 = tf.keras.layers.BatchNormalization()

    def call(self, X):
        Y = tf.keras.activations.relu(self.bn1(self.conv1(X)))
        Y = self.bn2(self.conv2(Y))
        if self.conv3 is not None:
            X = self.conv3(X)
        Y += X
        return tf.keras.activations.relu(Y)
in which we see that conv1 and conv2 have padding='same' and strides=1 everywhere.
The second implementation is from Keras official code, which also uses padding='SAME' here.
Here's the visualization of how different padding works. In short, 'same' automatically calculates the padding dimension based on the kernel size so that the output has the same shape as the input for you. | {
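The shape arithmetic behind this can be checked by hand with the standard convolution output-length formula (no dilation); the sizes below are hypothetical, chosen only to match the question's setup:

```python
def conv1d_out_len(length, kernel_size, stride=1, padding=0):
    """Output length of a 1-D convolution: floor((L + 2p - k) / s) + 1."""
    return (length + 2 * padding - kernel_size) // stride + 1

L, k = 50, 3                                  # e.g. max_length=50, kernel_size=3
out_valid = conv1d_out_len(L, k)              # no padding: L - k + 1 = 48
same_pad = (k - 1) // 2                       # 'same' padding for odd kernels
out_same = conv1d_out_len(L, k, padding=same_pad)   # 50: length preserved
```

With the length preserved, the residual addition Y + X lines up without any extra projection.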
"domain": "ai.stackexchange",
"id": 3901,
"tags": "convolutional-neural-networks, deep-neural-networks, residual-networks"
} |
Conceptual half-life Question | Question: So I have this word problem and I’m a bit confused about it. I have the answer and explanation but I still don’t understand:
The half-life of carbon-14 is approximately 5730 years, while the
half-life of carbon-12 is essentially infinite. If the ratio of
carbon-14 to carbon-12 in a certain sample is 25% less than the normal
ratio in nature, how old is the sample?
A. Less than 5730 years
B. Approximately 5730 years
C. Significantly greater than 5730 years, but less than 11460 years
D. Approximately 11460 years
Correct Answer: A
Explanation:
Because the half-life of carbon-12 is essentially infinite, a 25
percent decrease in the ratio of carbon-14 to carbon-12 means the same
as a 25 percent decrease in the amount of carbon-14. If less than half
of the carbon-14 has deteriorated, then less than one half-life has
elapsed. Therefore, the sample is less than 5730 years old. Be careful
with the wording here—the question states that the ratio is 25% less
than the ratio in nature, not 25% of the ratio in nature, which would
correspond to choice (D).
How is the ratio of the carbon isotopes relevant to half-lives? What is the purpose of saying the half-life of an isotope is infinite? What is meant by “the normal ratio in nature”? Just based on the answer, it seems like the question said “25% of a carbon sample decayed, how old is this sample?” Obviously it’s younger than 5,730 years (the time for its first half life) because if only 25% decayed that means half a half-life has passed. I don’t see how isotopes and ratios would change this problem at all.
Edit: Every answer here was very informative and helpful. It was tough to pick a best answer. I picked Cosma’s because it clicked with me.
If you’re reading this and want to understand radiocarbon dating, look at every answer because there is some useful information in every answer
Answer:
How is the ratio of the carbon isotopes relevant to half-lives? What is the purpose of saying the half-life of an isotope is infinite? What is meant by “the normal ratio in nature”?
You might read up on radiocarbon dating. Cosmic rays constantly create C-14 in the atmosphere, which then decays, and so there is an equilibrium value of C-14/C-12 in nature, the "normal ratio". When living things capture carbon, it stops equilibrating with atmospheric CO2, so the C-14 decays, while the C-12 stays put ("infinite half-life"), crucially providing a normalization for the amount of carbon involved: you can only measure ratios in a sample; you can't monitor initial amounts and wait for millennia to monitor their decay.
So, indeed, a decrease in the ratio from the natural value is tantamount to a decrease from the initial amount of C-14 the sample had when it stopped photosynthesizing and breathing. So you read it right that
“25% of a carbon sample decayed, how old is this sample?” Obviously it’s younger than 5,730 years (the time for its first half life) because if only 25% decayed that means half a half-life has passed.
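The wording trap can be checked numerically with the standard decay law (a sketch, not from the original answer):

```python
import math

def age_from_fraction(half_life, remaining_fraction):
    """Solve N/N0 = (1/2) ** (t / half_life) for t."""
    return half_life * math.log2(1.0 / remaining_fraction)

T = 5730
# "25% LESS than the normal ratio" -> 75% of the C-14 remains:
t_a = age_from_fraction(T, 0.75)   # ~2378 years, well under one half-life
# "25% OF the normal ratio" -> only 25% remains:
t_d = age_from_fraction(T, 0.25)   # exactly two half-lives, 11460 years
```

The two readings differ by almost a factor of five, which is exactly why the question flags choice (D) as the trap.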
"domain": "physics.stackexchange",
"id": 78415,
"tags": "radioactivity, isotopes, half-life"
} |
How to simulate and control 2 industrial robots with Moveit? | Question:
Hi !
I'm trying to simulate an industrial workcell with 2 industrial robots : Staubli TX2-60L and TX2-90L.
I've made a XACRO file with all the elements of the workcell and the two robots.
I've made a moveit package with two move groups: one for each robot.
But when I try to move the robots at the same time with two Python scripts, it doesn't work. Only one is moving.
Do you have an idea ?
Thank you
Originally posted by Vaneltin on ROS Answers with karma: 7 on 2020-10-04
Post score: 0
Answer:
Currently, you cannot execute two trajectories at the same time with MoveIt. This is a known limitation and related to the way MoveIt does trajectory execution monitoring. Here is a Github issue with more in-depth discussion (and how you could contribute to improving this part of the codebase).
At the moment, the main workarounds are:
Combine your two robots into one planning group, e.g. both_arms, and set multiple goals (e.g. one pose goal for each end effector) for your plan. This way, the plan will contain both robots' trajectories and avoid collisions between the robots. This is the safe and sane option.
If you need to move the robots asynchronously, you can obtain your two robot trajectories with the plan() function and then send them to the robot controllers to be executed. However, there will be no collision monitoring, and if the two trajectories move the robots into the same area, they will collide. Only do this if you are absolutely certain (!) that the two independent robot trajectories will not result in a collision.
Alternatively to calling the robot controllers' follow_joint_trajectory action, you can merge the two robot trajectories into a single trajectory containing the joints of both robots and execute it with MoveGroupCommander or move_group_interface. But the risks regarding collision are the same, because the plan for each robot was calculated without considering the other robot's motion.
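The trajectory-merging workaround can be sketched in plain Python, with dicts standing in for trajectory_msgs/JointTrajectory messages (this is an illustration, not real MoveIt code; the joint names are invented, both inputs are assumed to be sampled at identical time stamps, and the collision warnings above still apply in full):

```python
def merge_trajectories(traj_a, traj_b):
    """Merge two single-robot joint trajectories into one combined one.

    Real code would interpolate/resample to a common time base first and
    must only be used when the combined motion is known to be collision-free.
    """
    times_a = [p["time"] for p in traj_a["points"]]
    times_b = [p["time"] for p in traj_b["points"]]
    if times_a != times_b:
        raise ValueError("trajectories must share time stamps; resample first")
    points = [
        {"time": pa["time"], "positions": pa["positions"] + pb["positions"]}
        for pa, pb in zip(traj_a["points"], traj_b["points"])
    ]
    return {"joint_names": traj_a["joint_names"] + traj_b["joint_names"],
            "points": points}
```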
To reiterate: Do not attempt the two latter options unless there is no risk of collision between the robots, and be aware that this is dangerous and that you can absolutely break your robots if you are not careful.
Originally posted by fvd with karma: 2180 on 2020-10-04
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Vaneltin on 2020-10-06:
Thank you very much for your clear answer!
Comment by cshmilyc on 2021-09-16:
Hi~
We can not execute two trajectories at the same time, but we can plan them one by one?
Comment by fvd on 2021-09-16:
You can plan them one by one already, but you can currently only execute one at a time through the MoveGroupInterface's execute() function. We recently submitted this PR which allows the execution of multiple trajectories via the MoveIt interfaces while checking for collision between them. You can test it yourself if you build it from source.
Comment by cshmilyc on 2021-09-17:
Thanks, @fvd~
I'll follow your research and try
Comment by cshmilyc on 2021-09-17:
Hi, again~
Shall I pull the repository from
cambel:simultaneous-motions
and build a specialized version "moveit" and replace the origin one in ROS?
Comment by fvd on 2021-09-18:
Put it in your catkin workspace with the dependencies and build it, then it will be used instead of the binary install
Post a new question or look at the tutorial to build moveit if you get stuck.
"domain": "robotics.stackexchange",
"id": 35596,
"tags": "ros, moveit, ros-melodic"
} |
Simple and secure Python console menu without conditionals or match case statements | Question: I'm writing a basic tutorial for a simple console menu for Python. I think it is secure, since the input() is controlled by a dispatch. Any suggestions would be appreciated.
"""
Nice Menu for Python
"""
import time # used only for testing purposes
def m_help():
"""
- - - - - - - - -
Micro Menu Help
- help line 1
- Help line 1
- - - - - - - - -"""
print(m_help.__doc__)
m_message('...')
def m_message(message):
"""Message."""
print(f'|===> {message} <===|')
time.sleep(0.70) # slow down to see the message
def m_not_found():
"""not found."""
m_message('Choice NOT found, please try again.')
def m_view():
"""View."""
m_message('`m_view` function was called.')
def m_delete():
"""Delete."""
m_message('`m_delete` function was called.')
def m_quit():
"""Quit program."""
m_message('`m_quit` function was called.')
quit()
def menu_text():
"""
Welcome to Micro Menu
1] View (v)
2] Delete
3] Help
4] Quit (q)
"""
print(menu_text.__doc__)
def menu(index):
""" Will return a function based on the index."""
dispatcher = {
'1': m_view,
'2': m_delete,
'3': m_help,
'4': m_quit,
'v': m_view, # alternative key to '1'
'q': m_quit # alternative key to '3'
# ...
}
return dispatcher.get(index, m_not_found)
def main():
"""Main function."""
while True:
menu_text()
choice = input('>> Make your choice: ')
menu(choice)()
if __name__ == '__main__':
main()
Answer: The print(__doc__) pattern is an interesting one but not one that I recommend. Docstrings are meant to document the function itself, and your m_help docstring is not that: instead, presumably, it documents the program. Those are not the same thing.
m_function is a little odd as a naming convention. Either spell it out - menu_function - or drop the prefix entirely and make an enclosing module called menu.
Since you're just doing simple appends,
print(f'|===> {message} <===|')
can be expressed as
print('|===>', message, '<===|')
Do not sleep. "Slowing down to see the message" should not be a concern for this application since you have so little content. If your content grows, sleeping is still not the solution; you would instead want to paginate.
Your code is not DRY (don't-repeat-yourself) enough. You write the "index" characters in multiple places. There are many ways to centralise this; I show one below.
Try your best to avoid quit(). One simple way is to return a flag from your dispatched functions indicating whether the menu loop needs to break.
"""View.""", as a comment, is less helpful than having no comment at all. Similar for most of your other comments.
Suggested
"""
Nice Menu for Python
"""
from typing import Any, Callable, NamedTuple, Iterable, Iterator
def menu_help() -> None:
print("""
- - - - - - - - -
Micro Menu Help
- help line 1
- help line 2
- - - - - - - - -""")
menu_message('...')
def menu_message(message: str) -> None:
print('|===>', message, '<===|')
def menu_not_found() -> None:
menu_message('Choice not found; please try again.')
def menu_view() -> None:
pass
def menu_delete() -> None:
pass
def menu_quit() -> bool:
return True
class MenuItem(NamedTuple):
index: tuple[str, ...]
name: str
callback: Callable[[], Any]
def __str__(self) -> str:
desc = f'{self.index[0]}] {self.name}'
if len(self.index) > 1:
others = ', '.join(self.index[1:])
desc += f' ({others})'
return desc
def menu_fragments(items: Iterable[MenuItem]) -> Iterator[str]:
yield 'Welcome to Micro Menu'
for item in items:
yield str(item)
def menu_text(items: Iterable[MenuItem]) -> None:
print('\n'.join(menu_fragments(items)))
def menu(dispatcher: dict[str, MenuItem], index: str) -> Callable[[], Any]:
""" Will return a function based on the index."""
item = dispatcher.get(index)
if item is None:
return menu_not_found
# Delete this once you're done debugging the program
menu_message(f'`{item.name}` function was called.')
return item.callback
def main() -> None:
items = (
MenuItem(('1', 'v'), 'View', menu_view),
MenuItem(('2',), 'Delete', menu_delete),
MenuItem(('3',), 'Help', menu_help),
MenuItem(('4', 'q'), 'Quit', menu_quit),
)
dispatcher = {
index: item
for item in items
for index in item.index
}
while True:
menu_text(items)
choice = input('>> Make your choice: ').strip().lower()
if menu(dispatcher, choice)():
break
print()
if __name__ == '__main__':
main() | {
"domain": "codereview.stackexchange",
"id": 44286,
"tags": "python, console"
} |
Fusing IMU + GPS with robot_localization package | Question:
Currently, I am trying to realize localization using the robot_localization package, based on a GPS and an IMU. The setup follows the approach described in the following link: http://answers.ros.org/question/200071/how-to-fuse-imu-gps-using-robot_localization/ . I just copied it here:
ekf_localization_node
Inputs
IMU (type: sensor_msgs/Imu; topic: /imu; frame_id: base_imu_link)
Transformed GPS data as an odometry message (navsat_transform_node output: topic: /odometry/gps)
Outputs
Odometry message (this is what you want to use as your state estimate for your robot; topic: /odometry/filtered)
navsat_transform_node
Inputs
IMU (type: sensor_msgs/Imu; topic: /imu; frame_id: base_imu_link)
Raw GPS (type: NavSatFix; topic: /fix; frame_id: gps_reddot)
Odometry (output of ekf_localization_node: topic: /odometry/filtered)
Outputs
Transformed GPS data as odometry message (topic: /odometry/gps)
The image of my hardware setting is quite simple:
https://drive.google.com/file/d/0BwCt69n0gpFbc0c0ek03TGd0aEk/view?usp=sharing
My current launch file is:
<launch>
<include file="$(find gps_ublox)/launch/gps.launch"/>
<include file="$(find imu_ftdi)/launch/imu.launch"/>
<!--node name="location_prediction_node" pkg="location_prediction" type="location_prediction_node" output="screen"/-->
<!-- Parameters setting of the node: ekf_localization_node -->
<node pkg="tf" type="static_transform_publisher" name="imu_tf" args="0 0 0 0 0 0 1 base_link base_imu_link 20"/>
<node pkg="tf" type="static_transform_publisher" name="gps_tf" args="0 0 0 0 0 0 1 base_link gps_reddot 20"/>
<node pkg="robot_localization" type="ekf_localization_node" name="ekf_localization" clear_params="true">
<param name="output_frame" value="odom"/>
<param name="frequency" value="20"/>
<param name="odom_used" value="true"/>
<param name="imu_used" value="true"/>
<param name="vo_used" value="false"/>
<param name="sensor_timeout" value="0.1"/>
<param name="two_d_mode" value="false"/>
<param name="map_frame" value="map"/>
<param name="odom_frame" value="odom"/>
<param name="base_link_frame" value="base_link"/>
<param name="world_frame" value="odom"/>
<param name="odom0" value="/odometry/gps"/>
<param name="imu0" value="/imu"/>
<rosparam param="odom0_config">[true, true, true,
false, false, false,
false , false, false,
false, false, false,
false, false, false]</rosparam>
<rosparam param="imu0_config">[false, false, false,
true , true , true,
false, false, false,
true , true , true ,
true , true , true ]</rosparam>
<param name="odom0_differential" value="false"/>
<param name="imu0_differential" value="false"/>
<param name="imu0_remove_gravitational_acceleration" value="true"/>
<param name="odom0_relative" value="false"/>
<param name="imu0_relative" value="false"/>
<param name="print_diagnostics" value="true"/>
<!-- ======== ADVANCED PARAMETERS ======== -->
<param name="odom0_queue_size" value="2"/>
<param name="imu0_queue_size" value="10"/>
<rosparam param="process_noise_covariance">[0.05, 0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
0.0, 0.05, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
0.0, 0.0, 0.06, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
0.0, 0.0, 0.0, 0.03, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
0.0, 0.0, 0.0, 0.0, 0.03, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
0.0, 0.0, 0.0, 0.0, 0.0, 0.06, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.025, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.025, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.04, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0, 0.0, 0.0, 0.0,
0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0, 0.0, 0.0,
0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.02, 0.0, 0.0, 0.0,
0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0,
0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0,
0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.015]</rosparam>
<rosparam param="initial_estimate_covariance">[1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9]</rosparam>
</node>
<!-- Parameters setting of the node: navsat_transform_node -->
<node pkg="robot_localization" type="navsat_transform_node" name="navsat_transform_node" respawn="true">
<param name="magnetic_declination_radians" value="0.0036651914"/>
<param name="yaw_offset" value="0.0"/>
<param name="zero_altitude" value="false"/>
<param name="broadcast_utm_transform" value="true"/>
<param name="publish_filtered_gps" value="true"/>
<param name="use_odometry_yaw" value="false"/>
<param name="wait_for_datum" value="false"/>
<remap from="/imu/data" to="/imu" />
<remap from="/gps/fix" to="/fix" />
<!--remap from="/odometry/filtered" to="/odometry/filtered" /-->
</node>
</launch>
Current problems:
1>. If I do not include the node [<node pkg="tf" type="static_transform_publisher" name="gps_tf" args="0 0 0 0 0 0 1 base_link gps_reddot 20"/>], all the outputs of /odometry/gps are zeros. If this tf node is included, the output of /odometry/gps is as follows:
header:
seq: 5
stamp:
secs: 1470557701
nsecs: 931571006
frame_id: odom
child_frame_id: ''
pose:
pose:
position:
x: 0.484485645778
y: 0.158553831367
z: -1.06257850911
orientation:
x: 0.0
y: 0.0
z: 0.0
w: 1.0
covariance: [1.6656841125133872, -0.020322677227241386, -0.5022024606421389, 0.0, 0.0, 0.0, -0.020322677227241386, 1.6207245363996199, 0.19335550081983582, 0.0, 0.0, 0.0, -0.5022024606421389, 0.19335550081983582, 6.390991351086994, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
twist:
twist:
linear:
x: 0.0
y: 0.0
z: 0.0
angular:
x: 0.0
y: 0.0
z: 0.0
covariance: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
Only the position in the Pose message is nonzero; the orientation and the Twist message are zeros.
Is there something wrong for my setting?
If this node is not included, the output of /odometry/filtered stays close to zero when I walk around in an outdoor environment, because of the zero output of /odometry/gps from navsat_transform_node.
In addition, it seems from the link I followed that the previous author has not included this node.
2>. While including the tf node for gps, I have done an experiment.
image link: https://drive.google.com/file/d/0BwCt69n0gpFbeVpnamlKYUxxTm8/view?usp=sharing
The recorded data from /odometry/filtered are plotted over the Google map and are roughly accurate. However, there is still some location error. The frequency of the IMU is 20 Hz, and the frequency of the GPS is 1 Hz. Is this because of the low frequency of the GPS? The recording frequency is 2 Hz. You can see that there are some jumps in some of the segments.
Thanks very much for your help.
Originally posted by yrj on ROS Answers with karma: 3 on 2016-08-07
Post score: 0
Answer:
(1) Both of those are correct, though I'm surprised the node generates any output when you turn off the base_link->gps_reddot transform. Regardless, you need that transform to be defined, so that's not a problem per se. Also, the node only generates pose data, not twist data. I'm not differentiating pose to get velocity. The real issue is that the output of the node should have been a PoseWithCovarianceStamped, but for legacy reasons, I left it as-is.
(2) This is also what I expect. First, you only have an IMU and GPS, so when you aren't getting GPS signals, your robot's pose is dictated solely by integrating IMU data. Since the only linear (as opposed to rotational) quantity measured by the IMU is acceleration, you're probably going to see a fair amount of drift when you aren't getting GPS measurements. This will cause the filter's error (covariance) to increase, and when you next receive a GPS measurement, its covariance will likely be much lower than the filter's, so the filter will accept that GPS measurement as more or less a measure of ground truth, and will "jump" to that pose.
Bottom line: integrating an IMU with a GPS isn't going to produce fantastic results without a velocity measurement, though it may improve if I ever find time to implement this. Also, if your GPS has reasonable accuracy, then you aren't going to improve its estimate with a filter very much, but rather you will have smoother transitions between its (infrequent) measurements.
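The "jump" described above can be illustrated with a one-dimensional Kalman measurement update and made-up numbers (an editorial sketch, not part of the original answer):

```python
def kalman_update(x, P, z, R):
    """Scalar Kalman update: estimate x with variance P, measurement z with variance R."""
    K = P / (P + R)                      # gain -> 1 when P >> R
    return x + K * (z - x), (1.0 - K) * P

# Hypothetical numbers: the filter has drifted on integrated IMU data
# (large variance P), then an accurate GPS fix arrives (small variance R):
x, P = 0.0, 100.0
z, R = 5.0, 1.0
x_new, P_new = kalman_update(x, P, z, R)
# The estimate moves ~99% of the way to the GPS fix -- the "jump".
```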
Originally posted by Tom Moore with karma: 13689 on 2016-08-30
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 25464,
"tags": "imu, navigation, ekf, gps, robot-localization"
} |
Special cases in friction | Question: Imagine that two objects are stacked. The friction between the objects is higher than the friction between the bottom object and the ground. Is it possible for the top object to be pushed hard enough to slide off the bottom object? The applied force is parallel to the ground, at no angle.
I have tried writing equations for this problem, but they led to illogical conclusions (like the bottom object sliding faster than the top one). I have asked many people and tried researching it, but I cannot find a solid conclusion.
Answer: Get two cardboard boxes. Fill one with books, and put it on the bottom. Leave one empty, and put it on top of the one full of books.
Push the top box parallel to the ground. The top box will leave the bottom box.
Find a table. Put a book on a table. Push the book horizontally. It will move on top of the table, rather than the table underneath it moving.
Mathematically, each interface will have some coefficient of static friction, $\mu$. The maximum amount of static friction will be $\mu N$, where $N$ is the normal force the surface is applying. As long as the total horizontal force on the object is less than $\mu N$, the object will not move with respect to the object below it.
As long as the $\mu N$ of the top-box to bottom-box interface is lower than the $\mu N$ of the bottom-box to floor interface, you'll find the top box slips off. This happens quite often actually, especially when the top object has less mass.
The opposite can happen if the interfaces are different. Stack two tires on an ice rink. The coefficient of friction the bottom tire to the ice is much lower than the coefficient of friction between the bottom and top tires. Even though the bottom-tire to ice interface has twice the normal force (the mass of two tires) than the normal force for the top-tire to bottom tire interface (the mass of one tire), you'll still find that pushing on the top tire causes the whole stack to slide. | {
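Plugging hypothetical numbers into the μN comparison (masses and coefficients invented purely for illustration):

```python
g = 9.81                      # m/s^2
m_top, m_bottom = 2.0, 10.0   # hypothetical masses, kg
mu_top = 0.3                  # top-box / bottom-box interface
mu_ground = 0.5               # bottom-box / ground interface

# Maximum static friction at each interface:
f_max_top = mu_top * m_top * g                      # ~5.9 N
f_max_ground = mu_ground * (m_top + m_bottom) * g   # ~58.9 N

# A horizontal push on the TOP box exceeding f_max_top breaks static
# friction there first; the friction then transmitted to the bottom box
# is far below f_max_ground, so the top box slides off while the bottom
# box stays put.
```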
"domain": "physics.stackexchange",
"id": 52447,
"tags": "newtonian-mechanics, forces, friction"
} |
Do individual neurons communicate with the origin of thoughts? | Question: When mapping the different neural pathways in the brain, often pictures such as these are drawn:
Or similar versions. Clearly these sketches draw the neural pathways as being a two-sided connection, as can be judged from the double arrows. I was wondering, how such bidirectional pathways work. Specifically, I am interested to know whether individual neurons constitute such a bidirectional pathway between different parts of the brain, or if the bidirectional pathway can be thought of as a 'circuit': a pathway of neurons that eventually comes back to a certain part of the brain.
This question is especially interesting in light of abstract thoughts. Let's make some extremely generalising (and ludicrous) assumptions and assume that our conscious thoughts are governed by our hippocampus only. That is, we assume that thinking about a matter without input from sensory neurons causes the hippocampus to fire an action potential into some specific cluster of neurons (somewhere in the brain). E.g. we assume that thinking about the letter 'A' causes the hippocampus to fire an action potential into the cluster of neurons that represents the letter 'A'. Let's also assume that the memory we have regarding the letter 'A' must be returned to the hippocampus, in order to continue our thought process. For example, I might want to recite the alphabet in my head. Starting with the letter 'A', I then continue to the letter 'B', which arises in my conscious thoughts as a result of the firing of the cluster of neurons that contain information regarding the letter 'B'.
So if we assume the circuit, conscious thoughts $\rightarrow$ cluster of information fires $\rightarrow$ conscious thoughts to exist, then how would its feedback most likely work?
Let's take the example where we think of the letter 'A' and want to think of the letter 'B'. I have sketched two different theories about how the communication between these two clusters of information and the 'governing body of thoughts' (which I assumed to be the hippocampus) might work:
The difference is the following: in the second theory, all individual neurons are expected to be able to communicate directly with the hippocampus, whereas in the first theory the feedback to the hippocampus arises only when we have reached our destination: the letter 'B'. Clearly, if we assume that a specific thought process is able to self-induce action potentials (of course, this is highly debatable), at least one of the two theories must be partially true. For which one is there evidence?
My first intuition would be to say that the first theory is more likely, but this is ambiguous under Dale's principle: why would only some cells be connected to the hippocampus? On the other hand, I find it hard to believe that every individual neuron is directly connected to the hippocampus.
So now that I have explained my thought process, my question can be formulated as follows:
How are clusters of neurons that are involved in abstract thoughts bidirectionally connected to the thought-governing-body (whichever actual part(s) of the brain this is)? Can we say that just a single neuron in the cluster provides the feedback to the thought-governing-body, or is every individual neuron that is involved in conscious thought potentially capable of a feedback loop?
Disclaimer: Yes, I'm not up to date with all the latest advances in neuroscience. Yes, I'm aware that my question might be ambiguous with respect to all the different types of neurons, connections, and theory about neuronal networks and micronetworks. My question, however, concerns the likelihood of the theories I presented. For which one do we have evidence? Are both wrong? If so, in what way? Are both right? If so, when does their difference play a role? etc.
Answer: To give a simple answer to the first part, the feedback generally happens in circuits of at least two neurons. A synapse is generally one way, transmitting information from the presynaptic neuron's axon to the postsynaptic neuron's dendrite. I'm sure there's caveats and exceptions because biology always has that, but that's the norm. A simple feedback loop might look like this: A excites B which excites C. C inhibits A as negative feedback.
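As a toy illustration only (not a biophysical model), the "A excites B, B excites C, C inhibits A" loop can be sketched with three firing-rate units; the gains and time step below are arbitrary assumptions:

```python
def step(rates, drive, dt=0.1):
    """One Euler step of a toy 3-unit rate model: A excites B, B excites C,
    and C inhibits A as negative feedback. Gains are arbitrary."""
    a, b, c = rates
    da = -a + max(0.0, drive - 2.0 * c)  # C inhibits A
    db = -b + max(0.0, 1.5 * a)          # A excites B
    dc = -c + max(0.0, 1.5 * b)          # B excites C
    return (a + dt * da, b + dt * db, c + dt * dc)

rates = (0.0, 0.0, 0.0)
for _ in range(200):
    rates = step(rates, drive=1.0)

# The loop settles with A's activity well below the external drive of 1.0,
# which is the hallmark of negative feedback.
print(rates)
```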
The second question relies pretty heavily on the assumption that conscious thought originates from a particular part of the brain, or that one neuron is dedicated to one concept. This doesn't seem to be true. Things like memory and thought appear to be distributed, emergent properties of several circuits and areas. They might be organized or integrated by a particular part of the brain but it's not the same as having one central executive looking at all the inputs deciding what you think.
Think of it like a person working on an Excel spreadsheet. There's all the code for the OS, the software, the formulas in the spreadsheet. Those are all important parts of the process but the user doesn't need to see any of that. All they need to see is the end products of each process, and then they can figure out what to do with those products. So circuits and neurons can be involved in lower processing without connecting directly to the area that organizes and integrates information. Also, because higher level processes like thought are so distributed, there's likely to be multiple centers organizing/integrating information and all giving feedback to each other. Does that make sense?
Side note: I think you might be misunderstanding Dale's principle. When it says that neurons have the same chemical action at every synapse, that means they release the same neurotransmitter(s) but that doesn't mean they have to be connected the same way or that the neurotransmitter release has the same results. For example, say a neuron releases the excitatory neurotransmitter glutamate. If it synapses on another neuron that releases glutamate, that will promote excitation. But if it synapses on an inhibitory neuron that releases GABA, then exciting that neuron will actually result in inhibition. | {
"domain": "biology.stackexchange",
"id": 4712,
"tags": "neuroscience, neurophysiology"
} |
Use End Effector frame to move in moveit | Question:
Hello:
I use moveit_commander to make a Cartesian move along the X axis, but the camera on the gripper shows that the motion is not purely along the X axis.
I checked and found that MoveIt by default uses the 'world' frame; the world frame and the Link6 frame poses are not the same, so I should be doing the Cartesian movement in the Link6 (end-effector) frame.
I checked using rosrun tf tf_echo Link6 world and confirmed this is what I need, but I don't know how to replace the frame that MoveIt uses. Any pointers or instructions?
I found this discussion and it seems to be what I want to do, but it uses C++. Is it possible to do the same thing with Python?
https://answers.ros.org/question/228190/moveit-end-effector-positioning/
Thanks in advance
Originally posted by Gojigu on ROS Answers with karma: 23 on 2022-10-04
Post score: 0
Answer:
Yes, you can do this in moveit_commander. Here are two possible approaches:
Determine the new pose within the eef frame, then transform that pose into the world frame, then use it as the cartesian goal. See the answer to #q323075 for python code to transform a pose. One thing that answer gets wrong: in a real app the tf buffer and listener objects should be created only once (and not destroyed at the end of the transform_pose() function).
Use the moveit_commander set_pose_reference_frame() method to select the eef frame, then use a goal pose within that frame. I suspect this method is rarely used, so you may discover bugs with it. Disclosure: I've never actually tried to do it this way.
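To see why the two frames give different motions, here is the underlying frame math in plain Python (no ROS required). The Link6 pose used here is a made-up example, not one from the asker's robot:

```python
import math

# Hypothetical world-frame pose of Link6: yawed 90 degrees about world Z,
# translated to (0.4, 0.2, 0.5). In practice this comes from tf_echo / tf2.
yaw = math.pi / 2
c, s = math.cos(yaw), math.sin(yaw)
R = [[c, -s, 0.0],
     [s,  c, 0.0],
     [0.0, 0.0, 1.0]]
t = [0.4, 0.2, 0.5]

def rotate(R, v):
    """Apply a 3x3 rotation matrix to a 3-vector."""
    return [sum(R[i][j] * v[j] for j in range(3)) for i in range(3)]

# A 10 cm step along the end effector's own X axis...
step_eef = [0.1, 0.0, 0.0]

# ...expressed in the world frame. With this yaw it points along world Y,
# which is why commanding "X" in the world frame does not move the tool
# along its own X axis.
step_world = rotate(R, step_eef)
goal_position = [t[i] + step_world[i] for i in range(3)]
print(step_world)      # ~ [0.0, 0.1, 0.0]
print(goal_position)   # ~ [0.4, 0.3, 0.5]
```

This is essentially what approach 1 does, except that there tf2 supplies the transform instead of a hand-written matrix.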
Originally posted by Mike Scheutzow with karma: 4903 on 2022-10-07
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Gojigu on 2022-10-09:
Thanks for your reply.
Method 1 is what I need!
Method 2 seems easier to use, but I can't get it to work, there's not much information on the web about this command. | {
"domain": "robotics.stackexchange",
"id": 38017,
"tags": "ros, python, moveit, move-group"
} |