| anchor | positive | source |
|---|---|---|
err when using gscam | Question:
Hi Guys
I tried to run gscam to get an image from my camera, but when I typed
$ roslaunch gscam v4l.launch
I got these errors:
[FATAL] [1400738699.774908523]: Failed to PAUSE stream, check your gstreamer configuration.
[FATAL] [1400738699.775082184]: Failed to initialize gscam stream!
and when I typed:
$ rosrun gscam gscam
I got these errors:
[FATAL] [1400738785.074504579]: Problem getting GSCAM_CONFIG environment variable and 'gscam_config' rosparam is not set. This is needed to set up a gstreamer pipeline.
[FATAL] [1400738785.074717553]: Failed to configure gscam!
How can I fix these?
Any suggestions?
thanks
hamid
Originally posted by Hamid Didari on ROS Answers with karma: 1769 on 2014-05-21
Post score: 2
Answer:
Section 2 on the gscam wiki page describes how to set the GSCAM_CONFIG environment variable; that should fix the second error you encountered.
You'll also want to make sure that you're using the correct video device, something like /dev/video0 or /dev/video1, and that your user has read and write permissions on it.
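For reference, setting that variable for a V4L2 webcam usually looks something like the following — the device path and pipeline string are illustrative and depend on your camera and gstreamer version, so treat this as a sketch rather than a drop-in value:

```shell
# Illustrative example only: adjust the device path and caps for your camera
export GSCAM_CONFIG="v4l2src device=/dev/video0 ! video/x-raw-rgb,framerate=30/1 ! ffmpegcolorspace"
```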
I'm not sure why their v4l launch file doesn't work; it isn't documented on the wiki.
Originally posted by ahendrix with karma: 47576 on 2014-05-21
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 18025,
"tags": "gscam, camera"
} |
Simple randomization program | Question: I'm a few months into learning C++ programming and I want to know if I'm moving generally in the right direction or not with the following code. This is the most advanced thing I've created so far, but it doesn't contain any pointers or references so I'm worried I'm not doing things properly on the memory level. I especially want advice regarding moving this type of code to templates and virtual functions.
//This program's purpose is a simple battle simulator and a simple lotto simulator
//This requires two classes and a bit of procedural main game logic that can repeat
//The first class is a Randomizer class that allows the user to generate
//random numbers by inputting the desired amount of numbers, and the max
//range for those numbers. The default start value is 1.
//The second class is a Battler class that contains two Randomizer objects.
//It also contains a method to compare the second integer of each of the vector
//arrays contained in each of those objects, and returns -1, 0, or 1 depending
//on which value is greater, or 0 if they are tied.
#include <iostream> //Necessary for cout and endl
#include <ctime> //Necessary for time_t variable
#include <cstdlib> //Necessary for rand and srand
#include <vector> //Necessary for vector<type>
/*HEADER OF RANDOMIZER CLASS*/
class Randomizer //The preferred syntax is to start with a capital letter for classes
{
private:
time_t timevalue; //time_t member variable that will be set later
void randomTimeInit(); //called by constructor (runs once)
public:
Randomizer() //Class constructor
{
srand(time(NULL)); //This is JUST necessary here to get new results every time
//Caution: Only run once per program execution! (per object?)
void randomTimeInit(); //This function sets time_t value and seeds rand() with it
std::cout << "construct randomizer" << std::endl; //UNCOMMENT FOR DEBUG
}
~Randomizer() //Class deconstructor
{
std::cout << "destruct randomizer" << std::endl; //UNCOMMENT FOR DEBUG
}
std::vector<int> randomIntArray; //The vector array for the random numbers
int getRandomNumber(int); //Function to return one random number of max int
void createRandomIntArray(int, int); //Function to create random number array
void printRandomIntArray(); //Function to print created random number array
};
/*IMPLEMENTATION OF RANDOMIZER CLASS*/
void Randomizer::randomTimeInit()
{
time_t timevalue; //time_t is a special value (usually number of seconds since 00:00)
time(&timevalue); //Sets a reference timevalue to the time (requires the special time_t value)
srand(timevalue); //Seeds the random generator with timevalue
}
int Randomizer::getRandomNumber(int max)
{
return rand() % max +1; //Returns random int %modulo (max value) + 1 (to eliminate 0)
}
void Randomizer::createRandomIntArray(int count, int max)
{
for(int i=1;i<=count;i++) //Iterates up to the count value
{
int randomInt = getRandomNumber(max); //Sets randomInt to 1-to-max
randomIntArray.insert(randomIntArray.begin(), randomInt); //Inserts randomInt at start of vector
}
}
void Randomizer::printRandomIntArray()
{
for(unsigned int j=0;j<randomIntArray.size();j++) //Iterates up to previously created vector size
{
std::cout << randomIntArray[j] << " "; //Outputs number value at [j] followed by a space
}
}
/*HEADER OF BATTLER CLASS*/
class Battler
{
private:
Randomizer random1, random2; //Two Randomizer objects that will each have different values
int numOne, maxOne, numTwo, maxTwo; //Four variables for setting up the battle
public:
Battler() //Class constructor
{
std::cout << "con battler" << std::endl; //UNCOMMENT FOR DEBUG
}
~Battler() //Class destructor
{
std::cout << "decon battler" << std::endl; //UNCOMMENT FOR DEBUG
}
int doBattler(); //Function that compares values between the Randomizer objects
};
/*IMPLEMENTATION OF BATTLER CLASS*/
int Battler::doBattler()
{
numOne = 5; //Five numbers to choose from for the battle (arbitrary)
maxOne = 100; //Value up to 100 for the "damage" number (arbitrary)
numTwo = 5; //Five numbers to choose from for the battle (arbitrary)
maxTwo = 100; //Value up to 100 for the "damage" number (arbitrary)
random1.createRandomIntArray(numOne, maxOne); //Generates Player 1s numbers
random2.createRandomIntArray(numTwo, maxTwo); //Generates Player 2s numbers
std::cout << "Here the two scores:" << std::endl;
random1.printRandomIntArray(); //Prints Player 1s numbers
std::cout << std::endl;
random2.printRandomIntArray(); //Prints Player 2s numbers
std::cout << std::endl;
//This begins the logic to test which player won the round
//I decided to use a simple return here instead of a switch statement
//One definite downside of the following code is having to repeat two of the lines
//in each of the three different statements in order to clear the vector array between rounds
if(random1.randomIntArray[2] == random2.randomIntArray[2]) //NOTE: this looks at the second integer
//of the member arrays and compares
//them ((arbitrary) but it has to be a value
//inside the array or it crashes)
{
std::cout <<"It was a draw!" << std::endl << std::endl;
random1.randomIntArray.clear(); //This is necessary to clear the vector array between rounds
random2.randomIntArray.clear(); //If it's not present then the array continues to expand forever
return 0; //Return immediately stops the function and returns this value
}
else if(random1.randomIntArray[2] > random2.randomIntArray[2]) //See NOTE above
{
std::cout << "Player 1 hits!" << std::endl << std::endl;
random1.randomIntArray.clear(); //This is necessary to clear the vector array between rounds
random2.randomIntArray.clear(); //If it's not present then the array continues to expand forever
return -1; //Return immediately stops the function and returns this value
}
else
{
std::cout << "Player 2 hits!" << std::endl << std::endl;
random1.randomIntArray.clear(); //This is necessary to clear the vector array between rounds
random2.randomIntArray.clear(); //If it's not present then the array continues to expand forever
return 1; //Return immediately stops the function and returns this value
}
}
/*START OF FUTURE MAIN.CPP*/
/*PROTOTYPE DECLARATIONS*/
void inputAndPrintRandomIntArray(); //This function takes inputs for amount of numbers and
//their max value and prints out the created array
void lottoLoop(); //This function allows the repetition of the above function
//if the user inputs char 'Y'
void battleLoop(); //This function creates a Battler object and player life
//variables and repeats the doBattler function until one player is dead
void chooseBattleOrLotto(); //This function allows user input to control which of the two
//main functions of the program to run
/*START OF MAIN FUNCTION*/
int main(int argc, char* args[]) //These arguments for main allow command line access?
{
chooseBattleOrLotto(); //Allows user to choose which function program will perform
return 0; //Here because int main expects a return value
}
/*DEFINITION OF MAIN.CPP FUNCTIONS*/
void inputAndPrintRandomIntArray()
{
Randomizer myRandomizer; //Instantiates an object of the Randomizer class
int number, maximum; //These are the variables that will go into Randomizer method
std::cout << "Enter the amount of numbers for the Lotto Ticket (1-100):";
std::cin >> number;
std::cout << "Enter the maximum number from which they will be chosen(1-1000):";
std::cin >> maximum;
std::cout << "Here are your Lotto Numbers:" << std::endl;
myRandomizer.createRandomIntArray(number, maximum); //This activates Randomizer method with input values
myRandomizer.printRandomIntArray(); //This prints the created random int array
std::cout << std::endl;
}
void lottoLoop()
{
char gameRunning = 'Y'; //This char value will control whether the game continues to loop
do
{
//system("cls"); //apparently dangerous to use, simply clears console
//alternative is cout << string( 100, '\n' ). Remove to see all outputs
inputAndPrintRandomIntArray(); //Function that takes input and outputs vector array
std::cout << "Would you like to play again? (must enter Y):";
std::cin >> gameRunning;
}
while(gameRunning == 'Y'); //For some reason I couldn't get || 'y' to work
}
void battleLoop()
{
Battler doSimpleBattle; //Instantiates an object of the Battler class
char battleAgain = 'Y'; //Creates a char that controls whether to repeat the battle
//Begins the battle logic do/while loops
do
{
int player1HP = 10; //Setting the variables here resets them between games
int player2HP = 10; //Placing them where the char is above will not work
do
{
int battleResult; //Creates a variable to control the if statements
battleResult = doSimpleBattle.doBattler(); //Sets the variable to the return value of the function
if(battleResult == -1) //Looks at the return of the doBattler method
{
player2HP--; //Decrements Player 2 health based on this result
}
if(battleResult == 1) //Continues looking at return of doBattler method
{
player1HP--; //Decrements Player 1 health based on this result
}
}
while(player1HP > 0 || player2HP > 0); //Continue the game until one player has no health left
if(player1HP == 0) //Checks if Player 1's health is zero
{
std::cout << "Player 2 wins!" << std::endl;
}
else if(player2HP == 0) //Checks if Player 2's health is zero
{
std::cout << "Player 1 wins!" << std::endl;
}
std::cout << "Game over!" << std::endl;
std::cout << "Play again? (Must be Y to work)";
std::cin >> battleAgain; //This is the char input to control repeating the game
std::cout << std::endl;
}
while(battleAgain == 'Y'); //For some reason I can't get || 'y' to work
}
void chooseBattleOrLotto()
{
char modeSelect = 'L'; //Creates a char that controls mode choice and exit
std::cout << "Press 'L' for the lottery or 'B' for battle! ('Q' to exit):";
std::cin >> modeSelect;
if(modeSelect == 'L')
{
lottoLoop(); //Contains a do while loop that allows game repeat
}
if(modeSelect == 'B')
{
battleLoop(); //Contains a do while loop that runs the battle game with
//Player 1 vs Player 2 each having 10 HP that allows repeat
}
if(modeSelect == 'Q')
{
modeSelect = 'Q';
}
else
{
std::cout << "Value must match L or B. Q to Exit." << std::endl;
if(modeSelect != 'Q')
{
chooseBattleOrLotto(); //Repeats entire function unless input char equals 'Q'
}
}
}
Answer: Some thoughts:
I'm not sure why you need a randomizer class. The random number stream is a global resource, so srand() needs to be done only once. It may be better to rename this as a Player class, since that's what it's usually used for. I can see a use for a RandomStream class for your damage and lotto numbers, something like:
class RandomStream
{
public:
RandomStream( int max ) : m_max(max) {}
virtual ~RandomStream() {}
int Next() { return rand() % m_max + 1; }
private:
int m_max;
};
struct DamageGenerator: public RandomStream
{
DamageGenerator(): RandomStream(c_MaxDamage) {}
};
struct LottoGenerator: public RandomStream
{
LottoGenerator( int max ): RandomStream(max) {}
};
So your numbers can be generated as:
LottoGenerator gen( maximum );
for ( int i = 0; i < number; i++ ) {
std::cout << gen.Next() << " ";
}
std::cout << std::endl;
and your battle damage as:
DamageGenerator m_gen;
void GenerateScores( int num ) {
m_scores.clear();
for ( int i = 0; i < num; i++ ) {
m_scores.push_back( m_gen.Next() );
}
}
For writing out "constructor/destructor messages", try guarding them with an #ifdef:
#ifdef _DEBUG // or equivalent
std::cout << "..." << std::endl;
#endif
That way, you don't need to remove them in "proper use".
You have a loop running from 1 to <= count in one function and 0 to < size() in the other. Make the first one 0 to < count as well - more consistent.
for
randomIntArray.insert(randomIntArray.begin(), randomInt);
you could do
randomIntArray.push_back( randomInt );
for simplicity - the order isn't significant.
For printing them out, you could use an iterator:
for ( std::vector<int>::iterator it = randomIntArray.begin(); it != randomIntArray.end(); ++it ) {
std::cout << *it << " ";
}
Note that you're printing out a space at the end, which doesn't matter in this case. See C++ infix iterator for something rather more complex!
Note that in your Randomizer::createRandomIntArray, there's nothing to stop you first doing
randomIntArray.clear();
This will remove the need for the annoying clears after the hit or draw messages.
Don't use magic numbers 5, 100 and -1, 0, 1 etc. You can use
const int c_Rounds = 5;
const int c_MaxDamage = 100;
and
enum BattlerResult { PLAYER1_WINS, DRAW, PLAYER2_WINS };
This is to aid readability (and also ease of change), rather than wondering what it means in the future.
numOne, maxOne etc are only used once, so there's no need to "remember" them in the class definition - move them from there into the function (and actually, they're covered by the definition in 7) anyway).
Try to keep Class/variable names in the realm of what they mean rather than what they do/are. e.g., Randomizer is probably better named as Player, random1 as player1, Battler as BattlerRound, doBattler as GetBattlerResult etc.
I know they're really notes to yourself, but in general keep comments explaining how/why, rather than just echoing the code they're commenting.
The "who wins" code is repeated, possibly giving rise to inconsistent behaviour if one part is changed. Separate out that decision into a separate function, something like:
BattlerResult result = DecideWinner( player1, player2 );
Similarly, the hit or draw messages can be separated out into a function as well:
GiveResultMessage( result );
Doing these will simplify the overall doBattler function, making it easier to read and understand what's going on.
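A minimal sketch of those two helpers — the parameter types are an assumption here (they take the two compared scores directly), and they reuse the BattlerResult enum suggested earlier:

```cpp
#include <iostream>

enum BattlerResult { PLAYER1_WINS, DRAW, PLAYER2_WINS };

// Decide the round's outcome from the two scores being compared.
BattlerResult DecideWinner( int score1, int score2 )
{
    if ( score1 == score2 ) return DRAW;
    return ( score1 > score2 ) ? PLAYER1_WINS : PLAYER2_WINS;
}

// Print the message for a given outcome in exactly one place.
void GiveResultMessage( BattlerResult result )
{
    switch ( result )
    {
        case DRAW:         std::cout << "It was a draw!" << std::endl; break;
        case PLAYER1_WINS: std::cout << "Player 1 hits!" << std::endl; break;
        case PLAYER2_WINS: std::cout << "Player 2 hits!" << std::endl; break;
    }
}
```

doBattler then clears the arrays once, calls DecideWinner, calls GiveResultMessage, and returns the result — no triplicated blocks.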
When you receive input from an unvalidated source, validate it. It doesn't make sense to have -5 lotto numbers. Also, consider allowing e.g. b as well as B as input.
The lotto loop termination should be:
while ( ( gameRunning == 'Y' ) || ( gameRunning == 'y' ) )
or perhaps even
while ( ::toupper( gameRunning ) == 'Y' )
The battleLoop loop could be a for, rather than a do...while; and it may be better as a boolean value (again, separating out the "play again" message as another function):
for ( bool battleAgain = true; battleAgain; battleAgain = QueryAnotherGame() )...
You may wish to introduce a BattleGame class to contain a single game, so that battleLoop just consists of:
void battleLoop()
{
for ( bool battleAgain = true; battleAgain; battleAgain = QueryAnotherGame() )
{
BattleGame game;
BattleResult result = game.Play();
game.GiveWinnerMessage( result );
std::cout << "Game over!" << std::endl;
}
}
chooseBattleOrLotto may be better as a switch (such as):
while ( true )
{
// ask L or B
switch ( ::toupper( modeSelect ) )
{
case 'Q': return;
case 'L': lottoLoop(); break;
case 'B': battleLoop(); break;
default: std::cout << "Value must be L or B (Q to exit)." << std::endl; break;
}
}
Note that your function is recursive - it probably won't matter for this, but a determined person could crash the game (via stack overflow) by repeatedly pressing an invalid key.
I think templates and virtual functions are a bit too complex for this at the moment... | {
"domain": "codereview.stackexchange",
"id": 1993,
"tags": "c++, beginner, random, generator"
} |
Kagome Lattice: Spin-orbit coupling Hamiltonian in tight-binding models | Question: Consider spin-orbit coupling (of strength $\lambda_1$) on lattice, with the below Hamiltonian
$$H = i \lambda_1 \sum_{<ij>} ~\frac{E_{ij} \times R_{ij}}{|E_{ij} \times R_{ij}|} \cdot \sigma ~c_i^\dagger c_j $$
with lattice sites $i, j$, the vector $R_{ij}$ connecting nearest-neighbor sites, E-field $E_{ij}$ and the vector of Pauli matrices $\sigma$.
Consider 2D plane, so $R_{ij} = (R_{ij}^x, R_{ij}^y, 0)$ and choose E-field $E_{ij} = (E_{ij}^x, E_{ij}^y, 0)$, with $E_{ij}^x, E_{ij}^y >0$. Factor in above Hamiltonian is
$$\frac{E_{ij} \times R_{ij}}{|E_{ij} \times R_{ij}|} \cdot \sigma = \sigma_z ~\text{sgn} (E_{ij}^x R_{ij}^y - E_{ij}^y R_{ij}^x)$$
The paper here considers the 2D Kagome lattice, with the Hamiltonian for spin-orbit coupling appearing in the first line of Eq. (1). Going into k-space, the authors show in Eq. (2) that the spin-orbit Hamiltonian gives terms with cosines, like $\cos (k_x)$.
However, it looks to me like terms should be sines, like $\sin(k_x)$.
Consider the 2D Kagome lattice shown in Fig. 1 of the paper. For hopping along the horizontal bonds of the lattice in the x direction, where $R_{ij}^y = 0$, there will be terms proportional to:
$$\sum_n \text{sgn} (- E_{ij}^y R_{ij}^x) c_n^\dagger c_{n+1} \to \sum_k e^{-i k_x} c_k^\dagger c_k $$
and
$$ \sum_n \text{sgn} (- E_{ij}^y R_{ij}^x) c_n^\dagger c_{n-1} \to \sum_k - e^{+i k_x} c_k^\dagger c_k $$
But because of the opposite direction of $R_{ij}^x$ in the top and bottom cases the sgn function will be different, so that the exponentials add to make a $\sin(k_x)$ and not a $\cos(k_x)$ as in the second line of Eq. (2) of the paper.
Where is the gap in my understanding of spin-orbit coupling on a 2D lattice?
Answer: I think the authors got it right.
The subtlety lies in the definition of $\mathbf{E}_{ij}$ and $\mathbf{R}_{ij}$. The authors consider $\mathbf{E}_{ij}$ as the electric field felt by the electron during hopping from $j$ to $i$ (although the field is non-uniform, the direction of the field does not change throughout a bond). Thanks to Clara for pointing this out.
In the above, I have shown three unit cells along $x$ direction and enumerated them as '$-1$', '$0$', and '$+1$'. The charge centers are shown as red '+' symbol. The direction of the electric field (black arrows) at the center of each bond along the horizontal direction is shown (since the question concerns the hopping along $x$ direction, I left the other bonds to avoid clutter). Notice the staggering nature of the electric field along the $x$-direction.
If we focus on the hopping along the $x$ direction, and consider the term where an electron inside the unit cell '$0$' hops from site '2' to '1', then $\left(\mathbf{E}_{1,0;~2,0}\times\mathbf{R}_{1,0;~2,0}\right)$ points in the $+z$ direction and let the magnitude be $\alpha$. This hopping will contribute to the term $H_{12}$ of the Hamiltonian. Here I use a little different notation to distinguish the indices of the unit cells and the lattice sites. The term $\mathbf{R}_{a,b;~c,d}$ represents: vector that points to site $a$ of unit cell $b$, from site $c$ of unit cell $d$.
Site '$2$' of the unit cell '-1' also contributes to the wavefunction at site $1$ of the unit cell '$0$'. This hopping will contribute to the term $H_{12}$ of the Hamiltonian. Now, notice that the electric field at the bond between the two aforementioned sites is opposite to the previous case (where hopping happened entirely within the unit cell '0'). Moreover, the direction of hopping is also reversed. Therefore the cross-product $\left(\mathbf{E}_{1,0;~2,-1}\times\mathbf{R}_{1,0;~2,-1}\right)$ still points in the $+z$ direction with magnitude $\alpha$.
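Writing the two contributions with their Bloch phase factors and the common coefficient $\alpha$ makes this explicit:

$$\alpha\, e^{ik_x} + \alpha\, e^{-ik_x} = 2\alpha \cos(k_x)$$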
Since both the terms that contribute to site '$1$' have the same sign with different Bloch factors $\exp(ik_x)$ and $\exp(-ik_x)$, therefore the resulting term will be ~$\cos(k_x)$. | {
"domain": "physics.stackexchange",
"id": 64682,
"tags": "second-quantization, lattice-model, spin-models, spin-chains, tight-binding"
} |
C64 loading screen | Question: I have made a loading screen (splash screen) just like the old C64. I have used a series of picture boxes and just change the coloured image using a timer and a case statement.
namespace c64
{
public partial class Form1 : Form
{
public Form1()
{
InitializeComponent();
}
private void Form1_Load(object sender, EventArgs e)
{
timer1.Start();
timer2.Start();
timer3.Start();
}
private void timer1_Tick(object sender, EventArgs e)
{
Random rnd = new Random();
int a = rnd.Next(1,8);
int b = rnd.Next(1,8);
int c = rnd.Next(1,8);
int d= rnd.Next(1,8);
int n= rnd.Next(1,8);
int f= rnd.Next(1,8);
int g= rnd.Next(1,8);
int h = rnd.Next(1, 8);
switch (a)
{
case 1:
pictureBox1.Image = Properties.Resources.image1;
pictureBox8.Image = Properties.Resources.image1;
pictureBox10.Image = Properties.Resources.image1;
pictureBox2.Image = Properties.Resources.image1;
pictureBox11.Image = Properties.Resources.image1;
pictureBox9.Image = Properties.Resources.image1;
break;
case 2:
pictureBox1.Image = Properties.Resources.image2;
pictureBox8.Image = Properties.Resources.image2;
pictureBox10.Image = Properties.Resources.image2;
break;
case 3:
pictureBox1.Image = Properties.Resources.image3;
pictureBox8.Image = Properties.Resources.image3;
pictureBox10.Image = Properties.Resources.image3;
break;
case 4:
pictureBox1.Image = Properties.Resources.image4;
pictureBox8.Image = Properties.Resources.image4;
break;
case 5:
pictureBox1.Image = Properties.Resources.image5;
pictureBox8.Image = Properties.Resources.image5;
break;
case 6:
pictureBox1.Image = Properties.Resources.image6;
pictureBox8.Image = Properties.Resources.image6;
break;
case 7:
pictureBox1.Image = Properties.Resources.image7;
pictureBox8.Image = Properties.Resources.image7;
break;
case 8:
pictureBox1.Image = Properties.Resources.image8;
pictureBox8.Image = Properties.Resources.image8;
break;
}
switch (b)
{
case 1:
pictureBox2.Image = Properties.Resources.image1;
pictureBox11.Image = Properties.Resources.image1;
pictureBox9.Image = Properties.Resources.image1;
break;
case 2:
pictureBox2.Image = Properties.Resources.image2;
pictureBox9.Image = Properties.Resources.image2;
break;
case 3:
pictureBox2.Image = Properties.Resources.image3;
pictureBox11.Image = Properties.Resources.image3;
pictureBox9.Image = Properties.Resources.image3;
pictureBox18.Image = Properties.Resources.image3;
pictureBox18.Image = Properties.Resources.image4;
break;
case 4:
pictureBox2.Image = Properties.Resources.image4;
pictureBox9.Image = Properties.Resources.image4;
break;
case 5:
pictureBox2.Image = Properties.Resources.image5;
pictureBox9.Image = Properties.Resources.image5;
break;
case 6:
pictureBox2.Image = Properties.Resources.image6;
pictureBox9.Image = Properties.Resources.image6;
pictureBox12.Image = Properties.Resources.image6;
break;
case 7:
pictureBox2.Image = Properties.Resources.image7;
pictureBox9.Image = Properties.Resources.image7;
break;
case 8:
pictureBox2.Image = Properties.Resources.image8;
pictureBox9.Image = Properties.Resources.image8;
break;
}
switch (c)
{
case 1:
pictureBox3.Image = Properties.Resources.image1;
pictureBox13.Image = Properties.Resources.image1;
break;
case 2:
pictureBox3.Image = Properties.Resources.image2;
pictureBox13.Image = Properties.Resources.image2;
break;
case 3:
pictureBox3.Image = Properties.Resources.image3;
break;
case 4:
pictureBox3.Image = Properties.Resources.image4;
pictureBox1.Image = Properties.Resources.image2;
pictureBox8.Image = Properties.Resources.image2;
pictureBox10.Image = Properties.Resources.image2;
break;
case 5:
pictureBox3.Image = Properties.Resources.image5;
pictureBox18.Image = Properties.Resources.image1;
pictureBox18.Image = Properties.Resources.image1;
pictureBox17.Image = Properties.Resources.image2;
break;
case 6:
pictureBox3.Image = Properties.Resources.image6;
break;
case 7:
pictureBox3.Image = Properties.Resources.image7;
break;
case 8:
pictureBox3.Image = Properties.Resources.image8;
break;
}
switch (d)
{
case 1:
pictureBox4.Image = Properties.Resources.image1;
pictureBox14.Image = Properties.Resources.image1;
pictureBox17.Image = Properties.Resources.image2;
pictureBox8.Image = Properties.Resources.image2;
pictureBox10.Image = Properties.Resources.image2;
break;
case 2:
pictureBox4.Image = Properties.Resources.image2;
pictureBox18.Image = Properties.Resources.image2;
pictureBox18.Image = Properties.Resources.image3;
break;
case 3:
pictureBox4.Image = Properties.Resources.image3;
pictureBox17.Image = Properties.Resources.image5;
pictureBox18.Image = Properties.Resources.image8;
pictureBox18.Image = Properties.Resources.image7;
break;
case 4:
pictureBox4.Image = Properties.Resources.image4;
break;
case 5:
pictureBox4.Image = Properties.Resources.image5;
pictureBox14.Image = Properties.Resources.image5;
break;
case 6:
pictureBox4.Image = Properties.Resources.image6;
pictureBox17.Image = Properties.Resources.image7;
break;
case 7:
pictureBox4.Image = Properties.Resources.image7;
break;
case 8:
pictureBox4.Image = Properties.Resources.image8;
break;
}
switch (n)
{
case 1:
pictureBox5.Image = Properties.Resources.image1;
pictureBox15.Image = Properties.Resources.image5;
break;
case 2:
pictureBox5.Image = Properties.Resources.image2;
break;
case 3:
pictureBox5.Image = Properties.Resources.image3;
pictureBox15.Image = Properties.Resources.image3;
break;
case 4:
pictureBox5.Image = Properties.Resources.image4;
break;
case 5:
pictureBox5.Image = Properties.Resources.image5;
break;
case 6:
pictureBox5.Image = Properties.Resources.image6;
break;
case 7:
pictureBox5.Image = Properties.Resources.image7;
break;
case 8:
pictureBox5.Image = Properties.Resources.image8;
break;
}
switch (f)
{
case 1:
pictureBox5.Image = Properties.Resources.image1;
pictureBox16.Image = Properties.Resources.image3;
break;
case 2:
pictureBox5.Image = Properties.Resources.image2;
break;
case 3:
pictureBox5.Image = Properties.Resources.image3;
break;
case 4:
pictureBox5.Image = Properties.Resources.image4;
break;
case 5:
pictureBox5.Image = Properties.Resources.image5;
break;
case 6:
pictureBox5.Image = Properties.Resources.image6;
break;
case 7:
pictureBox5.Image = Properties.Resources.image7;
break;
case 8:
pictureBox5.Image = Properties.Resources.image8;
break;
}
switch (g)
{
case 1:
pictureBox6.Image = Properties.Resources.image1;
break;
case 2:
pictureBox6.Image = Properties.Resources.image2;
break;
case 3:
pictureBox6.Image = Properties.Resources.image3;
break;
case 4:
pictureBox6.Image = Properties.Resources.image4;
break;
case 5:
pictureBox6.Image = Properties.Resources.image5;
break;
case 6:
pictureBox6.Image = Properties.Resources.image6;
break;
case 7:
pictureBox6.Image = Properties.Resources.image7;
break;
case 8:
pictureBox6.Image = Properties.Resources.image8;
break;
}
switch (h)
{
case 1:
pictureBox7.Image = Properties.Resources.image1;
break;
case 2:
pictureBox7.Image = Properties.Resources.image2;
break;
case 3:
pictureBox7.Image = Properties.Resources.image3;
break;
case 4:
pictureBox7.Image = Properties.Resources.image4;
break;
case 5:
pictureBox7.Image = Properties.Resources.image5;
break;
case 6:
pictureBox7.Image = Properties.Resources.image6;
break;
case 7:
pictureBox7.Image = Properties.Resources.image7;
break;
case 8:
pictureBox7.Image = Properties.Resources.image8;
break;
}
}
private void timer2_Tick(object sender, EventArgs e)
{
pictureBox21.Visible = true;
}
private void timer3_Tick(object sender, EventArgs e)
{
pictureBox21.Visible = false;
}
}
}
Is there a more efficient way to get this effect?
Answer: There's quite a lot of code duplication going on here... I would suggest refactoring the switch statements into a method.
I'm assuming there are 20 PictureBox objects and you want to randomize the shown image on each of them, because your provided code is a bit bizarre (e.g. pictureBox11.Image only ever gets assigned Properties.Resources.image1 or image3, and sometimes there are multiple assignments to the same pictureBox inside a single case).
namespace c64 {
public partial class Form1 : Form {
private Random rng;
private static readonly Image[] Images = new Image[]{
Properties.Resources.image1,
Properties.Resources.image2,
Properties.Resources.image3,
Properties.Resources.image4,
Properties.Resources.image5,
Properties.Resources.image6,
Properties.Resources.image7,
Properties.Resources.image8
};
public Form1() {
InitializeComponent();
rng = new Random();
}
private void RandomizeImage(PictureBox pictureBox) {
int index = rng.Next(Images.Length); // upper bound is exclusive, so every image can be picked
pictureBox.Image = Images[index];
}
private void timer1_Tick(object sender, EventArgs e) {
RandomizeImage(pictureBox1);
RandomizeImage(pictureBox2);
RandomizeImage(pictureBox3);
RandomizeImage(pictureBox4);
RandomizeImage(pictureBox5);
RandomizeImage(pictureBox6);
RandomizeImage(pictureBox7);
RandomizeImage(pictureBox8);
RandomizeImage(pictureBox9);
RandomizeImage(pictureBox10);
RandomizeImage(pictureBox11);
RandomizeImage(pictureBox12);
RandomizeImage(pictureBox13);
RandomizeImage(pictureBox14);
RandomizeImage(pictureBox15);
RandomizeImage(pictureBox16);
RandomizeImage(pictureBox17);
RandomizeImage(pictureBox18);
RandomizeImage(pictureBox19);
RandomizeImage(pictureBox20);
}
// rest of your code ...
}
}
Of course, you could pack those pictureBoxX (X = 1 to 20) into their own array and iterate over them in a loop. | {
"domain": "codereview.stackexchange",
"id": 17869,
"tags": "c#, animation"
} |
Can Hadamard's formula be used for fermionic operators? | Question: Can I use this special case of Hadamard's formula
$$e^{\hat B} \hat A e^{-\hat B} = \hat A + [\hat B,\hat A] + \frac{1}{2!}[\hat B, [\hat B,\hat A]] + \dots$$
for fermionic operators?
Suppose I have fermionic operators that obey anticommutation relations
$\{a,a^{\dagger}\}=1$ and $\{a,a\}=\{a^{\dagger},a^{\dagger}\}=0$. The commutator for fermions $[a,a^{\dagger}]=1-2a^{\dagger}a$.
Then, if $A=a^{\dagger}$ and $B=a$, I can get
$e^{\hat a} \hat a^{\dagger} e^{-\hat a} = \hat a^{\dagger} + [\hat a,\hat a^{\dagger}] + \frac{1}{2!}[\hat a, [\hat a,\hat a^{\dagger}]] + \dots = \hat a^{\dagger} + (1-2\hat a^{\dagger}\hat a) + \frac{1}{2!}[\hat a, (1-2\hat a^{\dagger}\hat a)] + \dots$
Is this formula universal for fermionic and bosonic operators?
Answer: Hadamard's formula
$$ e^XYe^{-X}~=~e^{[X,\cdot]_C}Y \tag{1}$$
also works if one or both operators $X$ and $Y$ are Grassmann-odd (or even don't carry definite Grassmann-parity). Here it is important that $[\cdot,\cdot]_C$ in eq. (1) is the commutator; not the supercommutator nor the anticommutator. The proof is very similar to the Grassmann-even case.
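As a sketch of how this plays out for the example in the question, the nested commutators terminate quickly: using $\hat a^2 = 0$ one finds $[\hat a, \hat a^{\dagger}\hat a] = \hat a$, so

$$[\hat a,[\hat a,\hat a^{\dagger}]] = [\hat a,\, 1-2\hat a^{\dagger}\hat a] = -2\hat a, \qquad [\hat a, -2\hat a] = 0,$$

and the series sums to

$$e^{\hat a}\, \hat a^{\dagger} e^{-\hat a} = \hat a^{\dagger} + (1-2\hat a^{\dagger}\hat a) - \hat a.$$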
NB: Be aware that a Grassmann-odd operator $X$ does not need to square to zero, cf. e.g. SUSY charge operators. | {
"domain": "physics.stackexchange",
"id": 92210,
"tags": "quantum-mechanics, operators, fermions, commutator, anticommutator"
} |
Talker Listener Tutorial. Listener no show | Question:
Hi, I feel more confident with CMake and Linux after spending time practising, so I tried these steps again:
http://wiki.ros.org/catkin/Tutorials/create_a_workspace
http://wiki.ros.org/catkin/Tutorials/CreatingPackage
http://wiki.ros.org/ROS/Tutorials/WritingPublisherSubscriber%28c%2B%2B%29
So I have everything set up as in the tutorials. CMake is giving me this error:
CMake Error at /opt/ros/indigo/share/genmsg/cmake/genmsg-extras.cmake:94 (message): add_message_files() directory not found: /home/test/my_ws/src/beginner_tutorials/msg Call Stack (most recent call first): beginner_tutorials/CMakeLists.txt:8 (add_message_files)
Kinda stuck again; hope this time I'm more specific. Let me know if I can upload any images somewhere if it helps.
Originally posted by finch1 on ROS Answers with karma: 9 on 2018-01-04
Post score: 0
Original comments
Comment by jayess on 2018-01-04:
So, you're listing three tutorials here. What tutorial are you currently following?
Comment by finch1 on 2018-01-04:
Hi Jayess, I created the workspace following the instructions from the first link. Then created a package using the commands from the second link tutorial. After this, I added the source code and rearranged the CMakeLists as the tutorial describes in the third link.
Comment by finch1 on 2018-01-04:
Changed directory back to the workspace, typed catkin_make, got the error.
Comment by jayess on 2018-01-04:
Did you go through the Creating Msgs and Srvs tutorial?
Comment by finch1 on 2018-01-04:
Hi, yes, I am reading through it, after I got the error.
Comment by gvdhoorn on 2018-01-05:
@finch1: may I suggest a title change? At the moment, the title of your post is "Following package tutorial". That doesn't give any information about what it is you're actually trying to solve. As your problem is specifically about a certain aspect of the tutorial, mention that in the title.
Comment by finch1 on 2018-01-05:
Hi gvdhoorn, I think I'm gonna call it "Going round in circles" cause that's how it feels like. I don't know what else to try. If the tutorials are correct, I'm not getting any errors and still can only run one node out of the two built, then I'm lost and have no clue where to start looking to fix.
Comment by finch1 on 2018-01-05:
Just to confirm, it should be like this right:
rosrun beginner_tutorials talker
rosrun beginner_tutorials listener
Comment by jayess on 2018-01-05:
Yes, you use rosrun like so
rosrun <package-name> <node-name>
Comment by jayess on 2018-01-05:
If you're having more issues unrelated to your original problem (compiling) could you please create a new question? It's not that I don't want to help, but things get messy when we keep changing the question.
Comment by finch1 on 2018-01-05:
Hi Jayess, its true, I'm going to cause its getting messy very quickly. Actually, I found other posts with the same issue so I'll go through them before I ask a new question, maybe I can find the answer.
Comment by jayess on 2018-01-05:
Sounds good. A good way to learn is to go through the wiki. If this issue (compiling) has been resolved, please click the checkmark to accept the answer.
Comment by jayess on 2018-01-05:
Next to the answer, click on the checkmark.
Comment by finch1 on 2018-01-05:
Thanks for confirming.
Comment by jayess on 2018-01-05:
No problem :)
Answer:
If you didn't go through the Creating Msgs and Srvs tutorial then you can remove the
add_message_files(DIRECTORY msg FILES Num.msg)
add_service_files(DIRECTORY srv FILES AddTwoInts.srv)
lines from your CMakeLists.txt file. You only need those lines if you have custom messages that you're compiling.
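For reference, a minimal sketch of the relevant part of a beginner_tutorials CMakeLists.txt without custom messages might look like the following (package and target names are taken from the tutorial; adjust to your own setup):

```cmake
cmake_minimum_required(VERSION 2.8.3)
project(beginner_tutorials)

find_package(catkin REQUIRED COMPONENTS roscpp rospy std_msgs)

# No add_message_files()/add_service_files()/generate_messages() here,
# because this package defines no custom .msg or .srv files.

catkin_package()

include_directories(${catkin_INCLUDE_DIRS})

add_executable(talker src/talker.cpp)
target_link_libraries(talker ${catkin_LIBRARIES})

add_executable(listener src/listener.cpp)
target_link_libraries(listener ${catkin_LIBRARIES})
```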
Originally posted by jayess with karma: 6155 on 2018-01-04
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by finch1 on 2018-01-04:
Removed these in a previous trial, and the node worked, even though I wasn't sure what I was doing, so I decided to ask this time.
Comment by jayess on 2018-01-04:
By worked, do you mean that it compiled and ran?
Comment by finch1 on 2018-01-04:
yes exactly, compile + run.
So I removed these two lines and catkin_make gave no errors.
Comment by jayess on 2018-01-04:
Great. So, if this solved your problem then please click the checkmark to mark the answer as correct.
Comment by finch1 on 2018-01-04:
ok, for some reason only "talker" runs as a node, "rosrun beginner_tutorial listener" couldn't be found
Comment by jayess on 2018-01-04:
Please create a new question. We try to keep the questions focused on one issue.
Comment by gvdhoorn on 2018-01-05:
@finch1: you mentioned you have the book before, so I'd like to suggest the following: go back to "A Gentle Introduction to ROS". Read it again, try to replicate what the book does and see whether your understanding has increased. I have a feeling it will be, and it should let you avoid the kind ..
Comment by gvdhoorn on 2018-01-05:
.. of issues you are running into now.
It's not that we don't want to help you, but you'll probably save yourself quite some frustration as well as be more efficient with your time (ie: not having to wait on answers here).
Comment by finch1 on 2018-01-05:
True true gvdhoorn, I agree and am doing so at the moment. I hope I can help others too one day like you guys. Thanks for your encouragement. | {
"domain": "robotics.stackexchange",
"id": 29656,
"tags": "ros, rosnode, tutorial"
} |
Plotting Ground Track of Elliptical Orbit with Inclination Angle | Question: I am trying to project the ground path of an Elliptical orbit with an arbitrary inclination angle of $\Delta i$ and a period of $T$. I have all the parameters of the orbit (semi-major axis length, semi-minor axis length, eccentricity, apogee and perigee velocity, energy, etc.).
The approach I was taking was plotting a position of the satellite in the elliptical orbit on an X-Y plane. I iterate through the time domain $t \in [0,\textrm{T}]$ where T is the period of the orbit with a specified time step of $\Delta t$. For each time, I calculate the mean anomaly and the magnitude of the distance between the satellite and the center body, $r$. When projecting the position vector of the satellite with respect to the center body on the semi-major axis, the length of that projected vector will be defined as $x$. After finding $x$, I found the vertical projection of that same position vector, thus forming the triangle below.
First of all, I need to find the mean anomaly as a function of time. My code is currently not working, but in theory what I am doing (following Curtis's Orbital Mechanics for Engineering Students, 1st Edition) is solving numerically for a quantity labeled $E$ using the equation for mean anomaly (the eqn. between 3.12 and 3.13 for those who have the book), given as:
$M_e = E - e\sin(E)$. From there, I solve numerically for true anomaly using eqn. 3.7a:
$M_e = 2\arctan{(\sqrt{\frac{1-e}{1+e}}\tan{\theta/2})} - \frac{e\sqrt{1-e^2}\sin\theta}{1+e\cos\theta}$.
After finding those distances, I found the spherical coordinates of the satellite with respect to the center of the center body. $r$ would be $r_p + x$, $\phi$ would simply be the complement of $\Delta i$. $\theta$ would be the true anomaly.
After that, I plot (x,y) using spherical-cartesian transformation, but I simply do not believe that it is that straightforward. For one, what I am essentially trying to do is projecting a 3D curve onto a sphere and then projecting that onto a 2D plane. If anyone has sources/reading material on how to actually do this math, that would be fantastic. I was referencing this chapter from a book on satellites: http://fgg-web.fgg.uni-lj.si/~/mkuhar/Pouk/SG/Seminar/Vrste_tirnic_um_Zemljinih_sat/Orbit_and_Ground_Track_of_a_Satellite-Capderou2005.pdf. Unfortunately I cannot reference other equations that the book references until I get it.
I have not gotten the code to run yet, so I do not know what my plots look like, but from preliminary data it does not seem to be going the right way (e.g. the true anomaly ($\theta$) should go from 0 to $2\pi$ as time increases, but that is not what is being output).
EDIT: Typos and to clarify, I do not want to use external packages such as STK. I want to be able to physically come up with these plots.
Answer: Let's first attach a right-handed coordinate system to the orbit, defined by the unit basis vectors $(\boldsymbol{e}_x, \boldsymbol{e}_y, \boldsymbol{e}_z)$, with $\boldsymbol{e}_x$ and $\boldsymbol{e}_y$ in the orbital plane and $\boldsymbol{e}_x$ in the direction of periapsis. First, compute the eccentric anomaly $E$ from the mean anomaly $M$:
$$
M = \frac{2\pi t}{T} = E - e\sin E,
$$
assuming that the satellite is at periapsis at $t=0$. The position vector of the satellite is then given by
$$
\boldsymbol{r} = a(\cos E -e)\,\boldsymbol{e}_x + b\sin E\,\boldsymbol{e}_y.
$$
There's no need to compute the true anomaly.
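As an illustration (a sketch, not part of the original answer; the function names are my own), Kepler's equation can be solved with a few Newton iterations, after which the in-plane position follows directly, assuming periapsis passage at $t=0$:

```python
import math

def eccentric_anomaly(M, e, tol=1e-12):
    """Solve Kepler's equation M = E - e*sin(E) by Newton's method."""
    E = M if e < 0.8 else math.pi  # standard starting guess
    for _ in range(50):
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

def orbital_plane_position(t, T, a, e):
    """Position (x, y) in the orbital frame, with periapsis along +x, at time t."""
    M = 2.0 * math.pi * t / T
    E = eccentric_anomaly(M, e)
    b = a * math.sqrt(1.0 - e * e)        # semi-minor axis
    return a * (math.cos(E) - e), b * math.sin(E)

# Example: a moderately eccentric orbit, one quarter period after periapsis.
x, y = orbital_plane_position(t=0.25, T=1.0, a=1.0, e=0.3)
```

At $t=0$ this returns $(a(1-e),\,0)$, i.e. the periapsis distance, which is an easy check on the sign conventions.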
Next we need to find the general transformation from the orbital frame to a given reference frame, defined by $(\boldsymbol{e}_x', \boldsymbol{e}_y', \boldsymbol{e}_z')$. This can be done using three Euler angles $\Omega$, $i$, and $\omega$:
$\Omega$ is the longitude of the ascending node in the ($\boldsymbol{e}_x'$, $\boldsymbol{e}_y'$) plane, i.e. the angle measured from $\boldsymbol{e}_x'$ to the ascending node;
$i$ is the inclination of the orbit with the ($\boldsymbol{e}_x'$, $\boldsymbol{e}_y'$) plane;
$\omega$ is the argument of periapsis, i.e. the angle measured from the ascending node to the periapsis.
See also the wiki article on orbital elements. The transformation is then given by three rotations:
$$
\begin{pmatrix}
\boldsymbol{e}_x\\ \boldsymbol{e}_y\\ \boldsymbol{e}_z
\end{pmatrix}
\!=\!
\begin{pmatrix}
\cos\omega & \!\sin\omega & \!0 \\
-\sin\omega & \!\cos\omega & \!0 \\
0 & \!0 & \!1
\end{pmatrix}\!
\begin{pmatrix}
1 & \!0 & \!0 \\
0 & \!\cos i & \!\sin i \\
0 & \!-\sin i & \!\cos i
\end{pmatrix}\!
\begin{pmatrix}
\cos\Omega & \!\sin\Omega & \!0 \\
-\sin\Omega & \!\cos\Omega & \!0 \\
0 & \!0 & \!1
\end{pmatrix}\!
\begin{pmatrix}
\boldsymbol{e}_x'\\ \boldsymbol{e}_y'\\ \boldsymbol{e}_z'
\end{pmatrix}
$$
or explicitly,
$$
\boldsymbol{e}_x = (\cos\Omega\cos\omega - \sin\Omega\sin\omega\cos i)\,\boldsymbol{e}_x' + (\sin\Omega\cos\omega + \cos\Omega\sin\omega\cos i)\,\boldsymbol{e}_y' + (\sin\omega\sin i)\,\boldsymbol{e}_z'\\[1em]
\boldsymbol{e}_y = (-\cos\Omega\sin\omega - \sin\Omega\cos\omega\cos i)\,\boldsymbol{e}_x' + (-\sin\Omega\sin\omega + \cos\Omega\cos\omega\cos i)\,\boldsymbol{e}_y' + (\cos\omega\sin i)\,\boldsymbol{e}_z'\\[1em]
\boldsymbol{e}_z = (\sin\Omega\sin i)\,\boldsymbol{e}_x' + (-\cos\Omega\sin i)\,\boldsymbol{e}_y' + \cos i\,\boldsymbol{e}_z'.
$$
This enables you to express $\boldsymbol{r}$ in the reference frame coordinates. For the projected path, simply set the $z'$ component to zero. | {
"domain": "physics.stackexchange",
"id": 48098,
"tags": "newtonian-mechanics, classical-mechanics, orbital-motion"
} |
Why Is Bi Quadratic Interpolation for Image Resampling / Interpolation Rarely Done? | Question: Related question: What are the practically relevant differences between various image resampling methods?
Bilinear and bicubic interpolation for image resampling seem to be fairly common, but biquadratic is in my experience rarely heard of. To be sure, it's available in some programs and libraries, but generally it doesn't seem to be popular. This is further evidenced by there being Wikipedia articles for bilinear and bicubic interpolation, but none for biquadratic. Why is this the case?
Answer: Bilinear and biquadratic interpolation give you a $C^0$ interpolating function. That is, a function that is continuous but has a discontinuous first derivative. On the other hand, bicubic interpolation gives you a $C^1$ interpolating function. That is, a function that is continuous and has a continuous first derivative (the second derivative is discontinuous though). So you are not gaining anything in terms of "smoothness" (continuity of higher derivatives) by using biquadratic over bilinear, just more complexity. To get a smoother interpolation, you have to step up from bilinear to bicubic. | {
"domain": "dsp.stackexchange",
"id": 7501,
"tags": "image-processing, interpolation, resampling, biquad"
} |
Are the sources in QFT just particles? | Question: I'm reading A. Zee's Quantum Field Theory in a Nutshell, where he introduces QFT using path integral formulation.
One thing that I'm not sure I got correctly is this:
Zee adds a source term to the Klein Gordon Lagrangian:
$$\mathcal{L}=\phi(\partial^2 + m^2)\phi + J(x)\phi$$
and $J(x) \equiv J_1(x) + J_2(x) $.
What I understood is that these 2 sources are interpreted as particles. Is that correct?
The next thing is that he proceeds to do some calculations which I couldn't follow. Nevertheless, what I think was derived is that the field $\phi$ propagated from source $J_1(x)$ to source $J_2(x)$. And we can interpret this propagation as a force because we can see there is energy. Is my understanding correct?
Answer: Not really particles. The source terms $J$ are a computational tool. First, they allow you to take the functional Fourier transform of the path phase factor $\exp(i S[\phi] / \hbar)$. Why is this useful? Because the quantities we're interested in when doing perturbation theory are the moments of this phase factor. Moments in real space are derivatives at the origin in the Fourier dual space, so the generating functional, $Z[J]$, is usually easier to compute the $N$-point functions from. Those are the quantities we need to construct the $S$-matrix, and other useful quantities.
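Concretely, and up to sign/normalization conventions (this display is my own paraphrase, not Zee's exact notation), the generating functional and the time-ordered $N$-point functions it produces are

```latex
Z[J] = \int \mathcal{D}\phi\; e^{\,i\left(S[\phi] + \int d^4x\, J(x)\,\phi(x)\right)},
\qquad
\langle 0|\,T\,\phi(x_1)\cdots\phi(x_N)\,|0\rangle
= \frac{1}{Z[0]}
\left(-i\frac{\delta}{\delta J(x_1)}\right)\cdots\left(-i\frac{\delta}{\delta J(x_N)}\right)
Z[J]\,\Big|_{J=0}.
```

Differentiating twice and setting $J=0$ pulls down the propagator between the two source points, which is exactly the field "propagating from $J_1$ to $J_2$" that the question describes.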
Second, you can think of them as an imaginary non-conserved current that the field interacts with. We then use that current to create and annihilate particles in the model and monitor their interactions. It is closest to this second sense that Zee is using $J$ - he looks at how the field produces a potential energy interaction between these imaginary external currents. | {
"domain": "physics.stackexchange",
"id": 90009,
"tags": "quantum-field-theory, particle-physics, lagrangian-formalism, path-integral"
} |
Explanation as to why the sum of two sinusoidal waves, differing by only phase, can be represented by $2y_{m}\cos(\frac{1}{2} \Phi)$ | Question: How does the addition of two waves, differing only by phase, collapse to $2y_{m}\cos(\frac{1}{2} \Phi)$?
Wouldn't the $\omega$ component of the wave still come into play given that it determines the period of the wave? i.e. $\omega=2\pi f$ and hence, $T = \frac{2\pi}{\omega}$.
Answer: I don't know where your formula comes from and nor what the symbols exactly mean but if $\Phi$ is the phase difference, then it is obviously wrong in the case where $\Phi = 0$ where one expects simply a propagating wave with twice the amplitude.
To get the general result, let us consider two waves $y_{1,2} = y_m \cos(\omega t - kx + \varphi_{1,2})$. Now let us introduce $\Phi = \varphi_1-\varphi_2$ and $\phi = \varphi_1+\varphi_2$. We get that $\varphi_1 = (\Phi+\phi)/2$ and $\varphi_2 = (\phi-\Phi)/2$. This enables us to rewrite $y_{1,2} = y_m \cos(\omega t - kx + \phi/2 \pm \Phi/2)$.
Next, let us consider the following trigonometric identities $\cos(a+b) = \cos a \cos b - \sin a \sin b$ and $\cos(a-b) = \cos a \cos b + \sin a \sin b$. We get from them that $\cos(a+b)+\cos(a-b)=2\cos a \cos b$. Using $a = \omega t-kx + \phi/2$ and $b = \Phi/2$, we get for the two waves $y_1+y_2 = 2y_m \cos(\omega t-kx + \phi/2)\cos(\Phi/2)$.
This equation has the nice property of giving back the correct wave with twice the amplitude in the case of no phase difference. | {
"domain": "physics.stackexchange",
"id": 30691,
"tags": "homework-and-exercises, waves, superposition"
} |
Why does 1,2-dichloro-4-nitrobenzene undergo SNAr para and not meta to the nitro group? | Question: In the reaction of 1,2-dichloro-4-nitrobenzene with sodium ethoxide, why does ethoxide end up substituting the chlorine para to the nitro group rather than the chlorine meta to the nitro group?
I think that C–2 should be the most electron-poor carbon (and hence prone to nucleophilic attack) since it experiences a combined electron withdrawal from the nitro and chloro groups flanking it. Shouldn't the nucleophilic ethoxide anion attack there instead?
Answer: You can obtain an answer by looking at the electronic structure. $\ce{NO2}$ is a meta director for electrophilic aromatic substitution (EAS). The question asks about nucleophilic aromatic substitution (NAS). Aromatic substitutions involve a free pair of electrons resonating around a structure. Electrophiles may add at any resonance site where the pair of electrons are. Nucleophiles may inversely add to any site where electrons are not present shown in the picture below. Therefore if a functional group is meta directing for EAS it will be ortho/para directing for NAS and vice versa. | {
"domain": "chemistry.stackexchange",
"id": 5689,
"tags": "organic-chemistry, aromatic-compounds, regioselectivity"
} |
Chernoff-Hoeffding bounds for the number of nonzeros in a submatrix | Question: Consider a $n \times n$ matrix $A$ with $k$ nonzero entries. Assume every row and every column of $A$ has at most $\sqrt{k}$ nonzeros. Permute uniformly at random the rows and the columns of $A$. Divide $A$ in $k$ submatrices of size $n/\sqrt{k} \times n/\sqrt{k}$ (i.e. $\sqrt{k}$ meta-rows and meta-columns). Enumerate the $k$ nonzeros and define the following indicator random variable:
\begin{equation}
X_{\ell,z} =
\begin{cases}
1 & \text{if the $z$-th nonzero entry is in $A^\ell$} \\
0 & \text{otherwise}
\end{cases}
\end{equation}
The expected number of nonzero entries in a generic submatrix $A^\ell$ is one. Is it possible to prove Chernoff-Hoeffding bounds on the sum $X_\ell = \sum_{z=1}^k X_{\ell,z}$?
My first guess was to prove negative association, following Dubhashi and Panconesi analysis. Unfortunately, $X_{\ell,z}$ and $X_{\ell,z^\prime}$ are not negatively associated (following the book's notation, if $z$ and $z^\prime$ are in the same row/column then $\mathbf{E}[f(X_{\ell,z})g(X_{\ell,z^\prime})] > \mathbf{E}[f(X_{\ell,z})] \mathbf{E}[g(X_{\ell,z^\prime})]$).
Answer: Okay, here is a full answer.
We will use the fact that any bipartite graph of maximum degree $d$ can be broken into (at most) $d$ matchings.
In our case, this means that we can split $A$ into (at most) $\sqrt{k}$ disjoint sets of elements $S_i\subseteq\{(i,j)\in[n]\times[n] \mid A_{i,j}=1\}$ of size at most $n$ such that every element in each set has a unique row and column.
(This uses that a $1$ in $A$ can be seen as an edge in the $n\times n$ bipartite graph that is the matrix.)
Now let $X$ be the total number of $1$s in your $n/\sqrt k\times n/\sqrt k$ matrix, $A^\ell$, and let $X_i$ be the number of elements coming from $S_i$.
Then $X\ge t\implies \exists_i\, X_i\ge t/\sqrt k$ and so
$$\Pr[X\ge t]\le \sqrt k \Pr[X_1 \ge t/\sqrt k]$$
by the union bound.
Now, because elements in $S_1$ don't share any rows/columns, it is easy to analyze using your original approach.
In particular, reorder the rows and columns using the matching, so that the first element in $S_1$ is $(1,1)$ the next is $(2,2)$ and so on.
Let $r_i$ be the random variable that is 1 if row $i$ is chosen and $c_i$ likewise. Then we want to upper bound $\Pr[\sum_i r_ic_i \ge t]$.
Notice $E[r_ic_ir_jc_j] = E[r_ir_j]E[c_ic_j] \le E[r_i]^2E[c_i]^2$ by Maclaurin’s Inequality, so
$$E\exp\left(t\sum_i r_ic_i\right) = \sum_k t^k/k!\,E\left(\sum_i r_ic_i\right)^k \le \sum_k t^k/k!\,E\left(\sum_i b_i\right)^k = E\exp\left(t\sum_i b_i\right)$$
where $b_i$ is an ordinary independent Bernoulli variable with $p=(n/\sqrt{k}/n)^2=1/k$.
Finally, we then get
$$\Pr[X\ge t]\le \sqrt k \Pr[X_1 \ge t/\sqrt k] \le \sqrt k \exp\left(-2\left(\frac{t-p n \sqrt{k}}{n\sqrt k}\right)^2 n\right)$$
or more simply
$$\Pr[X\ge \lambda\sqrt{nk}+n/\sqrt k]\le \sqrt k \exp(-2\lambda^2)$$
or if we use the Hoeffding bound for small values of $p$ (large $k$):
$$\Pr[X\ge \lambda\sqrt{nk}+n/\sqrt k]\le \sqrt k \exp\left(\frac{-\lambda^2}{2/k+\lambda/\sqrt n}\right)$$
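As an empirical sanity check of the setup (an illustration, not a proof of the bound), one can permute a small structured example and look at the tail of $X$ directly. This sketch uses $n=16$ with $k=64$ nonzeros laid out along four shifted diagonals, so every row and column holds $4 \le \sqrt{k} = 8$ nonzeros and each $2\times 2$ submatrix holds one nonzero in expectation:

```python
import math
import random

random.seed(0)

# Structured example: n = 16, nonzeros on 4 shifted diagonals -> k = 64,
# with 4 <= sqrt(k) = 8 nonzeros in every row and every column.
n, shifts = 16, 4
nonzeros = [(i, (i + s) % n) for i in range(n) for s in range(shifts)]
k = len(nonzeros)
block = n // math.isqrt(k)  # submatrix side length n / sqrt(k) = 2

counts = []
for _ in range(10000):
    rperm = random.sample(range(n), n)  # uniformly random row relabelling
    cperm = random.sample(range(n), n)  # uniformly random column relabelling
    # count nonzeros that land in the top-left (block x block) submatrix
    counts.append(sum(1 for i, j in nonzeros
                      if rperm[i] < block and cperm[j] < block))

mean = sum(counts) / len(counts)
tail = sum(1 for c in counts if c >= 4) / len(counts)
print(f"mean = {mean:.3f} (expected 1), P[X >= 4] = {tail:.4f}")
```

The empirical mean sits at the expected value of one and the tail decays quickly, consistent with the concentration derived in this answer.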
Note that if we had simply assumed all elements were independent, we would have gotten the bound $\Pr[X\ge \lambda\sqrt{n\sqrt{k}}+n/\sqrt k]\le \exp(-2\lambda^2)$. Equal to ours except for a factor $k^{1/4}$ in the exponent. | {
"domain": "cs.stackexchange",
"id": 12010,
"tags": "permutations, sparse-matrices, chernoff-bounds"
} |
How do planets retain momentum? | Question: I'm watching this YouTube video, and in it, the lecturer explains a model of planetary orbit. At 1:55 he shows how a marble will naturally orbit the center object, and he briefly claims that planets will not lose energy like the marble does. But I can't see how that is the case. Surely there must be meteors, asteroids, comets, and general space debris that will, over the course of millennia, diminish this energy/momentum.
How then, can planets retain their energy, when there are almost only forces that would slow it down? Slowly increasing the energy wouldn't help either, as it would offset the balance. How can there be stability, especially over the course of millions of years?
Answer: Such forces do exist.
They are utterly negligible in most circumstances. Imagine a planet "running into" matter whilst travelling in its orbit, and say that on average that matter is at rest with respect to the star the planet is orbiting. Then in order to make a very significant change to the linear momentum of the planet, it must accrete a significant fraction of its own mass.
Such a thing might be important in the very early history of a planetary system when the system is full of gas and debris (e.g. the formation of the Moon), but later on (and bear in mind that the IAU definition of a planet includes the specification that it has "cleared the neighbourhood around its orbit") this just isn't an issue.
This popular science article suggests that 37,000-78,000 tonnes of material hits the Earth every year. That sounds like a lot, but in comparison to the mass of the Earth ($6\times 10^{21}$ tonnes) it is very small. If the material was at rest with respect to the Earth's orbit, then an order of magnitude estimate for how long such impacts would take to significantly affect the Earth's orbit would be $\sim 10^{16}$ years. | {
"domain": "astronomy.stackexchange",
"id": 2286,
"tags": "orbit"
} |
What are the differences and advantages of TensorFlow and Octave for machine learning? | Question: I have been exploring the different libraries and languages you can use in order to implement machine learning. During this, I have stumbled upon the library TensorFlow and Octave (a high-level programming language), as both are intended for numerical computations.
What are the differences and advantages of using either?
Answer: Octave is a great language for prototyping and experimenting with ML algorithms, as it has built-in support for numerical linear algebra such as matrix and vector calculations. Octave is optimized for rapid calculations, which is very useful in machine learning. It is also quite easy to do matrix multiplications in Octave, as matrices are first-class objects in Octave.
TensorFlow is indeed a versatile platform for machine learning, with an ever-expanding list of packages and frameworks being built on top of it.
Octave is a good tool for learning the essentials and internals of mathematics of machine learning and Tensorflow is a good platform for building industry solutions for machine learning projects. Hence both are good for their own purposes. | {
"domain": "datascience.stackexchange",
"id": 7528,
"tags": "machine-learning, tensorflow, octave"
} |
rqt won't start | Question:
I've tried multiple ways of starting up rqt, and each time it starts to load, seems to get stuck, and I'm left with a blank box, unable to proceed any further.
Originally posted by The_Developer_02 on ROS Answers with karma: 13 on 2018-04-10
Post score: 0
Original comments
Comment by gvdhoorn on 2018-04-10:
Can you please add a screenshot of what you see? RQT without any plugins is a "blank window", so that might be perfectly ok.
(note: this is about the only time adding a screenshot makes sense, hence my question)
Comment by The_Developer_02 on 2018-04-10:
does that help?
Comment by liuyoudehama on 2018-12-10:
same problem...
Answer:
I'm not sure, but if this is a recent Ubuntu version, then it could be that this is just a basic RQT window with no plugins loaded. If you could verify that the menu appears when you hover over the title bar, then you could try loading a plugin.
If this is the case, it's an Ubuntu UI thing.
Originally posted by gvdhoorn with karma: 86574 on 2018-04-10
This answer was ACCEPTED on the original site
Post score: 4
Original comments
Comment by The_Developer_02 on 2018-04-10:
Yes, thank you, sorry I'm reasonably new to Ubuntu and I forget the menu is along the top. Thank you
Comment by gvdhoorn on 2018-04-10:
Ok, good to hear that nothing is 'broken' then. | {
"domain": "robotics.stackexchange",
"id": 30599,
"tags": "ros-kinetic"
} |
Processing train and test data | Question: I have an X numpy array as my features and a y numpy array as my target. I split both of them into train and test data. From many Q&As I have read, they only say to preprocess both train and test separately. I assume I only do it to my feature (X) train and test data and not the target (y). Do we also preprocess the target?
Answer: Not necessarily, but it depends on what your target (y) is and which algorithm/methodology you are trying to use.
It also depends on your data quality.
Few instances that come to my mind:
If your target value is categorical and multilabel in nature, it needs to be one-hot encoded; also think about adding an extra category to account for unknown classes
If your target is a continuous variable, some transformations could work better depending on data distribution and quality; log transforms are common (if no negatives are present),
Normalization/MinMax scaling etc. are employed when different features and targets are on very different scales.
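As a small illustration of the log-transform case (plain Python with synthetic, noise-free data; the linked article below covers the scikit-learn equivalent), fit on log(y) and invert the transform when predicting:

```python
import math

# Synthetic data where the target grows exponentially in x: y = exp(1 + 2x).
xs = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
ys = [math.exp(1.0 + 2.0 * x) for x in xs]

# Preprocess the TARGET: fit ordinary least squares on (x, log y).
zs = [math.log(y) for y in ys]
n = len(xs)
mx, mz = sum(xs) / n, sum(zs) / n
slope = (sum((x - mx) * (z - mz) for x, z in zip(xs, zs))
         / sum((x - mx) ** 2 for x in xs))
intercept = mz - slope * mx

def predict(x):
    # Invert the transform: predictions must be mapped back with exp().
    return math.exp(intercept + slope * x)

print(slope, intercept)  # approximately recovers 2 and 1 on this noise-free data
print(predict(4.0))      # compare with math.exp(1 + 2*4)
```

The key point is that whatever transform is applied to y during training must be inverted at prediction time.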
https://machinelearningmastery.com/how-to-transform-target-variables-for-regression-with-scikit-learn/ | {
"domain": "datascience.stackexchange",
"id": 7141,
"tags": "machine-learning"
} |
Carbon Dioxide Specific Heat | Question: I am trying to find the specific heat of carbon dioxide at a temperature of 120 kelvin, in J/(kg·K).
First I would like to know what is the proper formula to use and values for this situation.
Second, I wish to know what the relation between temperature and specific heat would be, e.g. 175 K = 709 J/(kg·K).
I know that Specific Heat is which the temperature point of any sort of material which changes from a different state of matter
Answer: See the figure below, including an equation for extrapolating to lower temperatures. | {
"domain": "physics.stackexchange",
"id": 49026,
"tags": "thermodynamics"
} |
How should I clean a part before installing it in a vacuum system? | Question: What are proven procedures for preparing a part that comes fresh out of the workshop for ultra-high vacuum (UHV) and extreme-high vacuum (XHV)?
Answer: Our lab has an ultra-high vacuum STM system ($10^{-11}$ torr), and all parts that go in the vacuum system have to be extremely clean. Here is what we do:
First, I want to point out that the material you use for UHV is very important too. The commonly accepted materials are 316 stainless steel and oxygen-free pure copper. For other specialized materials, you should check before using them in the vacuum system.
For any piece that just came out of the workshop, clean it with a towel to remove the visible grease/dirt. Don't worry about the stuff inside the threads or hard-to-reach places yet.
If you have a sonicator (sonic bath), then you can sonicate your piece following this list of solvents, for 20 mins each:
detergent (we use sparkleen)
acetone (skip when cleaning copper)
ethanol
methanol
you will need clean gloves and work bench (layered with aluminum foil).
The point of using a sonicator is to remove all the gunk in the hard-to-reach places, like the inside of threads. You can't use just acetone: while it is great at removing grease, it sticks to your surface and would thus contaminate the UHV environment. So you need the lower molecular weight solvents to get the acetone off.
If you DON'T have a sonic bath (I recommend you get one if you are doing serious work with UHV), you may be able to get away with electrocleaning (we don't do electrocleaning for UHV, and I don't know if it will work 100%). The process for electrocleaning is available online. Use the solvent list first, to get rid of the surface contaminants, and then use cathodic electrocleaning followed by anodic electrocleaning. The electrolytes can be found commercially. (We have a cleaning agent called Tivaclean from Krohn.)
"domain": "physics.stackexchange",
"id": 100569,
"tags": "experimental-physics, vacuum, experimental-technique"
} |
Euler-Lagrange confusion | Question: Consider the action $S = \int dt \sqrt{G_{ab}(q)\dot{q}^a\dot{q}^b}.$
Now for computing the Euler-Lagrange equations, we need the time derivative of $\frac{\partial L}{\partial \dot{q}^c} = \frac{1}{\sqrt{G_{ab}(q)\dot{q}^a\dot{q}^b}}G_{dc}(q)\dot{q}^d$.
Do we also need to take the time derivative of the denominator? If we do, then the equations of motion become ugly, and my intuition says that they should be pretty normal.
The only explanation I can think of, is that the root is a scalar since it is fully contracted, so there is no need to consider the time derivative.
Answer: In general you do have to take the derivative of the inverse square-root factor. However if you choose $$dt^2 = G_{ab}dq^adq^b$$ then the factor becomes $1$. | {
"domain": "physics.stackexchange",
"id": 99399,
"tags": "classical-mechanics, lagrangian-formalism, variational-principle, geodesics"
} |
Difficulty in E-Z nomenclature and counting number of geometrical isomers | Question:
I have found the above four molecules as geometrical isomers just by drawing them and checking whether they are superimposable. I'm not sure that these are the only ones.
I have tried using E and Z but couldn't go ahead with it because configuration around each double bond depends on configurations of all other double bonds and it gets circular.
How do I assign E and Z configurations to each of the four double bonds in this case to distinguish between all possible geometrical isomers?
Answer: Rishi Shekher: You are correct that there are only four stereoisomers (A, B, C, D) of this tetraethylidenecyclobutane. Loong has given you a lead as to how to apply CIP rules to the assignment of the configurations of the double bonds. This method can appear confusing and it is certainly not intuitive. The configurations were generated with ChemDraw 21. I will show you how the CIP algorithm works.
Stereoisomer A: The configuration of each double bond must be determined independently. They are labeled in red in each of the digraphs 1-4. The digraph is constructed by following a path around the ring, CW or CCW, from the non-duplicate carbon (black dot) to the duplicate carbons (red dot), which are designated as being attached to three atoms of atomic number zero.
Focusing on digraph 1 (vide infra) and $\ce{C1}$ (black dot), the double bond immediately to its left ($\ce{C4}$) is assigned the temporary Z-configuration because the path "around the ring" to the right is longer, i.e., more carbons than the path leading to the left. This method is used to temporarily assign the five positions. The left hand chain has three Z's while the right hand chain has all E's. One proceeds out each chain from the black dot making a one-to-one comparison until a Z>E is achieved (CIP Rule 3). For $\ce{C1}$ this is accomplished at $\ce{C4}$ and $\ce{C2}$ where Z>E, respectively. Determinant double bonds are shown in blue.
Stereoisomer B: All positions in this isomer are equivalent. The double bonds are all of the E-configuration with Z>E.
Stereoisomers C and D: Stereoisomer C has a plane of symmetry. $\ce{C1}$ is equivalent to $\ce{C4}$ and $\ce{C2}$ is equivalent to $\ce{C3}$. Stereoisomer D has four equivalent double bonds owing to two planes of symmetry. Given that both stereoisomers C and D have the Z-configuration, one would be hard-pressed to know which one to draw if asked to do so. This situation is unfortunate.
"domain": "chemistry.stackexchange",
"id": 17104,
"tags": "nomenclature, stereochemistry, isomers, cis-trans-isomerism"
} |
I'm trying to develop an RViz plugin and I get an error like the one presented in the question below (link). Can someone help me? | Question:
Link to question:
http://answers.ros.org/question/276894/launch-a-new-rviz-plugin-failed-errorfailed-to-load-library-xxxsomake-sure-that-you-are-calling-the-pluginlib_export_class-macro-in-the-library-code/
Originally posted by Hillal on ROS Answers with karma: 1 on 2019-06-21
Post score: 0
Answer:
Based on the sources from the attached link, you should look for hints in the error message.
Failed to load library
/home/shantengfei/catkin_ws/devel/lib//libjoint_value_monitor.so.
Make sure that you are calling the
PLUGINLIB_EXPORT_CLASS macro in the
library code, and that names are
consistent between this macro and your
XML. Error string: Could not load
library (Poco exception =
/home/shantengfei/catkin_ws/devel/lib//libjoint_value_monitor.so:
undefined symbol:
_ZN19joint_value_monitor11joint_value4loadERKN4rviz6ConfigE)
First of all, we know that we have problems with the libjoint_value_monitor.so library, which should be located at the /home/shantengfei/catkin_ws/devel/lib//libjoint_value_monitor.so path --- but that looks good
The library code contains the PLUGINLIB_EXPORT_CLASS macro and the names are consistent between this macro and the XML --- OK
Let's check the last sentence with undefined symbol: _ZN19joint_value_monitor11joint_value4loadERKN4rviz6ConfigE, so we should check whether we have implemented this function. If we can't deduce the function name from _ZN19joint_value_monitor11joint_value4loadERKN4rviz6ConfigE, we can call:
$ c++filt _ZN19joint_value_monitor11joint_value4loadERKN4rviz6ConfigE
where output will be
joint_value_monitor::joint_value::load(rviz::Config const&)
So the problem is the lack of a joint_value_monitor::joint_value::load(rviz::Config const&) definition.
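If c++filt isn't at hand, the simple nested-name part of an Itanium-ABI mangled symbol can be unpacked by hand: after the _ZN prefix, each component is a decimal length followed by that many characters, up to a terminating E. A minimal Python sketch of just that rule (real demanglers cover far more of the grammar):

```python
def mangled_scope(symbol: str) -> list:
    """Extract the qualified-name components from a simple _ZN...E symbol.

    Handles only the plain nested-name form (no templates, no
    substitutions); a real demangler such as c++filt implements the
    full Itanium C++ ABI grammar.
    """
    if not symbol.startswith("_ZN"):
        raise ValueError("not a nested-name mangled symbol")
    i, names = 3, []
    while symbol[i] != "E":
        j = i
        while symbol[j].isdigit():      # read the decimal length prefix
            j += 1
        length = int(symbol[i:j])
        names.append(symbol[j:j + length])
        i = j + length
    return names

print("::".join(mangled_scope(
    "_ZN19joint_value_monitor11joint_value4loadERKN4rviz6ConfigE")))
# -> joint_value_monitor::joint_value::load
```

Running it on the symbol from the error recovers exactly the missing definition identified above.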
Originally posted by abrzozowski with karma: 290 on 2019-06-22
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 33238,
"tags": "rviz, plugin, ros-indigo"
} |
Why does the expansion of gas into a vacuum mean that we have less information about the system? (entropy) | Question: I'm reading through Statistical Physics by F. Mandl and in the chapter about the 2nd law of thermodynamics he states that:
The basic distinction between the initial and final states in such an irreversible process is that in the final state we have a less complete knowledge of the state of the system.
Why is this true? Let's say our gas exists in the left side of a container separated by a partition. We know that all the molecules are definitely on the left side.
Now we remove the partition. The gas rushes into the vacuum and eventually spreads itself throughout the container. The book mentions that macroscopic fluctuations of the gas moving around would not be observable unless we waited for a period of time on the order of the age of the universe. So then if we assume that 50% of the gas is on the left side and 50% is on the right side (assume that the partition is infinitesimally thin and unbreakable/unbendable), then how is it that we have less information about the system in this state?
Answer: Partially answered already in the question and comments. CuriousOne answered it.
Entropy is a measure of the possible microstates of the system, i.e., the different positions and velocities of each of the molecules. When you double the volume each of the molecules doubles the number of possible x, y, and z's. The possible number of states has increased. S has gone up. | {
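The microstate counting can be turned into numbers: if each molecule's accessible volume doubles, the microstate count Omega gains a factor 2^N, so S = k ln(Omega) grows by N k ln 2. A small sketch (the one-mole example is illustrative, not from the answer):

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K (exact SI value)
N_A = 6.02214076e23  # Avogadro constant, 1/mol (exact SI value)

def delta_S_free_expansion(n_molecules, volume_ratio):
    """Entropy change of an ideal gas freely expanding from V to volume_ratio*V.

    Each molecule's accessible positions scale with V, so Omega gains a
    factor volume_ratio**N and S = k ln(Omega) grows by N k ln(volume_ratio).
    """
    return n_molecules * K_B * math.log(volume_ratio)

# One mole doubling its volume: Delta S = N_A k ln 2 = R ln 2, about 5.76 J/K
print(delta_S_free_expansion(N_A, 2.0))
```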
"domain": "physics.stackexchange",
"id": 28910,
"tags": "thermodynamics, statistical-mechanics, entropy, ideal-gas"
} |
Tropical Plant Identification | Question: After searching extensively I have been unable to find a name for this plant. It was found growing in the Eden Project Tropical biome in the UK. The flowers were approximately 7-8 cm across.
Any help greatly appreciated!
Answer: They seem to closely resemble the shape of the "Golden Chalice Vine" (Solandara Maxima). Although they are a more yellow color and their leaves fold outward rather than inward, it is entirely possible that the flowers you found were still in the process of fully blooming.
I also found a it growing in the Eden Project Biome you mentioned, here is the link to the plant page on their website: http://www.edenproject.com/learn/for-everyone/plant-profiles/golden-chalice-vine
And for some more pictures and information: https://toptropicals.com/catalog/uid/SOLANDRA_GRANDIFLORA.htm
Hope this is the plant! | {
"domain": "biology.stackexchange",
"id": 7558,
"tags": "species-identification"
} |
Why doesn't this thought experiment with a localized source and an absorbing detector violate causality in quantum field theory? | Question: Essentially all standard relativistic quantum field theory textbooks give the same argument for why the theory obeys causality. First, one computes the quantity $\langle 0 | \phi(\mathbf{x}) \phi(\mathbf{y}) |0 \rangle$ and finds that it is nonzero for $\mathbf{x} \neq \mathbf{y}$, naively suggesting that particles can propagate instantaneously. Then, one computes the commutator $[\phi(\mathbf{x}), \phi(\mathbf{y})]$ and finds that it is zero for $\mathbf{x} \neq \mathbf{y}$. This implies that simultaneous measurements of the field at distinct points can't affect each other, which is supposed to restore causality. (For simplicity, I'm considering simultaneous points only, though of course the arguments would be unchanged for spacelike separated points.)
There are several ways to explain why the vanishing commutator restores causality. One very common way, using particle language, is to say that the commutator represents "the amplitude of a particle to go from $\mathbf{y}$ to $\mathbf{x}$, minus the amplitude of an (anti)particle to go from $\mathbf{x}$ to $\mathbf{y}$". You need to include both terms, because ultimately you can't tell which way the particles are going for spacelike separation, and they precisely cancel. Another variant, using field language (given here and in some older textbooks), is that the vanishing of this commutator implies that turning on a source at $\mathbf{x}$ doesn't affect the expectation value of $\phi(\mathbf{y})$. And finally, there's the Weinberg way, which is that these details don't matter because in the end we're only after $S$-matrix elements.
However, I've never been satisfied by these arguments, because they don't seem to reflect how measurements occur in real life. In a typical detector in particle physics, there is no component that does anything remotely like "measuring $\phi(\mathbf{x})$". Instead, detectors absorb particles, and this is a different enough process that the arguments above don't seem to apply.
A thought experiment
Let's make a simple model of the production and detection of a particle. A quantum field couples to sources locally,
$$H_{\text{int}}(t) = \int d \mathbf{x}' \, \phi(\mathbf{x}') J(\mathbf{x}', t).$$
Suppose we begin in the vacuum state $|0 \rangle$. At time $t = 0$, let's turn on an extremely weak, delta function localized source $J(\mathbf{x}') = \epsilon_s \delta(\mathbf{x} - \mathbf{x}')$. Right afterward, at time $t = 0^+$, the state is
$$\exp\left(-i \int_{-\infty}^{0^+} dt\, H_{\text{int}}(t)\right) |0 \rangle = |0 \rangle - i \epsilon_s \phi(\mathbf{x}) |0 \rangle + \ldots.$$
Now let's put a purely absorbing, weakly coupled detector localized at $\mathbf{y}$, which concretely could be an atom in the ground state. This detector is described by Hamiltonian
$$H_{\text{det}} = \epsilon_d |e \rangle \langle g| \phi_-(\mathbf{y}) + \epsilon_d |g \rangle \langle e | \phi_+(\mathbf{y})$$
where $\phi_-$ is the part of $\phi$ containing only annihilation operators, and $\phi_+$ contains creation operators. Physically, the two terms represent absorption of a particle to excite the atom, and emission of a particle to de-excite it. Because the atom starts out in the ground state, only the first term of the Hamiltonian matters. The amplitude for the detector to be in the excited state, a tiny time after the source acts, is
$$\mathcal{M} \propto \epsilon_s \epsilon_d \langle 0 | \phi_-(\mathbf{y}) \phi(\mathbf{x}) |0 \rangle \propto \langle 0 | \phi(\mathbf{y}) \phi(\mathbf{x}) |0 \rangle$$
which is nonzero! This appears to be a flagrant violation of causality, since the source can signal to the detector nearly instantaneously.
Objections
Note how this example evades all three of the arguments in the second paragraph:
The amplitude for a particle to go from the detector to the source doesn't contribute, because the detector starts in the ground state; it can't emit anything. There's no reason for the commutator to show up.
The detector isn't measuring $\phi(\mathbf{y})$, so the fact that its expectation value vanishes is irrelevant. (As an even simpler example, in the harmonic oscillator $\langle 1 | x | 1 \rangle$ also vanishes, but that doesn't mean an absorbing detector can't tell the difference between $|0 \rangle$ and $|1 \rangle$.)
The Weinberg argument doesn't apply because we're not considering $S$-matrix elements. Like most experimental apparatuses in the real world, the source and detector are at actual locations in space, not at abstract infinity.
I'm not sure why my argument fails. Some easy objections don't work:
Maybe I'm neglecting higher-order terms. But that probably won't fix the problem, because the $\epsilon$'s can be taken arbitrarily small.
Maybe it's impossible to create a perfectly localized source or detector. Probably, but there's no problem localizing particles in QFT on the scale of $1/m$, and you can make the source and detector out of very heavy particles.
Maybe the detector can't be made good enough to see the problem, by the energy-time uncertainty principle. But I can't see why. It doesn't matter if it's inefficient; if the detector has any chance to click at all, at $t < |\mathbf{x} - \mathbf{y}|$, then causality is violated.
Maybe the detector really can emit particles even when it's in the ground state. But this contradicts everything I know about atomic physics.
Maybe the free-field mode expansion doesn't work because the presence of the detector changes the mode structure. But I'm not sure how to make this more concrete, plus it shouldn't matter if we take $\epsilon_d$ very small.
What's going on? Why doesn't this thought experiment violate causality?
Answer: General calculation in Schrodinger picture
We can do a straightforward calculation of your setup in the Schrodinger picture. Let $x,y$ label points in space. Before $t=0$, the state is in the vacuum $|0\rangle$. Just after $t=0$, the state is $e^{-i \epsilon_s \phi(x)} |0\rangle$, for some spatial point $x$ where you place the source at $t=0$. (Really you might want to consider turning on a source smeared over a small neighborhood of $x$, to avoid singularities in the following discussion, but I'll ignore that.) At later time $t>0$, we have state
$$|\psi(t)\rangle = e^{-iHt}e^{-i \epsilon_s \phi(x)} |0\rangle
=e^{-iHt}e^{-i \epsilon_s \phi(x)} e^{iHt} |0\rangle
= e^{-i \epsilon_s \phi(x,t)}|0\rangle$$
Then at time $t$, we make a measurement at spatial point $y$. For a moment I'll ignore your desired detector model and speak generally. Say we make a measurement in a spatial region $Y$. The observables measurable by an observer local to $Y$ are precisely the observables generated by (sums and products of) operators $\phi(y), \pi(y)$ for any $y \in Y$. Choose an observable $A_Y$ of this form, e.g. $A_Y = \phi(y)$ for some $y \in Y$, or $A_Y = \int_{y \in Y}\phi(y)\, dy$. Assume the points $(Y,t)$ are spacelike from the point $(x,t=0)$. Then the expectation of $A_Y$ in $|\psi(t)\rangle$ is
$$\langle \psi(t) | A_Y |\psi(t)\rangle = \langle 0 |e^{i \epsilon_s \phi(x,t) } A_Y e^{-i \epsilon_s \phi(x,t)} | 0 \rangle = \langle 0 | A_Y |0 \rangle.$$
So the expectation of $A_Y$ is the same as if you hadn't turned on the source at $x$ at $t=0$. The second equality uses $[\phi(x,t),A_Y]=0$, by assumed spacelike separation.
Being careful with first-order expansion in $\epsilon$
Here's a possible point of confusion. Just after $t=0$, and to first order in $\epsilon_s$, we have
$$ |\psi \rangle = |0\rangle - i \epsilon_s \phi(x) |0\rangle + O(\epsilon_s^2).$$
The second term $\phi(x) |0\rangle$ seems to dominate over the higher-order terms in $\epsilon_s$, and this may seem to suggest causality violation: an observable $A_Y$ at spacelike $Y$ still has nonzero expectation value in this state, i.e. $\langle 0 | \phi(x) A_Y \phi(x) |0\rangle \neq 0$. (Incidentally if you choose $A_Y=\phi(y)$ the expectation value is zero by symmetry, but you could choose e.g. $A_Y=\pi(y)$.). However, this observation is entirely unproblematic; the expectation value of $A_Y$ with respect to just the first-order term is not directly related to any measurement. If we include both the zero'th order and first-order terms in $|\psi\rangle$, then we find $\langle \psi | A_Y |\psi\rangle = \langle 0 | A_Y |0 \rangle + O(\epsilon^2)$, because the first-order contributions cancel.
Locality of your detector model
What about your detector model? There were a few problems. First, I wouldn't actually call your detector localized to the point $y$. The observables measurable at $y$ are just algebraic combinations of $\phi(y), \pi(y)$. Or again more generally, you could take a small spatial region $Y$ and consider observables $A_Y$ local to $Y$, given by algebraic combinations of $\phi(y), \pi(y)$ for $y \in Y$. If you want to imagine an external system $S$ like your atom coupled locally to $Y$, the coupling Hamiltonian for the detector should be like
$$H_{det} = \sum_i O^i_S A^i_Y$$
where $O^i_S$ are some operators on the coupled system $S$, and $A^i_Y$ are operators in the QFT local to $Y$.
Your desired Hamiltonian $H_{det}$ may look like it takes this form, but your operators $\phi_{\pm}(x)$, by which I assume you mean something like
$$\phi_{-}(x) \equiv \int \frac{d^3 p}{(2\pi)^3} \frac{1}{\sqrt{E_p}} a_p e^{ipx}$$
$$\phi_{+}(x) \equiv \int \frac{d^3 p}{(2\pi)^3} \frac{1}{\sqrt{E_p}} a_p^\dagger e^{-ipx},$$
are not strictly local to $y$. One way to see this is that they have nonzero commutator with any $\phi(x)$. (Incidentally there's some discussion about this in Section 6 and Eq. 81 here.). The operators $\phi_+(y)$ and $\phi_-(y)$ may look like they are local to $y$, by the way they are written, but if you actually re-write $\phi_{\pm}(x)$ in terms of the genuinely local $\phi(x)$ and $\pi(x)$ operators, you will find the $\phi_{\pm}(x)$ are not local.
Moreover, regardless of your detector model, I think you're a bit too quick when you say "the amplitude for the detector to be in the excited state$\dots$." You should actually think about what a measurement of the detector subsystem would yield. The analysis will then go similarly to the general discussion at the beginning of the answer.
Finally, what if we insist on using your particular detector $H_{det}$, using $\phi_{\pm}(y)$ couplings? First we must admit it's not truly local to $y$, and that it's really only approximately "localized" to a region of radius $\approx \frac{1}{m}$ around $y$ (for a massive theory). We must further admit that it's not even strictly localized to that neighborhood, or any finite region: it's really only "local" to a neighborhood of radius $r$ around $y$ with error $e^{-mr}$, due to the nonzero commutators $[\phi_{\pm}(y), \phi(x)]$, or the expression of $\phi_{\pm}(y)$ in terms of genuinely local field operators $\phi(x), \pi(x)$. So you shouldn't be surprised if the detector has an $e^{-mr}$ probability of registering a causality-violating signal in this model.
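The pivotal step in the general calculation above, the second equality that holds only because $[\phi(x,t), A_Y]=0$ at spacelike separation, can be checked numerically in a finite-dimensional toy model: a unitary $e^{-i\epsilon B}$ plays the source, and an observable $A$ that commutes (or fails to commute) with $B$ plays the spacelike (or local) measurement. A pure-Python sketch, a two-qubit stand-in rather than an actual field theory:

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def kron(A, B):
    n, m = len(A), len(B)
    return [[A[i // m][j // m] * B[i % m][j % m] for j in range(n * m)]
            for i in range(n * m)]

def expm(M, terms=40):
    """exp(M) via its Taylor series; fine for the tiny matrices here."""
    n = len(M)
    result = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    term = result
    for k in range(1, terms):
        term = [[x / k for x in row] for row in matmul(term, M)]
        result = [[result[i][j] + term[i][j] for j in range(n)]
                  for i in range(n)]
    return result

def expectation(A, psi):
    Apsi = [sum(A[i][j] * psi[j] for j in range(len(psi)))
            for i in range(len(psi))]
    return sum(psi[i].conjugate() * Apsi[i] for i in range(len(psi)))

I2 = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]
Z = [[1, 0], [0, -1]]
eps = 0.3

# Commuting pair: observable A = Z on qubit 1, "source" B = X on qubit 2.
A = kron(Z, I2)
U = expm([[-1j * eps * x for x in row] for row in kron(I2, X)])
psi = [U[i][0] for i in range(4)]            # U acting on |00> = e_0
print(abs(expectation(A, psi).real - 1.0) < 1e-9)   # unchanged -> True

# Non-commuting pair: the source acts on the SAME qubit the observable probes.
U2 = expm([[-1j * eps * x for x in row] for row in kron(X, I2)])
psi2 = [U2[i][0] for i in range(4)]
print(expectation(A, psi2).real)   # cos(2*eps) = cos(0.6), about 0.825, not 1
```

When $[A,B]=0$ the expectation value is untouched by the source, mirroring $\langle \psi(t)|A_Y|\psi(t)\rangle = \langle 0|A_Y|0\rangle$; when they fail to commute it shifts at order $\epsilon^2$.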
"domain": "physics.stackexchange",
"id": 86388,
"tags": "quantum-mechanics, quantum-field-theory, special-relativity, causality"
} |
Term for trait that is advantageous to a population only as long as it is rare | Question: I remember reading about a concept—in evolutionary biology or natural selection, I think—whereby a particular trait is advantageous to the population or species but only so long as that trait is only exhibited by a minority of the population. That is, the population is more likely to survive if the trait's existence and expression is maintained, but only if that expression is limited to a small percentage of the population. If the trait becomes too common, the selection advantage of having it will decrease, and the fact that most of the population has it may even negatively impact the survivability of that population.
I can't recall where I encountered it. It may have been in the context of reading about ADD, Asperger's/Autism, neurotypicality, introversion, or mania/OCD. I want to revisit the topic and learn some more about it, but Google is failing me.
Can anyone tell me the name of or term for the concept I'm thinking of?
Answer: Frequency-dependent selection is the term you are looking for, I believe. Positive frequency-dependent selection encompasses traits that become more advantageous as they become more common. Negative frequency-dependent selection encompasses traits that become more advantageous as they become rarer. | {
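A toy replicator-dynamics iteration makes the negative case concrete: give the trait a fitness bonus that shrinks as its own frequency p rises, and the population settles at the frequency where the two fitnesses equalize instead of fixing the trait (parameter values below are illustrative):

```python
def replicator_step(p, s=0.5, dt=0.1):
    """One Euler step of replicator dynamics under negative
    frequency-dependent selection: the focal trait's fitness
    advantage shrinks as its own frequency p grows."""
    w_trait = 1.0 + s * (1.0 - p)   # trait: better when rare
    w_other = 1.0 + s * p           # the alternative strategy
    w_mean = p * w_trait + (1.0 - p) * w_other
    return p + dt * p * (w_trait - w_mean)

p = 0.05                            # the trait starts rare
for _ in range(500):
    p = replicator_step(p)
print(round(p, 3))                  # settles at 0.5, where the fitnesses equalize
```

Under positive frequency-dependence (swap the two fitness expressions) the same iteration instead drives p to 0 or 1, i.e. loss or fixation.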
"domain": "biology.stackexchange",
"id": 3970,
"tags": "evolution, population-genetics"
} |
How to bind a node to a specific cpu core? | Question:
Dear all,
I am trying to run a node on a specific CPU core but have no idea how to do that.
What's more, I also want to specify the CPU core of the callback functions. I know I can use an asynchronous spinner to control the number of threads, but it seems there is no way to manually set which core a thread uses.
Look forward to any help!
Ziyang
Originally posted by ZiyangLI on ROS Answers with karma: 93 on 2015-02-09
Post score: 1
Answer:
Without being able to get access to the underlying threads, I think it will be difficult (impossible) to do for individual threads, but for entire processes this should be doable using taskset and a launch-prefix (taskset is from the util-linux package on Debian/Ubuntu).
Something like launch-prefix="taskset -c 1" added to a node element in a launch file should work (I haven't tested it though). That binds the node to the second CPU (CPU index 1).
Note that according to man/1/taskset, your user needs to have CAP_SYS_NICE.
Note also that the taskset approach is not ROS specific. The launch-prefix is ROS specific (see roslaunch/XML/node - Attributes).
Final note: the fact that you've bound task X to cpu Y does not mean that other tasks cannot be run on cpu Y, obviously.
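For a node written in Python, the process can also pin itself using the Linux-only os.sched_setaffinity call instead of a taskset launch-prefix. A sketch, guarded so it degrades gracefully where the call doesn't exist:

```python
import os

def pin_to_cpu(preferred=0):
    """Pin the calling process to a single CPU (Linux only).

    Returns the resulting affinity set, or None where the
    sched_*affinity calls are unavailable (e.g. macOS, Windows).
    """
    if not hasattr(os, "sched_setaffinity"):
        return None
    available = os.sched_getaffinity(0)          # 0 = this process
    target = preferred if preferred in available else min(available)
    os.sched_setaffinity(0, {target})
    return os.sched_getaffinity(0)

print(pin_to_cpu(0))   # e.g. {0}, mirroring `taskset -c 0 <cmd>`
```

As with taskset, this pins the whole process (all of its spinner threads), not an individual callback thread.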
Originally posted by gvdhoorn with karma: 86574 on 2015-02-09
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by lucasw on 2020-01-02:
taskset 1 (without the -c) worked for me in Ubuntu 18.04
Comment by gvdhoorn on 2020-01-03:
The -c option is slightly more specific and supports "list format" (which is convenient when working with a (highly) multi-core CPU):
-c, --cpu-list display and specify cpus in list format
Without -c, taskset will also work, but doesn't support lists.
It's the same on all versions of Ubuntu afaik.
Comment by lucasw on 2020-01-03:
I tested both the list and mask cpu setting as a launch-prefix in melodic and they work https://github.com/lucasw/timer_test/blob/master/launch/timer_test.launch | {
"domain": "robotics.stackexchange",
"id": 20824,
"tags": "ros, asyncspinner"
} |
Non-discoveries by the Kepler space telescope: exomoons, co-orbital planets, trojans | Question: I am just reading the review article Advances in exoplanet science from Kepler (arxiv preprint: http://arxiv.org/abs/1409.1595), and I found a remarkable paragraph (last paragraph in section "Properties of planetary systems", page 341):
Just as important as the discoveries made by Kepler are its non-discoveries. So far, Kepler has found no co-orbital planets, which share the same average semi-major axis — like the Trojan asteroids found accompanying Jupiter and the Saturnian satellites Janus and Epimetheus. It has also found neither exomoons nor 'binary' planets orbiting one another.
There is no further explanation or interpretation on that.
What do the authors want to say with this remark? Do they infer that, due to the non-discovery, the probability of such objects is very low?
I'm especially interested in why the authors think this is important. What does it say?
Edit (October 2018):
I would like to update this question to mention that now - four years later - an exomoon has very likely been discovered by Alex Teachey and David Kipping, using Kepler data (with subsequent observations with Hubble).
Answer: The Hunt for Exomoons with Kepler Project has so far failed to find any exomoons. This is a negative finding (so far). Negative findings are always a bit trickier to explain than are positive findings. This negative finding might mean something very significant, or it might have very little significance:
Maybe exoplanets are much less likely to have exomoons than the abundance of moons in our solar system would suggest; or
Maybe the close-in exoplanets that Kepler was predisposed to find are much less likely to have exomoons; or
Maybe the group hasn't studied enough exoplanets and so far has just been a bit unlucky. From reading through their papers, their approach is exceedingly CPU-intensive, taking decades of CPU time per planet investigated; or
Maybe their technique isn't as good at detecting exomoons as they think it is; or
Maybe it's something else that explains the negative results.
As for why Lissauer et al. put that little teaser of a paragraph in their Nature article, that's a good question. It's a bit of a parenthetical remark (i.e., non-essential). If it's not essential, what's that remark doing there? The article in question was about not just what has already been found in the Kepler dataset but also about what is still waiting to be found. The uses of transit time and transit duration variations by the Hunt for Exomoons with Kepler Project could add significant value to the Kepler dataset. As for why Lissauer et al. didn't say anything more, it's a bit premature right now to make anything of the so-far negative results from the project. There are too many maybes right now.
"domain": "physics.stackexchange",
"id": 16470,
"tags": "astronomy, telescopes, exoplanets"
} |
Why can machine learning not recognize prime numbers? | Question: Say we have a vector representation of any integer of magnitude n, V_n
This vector is the input to a machine learning algorithm.
First question : For what type of representations is it possible to learn the primality/compositeness of n using a neural network or some other vector-to-bit ML mapping. This is purely theoretical -- the neural network could be possibly unbounded in size.
Let's ignore representations that are already related to primality testing such as : the null separated list of factors of n, or the existence of a compositeness witness such as in Miller Rabin. Let's instead focus on representations in different radices, or representations as coefficient vectors of (possibly multivariate) polynomials. Or other exotic ones as are posited.
Second question : for what, if any, types of ML algorithm will learning this be impossible regardless of the specifics of the representation vector? Again, let's leave out 'forbidden by triviality' representations of which examples are given above.
The output of the machine learning algorithm is a single bit, 0 for prime, 1 for composite.
The title of this question reflects my assessment that the consensus for question 1 is 'unknown' and the consensus for question 2 is 'probably most ML algorithms'. I'm asking this as I don't know any more than this and I am hoping someone can point the way.
The main motivation, if there is one, of this question is : is there an 'information theoretic' limit to the structure of the set of primes that can be captured in a neural network of a particular size? As I'm not expert in this kind of terminology let me rephrase this idea a few times and see if I get a Monte-Carlo approximation to the concept : what is the algorithmic complexity of the set of primes? Can the fact that the primes are Diophantine recursively enumerable (and can satisfy a particular large diophantine equation) be used to capture the same structure in a neural network with the inputs and outputs described above.
Answer: this is an old question/problem with many, many connections deep into number theory, mathematics, TCS and in particular Automated Theorem Proving.[5]
the old, near-ancient question is, "is there a formula for computing primes"
the answer is, yes, in a sense, there are various algorithms to compute it.
the Riemann zeta function can be reoriented as an "algorithm" to find primes.
seems possible to me that a GA, genetic-algorithm approach may succeed on this problem some day with an ingenious setup, ie GAs are the nearest known technology that has the most chance of succeeding.[6][7] it's the problem of finding an algorithm from a finite set of examples, ie machine learning, which is very similar to mathematical induction. however there does not seem to be much research into the application of GAs to number theory so far.
the nearest to this in existing literature seems to be eg [8] that discusses developing the twin prime conjecture in an automated way ie "automated conjecture making".
another approach is a program that has a large set of tables of standard functions, along with some sophisticated conversion logic, to recognize standard integer sequences. this is a new function built into Mathematica called findsequence [3]
its also connected to a relatively new field called "experimental mathematics" [9,10] or what is also called "empirical" research in TCS.
another basic point to make here is that the sequence of primes is not "smooth", highly irregular, chaotic, fractal, and standard machine learning algorithms are historically based on numerical optimization and minimizing error (eg gradient descent), and do not do so well on finding exact answers to discrete problems. but again GAs can succeed and have been shown to succeed in this area/regime.
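that irregularity is easy to see empirically: a linear model fed the raw binary digits of n has little to latch onto beyond the low bit (evenness) and the base rate of compositeness. a self-contained sketch, logistic regression trained by plain SGD on one range of integers and tested on a disjoint one (the setup is illustrative, not taken from the references below):

```python
import math

def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def bits(n, width=11):
    # raw little-endian binary digits of n as the feature vector
    return [(n >> k) & 1 for k in range(width)]

def sigmoid(z):
    z = max(-30.0, min(30.0, z))          # clamp for numerical safety
    return 1.0 / (1.0 + math.exp(-z))

def train(examples, epochs=60, lr=0.1):
    # logistic regression fit by plain stochastic gradient descent
    w, b = [0.0] * len(examples[0][0]), 0.0
    for _ in range(epochs):
        for x, y in examples:
            g = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) - y
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def accuracy(model, examples):
    w, b = model
    return sum((sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) > 0.5) == y
               for x, y in examples) / len(examples)

train_set = [(bits(n), is_prime(n)) for n in range(2, 1000)]
test_set = [(bits(n), is_prime(n)) for n in range(1000, 2000)]
model = train(train_set)
baseline = sum(not y for _, y in test_set) / len(test_set)  # always guess "composite"
print(accuracy(model, test_set), baseline)
```

comparing the printed test accuracy against the always-"composite" baseline gives a feel for how much (or how little) primality structure this representation exposes to such a learner.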
[1] is there a math eqn for the nth prime, math.se
[2] formula for primes, wikipedia
[3] wolfram findsequence function
[4] riemann zeta function
[5] top successes of automated theorem proving
[6] applications of genetic algorithms in real world
[7] applying genetic algorithms to automated thm proving by Wang
[8] Automated Conjecture Making in Number Theory using HR, Otter and Maple, Colton
[9] Are there applications of experimental mathematics in TCS?
[10] A reading list on experimental algorithmics | {
"domain": "cstheory.stackexchange",
"id": 4899,
"tags": "co.combinatorics, machine-learning, primes"
} |
what's the time complexity of the number of shots in simulator of qiskit | Question: One question,
codes as follows:
backend = Aer.get_backend('qasm_simulator')
counts = execute(qc, backend=backend, shots=102400).result().get_counts()
when my number of shots is 102400, is my time complexity 102400? Does that mean I need to multiply 102400 by the time complexity of the relevant algorithm?
Thanks
Answer: It depends on the simulation mode (regular, stabilizer, MPS, etc.) and the structure of the circuit. Roughly, if your circuit doesn't have noise or mid-circuit measurements, then it's simulated only once, and the measurement outcomes are then sampled.
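A plain-Python sketch of that "simulate once, then sample" structure (not Qiskit internals): the expensive statevector evolution happens once, while the shot count only scales a cheap sampling loop, so the cost is roughly T_sim + shots * T_sample rather than shots * T_sim.

```python
import random

def run_once_then_sample(probabilities, shots, seed=0):
    """Mimic an ideal (noise-free, no mid-circuit measurement) simulator:
    the circuit is 'simulated' once to obtain outcome probabilities, then
    `shots` outcomes are drawn from that fixed distribution."""
    rng = random.Random(seed)
    outcomes = list(probabilities)
    weights = [probabilities[o] for o in outcomes]
    counts = dict.fromkeys(outcomes, 0)
    for o in rng.choices(outcomes, weights, k=shots):
        counts[o] += 1
    return counts

# Pretend the one-off simulation produced a Bell-state distribution.
counts = run_once_then_sample({"00": 0.5, "11": 0.5}, shots=1024)
print(sum(counts.values()))  # 1024: shots scale the sampling, not the simulation
```

With noise or mid-circuit measurements, each shot can follow a different trajectory, and the per-shot cost can indeed approach a full re-simulation.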
"domain": "quantumcomputing.stackexchange",
"id": 4779,
"tags": "qiskit, simulation, qiskit-runtime"
} |
Moveit - fix a link with a virtual joint | Question:
Hi,
I am working on a robot-arm and I want to fix a link from my robot to the world.
I want to use the virtual joint option (and set a fixed joint) in the MoveIt Setup Assistant, but afterwards, when I launch the demo.launch file and visualize my robot, the joint isn't working. It seems it is not taken into account at all.
I set the parent frame as "world" during the setup. Am I wrong?
Thanks,
Originally posted by Soho_ on ROS Answers with karma: 11 on 2016-07-28
Post score: 0
Answer:
When you were working on the setup assistant, did you give the joint type as fixed?
If you give the joint as fixed, there will not be any visualization.
Originally posted by bhavyadoshi26 with karma: 95 on 2016-09-07
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by matthewmarkey on 2020-05-18:
Sorry, I am confused as to what you mean by this. If you do not set the virtual joint as fixed, what should it be set at?
I am working with a custom manipulator and can't figure out how to fix it to the gazebo world. I have tried both the gazebo tag, as well as fixing a dummy joint to the world via virtual joint, but no luck! | {
"domain": "robotics.stackexchange",
"id": 25379,
"tags": "ros, joint, moveit"
} |
Should I use .Any() and a loop, or just a loop? | Question: I have this code:
if(listObj.Any(x => x.id < 0))
{
foreach(ModelClass item in listObj)
{
if(item.id < 0)
{
// code to create a new Obj in the database
}
}
}
But, should I use like this?
foreach(ModelClass item in listObj)
{
if(item.id < 0)
{
// code to create a new Obj in the database
}
}
NOTE: It's unusual for id < 0 to exist (I use this to create a temp id for manipulation in the page), but it is possible.
Extra info that I found:
Query transformations are syntactic
IMPROVE YOUR LINQ WITH .ANY()
MSDN Documentation - Enumerable.Any Method
Answer: Use the .Where extension method to filter the records you need:
foreach(ModelClass item in listObj.Where(x => x.id < 0))
{
// code to create a new Obj in the database
} | {
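The same collapse of a pre-check plus loop into one filtered iteration exists in other languages; for instance, a rough Python analogue of the accepted pattern (illustrative, not the question's actual model class):

```python
def create_for_temp_ids(items, create):
    # no separate any() pre-check needed: an empty filter simply loops zero times
    for item in (x for x in items if x["id"] < 0):
        create(item)

made = []
create_for_temp_ids([{"id": -1}, {"id": 5}, {"id": -3}], made.append)
print([m["id"] for m in made])  # [-1, -3]
```

Either way, the filter is evaluated once per element during the single pass, instead of once in the Any() pre-check and again inside the loop.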
"domain": "codereview.stackexchange",
"id": 4012,
"tags": "c#, performance, .net, linq"
} |
Finding the maximum mean number of photons in a superposition of Fock states | Question: I am trying to find the maximum mean number of photons if $\beta$ (a complex number) is varied for the state $$|\psi_0\rangle = \frac{1}{N_0}(|0\rangle + \beta |9\rangle).$$
I have normalised the state finding $$|\psi_0\rangle = \frac{1}{\sqrt{1 + |\beta|^2}}(|0\rangle + \beta |9\rangle).$$
I take the expectation value of $\hat N = \hat a^{\dagger} \hat a$ and find that $$\langle N \rangle = \frac{9 |\beta|^2}{1 + |\beta|^2}$$
Now, I want to vary $\beta$ to find the maximum mean number of photons.
I know how to find the maximum value of a function but I am uncertain about how I should be treating $|\beta|^2$ when I take its derivative (due to its complex nature). To illustrate my confusion, these are the two approaches I have considered for calculating $\frac{d|\beta|^2}{d\beta}$:
(1) $$\frac{d |\beta|^2}{d \beta} = 2|\beta|\frac{|\beta|}{\beta} = 2\frac{|\beta|^2}{\beta}$$
(2) $$\frac{d |\beta|^2}{d\beta} = \frac{d}{d\beta} \beta \beta^* = \beta^*$$
I haven't taken any complex analysis, but would really appreciate some guidance on how I should interpret this. Are either of these two approaches valid?
Answer: You've formulated things in a way such that with your formalism you will never find the answer, but you're essentially overthinking this to the extreme.
The core answer is simple: if you're only looking for the expectation value of an operator that's diagonal in your chosen basis, then the only thing that matters is the population, and there is nothing quantum about it, i.e. you can just think of the problem as you would for a probabilistic mixture of $9$ photons with probability $p$ and $0$ photons with probability $1-p$. It is then easy to see that the expectation value is bounded as
$$
0\leq ⟨N⟩ \leq 9,
$$
with the extreme $⟨N⟩=9$ attained at $p=1$.
In terms of a quantum superposition, you could start by formulating your state as
$$
|\psi⟩ = \cos(\theta)|0⟩ + \sin(\theta)|9⟩,
$$
in which case you can find $⟨N⟩$ as a function and differentiate, and you will find an attained maximum at $\theta = \pi/2$, i.e. $|\psi⟩ = |9⟩$.
However, you have configured your initial Ansatz,
$$
|\psi⟩ = \frac{1}{N_0(\beta)}|0⟩ + \frac{\beta}{N_0(\beta)}|9⟩,
$$
in a way such that the amplitude of $|0⟩$ is always nonzero; this covers every state in the state space except the solution you're looking for. On the ground, that means that the solution you're trying to optimize,
$$
⟨N⟩ = \frac{9|\beta|^2}{1+|\beta|^2},
$$
never achieves its maximum, and it only converges to its supremum as $|\beta|\to\infty$. | {
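The "supremum, never attained" behavior is easy to check numerically from the formula above: $\langle N\rangle = 9|\beta|^2/(1+|\beta|^2)$ climbs monotonically toward 9 but equals it for no finite $\beta$.

```python
def mean_photons(beta_abs):
    # <N> for |psi> = (|0> + beta|9>)/sqrt(1 + |beta|^2); depends only on |beta|
    b2 = beta_abs ** 2
    return 9.0 * b2 / (1.0 + b2)

for b in (1, 10, 100, 1000):
    print(b, mean_photons(b))
# <N> rises toward 9, but reaching it would require "beta = infinity",
# i.e. the pure state |9>, which this parametrization never includes.
```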
"domain": "physics.stackexchange",
"id": 45083,
"tags": "quantum-mechanics, homework-and-exercises, hilbert-space, complex-numbers"
} |
Is a large tumor more likely to develop hypoxic regions? | Question: It is known that cancerous tumors in humans can develop hypoxic regions, where neither blood nor oxygen reaches some volume of the tumor's cells, creating a dead lump inside or around the tumor.
See Wikipedia - Tumor hypoxia.
Are hypoxic regions and regions without blood supply more common in large tumors than in small tumors?
What is the likelihood or frequency of hypoxic regions in small (< 2 cm) and large (> 4 cm) tumors?
What is the typical size of a hypoxic region within a tumor?
The article in Wikipedia is too technical and very hard to read and understand, and it doesn't give an explicit answer to my question.
Answer: The presence of hypoxia is independent of size, grade, or histology. It occurs due to a cut-off in the blood supply to the tumour, or to insufficient vasculature, meaning that oxygen cannot diffuse all the way into the tumour. Aberrant growth of tumours can increase the likelihood of tumours developing hypoxic regions. | {
"domain": "biology.stackexchange",
"id": 8884,
"tags": "human-biology, cancer, blood-circulation"
} |
how to use the kdl parser | Question:
Dear Sir,
I have been trying to use the KDL parser in ros to find the kinematic model of the robot arm urdf description in .xml
Could you please guide me on how to use the KDL parser, as it is not clear from the kdl_parser documentation.
Thanking you for your time
Regards
rdd0101
Originally posted by rdd0101 on ROS Answers with karma: 11 on 2013-03-12
Post score: 1
Answer:
I thought the kdl_parser tutorial gave a fairly clear usage explanation:
#include <kdl_parser/kdl_parser.hpp>
KDL::Tree robot_kin;
// treeFromFile returns false if the URDF cannot be parsed, so check the result
kdl_parser::treeFromFile("my_robot.urdf", robot_kin);
(the tutorial also provides similar examples for initializing from a parameter or string)
Once you have a KDL::Tree, you probably want to calculate various sorts of kinematics. The KDL Documentation provides some good examples here:
KDL::Chain chain;
// getChain also returns false if either link name is not found in the tree
robot_kin.getChain("base_link", "tool", chain);
KDL::JntArray joint_pos(chain.getNrOfJoints());
KDL::Frame cart_pos;
KDL::ChainFkSolverPos_recursive fk_solver(chain);
fk_solver.JntToCart(joint_pos, cart_pos);
However, you might consider staying within the ROS framework for basic kinematics, rather than working in KDL directly. This allows you to use ROS functions to extract different kinematic groups, use robot-specific kinematics plugins, detect collisions, etc.
The approach differs depending on whether you are using arm_navigation or MoveIt. Both are fairly well documented, with examples at the links given above.
Originally posted by Jeremy Zoss with karma: 4976 on 2013-03-13
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 13329,
"tags": "ros"
} |
Momentum of light in anisotropic media | Question: This question is related to the Abraham-Minkowski controversy that has already been discussed extensively here and in the research community. But I want to ask about an aspect of this momentum controversy that I could not find in literature:
There are two common expressions to calculate momentum of light:
\begin{equation}
\vec p_1 = \hbar \vec k
\end{equation}
with $\vec k = \vec D \times \vec H$ the wave vector;
and
\begin{equation}
\vec p_2 = \frac{1}{c} \vec S
\end{equation}
with $\vec S = \vec E \times \vec H$ the Poynting vector.
For an anisotropic medium with
\begin{equation}
\vec D = \epsilon_0 \hat \epsilon_r \vec E
\end{equation}
with $\hat \epsilon_r$ being a tensor with three different entries on the diagonal, $\vec D$ and $\vec E$ do obviously not point in the same direction, so $\vec k$ and $\vec S$ also not point in the same direction. In this case, which definition of momentum has to be used and why?
Answer: "In this case, which definition of momentum has to be used..."
The momentum density (momentum per unit volume) associated with the fields will be described by the Poynting vector. An expression similar to that which you give for $p_2$:
$\vec{g} = \frac{1}{c^2}\vec{S}$
"... and why? ..."
The $\vec{k}$ vector just indicates the spatial dependence of the phase of the travelling wave. On the other hand, the Poynting vector indicates where the energy density of the wave is moving, in other words, the direction in which the "amplitude" of the wave is propagating to.
Thinking of an individual photon, the Poynting vector gives a measure of the instantaneous group velocity of the wave-packet's wavefunction, something which can be directly related to the momentum of the photon.
An example that illustrates this in an anisotropic medium: Let us think of a photon that travels through a birefringent medium as shown in (Fig. 6.2 (b)): the $\vec{k}$ vector never changes (i.e. it always points to the right), as the wave fronts are always parallel to the material's walls. Nonetheless, the photon exits this anisotropic material at a different vertical position: it must have had vertical momentum while it propagated through the medium, as indicated by $\vec S$.
Sidenote: there is an additional co-travelling momentum due to the electrons rearranging themselves in the medium in response to the travelling wave. This is named the Minkowski momentum and is described by something similar to the expression you give for $p_1$. | {
"domain": "physics.stackexchange",
"id": 49800,
"tags": "electromagnetism, electromagnetic-radiation, momentum, poynting-vector"
} |
Generating odom message from encoder ticks for robot_pose_ekf | Question:
Hi all,
I am using robot_pose_ekf to get an estimate of the robot pose and am slightly confused about how to use encoder ticks to generate the odometry message. Till now, I have been able to achieve the following:
The encoder ticks are being published on a certain topic ("re_ticks").
I am able to find the velocity of the left wheel and the right wheel (vel_left, vel_right) using wheel properties and encoder ticks.
Using 2, I found the translational and rotational velocities of the robot (V, W) (not sure about their correctness).
Now, I have to find velocity of the robot in X,Y and theta direction so that I can use http://wiki.ros.org/navigation/Tutorials/RobotSetup/Odom to generate the odom message.
vtheta = W*t but I am not sure about vx and vy. Intuitively, it seems that I should just do vx = V*cos(theta) and vy = V*sin(theta) but I am not sure. Also, which theta should I use, the old one or the current theta (theta = W*dt)?
I have attached the relevant part of the code below:
void rotary_encoders_callback(const geometry_msgs::Vector3::ConstPtr& ticks)
{
current_time_encoder = ros::Time::now();
double delta_left = ticks->x - previous_left_ticks;
double delta_right = ticks->y - previous_right_ticks;
// dist_per_count = distance traveled per count, delta_left = ticks moved
double vel_left = (delta_left * dist_per_count) / (current_time_encoder - last_time_encoder).toSec(); // Left velocity
double vel_right = (delta_right * dist_per_count) / (current_time_encoder - last_time_encoder).toSec(); // Right velocity
// Getting Translational and Rotational velocities from Left and Right wheel velocities
// V = Translation vel. W = Rotational vel.
if (vel_left == vel_right)
{
V = vel_left;
W = 0;
}
else
{
// Assuming the robot is rotating about point A
// W = vel_left/r = vel_right/(r + d), see the image below for r and d
double r = (vel_left * d) / (vel_right - vel_left); // Anti Clockwise is positive
W = vel_left/r; // Rotational velocity of the robot
V = W * (r + d/2); // Translation velocity of the robot
}
vth = W;
// Find out velocity in x,y direction (vx,vy)
// ???
previous_left_ticks = ticks->x;
previous_right_ticks = ticks->y;
last_time_encoder = current_time_encoder;
}
AB = r, BC = d
So, I want to make sure whether everything till now is correct and, if it is, how I can find the velocity in the x and y directions (vx, vy) so that I can use that to generate the odom message.
Thanks in advance.
Naman Kumar
Originally posted by Naman on ROS Answers with karma: 1464 on 2015-04-19
Post score: 2
Original comments
Comment by Mehdi. on 2016-03-15:
How do you estimate r from the odom Twist? I need to calculate the number of ticks from the odom message so basically the opposite of what you want to do.
Answer:
Your calculations are correct, but you should pay attention to the update frequency: if it is too high relative to the wheel velocities, your calculated speeds will be unstable. Just an example: your robot is moving at 0.01 m/s. If you have wheels of 0.1 m diameter and 360-tick-per-revolution encoders, the tick count increases by about 11 every second. If you update the velocity at 20 Hz, that means the deltas will sometimes be zero and sometimes one, even though your robot is moving at constant speed. The same happens when the ticks from the encoders are not synchronised.
To avoid this you should add an update system that waits for consistent data to be available.
For your information, I did the same in a different way. I'm using the MD25 board, which includes closed-loop control of the motors, so I can be sure my speed input results in a fixed speed. I use the input to the MD25 to calculate speeds instead of calculating them from the actual ticks.
UPDATE
Another way is to publish encoder ticks only if there is a change in the encoder position then deltas are never zero, right?
No, you will have the same problem! The best you can do in order to be sure about the results is to run your code and drive your robot at minimal speed. If the detected speeds are constant, all is fine. If the output is not stable... you have to think about possible fixes.
Originally posted by afranceson with karma: 497 on 2015-04-20
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Naman on 2015-04-20:
Thanks @afranceson! Another way is to publish encoder ticks only if there is a change in the encoder position, so the deltas are never zero, right? Also, how can I use incremental encoders to get the number of pulses in a given time instead of a cumulative count? | {
"domain": "robotics.stackexchange",
"id": 21469,
"tags": "ros, navigation, odometry, encoders, robot-pose-ekf"
} |
Notch filter: differences between IIR and FIR filters | Question: I'm trying to understand this great answer from Matt L. It says that "One advantage of IIR filters is that steep filters with high stopband attenuation can be realized with much fewer coefficients (and delays) than in the FIR case, i.e. they are computationally more efficient." First of all, why is this true? Is it because of the poles? Actually this comes from my previous question on removing 400 Hz noise. There are two filters for that purpose. The first one is a FIR filter with the following frequency response and pole-zero plot:
And the second one is a IIR filter:
Removing the noise using the IIR filter worked far better. Why is that? According to the frequency responses, the attenuation of the FIR filter is greater in magnitude than that of the IIR filter. Also, I don't understand why the zeros of the FIR filter are placed as they are, and how this positioning corresponds to that frequency response. I understand the relation between the pole-zero plot and the frequency response well for the IIR filter.
Answer: Your assumption about why IIR filters can have steeper transitions from passbands to stopbands than FIR filters of the same order is correct: IIR filters have poles away from the origin of the complex plane, and poles inside the unit circle close to zeros on the unit circle cause the corresponding frequency response to change rapidly with frequency.
The FIR filter in your question has a much broader notch than the IIR filter. This is not only caused by the fundamental difference between FIR and IIR filters, but also because for some reason the FIR filter has two zeros close to the notch frequency instead of only one. This makes the filter more robust with respect to errors in the estimation of the noise frequency, but it may also attenuate desired frequency components. It's also much harder for an FIR filter to approximate a constant response away from the notch frequency because the frequency response of an FIR filter is a polynomial and not a rational function as is the case for IIR filters.
From the zero locations of the FIR filter you can see that the filter is a linear phase filter: the zeros are either on the unit circle or they are mirrored at the unit circle. Note that FIR filters don't need to have linear phase. The required filter order for a certain specification can often be somewhat reduced if we don't impose the linear phase constraint. | {
"domain": "dsp.stackexchange",
"id": 9208,
"tags": "filters, finite-impulse-response, infinite-impulse-response, frequency-response, poles-zeros"
} |
Does the space hierarchy theorem generalize to non-uniform computation? | Question:
General Question
Does the space hierarchy theorem generalize to non-uniform
computation?
Here are a few more specific questions:
Is $L/poly \subsetneq PSPACE/poly$?
For all space constructible functions $f(n)$, is $DSPACE(o(f(n)))/poly \subsetneq DSPACE(f(n))/poly$?
For what functions $h(n)$ is it known that: for all space constructible $f(n)$, $DSPACE(o(f(n)))/h(n) \subsetneq DSPACE(f(n))/h(n)$?
Answer: One non-uniform "space hierarchy" that we can prove is a size hierarchy for branching programs. For a Boolean function $f: \{0, 1\}^n \to \{0, 1\}$, let $B(f)$ denote the smallest size of a branching program computing $f$. By an argument analogous to this hierarchy argument for circuit size, one can show that there are constants $\epsilon, c$ so for every value $b \leq \epsilon \cdot 2^n / n$, there is a function $f: \{0, 1\}^n \to \{0, 1\}$ such that $b - cn \leq B(f) \leq b$.
I think separating $\mathbf{PSPACE}/\text{poly}$ from $\mathbf{L}/\text{poly}$ would be difficult. It's equivalent to proving that some language in $\mathbf{PSPACE}$ has super-polynomial branching program complexity. A simple argument shows that $\mathbf{PSPACE}$ does not have fixed-polynomial-size branching programs:
Proposition. For every constant $k$, there is a language $L \in \mathbf{PSPACE}$ so that for all sufficiently large $n$, $B(L_n) > n^k$. (Here $L_n$ is the indicator function for $L \cap \{0, 1\}^n$.)
Proof. By the hierarchy we proved, there is a branching program $P$ of size $n^{k + 1}$ that computes a function $f$ with $B(f) > n^k$. In polynomial space, we can iterate over all branching programs of size $n^{k + 1}$, all branching programs of size $n^k$, and all inputs of length $n$ to find such a branching program $P$. Then we can simulate $P$ to compute $f$. | {
"domain": "cstheory.stackexchange",
"id": 4280,
"tags": "cc.complexity-theory, space-bounded, space-complexity, advice-and-nonuniformity, hierarchy-theorems"
} |
Meaning of $\mathrm{d}\Omega$ in basic scattering theory? | Question: In basic scattering theory, $\mathrm{d}\Omega$ is supposed to be an element of solid angle in the direction $\Omega$. Therefore, I assume that $\Omega$ is an angle, but what is this angle measured with respect to? None of the textbooks I am referring to give a clear indication of what this $\Omega$ is. The only relevant figure I could find is given below.
Answer: Sorry, a solid angle is something different from an ordinary angle (see "Solid angle", Wikipedia), so it is not measured "with respect to" anything. The solid angle $\Omega$ measures the size of a set of directions in 3-dimensional space via the formula
$$ \Omega = \frac{A}{R^2} $$
where $A$ is the area of the intersection of all these directions (semi-infinite lines) with a two-dimensional sphere of radius $R$. For example, in your picture, all the directions (semi-infinite lines) start at the interaction point and the relevant area is the annulus (a part of the sphere) that is dashed. The letter $\Omega$ (capital omega, the solid angle, an "O" with a hole at the bottom standing on two feet) shouldn't be confused with $\Theta$ (capital theta, an "O" with an "H" inside it), which is an ordinary angle, the polar angle in spherical coordinates. If the solid angle has the shape of an annulus (going over all the azimuth angles $\phi$), then $d\Omega = 2\pi\sin\Theta\,d\Theta = 2\pi\,|d(\cos\Theta)|$. For an infinitesimal rectangle on the spherical surface in spherical coordinates, $d\Omega = \sin\Theta\,d\Theta\,d\phi$.
You calculate its area $A$, divide it by the squared distance of the this area from the center (i.e. by the squared radius of the sphere), and you obtain the solid angle $\Omega$.
The "total" solid angle surrounding the point is $4\pi\approx 12.6$, much like the total angle in the plane is 360 degrees or $2\pi$ (in radians). The word "solid" indicates that the solid angle is a property of a solid (filled) three-dimensional body composed of all the semi-infinite lines (rays) starting from the center and going in the allowed directions; it is a generalization of the ordinary angle in which one spatial dimension is added but it is not the same thing. | {
"domain": "physics.stackexchange",
"id": 4353,
"tags": "scattering, notation, integration"
} |
Ballistic behavior of molecules on potential energy surfaces | Question: I came across the following sentence in a publication$^1$: "At this point we have to remember that the computed MEP path [sic] represents the reaction in the absence of any kinetic energy, while in reality the true reaction path, to some degree, will be ballistic." Further, they write that, "This path is accessed because after the transition state the molecule is 'ballistic'."
A minimum energy path (MEP) is calculated "step-wise", taking the previous geometry as a starting point and letting the program follow the gradient. In classical terms, this is like placing a ball on a tilted plane, releasing it, stopping it after a given distance, and repeating.
When they say that, in a real reaction, the molecules will have kinetic energy and be ballistic, I think they mean that the potential energy surface (PES) may be explored in directions slightly different from the gradient. This can be compared to releasing a ball at the top of an uneven surface and allowing it to roll freely; the ball will not follow the gradient when its velocity is high.
I cannot find a decent explanation of this anywhere, and it would be nice to get some input.
$^1$ De Vico, Liu, Krogh, and Lindh, 2007. J. Phys. Chem. A 111:8013-8019. doi:10.1021/jp074063g. Cited passages are on p. 8016, Section 3.3.
Answer: I think the description you lay out in your question has it basically right.
MEPs, IRCs, etc. all assume that the geometric rearrangements that occur over the course of a reaction strictly follow the gradient uphill from reactants to transition state, and then downhill from TS to products (or intermediates). Energetically, this provides the "path of least resistance."
In the paper you cited, it looks like the MEP would require some atoms in the system to "take a right-angle turn." Quoting a broader excerpt around the first sentence you include (emphasis added):
As can be seen in Figure 3, after TS‘ S$_0$ the MEP takes a turn on the PES: the O−O‘ distance increasing and the torsion around the O−C−C‘−O‘ dihedron nearly stopping, while the increase of the C−C‘ distance comes into action.
...
At this point we have to remember that the computed MEP path represents the reaction in the absence of any kinetic energy, while in reality the true reaction path, to some degree, will be ballistic.
...
What would be the consequences if the reaction continued along the torsional mode? Section 3.2 already suggested that this is of interest—the torsional mode is the principal internal coordinate of the T$_1$ MEP.
Nuclei have mass (obviously), and thus have inertia. While Newton's relevance at the atomic scale can be fuzzy at times, classical treatment of atomic motion has sufficient validity that acceleration of nuclei can be considered to require a suitable accompanying force (viz., PES gradient). If the system has enough "inertia" and the PES gradient provides too small an acceleration to overcome it, the progress of the reaction can absolutely move away from the MEP.
Alternatively, at least one additional mechanism can introduce deviations from the MEP: molecules at finite temperature are always vibrating. Thus, even in a case where the key internal molecular coordinates (bond lengths, angles, dihedrals) for a given reaction strictly follow the IRC, the "uninvolved" vibrational motion is still occurring, and so the system is actually always oscillating around the IRC in a high-dimensional sense. If the energy of the system is high enough, these oscillations have the potential to be of sufficient magnitude to "bump" the reaction coordinate away from the gradient-following pathway.
Rotational motion may also contribute to this, but I would think in a more subtle way than vibrational motion.
For further reading: Steven Bachrach of the Computational Organic Chemistry blog has an entire tagged category, "Dynamics," dedicated to this phenomenon. In no particular order, some posts of possible survey value include:
An approach towards identifying dynamic effect without trajectories
Insights into dynamic effects
Bifurcating organic reactions
The entirety of Chapter 8 of his book is also dedicated to this topic.
Addendum: FWIW, as a chemical engineer, the (imperfect) analogy that I like to use is to liken the MEP to the 'reversible process' of thermodynamics:
Start at the beginning state
Take an infinitesimal step
Let the system infinitesimally relax
Repeat 2 & 3 until you reach the end state
As I (very approximately) understand it, irreversibility comes from changing the state of a system "too much, too fast", leading to net generation of entropy as the system relaxes back to the reversible pathway, and it's "kinetic energy" in a qualitative, generalized sense that allows the system to depart from the reversible path. | {
"domain": "chemistry.stackexchange",
"id": 5228,
"tags": "reaction-mechanism, energy, theoretical-chemistry, reaction-coordinate"
} |
How to identify bound or resonance states? | Question: Say we got a peak in our scattering experiment, how do we decide whether that peak is a resonance or a bound state?
Edit : Say in a LHCb experiment we discover a new particle.
Answer: Normally, a new bound state in a single-force universe would not decay into other particles, so a resonance would be declared if an enhancement is observed, since the invariant mass of the particles exceeds the sum of the masses of the constituents.
In particle physics, when we measure the invariant mass of a group of particles and observe an enhancement, we treat it as a possible new particle, even though the invariant mass of the enhancement is bigger than the sum of the masses of the constituents. The reason is that three forces are involved in the standard model's description of particle interactions, and the mathematical structure allows what are elementary particles for one force to decay, due to the other forces, into other particles.
Take the electron-positron scattering cross section:
The standard model, with its group structure for particle combinations, fits the enhancements up to the Z, so the term resonance fits; but the Z is an elementary particle in the standard model, and the fact that its decay has a width is due to experimental errors and to the number of interactions contributing to the decay.
New resonances are sought in the LHC experiments that would not fit in the group structure of the current standard model, calling for extensions or new models that predict other group structures. Elaborate models ask for leptoquarks; see, for example, this link. | {
"domain": "physics.stackexchange",
"id": 80395,
"tags": "particle-physics, experimental-physics, scattering, scattering-cross-section"
} |
What does the identity operator look like in Quantum Field Theory? | Question: In texts on ordinary quantum mechanics the identity operators
\begin{equation}\begin{aligned}
I & = \int \operatorname{d}x\, |x\rangle\langle x| \\
& = \int \operatorname{d}p\, |p\rangle\langle p|
\end{aligned} \tag1\end{equation}
are frequently used in textbooks, like Shankar's. This allows us to represent position and momentum operators in a concrete way as
\begin{equation}\begin{aligned}
x_S & = \int \operatorname{d}x' \, |x'\rangle\langle x'| x' \ \mathrm{and}\\
p_S & = \int \operatorname{d}p' \, |p'\rangle\langle p'| p',
\end{aligned} \tag2\end{equation}
where the '$S$' subscript is to emphasize these are Schrödinger picture operators.
Has a similarly concrete representation been constructed in quantum field theory, or is there some reason it's not possible? I'm imagining something like
\begin{equation}\begin{aligned}
I & = \int \left[\mathcal{D} \phi(\mathbf{x}') |\phi(\mathbf{x}')\rangle\langle\phi(\mathbf{x}')|\right] \ \mathrm{and} \\
\phi(\mathbf{x}) & = \int \left[\mathcal{D} \phi(\mathbf{x}') |\phi(\mathbf{x}')\rangle\langle\phi(\mathbf{x}')|\right] \, \phi^{\mathbf{1}_{\{\mathbf{x}=\mathbf{x}'\}}}(\mathbf{x}'),
\end{aligned} \tag3\end{equation}
where the vectors are inside the path integral metric because they're included in the continuum limit product that defines the path integral, and $\mathbf{1}_{\{\mathbf{x}=\mathbf{x}'\}}$ is the indicator function that equals $0$ when $\mathbf{x}\neq\mathbf{x}'$ and $1$ when $\mathbf{x}=\mathbf{x}'$. The indicator function is in the exponent of $\phi(\mathbf{x}')$ to make sure that the field is a non-identity operator at $\mathbf{x}$ only.
Answer: The equation $(3)$ in the OP is formally$^1$ correct, and it is in fact one of the main ingredients in the functional integral formulation of quantum field theory. See Ref.1 for the explicit construction.
For completeness, we sketch the derivation here. We use a notation much closer to the one in the OP than to that of Ref.1, but with some minors modifications (to allow for a more general result).
The setup.
Let $\{\phi_a,\pi^a\}_a$ be a set of phase-space operators, where $a\in\mathbb R^{d-1}\times \mathbb N^n$ is a DeWitt index (i.e., it contains a continuous part, corresponding to the spatial part of spacetime $\mathbb R^d$ and a discrete part, corresponding to a certain vector space $\mathbb N^n$ whose base is spacetime). Note that we are taking these operators to be in the Schrödinger picture. They are assumed conjugate:
$$\tag1
[\phi_a,\pi^b]=i\delta_a^b
$$
where $\delta$ is a Dirac-Kronecker delta. Here, $[\cdot,\cdot]$ denotes a commutator (we assume $\phi,\pi$ to be Grassmann even; we could consider the general case here by keeping tracks of the signs, but we won't for simplicity). The rest of commutators are assumed to vanish. We take the phase-space operators to be hermitian (or otherwise, we double the dimension and split them into their real and imaginary parts).
As $[\phi_a,\phi_b]=[\pi^a,\pi^b]=0$, we may simultaneously diagonalise them:
\begin{equation}
\begin{aligned}
\phi_a|\varphi\rangle&=\varphi_a|\varphi\rangle\\
\pi^a|\varpi\rangle&=\varpi^a|\varpi\rangle
\end{aligned}\tag2
\end{equation}
where $\varphi_a,\varpi^b$ are $c$-numbers. After normalising them, if necessary, these eigenvectors are orthonormal:
\begin{equation}
\begin{aligned}
\langle\varphi|\varphi'\rangle&=\prod_a\delta(\varphi_a-\varphi'_a)\equiv\delta(\varphi-\varphi')\\
\langle\varpi|\varpi'\rangle&=\prod_a\delta(\varpi^a-\varpi'^a)\equiv\delta(\varpi-\varpi')
\end{aligned}\tag3
\end{equation}
and, as per $[\phi_a,\pi^b]=\delta_a^b$, we also have
$$\tag4
\langle \varphi|\varpi\rangle=\prod_a\frac{1}{\sqrt{2\pi}}\mathrm e^{i\varphi_a\varpi^a}
$$
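As a quick consistency check (added here; it is implicit in Ref.1): in the $\varphi$-representation, $(1)$ forces $\pi^a=-i\,\partial/\partial\varphi_a$, and indeed
$$
\langle\varphi|\pi^a|\varpi\rangle=-i\frac{\partial}{\partial\varphi_a}\prod_b\frac{1}{\sqrt{2\pi}}\mathrm e^{i\varphi_b\varpi^b}=\varpi^a\langle\varphi|\varpi\rangle,
$$
so the plane-wave overlap $(4)$ is precisely what makes $|\varpi\rangle$ a $\pi$-eigenstate, and the factor $1/\sqrt{2\pi}$ per mode is the normalisation for which both resolutions of the identity in $(5)$ hold with the stated measures.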
As the sets $\{\phi_a\}_a$ and $\{\pi^a\}_a$ are both assumed complete, we also have
\begin{equation}
\begin{aligned}
1&\equiv \int\prod_a|\varphi\rangle\langle\varphi|\,\mathrm d\varphi_a\\
1&\equiv \int\prod_a|\varpi\rangle\langle\varpi|\,\mathrm d\varpi^a
\end{aligned}\tag5
\end{equation}
as OP anticipated.
As mentioned in the comments, and for future reference, we note that there is another identity operator that is fundamental in a quantum theory, to wit, the resolution in terms of energy eigenstates:
\begin{equation}\tag6
1\equiv \int|E\rangle\langle E|\,\mathrm dE
\end{equation}
where $\mathrm dE$ is the counting measure in the case of discrete eigenvalues. Typically, we assume that the Hamiltonian is non-negative and that its ground state energy is zero; furthermore, we assume that there is a non-zero mass gap so that $E=0$ is a regular eigenvalue of $H$ (as opposed to a singular value).
It is at this point convenient to switch into the Heisenberg picture, where $\{\phi_a,\pi^a\}_a$ become time-dependent:
\begin{equation}
\begin{aligned}
\phi_a(t)\equiv\mathrm e^{iHt}\phi_a\mathrm e^{-iHt}\\
\pi^a(t)\equiv\mathrm e^{iHt}\pi^a\mathrm e^{-iHt}
\end{aligned}\tag7
\end{equation}
with eigenstates
\begin{equation}
\begin{aligned}
|\varphi;t\rangle&=\mathrm e^{iHt}|\varphi\rangle\\
|\varpi;t\rangle&=\mathrm e^{iHt}|\varpi\rangle
\end{aligned}\tag8
\end{equation}
such that
\begin{equation}
\begin{aligned}
\phi_a(t)|\varphi;t\rangle&=\varphi_a|\varphi;t\rangle\\
\pi^a(t)|\varpi;t\rangle&=\varpi^a|\varpi;t\rangle
\end{aligned}\tag9
\end{equation}
The Heisenberg picture eigenstates satisfy the same completeness and orthonormality relations as the Schrödinger picture eigenstates (inasmuch as the transformation is unitary).
The functional integral. Time slicing.
From this, and by the usual arguments (time slicing), Ref.1 derives the phase-space functional integral representation of the transition amplitude, to wit,
\begin{equation}
\begin{aligned}
&\langle\varphi_\mathrm{in};t_\mathrm{in}|\mathrm T\left(O_1(\pi(t_1),\phi(t_1)),\dots,O_n(\pi(t_n),\phi(t_n))\right)|\varphi_\mathrm{out};t_\mathrm{out}\rangle\equiv\\
&\hspace{10pt}\int_{\varphi(t_\mathrm{in})=\varphi_\mathrm{in}}^{\varphi(t_\mathrm{out})=\varphi_\mathrm{out}}\left(O_1(\varpi(t_1),\varphi(t_1)),\dots,O_n(\varpi(t_n),\varphi(t_n))\right)\cdot\\
&\hspace{25pt}\cdot\exp\left[i\int_{t_\mathrm{in}}^{t_\mathrm{out}}\left(\sum_a\dot \varphi_a(\tau)\varpi^a(\tau)-H(\varphi(\tau),\varpi(\tau))\right)\mathrm d\tau\right]\mathrm d\varphi\,\mathrm d\varpi
\end{aligned}\tag{10}
\end{equation}
where $O_1,\dots,O_n$ is any set of operators; $\mathrm T$ denotes the time-ordering symbol; and $\mathrm d\varphi,\mathrm d\varpi$ denote the measures
$$\tag{11}
\mathrm d\varphi\equiv\prod_{\tau,a}\mathrm d\varphi_a(\tau),\qquad \mathrm d\varpi\equiv\prod_{\tau,a}\frac{1}{2\pi}\mathrm d\varpi^a(\tau)
$$
The time-slicing procedure is standard. We only consider the case $O_i=1$ here. We begin by considering the case where $t_\mathrm{in}$ and $t_\mathrm{out}$ are infinitesimally close:
$$\tag{12}
\langle\varphi';\tau+\mathrm d\tau|\varphi;\tau\rangle=\langle\varphi';\tau|\exp\left[-iH\mathrm d\tau\right]|\varphi;\tau\rangle
$$
where $H=H(\phi(\tau),\pi(\tau))$ is the Hamiltonian (the generator of time translations, essentially defined by this equation). We take the convention that all the $\phi$ must always be moved to the left of $\pi$. In this case, and up to factors of order $\mathrm O(\mathrm d\tau)^2$, we may replace $\phi$ by the eigenvalue of the bra, to wit, $\varphi'$. To deal with $\pi$, we insert the identity $1$ in the form of the completeness relation:
$$\tag{13}
\langle\varphi';\tau+\mathrm d\tau|\varphi;\tau\rangle\overset{(5)}=\int\exp\left[-iH(\varphi',\varpi)\mathrm d\tau+i\sum_a(\varphi'_a-\varphi_a)\varpi^a\right]\ \mathrm d\varpi
$$
where each $\varpi^a$ is integrated over $\mathbb R$ unrestrictedly.
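For completeness, the step from $(12)$ to $(13)$ spelled out (this intermediate line is implicit above): inserting the $\varpi$-resolution of the identity from $(5)$ and using $(4)$,
$$
\begin{aligned}
\langle\varphi';\tau|\mathrm e^{-iH\mathrm d\tau}|\varphi;\tau\rangle
&=\int\langle\varphi';\tau|\mathrm e^{-iH\mathrm d\tau}|\varpi;\tau\rangle\langle\varpi;\tau|\varphi;\tau\rangle\,\mathrm d\varpi\\
&=\int\mathrm e^{-iH(\varphi',\varpi)\mathrm d\tau}\,\langle\varphi'|\varpi\rangle\langle\varpi|\varphi\rangle\,\mathrm d\varpi+\mathrm O(\mathrm d\tau)^2\\
&=\int\exp\left[-iH(\varphi',\varpi)\mathrm d\tau+i\sum_a(\varphi'_a-\varphi_a)\varpi^a\right]\mathrm d\varpi+\mathrm O(\mathrm d\tau)^2,
\end{aligned}
$$
where the ordering convention ($\phi$ to the left of $\pi$) is what allows the operators in $H$ to be replaced by the eigenvalues $\varphi'$ and $\varpi$, and the factors $(2\pi)^{-1}$ from $(4)$ are absorbed into the measure $\mathrm d\varpi$.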
To find the transition amplitude over a finite interval, we just compose an infinite number of infinitesimal transition amplitudes: we break up $t'-t$ into steps $t,\tau_1,\tau_2,\dots,\tau_N,t'$, with $\tau_{k+1}-\tau_k=\mathrm d\tau=(t'-t)/(N+1)$. With this, and inserting the identity $1$ in the form of the completeness relation at each $\tau_k$, we get
\begin{equation}
\begin{aligned}
\langle\varphi';t'|\varphi;t\rangle&\overset{(5)}=\int\langle \varphi';t'|\varphi_N;\tau_N\rangle\langle \varphi_N;\tau_N|\varphi_{N-1};\tau_{N-1}\rangle\cdots\langle \varphi_1;\tau_1|\varphi;t\rangle\ \prod_{k=1}^N\mathrm d\varphi_k\\
&\overset{(13)}=\int\left[\prod_{k=1}^N\prod_a\mathrm d\varphi_{k,a}\right]\left[\prod_{k=0}^N\prod_a\frac{\mathrm d\varpi^a_k}{2\pi}\right]\cdot\\
&\hspace{20pt}\cdot\exp\left[i\sum_{k=1}^{N+1}\left(\sum_a(\varphi_{k,a}-\varphi_{k-1,a})\varpi_{k-1}^a-H(\varphi_k,\varpi_{k-1})\mathrm d\tau\right)\right]
\end{aligned}\tag{14}
\end{equation}
where $\varphi_0\equiv\varphi$ and $\varphi_{N+1}\equiv\varphi'$. By taking the formal limit $N\to\infty$, we indeed obtain the claimed formula. The generalisation of the proof to include insertions is straightforward.
Vacuum-to-vacuum transition amplitude. Feynman's $\boldsymbol{+i\epsilon}$ prescription.
In non-relativistic quantum mechanics, the functional integral as written above is the natural object to work with. On the other hand, when dealing with particle physics in the relativistic regime, one usually works with $S$ matrix elements, that is, one considers the transition amplitude, not from eigenstates of $\phi_a$, but of the creation and annihilation operators. As is well-known from the LSZ theorem, it suffices to consider the vacuum-to-vacuum transition amplitude,
\begin{equation}\tag{15}
\langle 0|\mathrm T\mathrm e^{iJ^a\phi_a}|0\rangle
\end{equation}
from which all $S$-matrix elements can be computed. We thus want to obtain the functional integral representation of this transition amplitude.
In Ref.1 there is a rather clean derivation of such object which, unfortunately, is only worked out for a scalar boson field. The outcome is that the vacuum-to-vacuum transition amplitude is given by the functional integral over all field configurations, and the correct boundary conditions are enforced by Feynman's $+i\epsilon$ prescription. The higher spin case is non-trivial because the propagators (and ground-state wave-functionals) are gauge-dependent. Indeed, where to put the $+i\epsilon$ in an arbitrary gauge theory is a very complicated matter (e.g., in the axial gauge it is far from clear what to do with, say, $k^4$: should it be $(k^2+i\epsilon)^2$? Should it be $k^4+i\epsilon$?), and the general case has not been worked out to the best of my knowledge. It is nevertheless possible to argue that, at least in the 't Hooft-Feynman gauge $\xi=1$, the propagators and ground-state wave-functional are identical to the scalar case (up to a unit matrix in colour space), so that the derivation in the reference holds for a bosonic field of arbitrary spin. The fermionic case requires Grassmann integration, but a similar analysis (in the $\xi=1$ case) is possible. Gauge invariance then implies the general case.
For completeness, we will prove the claim by an alternative method which, although admittedly not nearly as clean, works for a field of arbitrary spin. The trick is to use the completeness relation in terms of energy eigenstates instead of $\phi_a$ eigenstates.
We proceed as follows. We want to calculate
$$\tag{16}
\lim_{t\to-\infty}\langle \varphi_\mathrm{in};t|\overset{(6)}=\lim_{t\to-\infty}\int\mathrm e^{-iEt}\langle \varphi_\mathrm{in}|E\rangle\langle E|\,\mathrm dE
$$
If we send $t$ to $-\infty$ in a slightly imaginary direction, then all excited states acquire a real and negative part in the exponential factor, which vanishes in the large $t$ limit. We are thus left with the ground state only:
$$\tag{17}
\lim_{t\to-\infty+i\epsilon}\langle \varphi_\mathrm{in};t|=\langle \varphi_\mathrm{in}|0\rangle\langle 0|
$$
Similarly,
$$\tag{18}
\lim_{t\to+\infty+i\epsilon}|\varphi_\mathrm{out};t\rangle=|0\rangle\langle 0|\varphi_\mathrm{out}\rangle
$$
With this, the matrix element $\langle\varphi_\mathrm{in};-\infty|O|\varphi_\mathrm{out};+\infty\rangle$ can be written as
\begin{equation}
\begin{aligned}
&\langle \varphi_\mathrm{in}|0\rangle\langle 0|O|0\rangle\langle 0|\varphi_\mathrm{out}\rangle
=
\int_{\varphi(-\infty)=\varphi_\mathrm{in}}^{\varphi(+\infty)=\varphi_\mathrm{out}}O\cdot\\
&\hspace{20pt}\cdot\exp\left[i\int_{(1-i\epsilon)\mathbb R}\left(\sum_a\dot \varphi_a(\tau)\varpi^a(\tau)-H(\varphi(\tau),\varpi(\tau))\right)\mathrm d\tau\right]\mathrm d\varphi\,\mathrm d\varpi
\end{aligned}\tag{19}
\end{equation}
Integrating both sides with respect to $\mathrm d\varphi_\mathrm{in}\,\mathrm d\varphi_\mathrm{out}$, we get the vacuum-to-vacuum transition amplitude in its standard form, where the integral over $\mathrm d\varphi$ is unrestricted:
\begin{equation}\tag{20}
\langle 0|O|0\rangle
=N^{-1}
\int O\exp\left[iS(\varphi,\varpi)\right]\mathrm d\varphi\,\mathrm d\varpi
\end{equation}
where
\begin{equation}\tag{21}
S(\varphi,\varpi)\equiv\int_{(1-i\epsilon)\mathbb R}\left(\sum_a\dot \varphi_a(\tau)\varpi^a(\tau)-H(\varphi(\tau),\varpi(\tau))\right)\mathrm d\tau
\end{equation}
is the classical action, and where
\begin{equation}\tag{22}
N\equiv \int\langle \varphi_\mathrm{in}|0\rangle\langle 0|\varphi_\mathrm{out}\rangle\,\mathrm d\varphi_\mathrm{in}\,\mathrm d\varphi_\mathrm{out}
\end{equation}
is an inconsequential normalisation constant (it is the norm of the ground-state wave-functional). This proves the claim: the vacuum-to-vacuum transition amplitude is given by the standard functional integral, but over all field configurations (unrestricted integral over $\mathrm d\varphi$); and the correct boundary conditions are essentially those that result from a Wick rotation $\tau\to-i\tau_\mathrm{E}$.
The configuration space functional integral. The Lagrangian.
Finally, it bears mentioning how the configuration space functional integral is obtained. Ref.1 considers the case where the Hamiltonian is a quadratic polynomial in $\varpi$:
\begin{equation}\tag{23}
H(\varphi,\varpi)=\frac12\varpi^a A_{ab}(\varphi)\varpi^b+B_a(\varphi)\varpi^a+C(\varphi)
\end{equation}
In this case, the integral over $\mathrm d\varpi$ is gaussian and so its stationary phase approximation is in fact exact. The stationary point $\varpi^\star$ is easily computed to be
\begin{equation}\tag{24}
\dot\varphi_a=\frac{\partial H}{\partial\varpi^a}\bigg|_{\varpi\to\varpi^\star}
\end{equation}
which agrees with the classical canonical relation. Therefore, the integrand $\sum_a\dot\varphi_a\varpi^a-H$ evaluated at $\varpi^\star=\varpi^\star(\varphi,\dot\varphi)$ is nothing but the Lagrangian $L=L(\varphi,\dot \varphi)$, and therefore
\begin{equation}\tag{25}
\langle 0|O|0\rangle
\propto
\int O\exp\left[iS(\varphi)\right]\mathrm d\varphi
\end{equation}
where
\begin{equation}\tag{26}
S(\varphi)\equiv \int_{(1-i\epsilon)\mathbb R}L(\varphi,\dot\varphi)\,\mathrm d\tau
\end{equation}
and where $\mathrm d\varphi$ implicitly includes the determinant of the Vilkovisky metric,
\begin{equation}\tag{27}
\mathrm d\varphi\to\sqrt{\det(A)}\,\mathrm d\varphi
\end{equation}
which is required for covariance in configuration space (or, equivalently, unitarity). If the metric is flat the determinant can be reabsorbed into $N$.
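To make this concrete, consider the familiar single-degree-of-freedom case (my example, not taken from Ref. 1): $H(\varphi,\varpi)=\varpi^2/2m+V(\varphi)$, i.e. $A=1/m$, $B=0$, $C=V$. Eq. (24) then gives

```latex
\begin{aligned}
\dot\varphi &= \frac{\partial H}{\partial\varpi}\bigg|_{\varpi\to\varpi^\star}
 = \frac{\varpi^\star}{m}
 \quad\Longrightarrow\quad \varpi^\star = m\dot\varphi\,,\\
L(\varphi,\dot\varphi) &= \dot\varphi\,\varpi^\star - H(\varphi,\varpi^\star)
 = m\dot\varphi^2 - \frac{m\dot\varphi^2}{2} - V(\varphi)
 = \frac{m\dot\varphi^2}{2} - V(\varphi)\,,
\end{aligned}
```

and since $A=1/m$ is field-independent, $\sqrt{\det(A)}$ is a constant that can be absorbed into the normalisation $N$.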
If $H$ is not a quadratic polynomial, we may nevertheless use the stationary phase approximation, but the measure will acquire higher order corrections:
\begin{equation}\tag{28}
\mathrm d\varphi\to\left(\sqrt{\det(A)}+\mathcal O(\hbar)\right)\,\mathrm d\varphi
\end{equation}
which can be computed order by order in perturbation theory. This proves that the configuration space functional integral can always be made both covariant and unitary by a careful treatment of the integration measure.
In any case, there is a subtlety that must be mentioned: the integral over $\mathrm d\varpi$ as written above is only valid if $O$ depends on $\varphi$ only; for otherwise the integral is not gaussian. In other words, we may not use the configuration space functional integral to compute matrix elements of derivatives of $\phi$. In pragmatic terms, this is easy to understand: the time ordering symbol $\mathrm T$ does not commute with time-derivatives. The resolution involves the introduction of the so-called covariant time-ordering symbol which is defined such that it commutes with time derivatives (cf. this PSE post).
References.
Weinberg, QFT, Vol.I, chapter 9.
1: Formal in the sense that this is not a rigorous statement, inasmuch as the whole functional-integral formalism is not rigorous itself. It seems hard to formalise the sum over all fields, but one may argue that the picture is at least consistent. | {
"domain": "physics.stackexchange",
"id": 46455,
"tags": "quantum-field-theory, operators, path-integral"
} |
Can people traveling at different speeds gain information at a different rate? | Question: I heard that the faster you go, the slower time around you goes. For example, if you were in a rocket going very fast and you started a timer for 1 minute at the same time someone walking down the street started a timer for 1 minute, the person on the street's timer would go off just before yours. I also know that people going at very fast speeds can still get information (for this example let's say that it is from the internet).
So let's imagine that the person on Earth starts streaming a TV show to the person in the rocket. Can the person in the rocket watch the TV show faster than the person on Earth? This would not make sense to me, given that they both experience time at the same speed.
Answer:
This would not make sense to me, given that they both experience time at the same speed.
See this answer of mine for background. Although I'm not a psychologist, I think it's reasonable to surmise that the experience of time is defined by how quickly one's own bodily processes run forward relative to the rate of other physical processes in one's nearby, comoving neighborhood. This explains why the rate of one's own time progression is never perceived to be anything other than "one time unit per one time unit".
But this does not stop signals that one receives from sources elsewhere from progressing at different rates if the relative motion between source and receiver changes.
And, your spacefarer will indeed see the television transmission progressing more swiftly, or more slowly, depending on whether their motion is towards or away from the signal source.
There are two simple ways to see this.
1. The modulated TV signal, with all its framing and timing information, can be represented by a Fourier transform. The Doppler shift induced by the relative motion means that the frequencies of all the Fourier components are scaled by the same, relative-velocity-dependent, scale factor. So, not only does the Doppler effect change the carrier frequency, it speeds up or slows down the arrival of frames depending on whether motion is towards or away from the source.
2. Draw a Minkowski diagram (see my sketch below), with regular framing pulses $T$ being sent from the Earth's worldline ($E$). Then work out the Minkowski length of the successive crossings of the spacefarer's world line $S$ by these framing pulses. You can see at a snap that the rate depends on the direction. (This is, of course, one way to derive the Doppler shift.)
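The frequency-scaling argument can be made quantitative in a few lines (the chosen speed is my own example, not from the answer): every received frequency, the carrier and the frame rate of the stream alike, is multiplied by the same relativistic Doppler factor.

```python
import math

def doppler_factor(beta):
    """Relativistic Doppler factor for relative speed beta = v/c.

    All received frequencies -- carrier and frame rate alike -- are
    multiplied by this one factor; beta > 0 means source and receiver
    approaching, beta < 0 means receding.
    """
    return math.sqrt((1.0 + beta) / (1.0 - beta))

# At 0.6 c the stream plays about twice as fast when approaching
# and about half as fast when receding.
approaching = doppler_factor(0.6)   # ~ 2
receding = doppler_factor(-0.6)     # ~ 0.5
```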
"domain": "physics.stackexchange",
"id": 39115,
"tags": "time, speed"
} |
Most general form of a spin rotation invariant Hamiltonian? | Question: I am told that the most general form of a spin rotation invariant Hamiltonian for two systems 1 and 2 both with spin $S$, i.e., the spin operators
\begin{align}
(\hat{S}_1^x)^2 +(\hat{S}_1^y)^2 + (\hat{S}_1^z)^2 = (\hat{S}_2^x)^2 +(\hat{S}_2^y)^2 + (\hat{S}_2^z)^2 = S(S+1)\hbar^2
\end{align}
is given by
\begin{equation}
\mathcal{H} = \sum_{j=0}^{2S} a_j \bigg(\frac{\mathbf{\hat{S}_1}\cdot\mathbf{\hat{S}_2}}{\hbar}\bigg)^j
\end{equation}
I understand that it should be a function of $\mathbf{\hat{S}_1}\cdot\mathbf{\hat{S}_2}$ but I do not understand why should the sum terminate at $j=2S$. Can someone explain it to me. Thank you.
Answer: $\newcommand{\bm}[1]{\mathbf{#1}}$
You need to look at this in terms of spin quantum numbers (i.e., eigenvalues).
$(\bm S_1+\bm S_2)$ can take values $S_\mathrm{tot} = 0,\dots,2S$. Now if we restrict to the subspace with total spin $S_\mathrm{tot}$, we have that
\begin{equation}
\begin{aligned}
S_\mathrm{tot}(S_\mathrm{tot}+1)
=
(\bm S_1+\bm S_2)^2 &= \bm S_1\cdot \bm S_1 + \bm S_2\cdot \bm S_2 +
2\, \bm S_1\cdot \bm S_2
\\
&= S(S+1) + S(S+1) + 2\,\bm S_1\cdot \bm S_2\ ,
\end{aligned}
\end{equation}
and thus
$$\bm S_1\cdot \bm S_2 = \tfrac12 S_\mathrm{tot}(S_\mathrm{tot}+1)-S(S+1)
\tag{1}
$$
can take $2S+1$ possible values. (Note that this means that $\bm S_1\cdot \bm S_2$ and $\bm S_1+\bm S_2$ are diagonal in the same basis, that is, we can reason
about them as if they were just numbers which can take the corresponding set of values.)
A $\mathrm{SU}(2)$ invariant Hamiltonian of the two spins will take a different value of each subspace of total spin $S_\mathrm{tot}$, i.e., it is of the form (with $\Pi_{S_\mathrm{tot}}$ the projector onto the subspace with total spin $S_\mathrm{tot}$)
$$
\mathcal H = \sum_{S_\mathrm{tot}=0}^{2S} E_{S_\mathrm{tot}} \Pi_{S_\mathrm{tot}}\ .
\tag{2}
$$
Since there is a one-to-one relation between the total spin and the value of $\bm S_1\cdot \bm S_2$ -- Eq (1) --, each projector $\Pi_{S_\mathrm{tot}}$ can be expressed as a function of $\bm S_1\cdot \bm S_2$:
$$\Pi_{S_\mathrm{tot}}=f_{S_\mathrm{tot}}(\bm S_1\cdot \bm S_2)\ .
\tag{3}
$$
This function must be $f_{S_\mathrm{tot}}(\bm S_1\cdot \bm S_2)=1$ for the desired $S_\mathrm{tot}$ (using (1)), and $f(\bm S_1\cdot \bm S_2)=0$ for all other values the total spin can take (again using (1)). This means that we only need to fix the value of $f$ at $2S+1$ points, and thus, it can be chosen to be a polynomial of degree $2S$:
$$
f_{S_\mathrm{tot}}(\bm S_1\cdot \bm S_2) = \sum_{j=0}^{2S} a_{j,S_\mathrm{tot}} (\bm S_1\cdot \bm S_2)^j\ .
$$
Substituting this into (3) and then into (2) gives that
$$
\mathcal H = \sum_{j=0}^{2S} a_{j} (\bm S_1\cdot \bm S_2)^j\ .
$$ | {
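As a concrete check (my addition, not part of the original answer): for $S=\tfrac12$, expanding $(\bm S_1+\bm S_2)^2$ gives $\bm S_1\cdot\bm S_2=-\tfrac34$ on the singlet ($S_\mathrm{tot}=0$) and $+\tfrac14$ on the triplet ($S_\mathrm{tot}=1$), so the two projectors are polynomials of degree $2S=1$:

```latex
\Pi_0 = \tfrac14 - \bm S_1\cdot\bm S_2\,,
\qquad
\Pi_1 = \tfrac34 + \bm S_1\cdot\bm S_2\,,
```

and the most general invariant Hamiltonian is $\mathcal H = a_0 + a_1\,\bm S_1\cdot\bm S_2$, i.e. the Heisenberg model, with the sum indeed terminating at $j=2S=1$.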
"domain": "physics.stackexchange",
"id": 93103,
"tags": "quantum-mechanics, condensed-matter, quantum-spin, spin-chains"
} |
Verhoeff check digit algorithm | Question: A recent question on credit card validation here on Code Review, led me down a dark rabbit hole of check digit algorithms. I took a stop at the Verhoeff algorithm and tried to implement it myself.
That lead to the following piece of code:
class Verhoeff:
"""Calculate and verify check digits using Verhoeff's algorithm"""
MULTIPLICATION_TABLE = (
(0, 1, 2, 3, 4, 5, 6, 7, 8, 9),
(1, 2, 3, 4, 0, 6, 7, 8, 9, 5),
(2, 3, 4, 0, 1, 7, 8, 9, 5, 6),
(3, 4, 0, 1, 2, 8, 9, 5, 6, 7),
(4, 0, 1, 2, 3, 9, 5, 6, 7, 8),
(5, 9, 8, 7, 6, 0, 4, 3, 2, 1),
(6, 5, 9, 8, 7, 1, 0, 4, 3, 2),
(7, 6, 5, 9, 8, 2, 1, 0, 4, 3),
(8, 7, 6, 5, 9, 3, 2, 1, 0, 4),
(9, 8, 7, 6, 5, 4, 3, 2, 1, 0)
)
INVERSE_TABLE = (0, 4, 3, 2, 1, 5, 6, 7, 8, 9)
PERMUTATION_TABLE = (
(0, 1, 2, 3, 4, 5, 6, 7, 8, 9),
(1, 5, 7, 6, 2, 8, 3, 0, 9, 4),
(5, 8, 0, 3, 7, 9, 6, 1, 4, 2),
(8, 9, 1, 6, 0, 4, 3, 5, 2, 7),
(9, 4, 5, 3, 1, 2, 6, 8, 7, 0),
(4, 2, 8, 6, 5, 7, 3, 9, 0, 1),
(2, 7, 9, 3, 8, 0, 6, 4, 1, 5),
(7, 0, 4, 6, 9, 1, 3, 2, 5, 8)
)
@classmethod
def calculate(cls, input_: str) -> str:
"""Calculate the check digit using Verhoeff's algorithm"""
check_digit = 0
for i, digit in enumerate(reversed(input_), 1):
col_idx = cls.PERMUTATION_TABLE[i % 8][int(digit)]
check_digit = cls.MULTIPLICATION_TABLE[check_digit][col_idx]
return str(cls.INVERSE_TABLE[check_digit])
@classmethod
def validate(cls, input_: str) -> bool:
"""Validate the check digit using Verhoeff's algorithm"""
check_digit = 0
for i, digit in enumerate(reversed(input_)):
col_idx = cls.PERMUTATION_TABLE[i % 8][int(digit)]
check_digit = cls.MULTIPLICATION_TABLE[check_digit][col_idx]
return cls.INVERSE_TABLE[check_digit] == 0
I chose to implement it as a class with two class methods because I plan to include other algorithms as well and structuring the code this way seemed reasonable to me.
I'm particularly interested in your feedback on the following aspects:
What do you think about the API? calculate(input_: str) -> str and validate(input_: str) -> bool seem reasonable and symmetric, but I could also imagine using something like calculate(input_: Sequence[int]) -> int/validate(input_: Sequence[int], int) -> bool.
There seems to be reasonable amount of code duplication between the two functions calculate/validate, but I couldn't really wrap my head around how to define one in respect to the other.
In addition to the class above, I also decided to take a shot at some unit tests for the algorithm using pytest.
import string
import itertools
import pytest
from check_sums import Verhoeff
# modification and utility functions to test the check digit algorithm robustness
DIGIT_REPLACEMENTS = {
digit: string.digits.replace(digit, "") for digit in string.digits
}
def single_digit_modifications(input_):
"""Generate all single digit modifications of a numerical input sequence"""
for i, digit in enumerate(input_):
for replacement in DIGIT_REPLACEMENTS[digit]:
yield input_[:i] + replacement + input_[i+1:]
def transposition_modifications(input_):
"""Pairwise transpose of all neighboring digits
The algorithm tries to take care that transpositions really change the
input. This is done to make sure that those permutations actually alter the
input."""
for i, digit in enumerate(input_[:-1]):
if digit != input_[i+1]:
yield input_[:i] + input_[i+1] + digit + input_[i+2:]
def flatten(iterable_of_iterables):
"""Flatten one level of nesting
Borrowed from
https://docs.python.org/3/library/itertools.html#itertools-recipes
"""
return itertools.chain.from_iterable(iterable_of_iterables)
# Verhoeff algorithm related tests
# Test data taken from
# https://en.wikibooks.org/wiki/Algorithm_Implementation/Checksums/Verhoeff_Algorithm
VALID_VERHOEF_INPUTS = [
"2363", "758722", "123451", "1428570", "1234567890120",
"84736430954837284567892"
]
@pytest.mark.parametrize("input_", VALID_VERHOEF_INPUTS)
def test_verhoeff_calculate_validate(input_):
"""Test Verhoeff.calculate/Verhoeff.validate with known valid inputs"""
assert Verhoeff.calculate(input_[:-1]) == input_[-1]\
and Verhoeff.validate(input_)
@pytest.mark.parametrize(
"modified_input",
flatten(single_digit_modifications(i) for i in VALID_VERHOEF_INPUTS)
)
def test_verhoeff_single_digit_modifications(modified_input):
"""Test if single digit modifications can be detected"""
assert not Verhoeff.validate(modified_input)
@pytest.mark.parametrize(
"modified_input",
flatten(transposition_modifications(i) for i in VALID_VERHOEF_INPUTS)
)
def test_verhoeff_transposition_modifications(modified_input):
"""Test if transposition modifications can be detected"""
assert not Verhoeff.validate(modified_input)
The tests cover known precomputed input and check digit values, as well as some of the basic error classes (single-digit errors, transpositions) the checksum was designed to detect. I decided to actually generate all the modified inputs in the test fixture so that it would be easier to see which of the modified inputs cause a failure of the algorithm. So far I have found none.
Note: There is a thematically related question of mine on optimizing the Luhn check digit algorithm.
Answer: Your tests look ok. I have three concerns:
If I read correctly, your "single digit modifications" test cycle is going to have over 1000000000000000000000 cycles. That's... not practical. pick a compromise.
The positive tests are checking calculate and validate. I see no reason not to check both in your negative tests too.
You're only checking syntactically valid input. This ties into your question about what the type signature should be.
You have a few options for your type signature. Without going over the compromises in detail, I would suggest that the first line in calculate and validate should be an isdigit() check, and raise an exception if it fails.
Whatever you do for types, your tests should check that it's handling edge cases as intended, whatever it is you decide you intend.
empty strings
a single digit (could break validate)
all zeros
whitespace in different configurations
illegal characters
You don't have to address all these points, depending on the use-cases of this project, and whatever else is going on in your life, it may be fine to call it good enough as-is. | {
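A minimal sketch of the suggested guard (names are mine, not from the answer; a production guard might also require `input_.isascii()`, since `str.isdigit()` accepts some exotic Unicode digit characters):

```python
def validate_input(input_: str) -> str:
    """Input guard: accept only a non-empty string of decimal digits."""
    if not input_.isdigit():
        raise ValueError(f"expected a non-empty digit string, got {input_!r}")
    return input_

# str.isdigit() is False for the empty string, so empty strings, whitespace
# and letters are all rejected before the table lookups can blow up later.
for bad in ["", " 2363", "23a63", "2363\n"]:
    try:
        validate_input(bad)
    except ValueError:
        pass  # rejected, as intended
```

Calling this at the top of both `calculate` and `validate` also removes one piece of duplication between them.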
"domain": "codereview.stackexchange",
"id": 34990,
"tags": "python, algorithm, python-3.x, unit-testing, checksum"
} |
Why do carbon dioxide and sodium hydroxide not form sodium oxide and carbonic acid? | Question: Why do carbon dioxide and sodium hydroxide not form sodium oxide and carbonic acid?
$\ce{CO2 + 2NaOH -> Na2O + H2CO3}$
as opposed to:
$\ce{CO2 + 2NaOH -> Na2CO3 + H2O}$
This is in the context of fractional distillation of liquefied air, before which carbon dioxide must be removed using sodium hydroxide.
Answer: Sodium oxide is not a base in either the Bronsted-Lowry or Arrhenius senses. It is technically a base anhydride, meaning that it can be hydrated to yield a base:
$$\ce{Na2O + H2O -> 2NaOH}$$
Carbon dioxide can similarly be viewed as an acid anhydride, meaning it can be hydrated to form an acid.
$$\ce{CO2 + H2O <=> H2CO3}$$
The difference is that the hydration of sodium oxide is essentially irreversible. One program I used estimated $K_{eq}=10^{39}$ for sodium oxide hydration. In contrast the hydration of carbon dioxide is reversible, with $K_{eq}\approx1.5\times10^{-3}$.
The reaction that you are wondering about, $\ce{CO2 +2NaOH⟶Na2O +H2CO3}$, is essentially the dehydration of sodium hydroxide by $\ce{CO2}$ to form sodium oxide and carbonic acid, that is, it is reaction 2 minus reaction 1. But the equilibrium constants I just mentioned show that sodium oxide has an "affinity" for water that is about 42 orders of magnitude greater than carbon dioxide's affinity for water. So that reaction is very unfavorable. The reverse reaction would be extremely favored: Sodium oxide would be a powerful reagent for $\ce{CO2}$ removal prior to air liquefaction, at least as powerful as sodium hydroxide.
If you are curious, here is the R code I used to estimate the equilibrium constant of sodium oxide hydration, using the package CHNOSZ:
> subcrt(c("Na2O","H2O","NaOH"),c(-1,-1,2),T=25)
info.character: found H2O(liq), also available in gas
subcrt: 3 species at 298.15 K and 1 bar (wet)
$reaction
coeff name formula state ispecies
2031 -1 sodium oxide Na2O cr 2031
1 -1 water H2O liq 1
1157 2 NaOH NaOH aq 1157
$out
logK G H S V Cp
1 39.01742 -53229.29 -57143.24 -13.24728 -36.04979 -40.8745 | {
"domain": "chemistry.stackexchange",
"id": 2960,
"tags": "inorganic-chemistry, acid-base"
} |
What is it about bleach that keeps Pharaoh ants away even after drying? | Question: My apartment high-rise has been afflicted with ants in the past few years. The pest control technician says that it has become a common problem due to warming weather. Various things have been tried, and I am leaving this in their expert hands (or rather, the apartment management is). This question is not asking for further countermeasures.
When I find a place or pathway with many ants, I find that simply wiping with a damp cloth or vinegar doesn't keep them away for long. Wiping is supposed to disrupt their pheromone trails, but I only find somewhat lasting effects if I wipe with undiluted household bleach. I am pleasantly surprised that they stay away even after the bleach has dried. After some internet research, I found that bleach leaves behind a salt residue when dried. Is it the salt residue that keeps the ants away when wiping otherwise doesn't have lasting effect?
I've had another corroborative experience that leads me to suspect that salt residue has a repulsive effect on pharaoh ants. I keep a large beer glass of very salty water on hand because of the dental benefits of salt water rinse. I probably use too strong a solution, which likely has drawbacks (something I have yet to research). It is kept on top of the fridge. Around the glass is salt water staining. I never see ants around it, even though ants like water. I admit that this corroboration is not very strong, as I generally do not see ants on top of the fridge anyway.
Answer: It's likely that the bleach (hypochlorite), which is a strong oxidizer, is destroying the trails more effectively than the vinegar. The mechanism of action here is the organic pheromone molecules, known as monomorine-1 and faranal are broken down into smaller organic molecules through oxidation. This chemically disrupts the trail and it takes some time for the ants to re-establish it.
I'm no organic chemist, but while the acetic acid in vinegar may be a good smell masker for us, I don't think it will have any chemical effect on the pheromones.
"domain": "biology.stackexchange",
"id": 12039,
"tags": "ant"
} |
Special relativity, length contraction, Wonderland | Question: George Gamow described in Mr Tompkins in Wonderland a hypothetical world in which the speed of light is 10 km/h. Cyclists, who in such a world obviously move highly relativistically, are always seen by pedestrians as strongly contracted. Is this view always realistic?
I think it's not always true: when the pedestrian moves at the same speed as the cyclist, they no longer see any length contraction. Or am I wrong?
Answer: Exactly. A pedestrian running along the cyclist will see a normal cyclist and a flattened world.
Generally, the perspective of the world will warp as you move in this world: things will not just flatten, but seem to crowd in front of you as if seen through a fish-eye lens. But things moving with you will look normal. | {
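A quick numeric illustration (the speeds are my own choice, not from Gamow's book): the contraction a pedestrian sees depends only on the relative speed.

```python
import math

def gamma(v, c):
    """Lorentz factor for speed v in a world where the light speed is c."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

# Wonderland: c = 10 km/h. A pedestrian watching a cyclist pass at 8 km/h
# sees him contracted to 1/gamma = 60% of his rest length; a pedestrian
# running alongside at the same 8 km/h sees no contraction (v_rel = 0).
contraction = 1.0 / gamma(8.0, 10.0)     # ~ 0.6
no_contraction = 1.0 / gamma(0.0, 10.0)  # exactly 1.0
```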
"domain": "physics.stackexchange",
"id": 93661,
"tags": "special-relativity"
} |
'double' is not a supported type in ROS messages? | Question:
Hello all,
I meet two issues for the data type delcaration:
Q1: I define the message in the srv file as below--but when I check the srv by rossrv show I got error message like:
unknown srv type... Cannot locate message [double] in package with paths... blablabla..
So I cannot define double type data in a srv file? Instead uint32 works OK.. But ROS should support double, correct?
uint8 Number #0/1/2/3
string Name
bool In_Out #(true) Out(False)
double TimeStamp #double??
double Duration #double??
---
string Feedback
bool PaySucc
Q2: In the relevant Server.cpp file, I define a struct data type as below, but this time the IDE (I am using QT Creator w/ ROS plugin) reports an unknown type for "string", while double is supported..
This really confused me.. is this a QT Creator issue?
BTW, is there a reference for the standard data types roscpp supports?
typedef struct Req
{
uint8_t Number; //0/1/2/3
string Name;
bool IN_OUT; //(true) Out(False)
double TimeStamp; //double??
double Duration; //double??
/******/
string Feedback;
bool Succ; //Success(true) Fail (false)
} Req;
Originally posted by macleonsh on ROS Answers with karma: 26 on 2019-05-13
Post score: 0
Answer:
For Q1, see #q205541 and the link to this page therein. In short: in msg/srv definition you need to use float64 as this is a "ROS built-in type". Those are used to autogenerate the code for different languages. In C++ use double.
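For illustration, the srv definition from the question would then read (a sketch, with field names copied from the question; only the two double fields change):

```
uint8 Number        # 0/1/2/3
string Name
bool In_Out         # In (true) / Out (false)
float64 TimeStamp
float64 Duration
---
string Feedback
bool PaySucc
```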
Q2: this is expected behavior, as you are now using C++. This is, double is a C++ primitive type, whereas string is not. Use std::string instead. No fault with QTCreator there.
Edit:
std::string is the C++ string type, whereas std_msgs::String is the C++ representation of a ROS standard message that only contains a string (ROS built-in type) / std::string (C++ type) as a field. Check the std_msgs/String message definition here.
I.e., manipulating the respective data field of the std_msgs::String with c_str() will work.
E.g. (using std::strings empty() method for sake of brevity)
std_msgs::String string_msg;
std::string cpp_string;
cpp_string.empty(); // compiles
string_msg.empty(); // does not compile
string_msg.data.empty(); // compiles
Originally posted by mgruhler with karma: 12390 on 2019-05-14
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by macleonsh on 2019-05-14:
@mgruhler Thanks so much. This is exactly what I want to find--simple and strightforward..
Yes follow your suggestion I solve the issue quickly..
One additional questions:
What is the difference between std::string and std_msgs:String? my initial understand is the 1st one is c++ type and 2nd one is ROS override type? but looks not that simple.
Some small experience that I can manipulate std::string by using c_str() but can not do that to std_msgs:String..
Comment by mgruhler on 2019-05-14:
@macleonsh edited answer above.
If you feel this is answering your question, please mark the answer as correct by clicking the check-mark next to it. | {
"domain": "robotics.stackexchange",
"id": 33011,
"tags": "c++, ros-kinetic, roscpp"
} |
ValueError: not enough values to unpack (expected 4, got 2) | Question: I have written this code
fig, (axis1, axis2,axis3, axis4)=plt.subplots(2,2,figsize=(10,4))
and I am getting this error
ValueError: not enough values to unpack (expected 4, got 2)
I tried many ways to remove this error but all was in vain.
Can you explain to me why I am getting this error?
Answer: Its because you have not looked how the values are packed in plt.subplot function.
>>> plt.subplots(2,2,figsize=(10,4))
(<matplotlib.figure.Figure at 0xa3918d0>, array([[<matplotlib.axes._subplots.AxesSubplot object at 0x000000000A389470>,
<matplotlib.axes._subplots.AxesSubplot object at 0x000000000A41AD30>],
[<matplotlib.axes._subplots.AxesSubplot object at 0x000000000A6F7EB8>,
<matplotlib.axes._subplots.AxesSubplot object at 0x000000000BC232E8>]], dtype=object))
Instead of unpacking all values at once, unpack in steps. You will get a better idea then.
For your solution, to unpack -
>>> fig, [[axis1, axis2],[axis3, axis4]] = plt.subplots(2,2,figsize=(10,4)) | {
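The two-step unpacking can be rehearsed on plain nested lists that mimic the return shape (matplotlib itself is not needed for the exercise):

```python
# Stand-in for the (figure, axes) pair returned by plt.subplots(2, 2):
# a 2x2 grid comes back as one figure plus a 2x2 array of axes,
# not as four loose axes.
fake_subplots = ("figure", [["ax1", "ax2"], ["ax3", "ax4"]])

fig, axes = fake_subplots               # step 1: two top-level values
(axis1, axis2), (axis3, axis4) = axes   # step 2: unpack the two rows
```

With matplotlib itself, the equivalent one-liner is `fig, ((axis1, axis2), (axis3, axis4)) = plt.subplots(2, 2, figsize=(10, 4))`.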
"domain": "datascience.stackexchange",
"id": 3305,
"tags": "pandas, data-science-model, matplotlib"
} |
How to construct a 2-partite matrix | Question: Let's assume we have an internal hamiltonian $H_0 = \mid 1\rangle \langle 1\mid$.
Now let's assume we have two systems with identical Hamiltonians $H^1_0$,$H^2_0$ and I want to compute the joint Hamiltonian
$$H^{(2)}_0 = H_0^1\otimes \mathbb{I}+ \mathbb{I}\otimes H_0^2$$
I have a little bit of a hard time understanding how exactly $H^{(2)}_0$ is constructed.
I approach this the following way
$$H^{(2)}_0 = \begin{bmatrix}
1&0\\0&0\end{bmatrix} \otimes \begin{bmatrix}
1&0\\0&1\end{bmatrix} + \begin{bmatrix}
1&0\\0&1\end{bmatrix} \otimes\begin{bmatrix}
1&0\\0&0\end{bmatrix} $$
$$ =\begin{bmatrix}
1&0&0&0& \\
0&0&0&0& \\
0&0&1&0& \\
0&0&0&0& \end{bmatrix} + \begin{bmatrix}
1&0&0&0& \\
0&1&0&0& \\
0&0&0&0& \\
0&0&0&0& \end{bmatrix} = \begin{bmatrix}
2&0&0&0& \\
0&1&0&0& \\
0&0&1&0& \\
0&0&0&0& \end{bmatrix}$$
Is this correct? I'm doing this mostly via gut feeling. I remember my professor mentioning that depending on where the Hamiltonian is, like in $H_0^1\otimes \mathbb{I}$ where it is on the first entry or $ \mathbb{I}\otimes H_0^2$ where it is on the second entry I can see on what sub-system it acts.
What I'm asking is 1. if my calculations were correct and to give me some conceptual context. Also a link that could explain it to me would be greatly appreciated. (Spent some time looking for it).
Answer: Yes, this looks quite right. I'll offer how I think about constructing such an object.
First, it is usually written sloppily (cough: Griffiths) as
$$H^{(2)} = H^{(1)} + H^{(2)}$$
(please allow me to drop the subscripts). However, if we truly wish to construct the $2$ particle hilbert space we must write
$$H^{(2)} = H^{(1)} \otimes 1^{(2)} + 1^{(1)} \otimes H^{(2)}.$$
where the superscripts indicate on which space ($\mathcal{H}_1$ or $\mathcal{H}_2$) the operators correspond to. Of course, our goal is to explicitly construct the matrix representation of $\mathcal{H}_1 \otimes \mathcal{H}_2$.
We are told that the hamiltonian of a single particle is given by
$$H = | 1 \rangle \langle 1 |$$
Note what has been assumed but not stated explicitly in the problem: (1) We assume that the hamiltonian describes a two state system and (2) that we are choosing the regular representation of that system and (3) $|1 \rangle$ represents the eigenstate of $H$ corresponding to eigenvalue $1$. (Number (3) is actually conventional to assume, but I thought I'd mention it for completeness.)
That is, we are choosing to represent the eigenstates of the 2 state system as
$$ | 1 \rangle \dot{=} {1 \choose 0}$$
This is convenient as it makes the Hamiltonian diagonal.
At any rate, what's very nice about Dirac notation is that, after choosing our representation, all we have to do is just follow the recipe and know basic computational linear algebra. That is,
$$|1 \rangle \langle 1| = {1 \choose 0}(1\ \ 0) = \begin{pmatrix} 1 & 0 \\
0 & 0 \end{pmatrix}. $$
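As a quick numerical sanity check (an aside, not part of the original answer): numpy's `kron` implements the matrix direct product discussed below, and the whole construction can be verified in a few lines:

```python
import numpy as np

H = np.array([[1, 0], [0, 0]])  # |1><1| in the chosen representation
I = np.eye(2, dtype=int)

# H (x) 1 + 1 (x) H on the two-particle Hilbert space
H2 = np.kron(H, I) + np.kron(I, H)
print(H2)  # diag(2, 1, 1, 0)
```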
We may then use the Kronecker product to compute the direct product of two matrices $A$ and $B$ (where $A$ is $m\times n$ and $B$ is $p\times q$)
$$
\mathbf {A} \otimes \mathbf {B} ={\begin{bmatrix}a_{11}\mathbf {B} &\cdots &a_{1n}\mathbf {B} \\\vdots &\ddots &\vdots \\a_{m1}\mathbf {B} &\cdots &a_{mn}\mathbf {B} \end{bmatrix}}.
$$
to construct the complete hamiltonian in the 2-particle hilbert space (which it looks like you did correctly). | {
"domain": "physics.stackexchange",
"id": 54683,
"tags": "tensor-calculus, hamiltonian, matrix-elements"
} |
Bernoulli Principle Confusion | Question: Sorry, beginner's question, but this counter-intuition is boggling my mind. Say I have a hose of a certain diameter. If I squeeze the hose at a certain point, then the pressure applied on that point increases by the action of my hand. Consequently (I think), the speed of the water at that point increases. So how come in Bernoulli's model, when there is a constriction, the static pressure decreases? This is counter-intuitive to me and I can't make sense of it.
Answer: Your hand is exerting a force on an area of the hose, so you are applying a pressure to it. But the consequences for the water flow inside have nothing to do with the stress state of the hose itself.
There is a continuous flow, so the product $Av$ (cross section times velocity) must be constant. In the squeezed region, the cross section is smaller, so the velocity increases there; in other words, the water accelerates.
From the second law of Newton, there must be a net force to produce an acceleration. It comes exactly from the gradient of pressure. So the pressure before is bigger than inside the squeezed region.
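To put numbers on this (a sketch, not part of the original answer; the hose dimensions are made up): continuity fixes the velocities, and Bernoulli's equation then gives the pressure drop inside the constriction:

```python
rho = 1000.0              # water density, kg/m^3
A1, A2 = 4.0e-4, 1.0e-4   # cross sections before / inside the squeeze, m^2
v1 = 1.0                  # upstream speed, m/s

v2 = A1 * v1 / A2                   # continuity: A*v constant -> faster in the squeeze
dp = 0.5 * rho * (v2**2 - v1**2)    # Bernoulli: p1 - p2 > 0, pressure is lower inside

print(f"v2 = {v2} m/s, pressure drop p1 - p2 = {dp:.0f} Pa")
```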
When the water leaves the squeezed region, the effect is the opposite: the velocity decreases, i.e. the water decelerates, and the net force points against the flow. So the pressure after the squeezed region is bigger than inside it. | {
"domain": "physics.stackexchange",
"id": 99110,
"tags": "fluid-dynamics, pressure, bernoulli-equation"
} |
How are aquifers found and traced? | Question: Recently a line from a news article about an aquifer in the path of a tunnel boring machine for Metro networks caught my attention:
Ahead of tunnelling, the builders conduct hydrogeology, a study of water flow in aquifers and characterization of
aquifers. During our study, the aquifer wasn’t identified,” said an engineer.
This led me to wonder how aquifers and underground water flow are detected, studied and mapped out. I searched the net but couldn't find anything relevant.
So, how are aquifers detected and their courses mapped?
Answer: Aquifers are relatively permeable zones of material that transmit water. Common aquifer materials include layers of unconsolidated sedimentary rock, like sands and gravels; and poorly cemented “bedrock” units like sandstone. Interconnected solution cavities in limestone, called karst, are common in some areas. Calling something an “aquifer” generally implies that the unit is capable of producing enough water via a well to be of interest to humans.
Aquifers are typically delineated using geologic and geophysical logs of boreholes from wells and test holes. These kinds of logs are typically required to be provided to a specified public agency for future use by others. Using these logs, a geologist can create cross sections of an area to evaluate the extent and thickness of these units. This is done keeping in mind the geometry of how these were originally deposited, such as linear streams, sheet-like sands, etc.
Surface geophysical techniques can also be used to assess subsurface conditions, but typically require “ground truth.” I remember asking a geophysicist what his data were telling him about an area, and he said, “I won’t know until you put some drill holes in.” It wasn’t as bad/funny as it seems. He could help color in between the data points, but first he needed to know what the specific geophysical response he was seeing meant as far as real rock present in the subsurface.
It should be rare to hit an aquifer zone unknowingly, as you described. It’s possible they hit a permeable fault zone, which can be tricky to know about beforehand. But I need to mention that engineers are notorious for keeping geologists out of the loop until a problem arises. They often don’t want to pay for upfront studies, the kind that geologists use to assess such things. | {
"domain": "earthscience.stackexchange",
"id": 1844,
"tags": "hydrology, hydrogeology, underground-water"
} |
Should I normalise my data if future unseen data may have a different range? | Question: I'm new to ML and researching data prep, more specifically feature normalisation.
My question is whether it's a good idea to normalise data when its range may change over time?
For example, if I'm trying to predict stock prices: in my training dataset the prices range from 100 to 200, and later (in unseen data) they could reach 300.
Should I normalise the training data?
Answer:
whether it's a good idea to normalise data when its range may change over time?
Short answer : Yes
Long answer : It is generally a good idea to scale/normalise your data in machine learning.
However, I am not sure what you mean when you say :
when its range may change over time?
I would assume that you are asking whether the range of the dependent variable (y) changes over time. Note that you don't necessarily scale the dependent variable (y), just the independent variables. So, in this case, you won't have to do anything additional.
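To make the train-only fitting point concrete (a sketch using plain numpy; the numbers are made up): the scaling statistics come from the training data alone and are then re-applied, unchanged, to unseen data, even if that data falls outside the training range:

```python
import numpy as np

train_prices = np.array([100.0, 150.0, 200.0]).reshape(-1, 1)
unseen_prices = np.array([250.0, 300.0]).reshape(-1, 1)

# Fit the scaler on training data only
mu, sigma = train_prices.mean(axis=0), train_prices.std(axis=0)

train_scaled = (train_prices - mu) / sigma
unseen_scaled = (unseen_prices - mu) / sigma  # same mu/sigma, no refitting

# Unseen values simply land outside the training range of scaled values,
# which is fine for most models.
print(unseen_scaled.ravel())
```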
However, there is one boundary case that you might need to consider : If the change in range of dependent variable (y) is because of some change in distribution of data, you might need to go back, do the necessary data preparation and hence the normalisation and train your model again. | {
"domain": "datascience.stackexchange",
"id": 9716,
"tags": "machine-learning, supervised-learning, normalization"
} |
Derivation of $d\theta = ds/r$ | Question: I was reading about Uniform Circular motion and I came across this formula: $d\theta = ds/r $. ($r$ being the radius, $d\theta$ being the angle swept by the radius vector and $ds$ being the arc length)
I thought that the formula is basically the definition of radian measure. But deeper research led me to the following derivation:
What is $dr$ and how did we get $ds^2 = dr^2 + r^2 d\theta^2$?
I know this is very basic, but there was no image to represent this pictorially and I couldn't get many results googling these formulae.
Answer: This is a formula used to find the arc lengths swept in polar-coordinates. A geometrical proof is as follows:
Taking a very small section of a curve, we can approximate one side as $r\,d\theta$ and the other side as $dr$; the arc length is then approximately equal to the hypotenuse, and by the Pythagorean theorem we get the expression $ds^2 = dr^2 + r^2\,d\theta^2$.
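Specializing to uniform circular motion (my own filling-in, not part of the original answer): the radius is constant there, so $dr = 0$ and the polar line element reduces to the formula the question asks about:

```latex
ds^2 = dr^2 + r^2\,d\theta^2
\quad\xrightarrow{\;dr \,=\, 0\;}\quad
ds^2 = r^2\,d\theta^2
\quad\Longrightarrow\quad
ds = r\,d\theta
\quad\Longrightarrow\quad
d\theta = \frac{ds}{r}
```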
Apologies in advance for not using MathJax, and I hope you understand my handwriting; here's the algebraic proof: | {
"domain": "physics.stackexchange",
"id": 47725,
"tags": "kinematics, coordinate-systems, geometry"
} |
What's the normal force on a squishy ball on an inclined track? | Question: In this article...
https://billiards.colostate.edu/physics/Domenech_AJP_87%20article.pdf
...which analyzes rolling friction on a rolling ball, the author claims that the normal force on a ball rolling down an incline is given by:
Where $R_b$ is the regular radius of the ball and $R_e$ is the effective radius of the ball, which is less than the regular radius since the ball squishes under its own weight.
But I don't understand. Why does the fact that the ball "squishes" as it rolls change the magnitude of the normal force?
Answer: Look at their Fig. 3. The normal forces are not directed orthogonally to the inclined plane. You might say that their vector sum is orthogonal to the inclined plane, but I guess the components tangential to the plane still affect the rolling friction in their first formula. | {
"domain": "physics.stackexchange",
"id": 54646,
"tags": "newtonian-mechanics, forces, rotational-dynamics"
} |
Sharing non-ROS code between packages in catkin | Question:
I'm currently writing 3 separate packages for 3 different gripper designs. Under the hood, however, they all use the same UDP interface for sending and receiving commands, data, etc.
What is the recommended way to share the common code between those packages? Is standard practice just to add a separate, non-package directory in the catkin workspace for building a shared library containing common code?
Originally posted by rkeatin3 on ROS Answers with karma: 156 on 2017-11-09
Post score: 2
Answer:
Three options:
1. make your shared code a ROS package (i.e.: add a package.xml and use catkin_package(..) in a CMakeLists.txt)
2. install shared code "system wide" and treat it as a system dependency
3. create a "plain CMake" package (see REP-134)
My preference is always option 2, especially if the shared code does not have any ROS dependencies.
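To make option 2 concrete, here is a hedged sketch of the consumer side (all names such as gripper_udp are hypothetical, not from the original answer): once the shared library is installed under /usr/local, each gripper package's CMakeLists.txt can locate it like any other system dependency:

```cmake
# Sketch only. Assumes the shared UDP code was built as a plain CMake project
# and installed system-wide; "gripper_udp" is a made-up library name.
find_path(GRIPPER_UDP_INCLUDE_DIR NAMES gripper_udp/client.h)
find_library(GRIPPER_UDP_LIBRARY NAMES gripper_udp)

include_directories(${catkin_INCLUDE_DIRS} ${GRIPPER_UDP_INCLUDE_DIR})

add_executable(my_gripper_node src/my_gripper_node.cpp)
target_link_libraries(my_gripper_node
  ${catkin_LIBRARIES}
  ${GRIPPER_UDP_LIBRARY})
```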
Originally posted by gvdhoorn with karma: 86574 on 2017-11-09
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by rkeatin3 on 2017-11-09:
A non-package directory is an option as well, right? Though that would necessitate building with catkin_make_isolated as opposed to catkin_make?
I was hesitant to make another package for the common code (since it's not really standalone), but it's better than option 2 for me in this instance.
Comment by rkeatin3 on 2017-11-09:
Thanks for your help!
Comment by gvdhoorn on 2017-11-09:
You're right, there is a third option (and probably some others as well). I've added it to the list.
Comment by gvdhoorn on 2017-11-09:
Note that it's really easy to create a Debian pkg with something like checkinstall. That makes things immediately usable as system dependency.
Comment by gvdhoorn on 2017-11-09:
re: not really stand-alone: I'm not sure how that matters. Library code is almost never stand-alone (ie: doesn't do anything by itself if not used by something else).
Comment by gvdhoorn on 2017-11-09:
Btw: do the designs change the way the shared code is used? If not, it might make sense to create a stand-alone 'driver' node that gets parameterised by the three packages that provide info about the gripper designs.
Comment by rkeatin3 on 2017-11-09:
The shared code just does the socket/UDP stuff, and each gripper will likely have a different interface (though that isn't nailed down yet). The way I have it architected now, each gripper can be controlled with or without ROS, so I'll probably want to stick with that (as opposed to strictly...
Comment by rkeatin3 on 2017-11-09:
coupling the shared code to ROS).
Comment by gvdhoorn on 2017-11-10:
The shared code just does the socket/UDP stuff
to me this sounds like an ideal candidate for a system dependency.
Note that that is nothing magical, it just means that you install the headers and libs in /usr/local, for instance, and make sure CMake can find them there. | {
"domain": "robotics.stackexchange",
"id": 29329,
"tags": "catkin"
} |
How to make my robot move parallel to edge of the table? | Question: I have an autonomous differential drive robot that moves on the floor and its purpose is to always move parallel to the longer edge of the table (just assume that the table never ends). It looks something like :
Now, the robot will be using this same method to align itself to the table edge for the first time. Also, when it travels forward (along the edge), it is prone to deviate from its path due to various reasons. Again, this method will guide it to stay on the right track.
I can't use any line-following method.
Answer:
I can't use any line-following method.
Actually, you are quite wrong, the way I understand the problem. It is line-following; it is just that the line is not painted (like on the road). The line is the edge of the table.
The way I see it, your robot needs to look up, and detect the line "painted" by the edge of the table on the ceiling. Once the line is detected, just follow it.
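As a hedged sketch of the "just follow it" step for a differential drive (the gains, sign conventions, and the source of the offset measurement are all placeholder assumptions, not from the original answer):

```python
def wheel_speeds(lateral_offset_m, heading_error_rad,
                 v_forward=0.2, k_offset=2.0, k_heading=1.0, track_width=0.3):
    """Proportional steering toward the detected edge line.

    lateral_offset_m: signed distance from the robot to the line (sign
        convention is arbitrary here); heading_error_rad: signed angle
        between robot heading and the line.
    Returns (left_wheel_speed, right_wheel_speed) in m/s.
    """
    omega = -k_offset * lateral_offset_m - k_heading * heading_error_rad
    left = v_forward - omega * track_width / 2
    right = v_forward + omega * track_width / 2
    return left, right

# On the line and aligned: both wheels run at the same speed.
print(wheel_speeds(0.0, 0.0))
```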
I have in mind 2 ways to detect the edge of the table.
Contrast based: using an imaging sensor (e.g., camera), detect the edge of the table by detecting the contrast / color change. Based on this, adjust the trajectory.
Distance measurement: an IR sensor (in practice, more than one) can be used to measure the distance, vertically, to the first obstacle (table or ceiling). It should be easy to do, considering that the ceiling is significantly higher than the table. | {
"domain": "robotics.stackexchange",
"id": 2225,
"tags": "sensors, differential-drive, precise-positioning"
} |
Context-free pumping lemma of $a^nb^n$ | Question: I know $a^nb^n$ with $n\geq0$ is considered a context-free language, but if I try:
Using pumping length $p = 3$
$n = p$, thus we have $aaabbb$
$u =aa$ and $y = bb$
$v = a$, $w = b$ and $x=λ$, then $|vwx|=2\leq p=3$ and $|vx| = 1 \geq 1$
$uv^iwx^iy \notin L$, for instance, with $i=2$ we have:
$$aaaabbb$$
I know I'm wrong in some part of the process; that's why I'm attempting to 'break' the lemma, to fully understand it.
Answer: You picked a wrong decomposition. Similarly to the pumping lemma for regular languages, the pumping lemma for context-free languages states that for every context-free language $L$, there exists some legal decomposition for every string $s$ with $|s| \geq p$ where $p$ is a pumping length of $L$.
By legal decomposition, I mean dividing $s$ into partitions $uvwxy$ such that
$|vx| \geq 1$
$|vwx| \leq p$
$uv^nwx^ny \in L$ for all $n \in \mathbb{N}_0$
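One way to sanity-check a candidate decomposition against all three conditions (a sketch, not from the original answer; the decomposition below is one legal choice for $s = aaabbb$, $p = 3$):

```python
def in_L(s):
    """Membership test for L = { a^n b^n : n >= 0 }."""
    n = len(s) // 2
    return s == "a" * n + "b" * n

u, v, w, x, y = "aa", "a", "", "b", "bb"  # one legal decomposition of "aaabbb"
p = 3

assert u + v + w + x + y == "aaabbb"
assert len(v + x) >= 1                                         # condition 1
assert len(v + w + x) <= p                                     # condition 2
assert all(in_L(u + v * n + w + x * n + y) for n in range(6))  # condition 3
print("all pumping conditions hold")
```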
In the case of the string $aaabbb$ with $p = 3$, a legal decomposition would be e.g. $u = aa$, $v = a$, $w = \epsilon$, $x = b$, $y = bb$: then $|vx| = 2 \geq 1$, $|vwx| = 2 \leq p$, and $uv^nwx^ny = a^{2+n}b^{2+n} \in L$ for all $n$. | {
"domain": "cs.stackexchange",
"id": 19857,
"tags": "context-free, pumping-lemma"
} |
Catkin/Ros “undefined reference to” | Question:
I'm trying to build a project with ROS, but I keep getting "undefined reference to <Class::Function>" errors, for example:
CMakeFiles/robot_controller_node.dir/src/Node_Robot_Controller.cpp.o: in function 'main':
Node_Robot_Controller.cpp:(.text+0x1f1): undefined reference to 'trajectoryclass::trajectoryclass(ros::NodeHandle)'
Node_Robot_Controller.cpp:(.text+0x796): undefined reference to 'List_Container::getElement(int)'
Node_Robot_Controller.cpp:(.text+0x7ab): undefined reference to 'Task_Interface::getTaskDescription()'
Node_Robot_Controller.cpp:(.text+0x7d1): undefined reference to 'List_Container::getElement(int)'
Node_Robot_Controller.cpp:(.text+0x7d9): undefined reference to 'Task_Interface::getTaskId()'
Node_Robot_Controller.cpp:(.text+0x83a): undefined reference to 'List_Container::getElement(int)'
Node_Robot_Controller.cpp:(.text+0x977): undefined reference to 'List_Container::next()'
Node_Robot_Controller.cpp:(.text+0xa60): undefined reference to 'List_Container::getElement(int)'
Node_Robot_Controller.cpp:(.text+0xa68): undefined reference to 'Task_Interface::getTaskId()'
Node_Robot_Controller.cpp:(.text+0xab5): undefined reference to 'List_Container::getElement(int)'
Node_Robot_Controller.cpp:(.text+0xadd): undefined reference to 'List_Container::isEmpty()'
/home/tcozic/Documents/git_ur10/catkin_ws/devel/lib/librobot_controller_library.so: undefined reference to 'error_norm(std::vector >, std::vector >)'
/home/tcozic/Documents/git_ur10/catkin_ws/devel/lib/librobot_controller_library.so: undefined reference to 'Position_Joint::getVector()'
collect2: error: ld returned 1 exit status
make[2]: *** [/home/tcozic/Documents/git_ur10/catkin_ws/devel/lib/ur10/robot_controller_node] Error 1
make[1]: *** [ur10/CMakeFiles/robot_controller_node.dir/all] Error 2
make: *** [all] Error 2
Invoking "make install -j4 -l4" failed
This is my CMakeLists.txt for the compilation of this package:
cmake_minimum_required(VERSION 2.8.3)
project(ur10)
set(MSG_DEPS
std_msgs
sensor_msgs
geometry_msgs
trajectory_msgs
moveit_msgs
)
find_package(catkin REQUIRED COMPONENTS
message_generation
moveit_core
moveit_ros_planning
moveit_ros_planning_interface
dynamic_reconfigure
moveit_ros_move_group
pcl_conversions ##adding
pcl_msgs ##adding
roscpp
rospy
roslib
#tf
#urdf
genmsg
${MSG_DEPS}
)
find_package(VISP REQUIRED)
find_library(VISP_LIBRARIES NAMES visp HINTS ${VISP_LIBRARY_DIRS} )
find_package(PCL REQUIRED)
find_package(OpenCV REQUIRED)
find_package( ur_package REQUIRED)
## Generate messages in the 'msg' folder
add_message_files(
FILES
Task_move.msg
Task_wait.msg
Piece.msg
Task.msg
Task_tool.msg
)
# Generate services in the 'srv' folder
add_service_files(
FILES
Validation.srv
NewPiece.srv
)
## Generate added messages and services with any dependencies listed here
generate_messages(
DEPENDENCIES
std_msgs # Or other packages containing msgs
)
catkin_package(
INCLUDE_DIRS include
LIBRARIES ${PROJECT_NAME}
CATKIN_DEPENDS message_runtime roscpp rospy roslib moveit_core moveit_ros_planning_interface #moveit_plan_execution
moveit_trajectory_execution_manager moveit_ros_planning moveit_planning_scene_monitor ${MSG_DEPS}
DEPENDS VISP OpenCV
)
###########
## Build ##
###########
include(CheckCXXCompilerFlag)
CHECK_CXX_COMPILER_FLAG("-std=c++11" COMPILER_SUPPORTS_CXX11)
CHECK_CXX_COMPILER_FLAG("-std=c++0x" COMPILER_SUPPORTS_CXX0X)
if(COMPILER_SUPPORTS_CXX11)
set(CMAKE_CXX_FLAGS "-std=c++11")
elseif(COMPILER_SUPPORTS_CXX0X)
set(CMAKE_CXX_FLAGS "-std=c++0x")
else()
message(FATAL_ERROR "The compiler ${CMAKE_CXX_COMPILER} has no C++11 support. Please use a different C++ compiler. Suggested solution: update the pkg build-essential ")
endif()
## Specify additional locations of header files
## Your package locations should be listed before other locations
include_directories(include ${catkin_INCLUDE_DIRS})
include_directories(${ur_package_INCLUDE_DIRS})
## Declare a C++ library
add_library(robot_controller_library
src/List_Container/List_Container.cpp
src/Piece_Description/Piece.cpp
src/Robot_Description/Robot.cpp
src/Robot_Description/Tool.cpp
src/Robot_Description/Tool_IO_Config.cpp
src/Position/Position_Interface.cpp
src/Position/Position_Joint.cpp
src/Position/Position_TCP.cpp
src/Task/Task_Move.cpp
src/Task/Task_Tool.cpp
src/Task/Task_Wait.cpp
)
add_dependencies(robot_controller_library ur10_gencpp)
# Declare a C++ executable
add_executable(robot_controller_node src/Node_Robot_Controller.cpp)
#add_dependencies(robot_controller ${${PROJECT_NAME}_EXPORTED_TARGETS} ${catkin_EXPORTED_TARGETS})
target_link_libraries(robot_controller_node robot_controller_library ${catkin_LIBRARIES} ${VISP_LIBRARIES})
All *.cpp files are classes, with their own headers in the include/ur10/[Directory_name]/ directories, except for the Node_Robot_Controller.cpp
This is my first big project with ROS and I don't understand where the problem comes from....
Thanks in advance for your help !
Edit : I tried to compile the library without the node:
## Declare a C++ library
set(SOURCE_robot_controller_library
src/List_Container/List_Container.cpp
src/Piece_Description/Piece.cpp
src/Robot_Description/Robot.cpp
src/Robot_Description/Tool.cpp
src/Robot_Description/Tool_IO_Config.cpp
src/Position/Position_Interface.cpp
src/Position/Position_Joint.cpp
src/Position/Position_TCP.cpp
src/Task/Task_Move.cpp
src/Task/Task_Tool.cpp
src/Task/Task_Wait.cpp
src/Task/Task_Interface.cpp
)
add_library(Robot_controller_lib ${SOURCE_robot_controller_library})
add_dependencies(Robot_controller_lib ur10_gencpp)
target_link_libraries(Robot_controller_lib
${catkin_LIBRARIES}
${VISP_LIBRARIES})
I don't have any errors, but I keep getting the reference error when I try to add the compilation of the node:
add_executable(robot_controller_node src/Node_Robot_Controller.cpp)
add_dependencies(robot_controller_node ${${PROJECT_NAME}_EXPORTED_TARGETS} ${catkin_EXPORTED_TARGETS})
target_link_libraries(robot_controller_node
Robot_controller_lib
${catkin_LIBRARIES}
${VISP_LIBRARIES})
PS: I can't post the code here, it is for a professional project.
Could it be a problem with the use of inheritance or virtual functions with catkin? The classes List_Container and Task_Interface have virtual functions and inherit from other classes. I have already made sure that all functions are implemented.
Originally posted by sumperfees on ROS Answers with karma: 77 on 2016-06-01
Post score: 0
Original comments
Comment by ahendrix on 2016-06-01:
"Undefined reference" is a linker error; it means that you used a function or class from a header, but you either didn't link to the library that implements that function or class, or if it's a function from your code, you didn't implement it.
Comment by sumperfees on 2016-06-02:
I just checked, all the functions in errors are implemented,
for example:
//header
//header
template <class T> class List_Container { .....; T * getElement(int i); ..... };
//cpp
template <class T>
T * List_Container<T>::getElement(int i) { return this->list[i]; }
list is a vector of T
Answer:
You cannot implement template classes in your cpp file; they must be implemented in the header.
Have a look at http://stackoverflow.com/questions/495021/why-can-templates-only-be-implemented-in-the-header-file for one post that explains the details, or just perform a Google search. This is a well documented limitation of C++.
Originally posted by ahendrix with karma: 47576 on 2016-06-02
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 24783,
"tags": "ros, c++, catkin"
} |
Closed surface intuition | Question:
In topology, a closed surface is simply defined to be a surface that has no boundary, as opposed to open surfaces.
This is the layman's definition of closed surface. Example is notably a sphere.
But I am unable to grasp the sense of the definition. How can a surface have no boundary? What does this actually mean? I'm not getting the intuition. Please help.
Answer: If you had a spherical piece of paper, any point on the paper would be surrounded by paper on two dimensions. You could cut out a little circle with that point in the center. If you had a normal sheet of paper, most of the paper would be like that, but there'd be a boundary where the points only have paper on one side and you could only cut out a semicircle. That's what "boundary" means when dealing with surfaces.
Unfortunately, the definition you're showing is incomplete. A closed surface must also be compact. My favorite definition would be really difficult to explain, but if you're not using some really weird way of measuring distance, a simpler one will suffice. It must be closed and bounded (no relation to the "closed" and "boundary" I already mentioned). "Closed" here means that any point not on the paper is completely surrounded by points not on the paper, so you can't just have a normal sheet of paper where only the edge is missing so it technically has no boundary. "Bounded" means that it doesn't go on forever in any direction, so a plane wouldn't count.
Edit:
I think it's probably good to explain why compact is a thing. If you look at an open interval from zero to one, it's bounded. It doesn't go on forever. But you can take a continuous function of it (which preserves all the sorts of structures mathematicians love) and get something that goes on forever. For example, $f(x) = 1/x$ is continuous on that interval, and maps it to the open interval $(1,\infty)$. If you use a closed interval, you can't do that. Any continuous function of $[0,1]$ will map it to a bounded set. You could say $1/0 = \infty$, and topologists frequently do that, but adding an infinity like that messes around with the structure of the real line so much that you're less making $[0,1]$ infinite than you are making the real line finite.
Compact means that you're dealing with a set in which being finite is inherent to the structure in a way that can't be changed by something as simple as a continuous function.
A closed surface is one that doesn't go on forever but also doesn't have edges. It just loops around on itself like a sphere. | {
"domain": "physics.stackexchange",
"id": 20593,
"tags": "electrostatics"
} |
Given a number of points, generates all paths between all points that does not overlap | Question: I wrote this code in 4 days. I want to see if I can optimize this code some more, and get some suggestions on what I can improve.
This code takes in any number of points and names them alphabetically; when the number of points exceeds 26, the letters repeat themselves (AA, BB, CC, then AAA, BBB, CCC, etc.). It then generates all paths between these points through combinations.
['A B', 'A C', 'A D', 'A E', 'B C', 'B D', 'B E', 'C D', 'C E', 'D E']
# This is only 5 points
The code:
class Distance:
def __init__(self):
pass
def possiblePathsGeneration(self, numOfPoints = 5):
flag = False
alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
accumulator = 0
terminateNum = 0
paths = []
points = []
temporaryStorage = ""
flag2 = 0
self.numOfPoints = numOfPoints
if self.numOfPoints > 26: # To see if it exceeds the alphabet
points = list(alphabet[:26]) # filling in the list with existing alphabet
for x in range(2, ((self.numOfPoints - 1) // 26) + 2): # How many times the letter will repeat itself (You will see in the output what i mean)
for y in alphabet[:26]: # repeats over the alphabet
if flag2 == 1: # Prevents another whole alphabet generating
break
if self.numOfPoints % 26 > 0 and (self.numOfPoints - 26) // 26 < 1: # To see if it has any spare letters that does not form a full alphabet
terminateNum = self.numOfPoints % 26 # calculates how many spare letters are left out and sets it as a termination number for later
flag = True # Sets a flag which makes one of the if statments false which allows execution of later programs
else:
terminateNum = (self.numOfPoints - 26) // 26 # Getting the times that the alphabet has to iterate through
if flag == True and self.numOfPoints % 26 > 0 & (self.numOfPoints - 26) // 26 < 1: # To see if we have a whole alphabet
break
if accumulator >= terminateNum: # Determines when to leave the loop
break
points.append(y * x) # Outputs point
accumulator += 1
if flag != True & accumulator != terminateNum | accumulator <= terminateNum: # Determines if we have more whole alphabets
continue
terminateNum = self.numOfPoints % 26 # Resets number of letters to generate
for y in alphabet[:terminateNum]: # outputs the spares
flag2 += 1
if flag2 == 1 and not(self.numOfPoints < 52): # prevents generation of extra letters
break
points.append(y * x)
else:
points = list(alphabet[:self.numOfPoints])
temporaryPoints = [x for x in points]
for x in points:
for y in temporaryPoints[1:]:
paths.append(x + " " + y)
temporaryStorage = x + " " + y
yield temporaryStorage
temporaryPoints.pop(0)
distance = Distance()
print([x for x in distance.possiblePathsGeneration()])
I have tested this code a few times, it doesn't have any bugs that I know of.
The reason I use classes is that this is a part of the actual code, and using classes is convenient for later when I want to do some calculations.
Answer:
Strings
Instead of hard-coding the alphabet string, you should use the Python standard library.
import string
alphabet = string.ascii_uppercase
print(alphabet)
This will reduce the likelihood of typos.
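As an aside (my own sketch, not part of the original answer): combined with itertools, the whole repeated-letter naming scheme from the question (A..Z, then AA, BB, ...) collapses to a few lines:

```python
import itertools
import string

def point_labels(n):
    """First n labels: A..Z, then AA, BB, ..., ZZ, then AAA, ..."""
    labels = []
    for repeat in itertools.count(1):          # 1 letter, then 2, then 3, ...
        for letter in string.ascii_uppercase:
            if len(labels) == n:
                return labels
            labels.append(letter * repeat)

print(point_labels(28))  # [..., 'Y', 'Z', 'AA', 'BB']
```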
Naming
One thing I would consider is your naming objects. For instance, you name the class Distance, yet I don't see anything related to a measurement between two points. This could possibly be only part of the code, and you will have a measurement function, but then the class isn't strictly a measurement tool and also generates paths. So perhaps it should have a different name either way.
I might be overthinking it, but it will help with readability when it comes time for others to pick up your code and understand what it's supposed to do.
iteration
Another Python tool to look at is itertools. It gives you better tools for iterating over lists and the like; in your case it has a combinations function.
import itertools
path = []
points = ['a', 'b', 'c', 'd', 'e']
for combo in itertools.combinations(points, 2):
path.append(combo)
This approach also stores your end result as a list of tuples, each path being a tuple of two points rather than a string. I personally dislike storing anything as a string, unless absolutely necessary or if it's actually a word/sentence. | {
"domain": "codereview.stackexchange",
"id": 43055,
"tags": "python, combinatorics"
} |
How can I assign a genomic region into a window using R? | Question: I have asked this question at BioStars but have not gotten any suggestions so far. So, I am asking the community here, perhaps someone here has an idea how I can solve my problem.
I am dealing with a bunch of inversions and running some analysis on each of them. One of them is located on chromosome one, the starting position is 6199548 and ending position is 9699260. So the length of this variant is 3499712 bp. I want to generate non-overlapping windows based on the length of these focal positions (3499712) and assign the focal position to one of these windows that overlap with the starting and ending position of my focal position. I am achieving this by using the GenomicRanges::tileGenome function in R:
seqlength <-c(chr1=159217232,chr2=184765313,chr3=182121094,chr4=221926752,chr5=187413619)
bin_length<-c(3499712)
bins <- tileGenome(seqlength, tilewidth=bin_length, cut.last.tile.in.chrom=TRUE)
#bins_u<-unlist(bins)
write.table(bins, file=paste("bins_good_", bin_length, sep=""), col.names=FALSE, row.names=TRUE)
This generates non-overlapping 3499712 bps windows across the genome.
window_id chromosome start end length
"1" "chr1" 1 3499712 3499712
"2" "chr1" 3499713 6999424 3499712
"3" "chr1" 6999425 10499136 3499712
"4" "chr1" 10499137 13998848 3499712
"5" "chr1" 13998849 17498560 3499712
However, I have a major problem here that I have not been able to sort out. I want one of these windows to overlap exactly with the starting and ending positions of my focal position (6199548:9699260): only the focal position, not any other position outside this region. This means two windows (one before and one after the focal position) will have a different length, which my analysis is fine with. I just do not know how to do it.
This is my desired output:
window_id chromosome start end length
"1" "chr1" 1 3499712 3499712 "*"
"2" "chr1" 3499713 6199547 2699834 "*"
"3" "chr1" 6199548 9699260 3499712 "*"
"4" "chr1" 9699261 13998848 4299587 "*"
"5" "chr1" 13998849 17498560 3499712 "*"
As you can see, the second window is now smaller, the third window accommodates the focal position, and the fourth window is a little larger than the bin size. Does anyone have any idea how I can do this task in R?
Answer: I'm not sure how you did your calculation but for me it looks like your region is 3499713 bp in length.
site <- GRanges("chr1:6199548-9699260")
seqlength <- c(chr1=159217232,chr2=184765313,chr3=182121094,chr4=221926752,chr5=187413619)
bin_length <- width(site)
## check length of region
print(bin_length)
##3499713
bins <- tileGenome(seqlength, tilewidth=bin_length, cut.last.tile.in.chrom=TRUE)
## extract overlapping bins
flank.bins <- subsetByOverlaps(bins,site)
## find new coordinates excluding region of interest
new.flank.bins <- setdiff(reduce(flank.bins),site)
## combine new bins with your region of interest
new.bins <- c(new.flank.bins,site)
## remove old bins
bins <- subsetByOverlaps(bins,new.bins,invert=T)
## add new bins
bins <- sort(c(bins,new.bins))
## check that it makes sense
##table(width(bins[seqnames(bins) == 'chr1']))
write.table(bins,file=paste("bins_good_",bin_length,sep=""),col.names= FALSE, row.names = TRUE) | {
"domain": "bioinformatics.stackexchange",
"id": 906,
"tags": "r"
} |
What is a fracton? | Question: Recently, new objects have appeared in articles on QFT and condensed matter -- fractons.
As I understand it, a fracton is a particle with restricted motion: for example, such excitations can move only along a line.
If such excitations have restricted motion, how do they deal with the uncertainty principle?
Could somebody present a simple model in which such excitations appear?
Also, it would be very interesting to understand what role such excitations play in condensed-matter systems.
Answer: These[1][2] review papers contain good introductions to fractons.
Generally, there is no uncertainty principle for such systems. This is because these fractons are usually emergent particles. For example, we can think of a domain wall excitation in a 1D Ising model as a particle, but this does not have any uncertainty principle associated with it.
The prototype model for fractons is the X-cube model (gapped). Another class of models can be constructed using tensor gauge theories (gapless). Both of these are discussed in those papers and references therein.
These systems have many interesting properties. The constrained motion of these excitations make thermalization slow and the dynamics glassy (here), can be used as a system to store quantum information due to its robust sub-extensive ground state degeneracy, and also has interesting connections to toy models of gravity (here). It is also interesting because it is really a new phase of matter we haven't seen before, and has expanded the problem of classification of phases of matter. I am not aware of any experimental realizations of fractons as of now. | {
"domain": "physics.stackexchange",
"id": 69471,
"tags": "quantum-field-theory, particle-physics, condensed-matter, heisenberg-uncertainty-principle, virtual-particles"
} |
Add item at the beginning of each inner array in jagged array | Question: So, there is a big jagged string array (~ [120][1 000 000]) here that represents data from an Excel worksheet (columns/rows).
Task: we have to prepend an item at the beginning of each inner array.
I have a solution, but I am not sure about performance.
private string[][] AddField(string[][] exportDataFieldValues, string linkDisplayFieldValue)
{
var newExportDataFieldValues = new List<string[]>();
foreach (var row in exportDataFieldValues)
{
var cellsRange = row.ToList();
cellsRange.Insert(0, linkDisplayFieldValue);
newExportDataFieldValues.Add(cellsRange.ToArray());
}
return newExportDataFieldValues.ToArray();
}
Q: As you can see, my solution contains a lot of type conversions; I believe there is a more efficient solution. Can you provide one?
Answer: I suspect your code will perform badly.
I would suggest creating the cellsRange object as an empty List<string>, adding linkDisplayFieldValue to it first, and then calling AddRange(row) on cellsRange (the row array can be passed directly; no ToList() conversion is needed).
"domain": "codereview.stackexchange",
"id": 31303,
"tags": "c#, algorithm, array, .net"
} |
How were the oil drops in the millikan oil drop experiment negatively charged? | Question:
In this picture (all others on Google are the same), the positive plate is at the top and the negative plate is on the bottom. This means that for this experiment to make any sense, the oil drops must be negatively charged. If this is the case, then why are X-rays used to charge the oil droplets? Surely, since X-rays are ionising, they would remove electrons, making the oil droplets positively charged.
Answer: Once you have an X-ray source you have electrons present in your setup, be it by ionising the electrodes or the oil itself. But where do these ionised electrons go? They are present in the bulk of the system.
The ones that stick around an oil droplet are the ones that need to be observed, because they are the ones that will be balanced under gravity-electric equilibrium.
The positively charged droplets don't enter the chamber between the electrodes anyway, and even if they do, they won't be in equilibrium, as gravity and the electric force act in the same direction, and thus won't be considered for observation.
"domain": "physics.stackexchange",
"id": 79179,
"tags": "electrons"
} |
What do ants see? | Question: After watching some ants in my garden today, and then looking at this very illuminating demonstration, I got to wondering about what they would see. Not specifically ants (I understand their eyesight is quite poor), but similarly small, or even smaller, creatures.
I guess I'm asking more about the nature of light and how photons are reflected off very small surfaces. Would a very small creature with vision, like say an ant, be able to see something as small as a single E. coli bacterium? Or a virus? Would their world 'look' the same as ours, or does the viewer's relative size have a bearing on the quality of their perception?
And additionally beyond the realm of reality, if I could shrink myself down to the size of a bacterium, could I see atoms?
Answer: The other answers to the effect that one needs big optics to see fine detail are indeed true for conventional imaging optics that sense the electromagnetic farfield or radiative field, i.e. that whose Fourier component at frequency $\omega$ can be represented as a linear superposition of plane waves with real-valued wave-vectors $(k_x,\,k_y,\,k_z)$ with $k_x^2+k_y^2+k_z^2 = k^2 = \omega^2/c^2$. This is the kind of field which the Abbe diffraction limit applies to, and it limits "eyes" like our own comprising imaging optics and retinas, or even compound eyes like those of an ant.
However, this is not the whole electromagnetic field: very near to the objects that interact with it, the electromagnetic field includes nearfield or evanescent field components. These are generalised plane waves for which:
The component of the wavevector in some direction $k_\parallel$ is greater than the wavenumber $k$ and can thus encode spatial variations potentially much smaller than a wavelength;
The component of the wavevector $k_\perp$ orthogonal to this direction must therefore be imaginary, so that $k_\parallel^2 + k_\perp^2 = k^2$ can be fulfilled.
So such fields decay exponentially with distance from the disturbance to the electromagnetic field that begat them and thus cannot normally contribute to an image formed by an imaging system.
However, if you can bring your image sensors near enough to the disturbance, you can still register the detail encoded in the finer-than-wavelength evanescent components. This is the principle of the Scanning Nearfield Optical Microscope.
The near-field optical microscope sensor can be extremely small indeed, so that a bacterium-sized lifeform could register below-wavelength detail in the World around it with receptors built of a few molecules, as long as the lifeform were near enough to the detail in question. Note that when $k_\parallel > k$ the fields decay like $\exp(-\sqrt{k_\parallel^2-k^2}\, z)$ with rising distance $z$ from their sources. So there is a tradeoff between how much finer than a wavelength we can see with such a sensor and how near to the source we need to be to see it. If we want to see features one tenth of the wavelength of seeable light, then $k\approx 12\,{\rm \mu m^{-1}}$ and $k_\parallel \approx 120\,{\rm \mu m^{-1}}$, so that the amplitude of the nearfield decays by a factor of $e$ for each hundredth of a wavelength the detector is distant from the source. Thus we lose about 10 dB of signal-to-noise ratio for every hundredth of a wavelength of distance that separates the detector and the source. So to sense such fine detail (50 nm structures) from a micron away would need extremely strong light sources, so that the detectors would have a very clean signal.
Of course, the above is an extreme example, but if you're a bacterium sized lifeform directly sensing the field using a finely spaced array of molecular sensors, you may well be able to "see" below-wavelength features of the World in your immediate neighbourhood. Moreover, it is possible to conceive of a tiny creature "feeling" its neighbourhood using molecular atomic force microscopes.
So, yes, if you include all physics and heed the proviso that you must get up really close to the sensed objects, it would be possible for a bacterium sized lifeform to see below-wavelength detail in its immediate neighbourhood, maybe even individual atoms if we include atomic force sensing.
Of course, packing all the signal processing "brain" into the lifeform needed to understand this information might be another matter altogether. | {
"domain": "physics.stackexchange",
"id": 11974,
"tags": "photons, visible-light, microscopy"
} |
Boric acid ant poison | Question: I made an ant poison using boric acid and peanut butter for grease eating ants. They eat the peanut butter (mixed with some olive oil to get it closer to a liquid), but I see a white residue (crystals?) left behind. How can I get the boric acid dissolved in an oil or grease?
Answer: Boric acid is soluble in water, not oil. Pick an emulsifier and add it to your ant poison.
Boric acid needs to remain acidic to retain its effectiveness as an ant killer. How ants die from boric acid poisoning is a topic still up for debate; however, it has been suggested that boric acid is a neurotoxin and that the acid damages one of the organs in an ant's digestive system.
Peanut butter is usually slightly acidic; however, your particular mixture may be rendering the boric acid inert. Solutions of sugar and boric acid with a high pH have been known to pass right through ants doing little harm. Perhaps using an acidic additive will increase the bait's effectiveness.
Boric acid in a 1-2% concentration may be ideal if the entire ant colony is targeted. To target the entire colony, enough poison should be used so that an ant does not die before it makes it back to its colony.
"domain": "chemistry.stackexchange",
"id": 5904,
"tags": "everyday-chemistry"
} |
Show that, with the array representation for storing an n-element heap, the leaves are the nodes indexed by ⌊n/2⌋+1, ⌊n/2⌋+2, …, n | Question: The question of CLRS exercise 6.1-7 reads:
Show that, with the array representation for storing an $n$-element heap, the leaves are the nodes indexed by $\lfloor n / 2 \rfloor + 1, \lfloor n / 2 \rfloor + 2, \ldots, n$.
I looked for the solution here:
https://walkccc.github.io/CLRS/Chap06/6.1/
The solution was provided like this:
Let's take the left child of the node indexed by $\lfloor n / 2 \rfloor + 1.$
\begin{aligned} \text{LEFT}(\lfloor n / 2 \rfloor + 1) & = 2(\lfloor n / 2 \rfloor + 1) \\ & > 2(n / 2 - 1) + 2 \\ & = n - 2 + 2 \\ & = n. \end{aligned}
I can't understand this statement:
$\text{LEFT}(\lfloor n/2 \rfloor + 1) > 2(n/2 - 1) + 2$
Please help me out.
Thank you.
Answer: So, basically, in the heap representation, $\text{LEFT}(i)$ refers to the index of $i$'s left child. What we want to show is that the node with index $\lfloor n/2 \rfloor + 1$ is a leaf and not an internal node, which can be proved by showing that the index of its left child is larger than the number of elements in the heap.
On the other hand, $\text{LEFT}(\lfloor n/2 \rfloor + 1) = 2(\lfloor n/2 \rfloor + 1) = 2\lfloor n/2 \rfloor + 2$, and since $\lfloor n/2 \rfloor > n/2 - 1$, this is larger than $2(n/2 - 1) + 2 = n$.
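The claim is also easy to sanity-check by brute force. A minimal Python sketch (mine, not part of the original answer) that enumerates the leaf set for many heap sizes:

```python
import math

def leaves(n):
    """Indices of the leaf nodes in a 1-indexed array heap of n elements.

    Node i is a leaf iff its left child index 2*i falls outside the array
    (then its right child 2*i + 1 is outside it too).
    """
    return [i for i in range(1, n + 1) if 2 * i > n]

# The exercise claims the leaves are exactly floor(n/2)+1, ..., n:
for n in range(1, 200):
    assert leaves(n) == list(range(math.floor(n / 2) + 1, n + 1))
```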
"domain": "cs.stackexchange",
"id": 17917,
"tags": "algorithms, heap-sort"
} |
Gradient descent does not converge in some runs and converges in other runs in the following simple Keras network | Question: When training a simple Keras NN (1 input, 1 layer with 1 unit, for a regression task), during some runs I get a big constant loss that does not change over 80 epochs. During other runs it decreases. What may be the reason that gradient descent does not converge in some runs but converges in other runs of the following network?
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
# Generate data
start, stop = 1,100
cnt = stop - start + 1
xs = np.linspace(start, stop, num = cnt)
b,k = 1,2
ys = np.array([k*x + b for x in xs])
# Simple model with one feature and one unit for regression task
model = keras.Sequential([
layers.Dense(units=1, input_shape=[1], activation='relu')
])
model.compile(loss='mae', optimizer='adam')
batch_size = int(cnt / 5)
epochs = 80
Next comes a callback to save the Keras model weights at some frequency. According to the Keras docs:
save_freq: 'epoch' or integer. When using 'epoch', the callback should save the model after each epoch.
When using integer, the callback should save the model at end of this many batches.
weights_dict = {}
weight_callback = tf.keras.callbacks.LambdaCallback \
( on_epoch_end=lambda epoch, logs: weights_dict.update({epoch:model.get_weights()}))
Train model:
history = model.fit(xs, ys, batch_size=batch_size, epochs=epochs, callbacks=[weight_callback])
I get:
Epoch 1/80
5/5 [==============================] - 0s 770us/step - loss: 102.0000
Epoch 2/80
5/5 [==============================] - 0s 802us/step - loss: 102.0000
Epoch 3/80
5/5 [==============================] - 0s 750us/step - loss: 102.0000
Epoch 4/80
5/5 [==============================] - 0s 789us/step - loss: 102.0000
Epoch 5/80
5/5 [==============================] - 0s 745us/step - loss: 102.0000
Epoch 6/80
...
...
...
Epoch 78/80
5/5 [==============================] - 0s 902us/step - loss: 102.0000
Epoch 79/80
5/5 [==============================] - 0s 755us/step - loss: 102.0000
Epoch 80/80
5/5 [==============================] - 0s 1ms/step - loss: 102.0000
Weights:
for epoch, weights in weights_dict.items():
print("*** Epoch: ", epoch, "\nWeights: ", weights)
Output:
*** Epoch: 0
Weights: [array([[-0.44768167]], dtype=float32), array([0.], dtype=float32)]
*** Epoch: 1
Weights: [array([[-0.44768167]], dtype=float32), array([0.], dtype=float32)]
*** Epoch: 2
Weights: [array([[-0.44768167]], dtype=float32), array([0.], dtype=float32)]
*** Epoch: 3
Weights: [array([[-0.44768167]], dtype=float32), array([0.], dtype=float32)]
...
...
As you can see, weights and biases also do not change, bias = 0.
Yet on other runs gradient descent converges, and weights and non-zero biases are fitted with a much smaller loss. The problem is repeatable: with exactly the same set of parameters it converges in about 30% of runs and fails in the other 70%. Why does it sometimes converge and sometimes not, with the same data and parameters?
Answer: There are some random elements when using packages such as TensorFlow, Numpy etc. Some examples include:
How the weights are initialized.
How the data is shuffled (if enabled) in each batch. Batches containing different data, will produce different gradients which might influence convergence.
This means that even when you run the same code, it is actually not 100% the same, and that is why you get different results.
If you want the same results, you should fix the random seed as follow: tf.random.set_seed(1234). This is usually done after the imports. The value 1234 can be any integer, for example if I use a value of 500 I get the same results and good convergence.
Some other points to note
If I remember correctly, calculations performed using a GPU might also introduce random factors.
It is a good idea to also fix Numpy seed, random package seed and any function which takes a seed value e.g. sklearn.model_selection.train_test_split | {
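The fix generalises beyond TensorFlow: every source of randomness you rely on should be seeded. A stdlib-only Python sketch of the principle (the names here are illustrative; tf.random.set_seed, np.random.seed, and random.seed each control their own generator in the same spirit):

```python
import random

def simulated_weight_init(seed):
    """Stand-in for a layer's random weight initialization."""
    rng = random.Random(seed)
    return [rng.uniform(-1, 1) for _ in range(3)]

run_a = simulated_weight_init(1234)
run_b = simulated_weight_init(1234)  # same seed -> identical "initialization"
run_c = simulated_weight_init(500)   # different seed -> a different draw

assert run_a == run_b
assert run_a != run_c
```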
"domain": "datascience.stackexchange",
"id": 8639,
"tags": "keras, tensorflow, gradient-descent"
} |
What is "σ" in a context free grammar? | Question: I have a grammar like this:
A → BAB | B | ε
B → 00σ | ε
What is the meaning of σ in the second rule?
Answer: Most probably, it means “for each letter $\sigma$ from alphabet $\Sigma$”.
If $\Sigma = \{ 0, 1 \}$, then B → 000 | 001 | ε. | {
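The schema expands mechanically, one rule per letter of $\Sigma$. A quick Python sketch of that expansion (illustrative only):

```python
sigma = ["0", "1"]                        # the alphabet Σ
b_rules = [f"00{s}" for s in sigma] + ["ε"]
assert b_rules == ["000", "001", "ε"]     # i.e. B → 000 | 001 | ε
```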
"domain": "cs.stackexchange",
"id": 15855,
"tags": "context-free, formal-grammars"
} |
ROS Groovy Turtlebot Install from Debs Path error | Question:
Hello!
I am following the instructions from here:
ros.org/wiki/turtlebot/Tutorials/groovy/Installation
Doing the debs installation to create my turtlebot computer.
I got all the way down to:
One last step if you have a kobuki base, you'll need to add kobuki's udev rules (you'll need your sudo password):
. /opt/turtlebot/groovy/setup.bash
rosrun kobuki_ftdi create_udev_rules
And encountered a small problem:
/opt/turtlebot/ does not exist.
Suggestions?
Originally posted by kpowell34567 on ROS Answers with karma: 11 on 2013-02-13
Post score: 0
Answer:
I had the same problem. My directory is
. /opt/ros/groovy/setup.bash rosrun kobuki_ftdi create_udev_rules
instead of
. /opt/turtlebot/groovy/setup.bash rosrun kobuki_ftdi create_udev_rules
Originally posted by teichel1 with karma: 51 on 2013-02-18
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Daniel Stonier on 2013-02-21:
Yes, there was a typo on the tutorial that's been fixed since. Thanks. | {
"domain": "robotics.stackexchange",
"id": 12870,
"tags": "ros, turtlebot, kobuki, groovy-turtlebot"
} |
bags file to excel | Question:
hi,
I want to convert a bag file to an array (Excel, for example).
Is there any solution to do this?
Originally posted by Emilien on ROS Answers with karma: 167 on 2016-04-05
Post score: 0
Answer:
There are a couple of good posts about converting bag files to CSV, which can be imported into excel: http://answers.ros.org/question/9102/how-to-extract-data-from-bag/ and http://answers.ros.org/question/55037/is-there-a-way-to-save-all-rosbag-data-into-a-csv-or-text-file/
For the very simple case of extracting a simple message on a single topic, rostopic echo -b file.bag -p /topic works quite well.
Originally posted by ahendrix with karma: 47576 on 2016-04-05
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by Emilien on 2016-04-05:
thank you very much | {
"domain": "robotics.stackexchange",
"id": 24320,
"tags": "ros"
} |
Does Cl- Have more -I effect or +M effect ( Resonance effect)? | Question:
In this question my answer was (A) because I thought the $\ce{NH2}$ group would give more electrons than $\ce{Cl-}$. But the correct answer is (D), and my teacher's reason is that chlorine has a stronger negative inductive effect than positive resonance effect. Is that true?
Answer: Yes, as you said, the inductive effect of chlorine exceeds the strength of its +R effect.
This becomes clear when you study aromaticity: chlorine is a deactivating group (i.e. it destabilizes the carbocation formed) but is still ortho/para directing, because it still causes stabilization by the +R effect in those positions.
However, unlike other +R groups, which are also activating, chlorine is deactivating, because its inductive effect is slightly stronger than its +R effect.
( Note that all comparisons are based on which effect causes more stability) | {
"domain": "chemistry.stackexchange",
"id": 14038,
"tags": "organic-chemistry, acid-base, aromatic-compounds, resonance, inductive-effect"
} |
What does it mean for an action to be invariant under $x \to x'$, $\phi \to \phi'$? | Question: I'm suddenly getting very confused about a basic question. Suppose somebody tells you that the action is invariant under the transformation
$$x \to x', \quad \phi(x) \to \phi'(x').$$
I realize this notation is ambiguous, but it seems to be common. For example, one might define a Lorentz transformation in this sloppy fashion as
$$x \to \Lambda x, \quad \phi \to \phi(\Lambda^{-1}x)$$
or a dilation transformation as
$$x \to \lambda x, \quad \phi \to \lambda^\alpha \phi(x/\lambda).$$
Now suppose the action is
$$S_{000}^0 = \int_a^b dx \, h(\phi(x)).$$
Then I can think of fifteen things "the action is invariant" could naively mean. Define
$$S^1_{111} = \int_{f(a)}^{f(b)} dx' \, h(\phi'(x')), \quad S^0_{101} = \int_a^b dx'\, h(\phi(x')), \quad S^1_{010} = \int_{f(a)}^{f(b)} dx \, h(\phi'(x))$$
along with twelve other quantities in what is hopefully a self-explanatory notation. Then one of these quantities is equal to $S_{000}^0$, but which one is typically meant?
Answer:
Noether's theorem works even for non-geometric theories, so to be as general and simple as possible, we shall not use notions & concepts from differential geometry. For the purpose of Noether's theorem, it is enough to discuss infinitesimal variations:
$$
\delta x^{\mu}
~:=~ x^{\prime\mu} - x^{\mu}
~=~ \varepsilon~ X^{\mu}(x),\tag{1}$$
$$
\delta\phi^{\alpha}(x) ~:=~\phi^{\prime\alpha}(x^{\prime})-\phi^{\alpha}(x) ~=~ \varepsilon~ Y^{\alpha}(\phi(x),\partial\phi(x), x),\tag{2}$$
where $\varepsilon$ is an infinitesimal ($x$-independent) parameter, and $X^{\mu}$ and $Y^{\alpha}$ are generators.
If $V~\subseteq~\mathbb{R}^4$ is a spacetime region, let
$$ V^{\prime}~:=~\{ x^{\prime}\in \mathbb{R}^4 \mid x \in V \} ~\subseteq~\mathbb{R}^4 \tag{3}$$
denote the varied spacetime region.
The infinitesimal variation of the action is by definition
$$\begin{align}\delta S_V~:=~& S_{V^{\prime}}[\phi^{\prime}] -S_V[\phi]\cr ~:=~& \int_{V^{\prime}}\! d^4x^{\prime}~{\cal L}(\phi^{\prime}(x^{\prime}),\partial^{\prime}\phi^{\prime}(x^{\prime}),x^{\prime})
\cr &-\int_V\! d^4x~{\cal L}(\phi(x),\partial\phi(x),x).\end{align}\tag{4}$$
Formula (4) is $S^1_{111}-S^0_{000}$ in OP's notation. See e.g. Refs. 1 & 2.
The infinitesimal variation (1) & (2) are called a quasi-symmetry of the action if the infinitesimal variation (4) is a boundary integral, cf. my Phys.SE answer here. In the affirmative case, Noether's theorem leads to an on-shell conservation law.
References:
H. Goldstein, Classical Mechanics, 2nd edition, Section 12.7.
H. Goldstein, Classical Mechanics, 3rd edition, Sections 13.7. | {
"domain": "physics.stackexchange",
"id": 44366,
"tags": "lagrangian-formalism, symmetry, definition, action, noethers-theorem"
} |
Difference between flexion and contraction? | Question: i asked this to my anatomy teacher and he said there is no difference but when it comes to specialy in body building when you say to flex their bicep they freeze their upper limb in order to do that but when you say contract your bicep they only isolate it an shortens it.
am I right with my reason?
Answer: Flexion: the movement that decreases the angle between two parts [1]. Examples: clenching the hand into fist, sitting down.
Contraction: the property of muscle of generating tension when actin and myosin filaments form cross-bridges. There are a few types of contraction. An isometric contraction is when the muscle generates tension but its length doesn't change (for example when you try to lift something that you can't). An isotonic contraction is when the muscle changes its length [2]. This kind of contraction produces movement and can lead to flexion too.
References:
Wikipedia contributors, "Anatomical terms of motion," Wikipedia, The Free Encyclopedia, http://en.wikipedia.org/w/index.php?title=Anatomical_terms_of_motion&oldid=612679254 (accessed June 26, 2014).
Wikipedia contributors, "Muscle contraction," Wikipedia, The Free Encyclopedia, http://en.wikipedia.org/w/index.php?title=Muscle_contraction&oldid=609265714 (accessed June 26, 2014). | {
"domain": "biology.stackexchange",
"id": 2344,
"tags": "human-anatomy, movement"
} |
When would pandas.get_dummies()'s parameter of drop_first=False be appropriate to use? | Question: While working on some case studies that use various machine learning models, I came across a project for predicting churn in the telecom industry. The Jupyter Notebook I saw had the following lines of code:
dummies1 = pd.get_dummies(df_churn, columns=cat_cols, drop_first=False) #for kNN and decision trees
dummies2 = pd.get_dummies(df_churn, columns=cat_cols, drop_first=True) #for logistic regression
As far as I knew, drop_first prevents the so-called 'Dummy Variable Trap'. Is there a reason for not dropping the extra dummy variable in the case of running kNN and Decision Tree models?
The Jupyter Notebook's author is unreachable, and I could not find the answers elsewhere.
Answer: The purpose of drop_first=True is usually to avoid multicollinearity.
In one-hot encoding, you set a 1 at the position associated with a discrete value among n possible options. When you one-hot encode a value, there is redundant information, because you can figure out the value of any of the positions by computing 1 minus the sum of all the other values. This means that any position of the one-hot encoded variable is a linear combination of the other positions.
This linear correlation, however, can be a problem in some cases. One example is when you want to know the effect the input features have on the prediction of a logistic or linear regression model (see this for details).
One solution to the multicollinearity of one-hot encoding is simply to remove one of the values. With that, you don't lose information and, at the same time, you remove the multicollinearity. drop_first=True is precisely for that.
In cases where multicollinearity is not a problem, however, it is desirable to have all the dummies and therefore use drop_first=False. Nevertheless, it should be decided case by case. For instance, in decision trees, it is easier to learn the association with the dummy having value 1 than to learn that all the others have value 0. In the case of k-nearest neighbours, having n - 1 dummies makes the distances between samples with the dropped value different from the distances between samples with the other values, so it would be best to have all the dummies (at least with the Euclidean distance).
Other cases where it's specifically better to have all dummies is the case where there are missing values and you are encoding them as all zeroes.
Finally, in models with regularization, it is also needed to keep all dummies because otherwise, the predictions may depend on which column you leave out (see this). | {
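The multicollinearity itself is easy to see without pandas. A stdlib-only Python sketch mirroring what get_dummies does (the category names are made up here): with all n dummy columns, every row sums to 1, so each column is a linear function of the others; dropping the first column removes exactly that dependence.

```python
def one_hot(values, categories, drop_first=False):
    """Tiny stand-in for pd.get_dummies on a single column (illustrative)."""
    cats = categories[1:] if drop_first else categories
    return [[1 if v == c else 0 for c in cats] for v in values]

data = ["dsl", "fiber", "none", "fiber"]        # made-up telecom feature
full = one_hot(data, ["dsl", "fiber", "none"])
# Dummy variable trap: with all n columns, every row sums to exactly 1,
# so any column equals 1 minus the sum of the other columns.
assert all(sum(row) == 1 for row in full)

reduced = one_hot(data, ["dsl", "fiber", "none"], drop_first=True)
# With drop_first=True the dropped category becomes the all-zero row,
# and the remaining columns are no longer linearly dependent.
assert reduced[0] == [0, 0]
```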
"domain": "datascience.stackexchange",
"id": 12163,
"tags": "machine-learning, python, pandas, dummy-variables"
} |
What is subpixel matching precision for stereo? | Question:
Hello,
I'm trying to figure out what the subpixel stereo matching precision is in stereo_image_proc (that is, how many pixels/subpixels of disparity the stereo matching algorithm can resolve). I looked at the code a bit, and I think it's 1/4 pixel, but I've also heard it's 1/8 pixel. Do you know which it is (or maybe a different value altogether)? Thanks!
Originally posted by rdbrewer on ROS Answers with karma: 66 on 2013-05-26
Post score: 0
Answer:
I am pretty sure it's 1/16th pixel. In disparity.cpp of stereo_image_proc it says so explicitly:
// OpenCV uses 16 disparities per pixel
Best
Tim
Originally posted by timster with karma: 396 on 2013-08-14
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 14303,
"tags": "opencv, stere-image-proc"
} |
Equivalence of logical formulas (Kripke structures) | Question: Can someone explain to me how to determine whether these formulas are equivalent on Kripke structures?
AG(Fp or Fq) , A(GFp or GFq)
AGF(p and q) , A(GFp and GFq)
AFG(p and q) , A(FGp and FGq)
Thank you in advance for your help :)
Answer: First, these can all be looked at as LTL formulas. Therefore, their equivalence can be determined by their behaviour on traces (no need for Kripke structures with a complicated structure).
This answer is just a bunch of spoilers, so read only if you already tried your best.
For the first pair - they are equivalent. Intuitively, this is because there are infinitely many p's or infinitely many q's iff there are infinitely many (p's or q's).
Formally,
$\pi\models G(Fp \vee Fq)$ iff at every index $i$, $\pi^i\models (Fp\vee Fq)$, meaning that in every index, there is eventually a $p$ or a $q$. Equivalently, this means that there are infinitely many $p$'s or $q$'s, which is equivalent to $\pi\models GFp\vee GFq$.
For the second pair - they are not equivalent. Try and think of a counter example. If you really can't - leave a comment.
The third pair is also equivalent. Informally, this is because both formulas state that after a finite prefix, both $p$ and $q$ always hold. The formal argument just follows from the semantics.
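Since this answer already calls itself a spoiler, here is the standard counterexample for the second pair, checked in Python on an ultimately periodic (lasso) word; on such a word, "infinitely often S" holds iff S holds at some position of the loop:

```python
# Loop of the lasso word: p and q strictly alternate, never holding together.
loop = [{"p"}, {"q"}]

def inf_often(pred):
    # On u·v^ω, GF(pred) holds iff pred holds somewhere in the loop v.
    return any(pred(state) for state in loop)

# A(GFp and GFq) holds: p occurs infinitely often, and so does q...
assert inf_often(lambda s: "p" in s) and inf_often(lambda s: "q" in s)
# ...but AGF(p and q) fails: p and q never hold at the same position.
assert not inf_often(lambda s: "p" in s and "q" in s)
```

The same helper also agrees with the first equivalence on this trace: GF(p or q) coincides with (GFp or GFq).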
"domain": "cs.stackexchange",
"id": 1421,
"tags": "logic, linear-temporal-logic"
} |
Building cv_bridge with catkin fails | Question:
Trying to build cv_bridge fails with this error:
==> Processing catkin package: 'camera_calibration_parsers'
==> Building with env: '/Users/tatsch/ros_catkin_ws/install_isolated/env_cached.sh'
Makefile exists, skipping explicit cmake invocation...
==> make cmake_check_build_system in '/Users/tatsch/ros_catkin_ws/build_isolated/camera_calibration_parsers'
==> make -j4 in '/Users/tatsch/ros_catkin_ws/build_isolated/camera_calibration_parsers'
==> make install in '/Users/tatsch/ros_catkin_ws/build_isolated/camera_calibration_parsers'
<== Finished processing package [97 of 153]: 'camera_calibration_parsers'
==> Processing catkin package: 'cv_bridge'
==> Building with env: '/Users/tatsch/ros_catkin_ws/install_isolated/env_cached.sh'
Makefile exists, skipping explicit cmake invocation...
==> make cmake_check_build_system in '/Users/tatsch/ros_catkin_ws/build_isolated/cv_bridge'
==> make -j4 in '/Users/tatsch/ros_catkin_ws/build_isolated/cv_bridge'
Linking CXX shared library /Users/tatsch/ros_catkin_ws/devel_isolated/cv_bridge/lib/libcv_bridge.dylib
ld: warning: directory not found for option '-L/Users/tatsch/ros_catkin_ws/install_isolated/share/OpenCV/3rdparty/lib'
[ 50%] Built target cv_bridge
Linking CXX shared library /Users/tatsch/ros_catkin_ws/devel_isolated/cv_bridge/lib/python2.7/site-packages/cv_bridge/boost/cv_bridge_boost.dylib
ld: warning: directory not found for option '-L/Users/tatsch/ros_catkin_ws/install_isolated/share/OpenCV/3rdparty/lib'
Undefined symbols for architecture x86_64:
"_PyErr_SetString", referenced from:
failmsg(char const*, ...) in module.cpp.o
"_PyExc_TypeError", referenced from:
failmsg(char const*, ...) in module.cpp.o
"_PyImport_ImportModule", referenced from:
init_module_cv_bridge_boost() in module.cpp.o
"_PyInt_FromLong", referenced from:
boost::python::to_python_value<int const&>::operator()(int const&) const in module.cpp.o
"_PyInt_Type", referenced from:
boost::python::to_python_value<int const&>::get_pytype() const in module.cpp.o
"_PyObject_AsWriteBuffer", referenced from:
convert_to_CvMat(_object*, CvMat**, char const*) in module.cpp.o
"_PyObject_CallObject", referenced from:
FROM_CvMat(CvMat*) in module.cpp.o
"_PyObject_GetAttrString", referenced from:
FROM_CvMat(CvMat*) in module.cpp.o
"_PyString_AsString", referenced from:
convert_to_CvMat(_object*, CvMat**, char const*) in module.cpp.o
"_Py_BuildValue", referenced from:
FROM_CvMat(CvMat*) in module.cpp.o
"__Py_NoneStruct", referenced from:
boost::python::api::object::object() in module.cpp.o
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[2]: *** [/Users/tatsch/ros_catkin_ws/devel_isolated/cv_bridge/lib/python2.7/site-packages/cv_bridge/boost/cv_bridge_boost.dylib] Error 1
make[1]: *** [src/CMakeFiles/cv_bridge_boost.dir/all] Error 2
make: *** [all] Error 2
<== Failed to process package 'cv_bridge':
Command '/Users/tatsch/ros_catkin_ws/install_isolated/env_cached.sh make -j4' returned non-zero exit status 2
Command failed, exiting.
OpenCV compiled successfully before, but /Users/tatsch/ros_catkin_ws/install_isolated/share/OpenCV/3rdparty/lib is indeed empty.
In which step should it get filled or do I have to do it manually?
Can someone give me a hint?
Originally posted by J.M.T. on ROS Answers with karma: 266 on 2013-01-15
Post score: 3
Original comments
Comment by WilliamWoodall on 2013-01-15:
Can post the output from ./src/catkin/bin/catkin_make_isolated -j1
Comment by WilliamWoodall on 2013-01-15:
It looks like boost python is missing some symbols from the Python libraries. Are you using the system python or have you installed python from a mpkg or from brew?
Comment by WilliamWoodall on 2013-01-15:
Yeah, -j4 just tends to interleave errors and build logs, -j1 is easier to read and tell what is generating the error.
Answer:
I have made a pull request against the vision_opencv to fix this. It should be merged and released in a few days time.
In the mean time, a patch:
diff --git a/cv_bridge/src/CMakeLists.txt b/cv_bridge/src/CMakeLists.txt
index 03a02b8..6c8a69f 100644
--- a/cv_bridge/src/CMakeLists.txt
+++ b/cv_bridge/src/CMakeLists.txt
@@ -20,6 +20,7 @@ include_directories(SYSTEM ${PYTHON_INCLUDE_PATH}
add_library(${PROJECT_NAME}_boost module.cpp)
target_link_libraries(${PROJECT_NAME}_boost ${Boost_LIBRARIES}
${catkin_LIBRARIES}
+ ${PYTHON_LIBRARIES}
${PROJECT_NAME}
)
Originally posted by WilliamWoodall with karma: 1626 on 2013-01-16
This answer was ACCEPTED on the original site
Post score: 4
Original comments
Comment by WilliamWoodall on 2013-01-16:
No problem, glad it helps. | {
"domain": "robotics.stackexchange",
"id": 12431,
"tags": "ros-groovy, cv-bridge, osx"
} |
Using a trained Model from Pickle | Question: I trained and saved a model that should predict a sons hight based on his fathers height.
I then saved the model to Pickle.
I can now load the model and want to use it, but unfortunately a second variable is demanded from me (besides the height of the father). I think I did something wrong when training the model?
I will post the part of the code where I think the error is; please ask if you need more.
#Splitting the data into test and train data
X_train,X_test,y_train,y_test=train_test_split(X,y,test_size=0.2,random_state=0)
#Doing a linear regression
lm=LinearRegression()
lm.fit(X_train,y_train)
#testing the model with an example value
TestValue = 65.0
filename = 'Father_Son_Height_Model.pckl'
loaded_model = pickle.load(open(filename, 'rb'))
result = loaded_model.predict(TestValue)
print(result)
The error message says:
ValueError: Expected 2D array, got scalar array instead:
array=65.0.
Reshape your data either using array.reshape(-1, 1) if your data has a single feature or array.reshape(1, -1) if it contains a single sample.
Thanks a lot in advance.
Answer: You need to use loaded_model.predict(TestValue), not loaded_model.score(TestValue). The latter is for evaluating the model's accuracy, and you would also need to pass the true height of the son, which is the y value it's asking for. | {
"domain": "datascience.stackexchange",
"id": 6051,
"tags": "machine-learning, linear-regression, pickle"
} |
PT-symmetric hamiltonians: Transition from non-dissipative to dissipative system | Question: It is known that Spectrum of a non-hermitian hamiltonian is complex ($E_n-i\gamma_n$) and they represent the dissipative system. eg, Damped harmonic oscillator(DHO), where $E_n$ are the energies of the DHO and $\gamma_n$ describes the decaying rate of its states. It is also known that Spectrum of a hermitian hamiltonian is real and they represent the Non-dissipative system(HO).
In PT-Symmetric hamiltonians: Spectrum is real if PT-symmetry is unbroken and it is complex if it is broken.
Now my question is: can this transition of PT-symmetry be understood as a transition from a dissipative system to a non-dissipative one?
Could anybody give me a simple example ?
Answer: Does the breaking of PT symmetry in a quantum mechanical system indicate the transition from a non-dissipative to a dissipative system? No: PT symmetric systems are in between closed (Hermitian) and open (general, non-Hermitian) systems in the following sense.
A quantum mechanical system with a Hermitian Hamiltonian is closed and probability is conserved (and in its classical analog (if it exists), the total energy in the system is conserved). A QM system with a non-Hermitian Hamiltonian is generally open: it loses probability and indeed exhibits dissipation in this sense.
A quantum mechanical system with a non-Hermitian, PT-symmetric Hamiltonian may exhibit PT symmetric eigenfunctions and real eigenvalues (unbroken PT symmetry) or complex eigenvalues and non-PT symmetric eigenfunctions.
As a simple example, we consider two subsystems one of which loses probability (and the classical counterpart would lose energy, so there is dissipation) and the other gains exactly the same amount. Such coupled QM system is PT symmetric and exhibits real eigenvalues for sufficiently large coupling strength such that the eigenfunctions are also PT symmetric (unbroken PT symmetry). If the coupling becomes too weak, the eigenvalues cease to be real but become complex, indicating a state of broken PT symmetry, i.e. eigenfunctions are not any longer PT symmetric.
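The coupled gain/loss system described above can be made concrete with a two-mode toy Hamiltonian (the matrix and the numbers below are illustrative, not taken from any particular paper): $H = \begin{pmatrix} i\gamma & \kappa \\ \kappa & -i\gamma \end{pmatrix}$ has eigenvalues $\pm\sqrt{\kappa^2-\gamma^2}$, which are real exactly when the coupling $\kappa$ exceeds the gain/loss rate $\gamma$:

```python
import numpy as np

def pt_dimer(gamma, kappa):
    """One mode gains probability at rate gamma, the other loses it; kappa couples them."""
    return np.array([[1j * gamma, kappa],
                     [kappa, -1j * gamma]])

# Coupling stronger than gain/loss: real spectrum, unbroken PT symmetry.
strong = np.linalg.eigvals(pt_dimer(gamma=0.5, kappa=1.0))

# Coupling weaker than gain/loss: complex-conjugate pair, broken PT symmetry.
weak = np.linalg.eigvals(pt_dimer(gamma=1.0, kappa=0.5))

print(np.max(np.abs(strong.imag)))  # ~0: both eigenvalues real
print(np.max(np.abs(weak.imag)))    # ~0.866: eigenvalues are +-i*sqrt(0.75)
```

Sweeping the coupling through $\kappa=\gamma$ (the "exceptional point") is exactly where the PT symmetry of the eigenfunctions breaks.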
The breaking of PT-symmetry, rather than a transition from an open (dissipative) to a closed (non-dissipative) system, indicates a transition from a balance between loss and gain to a state without such balance. | {
"domain": "physics.stackexchange",
"id": 53084,
"tags": "quantum-mechanics, dissipation"
} |
Logger class for MVC Framework | Question: I have created a logger class for my own framework. Now I am trying to identify the components which can be done in a better way. The logger class doesn't log anything to a file, but it logs the files that are getting loaded (say controller, model, etc.) and displays them in HTML. I think it's bad practice to output HTML in a logger class, but I have not read this anywhere. The reason I am outputting HTML is that when you are at a URL like localhost/controller/method and the logger is turned on, the page will display all the files (filename, classname and method) that have been loaded to fulfill the request.
Is it ok to do it this way?
class EH_Logger {
/**
* Holds logs
* @var array
*/
static $logText = array();
/**
* Holds the time taken for rendering the application
* @var string
*/
static $loadingTime;
/**
* Structures the table that needs to be displayed.
*/
static function printLog() {
$time = microtime(true);
$query = "";
$query_exist = false;
self::$loadingTime = $time - STARTTIME;
$html = "<div id='logTableHolder'>
<a href=''
onClick=\"var displayObj = document.getElementById('logtable');
if(displayObj.style.display=='none') {
displayObj.style.display ='';
this.innerHTML='(-)';
} else {
displayObj.style.display='none';
this.innerHTML='(+)';
}
return false\">
(-)
</a>";
$html .= "<table id='logtable' border='1'><thead>
<tr class='log_header'>
<th>Component</th>
<th>Location</th>
<th>Classname</th>
</tr></thead>";
$query .= " <tbody><tr></tr>
<tr class='log_header'>
<th>Component</th>
<th colspan='2'>Query</th>
<th>Time</th>
</tr>";
foreach (self::$logText as $data => $messageData) {
foreach ($messageData as $key => $message) {
$classname = (isset($message['CLASSNAME'])) ? $message['CLASSNAME'] : "-";
$vars = (isset($message['VARS'])) ? $message['VARS'] : "-";
if($data == "DATABASE") {
$query .= "<tr>
<td>$data</td>
<td colspan='2'>" . $message['QUERY'] . "</td>
<td>" . $message['TIME'] . "</td>
</tr>";
$query_exist = true;
} else {
$html .= "<tr>
<td>$data</td>
<td>" . str_replace('\\', '/', $message['PAGENAME']) . "</td>
<td>" . $classname . "</td>
</tr>";
}
}
}
$html .= ($query_exist)?$query:"";
$html .= "<tr><td colspan='4'>" . sprintf("This page took <strong>%f</strong> seconds to load.", self::$loadingTime) . "</td></tr>";
$html .= "</tbody></table></div>";
echo $html;
echo self::applyLoggerCSS();
}
/**
* Apply CSS to the table
* @return string
*/
public static function applyLoggerCSS() {
return '<style type="text/css">
table#logtable {
background: none repeat scroll 0 0 #FFFFFF;
border: 6px solid #EEEEEE;
border-collapse: collapse;
box-shadow: 0 0 30px 0 #454545;
color: #6C6C6C;
font: 11px/24px Verdana,Arial,Helvetica,sans-serif;
width: 100%;
text-align: left;
}
#logTableHolder {
position: absolute;
width: 60%;
left: 50%;
margin-left: -30%;
top: 25px;
z-index: 999;
}
#logTableHolder a {
background: none repeat scroll 0 0 #EEEEEE;
border-radius: 2px 2px 2px 2px;
left: 97%;
margin-bottom: 29px;
padding: 0 4px;
position: absolute;
}
#logtable th {
padding: 0 0.5em;
text-align: left;
}
#logtable tr.yellow td {
border-top: 1px solid #FB7A31;
border-bottom: 1px solid #FB7A31;
background: #FFC;
}
#logtable td {
border-bottom: 1px solid #CCC;
padding: 0 0.5em;
}
#logtable td:first-child {
width: 190px;
}
#logtable td+td {
border-left: 1px solid #CCC;
text-align: left;
}
</style>';
}
}
Answer: As a general rule, if you are logging to the screen, you aren't production ready. There may be situations where you can limit the display to just when you are logged in, but in general, you log to files so that only you can view them.
When doing multiline strings, you should use either heredoc or nowdoc syntax.
Heredoc:
$query .= <<<EOHTML
<tr>
<td>$data</td>
<td colspan="2">{$message['QUERY']}</td>
<td>{$message['TIME']}</td>
</tr>
EOHTML;
Nowdoc:
return <<<'EOCSS'
<style type="text/css">
table#logtable {
background: none repeat scroll 0 0 #FFFFFF;
border: 6px solid #EEEEEE;
border-collapse: collapse;
box-shadow: 0 0 30px 0 #454545;
color: #6C6C6C;
font: 11px/24px Verdana,Arial,Helvetica,sans-serif;
width: 100%;
text-align: left;
}
</style>
EOCSS;
If you are logging to HTML purely because it's easier to get the data that you want, you may want to check out debug_backtrace. That will give you more verbose output that you can log. | {
"domain": "codereview.stackexchange",
"id": 10258,
"tags": "php, classes, logging"
} |
Dipole Moment and separation | Question: My understanding of a dipole moment is it defines the strength of various dipole interactions, eg the common example of torque in an electric field. My confusion stems from the fact that the magnitude of the dipole moment is proportional to the charge separation. In many areas of physics, we deal with quantities that are inversely proportional to distance. I can reason very intuitively that as I move further from something, our mutual interaction is reduced. The idea that in a dipole, the further you remove something, the stronger the interaction becomes is equally counter-intuitive to me.
For example, if I removed two charges to infinity from each other, their dipole moment is infinity, and they are the "strongest" dipole possible. In fact, why don't massive dipole moments exist between charged particles on Earth and charged particles on some star light-years away? With such a large dipole moment, even the smallest electric field would result in an unimaginable torque on our charged particles. I know particle charges are distributed on a macroscopic scale very finely so as to be overall neutral, but there must be, even to a minuscule order, some small imbalance that would evidence these incredible torques.
I suppose I think of the dipole moment as some kind of force, or at least proportional to a force, and I am having trouble understanding how a force's effect can increase with distance.
Please help me understand this better.
Answer: Indeed, there must be dipole moments of such magnitude occurring (if you just base it on the definition)! But usually, there is no extensive electric field to support the continuous rotation of the dipoles, because the net electric field in space is zero. Though there will occur some abrupt torque (again, based on definition) of intense magnitude due to the interaction of the dipole charges to the electric field of other charges, the torque will quickly reverse direction alternately, and the average torque will become zero.
The blue-colored ones are the dipole charges.
At the left side of the universe, a charge from the dipole interacts with other charges, jiggling it back and forth.
In this case, $T = r \times F$.
But because they're not connected, this torque has little to no effect on the position of the dipole charge at the right side of the universe. Only the relative position with each other has changed. | {
"domain": "physics.stackexchange",
"id": 31726,
"tags": "electrostatics, dipole-moment"
} |
Decomposing the n-cube into vertex-disjoint paths | Question: I am not sure if this question is better suited for cs.stackexchange or math.stackexchange - This is of interested to me in the context of a data structure problem, but the question itself may be better suited elsewhere. If so, please feel free to move my question.
Let $\mathcal{Q}_n$ denote the $n$-cube graph. Given a set of vertices $S = \{s_1, \ldots s_k \}$ and $T = \{t_1, \ldots t_k\}$, we would like to decompose $\mathcal{Q}_n$ into $k$ vertex disjoint paths $P_1, \ldots P_k$, such that each $P_i$ begins with $s_i$ and ends with $t_i$. Here, two paths are vertex-disjoint if they do not share any vertices. In addition, I require that every vertex in $\mathcal{Q}_n$ be part of some path $P_j$. I would like to know under what conditions (particularly on $S$ and $T$) can we guarantee that such a decomposition exists?
Also, I would like to know the name of this problem so that I can better search for references.
Answer: I found the following result due to Gregor and Dvorak. They show that if the sets $S$ and $T$ are balanced (in the sense that they contain the same number of vertices from both bi-partitions of the $n$-cube), then such paths exist whenever $2k-e < n$, where $e$ is the number of pairs $(s_i, t_i)$ that form edges in $\mathcal{Q}_n$. The paper can be found here: http://www.sciencedirect.com/science/article/pii/S0020019008002238
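For very small cases, existence can also be settled by exhaustive search. A brute-force sketch for $k=2$ (my own illustration; exponential time, so only sensible for $n$ around 3):

```python
from itertools import combinations, permutations

def is_edge(u, v):
    # Hypercube adjacency: vertex labels differ in exactly one bit.
    return bin(u ^ v).count("1") == 1

def has_ham_path(vertices, s, t):
    """Is there a path from s to t visiting every vertex of `vertices` exactly once?"""
    inner = [v for v in vertices if v not in (s, t)]
    for mid in permutations(inner):
        path = (s, *mid, t)
        if all(is_edge(a, b) for a, b in zip(path, path[1:])):
            return True
    return False

def disjoint_path_cover(n, s1, t1, s2, t2):
    """Try to split the vertices of Q_n into two sets carrying
    vertex-disjoint paths s1 -> t1 and s2 -> t2 that cover everything."""
    V = set(range(2 ** n))
    for size in range(2, len(V) - 1):
        for sub in combinations(sorted(V - {s2, t2}), size):
            A = set(sub)
            if {s1, t1} <= A and has_ham_path(A, s1, t1) \
                    and has_ham_path(V - A, s2, t2):
                return sorted(A), sorted(V - A)
    return None

print(disjoint_path_cover(3, 0b000, 0b110, 0b010, 0b111))
```

For instance, the call above succeeds with non-adjacent endpoint pairs (so $e=0$) even though $2k-e = 4 \geq n = 3$, consistent with the remark below that the bound is not tight.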
I can construct an example when $k = 2$, $e=0$ and $n=3$, so this result isn't completely tight, however. | {
"domain": "cs.stackexchange",
"id": 6428,
"tags": "algorithms, graphs"
} |
Given a word, find other words in an array with same length and same characters | Question: I tried solving the problem in the following manner; I am just a beginner and wanted to know my mistakes and a more efficient way with better time complexity (if any).
public class d3 {
public static void main(String[] args){
String Search="loop";
String[] letters = Search.split("") ;
int counter;
String[] words={"pole","pool","lopo","book","kobo"};
for(int i=0;i<words.length;i++)
{
counter=0;
String ssplit[] = words[i].split("");
for(int j=0;j<words[i].length();j++)
{
if(letters.length==ssplit.length)
{
for(int k=0;k<letters.length;k++)
{
if(letters[j].equals(ssplit[k]))
{counter++;
ssplit[k]="*";
break;
}
}
if (counter == 4)
{
System.out.println(words[i]);
}
}
}
}
}
}
Answer: Whitespace is very important for readability. For posting to Stack Exchange sites I recommend replacing tabs with spaces, because otherwise the site does that for you and the tabstops might not match. Here, though, the whitespace is so crazy that I think you need to look at configuring your IDE to pretty-print the code. Reformatting so that I can understand the structure:
public class d3 {
public static void main(String[] args){
String Search="loop";
String[] letters = Search.split("") ;
int counter;
String[] words={"pole","pool","lopo","book","kobo"};
for(int i=0;i<words.length;i++)
{
counter=0;
String ssplit[] = words[i].split("");
for(int j=0;j<words[i].length();j++)
{
if(letters.length==ssplit.length)
{
for(int k=0;k<letters.length;k++)
{
if(letters[j].equals(ssplit[k]))
{
counter++;
ssplit[k]="*";
break;
}
}
if (counter == 4)
{
System.out.println(words[i]);
}
}
}
}
}
}
Names
Java convention is that camel-case names which start with a capital letter are types (classes, interfaces, etc), so Search as the name of a variable is unexpected.
counter is not entirely uninformative, but a better name would tell me what it counts. Similarly, it would be helpful to distinguish which variables relate to the search query and which to the items searched. The best convention I've seen there is PHP's needle and haystack, so I would suggest needleLetters and haystackWords.
foreach statement
Instead of for(int i=0;i<words.length;i++) ... words[i] you can use for (String word : words) ... word. This removes a variable and simplifies the naming, making it easier to see what the code does.
Decomposing strings
String has a method toCharArray(). I think it would make more sense to use that than split("").
Don't put something in a loop which can go outside it
for(int j=0;j<words[i].length();j++)
{
if(letters.length==ssplit.length)
{
...
}
}
could be rewritten
if(letters.length==ssplit.length)
{
for(int j=0;j<ssplit.length;j++)
{
...
}
}
Executing the test once is more efficient, and it's also easier to understand because the maintainer doesn't have to reason about loop invariants to work out what might have changed the second time the test is executed.
Since there's nothing after this test in the loop body, an alternative would be
if(letters.length!=ssplit.length)
{
continue;
}
for(int j=0;j<ssplit.length;j++)
{
...
}
Beware hard-coded constants
Why
if (counter == 4)
{
System.out.println(words[i]);
}
? That's a bug. The comparison should be with letters.length. Also, it would make more sense to move the test outside the loop over j.
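As an aside, the character-count comparison described in the next section is compact enough to sketch in full (shown in Python rather than Java, purely to illustrate the idea; the function name is mine):

```python
from collections import Counter

def find_same_letters(needle, haystack):
    """Return the words in haystack built from exactly the letters of needle."""
    target = Counter(needle)  # per-character counts, built in O(len(needle))
    return [word for word in haystack
            if len(word) == len(needle) and Counter(word) == target]

print(find_same_letters("loop", ["pole", "pool", "lopo", "book", "kobo"]))
# ['pool', 'lopo']
```

Each word is converted to counts in time linear in its length, so the whole scan is linear in the total input size instead of quadratic per word.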
Use advanced data structures
for(int j=0;j<words[i].length();j++)
{
for(int k=0;k<letters.length;k++)
{
if(letters[j].equals(ssplit[k]))
{
counter++;
ssplit[k]="*";
break;
}
}
}
takes time proportional to words[i].length() * letters.length. If you use java.util.HashMap<Character, Integer> to store a per-character count, you can generate a representation for each word in time proportional to the length of the word, and you can compare the representations of two words in time proportional to the length of each word. In this toy example it doesn't matter, but for real applications the difference between \$O(n^2)\$ and \$O(n)\$ can be the difference between a project being feasible and not. The first place to look for optimisations is always the algorithm. | {
"domain": "codereview.stackexchange",
"id": 33237,
"tags": "java"
} |
Why doesn't a Gaussian beam converge to a point? | Question: No matter what lens is put in the beam path of a Gaussian beam, it will always go through a waist of non-zero width.
Why not just a point?
I know the maths, I'm wondering whether there is any physics that prevents it.
Answer: While I concur that you may use the uncertainty principle to understand this, it isn't necessary. If you have a classical EM field that's governed by the nice wave equation derived from Maxwell's equations, then you can compute a diffraction integral that tells you that you must have a finite waist, even if the far-field divergence is very large. | {
"domain": "physics.stackexchange",
"id": 27462,
"tags": "optics, geometric-optics, laser-interaction"
} |
Does existence of a total $\mathsf{NP}$ search problem not solvable in polytime imply $\mathsf{NP}\cap\mathsf{coNP} \neq \mathsf{P}$? | Question: It easy to see that if $\mathsf{NP}\cap\mathsf{coNP} \neq \mathsf{P}$ then there are total $\mathsf{NP}$ search problems which cannot be solved in polynomial time (create a total search problem by having both the witnesses for membership and the witnesses for nonmembership).
Is the converse also true, i.e.
Does existence of a total $\mathsf{NP}$ search problem not solvable in polynomial time imply $\mathsf{NP}\cap\mathsf{coNP} \neq \mathsf{P}$?
Answer: I assume that P, NP, and coNP in the question are classes of languages, not classes of promise problems. I use the same convention in this answer. (Just in case, if you are talking about classes of promise problems, then the answer is affirmative because P = NP∩coNP as classes of promise problems is equivalent to P = NP.)
Then the answer is negative in a relativized world.
The statement TFNP ⊆ FP is known as Proposition Q in the literature [FFNR03]. There is a weaker statement called Proposition Q’ [FFNR03] that every total NPMV relation with one-bit answers is in FP. (Here a relation with one-bit answers means a subset of {0,1}*×{0,1}.) It is easy to see that Proposition Q relative to some oracle implies Proposition Q’ relative to the same oracle.
Fortnow and Rogers [FR02] considered the relationships between the statement P = NP∩coNP, Proposition Q’, and a few other related statements in relativized worlds. In particular, Theorem 3.2 (or Theorem 3.3) in [FR02] implies that there is an oracle relative to which P = NP∩coNP but Proposition Q’ does not hold (and therefore Proposition Q does not hold, either). Therefore, in a relativized world, P = NP∩coNP does not imply Proposition Q; or by taking contrapositive, existence of TFNP relation which cannot be computed in polynomial time does not imply P ≠ NP∩coNP.
References
[FFNR03] Stephen A. Fenner, Lance Fortnow, Ashish V. Naik, and John D. Rogers. Inverting onto functions. Information and Computation, 186(1):90–103, Oct. 2003. DOI: 10.1016/S0890-5401(03)00119-6.
[FR02] Lance Fortnow and John D. Rogers. Separability and one-way functions. Computational Complexity, 11(3–4):137–157, June 2002. DOI: 10.1007/s00037-002-0173-4. | {
"domain": "cstheory.stackexchange",
"id": 1905,
"tags": "cc.complexity-theory, reference-request, search-problem"
} |
Cosmetic appearances of multi-controlled Z gate in MCMT Qiskit | Question: Using from qiskit.circuit.library I can import MCMT to create shorthand multi-controlled gates,
e.g.
from qiskit.circuit.library import MCMT
c3z = MCMT('z', num_ctrl_qubits=3, num_target_qubits=1) #define the 3-controlled z gate a.k.a. cccz gate
c3z.decompose().draw(output='mpl')
upon running the code you see the decomposed c3z gate:
if you append this gate to a quantum circuit,
from qiskit import QuantumCircuit
circ.clear()
circ = QuantumCircuit(4)
circ.append(c3z,[0,1,2,3])
circ.draw('mpl')
you see the following:
2 questions:
1.) If I add the line
circ.decompose(c3z)
(see https://qiskit.org/documentation/stubs/qiskit.circuit.QuantumCircuit.decompose.html) to the above then the code becomes
from qiskit import QuantumCircuit
circ.clear()
circ = QuantumCircuit(4)
circ.append(c3z,[0,1,2,3])
circ.decompose(c3z)
circ.draw('mpl')
And I'd expect the output to resemble the first image, not the second. However, the circuit is not decomposed by adding this line, and I just see the purple rectangle instead.
2.)
The cz/ccz gates in Qiskit have a nice appearance:
circ.clear()
circ = QuantumCircuit(4)
circ.cz(0,1)
circ.ccz(0,1,2)
circ.draw('mpl')
is there any way to get my c3z gate to follow this trend and look neat and tidy?
Many thanks.
Answer: I don't think this is possible if you use the MCMT class to create your custom gate. However, if you start from a simple ZGate and transform it into your multi-controlled c3z gate by the Gate.control method, then the circuit should be finally drawn as you wish:
from qiskit import QuantumCircuit
from qiskit.circuit.library import ZGate
circ = QuantumCircuit(4)
c3z = ZGate().control(num_ctrl_qubits=3, ctrl_state='111')
circ.append(c3z, [0,1,2,3])
circ.draw('mpl') | {
"domain": "quantumcomputing.stackexchange",
"id": 4742,
"tags": "qiskit, programming"
} |
Why is my $p$-$T$ graph gradient increasing for decreasing radius when it should be decreasing in hard sphere collisions? | Question: Why is my $p$-$T$ gradient decreasing for increasing radius of spheres when it should be increasing?
I'm simulating some hard sphere collisions with
$Radius_{container}=10$, and varying pressures and temperatures. I measured my pressure as average impulse/time $\cdot$ circumference of the container, and used $E_{k}=k_BT$.
There are 150 balls, the mass of each ball is the atomic mass of helium, the mass of the container is very large so it does not move. Calculated from 1000 collisions.
I used $p=\frac{Nk_BT}{V-Nb}$ to fit the straight lines; since $b$ increases with the radius of the constituent sphere, increasing the radius should increase the gradient $\frac{Nk_B}{V-Nb}$. But it's the total opposite according to my graph? (The top line corresponds to Rb=0.01.)
Answer: Your gradient seems to be inversely proportional to $Rb$ and maybe $Rb$ for the top orange line should be $0.005$.
I measured my pressure using average impulse/time ⋅circumference of the container
Have you calculated the time for the balls to go across the container (correct) or the time using the radius of the ball and the speed? | {
"domain": "physics.stackexchange",
"id": 84764,
"tags": "thermodynamics, pressure, temperature, ideal-gas, gas"
} |
Tensor decomposition of $\partial_\mu A_\nu$ | Question: In the decomposition of a rank-2 Minkowski tensor into irreducible representations, I expect the 16 components of the tensor product $M_\mu N_\nu$ to reduce to the sum of a scalar (1), a rank-2 anti-symmetric tensor (6), and a rank-2 traceless symmetric tensor (9). Applying this to $\partial_\mu A_\nu$, where $A_\nu$ is the electromagnetic 4-potential, I find the usual anti-symmetric field tensor $F_{\mu\nu}$, and the the 4-divergence $\partial\cdot A$ (which vanishes in Lorentz gauge). But I don't recognize the traceless symmetric tensor. Have I missed something in the decomposition, messed up my indices, or forgotten something from E & M?
thanks!
Answer: I am confused by what is meant by "physical interpretation". In E&M only gauge invariant quantities have physical interpretations. And, only the antisymmetric tensor is gauge invariant, so no other combination has a physical interpretation, in E&M. That is a possible answer. If $A$ is not the gauge field for E&M and/or $\partial$ is not the gradient, then the traceless symmetric tensor can have a physical interpretation. But that depends on what $A$ and $\partial$ are. I cannot immediately think of good examples of such tensors that have physical interpretations. In general relativity $\partial x_\mu^\prime/\partial x_\nu$ is an important tensor but not gauge invariant, and so is not really physical. In (Euclidean) elasticity theory the traceless symmetric part of this tensor is the strain, and parts of this tensor also have a similar interpretation in GR if $x$ is a flat-space background and $x^\prime$ is a slight perturbation from that flat-space background, I believe. But I cannot immediately think of traceless symmetric relativistic tensors that are (a) that easy to construct and (b) have physical interpretations. | {
"domain": "physics.stackexchange",
"id": 14066,
"tags": "electromagnetism, special-relativity"
} |
How is the chirality for the weak interaction conserved for non-relativistic neutrinos? | Question: In this article, one can read that the neutrinos in the cosmic neutrino background have a speed of about 1/50 of the speed of light, which is clearly non-relativistic.
From the viewpoint of, say, cosmic rays emerging from the sun, these slow neutrinos can change helicity. A left-handed neutrino then changes into a right-handed one.
Now, the weak interaction affects only left-handed particles and right-handed anti-particles. Does this mean that a left-handed neutrino, as seen from an observer at rest wrt co-moving coordinates, participating in a weak process, cannot participate in the same process, as seen from the frame in which high-energy cosmic rays are at rest and from which the neutrino seems right-handed? How is the chirality of the weak interaction conserved? Are non-relativistic neutrinos behaving like, say, electrons, which can have both right-handed and left-handed versions but still behave chirally?
Answer: I am unclear what you are asking, but here is the connection between slow fermions' helicity and chirality. From (6.38), for ultrarelativistic fermions, $\kappa= \frac{p}{m+E}$ goes to 1, so positive helicity ones are almost pure right-chiral, and negative helicity ones are almost pure left-chiral.
Not your case. For speed v ~1/50, you have $\kappa\approx 0.01$, so, by contrast, positive helicity fermions are almost equal parts right-chiral and left-chiral, and similarly for negative helicity.
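To see where the quoted value comes from, write $p=\gamma m v$ and $E=\gamma m$ in units where $c=1$:

```latex
\kappa=\frac{p}{m+E}=\frac{\gamma m v}{m+\gamma m}=\frac{\gamma v}{1+\gamma}\approx\frac{v}{2}\qquad(v\ll 1),
```

so $v=1/50$ gives $\kappa\approx 1/100$, while in the ultrarelativistic limit $E\approx p\gg m$ one recovers $\kappa\to 1$, matching both limits above.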
This holds for both electrons and Dirac neutrinos, of the same speed.
From the viewpoint of, say, cosmic rays emerging from the sun, these slow neutrinos can change helicity. A left-handed neutrino then changes into a right-handed one.
No, really slow neutrinos produced by weak interactions change chirality, as per above. In their frame, only about half will interact weakly.
Are non-relativistic neutrinos behaving like, say, electrons, which can have both right-handed and left-handed versions but still behave chirally?
Indeed, slow Dirac neutrinos and electrons share in that feature: about half of them are liable to interact weakly in that frame. However, note there is at least a ratio of a million between electron and neutrino masses, so you must scale them appropriately. Electrons here are a mental crutch...
PS. Solar neutrinos, however, are ultrarelativistic, since their energy is ~ 300keV... | {
"domain": "physics.stackexchange",
"id": 97435,
"tags": "neutrinos, weak-interaction, chirality, helicity"
} |
Dilation operator acting on $x$-dependent field | Question: I've been studying conformal field theory (CFT) and got the following "apparent" inconsistency. Let's take dilation ($D$) and translation ($P_\mu$, 4-momentum) generators that according to CFT are such that
$$
[D, P_\mu] = iP_\mu,\quad [D, \Phi(0)] = i\Delta\Phi(0),\quad \Delta \in \mathbb{R} \tag1
$$
Now we are looking for $[P_\mu, \Phi(x)]$ and $[D, \Phi(x)]$ for an arbitrary field $\Phi(x)$ (it may be scalar, spinor, vector, etc). Let's start with the first one and use the Heisenberg picture to make explicit the $x$-dependence of $\Phi$:
$$
\partial_\mu\Phi(x) = \partial_\mu e^{iPx}\Phi(0)e^{-iPx} = i[P_\mu, \Phi(x)] \implies [P_\mu, \Phi(x)] = -i\partial_\mu\Phi(x) \tag2
$$
Let's go now to the 2nd commutator:
$$
[D, \Phi(x)] = De^{iPx}\Phi(0)e^{-iPx} - e^{iPx}\Phi(0)e^{-iPx}D = e^{iPx}([\hat{D}, \Phi(0)])e^{-iPx} \tag3
$$
with
$$
\hat{D} = e^{-iPx}De^{iPx} = D - ix^\mu[P_\mu, D]\ \mbox{(obtained by Taylor expansion of exp.)}\tag4
$$
Now, using (1) and (4) into (3), the final result is
$$
[D, \Phi(x)] = i(\Delta + (x\partial))\Phi(x) \tag5
$$
But this contradicts first commutator in (1). Because of (5), $D = i(\Delta + (x\partial))$ whose commutator with $P_\mu = -i\partial_\mu$ is $[D, P_\mu] = [i(\Delta + (x\partial)), -i\partial_\mu] = -i(-i\partial_\mu) = -iP_\mu$
Isn't this a contradiction?
Answer: Let $Q_{\xi}$ be a charge that generates some symmetry $\delta_\xi$. Then,
$$
[ Q_\xi , \Phi ] = \delta_\xi \Phi .
$$
Then,
\begin{align}
[ [ Q_\xi , Q_{\xi'} ] , \Phi ] &= [ Q_\xi , [ Q_{\xi'} , \Phi ] ] + [ [ Q_\xi , \Phi ] , Q_{\xi'} ] \\
&= [ Q_\xi , \delta_{\xi'} \Phi ] + [ \delta_\xi \Phi , Q_{\xi'} ] \\
&= \delta_{\xi'} [ Q_\xi , \Phi ] - \delta_\xi [Q_{\xi'} , \Phi ] \\
&= \delta_{\xi'} \delta_\xi \Phi - \delta_\xi \delta_{\xi'} \Phi \\
&= - [ \delta_\xi , \delta_{\xi'} ] \Phi
\end{align}
Note that the commutator of charges generates the commutator of transformations with an extra minus sign! This resolves your contradiction.
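The sign flip is easy to check symbolically for the representation in the question, $D \to i(\Delta + x\partial_x)$ and $P \to -i\partial_x$ (a sympy sketch, in one dimension for simplicity):

```python
import sympy as sp

x, Delta = sp.symbols("x Delta")
f = sp.Function("f")(x)

D = lambda g: sp.I * (Delta * g + x * sp.diff(g, x))  # i(Delta + x d/dx) acting on g
P = lambda g: -sp.I * sp.diff(g, x)                   # -i d/dx acting on g

commutator = sp.expand(D(P(f)) - P(D(f)))             # [D, P] applied to f

# The operators acting on fields obey [D, P] = -i P, the opposite sign
# to the charge algebra [D, P] = +i P quoted in the question.
assert sp.simplify(commutator - (-sp.I) * P(f)) == 0
```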
Let me add in a bit of information regarding symmetries and transformations. The finite version of a symmetry transformation is
$$
U \Phi U^{-1} = R(U^{-1}) \Phi
$$
where $R(U)$ is the representation of $U$ under which $\Phi$ transforms. Note that the argument of $U$ on the RHS is $U^{-1}$ and not $U$. To understand this, we consider the transformation
$$
(U_1U_2) \Phi (U_1 U_2)^{-1} = R( (U_1 U_2 )^{-1} ) \Phi
$$
Alternatively,
\begin{align}
(U_1U_2) \Phi (U_1 U_2)^{-1} &= U_1U_2 \Phi U_2^{-1} U_1^{-1} \\
&= U_1 R(U_2^{-1}) \Phi U_1^{-1} \\
&= R(U_2^{-1}) U_1 \Phi U_1^{-1} \\
&= R(U_2^{-1}) R(U_1^{-1}) \Phi .
\end{align}
It follows that,
$$
R(U_2^{-1} U_1^{-1} ) = R(U_2^{-1}) R(U_1^{-1})
$$
or equivalently, $R(U_1U_2)=R(U_1)R(U_2)$. In other words, $R$ satisfies the same group multiplication law as $U$ so it is indeed a representation.
We can now relate this to infinitesimal transformations.
$$
U = \exp [ Q_\xi ] , \qquad R(U) = \exp [ R (Q_\xi ) ] = \exp [ - \delta_\xi ] .
$$
Note that $- \delta_\xi$ is the representation of $Q_\xi$ (not $+\delta_\xi$). Substituting this and expanding to first order in $\xi$, we find
$$
[ Q_\xi , \Phi ] = + \delta_\xi \Phi .
$$
The extra minus sign between $Q_\xi$ and $\delta_\xi$ also implies that the algebra of the charges $Q_\xi$ has an extra sign compared to the algebra of $\delta_\xi$,
\begin{align}
[Q_{\xi_1} , Q_{\xi_2} ] &= Q_{\xi_3} \\
\implies R ( [Q_{\xi_1} , Q_{\xi_2} ] ) &= R ( Q_{\xi_3} ) \\
\implies [ R ( Q_{\xi_1} ) , R ( Q_{\xi_2} )] &= R ( Q_{\xi_3} ) \\
\implies [ -\delta_{\xi_1}, -\delta_{\xi_2} ] &= -\delta_{\xi_3} \\
\implies [ \delta_{\xi_1}, \delta_{\xi_2} ] &= -\delta_{\xi_3} \\
\end{align} | {
"domain": "physics.stackexchange",
"id": 87579,
"tags": "quantum-field-theory, group-theory, conformal-field-theory, commutator"
} |
Is it possible for a sound to be louder as you move away from it? | Question: I was asked a puzzling question/thought experiment:
Given the source of a sound in a wide open field, so acoustics do not play a role, is it possible for the sound to be louder as you move away from it?
My answer was instinctively no. As you move away from sound it dissipates, so it should not be louder.
The response is that, if there is another, weaker source of sound closer to you, then as you walk away the closer source loses strength and the farther source "shines" through better.
This doesn't feel right to me. Surely when you move away from the sound, the farther one dissipates as well as the nearer one? Is this potentially because of the inverse square law?
edit: This question probably applies in a similar way to light. Not quite sure what the right tags should be for this
Answer: No, but it can sound like it.
If there are two sounds that have similar frequencies, we will only hear the louder one (this phenomenon is used to help compress music files). This is called auditory masking. Now, if you have a loud sound far away and a less loud sound close to you, the one close to you will sound louder and will mask the farther-away one. If you move away from both of them, then because of the inverse square law the near sound will get softer faster than the far-away one, so the far-away sound will no longer be masked. This may sound to you like the farther-away sound is getting louder as you move away from it. (Even though it isn't in reality.)
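The inverse-square law makes this easy to check numerically. The sketch below (plain Python; the source powers and distances are made-up numbers for illustration, not from the question) computes free-field sound pressure levels for a weak nearby point source and a strong distant one as the listener walks away: the near level falls faster, so the far source eventually dominates even though both levels are monotonically decreasing.

```python
import math

I0 = 1e-12  # reference intensity in W/m^2

def spl(distance_m, power_w):
    """Free-field sound pressure level (dB) of a point source,
    using the inverse-square law I = P / (4*pi*r^2)."""
    intensity = power_w / (4 * math.pi * distance_m ** 2)
    return 10 * math.log10(intensity / I0)

# Hypothetical setup: weak source 2 m away, strong source 50 m away
near_power, far_power = 1e-6, 1e-4  # acoustic powers in watts

for walked in (0, 10, 40):  # metres walked directly away from both sources
    near = spl(2 + walked, near_power)
    far = spl(50 + walked, far_power)
    print(f"{walked:>3} m walked: near {near:5.1f} dB, far {far:5.1f} dB")
```

At the starting point the near source is louder; after walking about 10 m the ranking has flipped. That crossover is the masking effect described above, with no sound ever actually getting louder.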
"domain": "physics.stackexchange",
"id": 14635,
"tags": "waves, acoustics, thought-experiment"
} |
How is it possible to pull out derivatives of a wavefunction? | Question: In an early derivation, the following equation was stated:
$$\frac\partial{\partial t}\lvert\psi\rvert^2 = \frac{i\hbar}{2m}\biggl(\psi^*\frac{\partial^2\psi}{\partial x^2} - \frac{\partial^2\psi^*}{\partial x^2}\psi\biggr) = \frac\partial{\partial x}\biggl[\frac{i\hbar}{2m}\biggl(\psi^*\frac{\partial\psi}{\partial x} - \frac{\partial \psi^*}{\partial x}\psi\biggr)\biggr].$$
It appears as if the $\frac\partial{\partial x}$ was "taken out" from the expression. Is that valid? If so, why can't it be done a second time, i.e.
$$\frac\partial{\partial x}\biggl[\frac{i\hbar}{2m}\biggl(\psi^*\frac{\partial\psi}{\partial x} - \frac{\partial \psi^*}{\partial x}\psi\biggr)\biggr] = \frac{\partial^2}{\partial x^2}\biggl[\frac{i\hbar}{2m}(\psi^*\psi - \psi^*\psi)\biggr] = 0.$$
If this is also valid, wouldn't the original be $\frac\partial{\partial t}\lvert\psi\rvert^2 = 0$?
Answer: He's using the following:
$$
\begin{eqnarray}
\frac{\partial}{\partial x} \left(f \frac{\partial g}{\partial x} - g \frac{\partial f}{\partial x}\right) &=& \frac{\partial f}{\partial x} \frac{\partial g}{\partial x} + f\frac{\partial^2 g}{\partial x^2} - \frac{\partial g}{\partial x} \frac{\partial f}{\partial x} - g\frac{\partial^2 f}{\partial x^2} \\
&=& f\frac{\partial^2 g}{\partial x^2} - g\frac{\partial^2 f}{\partial x^2}
\end{eqnarray}
$$ | {
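The identity is easy to sanity-check numerically. A minimal sketch (plain Python, central finite differences; the two test functions are arbitrary smooth stand-ins for $\psi^*$ and $\psi$, not anything from the question):

```python
import math

h = 1e-4  # finite-difference step

def d1(fn, x):  # first derivative, central difference
    return (fn(x + h) - fn(x - h)) / (2 * h)

def d2(fn, x):  # second derivative, central difference
    return (fn(x + h) - 2 * fn(x) + fn(x - h)) / h ** 2

f = lambda x: math.exp(-x ** 2)   # stand-in for psi*
g = lambda x: math.sin(3 * x)     # stand-in for psi

x0 = 0.7
lhs = f(x0) * d2(g, x0) - g(x0) * d2(f, x0)       # f g'' - g f''
W = lambda x: f(x) * d1(g, x) - g(x) * d1(f, x)   # f g' - g f'
rhs = d1(W, x0)                                   # d/dx (f g' - g f')
print(abs(lhs - rhs))  # small: the two sides agree up to discretization error
```

Note that the second proposed step in the question fails precisely because $\frac{\partial}{\partial x}$ cannot be "pulled out" again: $\psi^*\frac{\partial\psi}{\partial x}$ is not the derivative of $\psi^*\psi$ (that would require the product rule's other term).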
"domain": "physics.stackexchange",
"id": 26872,
"tags": "quantum-mechanics, wavefunction, schroedinger-equation, differentiation"
} |
How can I make this Python code that implements a voting machine look more professional? | Question: I have been teaching myself python in my spare time and so far I think I have figured out the basics. However, I am fairly certain that my code is extremely unprofessional and the way I write it is incorrect. So far, I have been going on a "If it works don't screw with it" philosophy but I would like to learn how to actually write Python correctly. If anyone could just take a look at what I have written that would be great.
Here is my code that implements a voting machine, called "VoteBot":
import time
import os
#colors
os.system('color')
black = lambda text: '\033[0;30m' + text + '\033[0m'
red = lambda text: '\033[0;31m' + text + '\033[0m'
green = lambda text: '\033[0;32m' + text + '\033[0m'
yellow = lambda text: '\033[0;33m' + text + '\033[0m'
blue = lambda text: '\033[0;34m' + text + '\033[0m'
magenta = lambda text: '\033[0;35m' + text + '\033[0m'
cyan = lambda text: '\033[0;36m' + text + '\033[0m'
white = lambda text: '\033[0;37m' + text + '\033[0m'
#Start Menu
print(' --------------------------------------------------------------------------------')
print(' VoteBot v1.5.0')
print(' Copyright 2020.')
print(' Author:')
print(' ')
print(' Hit any key and press enter to begin.')
print(' --------------------------------------------------------------------------------')
vote = input(' ')
#start of main code
while vote == 'start' or '1' or '2' or '3' or '4':
print("\n" * 5000)
print(blue(' --------------------------------------------------------------------------------------------------------'))
print(blue(' @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@'))
print(blue(' --------------------------------------------------------------------------------------------------------'))
print(blue(' @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@'))
print(blue(' --------------------------------------------------------------------------------------------------------'))
print(blue(' @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@'))
print(blue(' @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@'))
print(blue(' @@@@@@@ @@@@@ @@@@ #@@@ @@ /@@@@@@ %@@@@, @@@@@ #@@@ @@@@ @@@@ @@@@@@@'))
print(blue(' @@@@@@@@ @@@& @@& @@@@ @@@@ @@@@@ @@@@@@@@@@@ @@@, @@% @@@@ @@ @@& @@@ @@@@@@@@'))
print(blue(' @@@@@@@@@ @@ %@@ @@@@@@ (@@@ @@@@@ @@@@@@@ @ @, @@ @@@@@@ (@@ @@ @ .@ /@@@@@@@@'))
print(blue(' @@@@@@@@@@ @ *@@@ @@@@@ @@@@ @@@@@ @@@@@@@@@@@ @@@ * @@ @@@@@ @@@ #@@ @ @@@@@@@@@'))
print(blue(' @@@@@@@@@@@ @@@@@% @@@@@ @@@@@ @@@@@@ @@@@. @@@% @@@@@ @@@/ @@@@@@@@@@'))
print(blue(' @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@'))
print(blue(' @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@'))
print(blue(' --------------------------------------------------------------------------------------------------------'))
print(blue(' @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@'))
print(blue(' --------------------------------------------------------------------------------------------------------'))
print(blue(' @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@'))
print(blue(' --------------------------------------------------------------------------------------------------------'))
print(' ')
print(' --------------------------------------------------------------------------------------------------------')
print(' Vote here to choose the winning team. Please choose by entering the number corresponding to your choice.')
print(' --------------------------------------------------------------------------------------------------------')
print(red(' 1. Team 1 (Cantidate 1, Cantidate 2)'))
print(yellow(' 2. Team 2 (Cantidate 1, Cantidate 2)'))
print(green(' 3. Team 3 (Cantidate 1, Cantidate 2)'))
print(blue(' 4. Team 4 (Cantidate 1, Cantidate 2)'))
print(' --------------------------------------------------------------------------------------------------------')
print(' Please make your selection now. Remember, if you do not input a NUMBER your vote will not be counted.')
vote = input(' Enter your vote: ')
file = open('votedata.txt', 'a')
file.write(vote + '\n')
print(' The system is adding your vote. The next person can vote in 3 seconds.')
time.sleep(3)
if vote == 'tally':
break
#start of tally
with open("votedata.txt") as fp:
results = {}
for row in fp:
try:
v = int(float(row))
if v not in results:
results[v] = 0
results[v] += 1
except:
print(red("Invalid Numeric entry"))
print(results)
print('The program will automatically shut down in 5 minutes.')
time.sleep(300)
Answer: Some high level coding style notes:
Try to eliminate code that looks like it's copied and pasted (DRY - Don't Repeat Yourself), and define it in functions. Your various color functions are mostly identical with the exception of one digit (the color code) -- you can avoid repeating yourself by defining a function that implements the shared part and defining the colors in terms of that.
Professional coders (especially those working on large shared codebases) rely heavily on static typing to improve correctness and readability; modern Python has built in support for static typing, and while it's optional it goes a long way toward making code look professional IMO. If you get in the habit of using typing and a static type checker (mypy) you'll find that you spend a lot less time debugging silly typos too!
It's generally considered good style to not have lines of code be super long -- opinion varies on this, but modern style guides usually suggest a maximum width of 120 characters, and I personally try to keep it under 80. The fact that your output contains lots of blank space that might make it annoying to view in a normal terminal is more of a UX issue than a code review one, but assuming it's actually a requirement to left-pad everything with 53 spaces, I think that should be implemented in code (again, in a reusable function because DRY) rather than copy+pasted into all your strings.
Again this is more UX than actual coding, but: proofread! It's "candidate", not "cantidate". :)
Here's how I'd apply those notes to the print statements in your code:
from enum import IntEnum
from typing import Optional
# Command prompt colors.
class Color(IntEnum):
"""MS-DOS command prompt color codes."""
BLACK = 0
RED = 1
GREEN = 2
YELLOW = 3
BLUE = 4
MAGENTA = 5
CYAN = 6
WHITE = 7
GRAY = 8
def color_text(text: str, color: Optional[Color]) -> str:
"""Wraps text in the specified color."""
if color is None:
return text
return '\033[0;3' + hex(color.value)[-1] + 'm' + text + '\033[0m'
def pp(text: str, color: Optional[Color] = None, left_padding: int = 53) -> None:
"""Pretty-print text with optional coloring and default left-padding of 53 spaces."""
print(' ' * left_padding + color_text(text, color))
#Start Menu
pp('-' * 80)
pp('VoteBot v1.5.0')
pp('Copyright 2020.')
pp('Author:')
pp('')
pp('Hit any key and press enter to begin.')
pp('-' * 80)
vote = input(' ' * 53)
#start of main code
while vote == 'start' or '1' or '2' or '3' or '4':
pp("\n" * 5000) # this is gross -- maybe use a system("cls") instead?
# ... etc
pp('-' * 80)
pp('Vote here to choose the winning team. '
'Please choose by entering the number corresponding to your choice.')
pp('-' * 80)
pp('1. Team 1 (Candidate 1, Candidate 2)', Color.RED)
pp('2. Team 2 (Candidate 1, Candidate 2)', Color.YELLOW)
pp('3. Team 3 (Candidate 1, Candidate 2)', Color.GREEN)
pp('4. Team 4 (Candidate 1, Candidate 2)', Color.BLUE)
# ... etc
The signature of pp is designed to narrow your output statements and keep the text aligned for readability; the function name itself is abbreviated, the padding is built into the function so the actual arguments don't need to include it, and the color argument goes at the end so that it doesn't cause the width of the line before the text to vary.
Colors have been defined in an IntEnum because that makes it impossible (with static typechecking) to pass anything that's not a valid color into the color_text function; the alternative would be to use a regular int (or worse yet, the str representation) and then validate at runtime that it falls within the expected range. | {
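The same style notes apply to the tally loop at the end of the question. As a hedged sketch (my own addition, not part of the original answer): `collections.Counter` plus catching only `ValueError` removes both the manual dict bookkeeping and the bare `except:`:

```python
from collections import Counter
from typing import Iterable, Tuple

def tally_votes(rows: Iterable[str]) -> Tuple[Counter, int]:
    """Count numeric votes; returns (counts, number of malformed rows)."""
    results: Counter = Counter()
    invalid = 0
    for row in rows:
        try:
            results[int(float(row))] += 1
        except ValueError:
            invalid += 1
    return results, invalid

# Example with made-up file contents
counts, bad = tally_votes(['1\n', '2\n', '2\n', 'abc\n', '3.0\n', '2\n'])
print(dict(counts), bad)  # {1: 1, 2: 3, 3: 1} 1
```

Putting the logic in a typed function also makes it trivially unit-testable, which is another habit that reads as professional.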
"domain": "codereview.stackexchange",
"id": 37433,
"tags": "python"
} |
How to draw L and D configuration for isoleucine and threonine? | Question: I'm having difficulty drawing the D and L configurations of these two amino acids because, unlike the others, they have two chiral carbons (specifically α and β).
In particular, in my professor's slides (I'll share the picture), the representation of L-isoleucine has both the amino group and the methyl group on the left side.
Instead, in the representation of L-threonine only the amino group is on the left, while the alcohol group is on the right (I expected it to be on the left side too).
Why doesn't L-threonine have the OH on the left side?
Answer: There are four stereoisomers of threonine. One of them is D-threonine, its enantiomer is L-threonine, and the other two are not called threonine (because they are diastereomers of threonine, so they have different physical properties).
So the D/L nomenclature is a way to distinguish enantiomers. For a history of how D or L were assigned, see https://chemistry.stackexchange.com/a/50561/72973.
I'm having difficulty drawing the D and L configurations of these two amino acids
You can't draw these from first principles; you have to look up the stereochemistry. This is different from the R/S nomenclature (where the complete stereochemical information is available from the R or S assigned to the "chiral" carbons).
This is illustrated nicely for the sugars glucose and galactose. Both of them have a D and an L enantiomer, but that does not tell you the configuration of the other 4 stereocenters. | {
"domain": "chemistry.stackexchange",
"id": 17110,
"tags": "amino-acids"
} |
(Why) is $NP\subseteq coNP/poly$ same as $coNP\subseteq NP/poly$? | Question: If I rememeber right, I read somewhere that $NP\subseteq coNP/poly$ is the same as $coNP\subseteq NP/poly$. Is this true? If yes, is there a relatively simple proof for this?
Definitions
Class $NP/poly$.
We say that a language $L$ belongs to the complexity class $NP/poly$ if there is a Turing machine $M$ and a sequence of strings $\{a_n\colon n\in\mathbb{N}\}$ called advice, such that the following hold.
Machine $M$, when given an input $x$ of length $n$, has access to the string $a_n$ and has to decide whether $x\in L$. Machine $M$ works in nondeterministic polynomial time.
$|a_n|\leq p(n)$ for some polynomial $p$.
$coNP/poly$ is the complement of $NP/poly$.
Is the following definition correct?
Class $coNP/poly$.
We say that a language $L$ belongs to the complexity class $coNP/poly$ if there is a Turing machine $M$ and a sequence of strings $\{a_n\colon n\in\mathbb{N}\}$ called advice, such that the following hold.
Machine $M$, when given an input $x$ of length $n$, has access to the string $a_n$ and has to decide whether $x\notin L$. Machine $M$ works in nondeterministic polynomial time.
$|a_n|\leq p(n)$ for some polynomial $p$.
Answer: By definition,
$$
L \in \mathsf{coNP} \Longleftrightarrow \overline{L} \in \mathsf{NP} \\
L \in \mathsf{coNP/poly} \Longleftrightarrow \overline{L} \in \mathsf{NP/poly}
$$
From this you can easily deduce your claim. | {
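Spelling out the deduction in one direction (the other is symmetric under swapping $\mathsf{NP}$ and $\mathsf{coNP}$): assume $\mathsf{NP}\subseteq\mathsf{coNP/poly}$ and take any $L\in\mathsf{coNP}$. Then
\begin{align}
L \in \mathsf{coNP} &\implies \overline{L} \in \mathsf{NP} \subseteq \mathsf{coNP/poly} \\
&\implies \overline{\overline{L}} = L \in \mathsf{NP/poly},
\end{align}
so $\mathsf{coNP}\subseteq\mathsf{NP/poly}$. Note that the same advice strings work for $L$ and $\overline{L}$.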
"domain": "cs.stackexchange",
"id": 20069,
"tags": "complexity-theory, complexity-classes"
} |
Local symplectic transformations on Gaussian states | Question: According to Eqn. $(18)$ of this paper, given a two mode Gaussian state with the $4 \times 4$ covariance matrix $\sigma$, it is possible to find a symplectic matrix $S = S_1 \oplus S_2$, where $S_{1, 2}$ are local $2 \times 2$ symplectic matrices, such that:
$$\sigma' = S \sigma S^T = \begin{pmatrix}a & 0 & c_1 & 0
\\
0 & a & 0 & c_2
\\ c_1 & 0 & b & 0 \\
0 & c_2 & 0 & b
\end{pmatrix} $$
Where $a, b, c_{1, 2}$ are real. How to show this? I can see how the diagonal blocks can become diagonal, from the fact that the covariance matrices can be diagonalized using symplectic matrices. But how come the off diagonal terms are diagonalized? Is there any reference which derives this result?
If $$S = \begin{bmatrix}S_1 & 0 \\ 0 & S_2 \end{bmatrix}$$ and $$\sigma = \begin{bmatrix}\sigma_A & \sigma_{AB} \\ \sigma_{AB}^T & \sigma_B \end{bmatrix},$$ then after the transformation
$$\sigma' =
\begin{bmatrix}
S_1 \sigma_A S_1^T & S_1 \sigma_{AB} S_2^T \\
S_2 \sigma_{AB}^T S_1^T & S_2 \sigma_B S_2^T
\end{bmatrix}
$$
According to Williamson's theorem, we can choose $S_{1, 2}$ to diagonalize $\sigma_{A, B}$, so the diagonal blocks can be made diagonal. What about the off diagonal terms?
Moreover, can this be extended to larger systems, say $4$ mode systems with $8 \times 8$ covariance matrices?
Answer: The trick here is that two-dimensional orthogonal matrices (rotations) are also symplectic matrices. They correspond to phase shifts of phase space, as explained more carefully here [1].
\begin{equation}
R(\phi) = \begin{pmatrix} \cos(\phi) & -\sin(\phi) \\
\sin(\phi) & \cos(\phi) \end{pmatrix}
\end{equation}
So, after applying $S_1 \oplus S_2$ to bring $\sigma$ to the form
\begin{equation}
\sigma = \begin{pmatrix} a\,\mathbb{1}_{2\times 2} & C \\ C^T & b\,\mathbb{1}_{2\times 2} \end{pmatrix}
\end{equation}
you can then decompose $C$ using the singular value decomposition as $C = R(\phi_1) D R( - \phi_2)$, where $D = \begin{pmatrix} c_1 & 0 \\ 0 & c_2\end{pmatrix}$ and then apply the symplectic transformation $R(\phi_1) \oplus R(\phi_2)$ to bring the covariance matrix to your desired form. I do not think that this works exactly like this for larger systems since the orthogonal matrices are contained in the symplectic matrices only for the special case of two dimensions.
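The two-mode case can be checked end-to-end with a closed-form $2\times 2$ SVD. The sketch below (plain Python; the entries of the off-diagonal block $C$ are made-up numbers) recovers $\phi_1$, $(c_1, c_2)$, $\phi_2$ such that $C = R(\phi_1)\,\mathrm{diag}(c_1, c_2)\,R(-\phi_2)$. Note $c_2$ is allowed to come out negative, which is fine here since the covariance-matrix entries $c_{1,2}$ are only required to be real:

```python
import math

def rot(phi):
    """2x2 rotation R(phi) -- orthogonal and, in 2D, also symplectic."""
    c, s = math.cos(phi), math.sin(phi)
    return [[c, -s], [s, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def svd_2x2(C):
    """Closed-form 2x2 SVD: C = R(phi1) @ diag(c1, c2) @ R(-phi2)."""
    (a, b), (c, d) = C
    E, F = (a + d) / 2, (a - d) / 2
    G, H = (c + b) / 2, (c - b) / 2
    Q, R = math.hypot(E, H), math.hypot(F, G)
    c1, c2 = Q + R, Q - R               # c2 may be negative
    a1, a2 = math.atan2(G, F), math.atan2(H, E)
    phi1, phi2 = (a1 + a2) / 2, (a1 - a2) / 2
    return phi1, (c1, c2), phi2

C = [[0.9, 0.3], [-0.2, 0.5]]           # hypothetical off-diagonal block
phi1, (c1, c2), phi2 = svd_2x2(C)
D = [[c1, 0.0], [0.0, c2]]
C_rec = matmul(matmul(rot(phi1), D), rot(-phi2))
err = max(abs(C[i][j] - C_rec[i][j]) for i in range(2) for j in range(2))
print(err < 1e-12)
```

Applying $R(\phi_1)^T \oplus R(\phi_2)^T$ on top of the Williamson step then leaves the diagonal blocks proportional to the identity (hence still diagonal) while bringing the off-diagonal block to $\mathrm{diag}(c_1, c_2)$, which is the claimed standard form.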
[1] https://arxiv.org/pdf/1401.4679.pdf | {
"domain": "physics.stackexchange",
"id": 93552,
"tags": "quantum-mechanics, hilbert-space, quantum-information, quantum-optics, quantum-states"
} |
How likely is it that Watson-like AI will replace Wikipedia-like encyclopedias? | Question: Is there any risk in the near future of replacing all encyclopedias with Watson-like AI where knowledge is accessible by everybody through API?
Something similar happened in the future in The Time Machine movie from 2002.
Obviously, maintaining 40 million articles and keeping them up to date and consistent could be beyond the brainpower of a few thousand active editors, not to mention the thousands of other encyclopedias (including paperback versions) and the large number of books used by universities, which need to be updated every year by a huge number of people.
What are the pros and cons of such a change?
Answer: I get the impression that (perhaps even more than Bluemix) this is what the Wolfram Language is looking to offer in the longer term.
Seems to me that the main pros and cons are two sides of the same coin:
With Wikipedia, there's no 'search filter' between you and the text. Adding an algorithmic level of indirection between the user and the knowledge that they're looking for is subject to hidden biases.
If those biases are intended in your best interests, and the search is context-sensitive enough to present you with information in the form that is most useful and digestible to you, then this is a good thing. Otherwise, not. Like many topics in AI, problems arise because we're simply not that good at modelling human context yet.
Of course, we're already subject to this filter bubble effect via search engines and social media. The current consensus seems to be that even more of this would not be a good thing for society. | {
"domain": "ai.stackexchange",
"id": 35,
"tags": "watson"
} |