anchor stringlengths 0 150 | positive stringlengths 0 96k | source dict |
|---|---|---|
List of binary numbers: How many positions have a one and zero | Question: I have a list of integers, e.g. i=[1,7,3,1,5] which I first transform to a list of the respective binary representations of length L, e.g. b=["001","111","011","001","101"] with L=3.
Now I want to compute at how many of the L positions in the binary representation there is both a 1 and a 0. In my example the result would be 2, since there is always a 1 in the third (last) position for these entries. I want to compute this inside a function with a numba decorator.
Currently my code is:
@nb.njit
def count_mixed_bits_v2(lst):
    andnumber = lst[0] & lst[1]
    ornumber = lst[0] | lst[1]
    for i in range(1, len(lst)-1):
        andnumber = andnumber & lst[i+1]
        ornumber = ornumber | lst[i+1]
    xornumber = andnumber ^ ornumber
    result = 0
    while xornumber > 0:
        result += xornumber & 1
        xornumber = xornumber >> 1
    return result
First I take the AND of all numbers, and also the OR of all numbers; the XOR of those two results will have a 1 wherever the condition is fulfilled. In the end I count the number of 1's in the binary representation. My code seems a bit lengthy and I'm wondering if it could be more efficient as well. Thanks for any comment!
Edit: Without the numba decorator the following function works:
from functools import reduce
from operator import and_, or_

def count_mixed_bits(lst):
    xor = reduce(and_, lst) ^ reduce(or_, lst)
    return bin(xor).count("1")
(Credit to trincot)
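As a standalone sanity check (my addition, not part of the original post), the reduce approach can be exercised on the question's own example list:

```python
from functools import reduce
from operator import and_, or_

def count_mixed_bits(lst):
    # AND has a 1 only where every number has a 1; OR has a 1 where any does.
    # XOR of the two marks exactly the "mixed" positions (some 1s, some 0s).
    xor = reduce(and_, lst) ^ reduce(or_, lst)
    return bin(xor).count("1")

print(count_mixed_bits([1, 7, 3, 1, 5]))  # 2 for the question's example
```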
Answer: I don't know numba, but here's a little rewrite:
Shorter variable names like and_, using the underscore as suggested by PEP 8 ("used by convention to avoid conflicts with Python keyword") and as done by operator.and_.
Yours crashes if the list has fewer than two elements; I start with neutral values instead.
Looping over the list elements rather than the indexes.
Using augmented assignments like &=.
In the result loop, drop the last 1-bit so you only have as many iterations as there are 1-bits.
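The last bullet is the classic bit-clearing trick: x & (x - 1) drops the lowest set bit, so the loop runs once per 1-bit. It can be verified in isolation (illustrative snippet, not from the answer):

```python
def popcount(x):
    # Kernighan's trick: each iteration clears the lowest set bit,
    # so the loop count equals the number of 1-bits.
    count = 0
    while x:
        x &= x - 1
        count += 1
    return count

print(popcount(0b110))  # 2
```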
def count_mixed_bits(lst):
    and_, or_ = ~0, 0
    for x in lst:
        and_ &= x
        or_ |= x
    xor_ = and_ ^ or_
    result = 0
    while xor_ > 0:
        result += 1
        xor_ &= xor_ - 1
    return result | {
"domain": "codereview.stackexchange",
"id": 39929,
"tags": "python, numba"
} |
Load txt to Scripting.Dictionary | Question: This code loads a 40MB txt file into a dictionary. It runs in about 40 seconds (sometimes 20, no idea why). Is there a way to make it run under 4 seconds, or hopefully in 1 second?
Sub ScriptDic()
Dim FileNum As Integer
Dim DataLine As String
Dim tmp As Variant
Dim Dict As Object
Dim duplicatecount As Object
Dim key As String
Dim count As Long
Set Dict = CreateObject("Scripting.Dictionary")
Set duplicatecount = CreateObject("Scripting.Dictionary")
Sheets("Control").Cells(5, 3).Value2 = Now
Filename = "C:\Users\MyFolder\Documents\PerformanceTests\MyData.txt"
FileNum = FreeFile()
Open Filename For Input As #FileNum
While Not EOF(FileNum)
Line Input #FileNum, DataLine ' read in data 1 line at a time
tmp = Split(DataLine, Chr(9))
key = tmp(7) & "-" & tmp(8)
If Not Dict.exists(key) Then
Dict.Add key, tmp
duplicatecount.Add key, 1
Else
count = duplicatecount(key)
duplicatecount.Remove (key)
Dict.Add key & ">" & count, tmp
duplicatecount.Add key, count + 1
End If
Wend
Sheets("Control").Cells(6, 3).Value2 = Now
End Sub
Answer: I assume you're working with a 40MB file instead of a 40GB file. The performance will vary greatly depending upon:
the length of each line in the file
the number of lines in the file
the number of uniquely keyed-lines in the file
I've worked with a contrived data-file that is 5.127MB in size and has 250,000 data rows like:
a b c d e f g h i j
b c d e f g h i j k
a b c d e f g h i j
The file has nearly all duplicated records, so I'm pushing the limits on the unique key approach. A file with all unique values will perform differently.
Using your code against the 5MB file, it runs in 6.18s. If I only read the file, it reads in 0.11s. If I only read the file, split each line and build a key, it runs in 0.76s. So approximately 5.42s, or 88% of the duration is related to dictionary manipulation.
So, what can we do in VBA, to improve your code
Option Explicit
You haven't included it in your code, so I assume it isn't declared.
Declare filename As String
The filename variable isn't declared, although you may have declared it at global scope. Option Explicit shows you this right away.
Dim filename As String
Declare tmp as a String Array
tmp is assigned with the Split function, which returns a String array, so declare tmp accordingly and you'll use a great deal less memory.
Dim tmp() As String
Early Binding
You're using CreateObject("Scripting.Dictionary"), so your code will fail anyway if Microsoft Scripting Runtime isn't available. But more importantly, by using late binding, COM has to work harder to find the methods you're using. It's much better to early-bind by adding a reference to Microsoft Scripting Runtime:
Dictionary variable names
You've used ambiguous names: Dict and duplicatecount. You'd be better off with more meaningful names (I don't know what your data is, so I'm using less ambiguous placeholders) like:
Dim allRecords As Scripting.Dictionary
Dim uniqueKeyCounts As Scripting.Dictionary
Set allRecords = New Scripting.Dictionary
Set uniqueKeyCounts = New Scripting.Dictionary
Avoid using magic function/literal values
You're referring to the Tab character Chr(9) which incurs time on every line. VBA has a built-in constant for Tab: vbTab, which makes it more efficient and easier to read.
tmp = Split(DataLine, vbTab)
Unique keys and Dictionary usage
The code to build a unique key and populate the dictionaries is the most expensive part, so let's check the smaller dictionary rather than the larger one.
Also, there's no need to Remove and then Add back, when we can just increment the count of the existing entry.
If uniqueKeyCounts.Exists(key) Then
'Retrieve the current count once
Dim currentCount As Long
currentCount = uniqueKeyCounts.Item(key)
allRecords.Add key & ">" & currentCount, DataLine
'Don't remove and re-add, just increment the counter
uniqueKeyCounts(key) = currentCount + 1
Else
uniqueKeyCounts.Add key, 1
allRecords.Add key, tmp
End If
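The increment-instead-of-remove/re-add pattern generalizes beyond VBA; here is the same bookkeeping sketched in Python purely for illustration (the VBA above is the authoritative version; field indices 7 and 8 mirror tmp(7) & "-" & tmp(8)):

```python
def index_records(rows, key_fields=(7, 8)):
    # rows: already-split lines; the key is built from two fields.
    all_records = {}
    key_counts = {}
    for row in rows:
        key = f"{row[key_fields[0]]}-{row[key_fields[1]]}"
        if key in key_counts:
            # duplicate key: store under a suffixed key, then
            # increment the counter in place -- no remove/re-add
            all_records[f"{key}>{key_counts[key]}"] = row
            key_counts[key] += 1
        else:
            key_counts[key] = 1
            all_records[key] = row
    return all_records, key_counts
```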
Close the file
You've opened the file and read it all, but you've forgotten to close the file:
Close #FileNum
Timing events
Using Now() offers very limited granularity. Timer gives you improved granularity but also a few problems when timed-code runs through midnight, but it avoids the need for win32 functions. If you want better timing see GetTickCount, or better still, QueryPerformanceCounter.
I've used Timer for the sake of simplicity.
Dim start As Double
start = Timer
'...Do stuff...
Debug.Print Timer - start
So, when I put it all together, the file is read in 4.85s, or about 22% faster.
Sub ScriptDic()
'Declare fileName
Dim filename As String
Dim FileNum As Integer
Dim DataLine As String
'Declare tmp as a string array
Dim tmp() As String
'Declare as Scripting.Dictionary to get early-binding benefits
Dim allRecords As Scripting.Dictionary
Dim uniqueKeyCounts As Scripting.Dictionary
Dim key As String
Dim count As Long
'Use early binding
Set allRecords = New Scripting.Dictionary
Set uniqueKeyCounts = New Scripting.Dictionary
Sheets("Control").Cells(5, 3).Value2 = Now
filename = "C:\Temp\test100k.txt"
FileNum = FreeFile()
Dim start As Double
start = Timer
Open filename For Input As #FileNum
Do While Not EOF(FileNum)
Line Input #FileNum, DataLine ' read in data 1 line at a time
'Use the vbTab constant
tmp = Split(DataLine, vbTab)
key = tmp(7) & "-" & tmp(8)
'Check the unique dictionary - it's almost always smaller
If uniqueKeyCounts.Exists(key) Then
'Retrieve the current count once
Dim currentCount As Long
currentCount = uniqueKeyCounts.Item(key)
allRecords.Add key & ">" & currentCount, DataLine
'Don't remove and re-add, just increment the counter
uniqueKeyCounts(key) = currentCount + 1
Else
uniqueKeyCounts.Add key, 1
allRecords.Add key, tmp
End If
Loop
'Close the file
Close #FileNum
Debug.Print Timer - start
Sheets("Control").Cells(6, 3).Value2 = Now
End Sub
But do you really want a Dictionary?
Using the existing and optimized approach, you'll end up with a dictionary that is keyed by what seems to be an arbitrary index. You may have your reasons, but if you're not going to use the features of a dictionary, you might be better off with an array:
This code runs in just 1.56s.
Sub ScriptDic1()
'Declare fileName
Dim filename As String
Dim FileNum As Integer
Dim DataLine As String
'Declare tmp as a string array
Dim tmp() As String
'Declare as Scripting.Dictionary to get early-binding benefits
Dim uniqueKeyCounts As Scripting.Dictionary
Dim key As String
Dim count As Long
Dim lineNumber As Long
'Create an array that is larger than our needs, we'll resize it when we're done.
ReDim allRecords(1, 500000) As Variant
'Use early binding
Set uniqueKeyCounts = New Scripting.Dictionary
Sheets("Control").Cells(5, 3).Value2 = Now
filename = "C:\Temp\test100k.txt"
FileNum = FreeFile()
Dim start As Double
start = Timer
Open filename For Input As #FileNum
Do While Not EOF(FileNum)
Line Input #FileNum, DataLine ' read in data 1 line at a time
'Use the vbTab constant
tmp = Split(DataLine, vbTab)
key = tmp(7) & "-" & tmp(8)
'Check the unique dictionary - it's going to be smaller
If uniqueKeyCounts.Exists(key) Then
'Retrieve the current count once
Dim currentCount As Long
currentCount = uniqueKeyCounts.Item(key)
allRecords(0, lineNumber) = key & ">" & currentCount
allRecords(1, lineNumber) = tmp
'Don't remove and re-add, just increment the counter
uniqueKeyCounts(key) = currentCount + 1
Else
uniqueKeyCounts.Add key, 1
allRecords(0, lineNumber) = key
allRecords(1, lineNumber) = tmp
End If
lineNumber = lineNumber + 1
Loop
ReDim Preserve allRecords(1, lineNumber - 1) As Variant
'Close the file
Close #FileNum
Debug.Print Timer - start
Sheets("Control").Cells(6, 3).Value2 = Now
End Sub | {
"domain": "codereview.stackexchange",
"id": 21965,
"tags": "performance, vba, hash-map"
} |
Is there an equivalent of the red shift effect for cosmic rays? | Question: I had read somewhere that light from very distant sources can be measured to be increasingly red shifted the further away the object is (due to cosmic inflation?).
Suppose you had an object emitting cosmic rays, neutrinos, or other physical matter (not photons); is there an equivalent effect for these objects? E.g. a red-shifted helium nucleus in a cosmic ray, or red-shifted neutrinos coming from a neutrino source?
Answer: Yes, there is a "retardation of the co-moving velocity" of particles. It is important to take it into account to understand the time history of peculiar velocities of galaxies and for determining the energy distribution of cosmic rays coming from other galaxies.
Peculiar velocities, the difference between an object's velocity and the local rest velocity with respect to the microwave background, decay as 1/a(t), if there are no forces. The global scale factor a(t) is the relative size of the universe compared to today. So there is a "redshift" that applies to particles similar to that of the frequency of light.
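As a small numeric illustration of that 1/a decay (my own sketch, using a(today) = 1 and the standard relation a_e = 1/(1 + z)):

```python
def peculiar_velocity_now(v_emitted, a_emitted):
    # peculiar velocity decays as 1/a(t); with a = 1 today,
    # v_now = v_emitted * a_emitted
    return v_emitted * a_emitted

def velocity_ratio(z):
    # v_o / v_e = 1 / (1 + z), since a_e = 1 / (1 + z)
    return 1.0 / (1.0 + z)

print(velocity_ratio(1.0))  # 0.5: a particle emitted at z = 1 has lost
                            # half its peculiar velocity by today
```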
The ratio of "observed" velocity to "emitted" velocity is $\frac{v_o}{v_e} = \frac{1}{1 + z}$. (Peebles, Principles of Physical Cosmology). The $z = \frac{1}{a_e}-1$ here is different from the $z$ of the galaxy because the time of the particle's emission is different from the emission time of the light that we see now. | {
"domain": "astronomy.stackexchange",
"id": 5484,
"tags": "redshift, neutrinos, cosmic-ray"
} |
In an electrical charged suit and at rest in an $\vec E$ field can you describe your situation as being at rest in a gravitational field? | Question: Suppose you are put in a uniformly electrical charged suit so that you're surrounded by a thin layer of electrons (the suit is negatively charged). You're put on a big plateau at rest in a uniform $\vec E$ field which pulls the electrons (staying put in the suit) down. We adjust the $\vec E$ field (or the number of electrons in the suit) so you are pulled down on the plateau in such a way that you feel the same force as your mass would experience at rest in a gravity field (which is, in fact, an electric force).
My question: Can you describe your situation by stating that you're standing at rest in a (locally uniform) gravity field?
Answer: There would be no gravitational pull in internal organs. Blood veins etc. would have to adjust, just as they must for astronauts in space. Furthermore your arms would not want to hang down along your side and your legs would not want to be closed, because the suit repels itself. Unless the charge may redistribute, in which case a gravitational pull will disappear from such parts. I can't say how much you would understand of your situation, but I guess it will feel weird. – Steeven | {
"domain": "physics.stackexchange",
"id": 39913,
"tags": "gravity"
} |
Read in file, and then compute class mean, median, max, min, class averages & update file | Question: The application is designed to read in class results of individual students from a single text file, and then manipulate the data to give information such as class mean, median, max, min, individual student averages and updating the result file if need be.
The format of the text file for each student is as such:
2 // States how many Test scores are attributed to student ID
S1234567 // Student ID
55 // Test score 1
70 // Test score 2
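For reference, that block layout (count, student ID, then `count` scores) can be walked generically; this is an illustrative Python sketch of the format, not part of the assignment code:

```python
def parse_records(lines):
    """Parse blocks of: score count, student ID, then that many scores."""
    records, i = [], 0
    while i < len(lines):
        n = int(lines[i])                     # how many scores follow
        student_id = lines[i + 1]
        scores = [int(s) for s in lines[i + 2:i + 2 + n]]
        records.append((student_id, scores))
        i += 2 + n                            # jump to the next student block
    return records
```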
Software Code:
/*
Variables Used:
------------------
int limit // States the amount of test data per student
int readBuffer // Skips the first two array values before the mark data
int skipMarkData // Skips the marks data so that i is ready to start at the start of the next student
double sum // Totalling the sum of the student individual average
fileWriter // Used to write changes to file
fileReader // Used to read from file
string currentReadLine // Reads the current line within the file
vector<int> numberStore // Temp storage for string test data that is converted to integer for math processing
vector<string> marksArray // Stores the contents of the file --> marks.txt
char choice // Stores users choice in regards to whether they want to continue data entry or not
string markInput // Stores the user test data inputs as strings
int amountOfTestScores // Records the amount of test data the user has currently inputted
vector<string>addMarks // Stores user data entry input
string intToStrOfTestScores // Converts the amountOfTestScores integer value into a string value for inclusion into addMarks array vector.
string studentID // Stores the studentID value for lookup & data entry
int userInput // Stores the menu selection values 1-8
*/
#include <iostream>
#include <fstream>
#include <string>
#include <cstdlib>
#include <vector>
#include <algorithm>
#include <numeric>
//using namespace std;
bool readFile(const std::string fileName, std::vector<std::string>&marksArray) {
std::string currentReadLine;
std::ifstream fileReader;
//Opens the file
fileReader.open(fileName.c_str());
if (fileReader.fail()) {
std::cout << "\nThe file failed to open!"
<< "\nPlease check that the file exists. Press any key to continue"
<< std::endl;
std::cin.ignore(256, '\n'); //Stops the program from skipping ahead
std::cin.get(); //Pauses program
return false;
}
else {
//Reads the file and stores into array line by line
while (getline(fileReader, currentReadLine)) {
marksArray.push_back(currentReadLine);
}
fileReader.close();
return true;
}
}
void displayMark(const int arrayItemCount, std::vector<std::string>marksArray) {
for (int i = 0; i < arrayItemCount; i++) {
int limit = (atoi(marksArray[i].c_str())); // Gets the string within the array and converts it to an integer
int readBuffer = i + 2; // Skips the first two array values before the mark data
int skipMarkData = limit + 1;
std::cout << "\n================================="
<< "\nStudent ID: " << marksArray[i + 1];
// Loops through & outputs student test score
for (int z = 0; z < limit; z++) {
std::cout << "\nTest " << z << ": " << marksArray[readBuffer + z];
}
i = i + skipMarkData; // Skips the marks data and places (i) at the start of the next student
}
}
double findAverageForStudent(const int arrayItemCount, std::vector<std::string>marksArray, const std::string studentID) {
double sum = 0;
//Goes through the string vector and places the test scores into a int vector
for (int i = 0; i < arrayItemCount; i++) {
int limit = (atoi(marksArray[i].c_str())); // Gets the string value within the array and converts it to an integer
int readBuffer = i + 2; // Skips the first two array values before the mark data
int skipMarkData = limit + 1;
if (studentID == marksArray[i + 1]) {
// Loops through test score & stores them in a int vector array.
for (int z = 0; z < limit; z++) {
sum += atoi(marksArray[readBuffer + z].c_str());
}
//Calculate & return average for the student scores
return (sum / limit);
}
i = i + skipMarkData; // Skips the marks data and places (i) at the start of the next student
}
std::cout << "\nRECORD NOT FOUND! Press any key to continue";
std::cin.ignore(256, '\n'); //Stops the program from skipping ahead
std::cin.get(); //Pauses program
return -1;
}
bool updateFile(const std::string fileName, std::vector<std::string>addMarks) {
std::string currentReadLine;
std::fstream fileWriter;
//Opens the file
fileWriter.open(fileName.c_str());
if (fileWriter.fail()) {
std::cout << "\nThe file failed to open!"
<< "\nPlease check that the file exists.Press any key to continue."
<< std::endl;
std::cin.ignore(256, '\n'); //Stops the program from skipping ahead
std::cin.get(); //Pauses program
return false;
}
else {
fileWriter.seekg(0L, std::ios::end); //Move to the end of the file
//Loops through vector and writes the elements into file
for (int i = 0; i < (addMarks.size()); i++) {
fileWriter << "\n" << addMarks[i];
}
fileWriter.close();
std::cout << "\nSuccessfully written changes! Press any key to continue";
std::cin.ignore(256, '\n'); //Stops the program from skipping ahead
std::cin.get(); //Pauses program
return true;
}
}
double calculateMean(const int arrayItemCount, std::vector<std::string>marksArray) {
std::vector<int> numberStore; // Temp storage of numerical test data
//Goes through the string vector and places the test scores into a int vector
for (int i = 0; i < arrayItemCount; i++) {
int limit = (atoi(marksArray[i].c_str())); // Gets the string within the array and converts it to an integer
int readBuffer = i + 2; // Skips the first two array values before the mark data
int skipMarkData = limit + 1;
// Loops through test score & stores them in a int vector array.
for (int z = 0; z < limit; z++) {
numberStore.push_back(atoi(marksArray[readBuffer + z].c_str()));
}
i = i + skipMarkData; // Skips the marks data and places (i) at the start of the next student
}
// Return the mean by adding up all the values from begin to end then dividing
return std::accumulate(numberStore.begin(), numberStore.end(), 0.0) / numberStore.size();
}
double calculateMedian(const int arrayItemCount, std::vector<std::string>marksArray) {
std::vector<int> numberStore; // Temp storage of numerical test data
//Goes through the string vector and places the test scores into a int vector
for (int i = 0; i < arrayItemCount; i++) {
int limit = (atoi(marksArray[i].c_str())); // Gets the string within the array and converts it to an integer
int readBuffer = i + 2; // Skips the first two array values before the mark data
int skipMarkData = limit + 1;
// Loops through test score & stores them in a int vector array.
for (int z = 0; z < limit; z++) {
numberStore.push_back(atoi(marksArray[readBuffer + z].c_str()));
}
i = i + skipMarkData; // Skips the marks data and places (i) at the start of the next student
}
//Sort the vector
std::sort(numberStore.begin(), numberStore.end());
//Return the median //
if (int div = numberStore.size() / 2 == 1) {
return numberStore[numberStore.size() / 2]; //Returns the median if vector size odd
}
return ((numberStore[(numberStore.size() / 2) - 1] + numberStore[(numberStore.size() / 2) + 1]) / 2); //Returns the median if vector size even
}
double findMinimum(const int arrayItemCount, std::vector<std::string>marksArray) {
std::vector<int> numberStore; // Temp storage of numerical test data
//Goes through the string vector and places the test scores into a int vector
for (int i = 0; i < arrayItemCount; i++) {
int limit = (atoi(marksArray[i].c_str())); // Gets the string within the array and converts it to an integer
int readBuffer = i + 2; // Skips the first two array values before the mark data
int skipMarkData = limit + 1;
// Loops through test score & stores them in a int vector array.
for (int z = 0; z < limit; z++) {
numberStore.push_back(atoi(marksArray[readBuffer + z].c_str()));
}
i = i + skipMarkData; // Skips the marks data and places (i) at the start of the next student
}
//Sort the vector
std::sort(numberStore.begin(), numberStore.end());
// Return the first element in the sorted vector (smallest) //
return numberStore.front();
}
double findMaximum(const int arrayItemCount, std::vector<std::string>marksArray) {
std::vector<int> numberStore; // Temp storage of numerical test data
//Goes through the string vector and places the test scores into a int vector
for (int i = 0; i < arrayItemCount; i++) {
int limit = (atoi(marksArray[i].c_str())); // Gets the string within the array and converts it to an integer
int readBuffer = i + 2; // Skips the first two array values before the mark data
int skipMarkData = limit + 1;
// Loops through test score & stores them in a int vector array.
for (int z = 0; z < limit; z++) {
numberStore.push_back(atoi(marksArray[readBuffer + z].c_str()));
}
i = i + skipMarkData; // Skips the marks data and places (i) at the start of the next student
}
//Sort the vector
std::sort(numberStore.begin(), numberStore.end());
// Return the last element in the sorted vector (largest) //
return numberStore.back();
}
int main() {
//Load the file data into the array initially
std::vector<std::string> marksArray;
std::string fileName = "marks.txt";
bool readFileSuccess = false;
//Checks if read file was successful
while (readFileSuccess == false)
{
readFileSuccess = readFile(fileName, marksArray);
// Only display message if readFile is false, stops this message from displaying when process successful.
if (readFileSuccess == false) {
std::cout << "\nPress any key to retry!";
std::cin.ignore(256, '\n'); //Stops the program from skipping ahead
std::cin.get(); //Pauses program
}
}
//Get the final size of the array
int arrayItemCount = marksArray.size();
while (true) {
int userInput;
std::string studentID;
//Related to case 7:
char choice = 'n'; // Y or N to exit data entry process
std::string markInput;
int amoutOfTestScores = 0;
std::vector<std::string>addMarks;
std::string intToStrOfTestScores; //Converts int to string
std::cout << "\n=================================\nStudent Score System \n";
std::cout << "Menu\n"
<< "(1) Display marks\n"
<< "(2) Calculate mean\n"
<< "(3) Calculate median\n"
<< "(4) Find minimum\n"
<< "(5) Find maximum\n"
<< "(6) Find average of student\n"
<< "(7) Add new student data\n"
<< "(8) Quit program\n"
<< "Please enter a value between 1-8: ";
std::cin >> userInput;
switch (userInput) {
case 1:
std::cout << "=================================\n Display of Marks";
displayMark(arrayItemCount, marksArray);
break;
case 2:
std::cout << "=================================\nClass Mean\n=================================";
std::cout << "\nThe class mean is: " << calculateMean(arrayItemCount, marksArray);
break;
case 3:
std::cout << "=================================\nStudent Median\n=================================";
std::cout << "\nThe class median is :" << calculateMedian(arrayItemCount, marksArray);
break;
case 4:
std::cout << "=================================\nClass Minimum\n=================================";
std::cout << "\nThe class minimum is :" << findMinimum(arrayItemCount, marksArray);
break;
case 5:
std::cout << "=================================\nClass Maximum\n=================================";
std::cout << "\nThe class maximum is :" << findMaximum(arrayItemCount, marksArray);
break;
case 6:
std::cout << "=================================\nStudent Average\n=================================";
std::cout << "\nPlease enter the student ID: ";
std::cin >> studentID;
std::cout << "\nThe average test scores for "
<< studentID
<< " are: "
<< findAverageForStudent(arrayItemCount, marksArray, studentID)
<< " %";
break;
case 7:
std::cout << "\nPlease enter the Student ID:";
std::cin >> studentID;
addMarks.push_back(studentID);
while (choice == 'n') {
std::cout << "\nPlease enter a test score:";
std::cin >> markInput;
addMarks.push_back(markInput);
//Counter of amount of test added
++amoutOfTestScores;
std::cout << "Are you done? (y or n)?";
std::cin >> choice;
}
//Inserts the total test items into the vector
intToStrOfTestScores = std::to_string(amoutOfTestScores); //Converts int to string
addMarks.insert(addMarks.begin(), intToStrOfTestScores);
//Calls the write function
updateFile(fileName, addMarks);
break;
case 8:
std::cout << "\nFor you may leave Narnia, but you shall never forget its existence - John Cena";
exit(0);
break;
default:
std::cout << "\nThe input is not valid. Please enter a number between 1-8";
break;
}
}
return 0;
}
I know it is long and tedious, but I wanted to know what you thought of the code.
I have incorporated advice from a code review I asked in the past ( such as using std::), but I feel as if my code is still sub-par and lacking in terms of efficiency and readability even though I achieved 100% for this particular assignment.
Answer: OK, let's start at the top. You need to choose good abstractions to make your code readable and maintainable. Right now, if you want to change anything about the problem, you have to change multiple places in the code. The details of the student file, for example, are spread all over the place.
C++ is (among other things) an object-oriented language. Let's use those object-oriented features to help us understand the code.
One of the fundamental types should be the Student. Instead of reading in a bunch of lines and making every function understand how those lines are formatted, let's write the code so that we read in a bunch of Students.
struct Student {
    std::string id;
    std::vector<int> test_scores;
};
std::istream& operator>>(std::istream& stream, Student& student) {
    int num_scores = 0;
    stream >> num_scores >> student.id;
    student.test_scores.resize(num_scores);
    for (int& score : student.test_scores) {
        stream >> score;
    }
    return stream;
}
Now you can write readFile like this:
std::vector<Student> readFile(const std::string& filename) {
    std::vector<Student> students;
    std::ifstream file(filename);
    if (!file) throw std::runtime_error("Could not open file.");
    std::copy(std::istream_iterator<Student>(file),
              std::istream_iterator<Student>(),
              std::back_inserter(students));
    return students;
}
There's a number of improvements here. First, as mentioned before, the rest of the functions don't have to know how to parse the file. Second, we're using istream's built-in parsing instead of calling out to atoi. We're not really doing much error checking here (but then again, neither were you). I'd say that the file format isn't very helpful here. If the first line of the file gave the total number of student records, you could throw an exception if file.fail() returned true after reading that number of records.
I'm not going to rewrite all your functions, but let's take a look at calculateMean as an example:
double calculateMean(const std::vector<Student>& students) {
Starting with the function signature: why should we copy the entire input to compute statistics? Let's pass by reference instead. Note how using a vector of Students allows us to omit arrayItemCount.
    double sum = 0.0;
    int num_scores = 0;
    for (const Student& student : students) {
        sum = std::accumulate(student.test_scores.begin(),
                              student.test_scores.end(),
                              sum);
        num_scores += student.test_scores.size();
    }
    return sum / num_scores;
}
Let's save on memory by a different choice of algorithm as well. Instead of copying all the test scores into a single array, add them up where they already are.
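The same no-copy accumulation can be restated compactly; this Python sketch (with hypothetical students as (id, scores) tuples) is only for comparison with the C++ above:

```python
def class_mean(students):
    # students: iterable of (student_id, [scores]);
    # accumulate in place instead of flattening into one big list
    total = 0
    count = 0
    for _student_id, scores in students:
        total += sum(scores)
        count += len(scores)
    return total / count

print(class_mean([("S1234567", [55, 70]), ("S7654321", [90])]))
```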
There's more to work on, but I think this provides some good places to start. A final note -- some of your code makes me think that you're working from an old version of C++ (e.g. you passed filename.c_str() to the ifstream constructor, you're not using range-based for loops). C++11 (released in 2011) improved the language significantly. Most compilers should support it by now. If you're using g++ or clang++ and they don't seem to support C++11 features, you can try passing -std=c++11 to the compiler. | {
"domain": "codereview.stackexchange",
"id": 22994,
"tags": "c++, beginner, homework, database, statistics"
} |
Deserializing a byte array from Java in C++ | Question: I am writing a byte array value into a file, consisting of up to three of these arrays, using Java with big Endian byte order format. Now I need to read that file from a C++ program.
short employeeId = 32767;
long lastModifiedDate = 1379811105109L;
byte[] attributeValue = os.toByteArray();
I am writing employeeId, lastModifiedDate and attributeValue together into a single byte array. I am writing that resulting byte array into a file and then I will have my C++ program retrieve that byte array data from the file and then deserialize it to extract employeeId, lastModifiedDate and attributeValue from it.
This writes the byte array value into a file with big Endian format:
public class ByteBufferTest {
public static void main(String[] args) {
String text = "Byte Array Test For Big Endian";
byte[] attributeValue = text.getBytes();
long lastModifiedDate = 1289811105109L;
short employeeId = 32767;
int size = 2 + 8 + 4 + attributeValue.length; // short is 2 bytes, long 8 and int 4
ByteBuffer bbuf = ByteBuffer.allocate(size);
bbuf.order(ByteOrder.BIG_ENDIAN);
bbuf.putShort(employeeId);
bbuf.putLong(lastModifiedDate);
bbuf.putInt(attributeValue.length);
bbuf.put(attributeValue);
bbuf.rewind();
// best approach is copy the internal buffer
byte[] bytesToStore = new byte[size];
bbuf.get(bytesToStore);
writeFile(bytesToStore);
}
/**
* Write the file in Java
* @param byteArray
*/
public static void writeFile(byte[] byteArray) {
try{
File file = new File("bytebuffertest");
FileOutputStream output = new FileOutputStream(file);
IOUtils.write(byteArray, output);
} catch (Exception ex) {
ex.printStackTrace();
}
}
}
Now I need to retrieve the byte array from that same file using this C++ program and deserialize it to extract employeeId, lastModifiedDate and attributeValue from it. I am not sure what the best way is on the C++ side.
int main() {
string line;
std::ifstream myfile("bytebuffertest", std::ios::binary);
if (myfile.is_open()) {
uint16_t employeeId;
uint64_t lastModifiedDate;
uint32_t attributeLength;
char buffer[8]; // sized for the biggest read we want to do
// read two bytes (will be in the wrong order)
myfile.read(buffer, 2);
// swap the bytes
std::swap(buffer[0], buffer[1]);
// only now convert bytes to an integer
employeeId = *reinterpret_cast<uint16_t*>(buffer);
cout<< employeeId <<endl;
// read eight bytes (will be in the wrong order)
myfile.read(buffer, 8);
// swap the bytes
std::swap(buffer[0], buffer[7]);
std::swap(buffer[1], buffer[6]);
std::swap(buffer[2], buffer[5]);
std::swap(buffer[3], buffer[4]);
// only now convert bytes to an integer
lastModifiedDate = *reinterpret_cast<uint64_t*>(buffer);
cout<< lastModifiedDate <<endl;
// read 4 bytes (will be in the wrong order)
myfile.read(buffer, 4);
// swap the bytes
std::swap(buffer[0], buffer[3]);
std::swap(buffer[1], buffer[2]);
// only now convert bytes to an integer
attributeLength = *reinterpret_cast<uint32_t*>(buffer);
cout<< attributeLength <<endl;
myfile.read(buffer, attributeLength);
// now I am not sure how should I get the actual attribute value here?
//close the stream:
myfile.close();
}
else
cout << "Unable to open file";
return 0;
}
Can anybody take a look at the C++ code and see what I can do to improve it, as I don't think it looks very efficient? Any better way to deserialize the byte array and extract the relevant information on the C++ side?
Answer: Obviously the code isn't portable to big-endian machines. I'll use C syntax, since I'm more familiar with that than C++.
If you have endian.h, you can use the functions in there; if not, you should have arpa/inet.h, which defines functions for converting network byte order (big-endian) to host byte order, but lacks one for 64-bit values. Look for either be16toh (from endian.h) or ntohs (from arpa/inet.h) and friends.
Why not read directly into the values:
fread((void *)&employeeId, sizeof(employeeId), 1, file);
employeeId = be16toh(employeeId);
Since you can manipulate pointers in C, you just need to provide a universal pointer (void *) to the read function where it should place the results. The & operator takes the address of a value. Once that is done, you can manipulate the value directly, as above.
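As a cross-check of the layout (2-byte id, 8-byte date, 4-byte payload length, then the raw payload, all big-endian), here is the same framing expressed with Python's struct module. This is only a sketch; the field values are made up:

```python
import struct

HEADER = ">HQI"  # big-endian: 2-byte id, 8-byte date, 4-byte payload length

# Hypothetical field values, just to exercise the layout.
blob = struct.pack(HEADER, 1234, 1380743479723, 5) + b"hello"

# A reader unpacks the fixed-size header first, then slices the payload.
eid, date, length = struct.unpack_from(HEADER, blob, 0)
payload = blob[struct.calcsize(HEADER):struct.calcsize(HEADER) + length]
print(eid, date, payload)  # -> 1234 1380743479723 b'hello'
```

Note that Java's numeric types are all signed, so ">hqi" would mirror them exactly; for non-negative values the two formats produce identical bytes.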
Using this Java test code:
import java.io.*;
public class write {
public static void main(String... args) throws Exception {
final FileOutputStream file = new FileOutputStream("java.dat");
final DataOutputStream data = new DataOutputStream(file);
final long time = System.currentTimeMillis();
final short value = 32219;
// fill a table with a..z0..9
final byte[] table = new byte[36];
int index = 0;
for (int i = 0; i < 26; i++) {
table[index++] = (byte)(i + 'a');
}
for (int i = 0 ; i < 10; i++) {
table[index++] = (byte)(i + '0');
}
data.writeLong(time);
data.writeShort(value);
data.writeInt(table.length);
data.write(table);
data.close();
System.out.format("wrote time: %d%n value: %d%n length: %d%n table:%n", time, value, table.length);
for (int i = 0; i < table.length; i++) {
System.out.format("%c ", (char)table[i]);
}
System.out.println();
}
}
The output from this code is:
wrote time: 1380743479723
value: 32219
length: 36
table:
a b c d e f g h i j k l m n o p q r s t u v w x y z 0 1 2 3 4 5 6 7 8 9
You can read the values in with this C code:
#include <stdio.h>
#include <stdlib.h>
#include <endian.h>
#include <sys/types.h>
int main(int argc, char **argv) {
int64_t time;
int16_t value;
int32_t length;
u_int8_t *array;
FILE *in = fopen("java.dat", "rb");
fread(&time, sizeof(time), 1, in);
time = (int64_t)be64toh( (u_int64_t)time);
fread(&value, sizeof(value), 1, in);
value = (int16_t)be16toh( (u_int16_t)value );
fread(&length, sizeof(length), 1, in);
length = (int32_t)be32toh( (u_int32_t)length );
array = (u_int8_t *)malloc(length);
fread(array, sizeof(array[0]), length, in);
fclose(in);
printf("time: %ld\nvalue: %d\narray length: %d\narray:\n", time, value, length);
for (int i = 0; i < length; i++) {
printf("%c ", array[i]);
}
printf("\n");
free(array);
return 0;
}
I compiled this on Ubuntu x64 with clang. Its output was:
./a.out
time: 1380743479723
value: 32219
array length: 36
array:
a b c d e f g h i j k l m n o p q r s t u v w x y z 0 1 2 3 4 5 6 7 8 9
Keep in mind that the only unsigned type in Java is char (16 bits); byte, short, int and long are all signed. | {
"domain": "codereview.stackexchange",
"id": 17276,
"tags": "java, c++, serialization"
} |
Generating a collection of controls | Question: This question I asked previously mentions a function named BuildControlCollection, which I didn't go into the details of since it wasn't relevant. However, because the implementation contains some funky code I'm not 100% on (it completely works, I'm just unsure if it's the best way to do it), I decided to put this up for review too.
Public Sub BuildControlCollection(ByRef ipForm As Form,
ByRef mpCollection As Collection,
ByVal ipControlType As ControlTypes)
The function takes the form that we're building a control collection from, an unset collection object (which will be created and filled), and an enum value to indicate the type(s) of controls to fill the collection with.
Enum ControlTypes
eTextBox = &H1
eComboBox = &H2
eLabel = &H4
eButton = &H8
eFrame = &H10
eRadioButton = &H20
eListBox = &H40
eLine = &H80
eRectangle = &H100
eCheckbox = &H200
eChart = &H400
eAll = &H800
End Enum
Public Sub BuildControlCollection(ByRef ipForm As Form, _
ByRef mpCollection As Collection, _
ByVal ipControlType As ControlTypes)
If Not mpCollection Is Nothing Then
Err.Raise 5000, "Collection has previously been set. This operation would delete the collection."
End If
Set mpCollection = New Collection
Dim lControl As Control
For Each lControl In ipForm.Controls
If ipControlType And eAll Then
mpCollection.Add lControl
ElseIf (ipControlType And ControlTypes.eButton) And TypeName(lControl) = "CommandButton" Then
mpCollection.Add lControl
ElseIf (ipControlType And ControlTypes.eChart) And TypeName(lControl) = "ObjectFrame" Then
mpCollection.Add lControl
ElseIf (ipControlType And ControlTypes.eCheckbox) And TypeName(lControl) = "CheckBox" Then
mpCollection.Add lControl
ElseIf (ipControlType And ControlTypes.eComboBox) And TypeName(lControl) = "ComboBox" Then
mpCollection.Add lControl
ElseIf (ipControlType And ControlTypes.eFrame) And TypeName(lControl) = "Frame" Then
mpCollection.Add lControl
ElseIf (ipControlType And ControlTypes.eLabel) And TypeName(lControl) = "Label" Then
mpCollection.Add lControl
ElseIf (ipControlType And ControlTypes.eLine) And TypeName(lControl) = "Line" Then
mpCollection.Add lControl
ElseIf (ipControlType And ControlTypes.eListBox) And TypeName(lControl) = "ListBox" Then
mpCollection.Add lControl
ElseIf (ipControlType And ControlTypes.eRadioButton) And TypeName(lControl) = "RadioButton" Then
mpCollection.Add lControl
ElseIf (ipControlType And ControlTypes.eRectangle) And TypeName(lControl) = "Rectangle" Then
mpCollection.Add lControl
ElseIf (ipControlType And ControlTypes.eTextBox) And TypeName(lControl) = "TextBox" Then
mpCollection.Add lControl
End If
Next lControl
End Sub
The thinking behind the last argument of BuildControlCollection is to allow multiple options to be passed (e.g. eTextBox Or eButton). I've seen this used in built-in functions such as MsgBox, whose second argument (somewhat inaccurately named Buttons) lets you specify e.g. vbOKOnly Or vbExclamation to get a message box with both an OK button and a warning triangle. I don't know what this technique is called, so I haven't been able to Google a real implementation and have had to make my best guess at it.
I understand that it works by comparing bits: vbOKOnly may be 0000 0001, whilst vbExclamation may be 0001 0000, so passing vbOKOnly Or vbExclamation (0001 0001) matches on both bits.
Whilst my implementation above definitely works, that enormous If/ElseIf smells funny. If anyone can tell me what the bit flagging thing used in MsgBox is called, that would be really useful too.
Answer:
I don't know what this is called, so I haven't been able to Google a real implementation, so I've had to make my best guess at it. [...] If anyone can tell me what the bit flagging thing used in MsgBox is called, that would be really useful too.
They're called Flag enums in .NET (see this SO question), and apparently the naming is also appropriate for VB6 enums.
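For comparison, the same idea is a first-class feature in some languages; Python calls it enum.IntFlag. A minimal sketch of combining and testing flags the way MsgBox's Buttons argument does (the names mirror your enum and are purely illustrative):

```python
from enum import IntFlag, auto

class ControlTypes(IntFlag):
    TEXT_BOX = auto()   # &H1
    COMBO_BOX = auto()  # &H2
    LABEL = auto()      # &H4
    BUTTON = auto()     # &H8

# Combine options the way MsgBox combines vbOKOnly Or vbExclamation:
wanted = ControlTypes.TEXT_BOX | ControlTypes.BUTTON

# A bitwise And answers "is this type among the wanted ones?"
print(bool(wanted & ControlTypes.BUTTON))  # True
print(bool(wanted & ControlTypes.LABEL))   # False
print(int(wanted))                         # 9, i.e. 0000 1001
```

The VB6 `And` in your code plays exactly the role of `&` here.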
That If block definitely smells, because all branches result in lControl being added to mpCollection. Hence, it's not really an If...Else If logic you need here, rather something like:
If CanAddThisControl(ipControlType, lControl) Then mpCollection.Add lControl
This effectively eliminates/replaces the entire If block, but leaves you with a CanAddThisControl method to implement. Let's see...
Private Function CanAddThisControl(ipControlType As ControlTypes, lControl As Control) As Boolean
'return true if the enum value matches the control's type
End Function
This is where VB6/VBA's lack of structures really hurts. What you need is really some kind of KeyValuePair that associates an enum value with a control type. What if we created a class to do just that?
Private Type tKeyValuePair
key As Variant
value As Variant
End Type
Private this As tKeyValuePair
Option Explicit
Public Property Get key() As Variant
If IsObject(this.key) Then
Set key = this.key
Else
key = this.key
End If
End Property
Public Property Let key(k As Variant)
If IsEmpty(k) Then Err.Raise 5
this.key = k
End Property
Public Property Set key(k As Variant)
If IsEmpty(k) Then Err.Raise 5
Set this.key = k
End Property
Public Property Get value() As Variant
If IsObject(this.value) Then
Set value = this.value
Else
value = this.value
End If
End Property
Public Property Let value(v As Variant)
this.value = v
End Property
Public Property Set value(v As Variant)
Set this.value = v
End Property
Public Function ToString() As String
ToString = TypeName(Me) & "<" & TypeName(this.key) & "," & TypeName(this.value) & ">"
End Function
(damn VB6 case insensitivity!)
So now we have a way of associating enum values with a string:
Private Function CreateKeyValuePair(key As ControlTypes, value As String) As KeyValuePair
Dim result As New KeyValuePair
result.key = key
result.value = value
Set CreateKeyValuePair = result
End Function
Private Function GetControlTypesAsKeyValuePairs() As Collection
Dim result As New Collection
result.Add CreateKeyValuePair(ControlTypes.eButton, "Button")
result.Add CreateKeyValuePair(ControlTypes.eChart, "ObjectFrame")
result.Add CreateKeyValuePair(ControlTypes.eCheckBox, "CheckBox")
result.Add CreateKeyValuePair(ControlTypes.eComboBox, "ComboBox")
result.Add CreateKeyValuePair(ControlTypes.eFrame, "Frame")
result.Add CreateKeyValuePair(ControlTypes.eLabel, "Label")
result.Add CreateKeyValuePair(ControlTypes.eLine, "Line")
result.Add CreateKeyValuePair(ControlTypes.eListBox, "ListBox")
result.Add CreateKeyValuePair(ControlTypes.eRadioButton, "RadioButton")
result.Add CreateKeyValuePair(ControlTypes.eRectangle, "Rectangle")
result.Add CreateKeyValuePair(ControlTypes.eTextBox, "TextBox")
Set GetControlTypesAsKeyValuePairs = result
End Function
The above code could be simplified to a one-liner if you implemented a List to wrap the poorly tooled Collection class; see this CR post:
Private Function GetControlTypesAsKeyValuePairs() As List
Dim result As New List
result.Add CreateKeyValuePair(ControlTypes.eButton, "Button"), _
CreateKeyValuePair(ControlTypes.eChart, "ObjectFrame"), _
CreateKeyValuePair(ControlTypes.eCheckBox, "CheckBox"), _
CreateKeyValuePair(ControlTypes.eComboBox, "ComboBox"), _
CreateKeyValuePair(ControlTypes.eFrame, "Frame"), _
CreateKeyValuePair(ControlTypes.eLabel, "Label"), _
CreateKeyValuePair(ControlTypes.eLine, "Line"), _
CreateKeyValuePair(ControlTypes.eListBox, "ListBox"), _
CreateKeyValuePair(ControlTypes.eRadioButton, "RadioButton"), _
CreateKeyValuePair(ControlTypes.eRectangle, "Rectangle"), _
CreateKeyValuePair(ControlTypes.eTextBox, "TextBox")
Set GetControlTypesAsKeyValuePairs = result
End Function
Now that we have a way of associating each enum value with a specific string, we're equipped to implement CanAddThisControl - I'll assume you went with a Collection, but the code would be pretty much identical if you used the List class I've mentioned above (just swap Collection for List):
Private Function CanAddThisControl(ipControlType As ControlTypes, lControl As Control) As Boolean
Dim enums As Collection
Set enums = GetControlTypesAsKeyValuePairs
Dim kvp As KeyValuePair
For Each kvp In enums
If (ipControlType And kvp.Key) And TypeName(lControl) = kvp.Value Then
CanAddThisControl = True
Exit For
End If
Next
End Function
This should enable you to make the loop in BuildControlCollection as simple as this:
For Each lControl In ipForm.Controls
If (ipControlType And eAll) Or CanAddThisControl(ipControlType, lControl) Then
mpCollection.Add lControl
End If
Next
Now this is a little inefficient, because CanAddThisControl is rebuilding the KeyValuePair collection every time it's called. But it's a fair start I think. | {
"domain": "codereview.stackexchange",
"id": 7826,
"tags": "vba, ms-access"
} |
Is the Earth Really Spinning? (honest question) | Question: Most people consider it common knowledge that the Sun's "movement" in the sky is only perceived due to the Earth's "spin and movement".
Based on this, you'd think that the stars in the night sky (when viewed with time lapse photography) would take a path in the sky similar to the sun. After all, both the sun and stars are practically stationary over the course of one day, given their accepted distances from the earth.
Yet, time lapse photography shows this is not the case.
Compare these photos:
1) Time Lapse of Sun
2) Time Lapse of Stars
3) Time Lapse of Moon
If the Earth's spin is the primary reason for "movement" of things (farther than clouds in the sky), then why do the sun and moon move through the sky in a similar fashion that is totally different from the sky-path of the stars? The Earth is said to spin around 360 degrees every 24 hours. It seems to me that everything in the sky should move in one direction where I live (Texas).
To me (in Texas), it seems the stars should sweep from horizon to horizon over the course of the Earth's night-spin (like the sun and moon do). Yet, the time lapse photos show circular paths in the sky for the stars. That would make sense at the poles, but not in Texas!?
-Lonnie Lee Best
Answer: You are already starting to get it.
That would make sense at the poles
What about one meter from the poles?
Or a kilometre?
As long as you can see the celestial pole in the sky, you can see the stars revolve around it at night.
Let us see if you are able to see the celestial pole.
Texas was about 30 degrees north last time I checked:
That explains the circular movement of the stars, the Sun and the Moon.
This is true for all locations on the Earth, except for the equator:
Is the Earth spinning? That depends; you can always choose a frame of reference that suits you. However, only one of them is non-rotating: the inertial frame. In all the others we have fictitious forces acting, like centrifugal or Coriolis forces.
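The pendulum test mentioned next can be made quantitative: at latitude $\varphi$, a Foucault pendulum's swing plane precesses through a full circle in one sidereal day divided by $\sin\varphi$. A hedged back-of-envelope sketch:

```python
import math

T_SIDEREAL_H = 23.934   # hours for one rotation of the Earth

def precession_period_hours(lat_deg):
    """Foucault pendulum: time for the swing plane to turn a full circle."""
    return T_SIDEREAL_H / math.sin(math.radians(lat_deg))

print(precession_period_hours(90.0))  # North Pole: ~23.9 h (one sidereal day)
print(precession_period_hours(30.0))  # Texas: ~47.9 h
# At the equator (sin 0 = 0) the plane does not precess at all.
```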
We can test if the Earth rotates by watching a pendulum throughout a day. The pendulum would then seem to slowly rotate during this period of time, meaning some fictitious "force" is acting on it. That means that we are located in a rotating frame of reference, and thus the Earth rotates. | {
"domain": "astronomy.stackexchange",
"id": 3147,
"tags": "the-moon, the-sun, earth"
} |
Join two dataframes - Spark MLlib | Question: I've two dataframes. The first has details for all the students, and the second has only the students that have a positive grade.
How can I return only the details of the students that have a positive grade (i.e. perform the join), without using SQLContext?
I've this code:
val all_students = sc.textFile("/user/cloudera/Data");
case class Students(Customer_ID:String,Name:String,Age:String);
def MyClass(line: String) = {
val split = line.split(',');
Students(split(0),split(1),split(2))
}
val df = all_students.map(MyClass).toDF("Customer_ID","Name", "Age").select("Customer_ID","Name", "Age");
val students_positive_grande = sc.textFile("/user/cloudera/Data");
How can I make the join between these datasets? I want to join the "Customer_ID" with the first column of the second dataset...
Answer: Use this syntax:
val joinedDF = students_positive_grande.as('a).join(
df.as('b),
$"a.Customer_ID" === $"b.Customer_ID")
joinedDF.select($"a.Customer_ID", $"b.Customer_ID") | {
"domain": "datascience.stackexchange",
"id": 1121,
"tags": "apache-spark, scala"
} |
How to describe arbitrary accelerations in special relativity | Question: Describing acceleration in special relativity is in principle straightforward, and for simple cases the resulting transformations are simple. Examples include circular motion and constant acceleration in the accelerating frame (the relativistic rocket). Anything more complicated is going to have to be done numerically, which is fine, but it's not immediately obvious to me how you'd go about this.
Let's call our frame $S$, and our metric is just the Minkowski metric. If we can write down an expression for the trajectory $x(t)$ in our coordinates $x$ and $t$ then everything is straightforward. But this isn't likely to be the case. It's more likely that the aceleration will be given in the accelerating object's frame $S'$ i.e. all we know is $a'(t')$.
So given that all we know is the form of $a'(t')$, how do we set about calculating the rocket's trajectory in our coordinates $S$? General principles will be fine as I'm sure I can work out the fine detail. It's just that I'm not sure where to start.
Assuming I'm not skirting too close to the homework event horizon, this might make a good blog-type question. I've been thinking about writing an answer-your-own-question post about acceleration in SR for some time.
Answer: This is an answer for motion in 1+1 dimensions. Let a dot stand for differentiation with respect to the rocket's proper time $t'$. The rocket's four-velocity is normalized, so
$$\dot{t}^2-\dot{x}^2=1\quad.\qquad (1)$$
Since the norm of the acceleration four-vector is invariant, we have
$$ \ddot{t}^2-\ddot{x}^2=-a'^2 \quad . \qquad (2)$$
Implicit differentiation of (1) gives
$$\ddot{t}=v\ddot{x} \quad ,$$
where $v=dx/dt$. If we substitute this into (2), we find
$$\ddot{x}=\gamma a'\quad.$$
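For a concrete check, here is one way to carry out that numerical integration (a hedged sketch in units c = 1, using plain forward Euler; a constant proper acceleration is assumed so the result can be compared against the known hyperbolic-motion solution t = sinh(a't')/a', x = (cosh(a't') - 1)/a'):

```python
import math

def trajectory(a_of_tau, tau_max, steps=200_000):
    """Forward-Euler integration of the equations above, units c = 1.
    State: coordinate time t, position x, and xdot = dx/dtau."""
    d = tau_max / steps
    t = x = xdot = 0.0
    for i in range(steps):
        tdot = math.sqrt(1.0 + xdot * xdot)   # gamma, from normalization (1)
        t += tdot * d
        x += xdot * d
        xdot += tdot * a_of_tau(i * d) * d    # xddot = gamma * a'
    return t, x

# Constant proper acceleration a' = 1: exact solution is the relativistic
# rocket, t = sinh(tau), x = cosh(tau) - 1.
t, x = trajectory(lambda tau: 1.0, 1.0)
print(x, math.cosh(1.0) - 1.0)  # both ~0.543
print(t, math.sinh(1.0))        # both ~1.175
```

Any a'(t') can be plugged in as the first argument; only the comparison values assume it is constant.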
Given $a'$ as a function of $t'$, this can be integrated numerically to find $x(t)$. | {
"domain": "physics.stackexchange",
"id": 17635,
"tags": "special-relativity, acceleration"
} |
Why does the background noise in this image of 2020 QG look like corduroy? | Question: The news item Small asteroid becomes closest ever seen passing Earth: NASA contains a "handout image" with the caption:
This NASA/JPL/ZTF/Caltech Optical Observatories handout image obtained on August 18, 2020 shows asteroid 2020 QG (the circled streak in the center), which came closer to Earth than any other nonimpacting asteroid on record NASA/JPL-CALTECH/AFP
Question: I saw what looks like a corduroy-like pattern in the background that is likely from some processing artifact. Does anybody recognize this kind of artifact? Does it look familiar?
above: NASA image from linked article. below: Corduroy source.
Fourier analysis (log power) of the image of the asteroid, the two spots demonstrating the strong linear periodic modulation of the background.
Concentric rings were seen when analyzing only the red part of the initial annotated image. Now that I average all three colors they are almost gone. The inclined line in the FT is probably that of the bright, straight linear trail of the asteroid.
import numpy as np
import matplotlib.pyplot as plt
fname = '424852de3808b4d6df1101d3b4f0790afd3b9d13.png'
img = plt.imread(fname)[..., :3].sum(axis=2) # sum 3 colors makes red circle gray to minimize impact on FFT
print(img.shape)
f = np.fft.fftshift(np.fft.fft2(img))
p = np.abs(f)**2
plt.figure()
c0, c1 = [int(s*2**-1) for s in p.shape]
hw0, hw1 = [int(s*2**-3) for s in p.shape]
plt.imshow(np.log10(p[c0-hw0:c0+hw0, c1-hw1:c1+hw1]), vmin=5, vmax=8)
plt.gca().set_aspect(16/9)
plt.colorbar()
plt.title('log10(ft power)')
plt.show()
Answer: I agree that it’s noise in a fixed pattern, but I think it’s unlikely to be related to ADC sensitivity. Typically if you have multiple ADCs, they read out blocks of the sensor (e.g. one on each corner to read out a quadrant). And sensitivity differences across those amplifiers usually is removed pretty well by flat-fielding.
In my experience, you get things like this from time-variable electronic noise, e.g. 60 Hz noise from surrounding equipment. (Probably the electronics are well-shielded from 60 Hz because it’s so ubiquitous, but there could be noise at other frequencies.)
Because CCD pixels are read out sequentially (e.g. the classic bucket brigade analogy), then any time-variable noise source that slightly changes a pixel value will manifest itself as a spatially varying pattern, where the exact pattern depends on the relationship of the readout frequency to the noise frequency. You see the diagonal stripes because the noise signal wasn’t quite at the same phase when pixel 1 of row 2 was read out as it was when pixel 1 of row 1 was read out, and so on.
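The mechanism is easy to reproduce with a toy model: sample a sinusoidal noise source once per pixel read, in raster order, and the time-varying signal freezes into tilted stripes. A hedged sketch (none of the numbers correspond to the real camera or its electronics):

```python
import numpy as np

H = W = 64
noise_freq = 6.25 / W   # interference cycles per pixel read (made-up number)

# Pixels are read out sequentially, so read index n = row*W + col,
# and a time-varying noise source gets frozen into the frame:
n = np.arange(H * W).reshape(H, W)
frame = np.sin(2 * np.pi * noise_freq * n)

# noise_freq * W = 6.25 cycles elapse per row, so each row starts at a
# slightly different phase: that residual 0.25-cycle shift tilts the stripes.
power = np.abs(np.fft.fft2(frame)) ** 2
power[0, 0] = 0.0  # ignore the DC term
ky, kx = np.unravel_index(np.argmax(power), power.shape)
print(ky, kx)  # peak off both axes: diagonal "corduroy" stripes
```

The off-axis FFT peak is the same signature as the two spots in your log-power plot.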
(Extra points for including an actual image of corduroy in your question, BTW. ) | {
"domain": "astronomy.stackexchange",
"id": 4783,
"tags": "observational-astronomy, asteroids, photography, image-processing"
} |
Grignard reagent will act as a base or a nucleophile | Question: The R⁻ from a Grignard reagent (RMgBr) or Gilman's reagent (R2CuLi), despite being a strong base, attacks the carbonyl group and does not act as a base.
Why can't it abstract the alpha hydrogen forming an enolate ion?
Answer: As most chemists have understood and experienced, Grignard reagents undergo many reactions with suitable substrates. Yet the most familiar, and probably the most used, is the addition to a carbonyl group to give a $2^\circ$- or $3^\circ$-alcohol. The Grignard reagent's affinity for a carbonyl group in the presence of an acidic $\alpha$-$\ce{H}$ has been documented here. However, if at least one $\ce{OH}$ group is present in the molecule, the acid-base reaction takes over. For example, the usual Grignard reaction will not happen if your glassware is wet or not properly dried.
Although rarely published, it is a well-known fact that side reactions with an available acidic $\alpha$-$\ce{H}$, as well as some reduction reactions, have taken place during original Grignard reactions. These side reactions frequently complicate the intended reaction, replacing some or all of the intended addition and reducing the expected yield (Ref.1). Most importantly, the conversion of an alkyl halide to a deuterium-labelled hydrocarbon has been achieved via Grignard reagent preparation (Figure A):
Keep in mind that this labelling is selective, depending on the alkyl halide. Recently, a method for selective labelling using organozinc reagents was developed (Ref.2). Although these are not conventional Grignard reagents, organozincs are close relatives (Figure B). Another example of Grignard reagents acting as a base is a method to prepare terminal alkynyl Grignard reagents (Figure C).
Sometimes during a Grignard reaction, you might find that no addition product has evidently formed: you have lost your alkyl halide reagent while the substrate ketone is recovered unchanged. That is because, if your ketone is bulky like diisopropyl ketone (2,4-dimethylpentan-3-one), the following reaction may happen (look under the example for a Grignard acting as a base):
If your Grignard reagent has an active $\alpha$-$\ce{H}$ to give, it can also act as a reducing agent, as in the above case of a Grignard acting as a reducing agent. This is the case for the reaction between benzophenone and a specific Grignard reagent (isobutylmagnesium bromide or 3-methylbutylmagnesium bromide) illustrated in Ref.1. The main products from that reaction were diphenylcarbinol (benzhydrol) and isobutene.
References:
G. E. Dunn, John Warkentin, "An Isotopic Study of the Reducing Action of the Grignard Reagent," Canadian Journal of Chemistry 1956, 34(1) (https://doi.org/10.1139/v56-007).
Aiyou Xia, Xin Xie, Xiaoping Hu, Wei Xu, Yuanhong Liu, "Dehalogenative Deuteration of Unactivated Alkyl Halides Using $\ce{D2O}$ as the Deuterium Source," J. Org. Chem. 2019, 84(21), 13841–13857 (https://doi.org/10.1021/acs.joc.9b02026). | {
"domain": "chemistry.stackexchange",
"id": 13912,
"tags": "organic-chemistry, carbonyl-compounds, nucleophilic-substitution, grignard-reagent, enolate-chemistry"
} |
When can we consider a body to be point sized? | Question: In motion problems we idealize big objects as point-sized, but we don't do it all the time. I am having some confusion about this. When can we consider an object to be point-sized?
Answer: In almost every case, we choose to model an object as a point because we don't care about the object's internal structure for the calculation we're trying to make. This can have several causes; for example:
The effect of the object's internal structure may simply be too small to be detectable in the results;
The desired precision of the calculation may be loose enough that we can choose to ignore the effect of the object's internal structure without loss of accuracy;
The cost in additional complexity of treating the object's internal structure is not worth the increase in accuracy of the result;
and so on.
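A concrete example of the first point: for gravity outside a uniform spherical shell, replacing the body by a point mass at its centre is not even an approximation, it is exact (Newton's shell theorem). A hedged numerical check with G = M = 1 and a unit test mass:

```python
import math

def shell_pull(R, d, n=100_000):
    """Axial pull of a uniform shell (radius R, total mass 1, G = 1) on a
    unit test mass at distance d > R, summed ring by ring (midpoint rule)."""
    h = math.pi / n
    total = 0.0
    for i in range(n):
        theta = (i + 0.5) * h
        dm = 0.5 * math.sin(theta) * h                  # mass of the ring
        s2 = d * d + R * R - 2.0 * d * R * math.cos(theta)
        total += dm * (d - R * math.cos(theta)) / s2 ** 1.5
    return total

print(shell_pull(1.0, 3.0), 1.0 / 3.0**2)  # both ~0.111111
```

For non-spherical bodies the two answers differ, and the size of that difference is exactly what decides whether the point idealization is acceptable.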
There is one class of exceptions: many of the fundamental particles (for example, the electron) have no currently-measurable internal structure. They are, to our current instruments, indistinguishable from being point particles. So in this particular case, we model them as point particles because, as far as we know, they really are point particles. | {
"domain": "physics.stackexchange",
"id": 61003,
"tags": "newtonian-mechanics, classical-mechanics, kinematics"
} |
Can you build a compass that is attracted to the South Pole? | Question: Was just curious, since all compasses point to the North Pole.
South is just the opposite polarity of of North, so it seems very likely, but I've never seen an example of this. Is there any videos demonstrating this?
Could a South attractor be added to a standard compass to help confirm the integrity of the North's signal? (For situations where the compass is being affected by another magnetic source).
Answer: If possible do as @AccidentalFourierTransform explained in a comment, namely:
Get a standard compass. Clean off the paint on one end of the needle, and paint the other one. Congrats, now you have a compass that is attracted to the South Pole!
Be aware that some compasses are embedded in an oil capsule, so disassembling will usually destroy them.
Also remember that the needle always uses both poles, as the magnetic field influences the needle as a whole; otherwise it wouldn't work. So the easiest approach is to use your imagination.
Perhaps you meant the declination, which must be adjusted on the compass and which can affect the route you take when hiking close to the north or south? Navigation 101
"domain": "physics.stackexchange",
"id": 42527,
"tags": "electromagnetism, magnetic-fields, geomagnetism"
} |
RGBDSLAM_freiburg : 'node not found' on groovy | Question:
I installed and built rgbdslam_freiburg as given in http://www.ros.org/wiki/rgbdslam. When I tried to execute the following command
roslaunch rgbdslam kinect+rgbdslam.launch
I got the following error:
ERROR: cannot launch node of type [rgbdslam/rgbdslam]: can't locate node [rgbdslam] in package [rgbdslam]
'roscd' is able to locate the package rgbdslam. It is found inside my catkin workspace, where I downloaded the code from this link http://alufr-ros-pkg.googlecode.com/svn/trunk/rgbdslam_freiburg given in the tutorial
However
rosnode info rgbdslam
returns
cannot contact [/rgbdslam]: unknown node
I'm running ros groovy on Ubuntu 12.10
Originally posted by Maheshwar Venkat on ROS Answers with karma: 5 on 2013-05-08
Post score: 0
Answer:
Have you compiled it? It's not a binary download. See the wiki page for how to do it.
Originally posted by Felix Endres with karma: 6468 on 2013-05-14
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by leo pauly on 2014-01-20:
Compiled it, but it still shows the same error.
Comment by Felix Endres on 2014-01-23:
And compilation was successful, I assume. You have to find the binary then. Try searching in your workspace with "find -executable -name rgbdslam -type f".
Comment by Michiel on 2014-01-29:
I have the same problem. Compilation was succesful. Any update on this?
Also, I noticed that I can't use rosrun or roscd, although I included "source /opt/ros/hydro/setup.bash" in my ~/.bashrc file. The other ros-commands are working.
Comment by Felix Endres on 2014-02-14:
I have migrated rgbdslam to hydro. Let me know whether you would like to test a probably unstable release.
Comment by leo pauly on 2014-02-22:
im running on ros hydro.but still the error continues
Comment by Felix Endres on 2014-02-25:
Leo: Please be specific. You are running what on ros hydro? The rgbdslam package from googlecode?
Comment by Felix Endres on 2014-02-25:
Michiel:Have you created a workspace and sourced the respective setup.bash, so that rgbdslam is in your package path? Check with "echo $ROS_PACKAGE_PATH"
Comment by pinocchio on 2014-03-26:
@leo pauly @Michiel @Felix Endres Do you use catkin workspace or rosbuild one? The instruction uses a rosbuild ws. I am not sure which is correct to install this package on hydro.
Comment by Felix Endres on 2014-03-27:
For fuerte I am using rosbuild. For the (yet unreleased) hydro version I use catkin | {
"domain": "robotics.stackexchange",
"id": 14112,
"tags": "ros, slam, navigation, rgbdslam-freiburg, ros-groovy"
} |
Derivation of particle distribution in a gravitational field | Question: I'm trying to figure out where my logic is failing in the derivation of the concentration of particles with respect to height at constant temperature and gravity ($n(h)$).
So we have the following equations:
$$p=\rho gh=mn(h)gh \\
p=n(h)kT$$
If we differentiate both of those with respect to h we get:
$$\frac{dp}{dh}=mg(h\frac{dn}{dh}+n)\\
\frac{dp}{dh}=kT\frac{dn}{dh}$$ So we get to this differential equation:
$$\frac{dn}{dh}kT=mg(h\frac{dn}{dh}+n)$$ which when solved gives us the solution:
$$n(h)=\frac{c}{kT-mgh}$$ And this is way different than the actual answer:
$$n(h)=n_0e^{-\frac{mgh}{kT}}$$
I understand that if in the first equation I treat n as a constant I will get the right answer, but why should I do that? I don't see the logic.
Answer: Your first equation is wrong. $P = \rho g h$ is true only when the density is uniform. A proper way to derive is by noting that
$$ \frac{dP}{dh} = -\rho g$$
where this equation holds true for all static fluids that stay at rest under gravity. This can be derived by applying Newton's laws to a static slab of fluid suspended in the air. Now if we assume that the gas is ideal, we can use $P = nk_BT$. Also assuming the temperature is uniform and each molecule has mass $m$, we have
$$ \frac{dn}{dh} = -\frac{m g}{k_B T} n$$
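Integrating gives the barometric formula $n(h) = n_0 e^{-mgh/k_B T}$; here is a quick numerical cross-check of the ODE against that closed form (a hedged sketch with rough illustrative numbers for N2 at 300 K):

```python
import math

# Rough illustrative numbers for N2 near room temperature (assumed values).
m, g, kB, T = 4.65e-26, 9.81, 1.381e-23, 300.0
n0, h_max, dh = 1.0, 8000.0, 1.0

# Forward-Euler integration of dn/dh = -(m g / kB T) n ...
n = n0
for _ in range(int(h_max / dh)):
    n += -(m * g / (kB * T)) * n * dh

# ... matches the closed form n0 * exp(-m g h / kB T):
exact = n0 * math.exp(-m * g * h_max / (kB * T))
print(n, exact)  # both ~0.414: ~41% of the sea-level concentration at 8 km
```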
You can then integrate this equation easily to get your answer. | {
"domain": "physics.stackexchange",
"id": 96650,
"tags": "thermodynamics, statistical-mechanics, ideal-gas, atmospheric-science, molecules"
} |
ROS Twist Mux Figuring Out which Channel is Selected | Question:
Hello. I am working with the ROS twist mux package http://wiki.ros.org/twist_mux and am writing a program that needs to know which channel of the twist mux is selected. Is there an easy way to do this or does anyone know of a good work around if an easy way does not exist? Thanks.
Originally posted by mequi on ROS Answers with karma: 111 on 2020-05-27
Post score: 0
Answer:
@gvdhoorn Thanks. I decided to just go ahead and implement the feature myself. https://github.com/ros-teleop/twist_mux/pull/20
Originally posted by mequi with karma: 111 on 2020-05-28
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by gvdhoorn on 2020-05-29:
Nit: until your PR is merged, your answer is not actually the answer. | {
"domain": "robotics.stackexchange",
"id": 35021,
"tags": "ros-melodic"
} |
arm_navigation reports JOINT_LIMITS_VIOLATED often | Question:
I'm trying to use the arm_navigation stack to do some motion planning on the pr2, and I notice that I often get an error code of JOINT_LIMITS_VIOLATED. [I'm not really sure what this is about, but it seems like it's based on the initial joint state because I can fix this by moving the arm around until I get to a good state (specifically, it seems to not like the wrist being at a 90 degree angle with the forearm). Is this actually what the error means?] Update: Oops yeah, should have checked rosout, the error is indeed Start state violates joint limits, can't plan..
If the arm is in a state that is physically possible (I'm working on a real PR2), it shouldn't be violating joint limits, right?
For some reason I can't seem to plan for the left arm at all. No matter what I do I get this error. I'm guessing maybe I have something copied and pasted from the right arm stuff that needs to be changed, but I'm not seeing it. I have started both arm planners using roslaunch pr2_3dnav both_arms_navigation.launch and I am sending the request to the move_left_arm service.
https://github.com/rll/berkeley_utils/blob/master/rll_utils/src/rll_utils/MoveArmUtils.py is what I wrote to construct and send a MoveArmGoal, just a convenience wrapper that populates some sensible defaults for a lot of fields. I hope others who prefer to use python over C++ when possible find it useful :)
Any help is appreciated! Thanks!
Edit: Here's the MoveArmGoal that I'm sending:
https://gist.github.com/1495429/8abcc19fb109f2dd813356cefb48a68e9159beb6
Edit2: Rosout is good, I should check it more. Here are the joints that are violating limits (as I suspected, it's the wrists):
https://gist.github.com/e9a379199e7a348a0f4f
Related to that, why is there both move_arm_msgs/MoveArmGoal and arm_navigation_msgs/MoveArmGoal which only differ in that the latter takes in a planning_scene_diff? I guess hopefully it doesn't make a difference if you don't populate it. Is the move_arm_msgs version a relic from Diamondback? If so, perhaps it shouldn't be listed on http://www.ros.org/wiki/move_arm .
Originally posted by Ibrahim on ROS Answers with karma: 307 on 2011-12-18
Post score: 1
Answer:
Hah, solved my own problem. The problem, if it is one, is just that the wrist_flex_link joint limits are perhaps too conservative and don't encompass the extremes that the wrist can actually take on (which happen to come up often because the rest positions of the wrist tend to the extremes due to gravity if you're disabling the runstops and repositioning the arms/wrists like I am).
http://ros.org/wiki/pr2_controller_manager/safety_limits
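As a quick sanity check before sending a goal, one can compare the current joint positions against the limits (a minimal sketch; the limit values below are purely illustrative, the real ones come from the URDF and the safety-limits configuration linked above):

```python
def violated_joints(positions, limits, margin=0.0):
    """Return the joints whose position lies outside [lo + margin, hi - margin]."""
    bad = []
    for name, pos in positions.items():
        lo, hi = limits[name]
        if not (lo + margin <= pos <= hi - margin):
            bad.append(name)
    return bad

# Illustrative numbers only: a wrist_flex joint resting just past its soft limit.
limits = {'r_wrist_flex_joint': (-2.18, 0.0)}
positions = {'r_wrist_flex_joint': -2.20}
bad = violated_joints(positions, limits)   # ['r_wrist_flex_joint']
```

Moving any such joint back inside its range (or relaxing the limits, as above) is what makes the planner accept the start state.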
Originally posted by Ibrahim with karma: 307 on 2011-12-18
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by jys on 2013-02-27:
@Ibrahim, So, how did you solve this problem?? I am fairly new to ROS, please describe in detail.. Thanks! | {
"domain": "robotics.stackexchange",
"id": 7683,
"tags": "ros, pr2-arm-navigation, joints"
} |
Could a powerful gravitational wave tear apart an object? | Question: Gravitational waves can temporarily squeeze the fabric of spacetime itself, and, given enough energy, a single object in the way could be squeezed along with the spacetime it sits in so much that parts of it become separated. If this separation is large enough, the other forces (mainly EM, via covalent bonds) that bind parts of the object together could lose contact. Just like when we tear an object apart, the covalent bonds could give in after a while.
Could a powerful gravitational wave cause electrons to emit light?
where John Rennie says:
This may seem a bit odd, but it happens because the gravitational wave changes the separation of objects by changing the geometry of the spacetime around them, not by exerting a force on the objects to move them.
How would a passing gravitational wave look or feel?
where Emil says:
However, for higher GW frequencies, and especially at the mechanical resonance, the system will experience an effective force
Question:
Could a powerful gravitational wave tear apart an object?
Answer:
Could a powerful gravitational wave tear apart an object?
If the force is strong enough the answer is yes, although the waves we have received until now are many orders of magnitude too small for that. For example, let's take GW150914, which had a received strain of $s=10^{-21} \rm m/m$ and a frequency of $500\rm Hz$, so the sine-function for the wave is
$f=\sin(2\pi \cdot 500 t)$
That gives you a relative acceleration of
$a = s \cdot L \cdot \ddot{f}$
from one end to the other, where $L$ is the length of the stretched object, and $\ddot{f}={\rm d}^2f/{\rm d}t^2$. At LIGO, the length was $L=4000\rm m$, so the maximal acceleration from one end to the other was around $4 \cdot 10^{-11} \rm N/kg$, which is not enough to tear LIGO's arm apart, but if you increase either the strain, the frequency or the arm length, the acceleration and therefore the force would get stronger, and if strong enough even tear apart the object:
[Plot: relative acceleration / specific force (m/s², N/kg) versus time (s)]
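Plugging the GW150914-like numbers into the formula above reproduces the quoted order of magnitude (a quick sketch of the arithmetic only):

```python
import math

s = 1e-21      # strain (m/m)
f_gw = 500.0   # wave frequency (Hz)
L = 4000.0     # arm length (m)

# The peak of the second time derivative of sin(2*pi*f*t) is (2*pi*f)**2,
# so the maximal relative acceleration across the object is:
a_max = s * L * (2 * math.pi * f_gw) ** 2
# a_max is about 4e-11 m/s^2 (i.e. N/kg), as stated above.
```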
Either the object is rigid and strong enough to withstand the wave, in which case its molecules have to accelerate relative to the changing space to keep the object from stretching and squeezing, and it will feel a force because the electromagnetic forces between the molecules act against the wave to counteract the deformation.
If the objects are loose and not bound to each other, they will increase and decrease their distance to each other in order to follow their force-free geodesics. If they cannot do that because they are bound, they are no longer force-free; in that case it depends on whether the binding forces between the molecules are stronger or weaker than the force required to cancel out the deformation from the wave.
If you concentrate the masses on the two ends and connect them with a thin string the conversion from acceleration to force and the determination if the string will tear apart is simple, and if you have your mass spread out over the whole distance like on a rigid stick you have to integrate over the distance.
That is also the reason why even objects inside a wave which are not torn apart are at least heated up due to the internal exchange of forces and friction, see Feynman's sticky bead argument. Since the counteraction causes destructive wave interference (if bound objects stay at rest relative to each other while the space between them oscillates, they oscillate relative to the changing space), the wave also gets weaker by the amount of energy produced in this process. | {
"domain": "physics.stackexchange",
"id": 64675,
"tags": "general-relativity, gravitational-waves, estimation"
} |
Why do we have to train a model from scratch every time? | Question: I have started on Andrew Ng's machine learning course. It seems that machine learning is learning correlations with known data based on as many parameters as possible. For example, if we collect data on existing property prices with information on the land area, built-in area, type of building, age of the building, etc, it is possible to predict the price of another property if we input the value of the various parameters of this property.
Similarly, if we keep the images (the black and white pixels) of cats, we can tell whether a new picture is a cat if it bears some resemblance to the pixels of existing labeled cat images.
This approach sounds great, but is it practical? How much effort and zettabytes of data do we have to keep just to reach the brainpower of, say, a 3-year old, who can recognize dogs, cats, tigers, a Mustang, trucks, a hamburger restaurant, and so on?
Why does everyone have to repeat the effort of learning the same things?
If Google has already learned cats, or if someone already has a program to recognize handwritten digits, can this knowledge be shared and re-used? Or is it just a matter of paying for them?
Answer: I think SmallChess makes a good point on your first question, so I want to focus on the repetition of different problems.
I could see you meaning several different things by this:
Why does every machine learning course out there do MNIST/real estate values/other simple problems? If you want to learn how to solve a complex problem, you need to understand how the individual parts come together. You could jump straight to a classifier trained on 10,000 image categories using the current, most advanced techniques. However, it's much easier to see how certain algorithms work better with certain types of problems when starting with small, easy ones that can be solved in a few minutes. You can try many different sets of algorithms and hyperparameters on a problem that takes 3 minutes to train on your local computer. Then, you get a sense of what each part contributes to the whole. This will help you progress a lot faster than training a network that takes 2 weeks on multiple GPUs each time and trying to figure out what's going on. In addition, you don't have a ton of layers of complexity to try and understand. You get a feel for a simple approach and once you understand it, add complexity. It's the same with learning anything else. My physics class started with gravity and throwing a ball in the air, not with relativity.
Why are there so many people out there doing all these different approaches to image classification/chatbots/reinforcement learning/whatever? Machine learning is not a solved problem. There are a set of algorithms we understand pretty well. Fitting a polynomial to a set of points doesn't really have any hidden tricks up its sleeves. There are plenty of other algorithms that we're still figuring out. Sometimes, a neural network never converges, or only converges to some limit. When you change the hyperparameters, it converges or has better accuracy or something. But sometimes, it generalizes poorly. Looking at 10 million parameters and deciding which one screwed up is hard, so people research how to solve these problems. Or, often, current "state-of-the-art" approaches really aren't at the point where improvement is rare. Every single year, they hold another competition and lots of people come up with better ways to solve it, either with new algorithms or better refinements to the ones they were already using. There are still so many things not yet understood about this field and it seems there are major discoveries constantly.
Why can't I just use someone else's pre-trained methods? You can! If you can find someone who has put their model out there (for instance, TensorFlow), you can take it and run it for yourself. Some researchers put out their already trained models and some don't, so you might get lucky. The problem is, you have to decide if the model they trained fits your needs. If you really need a visual classifier that can detect penguins, and you download one that was trained on 1,000 classes, none of which were penguins, you're out of luck. So, definitely, look for someone else who might have already done it. Just make sure that it covers what you actually need.
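The "share and re-use" idea can be made concrete with a minimal sketch (using scikit-learn's bundled digits data; the file name is illustrative): one party trains and saves a model, and anyone else just loads it and predicts, with no retraining:

```python
import os
import tempfile

import joblib
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

X, y = load_digits(return_X_y=True)

# One party trains once and ships the resulting file...
model = LogisticRegression(max_iter=2000).fit(X[:1000], y[:1000])
path = os.path.join(tempfile.gettempdir(), "digits_model.joblib")
joblib.dump(model, path)

# ...and someone else loads it and uses it directly.
reloaded = joblib.load(path)
accuracy = reloaded.score(X[1000:], y[1000:])
```

Of course, this only helps if the shipped model covers the classes you actually need; a 1,000-class model without penguins is still useless for penguins.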
It doesn't really fit into the above points, but I hope I can convey that machine learning is just another tool. Taking your comment about wanting to build a robot companion for a toddler, you need to decide whether it fits the problem you have. If it has some wheels for moving the base around, you could spend a ton of time building a controller with reinforcement learning. Or, you could program and implement a PID controller in a couple days. Does it need to detect the surroundings and label objects in camera images to interact with them? Then you probably need a CNN of some kind to do it accurately. Does it need to listen to the toddler's voice commands to go bring a ball back? Then, you need some way to interpret those, and it might be another machine learning algorithm. But let's say that you find several pre-trained algorithms that do the tasks you need. They still need to be linked together in whatever overall software you have. Analogously, if you were using someone else's path planning algorithm, you still need to define the goals, take the outputs and input them into the controller, and update your state estimate.
There are lots of people hoping to figure out machine learning algorithms that can just do everything like humans do. End-to-end learning tries to go directly from input data to actions (self-driving cars). Others are trying to create an artificial general intelligence that thinks and interacts with the world like we do. The point is, there are so many things that haven't been solved yet. Your robot won't be as good as a 3 year-old at many things, but that's not the point. ML gives the robot the ability to be really good at some certain thing, like figuring out where the toy it's looking at is. But it isn't a "wave your hands and it all works" solution.
tl;dr Use ML when it fits the problem. But it's a hard problem, and there are a lot of things to understand about it if you want to use it. It's a tool in your toolbox. | {
"domain": "ai.stackexchange",
"id": 396,
"tags": "machine-learning, transfer-learning"
} |
Integration involves derivative of delta function | Question: This appears in explicitly calculating the path integral of harmonic oscillators:
First note the second functional derivative of classical action is
$$\frac{\delta^2 S[x]}{\delta x(t_1)\delta x(t_2)}=-m\left(\frac{d^2}{d t_1^2}+\omega^2\right)\delta (t_1-t_2)$$
Then the expansion around the classical path $x_c$ is
\begin{align*}
S[x_c+y]=S[x_c]+\frac{1}{2!}\int dt_1\,dt_2\, y(t_1)y(t_2)\frac{\delta^2 S[x]}{\delta x(t_1)\delta x(t_2)}\\
=S[x_c]-\frac{m}{2!}\int dt_1\,dt_2\, y(t_1)y(t_2)\left(\frac{d^2}{d t_1^2}+\omega^2\right)\delta (t_1-t_2)
\end{align*}
Applying integration by parts to the delta-function part, I got $$-\frac{m}{2} \int dt_1\, y(t_1)\frac{d^2}{dt_1^2}y(t_1)$$
while the book gives
$$\frac{m}{2}\int dt_1 (\frac{d y(t_1)}{dt_1})^2.$$
Any suggestions for what I did wrong?
Answer: Your calculation is correct; with one more step you would have gotten the book result
$$-\frac{m}{2}\int dt_1\,y(t_1)\frac{d^2y(t_1)}{d t_1^2}=-\frac{m}{2}\int dt_1\,\Bigg[\frac{d}{dt_1}\Bigg(y(t_1)\frac{dy(t_1)}{d t_1}\Bigg)-\Bigg(\frac{dy(t_1)}{dt_1}\Bigg)^2\Bigg]=\\=\frac{m}{2}\int dt_1\Bigg(\frac{dy(t_1)}{dt_1}\Bigg)^2$$ | {
"domain": "physics.stackexchange",
"id": 78302,
"tags": "homework-and-exercises, lagrangian-formalism, dirac-delta-distributions, variational-calculus, boundary-terms"
} |
When a gas expands against an external pressure of 0, must the stopper on the cylinder be massless? | Question: Basically, I need to conceptually understand why the work a gas does is the integral $\int p_\mathrm{external}dv$ and is 0 when the external pressure is 0. I understand why $\mathrm{d}w = - p_\mathrm{external}\, \mathrm{d}v$, and so obviously I understand why the math says the work is 0; I need to understand it conceptually.
When you have isothermal expansion in a cylinder where the external volume outside the gas has pressure 0, a.k.a. vacuum, must the movable top of the cylinder be massless? The equations obviously say the work is 0, but I think of the stopper itself as being resistance. Could I think of isothermal expansion in a vacuum as the top basically "disappearing" when the gas is released?
This is difficult to phrase for me, so please ask if you don't understand. I also am only a sophomore in high school, so I have a limited understanding of calculus. Like I said, conceptually for the pressure against the gas to be 0, the lid must be frictionless and massless, right? Thank you.
Answer: You can indeed think of expansion in a vacuum as the top disappearing when the gas is released.
Imagine you take a powerful microscope and watch the gas molecules hitting the lid of your cylinder. If the lid isn't moving the gas molecules will bounce off with the same speed as they hit i.e. no energy is lost, so no work is done and the internal energy of your gas stays the same.
Now suppose you're allowing the gas to expand reversibly. If you watch a gas molecule hitting the lid, then because the lid is moving the gas molecule will bounce off with less speed than it hit. The difference in the energy of the gas molecule is transferred to the lid, and if you add up the energy transferred by all the molecules this gives you the work done.
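The energy transfer in the reversible case can be made concrete: for an elastic bounce off a lid receding at speed $u$, a molecule arriving at speed $v$ leaves at $v - 2u$. A sketch with purely illustrative numbers:

```python
m = 4.65e-26   # kg, roughly the mass of one N2 molecule (illustrative)
v = 500.0      # m/s, molecule speed toward the lid
u = 0.01       # m/s, lid receding slowly (near-reversible expansion)

v_after = v - 2 * u                       # elastic bounce off a moving wall
dE = 0.5 * m * (v ** 2 - v_after ** 2)    # kinetic energy handed to the lid

# Algebraically dE = 2*m*u*(v - u) > 0: the gas does work on the moving lid.
# With u = 0 (static lid), or with no lid at all (vacuum), dE = 0: no work.
```

Summing this per-collision loss over all molecules hitting the lid recovers the macroscopic work done by the gas.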
Now imagine suddenly removing the lid so the gas is expanding into a vacuum. The gas molecules now don't bounce off the lid because it isn't there, so their speed doesn't change. Because their speed doesn't change, no work can be done. That's why the work done in this case is zero. | {
"domain": "physics.stackexchange",
"id": 78459,
"tags": "thermodynamics, pressure, ideal-gas"
} |
Minimal velocity to throw an object to the Sun | Question: What is the minimal velocity to throw an object (material point) to the Sun from Earth, with no specific restrictions?
Answer: The key constraint on hitting the sun is that the object has to have very little angular momentum. The reason is that as the distance to the sun gets smaller, the velocity perpendicular to the sun direction gets larger, thanks to conservation of angular momentum:
$$ L = mv_\perp r\rightarrow v_\perp={L\over mr}$$
A good first-order approximation can be found just by assuming you throw the object so that it has zero angular momentum. To do this, you have to throw the object as fast as the earth is traveling around the sun, just in the opposite direction. So, roughly $30~\rm{km\over s}$. There are two effects that change this a little and one that changes it a lot:
Earth's gravity will slow the ball down, so you have to throw it a bit faster at the start. This requires you to throw the ball about $7\%$ faster, as when the object leaves the earth's gravity well it loses $\frac12mv_{\rm escape}^2$ of its kinetic energy. This means the initial kinetic energy must be $\frac12mv^2_{\rm required\ speed\ after\ escape}+\frac12mv^2_{\rm escape}$, so $v_{\rm throw}^2=v_{\rm ignoring\ escape}^2+v_{\rm escape}^2$
The sun has a finite extent, so the ball can have a small angular velocity and still hit the sun. This lets you throw the ball a little slower (but not much: the sun is a small target as far as orbits are concerned)
Air resistance is enormous at $30\ \rm{km\over s}$, so you're going to have to throw it a lot faster if you're not ignoring air resistance (So throw it from orbit, not from the ground) | {
"domain": "physics.stackexchange",
"id": 47211,
"tags": "newtonian-gravity, orbital-motion, projectile, solar-system, celestial-mechanics"
} |
How does one geometrically quantize the Bloch equations? | Question: I've just now rated David Bar Moshe's post (below) as an "answer", for which appreciation and thanks are given.
Nonetheless there's more to be said, and in hopes of stimulating further posts, I've added additional background material. In particular, it turns out that a 2003 article by Bloch, Golse, Paul and Uribe “Dispersionless Toda and Toeplitz operators” includes constructions that illustrate some (but not all) of the quantization techniques asked-for, per the added discussion below.
The question asked is:
How does one geometrically quantize the Bloch equations?
Background
From a geometric point-of-view, the Bloch sphere is the simplest (classical) symplectic manifold and the Bloch equations for dipole-coupled spins specifies the simplest (classical) nontrivial Hamiltonian dynamics.
In learning modern methods of geometric quantization — as abstractly described on Wikipedia's Geometric Quantization article for example — it would be very helpful (for a non-expert like me) to see the quantum Hamiltonian equations for interacting spins derived from the classical Hamiltonian equations.
To date, keyword searches on the Arxiv server and on Google Books have found no such exposition. Does mean that there's an obstruction to geometrically quantizing the Bloch equations? If so, what is it? Alternatively, can anyone point to a tutorial reference?
The more details given, and the more elementary the exposition, the better! :)
Some engineering motivations
It is natural in quantum systems engineering to pullback quantum Hamiltonian dynamics onto tensor network state-spaces of lower-and-lower dimension (technically, these state-spaces are a stratification of secant varieties of Segre varieties).
It should be appreciated too that in this context “quantum Hamiltonian dynamics” includes stochastic unravellings of Lindbladian measurement-and-control processes (per these on-line notes by Carlton Caves). Presenting the unravelled trajectories in Stratonovich form allows the open quantum dynamics of general Lindblad processes to be pulled back with the same geometric naturality as the closed quantum dynamics of Hamiltonian potentials and symplectic forms. This Lindbladian pullback idiom is absent from mathematical discussions of geometric quantization, e.g. the above-mentioned article by Bloch et al. In essence, we engineers are using these pullback techniques with good success, without having a complete or even geometrically natural understanding of them.
Pulling back through successive state-spaces of smaller-and-smaller dimensionality, we arrive (unsurprisingly) at an innermost state-space that is a tensor product of Bloch spheres that inherits its (classical) symplectic structure from the starting Hilbert space. Moreover, the Lindblad processes pull back (also unsurprisingly) to classical noise and backaction that respects the standard quantum limit.
For multiple systems engineering reasons, we would like to understanding this stratification backwards and forwards, in the following geometrically literal sense: on any state-space of this stratification, we wish the dual option of either pulling-back the dynamics onto a more classical state-space, or pushing-forward the dynamics onto a more quantum state-space.
Insofar as possible, the hoped-for description of geometric (de/re)quantization will illuminate this duality in both directions. Needless to say, the simpler and more geometrically/informatically natural the description of this duality, the better (recognizing that this naturality is a lot to hope for). :)
Answer: I have written an answer on MathOverflow in which explicit formulas for the classical and quantum Hamiltonians of a spin system (generators of $SU(2)$) were written explicitly. The classical Hamiltonians are given by functions on the two-sphere and the quantum Hamiltonians by holomorphic differential operators (which act on the sections of the quantum line bundle). For a many-spin system with a Hamiltonian linear in each spin, one just has a distinct one-particle Hamiltonian per spin. Sorry for referring to my own work, but it is by no means original.
"domain": "physics.stackexchange",
"id": 3389,
"tags": "quantum-mechanics, research-level, quantization, spin-models, molecular-dynamics"
} |
Proof that $G$ is a yes-instance of IDS $\iff$ $f(G)$ is a yes-instance of SAT | Question: Consider the Independent Dominating Set problem with a directed graph $G=(V,E)$ as instance and the properties that:
$\forall (u,v) \in E, \{u,v\}\nsubseteq S$
$\forall v \in V: v \in S \lor \exists (u,v) \in E: u \in S$
then consider the function $f$ which is a many-one reduction from IDS to SAT for $G$:
$$f(G)= \bigwedge_{(u,v) \in E}(\neg x_u \lor \neg x_v) \land \bigwedge_{v \in V}(x_v \lor \bigvee_{(u,v) \in E}x_u)$$
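A minimal sketch of this reduction as code (clause lists over one propositional atom per vertex; the tuple representation of negative literals is an arbitrary illustrative choice):

```python
def ids_to_sat(vertices, edges):
    """Build the CNF f(G); v stands for the atom x_v, ('not', v) for its negation."""
    clauses = []
    # Independence clauses: for every edge (u, v), u and v are not both in S.
    for u, v in edges:
        clauses.append([('not', u), ('not', v)])
    # Domination clauses: every v is in S or has an in-neighbour in S.
    for v in vertices:
        clauses.append([v] + [u for (u, w) in edges if w == v])
    return clauses

# Example: the directed path 1 -> 2 -> 3
cnf = ids_to_sat([1, 2, 3], [(1, 2), (2, 3)])
```

Satisfying assignments of the resulting CNF then correspond exactly to independent dominating sets, which is what the theorem below makes precise.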
Theorem: $G$ is a yes-instance of IDS $\iff$ $f(G)$ is a yes-instance of SAT
Proof:
"$\Leftarrow$": Assume that $f(G)$ is a yes-instance of SAT:
Consider the propositional atoms $x_u$ and $x_v$, which represent the vertices of an edge. $x_v$ is $true$ iff $v$ is in $S$. To prove a CNF formula true, we have to prove all conjoined subformulas true.
Since $\bigwedge_{(u,v) \in E}(\neg x_u \lor \neg x_v)$ holds, telling us that at most one of the vertices $u,v$ may be in $S$ at the same time, this implies $\forall (u,v) \in E, \{u,v\}\nsubseteq S$.
For the second part, we know that $\bigwedge_{v \in V}(x_v \lor \bigvee_{(u,v) \in E}x_u)$ only evaluates to true if either $v \in S$ or some $u \in S$ that is connected to $v$ by an edge. This implies $\forall v\in V: v \in S \lor \exists(u,v) \in E: u \in S$.
Therefore we are done.
Is my proof for "$\Leftarrow$" of the theorem correct?
Assumption: The proof is correct and the reduction is valid:
Now we know that IDS is NP-hard, for completeness of IDS we still have to show NP-membership of IDS, right? Furthermore we have to provide an efficient algorithm which uses non-determinism to show that IDS is a member of NP, correct?
Answer: Yes, your proof of $\Rightarrow$ is correct, but note that you still have to prove the second direction to complete the reduction.
Now we know that IDS is NP-hard
Unfortunately, we don't. We'd need a reduction from SAT to IDS and not the other way around.
To convince yourself of why it makes sense, a reduction from $A$ to $B$ means that if we had an algorithm that solves $B$, we could use it to trivially solve $A$. So, in your case - if we had an algorithm that solves SAT we would be able to solve IDS, but that's not what we want to show. We want to claim that if we could solve IDS then we could solve SAT, which means IDS is "at least as hard as SAT".
Also, notice that you need a Karp reduction and not just a many-one reduction. That is, a many-one reduction which can be computed in polynomial time (your reduction achieves that).
Furthermore we have to provide an efficient algorithm which uses non-determinism to show that IDS is a member of NP, correct?
Correct.
Assuming we have provided a Karp reduction from SAT to IDS, or equivalently that we proved $\text{SAT}\le_p \text{IDS}$, all we know is that IDS is NP-hard, and we still need to prove that $\text{IDS} \in \text{NP}$ to conclude that IDS is NP-complete.
"domain": "cs.stackexchange",
"id": 15310,
"tags": "complexity-theory"
} |
Greatest volumetric heat capacity | Question: Is there any substance with a bigger volumetric heat capacity than water? According to this table, water has the biggest known VHC. But I can't believe that in the 21st century we have no special material with a larger VHC.
Answer: Here is a thesis that details construction and characterization of thin films with large volumetric heat capacities (some nearing $6\ MJ\cdot m^{-3}K^{-1}$):
Volumetric heat capacity enhancement in ultrathin fluorocarbon polymers for capacitive thermal management | {
"domain": "physics.stackexchange",
"id": 3475,
"tags": "heat"
} |
Fetch robot, joint_state topic sometimes contains all joints, sometimes doesn't | Question:
I am programming with the Fetch robot (the physical one, not Gazebo simulation).
I use my laptop with the following settings:
Macbook Pro Late 2013 edition
Ubuntu 14.04 installed as sole operating system
ROS Indigo
Plus the Fetch ROS from here.
Also the ROS variables:
<fetch>~/FETCH_CORE/fetch_core$ env | grep ROS
ROS_ROOT=/opt/ros/indigo/share/ros
ROS_PACKAGE_PATH=/opt/ros/indigo/share:/opt/ros/indigo/stacks
ROS_MASTER_URI=http://fetch59.local:11311
ROSLISP_PACKAGE_DIRECTORIES=
ROS_DISTRO=indigo
ROS_IP=10.0.0.121
ROS_HOME=/home/daniel/.ros
ROS_ETC_DIR=/opt/ros/indigo/etc/ros
I have the following ROS topics:
<fetch>~/FETCH_CORE/fetch_core$ rostopic list
/arm_controller/cartesian_twist/command
/arm_controller/follow_joint_trajectory/cancel
/arm_controller/follow_joint_trajectory/feedback
/arm_controller/follow_joint_trajectory/goal
/arm_controller/follow_joint_trajectory/result
/arm_controller/follow_joint_trajectory/status
/arm_with_torso_controller/follow_joint_trajectory/cancel
/arm_with_torso_controller/follow_joint_trajectory/feedback
/arm_with_torso_controller/follow_joint_trajectory/goal
/arm_with_torso_controller/follow_joint_trajectory/result
/arm_with_torso_controller/follow_joint_trajectory/status
/base_controller/command
/base_scan
/base_scan_no_self_filter
/base_scan_raw
/base_scan_tagged
/battery_state
/charge_lockout/cancel
/charge_lockout/feedback
/charge_lockout/goal
/charge_lockout/result
/charge_lockout/status
/cmd_vel
/cmd_vel_mux/selected
/diagnostics
/diagnostics_agg
/diagnostics_toplevel_state
/dock/result
/enable_software_runstop
/graft/state
/gripper/gyro_offset
/gripper/imu
/gripper/imu_raw
/gripper_controller/gripper_action/cancel
/gripper_controller/gripper_action/feedback
/gripper_controller/gripper_action/goal
/gripper_controller/gripper_action/result
/gripper_controller/gripper_action/status
/gripper_controller/led_action/cancel
/gripper_controller/led_action/feedback
/gripper_controller/led_action/goal
/gripper_controller/led_action/result
/gripper_controller/led_action/status
/gripper_state
/head_camera/crop_decimate/parameter_descriptions
/head_camera/crop_decimate/parameter_updates
/head_camera/depth/camera_info
/head_camera/depth/image
/head_camera/depth/image_raw
/head_camera/depth/image_rect
/head_camera/depth/image_rect_raw
/head_camera/depth/points
/head_camera/depth_downsample/camera_info
/head_camera/depth_downsample/image_raw
/head_camera/depth_downsample/points
/head_camera/depth_rectify_depth/parameter_descriptions
/head_camera/depth_rectify_depth/parameter_updates
/head_camera/depth_registered/camera_info
/head_camera/depth_registered/hw_registered/image_rect
/head_camera/depth_registered/hw_registered/image_rect_raw
/head_camera/depth_registered/image
/head_camera/depth_registered/image_raw
/head_camera/depth_registered/points
/head_camera/depth_registered_rectify_depth/parameter_descriptions
/head_camera/depth_registered_rectify_depth/parameter_updates
/head_camera/driver/parameter_descriptions
/head_camera/driver/parameter_updates
/head_camera/head_camera_nodelet_manager/bond
/head_camera/ir/camera_info
/head_camera/ir/image
/head_camera/projector/camera_info
/head_camera/rgb/camera_info
/head_camera/rgb/image_raw
/head_camera/rgb/image_rect_color
/head_camera/rgb_rectify_color/parameter_descriptions
/head_camera/rgb_rectify_color/parameter_updates
/head_controller/follow_joint_trajectory/cancel
/head_controller/follow_joint_trajectory/feedback
/head_controller/follow_joint_trajectory/goal
/head_controller/follow_joint_trajectory/result
/head_controller/follow_joint_trajectory/status
/head_controller/point_head/cancel
/head_controller/point_head/feedback
/head_controller/point_head/goal
/head_controller/point_head/result
/head_controller/point_head/status
/imu
/imu1/gyro_offset
/imu1/imu
/imu1/imu_raw
/imu2/gyro_offset
/imu2/imu
/imu2/imu_raw
/joint_states
/joy
/laser_self_filter/cancel
/laser_self_filter/feedback
/laser_self_filter/goal
/laser_self_filter/result
/laser_self_filter/status
/odom
/odom_combined
/query_controller_states/cancel
/query_controller_states/feedback
/query_controller_states/goal
/query_controller_states/result
/query_controller_states/status
/robot_state
/robotsound
/rosout
/rosout_agg
/sick_tim551_2050001/parameter_descriptions
/sick_tim551_2050001/parameter_updates
/software_runstop_enabled
/sound_play/cancel
/sound_play/feedback
/sound_play/goal
/sound_play/result
/sound_play/status
/teleop/cmd_vel
/tf
/tf_static
/torso_controller/follow_joint_trajectory/cancel
/torso_controller/follow_joint_trajectory/feedback
/torso_controller/follow_joint_trajectory/goal
/torso_controller/follow_joint_trajectory/result
/torso_controller/follow_joint_trajectory/status
So far so good. Now I want to check the joint angles. What I often do here is echo the corresponding ROS topic. Here I call rostopic echo -n 1 /joint_states repeatedly. I press ENTER on my keyboard, then I count 4-5 seconds in my head, then press ENTER again, then count 4-5 seconds, press ENTER again, and so forth. Check out the output after running this several times:
<fetch>~/FETCH_CORE/fetch_core$ rostopic echo -n 1 /joint_states
header:
seq: 63846
stamp:
secs: 1526495986
nsecs: 155393416
frame_id: ''
name: ['l_gripper_finger_joint', 'r_gripper_finger_joint']
position: [0.004585660994052887, 0.004585660994052887]
velocity: [-4.410743713378906e-06, -4.410743713378906e-06]
effort: [0.0, 0.0]
---
<fetch>~/FETCH_CORE/fetch_core$ rostopic echo -n 1 /joint_states
header:
seq: 64562
stamp:
secs: 1526495993
nsecs: 315394732
frame_id: ''
name: ['l_gripper_finger_joint', 'r_gripper_finger_joint']
position: [0.004585575312376022, 0.004585575312376022]
velocity: [-0.0, -0.0]
effort: [0.0, 0.0]
---
<fetch>~/FETCH_CORE/fetch_core$ rostopic echo -n 1 /joint_states
header:
seq: 65164
stamp:
secs: 1526495999
nsecs: 335393351
frame_id: ''
name: ['l_gripper_finger_joint', 'r_gripper_finger_joint']
position: [0.004585616290569305, 0.004585616290569305]
velocity: [-2.086162567138672e-06, -2.086162567138672e-06]
effort: [0.0, 0.0]
---
<fetch>~/FETCH_CORE/fetch_core$ rostopic echo -n 1 /joint_states
header:
seq: 65728
stamp:
secs: 1526496004
nsecs: 975435878
frame_id: ''
name: ['l_gripper_finger_joint', 'r_gripper_finger_joint']
position: [0.004585575312376022, 0.004585575312376022]
velocity: [1.1920928955078125e-07, 1.1920928955078125e-07]
effort: [0.0, 0.0]
---
<fetch>~/FETCH_CORE/fetch_core$ rostopic echo -n 1 /joint_states
header:
seq: 66307
stamp:
secs: 1526496010
nsecs: 765425292
frame_id: ''
name: ['l_gripper_finger_joint', 'r_gripper_finger_joint']
position: [0.004585538059473038, 0.004585538059473038]
velocity: [2.4437904357910156e-06, 2.4437904357910156e-06]
effort: [0.0, 0.0]
---
<fetch>~/FETCH_CORE/fetch_core$ rostopic echo -n 1 /joint_states
header:
seq: 66830
stamp:
secs: 1526496016
nsecs: 477241490
frame_id: ''
name: ['l_wheel_joint', 'r_wheel_joint', 'torso_lift_joint', 'bellows_joint', 'head_pan_joint', 'head_tilt_joint', 'shoulder_pan_joint', 'shoulder_lift_joint', 'upperarm_roll_joint', 'elbow_flex_joint', 'forearm_roll_joint', 'wrist_flex_joint', 'wrist_roll_joint']
position: [-6.072175025939941, -0.20786872506141663, 0.002922113984823227, 0.0, 0.008403897285461426, 0.4989619489959717, 1.3218759175231933, 1.4561462151916504, -0.19919875611728668, 0.6284374523010254, 0.0005699233077907564, 2.0629285788806153, -0.00028681149895191244]
velocity: [-1.1920928955078125e-07, 0.0, -3.5762786865234375e-07, -1.7881393432617188e-07, -2.5451183319091797e-05, -0.0003771781921386719, 0.0003428459167480469, -0.0002372264862060547, 0.00010061264038085938, 0.00023758411407470703, 0.000217437744140625, -0.00033020973205566406, -4.589557647705078e-05]
effort: [-0.0, 0.0, 0.0, 0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0]
---
<fetch>~/FETCH_CORE/fetch_core$ rostopic echo -n 1 /joint_states
header:
seq: 67623
stamp:
secs: 1526496023
nsecs: 925422146
frame_id: ''
name: ['l_gripper_finger_joint', 'r_gripper_finger_joint']
position: [0.004585616290569305, 0.004585616290569305]
velocity: [-1.9669532775878906e-06, -1.9669532775878906e-06]
effort: [0.0, 0.0]
---
<fetch>~/FETCH_CORE/fetch_core$ rostopic echo -n 1 /joint_states
header:
seq: 68335
stamp:
secs: 1526496031
nsecs: 45394137
frame_id: ''
name: ['l_gripper_finger_joint', 'r_gripper_finger_joint']
position: [0.004585698246955872, 0.004585698246955872]
velocity: [-5.781650543212891e-06, -5.781650543212891e-06]
effort: [0.0, 0.0]
---
<fetch>~/FETCH_CORE/fetch_core$ rostopic echo -n 1 /joint_states
header:
seq: 68972
stamp:
secs: 1526496037
nsecs: 897239020
frame_id: ''
name: ['l_wheel_joint', 'r_wheel_joint', 'torso_lift_joint', 'bellows_joint', 'head_pan_joint', 'head_tilt_joint', 'shoulder_pan_joint', 'shoulder_lift_joint', 'upperarm_roll_joint', 'elbow_flex_joint', 'forearm_roll_joint', 'wrist_flex_joint', 'wrist_roll_joint']
position: [-6.072175025939941, -0.20786872506141663, 0.002876337617635727, 0.0, 0.008403897285461426, 0.49742796385803223, 1.3199585553100586, 1.4561462151916504, -0.19919875611728668, 0.6295880603637696, -0.0005801878943598269, 2.0636962867053223, 0.0004801792073726649]
velocity: [-1.1920928955078125e-07, 0.0, 2.205371856689453e-06, 1.1026859283447266e-06, 0.0007863044738769531, 0.00018477439880371094, 3.3736228942871094e-05, 0.00017905235290527344, -3.272294998168945e-05, 0.00017309188842773438, -0.00012302398681640625, 5.263090133666992e-05, 0.0006728172302246094]
effort: [-0.0, 0.0, 0.0, 0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0, -0.0]
---
<fetch>~/FETCH_CORE/fetch_core$ rostopic echo -n 1 /joint_states
header:
seq: 69693
stamp:
secs: 1526496044
nsecs: 625404858
frame_id: ''
name: ['l_gripper_finger_joint', 'r_gripper_finger_joint']
position: [0.004585575312376022, 0.004585575312376022]
velocity: [6.556510925292969e-07, 6.556510925292969e-07]
effort: [0.0, 0.0]
---
Notice that in two of the above cases, all the links of the Fetch are present. However, in the other cases, only the two gripper joints are present. Here is the corresponding Fetch documentation for the links.
This is a problem because, for instance, when I try to move the robot in code, I can only get the gripper movement to work, since it seems like the gripper joints are the ones that are consistently published. The torso, wheels, shoulders, etc., all seem to be inaccessible.
I think (though I'm not sure) that the code I'm using isn't the main issue (which is why I'm not pasting it here). The main problem, I think, is figuring out why the /joint_states topic isn't containing all the correct joints.
In case this helps:
<fetch>~/FETCH_CORE/fetch_core$ rostopic hz /joint_states
subscribed to [/joint_states]
average rate: 199.688
min: 0.000s max: 0.165s std dev: 0.01874s window: 176
no new messages
no new messages
average rate: 52.374
min: 0.000s max: 3.072s std dev: 0.21341s window: 208
average rate: 110.425
min: 0.000s max: 3.072s std dev: 0.13154s window: 548
average rate: 125.474
min: 0.000s max: 3.072s std dev: 0.11259s window: 748
average rate: 136.194
min: 0.000s max: 3.072s std dev: 0.10001s window: 948
average rate: 144.238
min: 0.000s max: 3.072s std dev: 0.09089s window: 1148
average rate: 150.420
min: 0.000s max: 3.072s std dev: 0.08385s window: 1349
average rate: 155.401
min: 0.000s max: 3.072s std dev: 0.07826s window: 1549
average rate: 159.610
min: 0.000s max: 3.072s std dev: 0.07361s window: 1751
^Caverage rate: 162.966
min: 0.000s max: 3.072s std dev: 0.06982s window: 1947
This happens after pressing the e-stop button, turning off the breaker, turning off the robot, and then turning the robot on again. I tried the process again but got some similar results.
Is this expected behavior or should I be worried?
Originally posted by DanielSeita on ROS Answers with karma: 46 on 2018-05-16
Post score: 0
Original comments
Comment by gvdhoorn on 2018-05-16:\
Notice that in two of the above cases, all the links of the Fetch are present.
No, they're not. Either the gripper joints are there, or the others.
I don't have a Fetch, but the gripper probably uses a separate JointState publisher, which only publishes for the gripper. The other joints ..
Comment by gvdhoorn on 2018-05-16:
.. are published by another JointState publisher.
That is all perfectly valid and supported. What I don't understand though is why Fetch would configure their robot that way. Typically different publishers would be placed in separate namespaces and joint_state_publisher would be configured ..
Comment by gvdhoorn on 2018-05-16:
.. with the source_list parameter to subscribe to all topics, coalesce the msgs into a single JointState msg and publish that on /joint_states.
You'd have to check the documentation and / or ask Fetch as to why that is not the case here.
Comment by DanielSeita on 2018-05-16:
Sorry, you are right, I did not read carefully. Yeah, it's either the gripper or the other joints. Let me look over at the Fetch GitHub repository for ROS (their actual docs don't explain this topic)
Comment by DanielSeita on 2018-05-16:
I found an explanation at https://github.com/cse481wi18/cse481wi18/wiki/Lab-8%3A-Reading-joint-states and am somewhat embarrassed that I didn't see this earlier. The wiki says that each message may only contain a subset of the joints. I see.
Comment by gvdhoorn on 2018-05-16:
I would be very surprised if there was no documentation at all about this topic. I'd really recommend you ask them. If you have one of their robots, you should be entitled to some support, no?
Comment by Moriarty on 2019-08-14:
Yes, all customers are entitled to support, and we're usually busy with the commercial side of things so ROS answers aren't the best way to get help.
I know this is documented somewhere more officially - I'll try to get a link.
Answer:
As gvdhoorn mentioned, I made a mistake here, either the gripper joints are there, or the other joints are there.
This is expected behavior. See this Wiki for the reference.
Luckily for us, the robot continuously publishes the current joint angles to the /joint_states topic. However, each message might only contain a subset of the joints. This works because multiple nodes can publish the state of the subset of joints they are responsible for. This is how /joint_states works on the real robot. However, in simulation, all of the joint states are published by Gazebo. Because no single message on the /joint_states topic will necessarily tell us about all the joints of the robot, we must listen to multiple messages over time and accumulate the full state of the robot. In practice, the joint states are published very quickly, so we will not have to wait long.
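The accumulation idea in the quoted passage can be sketched without any ROS dependency. The class below is purely illustrative (not Fetch's or ROS's actual API); it just merges each partial name/position message, like the echoes above, into one dictionary. In a real node the callback would be wired to a rospy.Subscriber on /joint_states.

```python
class JointStateAccumulator:
    """Accumulate partial joint-state messages into one full robot state."""

    def __init__(self):
        self.positions = {}  # joint name -> latest observed position

    def callback(self, names, positions):
        # Each message may cover only a subset of joints (e.g. just the
        # gripper), so update the stored state instead of replacing it.
        for name, pos in zip(names, positions):
            self.positions[name] = pos

acc = JointStateAccumulator()
# A gripper-only message followed by a body-joints message:
acc.callback(['l_gripper_finger_joint', 'r_gripper_finger_joint'], [0.00458, 0.00458])
acc.callback(['torso_lift_joint', 'head_pan_joint'], [0.00292, 0.00840])
print(len(acc.positions))  # 4: the union of both subsets
```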
Originally posted by DanielSeita with karma: 46 on 2018-05-16
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by gvdhoorn on 2018-05-17:
Note that the wiki you link to is not in any way 'official'. It's just a page on a wiki for a course. I would still recommend to ask Fetch about this.
we must listen to multiple messages over time and accumulate the full state of the robot
as I wrote, there are standard tools that can do ..
Comment by gvdhoorn on 2018-05-17:
.. this for you, no need to implement it in all consumers. But to be able to use those tools, things have to be configured in a certain way. That is where Fetch comes in, as they must have had a reason to configure your robot the way it is.
Comment by DanielSeita on 2018-05-17:
I have contacted Fetch support directly. Thank you for the suggestion.
Comment by gvdhoorn on 2018-05-17:
If/when you get a response, it would be great if you could update your answer.
Comment by Moriarty on 2019-08-14:
I believe the reason the gripper state publishes separately from the rest of the robots states is because the gripper is modular. Other grippers can be purchased from Schunk, Shadow, Robotiq and used on the Fetch, it has the same gripper mount as other commercially available robots.
I agree this needs to be documented somewhere more clearly.
Comment by DanielSeita on 2019-08-14:
Thanks @Moriarty! | {
"domain": "robotics.stackexchange",
"id": 30839,
"tags": "ros, rostopic, ros-indigo, joint-state"
} |
Why is the mapped universe shaped like an hourglass? | Question: I've watched a video from the American Museum of Natural History entitled The Known Universe.
The video shows a continuous animation zooming out from earth to the entire known universe. It claims to accurately display every star and galaxy mapped so far. At one point in this video [3:00 - 3:15 minutes] it displays this text:
The galaxies we have mapped so far.
The empty areas where we have yet to
map.
At this point, the shape of the "universe" is roughly an hourglass, with Earth at its centre.
I'm having trouble understanding what this represents and why the shape is an hourglass. Is this simply because we have chosen to map in those directions first? Is there a reason astronomers would choose to map in this pattern, or is this something more fundamental about the shape of the universe? Or is it to do with the speed of light reaching us from these distant galaxies?
Continuing on from the hourglass pattern, the cosmic microwave background radiation is represented as a sphere and labelled "Our cosmic horizon in space and time". This doesn't help clear anything up: if we can map the CMB in all directions, why have we only mapped galaxies in this hourglass shape?
Answer: First of all, the universe is most certainly not shaped like an hourglass. It simply looks that way because the gas and dust in the plane of our galaxy obstruct our view of anything outside the galaxy in those directions. So we can only see other galaxies (and similarly distant objects) by pointing telescopes at some angle to the galactic plane. That gives the "hourglass" shape: it's simply because those are the only directions we can see in. In reality, we have every reason to think galaxies are distributed more or less uniformly, once you look at a large enough scale.
The video description doesn't cite its sources, but I suspect that (some/most of) the information on the distant galaxies comes from the Sloan Digital Sky Survey, which is AFAIK the most comprehensive survey of objects in the universe outside our own cluster of galaxies. You might want to check out the information on their website if you're interested in this stuff.
And as long as I'm citing sources, the latest CMB data comes from the WMAP project. | {
"domain": "physics.stackexchange",
"id": 253,
"tags": "astrophysics, astronomy, universe"
} |
Do I need to encode the target variable for sklearn logistic regression | Question: I'm trying to get familiar with the sklearn library, and now I'm trying to implement logistic regression for a dataframe containing numerical and categorical values to predict a binary target variable.
While reading some documentation I found that logistic regression should be used to predict binary variables represented by 0 and 1.
My target variable takes the values "YES" and "NO". Should I encode it as 0 and 1 for the algorithm to work properly, or is there no difference?
Maybe I just didn't get the idea, but can someone confirm this for me?
Answer: The string labels work just fine, here is an example:
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
import numpy
X, y = load_iris(return_X_y=True)
y_string = numpy.array(['YES' if label == 1 else 'NO' for label in y])
clf = LogisticRegression(random_state=0, solver='lbfgs', multi_class='multinomial').fit(X, y_string)
y_pred = clf.predict(X[50:100, :])
print(y_pred)
Output:
['NO' 'NO' 'NO' 'YES' 'NO' 'YES' 'NO' 'YES' 'NO' 'NO' 'YES' 'NO' 'YES'
'NO' 'NO' 'NO' 'NO' 'YES' 'YES' 'YES' 'NO' 'NO' 'YES' 'YES' 'NO' 'NO'
'YES' 'NO' 'NO' 'YES' 'YES' 'YES' 'YES' 'YES' 'NO' 'NO' 'NO' 'YES' 'NO'
'YES' 'YES' 'NO' 'YES' 'YES' 'YES' 'NO' 'NO' 'NO' 'YES' 'NO']
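If you do prefer numeric 0/1 targets anyway, the mapping is easy to build by hand (sklearn's LabelEncoder does the same job); a plain-Python sketch:

```python
labels = ['YES', 'NO', 'NO', 'YES']

# Assign integers to the distinct labels in sorted order: 'NO' -> 0, 'YES' -> 1
mapping = {label: i for i, label in enumerate(sorted(set(labels)))}
encoded = [mapping[label] for label in labels]
print(encoded)  # [1, 0, 0, 1]
```

Either way, the fitted classifier keeps track of the label space it was trained on (exposed as clf.classes_ in sklearn), so predictions come back as the same kind of labels you passed in.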
You can replace y_string with y for the numerical example. | {
"domain": "datascience.stackexchange",
"id": 4787,
"tags": "scikit-learn, logistic-regression"
} |
A shorter way of Enabling/Disabling a text box from RadioButton selections | Question: Three radio buttons and a text box, grouped in one group box... When I select one particular radio button the text box gets enabled; for the other two radio buttons it should be disabled. Here is the code that comes to mind at first, but it looks ugly to me because I have copy-pasted the same AllowMissingData() call into the CheckedChanged event of each radio button. I was wondering if there is a better way of writing it:
private void RequiredRadioButton_CheckedChanged(object sender, EventArgs e)
{
AllowMissingData();
}
private void AllowBlankRadioButton_CheckedChanged(object sender, EventArgs e)
{
AllowMissingData();
}
private void SuppressRadioButton_CheckedChanged(object sender, EventArgs e)
{
AllowMissingData();
}
private void AllowMissingData()
{
if (AllowBlankRadioButton.Checked)
{
MissingDataValueTextBox.Enabled = true;
}
else
{
MissingDataValueTextBox.Text = System.String.Empty;
MissingDataValueTextBox.Enabled = false;
}
}
Answer: Did you realize that multiple RadioButtons can all point to the same event handler?
(simply use the Visual Studio Properties Editor in the designer to assign the same handler. Alternatively, you could apply += AnyRadioButton_CheckedChanged to each of the RadioButtons' CheckedChanged events.)
private void AnyRadioButton_CheckedChanged(object sender, EventArgs e)
{
if (allowBlankRadioButton.Checked)
{
missingDataValueTextBox.Enabled = true;
}
else
{
missingDataValueTextBox.Enabled = false;
missingDataValueTextBox.Text = string.Empty;
}
}
I actually find the if-else very readable, so I resisted the urge to incorporate Jeff's concise answer. In my opinion, the more elaborate version is clearer about what it does.
I renamed all your controls to follow the camelCase naming convention (because PascalCase should be reserved for type names, etc). | {
"domain": "codereview.stackexchange",
"id": 2366,
"tags": "c#"
} |
Force and torque of a rod | Question: Suppose a force is applied to one end of a rod, perpendicular to the rod, which is resting on a frictionless surface (for example, the frictionless surface of a table). It is supposed to rotate. Why will that rod rotate?
To be honest I really do not understand why it rotates in such cases. It would be a great help if anyone could provide a link or explain this.
Answer: It will rotate as well as translate because the force is not applied to the center of mass and there are no other forces (except gravity, which is downward) acting on the rod. If the force were applied to the center of mass of the rod (presumed to be at the center of the rod if its mass is uniformly distributed along its length), then the rod would only translate and not rotate.
As you explained that for the case stated in the question the rod will
translate and rotate, it would be a little bit more helpful for me if
you could explain why does it rotate.
See the diagram below.
For the purpose of determining rotation, the center of mass of the rod can be taken as the midpoint of its length (assuming its mass is uniformly distributed along its length).
You can see there is a net torque of $\tau=FL/2$ about the center of mass which will initiate clockwise rotation of the rod. The force $F$, which is unopposed, will also result in translation of the COM. If the line of action of the force were through the COM, there would be no net torque on the rod and its motion would be pure translation.
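To make the rotation explicit with numbers (a sketch assuming a uniform rod of mass $m$ and length $L$, with the force applied at one end as above):
$$
a_{\rm cm}=\frac{F}{m},\qquad \alpha=\frac{\tau}{I_{\rm cm}}=\frac{FL/2}{mL^{2}/12}=\frac{6F}{mL}.
$$
Both are nonzero from the instant the force acts, so the rod translates and rotates simultaneously.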
Hope this helps. | {
"domain": "physics.stackexchange",
"id": 77538,
"tags": "forces, torque"
} |
Rindler-Fulling Quantization - Rindler mode expansion of $\phi$: why are we ignoring the Past and Future Wedges? | Question: I am following along Chapter 2 of Takagi's Vacuum noise and stress induced by uniform accelerator. I am at the point of performing the Rindler-Fulling Quantization of a real scalar field, where you expand $\phi$ in terms of the Rindler modes in the left and right wedges - I am puzzled as to why you completely ignore the contributions to the field in the future and past wedges. Let's specify to dimension 4 to be concrete.
Minkowski space is partitioned into four regions:
$$
\text{Right Rindler Wedge:}\ \ \ \mathcal{R}_{+} = \left\{ \ (x^0, x^1, x^2, x^3) \in \mathbb{R}^{4} \ | \ x^1 > |x^0| \right\} \\
\text{Left Rindler Wedge:}\ \ \ \mathcal{R}_{-} = \left\{ \ (x^0, x^1, x^2, x^3) \in \mathbb{R}^{4} \ | \ x^1 < - |x^0| \right\} \\
\text{Future Wedge:}\ \ \ \mathcal{F} = \left\{ \ (x^0, x^1, x^2, x^3) \in \mathbb{R}^{4} \ | \ x^0 \geq |x^1| \right\} \\
\text{Past Wedge:}\ \ \ \mathcal{P} = \left\{ \ (x^0, x^1, x^2, x^3) \in \mathbb{R}^{4} \ | \ x^0 \leq -|x^1| \right\}
$$
Recall that Rindler coordinates $(\eta, \xi, x^2, x^3)$ are related to rectangular Minkowski coordinates $(x^0, x^1, x^2, x^3)$ through the transformation:
$$
x^0 = \xi \sinh(\eta)\ , \ \ \ \ \ \ x^1 = \xi \cosh(\eta)
$$
The coordinates $(\eta,\xi,x^2,x^3)$ only cover $\mathcal{R}_{+}$ and $\mathcal{R}_{-}$.
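For reference (a standard computation from the transformation above, assuming signature $(+,-,-,-)$), the Minkowski line element in these coordinates reads
$$
ds^2 = \xi^2\, d\eta^2 - d\xi^2 - (dx^2)^2 - (dx^3)^2,
$$
which is independent of $\eta$: inside the wedges the boost generator $\partial_\eta$ is a timelike Killing field, and this is what makes "positive frequency with respect to $\eta$" meaningful there.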
One solves the Klein-Gordon equation $(\Box_{x} - m^2)r_{\mathbf{k}}(x) =0 $ for Rindler mode functions $r_{\mathbf{k}}$ (where $\mathbf{k} = (\Omega,k_2,k_3) \in (0,\infty) \times \mathbb{R} \times \mathbb{R}$ are the mode parameters), with the constraint that they are positive-frequency with respect to Rindler time $\eta$, i.e. $\frac{\partial}{\partial \eta} r_{\mathbf{k}} = - i \omega r_{\mathbf{k}}$ for some $\omega > 0$ (taking $r_{\mathbf{k}}^{\ast}$ gives you negative-frequency modes).
One finds that you need a separate solution in each of the Rindler wedges: So you have positive-frequency modes $r^{+}_{\mathbf{k}}$ in $\mathcal{R}_{+}$, and positive-frequency modes $r^{-}_{\mathbf{k}}$ in $\mathcal{R}_{-}$. A little more explicitly you find:
$$
r^{+}_{\mathbf{k}}(\eta, \xi, x^2, x^3) =\begin{cases} \ f^{+}_{\mathbf{k}}(\xi)\ e^{ - i \Omega \eta + i k_2 x^2 + i k_3 x^3 } \ \ \ \ \ , \ x \in \mathcal{R}_{+} \\
\ 0 \ \ \ \ \ , \ x \in \mathcal{R}_{-} \end{cases} \\
r^{-}_{\mathbf{k}}(\eta, \xi, x^2, x^3) =\begin{cases} \ 0 \ \ \ \ \ , \ x \in \mathcal{R}_{+} \\
\ f^{-}_{\mathbf{k}}(\xi) \ e^{ + i \Omega \eta + i k_2 x^2 + i k_3 x^3 } \ \ \ \ \ , \ x \in \mathcal{R}_{-} \end{cases}
$$
Where $f^{\pm}_{\mathbf{k}}(\xi)$ are terrible functions I don't have the bravery to type out here. For the negative-frequency modes you just take the complex conjugates of the above. The combination of all of these modes $\{ r^{+}_{\mathbf{k}}, r^{-}_{\mathbf{k}} , r^{+\ast}_{\mathbf{k}}, r^{-\ast}_{\mathbf{k}} \}$ are complete over $\mathcal{R}_{+} \cup \mathcal{R}_{-}$. So then Takagi expands the field $\phi$ in terms of this portion of Minkowski space:
$$
\phi(x) = \int d^3\mathbf{k}\ \left[ r_{\mathbf{k}}^{+}(x) b_{\mathbf{k}}^{(+)} + r_{\mathbf{k}}^{+\ast}(x) b_{\mathbf{k}}^{(+)\dagger} + r_{\mathbf{k}}^{-}(x) b_{\mathbf{k}}^{(-)} + r_{\mathbf{k}}^{-\ast}(x) b_{\mathbf{k}}^{(-)\dagger} \right]
$$
My Question: Why can we expand the field over just this subset $\mathcal{R}_{+}\cup \mathcal{R}_{-}$ of Minkowski space? I would think that you need to expand the field over all points in Minkowski space. I am not sure how to phrase this properly, but shouldn't there be contributions to the field $\phi$ coming from $\mathcal{F} \cup \mathcal{P}$?
At least, this is what is normally done when you quantize $\phi$ in terms of rectangular Minkowski time, i.e. in terms of plane waves $\propto e^{\mp i \sqrt{\mathbf{p}^2+m^2} x^0 \pm i \mathbf{p} \cdot \mathbf{x} }$. There you'd have a valid expansion of the field $\phi(x)$ for all points in Minkowski space, including $x \in \mathcal{F} \cup \mathcal{P}$.
Answer: There are two things you should observe. First, the union of the two open wedges is a (non-connected) globally hyperbolic spacetime in its own right, so quantization is possible without problems. Secondly, the union of that pair of wedges is a static spacetime with respect to the boost Killing vector field, which is timelike exactly inside these regions (it is lightlike on their boundary, vanishes at the bifurcation surface, and is spacelike in the remaining past and future wedges).
The quantization procedure in the right and left wedges relies upon the standard construction of the static vacuum with respect to that notion of time. That static vacuum is a ground state, being the zero eigenvector of the positive Hamiltonian referred to the boost time (with opposite directions in the two wedges). This construction is impossible in the rest of spacetime. Indeed, the Fulling-Unruh vacuum and its Fock space are only defined for observables inside the said wedges and cannot be extended to the whole Minkowski spacetime (it has too bad singularities on the Killing horizon). So, in a sense, you are right that some contribution is missed from the remaining regions; in fact, this state cannot be extended to those regions, as I said.
Conversely, the Minkowski vacuum is everywhere defined in Minkowski spacetime and Poincaré invariant. It is a ground state (the zero eigenvector of the corresponding positive Hamiltonian) with respect to every notion of Minkowski time. As you probably know, the Minkowski vacuum restricted to the algebra of field observables localized in the left and right wedges appears as a thermal state with respect to the boost notion of time (a KMS state), in view of the so-called Bisognano-Wichmann (Fulling-Sewell) theorem applied to the simplest case of non-interacting fields. However, this restriction cannot be represented as a state (density matrix) in the Fock space constructed upon the Fulling vacuum, and the algebraic notion of state is necessary...
Strictly speaking one should say that Fulling-Unruh vacuum does not exist. What exists is just the thermal state, apparently referring to that notion of vacuum state, arising when restricting Minkowski vacuum. | {
"domain": "physics.stackexchange",
"id": 50824,
"tags": "general-relativity, metric-tensor, causality, qft-in-curved-spacetime, unruh-effect"
} |
Proof of quantum correlation functions | Question: I'm reading through David Tong's lecture notes on QFT.
On pages 76-77, he gives a proof about correlation functions. See the below link:
QFT notes by Tong
I'm following the proof steps to obtain equation (3.95). But several intermediate steps of the proof are not clear.
First question
Why can we write
$$T\phi_{1I} \dots \phi_{nI}S=U_{I}(+\infty, t_{1})\phi_{1I}U(t_{1},t_{2})\phi_{2I}\dots \phi_{nI}U_{I}(t_{n},-\infty)\ \ ?$$
I mean, after dropping the $T$, shouldn't we have
$$=\phi_{1I}\phi_{2I}\dots \phi_{nI}S$$$$=\phi_{1I}\phi_{2I}\dots \phi_{nI}U_{I}(+\infty,-\infty)\ \ ?$$
Does $T$ apply to the $\phi_{1I}\dots\phi_{nI}$ only, or to the whole product $\phi_{1I}\dots \phi_{nI}S$, and is
$$U_{I}(+\infty,-\infty)=U_{I}(+\infty, t_{1})U_{I}(t_{1},t_{2})\dots U_{I}(t_{n},-\infty)\ \ ?$$
Second question
How do we convert each of the $\phi_{I}$ into $\phi_{H}$ using
$$U_{I}(t_{k},t_{k+1})=T\exp\left(-i\int_{t_{k}}^{t_{k+1}}H_{I}\right)$$
to arrive at
$$T\phi_{1I} \dots \phi_{nI}S=U_{I}(+\infty, t_{0})\phi_{1H}\dots \phi_{nH}U_{I}(t_{0},-\infty)\ \ ?$$
Third question
Why do we have
$$U_{I}(t, -\infty)=U(t,-\infty)\ \ ?$$
Answer: First question
Using that $S=U_I(+\infty,-\infty)=U_I(+\infty, t_1)U_I(t_1,t_2)\cdots U_I(t_n,-\infty)$, as you state, you have that
\begin{align}
T\phi_{1I}\phi_{2I}\cdots\phi_{nI}S &=
T\phi_{1I}\phi_{2I}\cdots\phi_{nI}
U_I(+\infty, t_1)U_I(t_1,t_2)\cdots U_I(t_n,-\infty) \\
& =
U_I(+\infty, t_1)\phi_{1I}U_I(t_1,t_2)\phi_{2I}\cdots
\phi_{nI}U_I(t_n,-\infty),
\end{align}
where the second equality is given by the definition of time ordering.
Second question
Choosing the operators in the interaction picture and the Heisenberg picture to be equal at some time $t_0$, we have that $\phi_{kI}=U_I(t_0,t_k)^{-1}\phi_{kH}U_I(t_0,t_k)$. Substituting into the result for the previous question:
\begin{align}
T\phi_{1I}\phi_{2I}\cdots\phi_{nI}S =&
U_I(+\infty, t_1)U_I(t_0,t_1)^{-1}\phi_{1H}U_I(t_0,t_1)
U_I(t_1,t_2) U_I(t_0,t_2)^{-1}\\
& \phi_{2H}U_I(t_0,t_2)
\cdots
U_I(t_0,t_n)^{-1}\phi_{nH}U_I(t_0,t_n)U_I(t_n,-\infty) \\
=& U_I(+\infty,t_0)\phi_{1H}\phi_{2H}\cdots\phi_{nH}U_I(t_0,-\infty)
\end{align}
Third question
Notice that Tong is not saying that $U_I(t,-\infty)=U(t,-\infty)$, but
that for any $\left|\Psi\right>$, we have $\left<\Psi\right| U_I(t,-\infty)\left|0\right>=\left<\Psi\right|U(t,-\infty)\left|0\right>$. This statement is equivalent to
\begin{equation}
U_I(t,-\infty)\left|0\right>=U(t,-\infty)\left|0\right>
\end{equation}
By definition $\left|0\right>$ is an eigenvector of $H_0$ with eigenvalue $0$, so
\begin{equation}
H_I\left|0\right>=H_Ie^{iH_0t}\left|0\right>=H_I\left|0\right>_I=
i\frac{d}{dt}\left|0\right>_I=
i\frac{d}{dt}\left(e^{iH_0t}\left|0\right>\right)=
i\frac{d}{dt}\left|0\right>=H\left|0\right>.
\end{equation}
Thus, the interaction picture time evolution $U_I(t,-\infty)$ (obtained by exponentiating the integral of $H_I$) and the Schrödinger picture time evolution $U(t,-\infty)$ (the exponential of the integral of $H$) are the same when applied to $\left|0\right>$. | {
"domain": "physics.stackexchange",
"id": 36109,
"tags": "homework-and-exercises, quantum-field-theory, operators, correlation-functions, time-evolution"
} |
Is it useful to eliminate the less relevant filters from a trained CNN? | Question: Imagine I have a tensorflow CNN model with good accuracy but maybe too many filters:
Is there a way to determine which filters have more impact on the output? I think it should be possible. At least, if a filter A has a 0 that only multiplies the output of a filter B, then filter B is not related to filter A. In particular, I'm thinking of 2D data where one dimension is time-related and the other feature-related (like one-hot chars).
Is there a way to eliminate the less relevant filters from a trained model, and leave the rest of the model intact?
Is it useful, or are there better methods?
Answer: NOTE: All the observations and results are from the paper The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks.
To answer your questions one by one:
Yes, there are ways to determine which filters have more impact on the output. It's a very naive way but it works very well in practice. Filters with small weights impact the output less (according to empirical evidence), which basically means neurons whose weights lie in the switching region, i.e. ~$0$ for ReLU and ~$-1$ to $1$ (say), have less impact on the final output.
Yes, just eliminating these lower-weight filters removes the unnecessary noise and indecisiveness introduced by them and surprisingly makes the model perform better (observed empirically).
The concept is a relatively old paradigm, but it has been given a new twist by the simplicity of the weight-elimination method in the aforementioned paper, which won it the best paper award at ICLR 2019.
TL;DR: Eliminating unnecessary weights makes the model perform better than the original model.
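The small-weight criterion from point 1 can be sketched framework-agnostically. Here a "filter" is just a flat list of its weights and ranking uses the L1 norm, one common magnitude measure; the paper's full procedure (rewinding to initial weights and retraining) is not shown:

```python
def rank_filters_by_l1(filters):
    """Return filter indices ordered from largest to smallest L1 norm."""
    norms = [sum(abs(w) for w in f) for f in filters]
    return sorted(range(len(filters)), key=lambda i: norms[i], reverse=True)

def prune_filters(filters, keep):
    """Keep only the `keep` filters with the largest L1 norms,
    preserving their original order."""
    keep_idx = sorted(rank_filters_by_l1(filters)[:keep])
    return [filters[i] for i in keep_idx]

filters = [[0.01, -0.02], [1.0, 2.0], [0.5, -0.5]]
print(prune_filters(filters, keep=2))  # the near-zero first filter is dropped
```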
Also here is the TensorFlow code. | {
"domain": "ai.stackexchange",
"id": 1356,
"tags": "deep-learning, convolutional-neural-networks, tensorflow, convolution"
} |
SBC Environment Setup | Question:
Hi.
I ran into trouble when I followed the instructions below.
link text
16.2.7.1 Domain ID Allocation
I made a mistake when typing commands.
I wrote echo 'source ~/turtlebot3_ws/install/setp.bash' >> ~/.bashrc by mistake. (setup>>setp)
I wrote correct commands and rebooted, but
home/ubuntu/turtlebot3_ws/install/setp.bash: No such file or directory appears.
Do you know how to undo what the incorrect command wrote?
Originally posted by shunta ito on ROS Answers with karma: 5 on 2020-02-18
Post score: 0
Answer:
What the echo 'source ~/turtlebot3_ws/install/setp.bash' >> ~/.bashrc command does is writing source ~/turtlebot3_ws/install/setp.bash at the end of the ~/.bashrc file.
To correct it you need to edit (with a text editor, e.g. nano) the ~/.bashrc file and modify the line from setp.bash to setup.bash
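If you prefer a one-liner, sed can make the same edit non-interactively (shown here on a throwaway copy so nothing important is touched; run it against ~/.bashrc only after backing that file up):

```shell
# Reproduce the bad line in a demo file, then fix setp.bash -> setup.bash in place
printf 'source ~/turtlebot3_ws/install/setp.bash\n' > /tmp/bashrc_demo
sed -i 's|install/setp\.bash|install/setup.bash|' /tmp/bashrc_demo
cat /tmp/bashrc_demo
```

(This uses GNU sed's -i in-place flag, as found on the Ubuntu images the TurtleBot3 docs target.)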
Originally posted by marguedas with karma: 3606 on 2020-02-18
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by shunta ito on 2020-02-18:
Thank you!
The problem was solved.
Comment by marguedas on 2020-02-18:
As this solved the issue, can you please accept the answer by clicking the checkmark on the left so that this doesn't show up in the list of unanswered questions | {
"domain": "robotics.stackexchange",
"id": 34456,
"tags": "ros2, turtlebot3"
} |
Output Tildes and Pluses | Question: Inspired by this question, I wrote a program in Whitespace to output the following text:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~+~~+~~+~~+~~+~~+~~+~~+~~+~~+~~+~~+~~+~~+~~+~~+~
+~++~++~++~++~++~++~++~++~++~++~++~++~++~++~++~+
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
I left comments in the source code to state what the program does, if I understand the instructions correctly, but I will state the essential parts here as well:
IMP Meaning
[Space] Stack Manipulation
[Tab][Space] Arithmetic
[Tab][Tab] Heap access
[LF] Flow Control
[Tab][LF] I/O
The program starts with a space, signified with S, to begin a stack manipulation statement. After the space, I enter the value of a tilde (126) in binary, with spaces representing 0 and tabs representing 1. Then, I enter a linefeed (L) to signal the end of the binary number.
Next comes a TL to begin an I/O command, followed by SS to output the value at the top of the stack.
This continues on, with the three characters '~', '+', and L (newline). Is there any way to simplify this either externally or internally? You can run this program on Ideone.
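As a sanity check on the encodings, the push payloads can be generated mechanically; this small Python helper (mine, not part of the program) uses the same S/T/L letters, emitting a sign bit and then minimal-width binary (the program pads some values with leading zeros, which Whitespace also accepts):

```python
def ws_number(n):
    """Encode a non-negative integer as a Whitespace push payload:
    sign bit (S = positive), binary digits with S=0 / T=1, then L."""
    bits = format(n, 'b')
    return 'S' + ''.join('T' if b == '1' else 'S' for b in bits) + 'L'

print(ws_number(126))  # tilde:   STTTTTTSL
print(ws_number(43))   # plus:    STSTSTTL
print(ws_number(10))   # newline: STSTSL
```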
S S S T T T T T T S L---BEGINS-WITH-SPACE-TO-SIGNIFY-STACK-MANIPULATION-OTHER-VALUES-ARE-BINARY-CODE-FOR-126
T L---BEGIN-OUTPUT-COMMAND
S S S S S T T T T T T S L---BELOW-HERE-LINES-NEED-SS-AT-BEGINNING-TO-OUTPUT-VALUE-AT-TOP-OF-STACK
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L---END-OF-FIRST-LINE
S S S S S T S T S L---NEW-LINE(BINARY-CODE-10)
T L
S S S S S T T T T T T S L
T L
S S S S S S T S T S T T L
T L
S S S S S T T T T T T S L---BINARY-CODE-43-FOR-PLUS
T L
S S S S S T T T T T T S L
T L
S S S S S S T S T S T T L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S S T S T S T T L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S S T S T S T T L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S S T S T S T T L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S S T S T S T T L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S S T S T S T T L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S S T S T S T T L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S S T S T S T T L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S S T S T S T T L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S S T S T S T T L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S S T S T S T T L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S S T S T S T T L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S S T S T S T T L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S S T S T S T T L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S S T S T S T T L
T L
S S S S S T T T T T T S L
T L---END-OF-SECOND-LINE
S S S S S T S T S L
T L
S S S S S S T S T S T T L
T L
S S S S S T T T T T T S L
T L
S S S S S S T S T S T T L
T L
S S S S S S T S T S T T L
T L
S S S S S T T T T T T S L
T L
S S S S S S T S T S T T L
T L
S S S S S S T S T S T T L
T L
S S S S S T T T T T T S L
T L
S S S S S S T S T S T T L
T L
S S S S S S T S T S T T L
T L
S S S S S T T T T T T S L
T L
S S S S S S T S T S T T L
T L
S S S S S S T S T S T T L
T L
S S S S S T T T T T T S L
T L
S S S S S S T S T S T T L
T L
S S S S S S T S T S T T L
T L
S S S S S T T T T T T S L
T L
S S S S S S T S T S T T L
T L
S S S S S S T S T S T T L
T L
S S S S S T T T T T T S L
T L
S S S S S S T S T S T T L
T L
S S S S S S T S T S T T L
T L
S S S S S T T T T T T S L
T L
S S S S S S T S T S T T L
T L
S S S S S S T S T S T T L
T L
S S S S S T T T T T T S L
T L
S S S S S S T S T S T T L
T L
S S S S S S T S T S T T L
T L
S S S S S T T T T T T S L
T L
S S S S S S T S T S T T L
T L
S S S S S S T S T S T T L
T L
S S S S S T T T T T T S L
T L
S S S S S S T S T S T T L
T L
S S S S S S T S T S T T L
T L
S S S S S T T T T T T S L
T L
S S S S S S T S T S T T L
T L
S S S S S S T S T S T T L
T L
S S S S S T T T T T T S L
T L
S S S S S S T S T S T T L
T L
S S S S S S T S T S T T L
T L
S S S S S T T T T T T S L
T L
S S S S S S T S T S T T L
T L
S S S S S S T S T S T T L
T L
S S S S S T T T T T T S L
T L
S S S S S S T S T S T T L
T L
S S S S S S T S T S T T L
T L
S S S S S T T T T T T S L
T L
S S S S S S T S T S T T L
T L---END-OF-THIRD-LINE
S S S S S T S T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L
S S S S S T T T T T T S L
T L---END-OF-FOURTH-LINE
S S S S S T S T S L
T L---EXTRA-NEW-LINE-TO-PUSH-END-OF-PROGRAM-OUTPUT-DOWN
S S L---TRIPLE-LINEFEED-TO-END-PROGRAM
L
L
Answer: I know nothing of whitespace, but your code strikes me as something like this:
...
Console.Write("+");
Console.Write("~");
Console.Write("+");
Console.Write("+");
Console.Write("~");
Console.Write("+");
Console.Write("+");
Console.Write("~");
Console.Write("+");
Console.Write("+");
...
You have a heap, and you're not using it.
Heap access commands look at the stack to find the address of items to be stored or retrieved. To store an item, push the address then the value and run the store command. To retrieve an item, push the address and run the retrieve command, which will place the value stored in the location at the top of the stack.
http://compsoc.dur.ac.uk/whitespace/tutorial.html
I'm not going to test this, but the idea would be something like this:
Push 126 ("~") to heap at address A // tilde
Push 43 ("+") to heap at address B // plus
Now, SpaceSpace marks a label, TabTab would be a modulo operation, and TabSpace jumps to a label if the value at the top of the stack is zero: you can implement a form of conditional goto loop. And you can call a subroutine with SpaceTab, and return to caller with TabLF.
Make a subroutine that pushes the value at address A ("~") to the stack, and outputs it before discarding (LFLF) the value at the top of the stack, and then make another subroutine that does the same thing for address B ("+"). Or better, make one that pops an address from the stack, reads the heap value at that address and pushes it to the stack, outputs it and removes it from the stack, then returns to caller. For the purpose of this post let's call this subroutine PrintChar.
Now, break down the patterns:
~~~ // pattern A
~+~ // pattern B
+~+ // pattern C
(Note that the repeating unit of each output line is 3 characters long, repeated 16 times to make the 48 characters per line; a 4-character unit repeated 12 times cannot reproduce lines 2 and 3.)
Make a subroutine that pushes address A ("~") onto the stack from the heap and calls PrintChar, 3 times. Or better, store 3 on the heap at address C and implement a goto loop that exits after 3 iterations, and inside that loop you push address A onto the stack from the heap and call PrintChar, once; let's call that subroutine PatternA. Then, implement similar subroutines for PatternB and PatternC.
The rest is simple: make a subroutine that pops the address of a pattern subroutine from the stack, makes 16 iterations of calling that pattern subroutine and then outputs a newline - let's call that PrintLine.
The "main" procedure simply pushes a pattern to the stack and calls PrintLine:
Push the address of PatternA to the stack
Call PrintLine
Push the address of PatternB to the stack
Call PrintLine
Push the address of PatternC to the stack
Call PrintLine
Push the address of PatternA to the stack
Call PrintLine
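If it helps to see the overall control flow, here is a hedged C++ sketch of the same structure (not real Whitespace; print_char, pattern_a/b/c and render model the hypothetical PrintChar, PatternA/B/C and "main" subroutines described above, and the repeating unit of each output line is 3 characters, repeated 16 times):

```cpp
#include <string>

std::string out;                       // stands in for the program's output stream

void print_char(char c) { out += c; }  // models the PrintChar subroutine

void pattern_a() { out += "~~~"; }     // one repetition of line 1's unit
void pattern_b() { out += "~+~"; }     // one repetition of line 2's unit
void pattern_c() { out += "+~+"; }     // one repetition of line 3's unit

// PrintLine: run one pattern subroutine 16 times (16 * 3 = 48 chars), then a newline.
void print_line(void (*pattern)()) {
    for (int i = 0; i < 16; ++i) pattern();
    print_char('\n');
}

// The "main" procedure: push a pattern, call PrintLine, four times over.
std::string render() {
    out.clear();
    print_line(pattern_a);
    print_line(pattern_b);
    print_line(pattern_c);
    print_line(pattern_a);
    return out;
}
```

render() reproduces the four target lines; translating this call structure into Whitespace labels and subroutine calls is the actual exercise.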
And don't forget to comment along the way - I find your commenting not very useful here. | {
"domain": "codereview.stackexchange",
"id": 17161,
"tags": "ascii-art, whitespace-lang"
} |
Can P gain too high cause roll of death? | Question: I've been experimenting with PIDs on my quadcopter, and I've noticed that pitch P = 65 causes double front and back flips, which look like a roll of death (or in this case a flip of death), and lowering pitch P to 60 solves the issue.
Why is that?
Can I mitigate this with filters or D gain?
I've also noticed that certain values of P and D mix well and make the quad smooth but if I bump up the I gain to twice as high as my P gain then my quadcopter bounces back.
Is this reasonable or there's a problem somewhere else?
Answer: Yes, the proportional gain can cause overreaction. Since it is only concerned with instantaneous correction, a larger P gain can make the controller overcompensate. Basically, I like to think of it as: at any instant, the greater the difference between the actual position and the setpoint, the harder you accelerate to get there as quickly as you can.
This can be corrected by using a derivative gain. The D gain is only concerned with compensating for the rate of change. This makes it good at reducing the overshoot of your system (preventing death rolls).
What you are noticing in the last part of your question is a natural ratio of the gains. A good method to find those well performing ratios is to use the Ziegler–Nichols tuning method.
Also, if you are interested in control systems I would strongly suggest looking at Brian Douglas' YouTube series on control systems. | {
"domain": "robotics.stackexchange",
"id": 1882,
"tags": "quadcopter, pid"
} |
Geometry inside the event horizon | Question: I'm trying to understand intuitively the geometry as it would look to an observer entering the event horizon of a Schwarzschild black hole. I would appreciate any insights on, or corrections to, the following.
Immediately after you enter the event horizon, if you look back and try to reach for the horizon again, it will seem to be expanding away from you faster than the speed of light. Near this region, the apparent shape of the horizon is a sphere expanding away, and we are inside the sphere.
Near the singularity, we really don't know what happens. I've heard that spaghettification is not a necessary occurrence, since the diagonal components of the metric field are shrinking as the curvature grows, so it could very well be the case that an infinite-length hyper-cylinder $S^3 \times R^+$ of constant physical radius is being conformally mapped to the $S^3 - \{0\}$ region around the singularity, or that in general a region around the singularity can be mapped to anything at the other end. This is basically because the degrees of freedom of curvature and stress-energy at our end of the spacetime cannot really predict what sort of topology endpoint will connect to the matter at the other end. Since the metric components tend to zero at the singularity, this argument sounds pretty interesting, since it would seem to imply that observers will "shrink" relative to Kruskal coordinates, because the local physics is always that physical observers stay fixed relative to their local metric, since the metric is covariantly constant!
However, I'm no expert on how to describe the asymptotic physics in the neighbourhood of the Schwarzschild singularity (which is why I'm asking on this site, after all!). Question: does this argument hold any water?
Answer: The geometry in your picture is too classical. Once you pass the event horizon, it doesn't look like a sphere surrounding you anymore, and you don't see it as a special surface anyway. If you look back along a radial direction, you will see the same horizon point ahead of you (in the past) and behind you (also in the past), at different affine parameter along the horizon (this is clear in a Penrose diagram). But you won't see the horizon as a sphere.
When you approach a Schwarzschild singularity, there is no way to avoid getting compressed to oblivion, because all the volume you carry is compressed to a tiny volume near r=0. The areal radius is r, so the area of a sphere is $4\pi r^2$ always, and r is a time coordinate inside the horizon, and you are necessarily drawn to r=0, which is the singularity. You can't save yourself by conformal mapping, because the actual physical distances are shrunk--- even if you were to conformally get shrunk to zero size, your matter is not conformally invariant, the atoms set a scale.
The dr component of the metric doesn't vanish at the singularity; its limiting value is ${1\over 2m}$. This means that you are losing a certain unit of r per unit time as you fall in, which means your radial volume is shrinking to zero quadratically with time. The time part of the metric (which is spatial now) goes to ${2m\over r}$, and so you gain a linearly diverging space in exchange, but the quadratic compression doesn't make up in volume for the quadratic sphere shrinking. Further, this is not a conformal transformation in any reasonable sense, it's spaghettification.
The real caveat about black holes is that this whole story assumes the black hole is neutral and nonspinning. For spinning or charged black holes, the interior structure is altered in radical ways, and there is nothing classically wrong with going in and coming out, except for some dubious arguments about what happens when you hit the Cauchy horizon in the interior. | {
"domain": "physics.stackexchange",
"id": 4027,
"tags": "general-relativity, black-holes, topology, singularities, event-horizon"
} |
Would Foucault's pendulum work on the moon? | Question: I am taking a course in introductory general relativity and came up on this question, which a google search didn't answer.
The rotation of the Earth can be measured using a Foucault pendulum.
The moon also rotates to always keep the same side facing the Earth, and it seems therefore that we should be able to measure this rotation using a pendulum on the moon. However, I have just learnt that a particle falling freely in a gravitational field is really following a geodesic curve through space-time. This got me wondering whether the apparent rotation of the moon is an artifact of the curvature of space-time, or if it is a real physical effect. I think my question can be phrased as in the title - would a Foucault pendulum rotate on the moon?
Another way of phrasing the question which may be clearer: If a (non-rotating) satellite were held still above the surface of the Earth, and then given a sudden sideways kick, putting it into orbit: would its orientation remain fixed relative to the stars, or would it rotate to always face the same side towards the Earth?
Simplifying assumptions: The Earth is a perfect gravitational point source, and the orbits are perfectly circular.
Answer: I think I understand your question. It's not so much about Foucault's Pendulum as it is about frames of reference. I believe this is what you are thinking: If you ignore the rotation of the moon and just think of it as an orbiting point, then an obvious way to use it to generate a coordinate system that follows it is to have one axis tangent to the orbit, and the other two always perpendicular to that one. It is natural to think that, if the moon is just following a geodesic, then that coordinate system that has one axis tangent to the orbit is the natural coordinate system associated to that geodesic, and in that coordinate system the moon isn't rotating.
But it doesn't work that way. I think a roughly correct way to get the coordinate system that follows the moon is to imagine replacing the moon with a family of small particles. The ones nearer the earth orbit faster, and the ones farther away orbit slower. But the closer they get to the particle at the center of where the moon was, the closer they get to moving at the same speed. The coordinate system comes from taking the limit, and it will not have an axis tangent to the orbit, nor will it keep an axis pointing towards the earth. It will in fact maintain the same orientation as it orbits.
Really, the best way to think of the geodesic path is to drop one of the space dimensions and model the earth as a disk for each slice of spacetime, combining them to make a cylinder running from the past to the future. Then the geodesic path of the moon is a spiral around that cylinder, and the full coordinate system has a $t$ axis, an $x$ axis, and a $y$ axis, plus a $z$ axis that we are ignoring in this mental model with 2D of space and 1D of time. In general, when you project from spacetime down to space, the axes of the coordinate system won't turn to follow the direction of the track. They'll keep the same orientation like one of those gyroscope balls some people used to have in their cars. | {
"domain": "physics.stackexchange",
"id": 76491,
"tags": "newtonian-mechanics, rotational-dynamics, reference-frames, moon, coriolis-effect"
} |
Isn't the aether existent? | Question: Before you say I'm wrong, consider this: Einstein is supposedly the first person to completely get rid of the various aether models that were proposed. But didn't Einstein actually prove them right, in the sense that things move through a medium called time (i.e. Minkowski space)? After all, we can measure it just like any other physical thing. It might not be considered spatial, but that seems pretty arbitrary because we are defining it from our perspective. Am I just splitting hairs? Is this not true?
I'm obviously not a physicist, but I'm reading Feynman's Lectures to get a general idea of things (I'm not at the special or general relativity parts yet); I'm just trying to get a bit of intuition.
Answer: It's important to state exactly what one means by "aether" when saying that aether theories are discredited.
Specifically, the notion of a medium relative to which one can, in principle, detect one's motion is what has been ruled out by experiment. Media such as water for acoustic waves fall into this category: the acoustic wave equation changes its form when one is in uniform motion relative to the medium. In contrast, Maxwell's equations do not change their form in this way.
All experiments so far support Galileo's relativity principle - that there is no experiment that one could do making measurements within one's own laboratory that could detect the uniform motion of the laboratory relative to another frame. To understand more deeply exactly what Galileo means here, see the allegory of Salviati's Ship. Most of the 19th century notions of an aether tell against this principle because, as for the motion relative to the water, they would give us an easy way to tell whether we were moving relative to the medium.
However, an aether fulfilling Galileo's principle is not ruled out experimentally. Indeed there was one aether theory, namely Lorentz Aether Theory which is identical in its experimental predictions to special relativity. User ACuriousMind summarizes this theory, and why SR is preferred, in his answer here.
General Relativity brings home the notion that "empty space" is not a void vividly and in a very in-your-face way: in GR "empty space" has definite properties[1] that differ from place to place: for example: its geometry - and outcomes of experiments to detect this geometry - can vary with a nonconstant curvature tensor. Modern quantum field theory goes further: empty space is a real, "material" entity, and modern physics conceives of it as being made of quantum fields in their ground state: modern physics has no need for an extraneous and mind bending notion of "empty space" further to the quantum fields that make up reality.
So, although one must be careful with the word aether, ruling out as in conflict with experiment anything that violates Galileo's principle or fails to yield Lorentz-invariant predictions, I personally kind of like the word as a metaphor for empty space to emphasize the 20th century achievements of general relativity and quantum field theory. We can describe how empty space takes on different geometries through the Einstein field equations. The quest for quantum gravity can be thought of as seeking to understand the mechanisms and the machinery of empty space that lead to the EFE description: quantum gravity can be thought of as the quest to find out how the Lorentz-invariant "aether" works.
[1]. It's important to note that even in Newtonian physics "empty space" has definite properties so that, from a philosophical standpoint, the distinction between void and empty space is still very real here, but subtler. | {
"domain": "physics.stackexchange",
"id": 24768,
"tags": "special-relativity, spacetime, time, inertial-frames, aether"
} |
Have the Rowan University "hydrino" findings been replicated elsewhere? | Question: In 2009, Rowan University released a paper claiming to replicate Blacklight Power's results on energy generation using hydrino states of the hydrogen atom. The paper (link now dead) appears to describe the procedure in every detail as far as my untrained eye can tell.
The press release 11/29/10 states:
Cranbury, NJ (November 29, 2010)—BlackLight Power, Inc. (BLP) today announced that CIHT (Catalyst-Induced-Hydrino-Transition) technology has been independently confirmed by Dr. K.V. Ramanujachary, Rowan University Meritorious Professor of Chemistry and Biochemistry.[...]
Answer: I am highly skeptical of this result, primarily because the theories promoted by Black Light Power are improbable to the point of being gibberish. The energy states of hydrogen can be calculated exactly, and have been both calculated and measured spectroscopically to extremely high precision, and experiment and theory are in perfect agreement. If the modern understanding of quantum physics (including QED) were incomplete enough to leave room for mysterious lower-energy states in hydrogen, there would've been some indication of this in one of the countless experiments that have been done on hydrogen.
Another good reason to be skeptical of this result is that the report in question seems to have been "released" only via Black Light Power's web site. The only mentions of the authors of this report in conjunction with "hydrinos" that Google can find come from Black Light Power. This result has not appeared in any scientific journal known to Google. Or even on the Rowan University web site. This is not what I would call a ringing endorsement of the work.
As for the report itself, it is entirely concerned with chemical NMR spectra, and I don't have any first-hand experience with those. I know just enough about the field to know that there can be subtle issues involved with the recording and interpretation of these. I'm more inclined to believe that the mysterious peaks seen in their samples are some NMR artifact than that they are the signature of radically new physics.
It's conceivable, barely, that this really does represent some dramatic new discovery, and has not yet appeared in print because it's working its way through the peer review process, taking a long time because extraordinary claims require extraordinary scrutiny. The principal person behind Black Light Power has been making claims like this since I was in grad school in the 1990's, though, and has yet to produce anything solid. I wouldn't hold my breath waiting for this to appear in a reputable peer-reviewed journal, if I were you. | {
"domain": "physics.stackexchange",
"id": 531,
"tags": "energy, experimental-physics, hydrogen"
} |
custom field type for ROS msg | Question:
We're trying to read a radar UDP packet, and one of the data types is a RadarDetection[] type, which is not supported by ROS (because it's a custom type). The size of each element is 224 bits. One solution I'm thinking of is:
- creating a message that contains int64[4] RadarDetection_custom and then using that message as a field type to represent the RadarDetection type, but with 256 bits instead of 224.
I am not confident that this solution will work, and I was wondering if there is a better solution than this.
Originally posted by mohthepro on ROS Answers with karma: 1 on 2018-07-25
Post score: 0
Original comments
Comment by Geoff on 2018-07-25:
Could you reformat your question with more punctuation and more information about the data you want to send? You should also check your numbers. int64[3] would only hold 192 bits, not 256, and where did "215" come from?
Comment by mohthepro on 2018-07-26:
Sorry, I meant int64[4], and I reformatted the question to make it easier to read. The 215 bits is just the size of the field type that I am receiving from the radar's UDP packets; I don't know why the radar manufacturers made it like so.
Answer:
An array of int8 28 values long will give you 224 bits. This is the usual approach to storing a binary blob such as your RadarDetection type. It will be presented as a boost::array of int8_t, which you can get the raw pointer to and read into your data type. See the msg data types page for more information.
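To illustrate, here is a hedged C++ sketch of reading such a 28-byte field into a typed struct. RadarDetectionRaw and its fields are made-up placeholders for whatever 224-bit layout the radar manufacturer actually defines, and std::array stands in for the boost::array the generated message would really give you:

```cpp
#include <array>
#include <cstdint>
#include <cstring>

#pragma pack(push, 1)
struct RadarDetectionRaw {      // hypothetical 28-byte (224-bit) layout
    uint16_t range_raw;         // made-up field names, for illustration only
    int16_t  azimuth_raw;
    uint8_t  payload[24];       // remaining bytes, left opaque here
};
#pragma pack(pop)
static_assert(sizeof(RadarDetectionRaw) == 28, "layout must stay 224 bits");

// Copy the raw message bytes (int8[28] in the .msg file) into the struct.
RadarDetectionRaw decode(const std::array<int8_t, 28>& data) {
    RadarDetectionRaw det;
    std::memcpy(&det, data.data(), sizeof det);  // raw-pointer copy, as in the answer
    return det;
}
```

Mind byte order: memcpy assumes the radar and the host agree on endianness; otherwise the multi-byte fields need an explicit swap.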
Originally posted by Geoff with karma: 4203 on 2018-07-25
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by mohthepro on 2018-07-26:
is there any difference between int8[28] and what I suggested with int64[4] other than less wasted space?
Comment by PeteBlackerThe3rd on 2018-07-26:
In some ways no. In fact they will probably take exactly the same space in memory due to the way compilers work. However semantically int64 implies a particular encoding that the data probably doesn't have whereas an array of bytes is the standard encoding of raw binary data.
Comment by PeteBlackerThe3rd on 2018-07-26:
So it's a convention really, but your code will be more meaningful to other developers if you use a buffer of bytes.
"domain": "robotics.stackexchange",
"id": 31365,
"tags": "ros-kinetic"
} |
Find the longest word in a sentence | Question: This is my second attempt after I started to learn about the STL. I am a beginner and need suggestions/advice on whether improvements can be made to this code.
#include <iostream>
#include <string>
#include <sstream>
#include <algorithm>
#include <iterator>
#include <vector>
using namespace std;
string LongestWord(string sen) {
vector<string> coll; //initialize vector
istringstream iss(sen); //read the string "sen"
copy(istream_iterator<string>(iss), //copy from beginning of iss
istream_iterator<string>(), //to the end of iss
back_inserter(coll)); //and insert string to vector
//istream_iterator by default separates word by whitespace
string longestWord = coll.at(0);
int longestCount = longestWord.length();
for(auto element : coll)
{
if(element.length() > longestCount)
{
longestWord = element;
}
}
return longestWord;
}
Answer: As @user1118321 points out: just using your implementation, I believe you're missing a crucial line (inside your for loop and if-statement):
longestCount = element.length();
Which becomes:
#include <iostream>
#include <string>
#include <sstream>
#include <algorithm>
#include <iterator>
#include <vector>
using namespace std;
string LongestWord(string sen) {
vector<string> coll; //initialize vector
istringstream iss(sen); //read the string "sen"
copy(istream_iterator<string>(iss), //copy from beginning of iss
istream_iterator<string>(), //to the end of iss
back_inserter(coll)); //and insert string to vector
//istream_iterator by default separates word by whitespace
string longestWord = coll.at(0);
int longestCount = longestWord.length();
for(auto element : coll)
{
if(element.length() > longestCount)
{
longestWord = element;
longestCount = element.length();
}
}
return longestWord;
}
Don't use using namespace std; it's better to use std:: as needed. While it's insignificant here, on larger projects it will cause headaches. Especially if you're learning the STL, it's a good idea to learn what belongs to the STL and what belongs to the C++ Standard Library (i.e. within namespace std).
Instead of iterating over the string (when tokenizing into words) and again over the vector, you could just iterate once:
Below is a common way to find the longest word by streaming tokens directly with std::istringstream and tracking the maximum as you go, with no intermediate std::vector<std::string> needed:
#include <sstream>
#include <string>
std::string LongestWord(std::string str){
std::string::size_type max_len = 0;
std::string longest_word;
std::string word;
std::istringstream stream(str);
while(stream >> word) {
if(max_len < word.length()) {
max_len = word.length();
longest_word = word;
}
}
return longest_word;
}
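As a sketch of one refinement, ignoring punctuation, the same single-pass loop can measure word length after trimming punctuation from both ends. This version is std::ispunct based, so note that apostrophes also count as punctuation here:

```cpp
#include <cctype>
#include <sstream>
#include <string>

// Trim leading/trailing punctuation so "end." is measured as "end".
std::string strip_punct(const std::string& w) {
    std::size_t b = 0, e = w.size();
    while (b < e && std::ispunct(static_cast<unsigned char>(w[b]))) ++b;
    while (e > b && std::ispunct(static_cast<unsigned char>(w[e - 1]))) --e;
    return w.substr(b, e - b);
}

std::string LongestWordNoPunct(const std::string& str) {
    std::string longest_word;
    std::string word;
    std::istringstream stream(str);
    while (stream >> word) {
        std::string trimmed = strip_punct(word);
        if (longest_word.length() < trimmed.length()) {
            longest_word = trimmed;
        }
    }
    return longest_word;
}
```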
This function could be improved depending on how you wish to use it. You may wish to not count punctuation (recommended), you may wish to return a vector of strings if there are words of the same length, or you may wish to throw an error when given an empty string. These are all fairly easy to implement as needed. | {
"domain": "codereview.stackexchange",
"id": 31172,
"tags": "c++, algorithm, strings"
} |
Average friction force for an object going up a slope | Question: What is the difference between frictional force and average force of friction?
I know we calculate the frictional force in a typical slope problem by finding the forces that act in opposition, most commonly $F_f = mg\sin\theta$.
A problem is asking me to find the average force of friction, and the method mentioned above is not working, so I assume that the average force of friction is a different quantity altogether. My peers have said it is calculated using $W = F \cdot d$ (given the values of $W$ and $d$). Why do these two methods give different values? What is different about the average force of friction?
Answer: A value of a force is usually an instantaneous value. For forces that don't change over time, that's fine.
If a force is changing over time (or might be changing over time), you might not have sufficient information to compute the value at any one instant in time. But you might have enough to compute the average value.
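For the slope problem in the question, this average is exactly what the peers' $W = F \cdot d$ recipe computes: if friction does total work $W_f$ over a path of length $d$, the path-averaged friction force is

$$\bar{F}_f = \frac{W_f}{d},$$

and this only coincides with the instantaneous $F_f$ when the friction force happens to be constant along the whole path.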
If a ball is sitting on a scale, the situation is static and you can calculate the force (the instantaneous force) for the ball on the scale.
If the ball is bouncing on the scale, you can't report "the force", since it changes as the ball interacts with the scale and then separates from the scale. But you could calculate an average force over time (which would be the same as the first force if losses are ignored). | {
"domain": "physics.stackexchange",
"id": 89674,
"tags": "forces, work, friction"
} |
Why would decreased temperature be associated with increased cell size, in deep-sea crustaceans (or in general?) | Question: I was reading Wikipedia regarding deep-sea gigantism -- the fact that deep-sea species are often much larger than their shallow-dwelling counterparts. The article said,
Decreasing temperature is thought to result in increased cell size and
increased life span... both of which lead to an increase in maximum
body size.
The citation is:
S F Timofeev Izv Akad Nauk Ser Biol. 2001 Nov-Dec:(6):764-8.
[Bergmann's principle and deep-water gigantism in marine crustaceans]
I haven't tried to retrieve that article because I can't read Russian. Following up on "Bergmann's principle" doesn't help; it's just identifying ecogeographic patterns in gigantism.
From a surface-area-to-volume-ratio perspective, I can see how smaller cells are adaptive to cold temperatures. A huge number of cellular processes are diffusion-driven; all of them, except for some potential electron or proton tunnelling processes, are mediated by thermal motion.
So let's say the transport of glucose into a given cell is driven by the concentration gradient (diffusion). At lower temperatures the diffusion rate is lower, so a cell of a given volume gets a reduced import of glucose. So it seems that increased surface-to-volume ratio (a smaller volume) would allow for a cell to compensate, and to accomplish the same rate of import as the warmer cell. This would also apply to some intracellular processes.
So assuming my thinking about diffusion processes is irrelevant to the real-life biology (since it predicts that smaller cells are adaptive at colder temperatures), is the point that metabolic processes are slower at lower temperatures? Then, a reduced rate of transport is not necessarily the limiting factor. Maybe the cellular volume can be scaled up until the total metabolic flux is comparable to that of the warmer-living cell? If there's more dissolved oxygen at lower temperatures, and if oxygen equilibrates well enough, maybe no other nutrient or signalling diffusion is relevant?
This thinking does not at all seem consonant with this story by Curtis Deutsch and coworkers: Impact of warming on aquatic body sizes explained by metabolic scaling from microbes to macrofauna.
The model reproduces three key aspects of the observed patterns of
intergenerational size reductions measured in laboratory warming
experiments of diverse aquatic ectotherms (i.e., the "temperature-size
rule" [TSR]). First, the interspecific mean and variability of the TSR
is predicted from species' temperature sensitivities of hypoxia
tolerance, whose nonlinearity with temperature also explains the
second TSR pattern-its amplification as temperatures rise. Third, as
body size increases across the tree of life, the impact of growth on
O2 demand declines while its benefit to O2 supply rises, decreasing
the size dependence of hypoxia tolerance and requiring larger animals
to contract by a larger fraction to compensate for a thermally driven
rise in metabolism. Together our results support O2 limitation as the
mechanism underlying the TSR, and they provide a physiological basis
for projecting ectotherm body size responses to climate change from
microbes to macrofauna.
Answer: Woods (1999) proposed an explanation for the negative effect of temperature on ectotherm cell size based on the rates of transport and metabolism of oxygen - both of which you considered in your question. The diffusion of oxygen decreases with increasing temperature. However, this effect is small compared to how much the consumption of oxygen increases with increasing temperature. Therefore, in a warmer environment, despite having greater oxygen supply, a cell is still more oxygen-limited because of the relatively greater increase in the oxygen demand. In a colder environment, a cell can be larger because it is less oxygen-limited and can afford to have a lower surface area:volume ratio. As Woods puts it:
The critical observation is that oxygen supply (determined jointly by [the surface concentration and diffusion coefficient]) rises slowly with temperature, but that oxygen consumption rises rapidly with temperature. Thus, at higher temperatures, the oxygen gradient within a metabolizing sphere will be steeper, and the radius at which the oxygen concentration at the center falls to zero will be smaller.
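Woods's steepening-gradient argument can be sketched quantitatively. For a sphere with uniform O2 consumption rate $Q$ and diffusivity $D$, the steady-state profile is $C(r) = C_s - \frac{Q}{6D}(R^2 - r^2)$, so the largest radius whose center still receives oxygen is $R_{max} = \sqrt{6 D C_s / Q}$. The Q10 values below are purely illustrative assumptions (supply rising slowly with temperature, consumption rapidly), not measured numbers:

```python
import numpy as np

def r_max(T, D0=1.0, C_s=1.0, Q0=1.0, Q10_D=1.3, Q10_Q=2.5, T0=5.0):
    """Largest sphere radius whose center still receives oxygen.

    Steady-state diffusion with uniform consumption gives
    C(r) = C_s - (Q / 6D) * (R**2 - r**2); the center hits zero at
    R_max = sqrt(6 * D * C_s / Q).  D and Q scale with temperature
    via (hypothetical) Q10 factors; units here are arbitrary.
    """
    D = D0 * Q10_D ** ((T - T0) / 10.0)  # O2 supply rises slowly with T
    Q = Q0 * Q10_Q ** ((T - T0) / 10.0)  # O2 consumption rises rapidly with T
    return np.sqrt(6.0 * D * C_s / Q)

for T in (5.0, 15.0, 25.0):
    print(T, r_max(T))  # the oxygen-supportable radius shrinks as T rises
```

Because the consumption Q10 exceeds the diffusion Q10, every 10 °C of warming shrinks the supportable radius by a constant factor $\sqrt{Q10_D/Q10_Q} \approx 0.72$ in this toy model, which is the qualitative content of Woods's argument.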
Note that while this hypothesis makes theoretical sense, it's not clear that the effect of temperature on cell size actually drives differences in body size or even occurs in nature. The study on temperature and cell size (Van Voorhies, 1996) that was cited by the Timofeev (2001) paper was solidly refuted by Partridge and Coyne (1996) for looking at genetically identical individuals and failing to consider latitudinal genetic variation, along with other sources of faulty inference. There are a number of plausible competing hypotheses to explain deep-sea gigantism and Bergmann's rule, not all relating to cell size.
Nevertheless, this same process proposed by Woods could also apply to organisms as a whole, rather than individual cells. Deutsch et al. (2022) consider the same mechanism as Woods, except they considered whole organisms. Therefore, their equations are somewhat abstracted from the biophysical processes that Woods deals with, but it's basically the same mechanism. | {
"domain": "biology.stackexchange",
"id": 12517,
"tags": "cell-biology, metabolism, theoretical-biology"
} |
How did Newton discover his second law? | Question: I've always assumed/been told that Newton's 2nd law is an empirical law — it must be discovered by experiment. If this is the case, what experiments did Newton do to discover this? Is it related to his studies of the motion of the moon and earth? Was he able to analyze this data to see that the masses were inversely related to the acceleration, if we assume that the force the moon on the earth is equal to the force the earth exerts on the moon?
According to Wikipedia, the Principia reads:
Lex II: Mutationem motus proportionalem esse vi motrici impressae, et fieri secundum lineam rectam qua vis illa imprimitur.
Translated as:
Law II: The alteration of motion is ever proportional to the motive force impress'd; and is made in the direction of the right line in which that force is impress'd.
My question is how did Newton come to this conclusion? I get that he knew from Galileo Galilei the idea of inertia, but this doesn't instantly tell us that the change in momentum must be proportional to the net force. Did Newton just assume this, or was there some experiment he performed to tell him this?
Answer: Newton's 1st and 2nd laws weren't particularly revolutionary or surprising to anyone in the know back then. Hooke had already deduced inverse-square gravitation from Kepler's third law, so he understood the second law. He just could not prove that the bound motion in response to an inverse square attraction is an ellipse.
The source of Newton's second law was Galileo's experiments and thought experiments, especially the principle of Galilean relativity. If you believe that the world is invariant under uniform motion, as Galileo states clearly, then the velocity cannot be a physical response because it isn't invariant; only the acceleration is. Galileo established that gravity produces acceleration, and it's no leap from that to the second law.
Newton's third law on the other hand was revolutionary, because it implied conservation of momentum and conservation of angular momentum, and these general principles allow Newton to solve problems. The real juicy parts of the Principia are the specific problems he solves, including the bulge of the Earth due to its rotation, which takes some thinking even now, three centuries later.
EDIT: Real History vs. Physicist's History
The real history of scientific developments is complex, with many people making different contributions of various magnitude. The tendency in pedagogy is to relentlessly simplify, and to credit the results to one or two people, who are sort of a handle on the era. For the early modern era, the go-to folks are Galileo and Newton. But Hooke, Kepler, Huygens, Leibniz and a host of lesser known others made crucial contributions along the way.
This is especially pernicious when you have a figure of such singular genius as Newton. Newton's actual discoveries and contributions are usually too advanced to present to beginning undergraduates, but his stature is immense, so that he is given credit for earlier more trivial results that were folklore at the time.
To repeat the answer here: Newton did not discover the second law of motion. It was well known at the time, it was used by all his contemporaries without comment and without question. The proper credit for the second law belongs almost certainly to the Italians, to Galileo and his contemporaries.
But Newton applied the second law with genius to solve the problem of inverse square motion, to find the tidal friction and precession of the equinoxes, to give the wobbly orbit of the moon (in an approximation), to find the oblateness of the Earth, and the altitude variation of the acceleration of gravity g, to give a nearly quantitative model of the propagation of sound waves, to find the isochronous property of the cycloid, and a host of other contributions which are so brilliant and so complete in their scope, that he is justly credited as founding the modern science of physics.
But in physics classes, you aren't studying history, and the applications listed above are too advanced for a first course, and Newton did indeed state the second law, so why not just give him credit for inventing it?
Similarly, in mathematics, Newton and Leibniz are given credit for the fundamental theorem of calculus. The proper credit for the fundamental theorem of calculus is to Isaac Barrow, Newton's advisor. Leibniz does not deserve credit at all. The real meat of the calculus however is not the fundamental theorem, but the organizing principles of Taylor expansions and infinitesimal orders, with successive approximations, and differential identities applied in varied settings, like arclength problems. In this, Newton founded the field.
Leibniz gave a second set of organizing principles, based on the infinitesimal calculus of Cavalieri. Cavalieri was Galileo's contemporary in Italy, and he either revived or rediscovered the ideas originally due to Archimedes in "The Method of Mechanical Theorems" (although he might not have had access to this work, which was only definitively rediscovered in the early 20th century. One of the theorems in Archimedes reappears in Kepler's work, suggesting that perhaps the Method was available to these people in an obscure copy in some library, and only became lost at a later date. This is pure speculation on my part. Kepler might have formulated and solved the problem independently of Archimedes. It is hard to tell. The problem is the volume of a cylinder cut off by a prism, related to the problem of two cylinders intersecting at right angles). Cavalieri and Kepler hardly surpassed Archimedes, while Newton went far beyond. Leibniz gave the theory its modern form, and all the formalism of integrals, differentials, product rule, chain rule, and so on are all due to Leibniz and his infinitesimals. Leibniz was also one of the discoverers of the conservation of mechanical energy, although Huygens has his paws on it too, and I don't know the dates.
The mathematicians' early modern history is no better. Again, Newton and Leibniz are given credit for theorems they did not produce, and which were common knowledge.
This type of falsified history sometimes happens today, although the internet makes honest accounting easier. Generally, Witten gets credit for everything, whether he deserves it or not. The social phenomenon was codified by Mermin, who called it "The Matthew principle", from the biblical quote "To those that have, much will be given, and to those that have not, even the little they have will be taken away." The urge to simplify relentlessly reassigns credit to well known figures, taking credit away from lesser known figures.
The way to fight this is to simply cite correctly. This is important, because the mechanism of progress is not apparent from seeing the soup, you have to see how the soup was cooked. Future generations deserve to get the recipe, so that we won't be the only ones who can make soup. | {
"domain": "physics.stackexchange",
"id": 14780,
"tags": "newtonian-mechanics, experimental-physics, history"
} |
Binary Classification Model Comparison - Interpretation of Training, Test and Validation Set Performance | Question: I am looking for some advice regarding the best choice of binary classification model based on training, validation and test set results. Model 1 (results in 1st image) shows better test set results than Model 2, but Model 2 (results in 2nd image) shows results that seem more intuitive to me with better training set performance than its test set performance. I feel as if the Model 1 test set results might have been a bit of a fluke, whereas Model 2 appears more like a well trained model with more long-term reliability.
Any advice on this is much appreciated.
Answer: Generally, if a model A performs worse than model B on the "Test set" while performing better on the "Train/Val set", then model A has suffered from overfitting.
So, in your case Model 1 should give you long term reliability against unseen data. | {
"domain": "ai.stackexchange",
"id": 3457,
"tags": "binary-classification, confusion-matrix"
} |
Calculate the expression of divergence in spherical coordinates $r, \theta, \varphi$ | Question: Hi this is my first question in [Physics.SE] I saw a lot of posts and I liked them. I hope that my question will be answered too.
While I'm solving a problem in vector calculus. I recognized that I need a proof to answer it.
The problem is the following: Calculate the expression of divergence in spherical coordinates $r, \theta, \varphi$ for a vector field $\boldsymbol{A}$ such that its contravariant components $A^i$
Here's my attempts:
We know that the divergence of a vector field is :
$$\mathbf{div\ V}=\nabla_i v^i$$
Notice that $\mathbf{V}$ is the vector field and $\nabla_k v^i$ its covariant derivative, contracting it we obtain the scalar $\nabla_i v^i$.
My questions are how I can apply this to solve the main problem ?
Can I use the developed expression of the covariant derivative? which is : $$\nabla_k v^i=\partial_k v^i+v^j\Gamma_{kj}^i$$
Answer: Hi, and welcome to [Physics.SE]. I tried to solve your problem and here is what I found:
As you said the divergence can be written :
$$\mathbf{div \ V}=\nabla_i v^i$$
And the expression of the covariant derivative is :
$$\nabla_k v^i=\partial_k v^i+v^j\Gamma_{kj}^i$$
Using it we obtain :
$$\mathbf{div \ V}=\partial_i v^i +v^j\Gamma_{ij}^i$$
Using Ricci theorem :
$$\nabla_k g_{ij}=\partial_kg_{ij}-\Gamma_{ik}^l g_{lj}-\Gamma_{jk}^l g_{il}=0$$
Multiplying by $g^{ij}$ :
Recall: $g^{ij}g_{jl}=\delta^i_l$
$$g^{ij}\partial_k g_{ij}-\Gamma_{ik}^l \delta^i_l-\Gamma_{jk}^l\delta^j_l =0$$
Thus:
$$g^{ij}\partial_k g_{ij}-\Gamma_{ik}^i-\Gamma_{jk}^j=0$$
Since $\Gamma_{ik}^i=\Gamma_{jk}^j$ we have :
$$ g^{ij}\ \partial_k\ g_{ij}=2\Gamma_{ik}^i$$
Let $g$ be the determinant of $g_{ij}$ we obtain :
$$\partial_k g=g\ g^{ij}\ \partial_k\ g_{ij}$$
Thus :
$$\Gamma_{ik}^i=\frac{1}{2g} \partial_k \ g=\frac{1}{\sqrt{|g|}}\partial_k \sqrt{|g|}$$
Applying it we obtain:
$$\mathbf{div \ V}=\partial_iv^i+\frac{v^i}{\sqrt{|g|}}\partial_i \sqrt{|g|}$$
Recall : $$\frac{1}{a} d(ba)=db+\frac{b}{a} da$$
Let $a=\sqrt{|g|}$ , $b=v^i$
finally we have :
$$\fbox{$\mathbf{div \ V}=\frac{1}{\sqrt{|g|}} \partial_i\biggr( v^i \sqrt{|g|}\biggl)$}$$
Using this result in your main problem we get :
$$\mathbf{div \ A}=\partial_i A^i +\frac{A^i}{\sqrt{|g|}}\partial_i \sqrt{|g|}$$
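As a sanity check of this formula in spherical coordinates, where $\sqrt{|g|}=r^2\sin\theta$, one can verify $\frac{1}{\sqrt{|g|}}\partial_i\left(\sqrt{|g|}A^i\right)$ symbolically (note these are the contravariant components $A^i$, not the physical orthonormal components, which differ by scale factors):

```python
import sympy as sp

r, th, ph = sp.symbols('r theta varphi', positive=True)
sqrt_g = r**2 * sp.sin(th)  # sqrt(|g|) for spherical coordinates

# arbitrary contravariant components A^r, A^theta, A^varphi
Ar, Ath, Aph = [sp.Function(name)(r, th, ph) for name in ('Ar', 'Ath', 'Aph')]

div_A = (sp.diff(sqrt_g * Ar, r)
         + sp.diff(sqrt_g * Ath, th)
         + sp.diff(sqrt_g * Aph, ph)) / sqrt_g

# sanity check: the radial field with A^r = r (other components zero) is the
# position field x d/dx + y d/dy + z d/dz, whose divergence should be 3
check = sp.simplify(div_A.subs({Ar: r, Ath: 0, Aph: 0}).doit())
print(check)  # -> 3
```

This reproduces the familiar $\frac{1}{r^2}\partial_r(r^2 A^r)$ pattern of the spherical divergence.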
I think I would let you continue. Good luck ! | {
"domain": "physics.stackexchange",
"id": 66920,
"tags": "homework-and-exercises, metric-tensor, coordinate-systems, differentiation, vector-fields"
} |
Name of experiment | Question: I'm seeking the name of or reference for an experiment I once saw in a college physics class. At the beginning of one class the instructor repeatedly wound a wiper that spread a blot of some type of ink all over the interior of a glass jar. Then during the lecture (which I admittedly don't remember very well) he must have explained something about the second law of thermodynamics or entropy and that once a large system gets all mixed up, there's really no chance for it to return to its original state. Then he concluded the class by winding the wiper in the opposite direction, and clearly to his delight our jaws all dropped---the film of ink, which had been spread all over the interior of the jar, reappeared in the original blot.
Surely, there are folks on this site who demonstrate this every semester. What on earth is this experiment?
Answer: I don't know if there is a formal name for it, but my favorite search engine likes to call it a Reverse Entropy Machine.
The main fluid is glycerin and the dye is food coloring. You can see an example of the setup here.
You can also watch a video that describes it along with some lecture notes where it is called Kinematic Reversibility.
It works because the motion of the inner cylinder is relatively slow and the glycerin is very viscous, so the flow is laminar. Molecular diffusion, an irreversible process, is negligible over these time scales (although if you let it sit a really, really long time it would smear). So the entire process itself is reversible and the ink blob can reform from its distorted shape. | {
"domain": "physics.stackexchange",
"id": 5740,
"tags": "fluid-dynamics, entropy, viscosity"
} |
Advent of Code 2019 Day 7 (simple emulator, opcode processing) | Question: Since couple of days ago I started to learn python by doing AoC 2019.
I would like to share with you my solution to day7 (Amplification Circuit) part 1 and part 2.
Challenge summary:
--- Day 7: Amplification Circuit ---
==================== Part 1 ====================
There are five amplifiers connected in series; each one receives an input signal and produces an output signal. They are connected such that the first amplifier's output leads to the second amplifier's input, the second amplifier's output leads to the third amplifier's input, and so on. The first amplifier's input value is 0, and the last amplifier's output leads to your ship's thrusters.
O-------O O-------O O-------O O-------O O-------O
0 ->| Amp A |->| Amp B |->| Amp C |->| Amp D |->| Amp E |-> (to thrusters)
O-------O O-------O O-------O O-------O O-------O
The Elves have sent you some Amplifier Controller Software (your puzzle input), a program that should run on your existing Intcode computer. Each amplifier will need to run a copy of the program.
For example, suppose you want to try the phase setting sequence 3,1,2,4,0, which would mean setting amplifier A to phase setting 3, amplifier B to setting 1, C to 2, D to 4, and E to 0. Then, you could determine the output signal that gets sent from amplifier E to the thrusters with the following steps:
Start the copy of the amplifier controller software that will run on amplifier A. At its first input instruction, provide it the amplifier's phase setting, 3. At its second input instruction, provide it the input signal, 0. After some calculations, it will use an output instruction to indicate the amplifier's output signal.
Start the software for amplifier B. Provide it the phase setting (1) and then whatever output signal was produced from amplifier A. It will then produce a new output signal destined for amplifier C.
Start the software for amplifier C, provide the phase setting (2) and the value from amplifier B, then collect its output signal.
Run amplifier D's software, provide the phase setting (4) and input value, and collect its output signal.
Run amplifier E's software, provide the phase setting (0) and input value, and collect its output signal.
The final output signal from amplifier E would be sent to the thrusters. However, this phase setting sequence may not have been the best one; another sequence might have sent a higher signal to the thrusters.
Here are some example programs:
Max thruster signal 43210 (from phase setting sequence 4,3,2,1,0):
3,15,3,16,1002,16,10,16,1,16,15,15,4,15,99,0,0
Max thruster signal 54321 (from phase setting sequence 0,1,2,3,4):
3,23,3,24,1002,24,10,24,1002,23,-1,23,
101,5,23,23,1,24,23,23,4,23,99,0,0
Max thruster signal 65210 (from phase setting sequence 1,0,4,3,2):
3,31,3,32,1002,32,10,32,1001,31,-2,31,1007,31,0,33,
1002,33,7,33,1,33,31,31,1,32,31,31,4,31,99,0,0,0
Try every combination of phase settings on the amplifiers. What is the highest signal that can be sent to the thrusters?
==================== Part 2 ====================
O-------O O-------O O-------O O-------O O-------O
0 -+->| Amp A |->| Amp B |->| Amp C |->| Amp D |->| Amp E |-.
| O-------O O-------O O-------O O-------O O-------O |
| |
'--------------------------------------------------------+
|
v
(to thrusters)
Most of the amplifiers are connected as they were before; amplifier A's output is connected to amplifier B's input, and so on. However, the output from amplifier E is now connected into amplifier A's input. This creates the feedback loop: the signal will be sent through the amplifiers many times.
In feedback loop mode, the amplifiers need totally different phase settings: integers from 5 to 9, again each used exactly once. These settings will cause the Amplifier Controller Software to repeatedly take input and produce output many times before halting. Provide each amplifier its phase setting at its first input instruction; all further input/output instructions are for signals.
Don't restart the Amplifier Controller Software on any amplifier during this process. Each one should continue receiving and sending signals until it halts.
All signals sent or received in this process will be between pairs of amplifiers except the very first signal and the very last signal. To start the process, a 0 signal is sent to amplifier A's input exactly once.
Eventually, the software on the amplifiers will halt after they have processed the final loop. When this happens, the last output signal from amplifier E is sent to the thrusters. Your job is to find the largest output signal that can be sent to the thrusters using the new phase settings and feedback loop arrangement.
Try every combination of the new phase settings on the amplifier feedback loop. What is the highest signal that can be sent to the thrusters?
==================== Here is my solution ====================
I found it quite clever that I actually connected this amplifiers in such cascade manner in code. What do you think?
#!/usr/bin/env python3
import sys
import itertools
from queue import Queue
class amplifier(object):
code = None
def __init__(self, phase_input):
self.pc = 0
self.halted = False
self.other_amplifier = None
self.inputs = Queue()
self.add_input(phase_input)
self.outputs = []
def set_other_amplifier(self, other_amplifier):
self.other_amplifier = other_amplifier
def has_other_amplifier(self):
return self.other_amplifier is not None
def add_input(self, _input):
self.inputs.put(_input)
def get_input(self):
return self.inputs.get()
def has_input(self):
return not self.inputs.empty()
def add_output(self, _output):
if self.has_other_amplifier() and not self.other_amplifier.halted:
self.other_amplifier.add_input(_output)
else:
self.outputs.append(_output)
def run_program(self):
ncp = amplifier.code.copy()
i = self.pc
while i < len(ncp):
op = ncp[i]
if op == 1:
ncp[ncp[i+3]] = ncp[ncp[i+1]] + ncp[ncp[i+2]]
i += 4
elif op == 2:
ncp[ncp[i+3]] = ncp[ncp[i+1]] * ncp[ncp[i+2]]
i += 4
elif op == 3:
if self.has_input():
inp = self.get_input()
ncp[ncp[i+1]] = inp
i += 2
else:
self.pc = i
if self.has_other_amplifier() and not self.other_amplifier.halted:
self.other_amplifier.run_program()
return
elif op == 4:
self.add_output(ncp[ncp[i+1]])
i += 2
elif op == 104:
self.add_output(ncp[i+1])
i += 2
elif op == 5: # jump-if-true
if ncp[ncp[i+1]] != 0:
i = ncp[ncp[i+2]]
else:
i += 3
elif op == 105:
if ncp[i+1] != 0:
i = ncp[ncp[i+2]]
else:
i += 3
elif op == 1005:
if ncp[ncp[i+1]] != 0:
i = ncp[i+2]
else:
i += 3
elif op == 1105:
if ncp[i+1] != 0:
i = ncp[i+2]
else:
i += 3
elif op == 6: # jump-if-false
if ncp[ncp[i+1]] == 0:
i = ncp[ncp[i+2]]
else:
i += 3
elif op == 106:
if ncp[i+1] == 0:
i = ncp[ncp[i+2]]
else:
i += 3
elif op == 1006:
if ncp[ncp[i+1]] == 0:
i = ncp[i+2]
else:
i += 3
elif op == 1106:
if ncp[i+1] == 0:
i = ncp[i+2]
else:
i += 3
elif op == 7: # less than
if ncp[ncp[i+1]] < ncp[ncp[i+2]]:
ncp[ncp[i+3]] = 1
else:
ncp[ncp[i+3]] = 0
i += 4
elif op == 107:
if ncp[i+1] < ncp[ncp[i+2]]:
ncp[ncp[i+3]] = 1
else:
ncp[ncp[i+3]] = 0
i += 4
elif op == 1007:
if ncp[ncp[i+1]] < ncp[i+2]:
ncp[ncp[i+3]] = 1
else:
ncp[ncp[i+3]] = 0
i += 4
elif op == 1107:
if ncp[i+1] < ncp[i+2]:
ncp[ncp[i+3]] = 1
else:
ncp[ncp[i+3]] = 0
i += 4
elif op == 8: # equals
if ncp[ncp[i+1]] == ncp[ncp[i+2]]:
ncp[ncp[i+3]] = 1
else:
ncp[ncp[i+3]] = 0
i += 4
elif op == 108:
if ncp[i+1] == ncp[ncp[i+2]]:
ncp[ncp[i+3]] = 1
else:
ncp[ncp[i+3]] = 0
i += 4
elif op == 1008:
if ncp[ncp[i+1]] == ncp[i+2]:
ncp[ncp[i+3]] = 1
else:
ncp[ncp[i+3]] = 0
i += 4
elif op == 1108:
if ncp[i+1] == ncp[i+2]:
ncp[ncp[i+3]] = 1
else:
ncp[ncp[i+3]] = 0
i += 4
elif op == 101: # addition
ncp[ncp[i+3]] = ncp[i+1] + ncp[ncp[i+2]]
i += 4
elif op == 1001:
ncp[ncp[i+3]] = ncp[ncp[i+1]] + ncp[i+2]
i += 4
elif op == 1101:
ncp[ncp[i+3]] = ncp[i+1] + ncp[i+2]
i += 4
elif op == 102: # multiplication
ncp[ncp[i+3]] = ncp[i+1] * ncp[ncp[i+2]]
i += 4
elif op == 1002:
ncp[ncp[i+3]] = ncp[ncp[i+1]] * ncp[i+2]
i += 4
elif op == 1102:
ncp[ncp[i+3]] = ncp[i+1] * ncp[i+2]
i += 4
elif op == 99:
i = len(ncp)
else:
print(op, "opcode not supported")
i += 1
self.halted = True
if self.has_other_amplifier() and not self.other_amplifier.halted:
self.other_amplifier.run_program()
def get_signal(permutation_iter):
a = amplifier(next(permutation_iter))
a.add_input(0)
b = amplifier(next(permutation_iter))
c = amplifier(next(permutation_iter))
d = amplifier(next(permutation_iter))
e = amplifier(next(permutation_iter))
a.set_other_amplifier(b)
b.set_other_amplifier(c)
c.set_other_amplifier(d)
d.set_other_amplifier(e)
e.set_other_amplifier(a)
a.run_program()
return e.outputs
def solve(permutation_base):
permutations = itertools.permutations(permutation_base)
max_signal = None
max_signal_phase_seq = None
for p in permutations:
signal = get_signal(iter(p))
if not max_signal or signal > max_signal:
max_signal = signal
max_signal_phase_seq = p
print(max_signal_phase_seq, '->', max_signal)
if __name__ == "__main__":
with open("input") as f:
amplifier.code = list(map(lambda x: int(x), f.readline().split(',')))
solve([0, 1, 2, 3, 4]) # part1
solve([5, 6, 7, 8, 9]) # part2
Answer: Style
Use CamelCase for class names, like class Amplifier.
No need to explicitly extend object.
When encountering unsupported opcode, raise an exception to kill the program immediately instead of printing an error message. It helps you discover bugs earlier. This is known as "fail fast".
get_signal() should accept an Iterable instead of an Iterator. You can do a lot of magic with Iterables, like this:
def get_signal(permutation_iter):
# Transform list of integer into list of amplifiers and unpack them.
a, b, c, d, e = map(amplifier, permutation_iter)
a.add_input(0)
a.set_other_amplifier(b)
b.set_other_amplifier(c)
c.set_other_amplifier(d)
d.set_other_amplifier(e)
e.set_other_amplifier(a)
a.run_program()
return e.outputs
It also makes the iter() call in solve() unnecessary.
The main job of solve() is getting the maximum from a list of permutations, using get_signal() as the key. Python already has a max() function for this, but we need to extract the permutation itself as well. So we can write our own argmax() function to simplify this. Note that the code is a lot cleaner without the for loop.
def argmax(iterable, key=None):
arg = max(iterable, key=key)
value = key(arg) if key else arg
return arg, value
def solve(permutation_base):
permutations = itertools.permutations(permutation_base)
max_signal_phase_seq, max_signal = argmax(permutations, key=get_signal)
print(max_signal_phase_seq, "->", max_signal)
Structure
Pull out the intcode computer into its own function or class, which will ease code reuse (you'll need the intcode computer in several later AoC challenges).
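As a starting point for such a reusable computer, the instruction decoding can live in one small helper (a sketch; the name is hypothetical): every intcode instruction is a two-digit opcode plus one mode digit per parameter.

```python
def decode(instruction):
    """Split an intcode instruction into opcode and per-parameter modes.

    E.g. 1002 means opcode 2 (multiply) with modes (0, 1, 0):
    first parameter in position mode, second immediate, third position.
    """
    opcode = instruction % 100
    modes = (instruction // 100 % 10,
             instruction // 1000 % 10,
             instruction // 10000 % 10)
    return opcode, modes

print(decode(1002))  # -> (2, (0, 1, 0))
print(decode(104))   # -> (4, (1, 0, 0))
```

With something like this, the giant if/elif chain collapses to one branch per opcode, with parameter modes handled uniformly.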
Don't "hard wire" parameter modes into opcode. Parse parameter modes independently of actual operations. For example, opcode 102, 1002, and 1102 should trigger the same function(multiplication), only passing different parameters.(Spoiler: You'll need to add another parameter mode later) | {
"domain": "codereview.stackexchange",
"id": 37719,
"tags": "python, python-3.x, programming-challenge, emulator"
} |
pKa of the alpha proton of an alpha amino ester | Question: Is there a theoretical pKa value for the proton on the alpha carbon of a methylated amino acid, or just a general alpha amino methyl ester?
Answer: It will depend on what is attached to the N. In the O'Donnell alkylation procedure where the N is protected as a benzophene imine, the pKa of the glycine proton is estimated at 18.7 and the alanine proton 22.8 source here | {
"domain": "chemistry.stackexchange",
"id": 16083,
"tags": "organic-chemistry, synthesis, amino-acids"
} |
Is there a backup/replacement for the Complexity Zoo? | Question: This is a non-technical question, but certainly relevant for the TCS community. If considered inappropriate, feel free to close.
The Complexity Zoo webpage (http://qwiki.stanford.edu/index.php/Complexity_Zoo) has certainly been of great service to the TCS community over the years. Apparently it has been down for quite a while. I was wondering if someone is still maintaining it, if it has moved, if there is a backup server, or if there are other plans to preserve this wonderful database of complexity classes, their relationships, and citations to relevant publications. If not, are there comparable webpages that could be used as a replacement?
UPDATE (Aug 1, 2012): The Zoo is back online, and Scott is looking for people volunteering to mirror it to avoid any future outages.
Answer: I cannot remark on whether the Zoo has a continuous existence on the web or elsewhere. However, there are still some proto-Zoo and Zoo-derived resources available on the web.
There seems to be a copy of at least an earlier incarnation of the Zoo on Greg Kuperberg's UC Davis website.
A LaTeX/PDF version of some state of the Zoo written by Chris Bourke seems still to be available on his webspace at the University of Nebraska–Lincoln.
Other sources on the net seem to be links to the first (or to the URL to the stanford.edu subsite), or pieces of the second.
It is worth noting that the entire qwiki.stanford.edu subsite seems to have disappeared, and that google searches for "qwiki", with or without specifying "stanford", either yields references to a multimedia product launched in January of last year, or produces the typical spoor of SEO companies trying to leach off of online resources. | {
"domain": "cstheory.stackexchange",
"id": 1637,
"tags": "cc.complexity-theory, reference-request, soft-question"
} |
Where are all the slow neutrinos? | Question: The conventional way physicists describe neutrinos is that they have a very small amount of mass which entails they are traveling close to the speed of light. Here's a Wikipedia quote which is also reflected in many textbooks:
It was assumed for a long time in the framework of the standard model of particle physics, that neutrinos are massless. Thus they should travel at exactly the speed of light according to special relativity. However, since the discovery of neutrino oscillations it is assumed that they possess some small amount of mass.1 Thus they should travel slightly slower than the speed of light... -- Wikipedia (Measurements of Neutrino Speed)
Taken at face value, this language is very misleading. If a particle has mass (no matter how small), its speed is completely relative, and to say that neutrinos travel close to the speed of light, without qualification, is just as incorrect as saying electrons or billiard balls travel close to the speed of light.
So what is the reason everyone repeats this description? Is it because all the neutrinos we detect in practice travel close to the speed of light? If so, then I have this question:
Neutrinos come at us from all directions and from all sorts of sources (stars, nuclear reactors, particle accelerators, etc.), and since they have mass, just like electrons, I would have thought we should see them traveling at all sorts of speeds. (Surely some cosmic neutrino sources are traveling away from the earth at very high speeds, for example. Or what about neutrinos emitted from particles in accelerators?)
So like I said at the start: Where are all the slow neutrinos? And why do we perpetuate the misleading phrase: 'close to the speed of light' (i.e. without contextual qualification)?
Answer: Strictly speaking, it is indeed incorrect that neutrinos travel at "close to the speed of light". As you said, since they have mass they can be treated just like any other massive object, like billiard balls. And as such they are only traveling at nearly the speed of light relative to something. Relative to another co-moving neutrino it would be at rest.
However, the statement is still true for almost all practical purposes. And it doesn't even matter in which reference frame you look at a neutrino. The reason is that a non-relativistic neutrino doesn't interact with anything. Or in other words: all the neutrinos you can detect necessarily have to have relativistic speeds.
Let me elaborate. Since neutrinos only interact weakly they are already extremely hard to detect, even if they have high energies (> GeV). If you go to ever lower energies the interaction cross-section also decreases more and more. But there is another important point. Most neutrino interaction processes have an energy threshold to occur. For example, the inverse beta decay
$$ \bar\nu_e + p^+ \rightarrow n + e^+$$
in which an antineutrino converts a proton into a neutron and a positron, and which is often used as a detection process for neutrinos, has a threshold of 1.8 MeV antineutrino energy. The neutron and the positron are more massive than the antineutrino and the proton, so the antineutrino must have enough energy to produce the excess mass of the final state (1.8 MeV). Below that energy the (anti)neutrino cannot undergo this reaction any more.
A reaction with a particularly low threshold is the elastic scattering off an electron in an atom. This only requires a threshold energy of the order of eV (which is needed to put the electron into a higher atomic energy level). But a neutrino with eV energies would still be relativistic!
Assuming that a neutrino has a mass of around 0.1 eV, this would still mean a gamma factor of $\gamma\approx 10$. For a neutrino to be non-relativistic it would have to have a kinetic energy in the milli-eV range and below. This is the expected energy range of Cosmic Background Neutrinos, relics from the earliest times of the universe. They are, so to speak, the neutrino version of the Cosmic Microwave Background. So not only do non-relativistic neutrinos exist (according to mainstream cosmological models), they are also all around us. In fact, their density at Earth is $\approx$50 times larger than that of neutrinos from the Sun!
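As a quick numerical sketch of the gamma factor quoted above (using the relation $\gamma = 1 + E_\text{kin}/mc^2$ and the 0.1 eV rest energy assumed in this answer; the 1 eV kinetic energy is the elastic-scattering threshold scale mentioned earlier):

```python
# gamma = 1 + E_kin / (m c^2); both energies in eV
m_c2 = 0.1    # assumed neutrino rest energy, eV (as in the answer above)
e_kin = 1.0   # eV-scale kinetic energy from elastic scattering off atomic electrons
gamma = 1 + e_kin / m_c2
print(gamma)  # 11.0 -- still highly relativistic (rounded to ~10 in the text)
```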
There is a big debate about whether they can ever be detected experimentally. There are a few suggestions (and even one prototype experiment), but there are differing opinions about the practical feasibility of such attempts. The only process left for neutrinos at such small energies is neutrino-induced decay of unstable nuclei. If you have an already radioactive isotope, it is as if the neutrino gives it a little "push over the edge". The $\beta$-electron released in the induced decay would then receive a slightly larger energy than the Q-value of the spontaneous decay, and the experimental signature would be a tiny peak to the right of the normal $\beta$-spectrum. This will still be an extremely rare process, and the big problem is to build an apparatus with a good enough energy resolution so that the peak can be distinguished from the spectrum of normal spontaneous nuclear decay (amidst all the background).
The Katrin experiment is trying to measure the endpoint of $\beta$-spectrum of Tritium in order to determine the neutrino mass. But under very favorable circumstances they even have some chance to detect such a signature of cosmic background neutrinos.
TL;DR: In fact there are non-relativistic neutrinos all over the place, but they interact so tremendously little that they seem to not exist at all. | {
"domain": "physics.stackexchange",
"id": 32187,
"tags": "special-relativity, particle-physics, neutrinos"
} |
A particle subject to a potential of the form $V(x)=V_0\vert x \vert$ | Question: A particle is moving in a potential $V(x)=V_0\vert x \vert$. I need to get the angular frequency and the period of the movement of the particle.
This is what I have done.
The equation of motion is
$$
\DeclareMathOperator{\sgn}{sgn}\begin{align}
m\ddot x &= -\dfrac{\partial V}{\partial x} \\
&= -V_0 \sgn (x)
\end{align}$$
$$x=x_0+v_0t-\dfrac{V_0}{m}\sgn(x)\dfrac{t^2}{2}$$
My problem is:
How to compare the equation of motion of this system with the equation of motion of a harmonic oscillator in order to get the angular frequency $\omega$?
Answer: The general problem $V\left(x\right) \propto \left|x\right|^n$ is discussed here.
For your problem $\left(n=1\right)$, if the particle is released from rest at $x=A$ at $t=0$, where $A$ is the amplitude, then the particle will cross $x=0$ at $T/4$, where $T$ is the period.
As you found, from $x=A$ to $x=0$, the force is $-V_0$, and the acceleration is $-V_0/m$, so
$$
\begin{eqnarray}
x\left(t\right) &=& x\left(0\right) + v\left(0\right) t + \frac{1}{2} a t^2 \\
&=& A - \frac{V_0}{2m} t^2
\end{eqnarray}
$$
I'll leave the rest for you to work out.
If you want the period in terms of the energy $E$ instead of the amplitude $A$, note that since there is no kinetic energy at $x=A$, $V_0 A = E$.
Finally, just use $\omega = 2 \pi / T$ for the angular frequency. | {
"domain": "physics.stackexchange",
"id": 44160,
"tags": "homework-and-exercises, newtonian-mechanics, oscillators, angular-velocity"
} |
Changing the hypercharge of the Higgs field | Question: I'm trying to solve a question about the Higgs mechanism:
If the hypercharge of the Higgs were to be $Y=1$ (Changing from $Y=1/2$ ), what would be the photon of this world? Meaning, how can I describe the field? Moreover, what are the charges of the quarks and the leptons? And what is the coupling constant of the electromagnetic force?
I think the new formula for the charge is $Q=Y+u-d$, but I don't know how, or if, the field $A_\mu$ will change.
Answer: I gather you are using the (sensible!) "minority usage convention" for the Weak Hypercharge, $Q=T_3+Y$, so, then, half of what appears on this WP article table. This is the simplest and most tasteful one, anyway. (But not for long...)
You wish to contemplate a hypothetical world in which the Higgs hypercharge grows from 1/2 to 1, so it doubles. Since the hypercharge is the quantity least connected to observations, let us take g to stay the same. It is a little like taking your bicycle apart and putting it back together with double its length, to see how it works.
The starting point is the Higgs mechanism, "the first job" of the Higgs. The crucial piece in this mechanism which enters in the gauge field mass matrix to be diagonalized is the hypercharge U(1) gauge field $B_\mu$ coupling to the Higgs. Was
$$
D_\mu H= \partial_\mu H -ig {\mathbf W}_\mu \cdot \frac{\mathbf \tau}{2} H -i \frac{g'}{2} B_\mu H ,
$$
now going to
$$
... -i g' B_\mu H ,
$$
keeping everything else the same. We can then use all the formulas of the standard model the same, substituting $g''\equiv 2g'$ for $g'$ in all the expressions.
So, for example, the Weinberg angle will now increase from about 30 degrees to about 49 degrees, as
$$
g''/g=\tan \theta_W''= 2 \tan \theta_W \approx 2/\sqrt{3}.
$$
As a result, there is more mixing, and the Z grows even heavier than the W, and the photon becomes less hyperchargy $B_\mu$ and more $W^3_\mu$-like,
$$
A_\mu = \sin \theta''_W ~ W_\mu ^3+ \cos \theta''_W B_\mu~.
$$
The electric charge $e=g \sin \theta''_W$ will thus go up by a factor of 1.5. You can further see the coupling ZWW decreases, etc...
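A quick numerical check of these two claims (my own sketch, assuming only the tree-level relations quoted above, $\tan \theta_W'' = 2 \tan \theta_W$ and $e = g \sin \theta_W$, with $\theta_W \approx 30$ degrees):

```python
import math

theta_w = math.radians(30)                   # approximate measured Weinberg angle
theta_w2 = math.atan(2 * math.tan(theta_w))  # doubling g' doubles the tangent

theta_deg = math.degrees(theta_w2)
ratio = math.sin(theta_w2) / math.sin(theta_w)  # e''/e at fixed g

print(theta_deg)  # ~49.1 degrees, the increased mixing angle
print(ratio)      # ~1.51, the factor-1.5 increase of the electric charge
```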
In essence, we have transported ourselves to 1972, before neutral currents and the measurement of the Weinberg angle.
Now, however, $Q=T_3 +1$ won't do anymore after the $1\mapsto 2$ transition, as the Higgs doublet needs to be anchored to have charges $0,\pm 1$, for the Goldstone bosons to be eaten right by the W triplet! ... and the leftover H to stay neutral.
We must then adjust this to $Q=T_3+Y/2$,
and for U(1) invariance, carry this over to the Yukawa couplings, the second job of the Higgs, so they all conserve weak isospin, hypercharge, and charge, as before. You must then adjust your hypercharge table for fermions, to actually yield the present charges, so $Y(e^-_L)=-1=Y(\nu_L)$, $Y(e^-_R)=-2$, $Y(u_L)=1/3 =Y(d_L)$, etc...
So charges and their assignments have stayed the same, but the hypercharges of the fermions have adjusted to the new charge-isospin-hypercharge relation: the hypercharge is now twice the average charge of the isomultiplet. I understand this is not what you had in mind, but I gather you see the logic of it now. | {
"domain": "physics.stackexchange",
"id": 46697,
"tags": "particle-physics, standard-model, higgs, electroweak, models"
} |
Recurrence Equation Question | Question: I have some difficulty trying to tell which equation to use when I'm given an explanation on how an algorithm operates. Especially divide and conquer.
Normally I see these kind of equations:
C(n) = aC(n/a) + b where a and b are constants
Other times I don't see the a in front of C(n/a) as an answer. That really confuses me.
Can you tell me when I will need to use which?
Thanks!
Answer: We can't tell you when you need to use which, since it depends on the algorithm. If the algorithm makes $a$ recursive calls to subproblems of size $n/a$, then you will see the $a$ in front. If it makes only one recursive call, then you won't see it. You need to understand what the algorithm is doing. That said, you will tend to see the $a$ in front when you are applying some operation that updates the entire list, and not see it in front when you're trying to focus on some specific element. There could also be values in front other than $1$ and $a$. | {
"domain": "cs.stackexchange",
"id": 1856,
"tags": "algorithms, recurrence-relation"
} |
ROS base - rviz | Question:
Can I install/start rviz when I have only installed ROS base (Ubuntu 10.04, electric)?
Originally posted by Janina on ROS Answers with karma: 11 on 2012-04-18
Post score: 0
Answer:
You should install the ROS visualization stack if you want to use RVIZ:
sudo apt-get install ros-electric-visualization
You could basically also check out the RVIZ source and manually install all dependencies, but I think you really should NOT do this...
Originally posted by michikarg with karma: 2108 on 2012-04-18
This answer was ACCEPTED on the original site
Post score: 3 | {
"domain": "robotics.stackexchange",
"id": 9024,
"tags": "rviz"
} |
Cyber Security - recommended reading | Question: I'm applying for a doctoral training in Cyber Security, although I come from a maths background. I've been told that my background meets the entry criteria, although for my application I would like to be able to say that I've looked into Cyber Security in at least some detail.
Given that I will have very little time to do any reading before applying (about 2 weeks), what would you recommend I read? A few chapters from an academic text, or a popular book on the topic?
Specific book/paper recommendations would be appreciated.
Answer: Take a peek at Anderson's classic "Security Engineering" (PDF legally available for free at the link). It won't help you with the details, but it gives an overview of the whole area, for (more or less) laypeople. | {
"domain": "cs.stackexchange",
"id": 5599,
"tags": "reference-request"
} |
What is the capacitance of a parallel plate capacitor with different areas? | Question: Suppose there is a parallel-plate capacitor but the two plates have different areas, $A_1$ and $A_2$. How will we derive an expression for its capacitance?
I have been told to take the area common between the two, but from where does this follow?
Ideally we start off by assuming that one plate has $+Q$ charge and the other has $-Q$. Then we find that the potential difference $V$ is directly proportional to $Q$, from which we find capacitance $C$.
I am unable to find the potential difference.
Answer: The capacitance is a result of the polarization of the medium due to electric field and the attraction of charges on one plate due to the charge on the other (as mediated by the electric field).
When you have two plates facing each other, the electric field is present in their common area (ignoring small fringe effects).
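In formulas, this leads to the standard fringe-free result $C = \varepsilon_0 A_\text{overlap} / d$; a minimal numeric sketch (values chosen here just for illustration):

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def parallel_plate_capacitance(overlap_area_m2, separation_m):
    # Ideal parallel-plate formula: only the overlap area contributes
    return EPS0 * overlap_area_m2 / separation_m

# Plates of 1 cm^2 and 4 cm^2, fully overlapping over 1 cm^2, separated by 1 mm:
c = parallel_plate_capacitance(1e-4, 1e-3)
print(c)  # ~8.9e-13 F, set by the overlap alone
```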
This is why you use the area of overlap to compute the capacitance. | {
"domain": "physics.stackexchange",
"id": 49157,
"tags": "electrostatics, capacitance"
} |
Rosbag package not found-fuerte | Question:
Hi,
I am using fuerte
I am getting the following error when I am trying to rosmake packages laser_filters,laser_assembler,costmap_2D etc. "Package rosbag was not found in pkg-config search path.Perhaps you should add the directory containing 'rosbag.pc' to PKG_CONFIG_PATH environment variable."
I have rosbag installed and when i checked the directory "/opt/ros/fuerte/lib/pkgconfig"
rosbag.pc seems to be present there. I am confused why I am still getting the error!
Can any one please help me regarding this
Regards,
Radhika
Originally posted by Radhika on ROS Answers with karma: 1 on 2012-10-23
Post score: 0
Answer:
Why are you trying to rosmake the packages? Just use the debian packages:
sudo apt-get install ros-fuerte-laser-pipeline ros-fuerte-navigation
I guess the problem is that you checked out the development versions of the packages which are probably already ported to Groovy (the upcoming ROS Distro). If you really need to compile the packages from source, make sure you check out the Fuerte branch.
Originally posted by Lorenz with karma: 22731 on 2012-10-23
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 11488,
"tags": "navigation, build, ros-fuerte, laser-pipeline, source"
} |
Voice commands / speech to and from robot? | Question:
I have used the sound_play package with festival to synthesize voices to make my robot "talk", but I would also like to be able to command the robot by voice.
Basically, I feel like someone has used a tool like CMU Sphinx with ROS, but I am unable to find any examples.
Originally posted by evanmj on ROS Answers with karma: 250 on 2011-02-21
Post score: 13
Answer:
It's quite experimental and definitely not documented, but we have been using PocketSphinx to do speech recognition with ROS. See the cwru_voice package for source.
If you run the voice.launch file (after changing some of the hardcoded model paths appropriately in whichever node it launches), you should be able to get certain keywords out on the "chatter" topic. As an example, voice.launch should recognize a command to "Open the Door" or "Go to the hallway" and output a keyword on the chatter topic. If you do try it out and have problems, let me know as you would be the first outside our lab to try it that I know of.
Stanford also has a speech package in their repository. EDIT: Thanks to @fergs for finding the Stanford package.
UPDATE: Make sure to take a look at Scott's answer below for a nice tutorial and demo code for getting speech recognition up and running for your own uses.
Originally posted by Eric Perko with karma: 8406 on 2011-02-21
This answer was ACCEPTED on the original site
Post score: 11
Original comments
Comment by fergs on 2011-02-22:
Here's the sail-ros-pkg version you were probably thinking about: https://sail-ros-pkg.svn.sourceforge.net/svnroot/sail-ros-pkg/trunk/semistable/audio/speech/
Comment by Eric Perko on 2011-02-21:
I hadn't seen that package before. Thanks for the info.
Comment by evanmj on 2011-02-21:
Thanks. I'll give it a go when I get a chance. I found another implementation of some sort here: http://www.ros.org/wiki/ua_language | {
"domain": "robotics.stackexchange",
"id": 4828,
"tags": "ros, sound-play"
} |
How to apply a gate to a LittleEndian in Q# | Question: I have been given a LittleEndian register. I want to know the following things:
How many bits are there in the LittleEndian?
How to convert the LittleEndian into a Qubit[] Array.
How to access individual qubits of LittleEndian?
How to apply simple and controlled gates onto individual qubits of LittleEndian?
Answer: The LittleEndian type is basically a wrapper for a register of qubits to let the user know how to interpret it as another value. It changes nothing about the register it wraps.
There is no fixed number of bits in a LittleEndian; it only documents that the least significant bit of a register is index 0 (on the left).
If you want to get back just the register not wrapped in the LittleEndian type, you can use the ! operator like this:
using (qubits = Qubit[3]) {
    let register = LittleEndian(qubits);
    ResetAll(register!);
}
Similar to #2, if you use ! to unwrap the LittleEndian type you can then index it like normal.
Same as #3, just unwrap the type and you should be able to do the gates as you would regularly. The controlled functor may be of use to you.
I also have a section in my book that talks about how to use UDTs or User Defined Types which LittleEndian is an example of (provided by the Numerics library) | {
"domain": "quantumcomputing.stackexchange",
"id": 1654,
"tags": "programming, q#"
} |
Why does static electricity not make a charged body reflective? | Question: If mirrors work by deflecting photons off free electrons in the surface layer of the mirror, it should be possible to take a glass pane and provide it with extra free electrons by giving it a massive static electricity charge, so that it becomes reflective. But it seems it would not. Why?
Answer: From the wiki article on the coulomb
Since the charge of one electron is known to be about 1.60217657×10^−19 coulombs, a coulomb can also be considered to be the charge of roughly 6.241509324×10^18 electrons.
Reflection from metals, the usual substrate of mirrors, involves the Fermi-level electrons of the material. Silver, with a $5.5\,$eV Fermi level, has a free electron density of $5.9\cdot10^{28}\frac{1}{m^3}$, so even a full coulomb of electrons does not add many in comparison.
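To put rough numbers on that comparison (my own back-of-the-envelope sketch; the micrometre-deep surface layer is an arbitrary illustrative choice, and 1 C is already an enormous static charge):

```python
E_CHARGE = 1.602e-19   # charge of one electron, C
N_SILVER = 5.9e28      # free (conduction) electron density of silver, 1/m^3

added = 1.0 / E_CHARGE            # electrons delivered by a 1 C static charge
in_layer = N_SILVER * 1.0 * 1e-6  # electrons already in a 1 m^2, 1 micrometre layer
ratio = in_layer / added
print(ratio)  # ~9.5e3: even a thin surface layer outnumbers the added electrons
```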
I suspect that is why extra charge would not make for measurably better reflectivity | {
"domain": "physics.stackexchange",
"id": 70034,
"tags": "optics, electrostatics, photons, reflection"
} |
What is “International Service of Weights and Measures”? | Question: In the Resolution 2 of the 3rd meeting of the CGPM, defining the kilogram, “the International Service of Weights and Measures” is mentioned (the French original text reads “le Service international des Poids et Mesures”). I wonder which organization this should be, and I am unable to find any other mention on Google except quotations of the resolution.
(Sorry if this question is considered off-topic here, I thought Physics would be the best match.)
Answer: It was an early name of what became the trinity of the International Bureau of Weights and Measures (BIPM), the General Conference on Weights and Measures (CGPM) and the International Committee for Weights and Measures (CIPM).
The phrase "international service of weights and measures" or its French equivalent seems to have been used earlier at the CIPM meeting of 1887 related to adoption of the centigrade scale of the hydrogen thermometer. | {
"domain": "physics.stackexchange",
"id": 1144,
"tags": "si-units"
} |
Difference between Gabor filtering and Discrete Wavelet Transform | Question: Both Gabor filtering and discrete wavelet transform (DWT) analyze the image in both spatial and frequency domains, unlike Fourier transform which analyzes the image only in the frequency domain. What is the difference between DWT and Gabor filtering?
Answer: Per se, a Gabor filter in image processing is one linear filter at a certain scale and 2D frequency used for orientation filtering and texture analysis.
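As a concrete sketch of such a filter (my own minimal NumPy implementation, not part of the original answer), here is the real part of one 2-D Gabor kernel: a Gaussian envelope times an oscillation at a chosen orientation and spatial frequency:

```python
import numpy as np

def gabor_kernel(size, sigma, theta, freq):
    """Real 2-D Gabor kernel: Gaussian envelope times a plane wave at angle theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_rot = x * np.cos(theta) + y * np.sin(theta)   # coordinate along the wave direction
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * x_rot)

kernel = gabor_kernel(size=15, sigma=3.0, theta=0.0, freq=0.2)
# convolving an image with `kernel` responds most strongly to texture oscillating
# at ~0.2 cycles/pixel along the chosen orientation
```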
It would be easier to compare Gabor representations and discrete wavelet transforms. Both are related to a linear decomposition of possibly multidimensional data at different scales with more or less oscillating functions. The main differences are:
Discrete wavelets: critical or non-redundant scheme (stable, invertible), discretize some families of wavelets, more scale-based though wavelets range from weakly oscillating (Haar) to strongly oscillating (Shannon), often applied separably across dimensions so weakly directional, a lot of statistical and approximation properties (moments, regularity). Their redundant counterparts: discretized continuous wavelets, shift-invariant or stationary wavelets.
Gabor transforms: (highly) redundant decomposition, mostly one shape: a modulated Gaussian, more frequency-based though computed at a couple of scales (Gaussian spread), inherently non-separable or directional. Their non-redundant counterparts: modulated or orthogonal lapped transforms, Malvar wavelets. | {
"domain": "dsp.stackexchange",
"id": 10044,
"tags": "wavelet, gabor"
} |
Running multiple Kobuki bases in the same roscore by changing node names | Question:
Hi! I need to run two Qbot2s (Kobuki bases) within the same network using ROS Kinetic distro. I have figured out that I need to launch them with different node names.
So I edited my kobuki/kobuki_node/launch/minimal.launch file by giving namespaces to the nodes.
<launch>
<arg name="kobuki_publish_tf" default="true"/>
<node pkg="nodelet" type="nodelet" name="mobile_base_nodelet_manager" ns="qbot0_nm" args="manager"/>
<node pkg="nodelet" type="nodelet" name="mobile_base" ns="qbot0" args="load kobuki_node/KobukiNodelet mobile_base_nodelet_manager">
<rosparam file="$(find kobuki_node)/param/base.yaml" command="load"/>
<param name="publish_tf" value="$(arg kobuki_publish_tf)"/>
<remap from="mobile_base/odom" to="odom"/>
<remap from="mobile_base/joint_states" to="joint_states"/>
</node>
<node pkg="diagnostic_aggregator" type="aggregator_node" name="diagnostic_aggregator" ns="qbot0_da">
<rosparam command="load" file="$(find kobuki_node)/param/diagnostics.yaml" />
</node>
</launch>
Here I have given namespace parameter for all three nodes available. When I launch the given file, I can see the nodes registered with new names.
>> rosnode list
/qbot0/mobile_base
/qbot0_da/diagnostic_aggregator
/qbot0_nm/mobile_base_nodelet_manager
/rosout
But when I look at the rostopic list, I still cannot see the topics relevant to Kobuki base (eg: odom, joint_states etc. ). Here is my rostopic list,
>> rostopic list
/diagnostics
/diagnostics_agg
/diagnostics_toplevel_state
/rosout
/rosout_agg
This is the structure of the rostopic list I should be getting, (generated by using the original launch file)
>> rostopic list
/diagnostics
/diagnostics_agg
/diagnostics_toplevel_state
/joint_states
/mobile_base/commands/controller_info
/mobile_base/commands/digital_output
/mobile_base/commands/external_power
/mobile_base/commands/led1
/mobile_base/commands/led2
/mobile_base/commands/motor_power
/mobile_base/commands/reset_odometry
/mobile_base/commands/sound
/mobile_base/commands/velocity
/mobile_base/controller_info
/mobile_base/debug/raw_control_command
/mobile_base/debug/raw_data_command
/mobile_base/debug/raw_data_stream
/mobile_base/events/bumper
/mobile_base/events/button
/mobile_base/events/cliff
/mobile_base/events/digital_input
/mobile_base/events/power_system
/mobile_base/events/robot_state
/mobile_base/events/wheel_drop
/mobile_base/sensors/core
/mobile_base/sensors/dock_ir
/mobile_base/sensors/imu_data
/mobile_base/sensors/imu_data_raw
/mobile_base/version_info
/mobile_base_nodelet_manager/bond
/odom
/rosout
/rosout_agg
/tf
I have an intuition that I need to map the new node names with the expected topics, but I do not know how to. Can you help me identify the way I should do this task?
Thanks in advance.
Originally posted by TharushiDeSilva on ROS Answers with karma: 79 on 2019-03-05
Post score: 0
Answer:
You'll need to run the entire launch file in a namespace. Look into ROS_NAMESPACE or (probably preferred): create a new launch file that uses the ns attribute on three include tags.
There's also quite a few "multiple turtlebot" posts here on ROS Answers. I'd recommend to take a look at those.
Edit:
I included the whole content of the launch file under the required namespace.
Just making sure: the include tag supports the ns attribute, so you can "push down" entire launch files (and their includes) into namespaces easily that way. There is no need to do that manually.
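For illustration, a minimal top-level launch file along those lines could look like this (a sketch only, not a tested configuration; the qbot0/qbot1 namespace names follow the question):

```xml
<launch>
  <!-- Push the stock minimal.launch (and everything it includes)
       into one namespace per robot via the ns attribute -->
  <include ns="qbot0" file="$(find kobuki_node)/launch/minimal.launch"/>
  <include ns="qbot1" file="$(find kobuki_node)/launch/minimal.launch"/>
</launch>
```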
Originally posted by gvdhoorn with karma: 86574 on 2019-03-05
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by TharushiDeSilva on 2019-03-05:
@gvdhoorn, Thank you so much for your comment. I looked into some turtlebot answers, and as you mentioned, I included the whole content of the launch file under the required namespace. Then It worked!! Thanks again. | {
"domain": "robotics.stackexchange",
"id": 32586,
"tags": "ros-kinetic, kobuki"
} |
Setting up workspace | Question:
I recently had to reinstall ros and roll back to electric.
I have been trying to configure my system to use my ~/ros_workspace folder instead of the default /opt/ros/electric/
I've tried altering the ROS_PACKAGE_PATH and still cannot get ros to recognize that folder.
roscd continues to take me to /opt/ros/electric and will not locate my packages even when I explicitly call them.
I've tried following all the instructions I could find in the tutorials and here in the forums, but I'm at a bit of a loss.
Originally posted by DocSmiley on ROS Answers with karma: 127 on 2012-07-20
Post score: 1
Answer:
First, switching a workspace from fuerte back to electric can always cause errors when compiling / building sources. Be aware of that. It is safer to create separate workspaces for different ROS versions.
Next, I suggest that you use rosws / rosinstall to create and manage your workspaces.
Third, open your .bashrc in some editor, and find any place mentioning ROS_ROOT, ROS_WORKSPACE, or setup.sh.
Delete all those lines. Regarding ROS, there should only be one line in your .bashrc, that reads:
source ~/ros_workspace/setup.sh
Even if it can work, managing the ROS_PACKAGE_PATH yourself almost always leads to bad ROS_PACKAGE_PATH in the long run and confusing errors.
Originally posted by KruseT with karma: 7848 on 2012-07-20
This answer was ACCEPTED on the original site
Post score: 6 | {
"domain": "robotics.stackexchange",
"id": 10297,
"tags": "ros"
} |
First Telescope SkyWatcher Heritage 5" Tabletop Dobsonian Telescope | Question: I have been star gazing for a while with a pair of binoculars and now I am ready to step it up a bit and purchase a telescope.
I have been looking at the SkyWatcher Heritage 5" Tabletop Dobsonian Telescope.
Would this be a suitable first telescope? I hope to be able to make out the rings of Saturn using the stock eyepiece.
Or would this be considered overkill, and would something like the Celestron 21035 70mm Travel Scope be more suitable?
Answer: No telescope that fits your budget and observing habits is overkill.
As you may have read, the bigger the aperture, the more light it collects, and the finer detail it can resolve in steady air.
A 5-inch f/5 reflector (650mm focal length) is good for wide-field views of deep-sky objects and easy to transport to a dark site.
It should show the rings of Saturn clearly with a 10mm eyepiece (65x magnification); a shorter eyepiece could double that without losing clarity.
However, with such a short optical tube assembly, you might either operate mostly on your knees or put the mount on something less sturdy than the ground.
Also if the secondary mirror is exposed as shown here, it's vulnerable to dew, and stray light can enter the eyepiece and reduce contrast.
If you're primarily interested in planets, a 6-inch f/8 reflector (1200mm focal length) could be an even better choice.
A 10mm eyepiece gives 120x magnification, the primary mirror is less sensitive to collimation error, and the secondary mirror can be smaller for slightly better contrast.
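The magnifications quoted here follow the usual rule, magnification = telescope focal length / eyepiece focal length; a one-line sketch:

```python
def magnification(scope_focal_mm, eyepiece_focal_mm):
    # Standard rule: telescope focal length divided by eyepiece focal length
    return scope_focal_mm / eyepiece_focal_mm

print(magnification(650, 10))   # 65.0  -- the 5-inch f/5 with a 10mm eyepiece
print(magnification(1200, 10))  # 120.0 -- the 6-inch f/8 with the same eyepiece
```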
On the other hand, you might want an additional eyepiece longer than 25mm for better views of certain faint objects. | {
"domain": "astronomy.stackexchange",
"id": 3092,
"tags": "telescope, star-gazing"
} |
Cosmological principle: can there be a center of the universe "outside" the universe? | Question: I've been watching youtube videos about the cosmological principle. I understand that the expansion of the universe is not concentric (around a specific point in space). The balloon example helped me understand this better: if the universe were the surface of an inflating balloon, every point on the surface could be expanding without any "center" of expansion on the balloon surface.
However, we as a 3-dimensional observer could see that there is an actual center of the balloon which doesn't reside on the surface but rather inside it, and from which everything is expanding out uniformly (It's a little hard for me to define the true center of the balloon, but I hope you get the intuition).
Now I'm thinking, maybe there is such a true center of the universe which resides not on the universe itself, but on some higher-dimensional space, of which the universe is a subspace.
I know there must be bulks of research about this but I can't seem to find the right keywords to search for it. So I'd appreciate any explanations/resources on this idea!
Answer: This kind of confusion arises when you take the visualizations too seriously. By definition, the universe is all there is, so there can't be anything outside it.
Now, let's come to your balloon example. The problem here is, that in order to visualize a curved 2-dimensional space (for example a sphere), we have to embed it into a three-dimensional space. But this is only done for visualization! Mathematically, only the surface of the balloon exists, not the space around it. You can define a curved space, without defining any higher-dimensional space into which it is curved.
Only the surface exists, so there is no such thing as a center.
Now, you could imagine a theory where what we call the universe is actually just a part of a higher-dimensional space. People are doing that. String Theory is such an example. So let's imagine that our universe is actually a sphere inside a higher-dimensional space. Now, the sphere really does have a center. Would this violate the cosmological principle? No, because the cosmological principle only applies to our universe, which by definition is only the surface of the sphere.
Another thing: There are actually 3 possible shapes the universe can have that satisfy the cosmological principle. A closed sphere (the one you mentioned), a flat space or an open hyperboloid. And all current data that we have suggests that it is actually flat. | {
"domain": "physics.stackexchange",
"id": 95576,
"tags": "cosmology, spacetime, space-expansion, universe, visualization"
} |
building android_core failed | Question:
I am trying to build rosjava_core and android_core from source with Android SDK (r17) and ROS (electric). Here are the steps I've tried:
hg clone https://code.google.com/p/rosjava rosjava_core
hg clone https://code.google.com/p/rosjava.android android_core
From http://docs.rosjava.googlecode.com/hg/rosjava_core/html/building.html
roscd rosjava_core
./gradlew install
FAILURE: Build failed with an exception:
Exception occurred:
File "/usr/lib/python2.7/dist-packages/pygments/lexers/__init__.py", line 80, in get_lexer_by_name
raise ClassNotFound('no lexer for alias %r found' % _alias)
ClassNotFound: no lexer for alias u'groovy' found
The full traceback has been saved in /tmp/sphinx-err-xdEEl7.log, if you want to report the issue to the developers.
Please also report this if it was a user error, so that a better error message can be provided next time.
Either send bugs to the mailing list at <http://groups.google.com/group/sphinx-dev/>,
or report them in the tracker at <http://bitbucket.org/birkenfeld/sphinx/issues/>. Thanks!
make: *** [html] Error 1
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':docs:install'.
> Command 'make' finished with (non-zero) exit value 2.
* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output.
BUILD FAILED
I've tried with: ./gradlew install -x docs:install
BUILD SUCCESSFUL
Total time: 1 mins 13.005 secs
From http://docs.rosjava.googlecode.com/hg/android_core/html/building.html
roscd android_core
./gradlew debug
BUILD FAILED
/home/hado/AndroidSDK/android-sdk-linux/tools/ant/build.xml:651: The following error occurred while executing this line:
/home/hado/AndroidSDK/android-sdk-linux/tools/ant/build.xml:672: Compile failed; see the compiler error output for details.
Total time: 3 seconds
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':android_honeycomb_mr2:debug'.
> Command 'ant' finished with (non-zero) exit value 1.
* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output.
BUILD FAILED
Could you please give me an idea of how to fix these failures? Thanks.
Originally posted by hado on ROS Answers with karma: 11 on 2012-04-08
Post score: 1
Original comments
Comment by damonkohler on 2012-04-11:
As suggested, please post the output from running with --info/--debug/--stacktrace since the cause of the build failure is not in the output you shared.
Answer:
You're missing the compressed_visualization_transport_msgs package. If you install the google stack (http://ros.org/wiki/google, ./gradlew install) this error should go away. You may need to try ./gradlew clean debug.
Originally posted by damonkohler with karma: 3838 on 2012-04-22
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 8907,
"tags": "android-core, android"
} |
Is numerical lattice wavefunction smooth? -- graphene tight binding case | Question: I tried to follow exactly Sec. II.K [page 112-113, Hamiltonian after Eq. (113)] of the standard Review of Modern Physics paper on graphene, which is a tight-binding model of a graphene stripe under magnetic field.
It's periodic and hence Fourier transformed along x, but open along y.
The resulting Landau-level-like energy spectrum looks perfectly fine, as in the paper. However, I got confused by the wavefunctions, since they look somewhat messy, sawtooth-like, and not smooth.
I haven't played with tight-binding models much and am not sure if this is correct or not. Perhaps one shouldn't expect lattice wavefunctions to be smooth at all?
Another question is whether Landau level (LL) degeneracy is in general lifted in lattice models. If so, is the lattice LL a certain superposition of many degenerate LLs, which depends on the lattice model details?
Here I plot, at a certain $k_x$ around the flat bands, norms of wavefunctions of the lowest 4 armchair bands (from left to right) on one sublattice.
Answer: I have one possible explanation. I used the infinite square well Hamiltonian (I didn't pay attention to boundary conditions since it will not matter for the big picture). Then I calculated the eigenvectors/eigenvalues in Mathematica.
n = 50;
d = KroneckerDelta;
H = -Table[d[Abs[i - j] - 1] - 2 d[i - j], {i, 1, n}, {j, 1, n}];
v = Eigenvectors@H;
e = Eigenvalues@H;
The unsorted energy spectrum e looks as follows
Notice two things: firstly the lowest energies are to the right, which tells us that you generally can't trust the order of these energies. You would probably want to sort lowest to highest and sort the eigenvectors along with them. Secondly we would expect a quadratic dependence on $k$ where $k$ is the kth eigenvalue. Here $k$ happens to be the wavenumber like in $\psi=A\sin kx$ as you will see from the plots. So we would expect $E\propto k^2$. You can see that this quadratic dependence holds for low values of $k$ but fails for small wavelengths (large $k$). Again the energies are not sorted so large $k$ is on the left in this plot.
Now let's plot two of these eigenvectors
The first and second picture correspond to $k=2$ and $k=n-2$ respectively. You can see that $k=2$ behaves nicely like we expected. For $k=n-2$ the solution looks like a sawtooth like in your solutions. You can interpret this as the discrete approximation failing for high $k$. When $k$ is high the approximation
$$\frac{d^2\psi}{dx^2}\approx \psi(x+a)-2\psi(x)+\psi(x-a)$$
works less and less well so we can expect the solutions to deviate from the exact answer. I'm not sure if this is what happens in your case but this should be something to keep in mind.
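The same experiment is quick to reproduce with numpy. A minimal sketch (my own, not part of the original answer) that counts how often each eigenvector changes sign between neighboring sites, making the low-k/high-k contrast explicit:

```python
import numpy as np

# Discrete 1D chain Hamiltonian, same sign convention as the Mathematica
# code above: H = -(hopping terms - 2 on the diagonal).
n = 50
H = -(np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1) - 2.0 * np.eye(n))

# Unlike Eigenvectors@H, np.linalg.eigh already returns energies ascending.
energies, vectors = np.linalg.eigh(H)

def sign_flips(v):
    """Count sign changes between neighboring sites of an eigenvector."""
    return int(np.sum(np.sign(v[:-1]) * np.sign(v[1:]) < 0))

low = vectors[:, 1]    # low-k mode: smooth, sine-like
high = vectors[:, -3]  # high-k mode: sawtooth-like, flips sign almost everywhere
print(sign_flips(low), sign_flips(high))
```

The low-k mode changes sign only once, while the high-k mode alternates between almost every pair of neighboring sites, which is exactly the sawtooth appearance discussed above.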
EDIT This probably won't be useful for you anymore given this was asked a year ago but I still answered because this seems to be a problem that more people face. | {
"domain": "physics.stackexchange",
"id": 69263,
"tags": "condensed-matter, solid-state-physics, computational-physics, graphene, tight-binding"
} |
Mechanism by which $lacI^{d}$ is a dominant mutation, impairing the function of normal copies of the Lac Repressor | Question: The Jacob-Monod model for the lac operon was based on experiments using two strains of bacteria which constitutively expressed $\beta$-gal: $I^{c}$ (mutation in the gene lacI, which encodes the repressor) and $O^{c}$ (mutation in the operator, the site where the repressor binds).
$I^{c}$ mutants are usually recessive: the Lac Repressor cannot bind to the operator, however, if a wild-copy of the gene is present in a merodiploid, the inducible pattern of expression is restored, because the Lac Repressor acts in trans (that is, it will inhibit expression of both operons when lactose is not present).
However, I've read that there's a strain ($lacI^{d}$) which is dominant, so the expression is constitutive even in the presence of the normal repressor. According to the article linked, abnormal subunits may mix with normal subunits, resulting in a dysfunctional tetramer, even if a wild-type lacI is also present.
In terms of protein structure, why do some mutations have this effect and act dominantly, while other mutations do not interfere with normal copies of the protein and are recessive?
Answer: The lac repressor acts as a tetramer and requires all 4 of its subunits to be able to bind DNA in order to act on the operon and repress β-galactosidase expression.
The "all 4" is the key here: if any of the 4 subunits is unable to bind DNA, then the whole complex cannot attach to the operon. The lacId mutation produces a repressor subunit that cannot bind DNA, yet can still tetramerize with other lac repressor subunits, thereby preventing WT subunits from binding the operon.
For your question on protein structure: mutations can occur at different locations in a gene and therefore affect different protein domains. Proteins contain different types of domains, such as an active site for an enzyme or a binding site for attaching to DNA. A mutation might disable one domain while leaving the others fully functional.
In your case this is exactly what happens: the mutation is in the protein region involved in binding DNA, but does not affect the region involved in protein-protein binding with the other subunits.
"domain": "biology.stackexchange",
"id": 3623,
"tags": "biochemistry, molecular-biology, gene-expression"
} |
Understanding P vs NP | Question: I want to make sure my understanding on P vs NP is correct. I know that NP-complete problems cannot be solved in polynomial time, and if P != NP, then all problems in NP cannot be solved in polynomial time. Furthermore, if we consider the church turing thesis, and if P != NP, then no computational formalism can solve NP-complete problems in polynomial time. Would I be fine saying this?
Answer:
I know that NP-complete problems cannot be solved in polynomial time.
We don't know this. This is exactly the P vs NP question. NP-complete problems can be solved in polynomial time iff P=NP.
If P ≠ NP, then all problems in NP cannot be solved in polynomial time.
P is a subset of NP, so some problems in NP can definitely be solved in polynomial time.
Furthermore, if we consider the Church-Turing thesis, and if P ≠ NP, then no computational formalism (ever) can solve NP-complete problems in polynomial time.
The Church–Turing thesis isn't really relevant here, since it is about computability rather than complexity. It could be that P ≠ NP but NP-complete problems can be solved in polynomial time by randomized algorithms or by quantum algorithms (though both of these are considered unlikely). | {
"domain": "cs.stackexchange",
"id": 17495,
"tags": "np-complete, decision-problem, p-vs-np"
} |
In this ray diagram, a plane mirror seems to form a real image | Question: In this ray diagram the image formed seems to be real with the given position of the eye. I have learnt that plane mirrors cannot form real images at any circumstance. But at this one it does. Please explain the answer like I'm 5 and how you deduced what you propose.
Answer: Farcher's answer is correct. But it can be elaborated a bit to make it easier to understand.
If you observe the above ray diagram for real images, you can see that the real images are formed when rays from the same point of an object intersect to form an image.
As your image shows, this is not the case. Rays from the top and bottom of the object intersect at the eye. Hence the image formed is not real.
As Farcher's modified image shows, the rays from the same point never intersect, but diverge. Hence the image formed is virtual.
"domain": "physics.stackexchange",
"id": 78848,
"tags": "optics, geometric-optics"
} |
Viewing a solar eclipse through a leafy tree | Question: Viewing my Facebook feed today, my local news station posted regarding a solar eclipse taking place today:
Note the line about using a leafy tree as a filter:
Scientists say NOT to directly look at the sun (if skies are clearing where you are), but looking at the eclipse through a leafy tree creates a natural filter!
I've heard a lot of people recommend viewing a solar eclipse by looking at the projection of the eclipse caused by light passing between leaves in a tree, in this case looking at the ground or a wall, where ever the light hits after passing through the tree. However, the Facebook posting here seems to recommend looking up at the solar eclipse through the tree.
I suspect that they have misunderstood the projection method (or inadequately explained), and their advice is not only wrong, but potentially dangerous.
Is this method recommendable? Should it be considered dangerous?
Follow-up
Almost immediately upon having seen the posting I had commented on it noting my belief that such advice could be dangerous, including a more clear explanation of how viewing using trees should be done. The staff member who originally posted the status later took note of this and edited the status to less ambiguously explain viewing the projection. Unfortunately, this was done after the solar eclipse had concluded. Hopefully no damage was done.
Answer: It seems that the station has, in fact, given out extremely dangerous advice - simply because it was poorly phrased.
Viewing a solar eclipse is incredibly dangerous - mostly under circumstances where you're looking towards the Sun. As Wikipedia warns,
Looking directly at the photosphere of the Sun (the bright disk of the Sun itself), even for just a few seconds, can cause permanent damage to the retina of the eye, because of the intense visible and invisible radiation that the photosphere emits. This damage can result in impairment of vision, up to and including blindness.
You couldn't pay me enough to so much as face toward the Sun and look at my feet and attempt to view a solar eclipse out of the corner of my eye!
There are safe ways to view an eclipse, the most common being using a pinhole camera to indirectly view the eclipse. Not only are you not looking directly at the Sun, but you can actually turn away and look at the projection.
But here's where I think the station messed up. According to CNN,
Or let a tree do the work for you. "Overlapping leaves create a myriad of natural little pinhole cameras, each one casting an image of the crescent-sun onto the ground beneath the canopy," NASA says.
So NASA actually advocates using the leaves as a pinhole and using indirect viewing methods - i.e. looking anywhere but the Sun - as opposed to watching the eclipse through the leaves. | {
"domain": "astronomy.stackexchange",
"id": 568,
"tags": "solar-eclipse"
} |
Would salt cause a pistachio nut not to be able to germinate? | Question: If I buy a bag of pistachio nuts that have been hand picked off the tree when ripe, then salted in a (light) saltwater solution, then vacuum-sealed, and stored in a refrigerator, then will they germinate if I take them out and soak them in a moistened towel laying out in the sun?
Answer: This article https://wikifarmer.com/how-to-grow-pistachio-tree-from-nut/
says unsalted. Lots more on that site about growing them. | {
"domain": "biology.stackexchange",
"id": 8166,
"tags": "trees, seeds, germination"
} |
RF-Chain Signal Delay for Sensor Switching | Question:
Let's say we have an RF chain as above, with a system bandwidth from the HP corner to the green line of, say, 1 MHz.
The signal accumulates delay as it passes through this analog chain, and because it is a non-linear-phase filter chain, the group delay likely looks as shown. Now, at the ends of LP1 and LP2 we have some form of detectors, which take in the signal and perform some type of functional detection on it. However, the most important part of the detection is the critical control of switches s1 and s2, to determine when the signal coming out of the two LPFs is actually valid.
So the question is: what is a valid signal out of LPF1, given that the input cannot be a fixed single-frequency component (otherwise we could just use the phase delay)? Here the bandwidth is large, up to 1 MHz.
So essentially we have to determine the optimum read-out frequency, or some statistical values, to find the optimum switching times.
Can you please suggest how one can actually go about doing this?
Without any new hardware design: let's try only signal processing and statistical analysis.
Thanks
Answer: The OP clarified in comments under the question that the intention is to compensate for group delay distortion introduced in the hardware. The typical approach to compensating for this optimally (in a least-squares sense) with processing alone is to use the Wiener-Hopf equations to determine the coefficients of an equalizer that can be implemented as an FIR filter (meaning a difference equation with only feed-forward terms). I detail the full approach in the posts linked below, but to summarize the process: the channel to be equalized (the receiver) is "sounded" with a known, spectrally rich waveform. Pseudo-random noise or frequency chirps are great choices, as they also offer high average power and thus high SNR; an impulse is a poor choice, since it is a challenge to deliver with high SNR. From that waveform and the received signal after the channel, the deconvolution can be computed in a least-squares sense to determine the effective inverse channel. Importantly, this applies to mixed-phase systems (systems with both leading and trailing echoes), which on their own can't be exactly inverted because they have zeros in the right-half plane (equivalently, outside the unit circle in discrete time). This approach is ideally suited to distortions introduced by a hardware implementation that does not change with time (within an acceptable tolerance): the channel can be sounded once, and fixed coefficients determined relatively easily in pre-processing can then be used without further modification, in contrast to the iterative algorithms needed to provide an adaptive solution for time-varying channels.
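As a concrete illustration of the sounding-plus-least-squares step, here is a toy sketch (my own; the channel taps, equalizer length, and modeling delay are arbitrary assumptions, not values from the linked posts):

```python
import numpy as np

# Sound an assumed fixed channel with a pseudo-random sequence, then solve
# for a least-squares FIR equalizer with a modeling delay. This is the
# practical form of the Wiener-Hopf solution for a known, static channel.
rng = np.random.default_rng(0)
channel = np.array([0.2, 1.0, -0.3, 0.1])  # assumed mixed-phase channel

x = rng.choice([-1.0, 1.0], size=4000)     # spectrally rich sounding signal
y = np.convolve(x, channel)[: len(x)]      # signal observed after the channel

ntaps, delay = 31, 15                      # equalizer length and modeling delay
rows = len(y) - ntaps + 1
# Data matrix: row i holds y[i .. i+ntaps-1]; the fitted output at time
# t = i + ntaps - 1 should reproduce the sounding signal delayed by `delay`.
Y = np.array([y[i : i + ntaps] for i in range(rows)])
d = x[ntaps - 1 - delay : ntaps - 1 - delay + rows]
coeffs, *_ = np.linalg.lstsq(Y, d, rcond=None)
w = coeffs[::-1]                           # FIR equalizer taps in filter order

# channel * equalizer should be close to a unit impulse at index `delay`
combined = np.convolve(channel, w)
print(int(np.argmax(np.abs(combined))))
```

The modeling delay is what lets a causal FIR approximate the anticausal part of the inverse of a mixed-phase channel; without it, the least-squares fit degrades badly.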
Details of the Wiener-Hopf Equations and shows application to determine the transfer function of the channel:
Compensating Loudspeaker frequency response in an audio signal
Equalizer Implementation Example by swapping Tx and Rx in the previous case, we can instead solve for the causal equalizer for a mixed phase system instead of the channel itself:
How determine the delay in my signal practically | {
"domain": "dsp.stackexchange",
"id": 10213,
"tags": "discrete-signals, cross-correlation, filtering, statistics, group-delay"
} |
Max velocity of turtlebot | Question:
Hello, I have been simulating a turtlebot3 waffle pi in Gazebo, and been using velocities as high as 0.5m/s. Looking at the hardware specifications: https://emanual.robotis.com/docs/en/platform/turtlebot3/features/, it says that the max velocity is 0.26 m/s. Does this mean that it can drive at 0.5m/s in the simulations, but not physically because of hardware limitations?
Originally posted by Roshan on ROS Answers with karma: 51 on 2021-12-26
Post score: 1
Original comments
Comment by osilva on 2021-12-26:
Hi @Roshan
If the simulated speed is higher than the specs, a configuration file must have been changed. I checked the repo and the simulation matches the robot spec:
BURGER_MAX_LIN_VEL = 0.22
BURGER_MAX_ANG_VEL = 2.84
WAFFLE_MAX_LIN_VEL = 0.26
WAFFLE_MAX_ANG_VEL = 1.82
LIN_VEL_STEP_SIZE = 0.01
ANG_VEL_STEP_SIZE = 0.1
https://github.com/ROBOTIS-GIT/turtlebot3/blob/master/turtlebot3_teleop/nodes/turtlebot3_teleop_key
Comment by Roshan on 2021-12-26:
That's weird, I haven't changed the configuration file, but I can still publish for example 0.5 or 1 into the cmd_vel and see changes. Does the 0.5 I'm publishing into the cmd_vel mean something other than velocity maybe?
Comment by osilva on 2021-12-26:
And here as well: https://github.com/ROBOTIS-GIT/turtlebot3_simulations/blob/master/turtlebot3_fake/include/turtlebot3_fake/turtlebot3_fake.h
#define MAX_LINEAR_VELOCITY 0.22 // m/s
#define MAX_ANGULAR_VELOCITY 2.84 // rad/s
Comment by osilva on 2021-12-26:
You can publish any speed but the robot will just go to your max speed
Comment by osilva on 2021-12-26:
I think you may be right:
void Turtlebot3Fake::commandVelocityCallback(const geometry_msgs::TwistConstPtr cmd_vel_msg)
{
last_cmd_vel_time_ = ros::Time::now();
goal_linear_velocity_ = cmd_vel_msg->linear.x;
goal_angular_velocity_ = cmd_vel_msg->angular.z;
wheel_speed_cmd_[LEFT] = goal_linear_velocity_ - (goal_angular_velocity_ * wheel_seperation_ / 2);
wheel_speed_cmd_[RIGHT] = goal_linear_velocity_ + (goal_angular_velocity_ * wheel_seperation_ / 2);
}
https://github.com/ROBOTIS-GIT/turtlebot3_simulations/blob/master/turtlebot3_fake/src/turtlebot3_fake.cpp
It never checks against the max velocity; it just uses whatever velocity is published.
Comment by Roshan on 2021-12-26:
Ah ok, so that does mean that the simulations don't actually have a max velocity, but once you start using a physical setup the max velocity will be at 0.26 m/s for linear velocity
Comment by osilva on 2021-12-26:
That’s correct. You could add the checks like in the teleop program
Comment by osilva on 2021-12-27:
Hi @Roshan, Added an answer to complete the cycle and summarize our discussion.
Answer:
As keenly observed by @Roshan, the Turtlebot3 simulation's linear velocity can be set higher than the physical robot's maximum.
The main function:
void Turtlebot3Fake::commandVelocityCallback(const geometry_msgs::TwistConstPtr cmd_vel_msg)
{
last_cmd_vel_time_ = ros::Time::now();
goal_linear_velocity_ = cmd_vel_msg->linear.x;
goal_angular_velocity_ = cmd_vel_msg->angular.z;
wheel_speed_cmd_[LEFT] = goal_linear_velocity_ - (goal_angular_velocity_ * wheel_seperation_ / 2);
wheel_speed_cmd_[RIGHT] = goal_linear_velocity_ + (goal_angular_velocity_ * wheel_seperation_ / 2);
}
https://github.com/ROBOTIS-GIT/turtlebot3_simulations/blob/master/turtlebot3_fake/src/turtlebot3_fake.cpp
Doesn't check for:
#define MAX_LINEAR_VELOCITY 0.22 // m/s
#define MAX_ANGULAR_VELOCITY 2.84 // rad/s
found at: https://github.com/ROBOTIS-GIT/turtlebot3_simulations/blob/master/turtlebot3_fake/include/turtlebot3_fake/turtlebot3_fake.h
To simulate accurately, it's suggested to add a check for maximum velocity, like in the teleop program:
def checkLinearLimitVelocity(vel):
    if turtlebot3_model == "burger":
        vel = constrain(vel, -BURGER_MAX_LIN_VEL, BURGER_MAX_LIN_VEL)
    elif turtlebot3_model == "waffle" or turtlebot3_model == "waffle_pi":
        vel = constrain(vel, -WAFFLE_MAX_LIN_VEL, WAFFLE_MAX_LIN_VEL)
    else:
        vel = constrain(vel, -BURGER_MAX_LIN_VEL, BURGER_MAX_LIN_VEL)
    return vel
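For completeness, the `constrain` helper referenced above is just a clamp. A minimal standalone sketch (the limit value is the waffle number quoted in the comments; the function names here are my own):

```python
WAFFLE_MAX_LIN_VEL = 0.26  # m/s, from the spec quoted above

def constrain(vel, low, high):
    """Clamp vel into the closed interval [low, high]."""
    return max(low, min(vel, high))

def check_linear_limit(vel):
    return constrain(vel, -WAFFLE_MAX_LIN_VEL, WAFFLE_MAX_LIN_VEL)

print(check_linear_limit(0.5))   # a 0.5 m/s command is clamped to 0.26
```

Applying a clamp like this in the simulation's velocity callback would reproduce the hardware limit.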
Originally posted by osilva with karma: 1650 on 2021-12-27
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 37292,
"tags": "ros"
} |
gazebo 2.2.2 with ros hydro from source and TurtleBot | Question:
I installed ROS-Hydro-Desktop-Full from source in April this year on Ubuntu 13.10 (Saucy Salamander). During the install, the script installed Gazebo 2.2.2 using APT-GET. The script also installed gazebo_ros_pkgs 2.3.5. Up until now, everything's been working great, including some simple Gazebo stuff I started playing with.
But now I would like to get Gazebo running with ROS, and I understand that Hydro is intended to work with Gazebo 1.9, but I have 2.2.2, and I'm not sure what to do.
I decided I would try working through the turtlebot/Tutorials/hydro/Installation page, but in step 4.3 "rosdep install --from-paths src -i -y" fails with the following errors:
ERROR: the following packages/stacks could not have their rosdep keys resolved to system dependencies:
yocs_virtual_sensor: No definition of [rospy_message_converter] for OS version [saucy]
turtlebot_teleop: No definition of [joy] for OS version [saucy]
kobuki_random_walker: No definition of [ecl_threads] for OS version [saucy]
kobuki_node: No definition of [ecl_threads] for OS version [saucy]
kobuki_safety_controller: No definition of [ecl_threads] for OS version [saucy]
turtlebot_core_apps: No definition of [map_store] for OS version [saucy]
kobuki_auto_docking: No definition of [ecl_linear_algebra] for OS version [saucy]
kobuki_dock_drive: No definition of [ecl_linear_algebra] for OS version [saucy]
kobuki_keyop: No definition of [ecl_time] for OS version [saucy]
turtlebot_gazebo: No definition of [depthimage_to_laserscan] for OS version [saucy]
yocs_diff_drive_pose_controller: No definition of [ecl_threads] for OS version [saucy]
kobuki_ftdi: No definition of [ecl_command_line] for OS version [saucy]
yocs_velocity_smoother: No definition of [ecl_threads] for OS version [saucy]
turtlebot_bringup: No definition of [depthimage_to_laserscan] for OS version [saucy]
kobuki_driver: No definition of [ecl_command_line] for OS version [saucy]
The errors seem to indicate that my problem has to do with the version of Ubuntu I have rather than my version of Gazebo.
Since Hydro was released for 13.10, and the tutorial refers to Hydro, the errors don't quite make sense. Unless it has to do with the gazebo_ros_pkgs that I have installed.
I've invested a lot of time and download bandwidth building the existing install and would prefer to somehow fix this up with an incremental solution. Is this possible?
Since Gazebo 2.2.2 was actually installed using APT-GET, can I remove the package and install 1.9 instead? And I can delete gazebo_ros_pkgs I have and download the version that normally comes with 1.9.
Will this fix the problem?
Cheers,
Nap
Originally posted by Nap on ROS Answers with karma: 302 on 2014-10-17
Post score: 0
Answer:
Yes, you should be able to remove the debian packages installed by apt for gazebo 2.2 and the corresponding new gazebo_ros_pkgs and install the older ones.
Anything built on top of them will need to be rebuilt.
Your rosdep errors are a separate issue. I would guess that you are not in an environment that has the ROSDISTRO environment variable set to hydro, so it does not know to look for those packages from hydro. This is usually done by sourcing the setup file: source /opt/ros/hydro/setup.bash
Originally posted by tfoote with karma: 58457 on 2014-10-18
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Nap on 2014-10-24:
Thanks for that, I will try it. I have source ~/ros_catkin_ws/install_isolated/setup.bash in my .bashrc, but no, ROSDISTRO is not set. I'll add that export to my .bashrc file.
"domain": "robotics.stackexchange",
"id": 19768,
"tags": "gazebo, turtlebot, ros-hydro"
} |
What is the antonym of "closest approach"? | Question: The distance from Earth to Mars, during their closest approach, is about 55 million kilometers.
At their furthest apart, that distance would be about 401 million kilometers.
Distance         at closest approach    at ___________
Earth–Mars       55 million km          401 million km
Earth–Jupiter    629 million km         928 million km
Is there a standard astronomical name for "their furthest apart"?
My vague understanding is that if one body orbits around another, then we have words like "at perigee" (closest approach to Earth) "at apogee" (furthest apart from Earth); "perihelion" and "aphelion"; and in general "periapsis" and "apsis." However, it doesn't seem correct to talk about the distance from Mars to Earth "at apsis" because neither body orbits the other.
IIUC, in the specific case of Earth and Mars, we might also say that their closest approach happens (more or less) when Mars is in opposition (relative to the Sun, as seen from Earth); but I don't think I can infer from that any useful terminology for their maximum distance — which would happen more or less when Mars is in opposition relative to the Earth, as seen from the Sun. [eshaya's answer indicates that the phrase I'm looking for here is "...when Mars is in conjunction (relative to the Sun, as seen from Earth)." Contrariwise, for Venus, both extrema occur during conjunctions.]
(I'm interested in names for the distance and/or the position. "Closest approach" applies to both distance and position; "apsis" applies only to the position AFAIK. You wouldn't say "the apsis of the Earth and the Moon is 406,000 km.")
Answer: The term "maximum separation" is often used, though maximum separation can also refer to the maximum angle between two bodies on the celestial sphere.
Here is an example from Quintana and Lissauer of the usage referring to distance:
close binary stars with maximum separations $Q_B≤0.2 AU$
and here is an example from Nouh of the usage referring to angle:
In this paper, an efficient algorithm is established for computing the
maximum (minimum) angular separation ρ max(ρ min) [...] of visual
binary stars | {
"domain": "astronomy.stackexchange",
"id": 5288,
"tags": "solar-system, orbital-mechanics, terminology"
} |
custom array sorting algorithm | Question: So I came up with a "new" sorting algorithm:
function indexSort(array, min, max) {
var newArray = Array.from({length:Math.abs(max-min)}, () => 0);
for (let i = 0; i < max; i++) {
if (array.includes(array[i])) {
newArray[array[i]] = array[i];
}
}
for (let i = 0; i < newArray.length; i++) {
if (newArray[i] == 0) {
newArray.splice(i, 1);
i--;
}
}
return newArray;
}
This algorithm sorts numbers in ascending orders so:
Input -> Output
indexSort([ 3, 1, 2 ], 1, 3) -> [ 1, 2, 3 ]
indexSort([ 64, 12, 9 ], 9, 64) -> [ 9, 12, 64 ]
This algorithm sorts arrays, though pretty slowly, and in its current state it has some major downsides in comparison to other sorting algorithms:
Only works with positive integers.
Has to loop over the entire array twice.
Doesn't allow for duplicate items.
It has probably other downsides that I cannot currently think of.
So what I want to figure out is:
Why is this sorting algorithm so slow?
Is it possible to do everything in just one loop using an else statement?
Why is this sort outperformed by Bubble Sort?
What is the big O notation of this algorithm?
Has this already been discovered and if so, what is the name of it?
Answer: I'll try to summarize what's been said in the comments into an answer and add my two cents to it.
First of all I suppose this was made because you had an idea on how to sort numbers and just tried implementing it to see how it'd work out. Nothing wrong with that, but as has been said said, sorting algorithms are very well researched and you'll always be better off using the build-in method or implementing someone else's algorithm if it satisfies some type of requirement (e.g. cycle sort. Not fast, but uses minimum writes). That doesn't mean you shouldn't mess around with sorting any more, as this could be a good way to learn about how they work and why.
As for the algorithm, it seems like your idea was the following:
Create an array that fits every number
Put the number where they belong, using their value as the index
Remove all filler 0s
Now, the purpose of the min / max parameters is clear, they're used to limit the size of the step 1 array. As @ggorlen said, this information is usually not given to sorting algorithms. If this was needed you'd probably iterate over the input array and find them this way. This also prevents someone giving bogus values to the algorithm that result in an invalid output (ex: giving [1,2,3,4], 2, 2 as args yields [<1 empty value>, 1, 2]).
As for the complexity, it's at least O(n²). The oversimplified reason is that you're iterating over an entire array in O(n), and for every iteration you search or splice, which again is O(n), resulting in O(n²). However, since the loops don't iterate over n but over an array of size max, your complexity can be WAY higher, as seen in @ggorlen's comment about the [12341231, 143] case. This comment also answers the question of why this is outperformed by bubble sort; you're doing not one but two passes of bubble-sort complexity over an array that can be way larger than the one bubble sort has to manage.
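The algorithm above is essentially an incomplete counting sort. A repaired sketch of the idea (in Python, for brevity; the same structure translates directly to JavaScript):

```python
# One pass to count, one pass to emit. Duplicates survive, an offset
# handles negative integers, and no min/max arguments are needed.
# Time complexity is O(n + (max - min)).
def counting_sort(array):
    if not array:
        return []
    lo, hi = min(array), max(array)
    counts = [0] * (hi - lo + 1)
    for value in array:                    # single pass over the input
        counts[value - lo] += 1
    result = []
    for offset, count in enumerate(counts):
        result.extend([lo + offset] * count)
    return result

print(counting_sort([64, 12, 9]))  # [9, 12, 64]
```

Counting occurrences instead of storing the value itself is what removes the duplicate-items restriction, and using an offset removes the positive-integers-only restriction.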
To my knowledge, this hasn't been discovered, probably because it's neither as fast as the existing algorithms or as interesting as Bogosort, Stooge sort or Stalin sort.
As a final remark, Counting sort may be interesting to you. | {
"domain": "codereview.stackexchange",
"id": 42252,
"tags": "javascript, performance, sorting, ecmascript-6"
} |
Does mass affect speed of a sliding object with friction? | Question: Sorry if this has been already asked, but I've been looking around google for a while and couldn't find an answer suitable. I'm a beginning physics student so pardon the dumb question.
Let's say we let a block slide down a ramp of angle $\theta$. I know the component down the ramp is equal to $mg \sin \theta$ and the component normal to the ramp is $mg \cos \theta$. Since $F = ma$, $mg \sin \theta = ma$, and the masses cancel, right? But this is without friction. So my question is
Does mass affect the speed of an object (down a ramp for example, or even in free fall) when there is friction/air resistance?
I guess it would be written as $F = ma = mg \sin \theta - \mu F_N = mg \sin \theta - \mu mg \cos \theta$. So mass still doesn't matter, right?
One more question. Let's say we have an object moving at a constant velocity on a rough surface with friction, so some force is applied. Will adding mass to the object slow it down? Common sense says yes, but why?
Answer:
In the example of the incline plane that you have provided, the mass
does not affect the speed, because the only friction force present
is proportional to the object's weight. However, oftentimes
significant dissipative forces are proportional to the velocity of
the object --- for example, if an object if freely falling through a
viscous fluid. You could model such situation by the equation below:
$$ F = ma = mg - kv $$
Here, the coefficient $k$ of the drag force is some parameter that
could depend, for example, on the object's shape. We cannot simply divide
through by $m$ to solve for $a$; moreover, we have a differential equation, since
$a$ is the derivative of $v$. If you solve it, you would indeed get
that $v$ is a function that depends on mass. We first notice that
when the object reaches the velocity large enough for the drag force
to cancel $mg$ completely, the object will maintain that velocity,
since there will be no acceleration. If we substitute $a = 0$ in the
equation, we get that this terminal velocity should be
$v_{terminal} = \frac{mg}{k}$. We could actually solve this
differential equation by using a substitution function $u = g -
\frac{k}{m}v$ and the fact that $a = v'$. We get then:
$$ u' = -\frac{k}{m}u $$
Guessing a solution $u = C_{1}e^{-\frac{k}{m}t} + C_{2}$, where
$C_1$ and $C_2$ are some constants, we get that
$$ v(t) = \frac{mg}{k} - C'e^{-\frac{k}{m}t} - C'' $$
Where some constants $C'$ and $C''$ depend on our initial
conditions; if the initial velocity of the object is zero, the
function that would work is:
$$ v(t) = \frac{mg}{k}(1 - e^{-\frac{k}{m}t}) $$
Where we set $C' = \frac{mg}{k}$ and $C''= 0$, so that the boundary
conditions $v_{initial} = 0$ and $v_{terminal} = \frac{mg}{k}$ are
met. We ended up with a velocity function that depends on time and the mass
of the object.
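A quick numeric sanity check of this result (not part of the original answer; Python, with arbitrary illustrative values for $g$, $k$ and the masses): Euler-integrating $m\,dv/dt = mg - kv$ reproduces the closed-form $v(t)$ above, and the heavier object is indeed faster at any given time.

```python
import math

def v_closed(t, m, g=9.81, k=0.5):
    # Closed-form solution v(t) = (mg/k) * (1 - e^(-kt/m)), with v(0) = 0
    return (m * g / k) * (1.0 - math.exp(-k * t / m))

def v_numeric(t, m, g=9.81, k=0.5, steps=100_000):
    # Simple Euler integration of m * dv/dt = m*g - k*v
    dt = t / steps
    v = 0.0
    for _ in range(steps):
        v += (g - (k / m) * v) * dt
    return v

t = 3.0
# The numeric integration agrees with the closed form
assert abs(v_numeric(t, 1.0) - v_closed(t, 1.0)) < 1e-2
# With the same k, the heavier object is faster at any given time
assert v_closed(t, 2.0) > v_closed(t, 1.0)
# Both approach their terminal velocity mg/k as t grows
assert abs(v_closed(100.0, 1.0) - 1.0 * 9.81 / 0.5) < 1e-6
```

So, with velocity-dependent drag, mass affects the speed at every instant, not just the terminal value.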
As for the second question: let's go back to when we set the object in motion. We know that the coefficient of static friction is greater than the coefficient of kinetic friction: $\mu_{static} > \mu_{kinetic}$. This ensures that objects resting on rough surfaces don't start moving from the slightest touch, but rather require an initial "jerk". Hence the initial force with which we will be pushing an object to start it moving, $F_{initial}$, will be greater than the eventual force $F_{final}$, to provide that "jerk": $F_{initial}$ is slightly greater than $\mu_{static}mg$, and $F_{final}$ is just equal to $\mu_{kinetic}mg$ to ensure constant velocity. We only need to exert $F_{initial}$ for a short amount of time to get the object going; but in that short time, it accelerates, and the magnitude of the acceleration depends on the force: the larger the mass of the object is, the larger the static friction is, and the larger $F_{initial}$ has to be. That implies a greater acceleration in the same amount of time --- and hence a greater constant velocity. | {
"domain": "physics.stackexchange",
"id": 81731,
"tags": "newtonian-mechanics, newtonian-gravity, mass, friction, free-body-diagram"
} |
Configuration concept and implementation | Question: I created an API to load, save and read user configuration in my application. I have a Configuration interface which provides the basic methods to load, save and read configuration data, and then in the application I have the implementation JsonConfiguration.
I tried to keep the Configuration interface as generic as possible, so I didn't make any assumptions about how the settings are stored; they can be plain text as well as something else.
This is my Configuration interface:
public interface Configuration {
/**
* Called when the configuration should load the settings
*
* @param inputStream In this stream you can find the settings of
* the user.
* An implementation can decide what is inside this
* stream.
* Example: If an implementation
* uses JSON to save and read settings, it should
* assume that this stream contains a valid json.
* If it doesn't, it is allowed to throw
* any exception since it's not
its job to convert the stream to your
* format.
* Do not close the stream. You don't own it.
*/
void load(@NotNull final InputStream inputStream);
/**
* Called when the configuration should save ALL the settings
*
* @param outputStream Write in this stream the settings of the
* user in the form of the implementation.
* Do not close the stream. You don't own it.
*
* @throws IOException Thrown if something went wrong during the
* save process
*/
void save(@NotNull final OutputStream outputStream) throws IOException;
/**
* @param name The setting name
* @return The setting as integer (if it's an integer)
* or throws NumberFormatException if not
*/
@NotNull
OptionalInt getAsInt(@NotNull final String name);
@NotNull
OptionalDouble getAsDouble(@NotNull final String name);
@NotNull
OptionalLong getAsLong(@NotNull final String name);
@NotNull
Optional<String> getAsString(@NotNull final String name);
@NotNull
Optional<Boolean> getAsBoolean(@NotNull final String name);
/**
* Set the value
*
* @param name The name of the setting
* @param value The value to save
* @param <T> The type of the value
*/
<T> void set(@NotNull final String name, @NotNull final T value);
}
(I've removed documentation from getAsX since it's the same.) (Yes, I know there is no getAsFloat, I will add it after the review.)
As you can see, the concepts of save/load and read are mixed in the same interface. My first question is: should I move the two concepts into two separate interfaces? Maybe using a Factory to load and save the settings, while Configuration will only handle the reading of the data.
This is the implementation, JsonConfiguration, which stores data in a JSON format:
public class JsonConfiguration implements Configuration {
private Map<String, ConfigurationSection> configurations = new HashMap<>();
@Override
public void load(@NotNull final InputStream inputStream) {
final JsonParser parser = new JsonParser();
final JsonObject root = parser.parse(new InputStreamReader(inputStream)).getAsJsonObject();
for (final Map.Entry<String, JsonElement> entry : root.entrySet()) {
final String confName = entry.getKey();
final ConfigurationSection configuration = new JsonConfigurationSection();
configuration.load(new ByteArrayInputStream(
entry
.getValue()
.toString()
.getBytes(StandardCharsets.UTF_8))
);
configurations.put(confName, configuration);
}
}
@Override
public void save(@NotNull final OutputStream outputStream) throws IOException {
final Type type = new TypeToken<Map<String, ConfigurationSection>>() {}.getType();
final OutputStreamWriter out = new OutputStreamWriter(outputStream);
final JsonWriter writer = new JsonWriter(out);
new GsonBuilder()
.registerTypeAdapter(JsonConfigurationSection.class, new JsonConfigurationSectionSerializer())
.create()
.toJson(configurations, type, writer);
writer.flush();
out.flush();
}
@NotNull
private <T> Optional<T> get(@NotNull final String name) {
final int dotSeparatorPosition = name.indexOf('.');
if (dotSeparatorPosition == -1) {
throw new IllegalArgumentException(name + " is not a correct user preference setting name");
}
final String section = name.substring(0, dotSeparatorPosition);
final String element = name.substring(dotSeparatorPosition + 1);
if (!configurations.containsKey(section)) {
throw new IllegalArgumentException("Section " + section + " doesn't exist.");
}
return configurations.get(section).get(element);
}
@NotNull
@Override
public OptionalInt getAsInt(@NotNull final String name) {
final Optional<String> optional = get(name);
return !optional.isPresent() ? OptionalInt.empty() : OptionalInt.of(Integer.parseInt(optional.get()));
}
@NotNull
@Override
public OptionalDouble getAsDouble(@NotNull final String name) {
final Optional<String> optional = get(name);
return !optional.isPresent() ? OptionalDouble.empty() : OptionalDouble.of(Double.parseDouble(optional.get()));
}
@NotNull
@Override
public OptionalLong getAsLong(@NotNull final String name) {
final Optional<String> optional = get(name);
return !optional.isPresent() ? OptionalLong.empty() : OptionalLong.of(Long.parseLong(optional.get()));
}
@NotNull
@Override
public Optional<String> getAsString(@NotNull final String name) {
return get(name);
}
@NotNull
@Override
public Optional<Boolean> getAsBoolean(@NotNull final String name) {
final Optional<String> optional = get(name);
return !optional.isPresent() ? Optional.empty() : Optional.of(Boolean.parseBoolean(optional.get()));
}
/**
* Sets the value of a section. The name
* is composed of: sectionName.entryName
*
* If the section doesn't exist, it will
* be created.
*
* If the entryName doesn't exist, it will
* be created.
*
* @param name The name of the setting
* @param value The value to save
* @param <T> The type of the value
* @throws IllegalArgumentException If the name passed is not of the format: sectionName.entryName
*/
@Override
public <T> void set(@NotNull final String name,
@NotNull final T value) {
final int dotSeparatorPosition = name.indexOf('.');
if (dotSeparatorPosition == -1) {
throw new IllegalArgumentException(name + " is not a correct user preference setting name");
}
final String sectionName = name.substring(0, dotSeparatorPosition);
final String element = name.substring(dotSeparatorPosition + 1);
ConfigurationSection section = configurations.get(sectionName);
if (section == null) {
section = new JsonConfigurationSection();
configurations.put(sectionName, section);
}
section.set(element, value.toString());
}
}
Since I don't make any assumptions in the Configuration interface, the concept of "sections" exists only in the implementation, and the user accesses an entry in a section using the notation sectionName.entryName.
The interface ConfigurationSection is without documentation:
public interface ConfigurationSection {
void load(@NotNull final InputStream inputStream);
void save(@NotNull final OutputStream outputStream) throws IOException;
<T> Optional<T> get(@NotNull final String name);
<T> void set(@NotNull final String name, @NotNull final T value);
}
and the implementation:
public class JsonConfigurationSection implements ConfigurationSection {
@NotNull
private ConcurrentMap<String, Object> values = new ConcurrentHashMap<>();
@Override
public void load(@NotNull final InputStream inputStream) {
values = new ConcurrentHashMap<>();
final JsonParser parser = new JsonParser();
final JsonObject root = parser.parse(new InputStreamReader(inputStream)).getAsJsonObject();
for (final Map.Entry<String, JsonElement> entry : root.entrySet()) {
final String key = entry.getKey();
final JsonElement value = entry.getValue();
if (value.isJsonPrimitive()) {
final JsonPrimitive primitive = value.getAsJsonPrimitive();
values.put(key, primitive.getAsString());
} else if (value.isJsonNull()) {
throw new IllegalArgumentException("null is not a valid parameter");
} else if (value.isJsonArray()) {
throw new UnsupportedOperationException("Arrays not supported yet");
} else if (value.isJsonObject()) {
throw new UnsupportedOperationException("Objects not supported yet");
}
}
}
@Override
public void save(@NotNull final OutputStream outputStream) throws IOException {
final Type type = new TypeToken<Map<String, Object>>() {}.getType();
final OutputStreamWriter out = new OutputStreamWriter(outputStream, StandardCharsets.UTF_8);
final JsonWriter writer = new JsonWriter(out);
new Gson().toJson(values, type, writer);
writer.flush();
out.flush();
}
@NotNull
@Override
public <T> Optional<T> get(@NotNull final String name) {
return Optional.ofNullable((T) values.get(name));
}
@Override
public <T> void set(@NotNull final String name,
@NotNull final T value) {
values.put(name, value);
}
@NotNull
Map<String, Object> getValues() {
return Collections.unmodifiableMap(values);
}
}
As you can see from load, everything is stored as a String in the map (so yes, values could be changed from <String, Object> to <String, String>, but my plan is to support arrays and objects soon... so).
The ConfigurationSection and Configuration interfaces are pretty similar, but represent different concepts.
My application has an Application interface with a getConfiguration method, so Plugins/Applications use it to access a Configuration object and read settings. The fact that the concept of save/load belongs only to the Boot class (which prepares/loads the application) could be an incentive to use a factory instead of the current interface (so I hide the two things).
I'm planning to add a Transformer concept to convert one configuration format to another, but it's just an idea for now.
My questions:
As I said, the concepts of save/load and read are mixed together in one interface; does it make sense to separate them and implement a Factory to load and save settings?
I want to remove the throws IOException from save because it's not consistent with load which doesn't throw IOException.
Can I improve the load of JsonConfigurationSection? The ifs seem okay, but maybe they could be improved.
Any comments? I've read how other languages/frameworks do it but...
Answer:
As I said, the concepts of save/load and read are mixed together in one interface; does it make sense to separate them and implement a Factory to load and save settings?
Well, Java's Properties provides similar load/store functionality as well, so I think this is fine.
I want to remove the throws IOException from save because it's not consistent with load which doesn't throw IOException.
As you mentioned yourself in the lengthy comments, a custom Exception class may foremost be more helpful to wrap underlying causes such as IOException (or SQLException, UnknownHostException, etc.). The question then becomes whether you want to stick with checked Exception classes or not.
If you prefer checked Exception classes, then I think both save/load methods should throw it for consistency. As helpfully explained here, here and here, checked Exceptions may be used to indicate to the method's callers that they should be able to reasonably recover from these scenarios. For your implementation, I think a checked, custom Exception class has its merits for users of your library to explicitly handle cases where their configuration could not be initialized properly.
Can I improve the load of JsonConfigurationSection? The ifs seem okay, but maybe they could be improved.
You may want to consider the fail-fast approach here:
for (final Map.Entry<String, JsonElement> entry : root.entrySet()) {
final JsonElement value = entry.getValue();
if (value.isJsonNull()) {
throw new IllegalArgumentException("null is not a valid parameter");
} else if (value.isJsonArray()) {
throw new UnsupportedOperationException("Arrays not supported yet");
} else if (value.isJsonObject()) {
throw new UnsupportedOperationException("Objects not supported yet");
} else if (value.isJsonPrimitive()) {
// note: inlined entry.getKey() and value.getAsJsonPrimitive()
values.put(entry.getKey(), value.getAsJsonPrimitive().getAsString());
}
}
It may not be apparent now, but once your library gains new features, e.g. for array/object support, you can gradually remove the checks from the top and append the implementations (or preferably method calls to the new features) below.
Any comments? I've read how other languages/frameworks do it but...
When you are converting your Optional<String> instance to one of the OptionalInt/OptionalLong, you can rely on the map().orElse() chained methods to better convey the conversion:
return optional.map(v -> OptionalInt.of(Integer.parseInt(v)))
.orElse(OptionalInt.empty());
I also don't think you really need an Optional<Boolean> case... do you really need a tri-state configuration choice, true, false and null? Wouldn't it be easier to just say true or false?
You also mentioned...
Since I don't make any assumption in the Configuration interface, the concept of "sections" exists only in the implementation...
Do you intend to support a tree hierarchy of configuration in future versions? That may be worth pondering about... | {
"domain": "codereview.stackexchange",
"id": 15795,
"tags": "java, json, configuration"
} |
Do I get less accurate results if I do not use a power of 2 points for my FFT? | Question: I heard that padding with zeros is mainly for efficiency and speed. Other than that, is there a downside to using fft(signal) instead of fft(signal, N), where N is a power of 2, assuming the signal length is not a power of 2?
Thank you
Answer: Assuming a proper implementation, and discarding the small rounding differences that you'll get just by doing a different sequence of operations, no, there's no substantive drop in accuracy if you slightly pad your DFT length to the next power of 2. Whether it ends up actually being any faster is dependent upon your implementation, however. Why is that?
It's a very common misconception that there is a single Fast Fourier Transform algorithm and that you must use a power-of-2 size for best performance. If you look for a description of an FFT algorithm, you'll often see the radix-2 decimation-in-time or decimation-in-frequency techniques explained, likely because they're the easiest to illustrate. However, even the seminal FFT technique, the Cooley-Tukey algorithm, generically factorizes the FFT size into smaller numbers, not just powers of 2.
Using a good FFT library, you'll get the best performance if your FFT size can be factored into a number of small prime factors. The FFT library will then have optimized implementations of DFT kernels for each of these primes, which can then be recombined appropriately to yield the full set of DFT outputs. As I alluded to in a comment on another question, modern libraries will often give good performance for all prime factors ~13 and below.
With that said, you may find that by padding up to the next power of 2, it's possible that your transform might become slower. If you're already using an FFT size that your library implementation is well-suited for, you're just adding extra work for yourself by padding the size out. The best way to judge this is to benchmark a few candidate sizes and see which does best on your platform. If you have to make an automated choice of a good FFT size, then based on the characteristics of the radixes that your library supports, you can choose the next size that has an appropriate set of prime factors. | {
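To make the "friendly size" idea concrete, here is a small Python sketch (the function names here are my own; real libraries such as FFTW or SciPy provide equivalents like next_fast_len): a size is cheap for a mixed-radix FFT when all of its prime factors are small.

```python
def prime_factors(n):
    # Trial-division factorization; fast enough for typical FFT sizes
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def is_fast_size(n, max_prime=13):
    # "FFT-friendly" if every prime factor has an optimized DFT kernel
    return all(p <= max_prime for p in prime_factors(n))

def next_fast_size(n, max_prime=13):
    # Smallest size >= n whose prime factors are all small
    while not is_fast_size(n, max_prime):
        n += 1
    return n

assert prime_factors(1000) == [2, 2, 2, 5, 5, 5]  # mixed-radix friendly
assert is_fast_size(1000) and is_fast_size(1024)
assert not is_fast_size(1021)      # 1021 is prime: the slow generic path
assert next_fast_size(1021) == 1024
```

A length-1000 transform needs no padding at all with a good mixed-radix library, while a length-1021 (prime) transform is the kind of case where padding up to 1024 pays off.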
"domain": "dsp.stackexchange",
"id": 856,
"tags": "fft"
} |
Powershell Windows Service Deployment | Question: Based on this with a couple of changes.
Any issues you can point out would be great.
param([string]$targetServer, [string]$user, [string]$pass)
function Get-Service(
[string]$serviceName = $(throw "serviceName is required"),
[string]$targetServer = $(throw "targetServer is required"))
{
$service = Get-WmiObject -Class Win32_Service `
-ComputerName $targetServer -Filter "Name='$serviceName'" -Impersonation 3
return $service
}
Function Copy-Files {
param(
[Parameter(Mandatory=$true,ValueFromPipeline=$True)]
[string]$source,
[Parameter(Mandatory=$true,ValueFromPipeline=$True)]
[string]$targetServer
)
$destination = "\\$targetServer\C$\APP\"
"Creating network share on $targetServer"
$share = Get-WmiObject Win32_Share -List -ComputerName $targetServer
$share.create("C:\share","autoShare", 0)
#create backup
Move-Item -path $destination -destination "$destination\backup"
#copy new files
Copy-Item -path $source -destination $destination -recurse
"Removing network share on $targetServer"
if ($s = Get-WmiObject -Class Win32_Share -ComputerName $targetServer -Filter "Name='autoShare'") `
{ $s.delete() }
}
function Start-Service(
[string]$serviceName = $(throw "serviceName is required"),
[string]$targetServer = $(throw "targetServer is required"))
{
"Getting service $serviceName on server $targetServer"
$service = Get-Service $serviceName $targetServer
if (!($service.Started))
{
"Starting service $serviceName on server $targetServer"
$result = $service.StartService()
Test-ServiceResult -operation "Starting service $serviceName on $targetServer" -result $result
}
}
function Uninstall-Service(
[string]$serviceName = $(throw "serviceName is required"),
[string]$targetServer = $(throw "targetServer is required"))
{
$service = Get-Service $serviceName $targetServer
if (!($service))
{
Write-Warning "Failed to find service $serviceName on $targetServer. Nothing to uninstall."
return
}
"Found service $serviceName on $targetServer; checking status"
if ($service.Started)
{
"Stopping service $serviceName on $targetServer"
#could also use Set-Service, net stop, SC, psservice, psexec etc.
$result = $service.StopService()
Test-ServiceResult -operation "Stop service $serviceName on $targetServer" -result $result
}
"Attempting to uninstall service $serviceName on $targetServer"
$result = $service.Delete()
Test-ServiceResult -operation "Delete service $serviceName on $targetServer" -result $result
}
function Test-ServiceResult(
[string]$operation = $(throw "operation is required"),
[object]$result = $(throw "result is required"),
[switch]$continueOnError = $false)
{
$retVal = -1
if ($result.GetType().Name -eq "UInt32") { $retVal = $result } else {$retVal = $result.ReturnValue}
if ($retVal -eq 0) {return}
$errorcode = 'Success,Not Supported,Access Denied,Dependent Services Running,Invalid Service Control'
$errorcode += ',Service Cannot Accept Control, Service Not Active, Service Request Timeout'
$errorcode += ',Unknown Failure, Path Not Found, Service Already Running, Service Database Locked'
$errorcode += ',Service Dependency Deleted, Service Dependency Failure, Service Disabled'
$errorcode += ',Service Logon Failure, Service Marked for Deletion, Service No Thread'
$errorcode += ',Status Circular Dependency, Status Duplicate Name, Status Invalid Name'
$errorcode += ',Status Invalid Parameter, Status Invalid Service Account, Status Service Exists'
$errorcode += ',Service Already Paused'
$desc = $errorcode.Split(',')[$retVal]
$msg = ("{0} failed with code {1}:{2}" -f $operation, $retVal, $desc)
if (!$continueOnError) { Write-Error $msg } else { Write-Warning $msg }
}
function Install-Service(
[string]$serviceName = $(throw "serviceName is required"),
[string]$targetServer = $(throw "targetServer is required"),
[string]$displayName = $(throw "displayName is required"),
[string]$physicalPath = $(throw "physicalPath is required"),
[string]$userName = $(throw "userName is required"),
[string]$password = $pass,
[string]$startMode = "Automatic",
[string]$description = "",
[bool]$interactWithDesktop = $false
)
{
# can't use installutil; only for installing services locally
#[wmiclass]"Win32_Service" | Get-Member -memberType Method | format-list -property:*
#[wmiclass]"Win32_Service"::Create( ... )
# todo: cleanup this section
$serviceType = 16 # OwnProcess
$serviceErrorControl = 1 # UserNotified
$loadOrderGroup = $null
$loadOrderGroupDepend = $null
$dependencies = $null
# description?
$params = `
$serviceName, `
$displayName, `
$physicalPath, `
$serviceType, `
$serviceErrorControl, `
$startMode, `
$interactWithDesktop, `
$userName, `
$password, `
$loadOrderGroup, `
$loadOrderGroupDepend, `
$dependencies `
"Username: $username Password: $password"
$scope = new-object System.Management.ManagementScope("\\$targetServer\root\cimv2", `
(new-object System.Management.ConnectionOptions))
"Connecting to $targetServer"
$scope.Connect()
$mgt = new-object System.Management.ManagementClass($scope, `
(new-object System.Management.ManagementPath("Win32_Service")), `
(new-object System.Management.ObjectGetOptions))
$op = "service $serviceName ($physicalPath) on $targetServer"
"Installing $op"
$result = $mgt.InvokeMethod("Create", $params)
Test-ServiceResult -operation "Install $op" -result $result
"Installed $op"
"Setting $serviceName description to '$description'"
Set-Service -ComputerName $targetServer -Name $serviceName -Description $description
"Service install complete"
}
function Publish-Service
{
param(
[Parameter(Mandatory=$true,ValueFromPipeline=$True)]
[string]$targetServer
)
$serviceName = "APPSyncronizationService"
Uninstall-Service $serviceName $targetServer
"Pausing to avoid potential temporary access denied"
Start-Sleep -s 5 # Yeah I know, don't beat me up over this
Copy-Files -source "C:\src\app\APP\APP.SyncService\bin\Debug\" -targetServer $targetServer
Install-Service `
-ServiceName $serviceName `
-TargetServer $targetServer `
-DisplayName "APP Test Syncronization Service" `
-PhysicalPath "C:\APP\APP.SyncService.exe" `
-Username $user `
-Description "Description"
Start-Service $serviceName $targetServer
}
Publish-Service -targetServer $targetServer
Answer: Your code and overall logic look good. In general the areas of improvement I see are
There are some things you are inconsistent about, e.g., function parameters
You are sending notification information down the pipeline.
Some of the parameter declaration could be improved.
I see that you are trying to make the code fit within a certain number of characters on each line. While that is perfectly fine, there are some features of PowerShell I could show that would improve code functionality and still give that same visual effect.
Param Block
Mandatory Parameters
In your first function you have the param $serviceName throw an error if it is not present.
[string]$serviceName = $(throw "serviceName is required")
In another function you do the same thing but leverage advanced parameters.
[Parameter(Mandatory=$true,ValueFromPipeline=$True)]
[string]$source,
So as you can see, there is no reason for throw. The Mandatory flag set to true accomplishes almost exactly the same thing. Calling the function without passing the parameter will make PowerShell ask for it.
cmdlet Get-Service at command pipeline position 1
Supply values for the following parameters:
serviceName: _ <-- Pretend that underscore is blinking
Perhaps this has occurred to you and you prefer the messages from throw. Making the parameter mandatory will mitigate errors. If nothing else, just take it as an FYI.
Watch out for scope
The function Install-Service has a parameter $password defined with a default of $pass. While that code will still function, it can be misleading, as $pass is a variable defined in a parent scope. Consider making that one mandatory as well and just using $pass when you call the function.
Error Checking
You make several calls to WMI but there is no guarantee they are going to work. Consider using a try/catch block or -ErrorAction SilentlyContinue and validating the result. Examples of both are below. These are not production examples but samples to show functionality.
# Try/Catch Example
try{
$result = Get-WmiObject -Class Win32_Volume -ComputerName doesnotexist
} catch {
"There was an error: $($_.Exception)"
}
# SilentlyContinue Example
$result = Get-WmiObject -Class Win32_Volume -ComputerName doesnotexist -ErrorAction SilentlyContinue
if(!$result){"Something happened"}
Code Clarity
I referred to this in a bullet above about mak[ing] the code fit within a certain number of characters on each line. While playing with the backticks works, it can be annoying when you have to make changes and forget to add them. One place in Install-Service you use them where they are not even required. PowerShell is forgiving for this:
$params =
$serviceName,
$displayName,
$physicalPath,
$serviceType,
$serviceErrorControl,
$startMode,
$interactWithDesktop,
$userName,
$password,
$loadOrderGroup,
$loadOrderGroupDepend,
$dependencies
In other places you cannot get away with that. When you call Install-Service you use backticks again to get each parameter on its own line. That in itself is fine, but I wanted to show you splatting. You pass a hashtable of parameter and value pairs and splat them to the cmdlet. Each one is on its own line and can be edited easily in place.
$params = @{
ServiceName = $serviceName
TargetServer = $targetServer
DisplayName = "APP Test Syncronization Service"
PhysicalPath = "C:\APP\APP.SyncService.exe"
Username = $user
Description = "Description"
}
Install-Service @params
Function names
Get-Service and Start-Service are already cmdlet names. What you are doing is changing the command precedence order. Yours will get called first, but you are using them for the same functionality as the builtin cmdlets.
# Using the builtin
Get-Service -ComputerName $target -Name Spooler
I would opt for just using the built-in ones. They return [System.ServiceProcess.ServiceController] objects that you can inspect to check command success.
Be careful with your function output
Again, this is not really wrong, but it is something you need to be aware of. You are sending lots of status information down the pipeline. A basic example being:
"Pausing to avoid potential temporary access denied"
That line is sent down the output stream. If you are going to capture output from the function, or use the pipeline as you configured on some of your parameters with ValueFromPipeline=$True, you might get unexpected behavior. If this is truly information only, then consider Write-Host or a separate logging function.
Test-ServiceResult
The way you convert the error code to its friendly text could be shortened. If you have at least PowerShell v5 you can use the enum keyword. If not you could always add a type definition as well. To keep it simple I am just going to improve your error code array.
You can also use the -is operator to test the type of a variable.
if ($result -is [uint32]) {$retVal = $result} else {$retVal = $result.ReturnValue}
if ($retVal -eq 0) {return}
$errorCodes = "Success","Not Supported","Access Denied","Dependent Services Running",
"Invalid Service Control","Service Cannot Accept Control","Service Not Active","Service Request Timeout",
"Unknown Failure","Path Not Found","Service Already Running","Service Database Locked","Service Dependency Deleted",
"Service Dependency Failure","Service Disabled","Service Logon Failure","Service Marked for Deletion","Service No Thread",
"Status Circular Dependency","Status Duplicate Name","Status Invalid Name","Status Invalid Parameter","Status Invalid Service Account",
"Status Service Exists","Service Already Paused"
$msg = ("{0} failed with code {1}:{2}" -f $operation, $retVal, $errorCodes[$retVal]) | {
"domain": "codereview.stackexchange",
"id": 17932,
"tags": "powershell, installer"
} |
Improving poor man's translations mechanism in c++98 program | Question: In order to add basic translation capabilities to an old C++98 program,
I've come up with this basic and shameless code, summarized by the
following snippet:
#include <string>
#include <iostream>
// Translation ids
enum
{
TR_RELOAD =0,
TR_SAVE,
TR_MSGWARNRESET,
TR_MSGAPPLYCHANGES
};
// (Returning an array reference)
const std::string (&resolve_translation(const std::string& lang))[]
{
static const std::string tr_en[] =
{
"Reload", // TR_RELOAD
"Save", // TR_SAVE
"Current values will be lost, are you sure?", // TR_MSGWARNRESET
"Apply changes to material %s?" // TR_MSGAPPLYCHANGES
};
static const std::string tr_it[] =
{
"Ricarica", // TR_RELOAD
"Salva", // TR_SAVE
"I valori correnti saranno persi, sei sicuro?", // TR_MSGWARNRESET
"Applicare modifiche al materiale %s?" // TR_MSGAPPLYCHANGES
};
static const std::string tr_es[] =
{
"Recargar", // TR_RELOAD
"Salvar", // TR_SAVE
"Los valores actuales se perderán, está seguro?", // TR_MSGWARNRESET
"¿Aplicar cambios al material %s?" // TR_MSGAPPLYCHANGES
};
static const std::string tr_fr[] =
{
"Recharger", // TR_RELOAD
"Enregistrer", // TR_SAVE
"Les valeurs actuelles seront perdues, êtes-vous sûr?", // TR_MSGWARNRESET
"Appliquer les modifications au matériau %s?" // TR_MSGAPPLYCHANGES
};
if(lang=="en") return tr_en;
if(lang=="it") return tr_it;
if(lang=="fr") return tr_fr;
if(lang=="es") return tr_es;
return tr_en; // default
}
int main()
{
std::string lang = "it"; // unknown at compile time
const std::string (&tr)[] = resolve_translation(lang);
std::cout << tr[TR_RELOAD] << '\n';
std::cout << tr[TR_MSGWARNRESET] << '\n';
}
The usage is cumbersome because it needs a local call to resolve_translation;
however, I can compile it with bcc and g++ and it works,
but I'm not sure why it does not compile with clang and msvc.
I fear that there's some major problem under the rug.
I'm seeking some advice to improve it.
Answer: Improvements:
The return type is strange and awkward enough that it needs a comment to explain it! What benefit does returning a reference to an array have over simply returning a pointer to the first element?
It would be better to return an object. It might simply contain a pointer and length, but it means you could update it to support dynamically loaded tables or other new features, and do error checking on the operator[], and use a strong type for the subscript as well (that is, it requires the enumeration constant, not just any old integer).
Building an array of std::string is inefficient since it copies all of the literals into the string object at run time. If you're compiling as C++98 you don't have string_view built in, but you could supply your own as part of the program, or make it an array of plain char* instead. I guess it depends on how the return values are being used: if it repeatedly needs to convert that to a string you'd rather have it done and remembered. But you don't need to copy and consume memory for all the unused tables. That's another reason to make it an abstract object, as it can be optimized and improved "under the hood" later without changing the usage.
The compiler error is:
reference to incomplete type 'const std::string []' could not bind to an lvalue of type 'const std::string [4]'
The function's type is declared without bounds. It's not like an initializer where the actual array in the return statement will inform it; though apparently g++ accepts that as an extension (sort of an implicit partial auto). From cppreference: (emphasis mine)
If expr is omitted in the declaration of an array, the type declared is "array of unknown bound of T", which is a kind of incomplete type, except when used in a declaration with an aggregate initializer.
⋮
References and pointers to arrays of unknown bound can be formed, but cannot be initialized or assigned from arrays and pointers to arrays of known bound.
Your code is actually illegal in standard C++. | {
"domain": "codereview.stackexchange",
"id": 42008,
"tags": "c++, strings, static, c++98"
} |
Can the minimum kinetic energy of a simple harmonic oscillator be non zero? | Question: I was practicing physics problems when I faced one which said that minimum kinetic energy of a simple harmonic oscillator (in the question it was a spring block system) can be non zero.
However, it occurred to me that this cannot be possible, since in such a motion a time will come when the spring is stretched to its maximum and the velocity of the block changes direction. So, at the moment the velocity changes direction there will be an instant of zero velocity, where the kinetic energy is zero (the instant of minimum kinetic energy).
So who is right here, and if the minimum kinetic energy of a simple harmonic oscillator can be non zero then cite an example.
Edit: Sorry, the question is actually about "minimum Kinetic Energy".
Answer: There are times where there is zero kinetic energy in a harmonic motion. However, there are times where it is non-zero. And the question is just whether it can be non-zero. And it certainly can be non-zero.
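As a quick numerical illustration of the 1-D case (the mass, angular frequency, and amplitude below are arbitrary values, not from the question): the kinetic energy is positive at almost every instant, but its minimum over a period is exactly zero, at the turning points.

```python
import math

# Arbitrary illustrative parameters: mass, angular frequency, amplitude.
m, omega, A = 1.0, 2.0, 0.5

def kinetic_energy(t):
    # x(t) = A cos(omega t)  =>  v(t) = -A omega sin(omega t)
    v = -A * omega * math.sin(omega * t)
    return 0.5 * m * v * v

# Sample one full period T = 2*pi/omega.
period = 2 * math.pi / omega
samples = [kinetic_energy(i * period / 10000) for i in range(10000)]
ke_min, ke_max = min(samples), max(samples)
# ke_min is 0 (turning points); ke_max is (1/2) m omega^2 A^2 = 0.5
```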
Update to reflect the updated question:
The minimum kinetic energy can be non-zero if the oscillator is two dimensional. Then for every trajectory that is circular or elliptical, the pendulum would never completely stop. | {
"domain": "physics.stackexchange",
"id": 37239,
"tags": "energy, harmonic-oscillator"
} |
Is this problem P or NP? | Question: Given a set of whole numbers $M=\{z_0, \ldots, z_n\}$: are there $z_i$ and $z_j$ with $i \neq j$ but $z_i = z_j$?
Is this problem (surely or only probably) in $P$ or in $NP$? Is it $NP$-hard?
Answer: This is the element distinctness problem, and can be solved in polynomial time in many ways. For example, you can go over all pairs of elements and compare them, or if the elements are comparable, you can sort them and then check for duplicate elements (though in principle comparisons could be very time consuming, and in that case potentially the algorithms wouldn't be polynomial time; there's no such problem with the former algorithm).
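Both polynomial-time approaches mentioned above are easy to sketch (illustrative Python, not from the original answer): the first needs only equality tests and makes O(n^2) comparisons; the second sorts and scans adjacent elements in O(n log n) comparisons.

```python
def has_duplicate_pairwise(xs):
    # Compare every pair of elements; only equality tests are needed.
    n = len(xs)
    return any(xs[i] == xs[j] for i in range(n) for j in range(i + 1, n))

def has_duplicate_sorted(xs):
    # Sort, then check adjacent elements; requires an ordering on the elements.
    ys = sorted(xs)
    return any(a == b for a, b in zip(ys, ys[1:]))
```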
Make sure you understand what "polynomial time" is. Also, you have a false dichotomy in your title: every problem in P is also in NP; if P=NP then every problem in P is also NP-complete; and if P≠NP then there are problems which are neither in P nor NP-hard. | {
"domain": "cs.stackexchange",
"id": 4060,
"tags": "algorithms, complexity-theory, time-complexity, np-hard, np"
} |
Calculating Kryptonian speed in the movie "Man of Steel" | Question: This is my first question, so I'm not sure if this is the right place to ask but:
How would you go about calculating the speed of Kryptonians in the movie Man of Steel (2013)?
Specifically, I'm referring to this scene,
where Faora-Ul blitzes the soldiers in quick succession. I was trying to figure out how to calculate her speed in that scene, but given the camera angle, I'm not quite sure how to go about doing that.
Answer: A rough calculation can be done using this frame :
On the far left we have Faora and on the far right the soldier she is attacking in the next scene.
If we assume the height of a soldier to be 1.80 m, then they are at a distance of 10 m.
It takes her 4 frames to cover that distance in the next scene, and the video is 24 fps, therefore the speed is
$$
V = \frac{10\ \mathrm{m}}{4/24\ \mathrm{s}} = 60\ \mathrm{m/s} = 216\ \mathrm{km/h}
$$
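The same arithmetic as a quick check (the 10 m distance and the frame counts are the answer's estimates):

```python
distance_m = 10.0   # estimated from the frame, using a 1.80 m soldier for scale
frames = 4          # frames needed to cover the distance
fps = 24            # frame rate of the video

dt = frames / fps            # elapsed time in seconds
v_ms = distance_m / dt       # 60.0 m/s
v_kmh = v_ms * 3.6           # 216.0 km/h
```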
An interesting calculation would be the air friction on her. | {
"domain": "physics.stackexchange",
"id": 22357,
"tags": "homework-and-exercises, kinematics, speed, estimation"
} |
Why is chemical accuracy defined as 1 kcal/mol? | Question: "Chemical accuracy" in computational chemistry is commonly understood to be $1~\mathrm{kcal\over mol}$, or about $4~\mathrm{kJ\over mol}$. Spectroscopic accuracy is $1~\mathrm{kJ\over mol}$, and that definition makes intuitive sense. However, where does the $1~\mathrm{kcal\over mol}$ quantity come from?
From Wikipedia:
A particularly important objective, called computational thermochemistry, is to calculate thermochemical quantities such as the enthalpy of formation to chemical accuracy. Chemical accuracy is the accuracy required to make realistic chemical predictions and is generally considered to be 1 kcal/mol or 4 kJ/mol.
Answer: Short answer: the goal of "thermochemical accuracy" for computational chemistry is to match or exceed experimental accuracy. Thus, ~1 kcal/mol comes from the typical error in thermochemical experiments.
The drive began with John Pople, who began the modern effort to consider "Model Chemistries," comparing the accuracy of different methods across many molecules and often multiple properties. He realized that for thermodynamic properties, one could approach the accuracy of experiments. (See, for example, his Nobel lecture.)
As the model becomes quantitative, the target should be that data is reproduced and predicted within experimental accuracy. For energies, such as heats of formation or ionization potentials, a global accuracy of 1 kcal/mole would be appropriate.
He then started work on composite methods like G1, G2, G3, etc. that could approach predicting many chemical properties to this accuracy. | {
"domain": "chemistry.stackexchange",
"id": 13574,
"tags": "thermodynamics, quantum-chemistry, computational-chemistry"
} |
Finding perimeters of right triangles with integer-length legs, up to a limit | Question: I have a nested for-loop that populates a list with elements:
a = []
for i in range(1, limit+1):
for j in range(1, limit+1):
p = i + j + (i**2 + j**2)**0.5
if p <= limit:
a.append(p)
I could refactor it into list comprehension:
a = [i + j + (i**2 + j**2)**0.5
for i in range(1, limit+1)
for j in range(1, limit+1)
if i + j + (i**2 + j**2)**0.5 <= limit]
But now the same complex expression is in both parts of it, which is unacceptable. Is there any way to create a list in a functional way, but more elegantly?
I guess in Lisp I would use recursion with let. How is it done in more functional languages like Clojure, Scala, or Haskell?
In Racket it's possible to bind expressions inside a for/list comprehension. I've found one solution to my problem:
[k
for i in range(1, limit+1)
for j in range(1, limit+1)
for k in [i + j + (i**2 + j**2)**0.5]
if k <= limit]
I'm not sure how pythonic it is.
Answer: In the general case, Jeff's answer is the way to go : generate then filter.
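Jeff's answer is not reproduced here, but the generate-then-filter idea can be sketched like this: write the expression a single time in a generator, then filter its output.

```python
def collect_filter(limit):
    # Generate every candidate perimeter once, then keep those within limit.
    candidates = (i + j + (i**2 + j**2) ** 0.5
                  for i in range(1, limit + 1)
                  for j in range(1, limit + 1))
    return [p for p in candidates if p <= limit]
```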
For your particular example, since the expression is increasing wrt both variables, you should stop as soon as you reach the limit.
def max_value(limit, i) :
"""return max positive value one variable can take, knowing the other"""
if i >= limit :
return 0
return int(limit*(limit-2*i)/(2*(limit-i)))
def collect_within_limit(limit) :
return [ i + j + (i**2 + j**2)**0.5
for i in range(1,max_value(limit,1)+1)
for j in range(1,max_value(limit,i)+1) ]
Now, providing this max_value is error-prone and quite ad hoc. We would want to keep the stopping condition based on the computed value. In your imperative solution, adding a break when p > limit would do the job. Let's find a functional equivalent:
import itertools
def collect_one_slice(limit,i) :
return itertools.takewhile(lambda x: x <= limit,
(i + j + (i**2 + j**2)**0.5 for j in range(1,limit)))
def collect_all(limit) :
return list(itertools.chain(*(collect_one_slice(limit, i)
for i in range(1,limit)))) | {
"domain": "codereview.stackexchange",
"id": 2893,
"tags": "python, combinatorics"
} |
what is the meaning of the orientation of the pr2 arm in the cartesian controller? | Question:
I am trying to use the keyboard to control the pr2 simulator, but I cannot get a clear idea about the meaning of the orientation setting.
cmd.pose.orientation.x=-0.00244781865415;
cmd.pose.orientation.y=-0.548220284495;
cmd.pose.orientation.z=0.00145617884538;
cmd.pose.orientation.w=0.836329126239;
That is in the demo source code.
How can I set this value so that the gripper faces the table vertically, in order to grasp the object on the table?
Originally posted by zhenli on ROS Answers with karma: 287 on 2012-02-26
Post score: 0
Answer:
The orientation is represented as a quaternion. The related question/answer on this site can be found here.
Originally posted by Lorenz with karma: 22731 on 2012-02-26
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 8399,
"tags": "ros, pr2-simulator"
} |
MSD vs LSD Radix sort | Question: I read the following in CLRS:
I don't understand the text in yellow. Why would radix sort not work so well if we sort by their most significant digit? What extra "piles of cards" is it referring to?
Perhaps I'm not able to follow the example with cards, and an example with actual numbers would be best.
In case it helps, sorting naively by MSD does not seem to work, even if all values are d-digit numbers:
Input    (1)     (2)     (3)
          *       *       *
321      132     321     321
522      321     426     522
132      426     522     132
426      522     132     426
where (i) is the result of sorting the previous column by the ith-highest digit.
Answer: It would work; the only problem is that it will generate a lot of extra piles for the intermediate results, which are difficult to track.
If the sorting algorithm sorts the $d$-digit numbers starting from their most significant digit, it would create 10 piles at first (one pile for the numbers starting with 0, another one for those starting with 1 and so on) and then sort each pile recursively as explained in the book.
The problem is that to sort the pile of numbers starting with 0 the algorithm needs to create ten more piles (one for the numbers starting with 00, the second for the numbers starting with 01 and so on, the last pile is for the numbers starting with 09).
Thus we have created 19 piles so far. To sort the pile of the numbers starting with 00 we need to create 10 more piles. If this process continues until all $d$ digits are sorted, you can imagine that a huge number of piles is created (how many?).
These are the extra piles that the book was referring to.
If you use LSD radix sort you don't have to split the numbers into piles at all. You can just sort the input pile by the last digit, then the same pile by the penultimate digit and so on; after $d$ steps you will end up with the expected result.
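A minimal Python sketch of this LSD strategy; the per-digit sort must be stable, a guarantee Python's sorted provides.

```python
def lsd_radix_sort(nums, d):
    # Sort non-negative d-digit integers one digit at a time,
    # least significant digit first, using a stable sort for each pass.
    for k in range(d):
        nums = sorted(nums, key=lambda x: (x // 10**k) % 10)
    return nums

# The example below: after 2 passes, [83, 19, 17] -> [17, 19, 83]
```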
The intuition is the following: let $d$ be $2$ and let $83$, $19$ and $17$ be the input.
The first step of MSD radix sort will place $17$ and $19$ before $83$. Then if you sort the same whole pile by the second digit you will end up with $83$ before $17$, which is wrong. You need to split the pile because you have to "remember" the previous sorting made by the algorithm, which is more "important".
Conversely, the first step of LSD radix sort will place $83$ before $17$, then $19$. If you take the whole pile and sort it by the first digit you get $17$, $19$, $83$ which is correct. You don't need to split the pile because the current step of the algorithm is more "important" of the previous ones and it is allowed to "mess up" the previous ordering in any way (for example by placing $83$ after $17$ and $19$). This would work as long as the sorting algorithm used for each digit is stable. | {
"domain": "cs.stackexchange",
"id": 15728,
"tags": "algorithms, sorting, radix-sort"
} |
Crackling of Speakers-Audio | Question: Why do speakers make crackling noises when the pitches get too high for them? And why is it that lower end speakers tend to crackle more? If you try to feed in too high of a frequency, I would imagine the magnet just physically wouldn't be able to oscillate fast enough, but I don't know why it would make such a hideous noise.
Answer: As you increase the frequency of the electrical audio signal being fed into the loudspeaker beyond its intended range, mechanical feedback in the oscillation of the electromagnet driver itself can result in out-of-control vibrations of the electromagnet within its magnet frame, so that it's literally rattling around to some extent.
Quality speakers divide the frequency range over multiple specialized drivers housed in a single enclosure and uses a filter network to partition the input signal accordingly and deliver each part to the right driver. | {
"domain": "physics.stackexchange",
"id": 7705,
"tags": "acoustics"
} |
Recover the path to a goal state in A* search algorithm | Question: In the A* search algorithm, we use a priority queue with heuristic function to find optimum result with minimum cost. But, how do we get the path after reaching goal?
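A common implementation (an illustrative Python sketch; the came_from mapping is a hypothetical name) records, for each expanded node, the node it was reached from, then walks those links backwards once the goal is found:

```python
def reconstruct_path(came_from, goal):
    # came_from maps each discovered node to its predecessor;
    # the start node has no entry.
    path = [goal]
    while path[-1] in came_from:
        path.append(came_from[path[-1]])
    path.reverse()  # links are collected goal -> start, so flip them
    return path
```

For a search that reached goal 'D' via 'A' -> 'B' -> 'D', reconstruct_path({'B': 'A', 'D': 'B'}, 'D') returns ['A', 'B', 'D'].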
Answer: Every node should have been given a pointer to the node it was reached from. Then you only need to follow and reverse a linked list. | {
"domain": "cs.stackexchange",
"id": 1778,
"tags": "algorithms, search-algorithms"
} |
apt update fails in ROS2 Humble docker container | Question:
Hello! I have the following issue when trying to build a docker image based on ROS2 Humble. When I run apt update for further installation of other packages, I get the following error:
W: http://packages.ros.org/ros2/ubuntu/dists/jammy/InRelease: Key is stored in legacy trusted.gpg keyring (/etc/apt/trusted.gpg), see the DEPRECATION section in apt-key(8) for details.
E: Problem executing scripts APT::Update::Post-Invoke 'rm -f /var/cache/apt/archives/*.deb /var/cache/apt/archives/partial/*.deb /var/cache/apt/*.bin || true'
E: Sub-process returned an error code
The same thing happens when I just pull the original image (osrf/ros:humble-desktop) and run apt update inside it
Originally posted by jenamax on ROS Answers with karma: 1 on 2022-11-11
Post score: 0
Answer:
Hello @jenamax,
I think you need to install the latest Docker. In my case, installing version 20.10.14 helped. It did not work with 20.10.8.
Follow the links below to the issues that were raised on GitHub:
https://github.com/osrf/docker_images/issues/623
https://stackoverflow.com/questions/71941032/why-i-cannot-run-apt-update-inside-a-fresh-ubuntu22-04
https://github.com/osrf/docker_images/issues/621
I think that will solve your problem.
Originally posted by Ranjit Kathiriya with karma: 1622 on 2022-11-11
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by jenamax on 2022-11-12:
Yes, after Docker update this problem was solved. Thank you! | {
"domain": "robotics.stackexchange",
"id": 38118,
"tags": "ros2, docker"
} |
How does subcarrier correlation affect the BER of OFDM? | Question: I am studying the OFDM system and would like to examine it under different subcarrier correlation scenarios. I have calculated BER for the uncorrelated scenario and now I have rewritten it for the correlated channel.
If there is no noise, BER should be 0. It works for the uncorrelated scenario. Why does it not work for the correlated scenario?
Answer: OFDM subcarriers are packed relatively tightly together. If you look at the original OFDM signal in the frequency domain, you may wonder why adjacent subcarriers are not interfering with each other. The answer is that subcarriers are orthogonal to each other. Even adjacent subcarriers, have 0 influence on each other, and are independent in that sense.
It can be proven that OFDM is the most efficient way to pack these subcarriers together in the sense that it is the smallest inter-subcarrier spacing whereby the subcarriers can all be orthogonal to each other.
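This orthogonality at the critical 1/N spacing is easy to check numerically (an illustrative sketch; N and the subcarrier indices are arbitrary):

```python
import cmath

N = 64  # samples per OFDM symbol

def subcarrier(k):
    # One symbol period of subcarrier k at the orthogonal spacing 1/N.
    return [cmath.exp(2j * cmath.pi * k * n / N) for n in range(N)]

def inner(a, b):
    # Complex inner product over one symbol period.
    return sum(x.conjugate() * y for x, y in zip(a, b))

ip_adjacent = inner(subcarrier(3), subcarrier(4))  # ~0: adjacent subcarriers are orthogonal
ip_self = inner(subcarrier(3), subcarrier(3))      # N: a subcarrier with itself
```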
So then, in the real world, during transmission, things may happen besides getting white noise added, that cause the subcarriers to lose orthogonality to each other, and become correlated at the receiver. Probably your textbook would explain what may cause the subcarriers to get correlated?
In that case, there would be inter-subcarrier interference (also known as inter-carrier interference), and that would give you some non-zero BER. | {
"domain": "dsp.stackexchange",
"id": 8862,
"tags": "digital-communications, ofdm, channel"
} |
Wavelength-dependence of refractive index in internal reflection spectroscopy? | Question: In many literature sources (web example), only a single value for the refractive index is assumed for the infrared element, and another is typically assumed for the sample substance. However, should not the refractive index be wavelength-dependent? So the wavelength-dependence of the penetration depth,
\begin{equation}
d_p = \frac{\lambda}{2\pi n_1 \sqrt{\sin^2\theta - (n_2/n_1)^2}}
\end{equation}
would not only be in the numerator, but also in the denominator: $n_1 = n_1(\lambda)$ and $n_2 = n_2(\lambda)$? Here $d_p$ is the penetration depth, $\lambda$ is wavelength, $\theta$ is the incident angle, and $n_1$ and $n_2$ are the refractive indices of the infrared element and sample, respectively.
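For illustration, the formula can be evaluated directly. The numbers below are assumed values (a germanium ATR element with $n_1 \approx 4$, an organic sample with $n_2 \approx 1.5$, $45^\circ$ incidence), not from the question:

```python
import math

def penetration_depth(lam, theta_deg, n1, n2):
    # d_p = lam / (2 pi n1 sqrt(sin^2(theta) - (n2/n1)^2))
    theta = math.radians(theta_deg)
    return lam / (2 * math.pi * n1 * math.sqrt(math.sin(theta) ** 2 - (n2 / n1) ** 2))

# At a 10 um wavelength with fixed indices:
d_fixed = penetration_depth(10.0, 45.0, 4.0, 1.5)        # ~0.66 um
# Letting the indices drift slightly with wavelength barely changes d_p,
# while the lambda in the numerator scales it linearly:
d_dispersed = penetration_depth(10.0, 45.0, 4.01, 1.52)  # ~0.66 um
```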
Answer: Yes it should, but it doesn't matter much.
In physics, roughly speaking, everything depends on everything. We wouldn't be able to do a thing if we couldn't tell what is important and what isn't. Now let's see how much the refractive index actually changes as $\lambda$ goes, say, from 400 to 800nm (thus covering the entire visible range).
(source)
Looks like $n$ of a heavy flint glass goes from 1.85 to 1.75 (that's a change of 5%, in most other materials even less), while $\lambda$ changes by... how many percent? | {
"domain": "chemistry.stackexchange",
"id": 9309,
"tags": "spectroscopy, ir-spectroscopy"
} |
Torque sensitivity of a VFD induction motor through a gearbox? | Question: I have an application that requires torque limiting and I need to determine if VFD torque limiting will provide sufficient protection. I have used VFDs for torque limiting through gearboxes before with success, but I am hoping to find a way to calculate the result as opposed to just gut feeling.
10HP motor
1775 rpm
69.5:1 gearbox reduction (I'm assuming 50% efficiency)
25.5 rpm at driving sprocket
I have looked at some frequency drives to get an indication of their torque/amp sensitivity. I've seen 0.1A, 0.01A, 0.1% of torque and 0.01% of torque as control resolutions, but have not found any literature specifically addressing whether this resolution can be reasonably expected.
In this situation I am gearing down, so inertia, backlash and controller reaction time will not be an issue.
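A quick sketch of what these numbers imply at the shafts (illustrative only; 746 W per HP, and the 50% gearbox efficiency is the question's own assumption):

```python
import math

power_w = 10 * 746.0     # 10 HP in watts
motor_rpm = 1775.0
ratio = 69.5             # gearbox reduction
gearbox_eff = 0.5        # assumed efficiency from the question

motor_torque = power_w / (motor_rpm * 2 * math.pi / 60)  # ~40 N*m at the motor shaft
output_rpm = motor_rpm / ratio                           # ~25.5 rpm, matching the question
output_torque = motor_torque * ratio * gearbox_eff       # ~1400 N*m at the driving sprocket
```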
Answer: Induction motors don't have a consistent current vs. torque relationship. In order to accurately control or limit torque, VFDs must determine an equivalent circuit for the motor, then monitor and control the electrical parameters that determine the torque. I believe that VFDs that have a good torque limiting capability will have a specification stating the accuracy with which they can do that. They will only be able to state what can be done at the motor shaft. The variations in torque lost in the gearbox will significantly degrade the torque limiting or controlling accuracy. I suspect that finding out what to expect from the gearbox will be the most important and most difficult task in this situation.
Can you state what VFD models you have experience with and/or are considering for use in this situation?
Re link in comment.
If you click the "Specifications" tab, you will see "PowerFlex 700 AC Drive Technical Data," a document that you should download. On page 4 of that document, you will find:
You might find another manufacturer that can do better, but I doubt any will do a lot better. When a speed-controlled drive goes into torque limit, the motor will slow down until it reaches an operating point where the limiting value of torque is sufficient to drive the load. If no such point is reached, the drive will be at a standstill. At that point, the torque should not exceed the setpoint, but it may be less than the setpoint.
Re Answer Posted by Asker
I went through your answer, made a diagram and added some notes as shown below. The way I interpreted your numbers, my calculations gave slightly different results, but I put yours on the diagram. If you used "best estimate" losses, I would suggest that you also do the calculations using the highest and lowest losses that you think might be possible. I believe that the lowest losses will show the smallest margin between normal operating torque and the failure-level torque.
It seems to me that you might get better performance by sizing the drive for the maximum desired operating torque. You calculations seem to indicate it is sized at the failure level torque. | {
"domain": "engineering.stackexchange",
"id": 985,
"tags": "control-engineering, motors, torque"
} |
Is (or why isn't) static charge as lethal as ionizing radiation? | Question: Ionizing radiation, e.g. the "stuff" emitted by radioactive materials, is dangerous to humans since changes to the electron configurations (in the human body) cause the various molecules to change their shape (i.e. break down or form new ones), which can have all sorts of devastating effects.
But why is it any less of an issue if this ionization occurs through more mundane means, like walking over a wool carpet with leather/rubber shoes? Is that ionization any different in quality?
Similarly, why can't the excess/shortage of electrons caused by ionizing radiation simply be compensated for by "grounding" people, as is done with static charges?
...or is my understanding of why ionizing radiation is an issue for humans wrong to begin with?
Answer: Ionizing radiation is radiation that is strong enough so that, when it hits an atom or molecule, will knock off electrons. This happens even if the target object doesn't have freely mobile electrons, which leaves free radicals and broken bonds, both of which are harmful to complex biological processes. There's no selection based on electron binding energy; whatever the radiation hits gets disrupted.
Static electricity, when applied to living tissue, just causes current to flow as in any conductive medium. Due to the presence of salt and other electrolytes, there are electrons available for conduction, and in the electrical field those electrons move to produce a current and drain the static electricity. No individual electron gets energetic enough to break bonds; they just conduct. So, no molecular damage is done. | {
"domain": "physics.stackexchange",
"id": 26335,
"tags": "electrons, radiation, biophysics, radioactivity, molecules"
} |