| anchor | positive | source |
|---|---|---|
Computing the Kronecker product over a tensor chain | Question: I'm working on some Python code and have a few functions which do similar things, and the only way I've found of writing them is quite ugly and not very clear.
In the example below, the goal is to compute the Kronecker product over a tensor chain of length M, in which the mth tensor is R and every other tensor is J.
Is there any nice way to rewrite this?
def make_rotate_target(m, M, J, R):
    out = J
    if M == 1:
        return R
    else:
        for i in range(M):
            if i == 0:
                out = J
            else:
                if i + 1 == m:
                    out = np.kron(out, R)
                else:
                    out = np.kron(out, J)
        return out
Answer: functools.reduce is what you need here:
from functools import reduce
def make_rotate_target(m, M, J, R):
    input_chain = [J] * M
    input_chain[m - 1] = R
    return reduce(np.kron, input_chain)
The input_chain list could be replaced with an iterable constructed from itertools.repeat and itertools.chain to save space.
from functools import reduce
from itertools import repeat, chain
def make_rotate_target(m, M, J, R):
    input_chain = chain(repeat(J, m - 1), [R], repeat(J, M - m))
    return reduce(np.kron, input_chain)
The computation could be accelerated by exploiting the associative property of the Kronecker product:
$$\underbrace{J\otimes J\otimes\cdots\otimes J}_{m-1}\otimes R\otimes\underbrace{J\otimes J\otimes\cdots\otimes J}_{M-m} \\
=(\underbrace{J\otimes J\otimes\cdots\otimes J}_{m-1})\otimes R\otimes(\underbrace{J\otimes J\otimes\cdots\otimes J}_{M-m}) $$
$$\underbrace{J\otimes J\otimes\cdots\otimes J}_{a + b}=(\underbrace{J\otimes J\otimes\cdots\otimes J}_{a})\otimes(\underbrace{J\otimes J\otimes\cdots\otimes J}_{b})$$
So some intermediate computation results could be reused. I'll leave the rest to you. | {
"domain": "codereview.stackexchange",
"id": 39427,
"tags": "python, beginner, iteration"
} |
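The intermediate-reuse idea the answer hints at can be sketched in plain Python. This is only a sketch: the helper names (`kron`, `kron_power`) are invented here, and a small pure-Python stand-in replaces `np.kron` so the snippet is self-contained. The repeated-`J` blocks are built with O(log n) Kronecker products via binary exponentiation, exactly the associativity identity quoted in the answer.

```python
from functools import reduce

def kron(A, B):
    # Pure-Python Kronecker product of two 2-D lists (stand-in for np.kron).
    return [[a * b for a in rowA for b in rowB]
            for rowA in A for rowB in B]

def kron_power(J, n):
    # J (x) J (x) ... (x) J with n factors, via binary exponentiation:
    # only O(log n) calls to kron instead of n - 1.
    if n == 0:
        return [[1]]  # the 1x1 identity for the Kronecker product
    half = kron_power(J, n // 2)
    result = kron(half, half)
    return kron(result, J) if n % 2 else result

def make_rotate_target(m, M, J, R):
    # J^(m-1) (x) R (x) J^(M-m), reusing the repeated-J blocks.
    return reduce(kron, [kron_power(J, m - 1), R, kron_power(J, M - m)])
```

For large `M` the saving comes from `kron_power`, since each squaring doubles the block instead of appending one factor at a time.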
How to dynamically Generate Sort strings for Data Layer from Controller | Question: I need help refactoring this code to dynamically generate a sort string that I can send to my data layer so that my database does the sorting instead of it happening in memory. I am using MVC4 with EF5 as my data layer.
public ActionResult InstanceSearch(int? page, string sortOrder, string computerName, string instanceName,
                                   string productName, string version)
{
    int pageSize = 10;
    int pageNumber = (page ?? 1);
    if (pageNumber < 1)
    {
        pageNumber = 1;
    }
    if (string.IsNullOrEmpty(sortOrder))
    {
        sortOrder = "Computer";
    }
    ViewBag.CurrentSort = sortOrder;
    ViewBag.ComputerSort = sortOrder == "Computer" ? "ComputerDesc" : "Computer";
    ViewBag.InstanceSort = sortOrder == "Instance" ? "InstanceDesc" : "Instance";
    ViewBag.VersionSort = sortOrder == "Version" ? "VersionDesc" : "Version";
    ViewBag.ProductSort = sortOrder == "Product" ? "ProductDesc" : "Product";
    //IEnumerable<Instance> instances = ecuWebDataContext.Instances;
    IQueryable<Instance> instances
        = string.IsNullOrEmpty(computerName) == true ? ecuWebDataContext.Instances : ecuWebDataContext.Instances.Where(i => i.Computer.Name == computerName);
    if (string.IsNullOrEmpty(instanceName) == false)
    {
        instances = instances.Where(i => i.Name == instanceName);
    }
    if (!string.IsNullOrEmpty(productName))
    {
        ProductName product = (ProductName)Enum.Parse(typeof(ProductName), productName);
        instances = instances.Where(i => i.Product.Name == product);
    }
    if (!string.IsNullOrEmpty(version))
    {
        instances = instances.Where(i => i.Version == version);
    }
    switch (sortOrder)
    {
        case "Computer":
            instances = instances.OrderBy(i => i.Computer.Name);
            break;
        case "ComputerDesc":
            instances = instances.OrderByDescending(i => i.Computer.Name);
            break;
        case "Instance":
            instances = instances.OrderBy(i => i.Name);
            break;
        case "InstanceDesc":
            instances = instances.OrderByDescending(i => i.Name);
            break;
        case "Version":
            instances = instances.OrderBy(i => i.Version);
            break;
        case "VersionDesc":
            instances = instances.OrderByDescending(i => i.Version);
            break;
        case "Product":
            instances = instances.OrderBy(i => i.Product.Name);
            //instances = instances.OrderBy(i => Enum.Parse(typeof(ProductName), i.Product.Name));
            break;
        case "ProductDesc":
            instances = instances.OrderByDescending(i => i.Product.Name);
            //instances = instances.OrderByDescending(i => Enum.Parse(typeof(ProductName), i.Product.Name));
            break;
    }
    ViewBag.SortOrder = sortOrder;
    var instanceSearchModel = new InstanceSearchModel { ComputerName = computerName, InstanceName = instanceName };
    ViewBag.ComputerName = computerName;
    ViewBag.InstanceName = instanceName;
    ViewBag.ProductName = productName;
    ViewBag.InstanceCount = instances.Count();
    ViewBag.Version = version;
    return View(instances.ToPagedList(pageNumber, pageSize));
}
I tried returning an IQueryable from the data layer, but then I run into connection problems because I never close it.
Answer: The abstraction you have built over your data layer is the root cause of your problem.
You're abstracting over an abstraction, and in the process having to marshal sort orders around. Prefer injecting your context in a request scope and remove the abstraction.
You can always create an abstraction over the specific query, and inject the query, which has a dependency on the context, if you really need to. | {
"domain": "codereview.stackexchange",
"id": 2971,
"tags": "c#, asp.net, asp.net-mvc-4"
} |
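As a language-agnostic aside (sketched here in Python rather than C#), the long switch over sort tokens can be replaced by a lookup table from token to key selector; the analogous C# refactor would use a dictionary of key-selector expressions. Every name below is illustrative, not from the original code.

```python
# Map each sort token to (key function, descending?) instead of a switch.
SORTS = {
    "Computer":     (lambda i: i["computer"], False),
    "ComputerDesc": (lambda i: i["computer"], True),
    "Instance":     (lambda i: i["name"],     False),
    "InstanceDesc": (lambda i: i["name"],     True),
}

def apply_sort(instances, sort_order, default="Computer"):
    # Unknown tokens fall back to the default sort, mirroring the
    # IsNullOrEmpty(sortOrder) guard in the controller above.
    key, desc = SORTS.get(sort_order, SORTS[default])
    return sorted(instances, key=key, reverse=desc)
```

Adding a new sortable column then means adding one table entry rather than two switch cases.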
Data reduction and photometry without IRAF? | Question: The IRAF package is old.
I've been looking around for a more modern software to replace it in the processes of CCD data reduction and photometry, but haven't been able to find any.
The closest I've found is the PyRAF tool, but this seems more like a Python wrapper around IRAF rather than a replacement for it.
Is there some new software I might've missed, or is IRAF really the only option even today?
Add:
I forgot to mention this, but I'm looking for tools that work under Linux and are free (open source + no charge), if possible. I will not pay (neither for a Windows license nor for a software package) to get rid of IRAF.
Answer: I suspect that everything you want and more is available and written in python or has python wrappers.
Astropy
ccdproc
photutils | {
"domain": "astronomy.stackexchange",
"id": 3683,
"tags": "software, photometry"
} |
A box plot of qualitative variables | Question: I have 2 groups of patients: responders to chemotherapy and non-responders.
I have calculated the cancer cell fraction (CCF) of a set of driver genes (by variant allele frequency) for each group, and I have:
> head(dat)
Response CCF
1 Responders 1.0000000
2 Responders 0.5413323
3 Responders 1.0000000
4 Responders 1.0000000
5 Responders 1.0000000
6 Responders 1.0000000
unique(dat$Response)
[1] Responders Nonresponders
Levels: Nonresponders Responders
>
If CCF > 0.95, the mutation is clonal; otherwise it is subclonal.
I want to show how many clonal and subclonal mutations there are in these two groups by box plot or something similar, like below.
I have tried this, which was nonsense:
ggplot(dat, aes(x=Response, y=CCF)) +
geom_boxplot()
Can you help me?
Answer: I'm not sure that a boxplot will be the most appropriate representation, as you will end up with two numbers (the counts of Clonal and SubClonal) per group of patients.
One solution would be to first create a new categorical variable based on the CCF values, for example using an ifelse statement:
dat$Clonal <- ifelse(dat$CCF > 0.95, "Clone", "SubClonal")
Then you can get the count of each Clone and SubClonal for each group of responders and non-responders by using table:
DF <- as.data.frame(table(dat[, c("Response", "Clonal")]))
You can finally convert these counts to percentages by grouping according to Response. For example, using dplyr, you can do something like this:
library(dplyr)
DF %>% group_by(Response) %>% mutate(Freq_percent = Freq / sum(Freq))
And finally, you can plot it as a barchart or single points in ggplot2.
Hope it helps you to figure out how to deal with your data. | {
"domain": "bioinformatics.stackexchange",
"id": 1341,
"tags": "r, ggplot2, software-usage"
} |
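For reference, the same count-then-percentage step the answer builds with ifelse/table/dplyr can be written in plain Python. This is only a sketch of the logic with made-up data; the function name and input layout are invented here.

```python
from collections import Counter

def clonal_fractions(records, threshold=0.95):
    # records: list of (response_group, ccf) pairs.
    # Classify each mutation, then count per (group, class) cell.
    counts = Counter((resp, "Clonal" if ccf > threshold else "SubClonal")
                     for resp, ccf in records)
    # Per-group totals, so each cell can be turned into a fraction.
    totals = Counter()
    for (resp, _), n in counts.items():
        totals[resp] += n
    return {cell: n / totals[cell[0]] for cell, n in counts.items()}
```

The resulting fractions are exactly what a stacked or side-by-side bar chart of the two groups would display.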
Example stabilizer code which is not $ GF(4) $ linear | Question: In the paper
https://arxiv.org/abs/quant-ph/9704043
Eric Rains talks about $ GF(4) $ linear codes and proves some of their properties, for example
"many codes of interest (e.g., GF(4)-linear codes) are built
out of distance 2 codes."
"Since GF(4)-linear codes are built out of [[2n, 2(n−1), 2]]s,
we find, for instance, that any equivalence between GF(4)-linear codes (subject to
certain trivial restrictions) must lie in the Clifford group."
"Corollary 14. Any equivalence of GF(4)-linear quantum codes lies in the Clifford
group, unless the codes have minimum distance 1, or contain a codeword of weight
2."
"Lemma 15. A GF(4)-linear code C is spanned by its minimal codewords."
"Corollary 16. If Q is a GF(4)-linear code, then every automorphism of Q lies in
the Clifford group."
I'm trying to better understand the limited scope of these results.
What is an example of a stabilizer code which is not $ GF(4) $ linear?
Update: The comment from @unknown says that a CSS code is $ GF(4) $ linear if and only if $ H_x=H_z $. Now I'm curious if the $ [[5,1,3]] $ code is $ GF(4) $ linear.
Answer: Theorem 4 in https://arxiv.org/pdf/quant-ph/9608006.pdf says that linear $GF(4)$ codes are even. In particular this means that odd codes cannot be linear.
As an example, consider the $[[5,1,2]]$ CSS code given by stabilizer generators
\begin{align*}
ZZZZZ \\
XXXXI \\
IXXXX \\
XXIXX.
\end{align*}
This code is odd because the first generator has weight $5$. Note that one can easily check that there are no weight $1$ elements in the stabilizer.
If we use $I \to 0$, $Z \to 1$, $X \to \omega$, and $Y \to \omega^2$, then the stabilizer in $GF(4)$ is
\begin{align*}
11111 \\
\omega \omega \omega \omega 0 \\
0 \omega \omega \omega \omega \\
\omega \omega 0 \omega \omega.
\end{align*}
If we multiply the second generator by $\omega$ twice and use the fact that $\omega^3 = 1$ then we get the element $11110$. But that added with the first generator is $00001$. But we know there are no weight 1 elements in the stabilizer so this code is not closed under $\omega$ and hence it is not linear (as expected since it is odd).
More generally, for linear codes if $g$ is in the stabilizer then $\omega g$ and $\omega^2 g$ must also be in the stabilizer. But this is a sort of "Pauli cycle", e.g., $\omega g$ is just $g$ with each $X$ replaced by $Y$, each $Y$ replaced by $Z$, each $Z$ replaced by $X$ and each $I$ left alone (and $\omega^2 g$ is the same except in reverse order).
For example, the $[[5,1,3]]$ code has generators $IIIII$ and
\begin{align*}
(XZZXI)_\text{cyc} \\
(YXXYI)_\text{cyc} \\
(ZYYZI)_\text{cyc}
\end{align*}
where the subscript indicates all 5 cyclic shifts occur. This way of writing the stabilizer makes it clear that the code is linear because the 2nd line is just $\omega$ times the first and the 3rd line is just $\omega^2$ times the first.
Thus a linear $GF(4)$ code is a stabilizer code that is even and whose stabilizer is closed under "Pauli cycles".
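That closure condition is easy to check by brute force for small codes. The sketch below uses invented helper names and ignores phases (which is enough for a closure test): it multiplies Pauli strings symbol-wise, generates the full stabilizer group from independent generators, and tests whether the "Pauli cycle" X→Y→Z→X maps the group to itself.

```python
from itertools import combinations

def pauli_mul(a, b):
    # Single-qubit Pauli product, ignoring phases (I*P = P, P*P = I, X*Z = Y, ...).
    if a == 'I':
        return b
    if b == 'I':
        return a
    if a == b:
        return 'I'
    return ({'X', 'Y', 'Z'} - {a, b}).pop()

def mul(p, q):
    return ''.join(pauli_mul(a, b) for a, b in zip(p, q))

def stabilizer_group(gens):
    # All products of subsets of the (assumed independent) generators.
    n = len(gens[0])
    group = {'I' * n}
    for r in range(1, len(gens) + 1):
        for combo in combinations(gens, r):
            g = 'I' * n
            for s in combo:
                g = mul(g, s)
            group.add(g)
    return group

OMEGA = str.maketrans('XYZ', 'YZX')  # multiplication by omega: X->Y, Y->Z, Z->X

def is_gf4_linear(gens):
    group = stabilizer_group(gens)
    return all(g.translate(OMEGA) in group for g in group)
```

Running this on the two codes from the answer reproduces its conclusions: the $[[5,1,3]]$ stabilizer (four cyclic shifts of XZZXI) is closed under the cycle, while the $[[5,1,2]]$ CSS code above is not.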
Note there is also a restriction on $n$ and $k$. For an $[[n,k,d]]$ code, the stabilizer has size $2^{n-k}$. On the other hand, for linear codes, the size must be $1 + 3 p$ for some integer $p$. To be sure, the $+1$ comes from the identity and the $3p$ comes because each generator comes in a package of $3$ from the "Pauli cycles." Thus we must have $1+3p = 2^{n-k}$ or $p = (2^{n-k}-1)/3$. However, $p$ is an integer iff $n-k$ is even. It follows that linear codes must have $n-k$ even.
Then for example, linear stabilizer states $[[n,0,d]]$ can only occur when $n$ is even. Moreover, $[[n,1,d]]$ can only occur for $n$ odd. | {
"domain": "quantumcomputing.stackexchange",
"id": 4795,
"tags": "stabilizer-code"
} |
Is there free will? | Question: From what I've understood of the answer to this question, quantum physics doesn't contradict determinism; instead, prediction simply isn't achievable because of our universe's nature: we are unable to detect particles without affecting them.
So, is our universe deterministic, independently from the fact that we can or cannot predict it?
Does that mean that this was the only possible evolution of our universe? (implying that we have no free will)
Answer: Note: this is really a philosophy question so I am going to give a philosophy response. Despite it not being an actual physics question it is an interesting question nonetheless that many people have asked at one point or another and it is nice to give a coherent response to these sorts of things:
What's actually kind of interesting is that even if the universe were completely and utterly classical and deterministic, and if it were possible to know the positions and velocities of every particle within a radius $ R $ to perfect precision, you still would not be able to make predictions with 100% accuracy.
Let's say, for example, that you wanted to predict what the world was going to be like 1 year from now, so you gather information about all particles within one light year and throw them into your simulator and wait for it to churn out a response. The issue with this is that it doesn't take into account the back reaction of the simulator on the world around it, and there is not any way to perfectly account for it.
If the informational content of a system containing the simulator is $X$, the information contained within the simulator must be $\leq X $ with the equality happening when the system consists of only the simulator. Now of course you could thermodynamically isolate the simulator from the system and account for it in your system as a simple heat generator, but that introduces approximations which will eventually cause divergences in the chaotic system you are modeling.
An even better way of seeing this is as follows (this one hinges on the speed of causality being the speed of light, which is true for our universe): I want to simulate the world for a year, so I take all of the information about the current state of earth, travel two light years away, set up an exact replica of earth and its surroundings (one edge is 1 light year away, the other is 3 light years away), then let it evolve for a year, get my perfect answer, go back to earth and find... that all of my information is 5 years (3 years to set up, 1 year to simulate, 2 years to get back) out of date! I wasn't able to get any information that would let me predict your behavior perfectly in advance!
On a completely unrelated note, if you look into the issue in more detail you may find that the concept of free will as it is normally espoused (non-determinism) implies that you react randomly without stimulus, but that is an entirely different issue. | {
"domain": "physics.stackexchange",
"id": 24529,
"tags": "quantum-mechanics"
} |
Can you get waves in water without gravity? | Question: Is it possible to produce water waves in absence of gravity?
Answer: Waves inside a container are, in general, something to be avoided. Waves inside containers have capsized ships, derailed railroad cars, and rolled tanker trucks off the road. Waves and wave-like behavior of liquids in the fuel and oxidizer tanks in a spacecraft are also bad. Slosh has been a problem from day one in launching spacecraft, and continues to be an issue. The Near Earth Asteroid Rendezvous mission almost ended when slosh disturbances caused the spacecraft to go out of control. The second flight of the Falcon-1 failed due to unexpected slosh interactions. The partially successful SloshSat was launched in 2005 with the specific intent of studying slosh in zero g conditions.
The fluids (liquid+gas) in a partially filled tank that has been in free fall for a sufficiently long time are a bizarre mix of gas bubbles, free-floating liquid blobs, foam, and liquid blobs and films crawling along the walls of the tank. This is a world of very low Bond, Weber, and Reynolds numbers. These dimensionless numbers capture the ratio of gravitational effects to capillary effects (Bond number), the ratio of inertial effects to capillary effects (Weber number), and the ratio of inertial effects to viscous effects (Reynolds number).
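Using the standard textbook definitions (Bo = Δρ·g·L²/σ, We = ρ·v²·L/σ, Re = ρ·v·L/μ — these formulas come from general fluid mechanics, not from the answer itself), a quick calculation shows how dramatically these numbers drop in free fall:

```python
def bond(delta_rho, g, L, sigma):
    # Gravitational vs. capillary effects.
    return delta_rho * g * L**2 / sigma

def weber(rho, v, L, sigma):
    # Inertial vs. capillary effects.
    return rho * v**2 * L / sigma

def reynolds(rho, v, L, mu):
    # Inertial vs. viscous effects.
    return rho * v * L / mu
```

For water (σ ≈ 0.072 N/m) in a 1 m tank, dropping g from 9.8 m/s² to residual accelerations of order 1e-5 m/s² takes the Bond number from roughly 1e5 down to order 0.1, which is why capillary effects dominate slosh behavior in free fall.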
Firing a thruster results in forces analogous to gravity and causes the liquid to coalesce. The transient and wave-like phenomena that result can be very deleterious. Starting thrust should be very low (< 1/100 g) until the liquid coalesces. Stopping thruster firings can also result in transients. This short YouTube video shows a camera inside one of the kerosene tanks on the Saturn 1. There is a very nice water hammer at about 1:35 into the video when the thrusters shut down. | {
"domain": "physics.stackexchange",
"id": 41540,
"tags": "waves"
} |
Gaussian vs ABINIT for solids | Question: I see that Gaussian has a feature to set periodic boundary conditions by specifying the parameter Tv in the input file. Does it do this via a plane-wave basis set? Also, the question is whether it is efficient to use Gaussian to compute the relaxed lattice constant, band structure, and DOS for solids, or whether it is better to use programs like ABINIT.
Answer: Even if it is possible to calculate periodic structures in Gaussian, I would not suggest it.
A better option is to use codes devoted to calculations on solids, among others the ABINIT you mentioned, or e.g. Quantum Espresso.
Gaussian does not use a plane-wave basis set, and it will be slower and in general less powerful than codes devoted to solids. | {
"domain": "chemistry.stackexchange",
"id": 634,
"tags": "computational-chemistry, quantum-chemistry"
} |
How can I make a simple number guessing game more efficient? | Question: I have been programming a simple number guessing game and I am wondering if there is any way of making my code more efficient and cleaner. I have spent some time implementing error checking to make it as safe as I know how. I feel as if the ways I have done this may be 'long winded', so if you know of a shorter way to do so then this would be much appreciated.
I do have comments, and I hope they make the program readable!
I am using Visual Studio 2010 - not sure if this changes much.
#include "stdafx.h"
#include <iostream>
#include <ctime>
#include <string>
#include <sstream>
// Include the standard namespace for easy use of cout/cin
using namespace std;
// Function Declarations.
string convertIntToString(int input);
bool isValidInput_Int(string input);
void cls();
void pause();
void printError(int ErrorNumber, bool ClearWindow);
void game(int difficulty);
// Main Function
int _tmain(int argc, _TCHAR* argv[])
{
// Variables
string input;
int inputVal;
bool gameIsRunning = true;
// We want rand() to be as random as possible.
srand( time(NULL) );
// Run until the user wishes to quit.
while(gameIsRunning)
{
cout << "Welcome to the number guessing game V1.0\n\nPlease select an option:\n";
cout << "1- Easy\n2- Medium\n3- Hard\n4- Expert\n5- Exit\n>>";
cin >> input;
// Check if the data is valid according to our wishes.
if(isValidInput_Int(input))
{
// Act correctly according to the input.
// Store the integer we want.
inputVal = atoi(input.c_str());
// Make sure the input is a correct selection
if(inputVal < 5 && inputVal > 0)
{
// Run the game.
game(inputVal);
}
else if(inputVal == 5)
{
// Quit the application.
exit(0);
}
else
{
printError(2, true);
}
}
else
{
printError(1, true);
}
}
// Leave the Application.
return 0;
}
///--------------------------------------
/// Resource:
/// http://www.cplusplus.com/forum/beginner/7777/
/// Converts an integer to a string.
///--------------------------------------
string convertIntToString(int input)
{
stringstream ss; //create a stringstream
ss << input; //add number to the stream
return ss.str(); //return a string with the contents of the stream
}
///--------------------------------------
/// Checks if the input is of size 1 and
/// The values inside this are of type
/// Integer and nothing else.
///--------------------------------------
bool isValidInput_Int(string input)
{
bool retVal = 0;
try
{
if(atoi(input.c_str()))
retVal = 1;
}
catch(exception)
{
retVal = 0;
}
return retVal;
}
///--------------------------------------
/// Clears the console window of text.
///--------------------------------------
void cls()
{
system("cls");
return;
}
///--------------------------------------
/// Pauses the console window.
///--------------------------------------
void pause()
{
system("pause");
return;
}
///--------------------------------------
/// Shows an error based on the input to
/// The function. If your error is not
/// Listed then put '0' in as a default
/// value. This returns a some-what
/// Generic message to the use.
///--------------------------------------
void printError(int ErrorNumber, bool ClearWindow)
{
// Clear the console window
if(ClearWindow)
cls();
switch(ErrorNumber)
{
case 1:
cout << "---Error: Invalid Input---\n\n";
break;
case 2:
cout << "---Error: Please enter a correct option---\n\n";
break;
default:
cout << "---Error: Something went wrong---\n\n";
break;
}
return;
}
///--------------------------------------
/// This is where the 'game' code is kept
///--------------------------------------
void game(int difficulty)
{
// Variables
string input;
string message;
int inputVal;
// Generate our random number!
int numberToGuess = rand() % ((difficulty * 2) * 10) + 1;
cout << "Welcome!\n";
cout << "You have 10 lives.\nThe number is between 1 and " << (difficulty * 2) * 10 << "\n";
cout << "Start guessing!\n\n";
// Loop until they are dead or they guess the number.
for(int lives = 10;lives > 0; lives--)
{
cout << ">>";
cin >> input;
// Check if the data is valid according to our wishes.
if(isValidInput_Int(input))
{
// Store the integer we want.
inputVal = atoi(input.c_str());
// Check if the guess was correct or not
// If it isnt, give them some 'guidance'.
if(inputVal == numberToGuess)
{
message = "Congratulations!\nYou Win!\n";
lives -= 10;
}
if(lives != 1)
{
if(inputVal > numberToGuess)
{
message = "Incorrect, try guessing lower!\n";
}
if(inputVal < numberToGuess)
{
message = "Incorrect, try guessing higher!\n";
}
}
else
{
message = "Incorrect, the number was: [" + convertIntToString(numberToGuess) + "]\n";
}
cout << message;
}
else
{
// Let them know that there input was invalid.
printError(1, false);
// We could 'punish' however, we will be nice.
lives++;
}
}
// Pause.
pause();
// Clear the screen before showing the menu again.
cls();
return;
}
Answer: Here are some of my language-focused remarks, not touching the actual program design.
usings
// Include the standard namespace for easy use of cout/cin
using namespace std;
Very nice to list what you need by adding the comment. But instead of pulling in all names you could make it more explicit:
using std::cin;
using std::cout;
In big files you may want to "sectionize" the usings by which #include they came from. (Edit, thanks to a comment below.) Best not to introduce usings before other #includes, though; you can group the usings afterwards. E.g.:
#include <iostream>
#include <string>
#include <map>
#include <vector>
// to shorten code
using std::string;
using std::map;
using std::vector;
// from iostream
using std::cin;
using std::cout;
I do not do this for all names, only for the most frequent ones when it really saves space -- and most people know them anyway. Most std:: items I write with their namespace when I use them.
string I often apply a using to (lots of strings in function arguments), but map and vector I typically do not (better to be explicit at the few locations they are used). When I have to use lots of std::vector<thingy>::const_iterator somewhere, I shorten the code with a local typedef or using.
void drawData(std::vector<thingy> &data) {
    typedef std::vector<thingy>::const_iterator tit;
    const tit end = data.end();
    for (tit it = data.begin(); it != end; ++it) {
        it->drawYourself();
    }
}
And with C++11 you do not even need that; you have auto and ranged-for to beautify your code.
variable initialization
// Variables
string input;
int inputVal;
bool gameIsRunning = true;
It is ok not to initialize string, because it is a class/object.
But you should initialize int variables.
latest possible declaration
// Variables
string input;
...
while(...)
...
cin >> input;
Your rule-of-thumb should be to declare a variable as late as possible, which would mean:
...
while(...)
...
string input;
cin >> input;
The exception is "tight loops", where the frequent initialization and destruction would cost too much. But actually, string is quite cheap and this is not a "tight loop", so I would recommend it. | {
"domain": "codereview.stackexchange",
"id": 1882,
"tags": "c++, optimization, game"
} |
List with object and index filtering out the object depends upon the index present at list | Question: I have a following list which indicates the indexes.
List<Integer> integerList = Arrays.asList(1, 3, 5, 6); // index
I have the following collection of objects.
Collection<Test> testCollection = new ArrayList<>();
Test test01 = new Test(0, "A"); // 0
Test test02 = new Test(1, "B"); // 1
Test test03 = new Test(2, "C"); // 2
Test test04 = new Test(3, "D"); // 3
Test test05 = new Test(4, "E"); // 4
Test test06 = new Test(5, "F"); // 5
testCollection.add(test01);
testCollection.add(test02);
testCollection.add(test03);
testCollection.add(test04);
testCollection.add(test05);
testCollection.add(test06);
The integerList holds indexes that are present in testCollection. I want to filter testCollection down to the objects at indexes 1, 3, and 5. Index 6 is not present in the object collection.
I wrote the code as below example. Is there any better as like Java 8 way?
List<Test> testList = new ArrayList<>(testCollection);
List<Test> newTestList = new ArrayList<>();
for (Integer integer : integerList) {
    for (int j = 0; j < testList.size(); j++) {
        if (integer == j) {
            newTestList.add(testList.get(j));
        }
    }
}
System.out.println(newTestList);
It will produce the following output:
[Test{id=1, name='B'}, Test{id=3, name='D'}, Test{id=5, name='F'}]
The class Test has following information.
class Test {
    private int id;
    private String name;
}
Answer: It seems to me that your solution is basically O(n*m): the size of the unfiltered list times the number of filter indexes. I think you can get O(n + m log m) (the size of the unfiltered list plus the cost of sorting the filter indexes) by using a combination of indexes and iterators and iterating through both lists at the same time. It could look something like this:
public static List<Test> getFilteredList(Collection<Test> unfilteredList, List<Integer> filterIndexes) {
    if (filterIndexes == null || filterIndexes.size() == 0) {
        return new ArrayList<Test>(unfilteredList);
    }
    Collections.sort(filterIndexes);
    List<Test> newTestList = new ArrayList<>();
    Iterator<Test> uIterator = unfilteredList.iterator();
    Iterator<Integer> fIterator = filterIndexes.iterator();
    int fIndex = fIterator.next();
    for (int uIndex = 0; uIterator.hasNext(); ++uIndex) {
        Test nextTest = uIterator.next();
        if (uIndex == fIndex) {
            newTestList.add(nextTest);
            if (!fIterator.hasNext()) {
                break;
            } else {
                fIndex = fIterator.next();
                if (fIndex >= unfilteredList.size()) {
                    break;
                }
            }
        }
    }
    return newTestList;
} | {
"domain": "codereview.stackexchange",
"id": 38438,
"tags": "java"
} |
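The single-pass merge in the answer translates naturally to other languages. Here is the same idea as a Python sketch (the function name is invented): walk the collection once, consuming the sorted filter indexes as they are matched, so out-of-range indexes like 6 are simply never reached.

```python
def filter_by_indexes(iterable, indexes):
    # Walk the iterable once, emitting items whose position is in `indexes`.
    wanted = iter(sorted(set(indexes)))
    target = next(wanted, None)
    out = []
    for pos, item in enumerate(iterable):
        if pos == target:
            out.append(item)
            target = next(wanted, None)
            if target is None:  # no more wanted indexes: stop early
                break
    return out
```

Like the Java version, this is O(n + m log m): one pass over the collection plus the sort of the index list.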
Regular expression for strings not starting with 10 | Question: How can I construct a regular expression for the language over $\{0,1\}$ which is the complement of the language represented by the regular expression $10(0+1)^*$?
Answer: If a word doesn't start with $10$, then either it starts with some other combination of two letters, or it is shorter than two letters. All in all, we get the following regular expression for your language:
$$
\epsilon + 0 + 1 + (00+01+11)(0+1)^*.
$$ | {
"domain": "cs.stackexchange",
"id": 15157,
"tags": "regular-languages, regular-expressions"
} |
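The construction can be sanity-checked mechanically: every binary word up to some length should match exactly one of the two expressions. A quick brute-force check in Python (the regexes below are direct transcriptions of $10(0+1)^*$ and the answer's expression):

```python
import re
from itertools import product

starts_with_10 = re.compile(r'10[01]*')
complement = re.compile(r'|0|1|(00|01|11)[01]*')  # epsilon + 0 + 1 + (00+01+11)(0+1)*

def check(max_len=7):
    for n in range(max_len + 1):
        for bits in product('01', repeat=n):
            w = ''.join(bits)
            # Exactly one of the two regexes should match each word w.
            assert bool(starts_with_10.fullmatch(w)) != bool(complement.fullmatch(w)), w
    return True
```

The empty alternative in the second regex is what captures the $\epsilon$ case of words shorter than two letters.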
Gyroscope/Powerball gearing system for bikes? | Question: There is an interesting and potentially useful property to do with the precession of gyroscopes, as well as devices like the "Powerball" toy.
When the gyroscope is rotating (Ws) and force is then applied to increase the precession (Wp), the Ws rotation increases.
Experiments with the Powerball show that a small increase in Wp results in a very large increase in both Ws and the resistance to an increase of rotational velocity in Wp.
More intuitively, imagine the dynamics here as being like a variable gear that increases in gear ratio at an accelerating rate as the speed of the input increases.
Would it be practical to design a vehicle, such as a human-powered bike or a motorised car, that used this property of a gyroscope instead of a conventional gearing system?
Answer: Naively, this looks as if it would be rather efficient, as there are few obvious mechanisms for heat loss. However, I'm unsure as to how make such a mechanism reliable. What if the wheel starts turning in the wrong direction?
Gearing would probably still be required, given the rotational speeds required for the exploited gyroscopic effects, so heat would be lost there. But the lack of chain makes for one fewer mode of failure.
If one could make the gyroscopic effect reliable, such an engine might be dependable, perhaps more so than the traditional cog & chain; but given the need to step down the rotation speed to drive the wheel, there's not likely to be a noticeable energy saving compared to your traditional bike.
Overall, a nice idea. It would be worth building such a thing to investigate further. | {
"domain": "engineering.stackexchange",
"id": 2576,
"tags": "mechanical-engineering, mathematics, mechanisms, mechanical"
} |
What does $\text{dom}(\Gamma)$ mean in the context of an inference rule? | Question: In the wikipedia page on pure type systems, it gives the following inference rule:
$\frac{\Gamma \vdash A : s \quad x \notin \text{dom}(\Gamma)}{\Gamma, x : A \vdash x : A }\quad \text{(start)}$
What does $\text{dom}(\Gamma)$ mean here? These are string rewriting systems, so $\Gamma$ is supposed to be a string of symbols, a purely syntactic object. I'm not sure what a "domain of a string" is.
Answer: There are various ways to define contexts in type theories. In this style, we assume there is some infinite set of variable names $V$ and define the context $\Gamma : V \rightharpoonup S$ as a partial function from the set of variable names to the set of types. $\mathrm{dom}(\Gamma)$ is then the subset of $V$ on which $\Gamma$ is defined, i.e. the variables in the context $\Gamma$. To require that $x \notin \mathrm{dom}(\Gamma)$ is asking for $x$ to be a fresh variable for the context $\Gamma$. | {
"domain": "cs.stackexchange",
"id": 17872,
"tags": "type-theory, term-rewriting"
} |
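The "context as a partial function" reading can be made concrete with a tiny sketch (Python, names invented): a dict plays the role of $\Gamma$, its keys are $\text{dom}(\Gamma)$, and the start rule's side condition is exactly a freshness check before extending the context.

```python
def extend(ctx, x, A):
    # Gamma, x : A  --  only legal when x is not in dom(Gamma).
    if x in ctx:  # dom(Gamma) is just ctx.keys()
        raise ValueError(f"variable {x!r} is already in dom(Gamma)")
    return {**ctx, x: A}  # the extended context, leaving Gamma unchanged
```

Under this reading, looking a variable up in the context is just application of the partial function, undefined outside its domain.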
Selecting a polygon within an array of complex polygons | Question: I have an array of polygons which are arrays of points. There are no gaps and none of them overlap like a Voronoi diagram.
Unlike with a Voronoi diagram, I cannot simply find the nearest centroid to select a polygon: this returns the correct polygon most of the time, but sometimes the point lies within a neighboring polygon.
The developer tools in my chrome browser seem to be able to do it with the selection tool but I have no idea how it is doing it.
Answer: I would use a winding number algorithm. There are a few, but the fastest goes like this: imagine a line from your point along the positive x-axis. Now, for every edge of your polygon, determine if it crosses this line. If it crosses the line from below to above, then increment the winding number (which is initially zero); if it crosses going from above to below, then decrement it. If the winding number is zero, then the point lies outside of the polygon; otherwise it lies inside.
You probably don't want to test every polygon every time the point moves, so I would pre-compute a bounding box or sphere for every polygon. I would then only test winding numbers against those polygons whose bounding shapes collide with the point in question.
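A concrete version of the crossing rule described above (this is the standard cross-product-based winding number routine, often attributed to Sunday's formulation; it is an illustration, not code from the answer):

```python
def winding_number(point, polygon):
    # Signed count of how many times `polygon` winds around `point`,
    # using crossings of the horizontal ray from `point` toward +x.
    px, py = point
    wn = 0
    for i in range(len(polygon)):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % len(polygon)]
        # Cross product: > 0 when `point` lies left of the directed edge.
        is_left = (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1)
        if y1 <= py < y2 and is_left > 0:    # upward crossing, point left
            wn += 1
        elif y2 <= py < y1 and is_left < 0:  # downward crossing, point right
            wn -= 1
    return wn

def point_in_polygon(point, polygon):
    return winding_number(point, polygon) != 0
```

The half-open comparisons (`<=` on one end, `<` on the other) are what keep a vertex that sits exactly on the ray from being counted twice.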
Hope this helped.
Good luck! | {
"domain": "cs.stackexchange",
"id": 14608,
"tags": "polygons"
} |
Finding difference between date now and last comment | Question: I want users to wait 9 minutes after their last comment. I have written this code. It works but do you think this is practical? Is there a simpler solution?
$now = new DateTime( date( 'Y-m-d H:i:s' ) );
$lastPost = new DateTime( '2016-01-08 13:16:59' );
$diff = $lastPost->diff( $now );
if ( $diff->format( '%Y-%m-%d %H' ) == '00-0-0 00' ) {
if ( $diff->format( '%i' ) < 9 ) {
echo $diff->format( '%i minuts left or ahead' );
}
}
Answer: Current time is easily obtained simply with new DateTime(), no need to specify any formatting (default is now). Also you don't need to format dates to compare strings because DateTime itself is comparable. All together:
$now = new DateTime();
$lastPostTime = new DateTime('2016-01-08 13:16:59');
$notEnabledBefore = $lastPostTime->add(new DateInterval('PT9M'));
if ($now < $notEnabledBefore) {
$timeToWait = $notEnabledBefore->diff($now);
echo $timeToWait->format('%i minutes left or ahead');
} | {
"domain": "codereview.stackexchange",
"id": 17825,
"tags": "php"
} |
Group representation of Standard Model | Question: On page 527 of Srednicki's textbook "Quantum Field Theory", the Standard Model is described as follows:
It can be succinctly specified as a gauge theory with gauge group $SU(3) \times SU(2) \times U(1)$, with left-handed Weyl fields in three copies of the representation $(1, 2, -\frac{1}{2}) \oplus (1, 1, +1) \oplus (3, 2, + \frac{1}{6}) \oplus (\overline{3}, 1, -\frac{2}{3}) \oplus (\overline{3}, 1, +\frac{1}{3})$, and a complex scalar field in the representation $(1, 2, -\frac{1}{2})$. Here the last entry of each triplet gives the value of the $U(1)$ charge, known as $\it{hypercharge}$.
I am puzzled by the group representation $(1, 2, -\frac{1}{2}) \oplus (1, 1, +1) \oplus (3, 2, + \frac{1}{6}) \oplus (\overline{3}, 1, -\frac{2}{3}) \oplus (\overline{3}, 1, +\frac{1}{3})$. How does it come about? What are the steps (if any) to get this representation?
Answer: Your text assumes you are familiar with the quantum number content of the elementary particle fermions, determined by the Millikan oil-drop experiment, structure functions of the light quarks, V-A structure of the weak currents, etc. These are experimental inputs and they come from out there, your world.
It helps you summarize the self-evident logic of their apparently diverse quantum numbers so you could write a compact QFT for them, that's all. I assume you seek an appreciation of the manifest logic involved.
It gives you the SU(3) color rep: a singlet for leptons, a color triplet 3 for quarks, or a color antitriplet for antiquarks. Likewise, their SU(2) weak isospin: vanishing for right-handed singlets, and doublet for left-handers. (No separate 2-bars, of course, as SU(2) is pseudoreal.)
And, of course, mutatis mutandis for their CPT conjugates. You only have singlets and fundamental reps, since these are fundamental fermion building blocks of our world.
Thus,
(1, 2, -1/2), e.g. for $e_L$
(1, 1, +1), e.g. for $\overline{e_R}$
(3, 2, +1/6), e.g. for $u_L, d_L$
($\bar 3$, 1, -2/3) for $\overline{u_R}$
($\bar 3$, 1, +1/3) for $\overline{d_R}$.
The hypercharge in the third entry is dross -- an error-correction number, if you wish, given by $Y_W\equiv Q-T_3$, once you input the charge, in the "minority usage", but actually modern mainstream definition, so you might have to multiply it by 2 to agree with hidebound historical listings, like those linked here. It is the eigenvalue of U(1), as your particles are all singlets, of course, under it, and multiplies the B coupling charge of the fermion currents. The sooner you get used to its Golden Mnemonic, the better: It is the average charge of isomultiplets.
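For instance, the Golden Mnemonic checks out on the two doublets in the list above ($q_L$ and $\ell_L$ are my shorthand for the left-handed quark and lepton doublets):
$$Y_W(q_L)=\frac{Q_{u_L}+Q_{d_L}}{2}=\frac{\tfrac{2}{3}+\left(-\tfrac{1}{3}\right)}{2}=+\frac{1}{6},\qquad Y_W(\ell_L)=\frac{Q_{\nu_L}+Q_{e_L}}{2}=\frac{0+(-1)}{2}=-\frac{1}{2},$$
in agreement with the third entries of $(3, 2, +\frac{1}{6})$ and $(1, 2, -\frac{1}{2})$.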
There is nothing more to it. Given these numbers you may completely, and concisely specify the fermion sector of the SM QFT. | {
"domain": "physics.stackexchange",
"id": 39098,
"tags": "standard-model, group-theory, representation-theory"
} |
Wave velocity Calculation | Question:
Find the maximum velocity of the particle of the wave given by $Y=A\sin(\omega t-kx)$.
My book says that the maximum velocity of a particle of the wave is $A\omega$. But I have a doubt: why don't we take the resultant of the wave velocity and $A\omega$ as the maximum velocity of the particle?
Answer: The particles of the wave are undergoing pure transverse motion (motion in $y$ axis): The velocity of particles is completely in the $y$ direction. Therefore, the velocity of a particle located at position $x$ at time $t$ is given by
$$v=\frac{\partial y}{\partial t} \Biggr|_{(x,t)}=A \omega \cos(\omega t - kx)$$
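A quick numerical check of this (the values of $A$, $\omega$, $k$ below are arbitrary illustrative choices, not from the problem): sampling $v = \partial y/\partial t$ finely over one period recovers $A\omega$ as the maximum.

```python
import math

A, w, k, x = 2.0, 3.0, 1.5, 0.0   # arbitrary illustrative values

def particle_velocity(t):
    # v = dy/dt for y = A sin(w*t - k*x), at fixed position x
    return A * w * math.cos(w * t - k * x)

T = 2 * math.pi / w                # one temporal period
v_max = max(particle_velocity(i * T / 100000) for i in range(100000))
print(v_max)   # A*w = 6.0
```

The resultant with the wave (phase) velocity never enters, because the particle has no velocity component along the direction of propagation.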
Suppose I had a row of children and I asked them to jump one after the other, you would observe that each child jumps/moves only in the vertical direction but the disturbance they create travels in the horizontal direction. This analogy is not exactly the same as the sine wave but hopefully illustrates my point. | {
"domain": "physics.stackexchange",
"id": 63766,
"tags": "waves, harmonic-oscillator"
} |
Choosing a suitable learning rate based on validation or testing accuracy? | Question: I have simulated a neural network with different learning rates, ranging from 0.00001 to 0.1, recording the test and validation accuracy for each. The results I obtained are below. There are 50 epochs for each learning rate, and I note down the validation accuracy at the last epoch, while the training accuracy is computed throughout the process.
Learning rate: 0.00001
Testing accuracy: 0.5850
Validation accuracy at final epoch: 0.5950
Learning rate: 0.0001
Testing accuracy: 0.6550
Validation accuracy at final epoch: 0.6400
Learning rate: 0.001
Testing accuracy: 0.6350
Validation accuracy at final epoch: 0.6900
Learning rate: 0.01
Testing accuracy: 0.6650
Validation accuracy at final epoch: 0.6700
Learning rate: 0.1
Testing accuracy: 0.2500
Validation accuracy at final epoch: 0.2100
How do testing and validation accuracy influence which learning rate is better? Would a higher validation accuracy determine the most suitable learning rate for the model?
Hence, is it correct that 0.001 is the most suitable learning parameter since it has the highest validation accuracy at the last epoch?
Answer: You cannot select a parameter based on test accuracy, because the moment you do that, it becomes a validation accuracy as it has affected the final model. Therefore, you are always choosing based on validation accuracy.
As a result, the best result comes from learning rate 0.001, with the highest validation accuracy 0.6900. We have ignored Testing accuracy. If we select based on Testing accuracy, it becomes a validation accuracy.
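In code, the selection rule is just an arg-max over the validation column, with the test column left untouched (numbers copied from the experiment above):

```python
# Validation accuracy at the final epoch, keyed by learning rate
# (values from the experiment above). Test accuracy is deliberately unused.
val_acc = {1e-5: 0.5950, 1e-4: 0.6400, 1e-3: 0.6900, 1e-2: 0.6700, 1e-1: 0.2100}

best_lr = max(val_acc, key=val_acc.get)
print(best_lr, val_acc[best_lr])   # 0.001 0.69
```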
Generally, a learning rate that is a loser at epoch 50 might be a winner at epoch 200. In other words, a slower convergence may lead to a higher accuracy. Therefore, this issue is worth considering too. | {
"domain": "datascience.stackexchange",
"id": 4916,
"tags": "machine-learning, neural-network, deep-learning"
} |
Blind bearing bore design | Question: I've tried to find some rules of thumb about this without any luck. I'm designing a part with a blind cartridge bearing bore. It's a 624 bearing (13x4x5) and the fit size guides point me to a tolerance of 13.0-13.026mm for the bore for my application, so slip fit on the housing. FWIW the shaft is the rotating part of the design and will be pressed into the inner race. Do I need to create some kind of lip or shelf inside the bore so that the inner race doesn't come into contact with the housing and create friction? It seems like this would obviously be a concern if both the inner race and outer race are the same thickness.
Answer: Yes, a lip or shelf is normally the solution, to avoid contact with either the inner race or the shaft it supports.
Another method is to fit a spacer / washer to do the same thing - possibly cheaper, but it can be left out when the assembly is taken apart and put back together.
"domain": "engineering.stackexchange",
"id": 4624,
"tags": "machining, bearings"
} |
What's the difference between the neoperceptron and CNN? | Question: What's the difference (in terms of architecture) between the neoperceptron and CNN?
Both ANNs have hidden layers and scanners, as I understand, but many sources subdivide them into two classes.
Answer: According to the research paper, neoperceptrons are a class of CNN that are not sensitive to rotations.
One of the issues with traditional kernels (that was the case before CNN and it is still true with them) is that the rotation of the input image would lead to different results, because the neurons in the dense layer would have different levels of activations.
With these new neurons, you don't get an issue with orientation. So in theory, if you have a gradient in your image, no matter what the orientation is, you would get the same value.
For a traditional CNN, you would get maximum activation with the original orientation, the inverse with a 180°-rotated image, and no activation with 90° or 270°.
"domain": "datascience.stackexchange",
"id": 3992,
"tags": "neural-network, cnn"
} |
Combining 'class_weight' with SMOTE | Question: This might sound like a weird question, but I could not find enough detail in the sklearn documentation about 'class_weight'. Can we first oversample the dataset using SMOTE and then call the classifier with the 'class_weight' option? As my testing set is highly imbalanced, I want to penalize misclassifications for minority classes. Thank you!
Answer: I tried different classifiers using a combination of SMOTE and class_weight; the results are almost the same as using only the SMOTE approach, and this new config made almost no difference (which could be expected, following the logic behind the class_weight approach).
PS: I have a pretty large dataset with multiple classes. This might result in different performance in different contexts. | {
"domain": "datascience.stackexchange",
"id": 5955,
"tags": "scikit-learn, multiclass-classification, class-imbalance, smote, imbalanced-learn"
} |
ROS Answers SE migration: Mapping robot | Question:
Hi
What is needed to make a keyboard-driven robot (w-a-s-d) with 2 motors + encoders, which is capable of creating maps with 360° laser scans (RPLIDAR)? Should the robot subscribe to /cmd_vel and publish /odom and /laser_scan messages to RViz? Anything else?
Thanks
Originally posted by mateusguilherme on ROS Answers with karma: 125 on 2019-01-30
Post score: 0
Answer:
You've come to the right place. This tutorial will do everything you're asking. Take your time and work through it slowly. That will be much faster than trying to skim it and take short cuts.
http://wiki.ros.org/navigation/Tutorials
Originally posted by billy with karma: 1850 on 2019-01-30
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 32378,
"tags": "ros-kinetic"
} |
Entropy change of an elastic cylinder | Question: We had in our thermodynamics class an example regarding the entropy change of a rubber cylinder. It has an initial length $L_0$, and after we stretch it, it has a final length $L_f$. A thermodynamic state equation for the force acting on the cylinder, $f(T,L)$, was given, which I don't think is very important to write down right now, as that is not the problem I am having. So anyway, we proceed and write the following down:
$dF = -S\,dT + f\,dL$
The sign is positive, as stretching the rubber cylinder increases its internal energy (in contrast to the expansion of a gas).
Then it is said that since the free energy is a state variable then:
$\frac {\partial S(T,L)}{\partial L} + \frac {\partial f(T,L)}{\partial T}=0$.
How is the fact that the free energy F is a state variable the reason we obtain the above equation?
Answer: By state variable, people mean that the free energy is completely specified by the 2 (thermodynamic) coordinates T, and L (and not by "how" you reached that state).
A small change in F can be then decomposed by partials with respect to each coordinate:
$$ dF = \underbrace{\partial_{T} F}_{\equiv -S} dT + \underbrace{\partial_{L} F}_{\equiv f} dL $$
This allows us to identify what S and f are: they are just the partials of F with respect to the thermodynamic variables.
$$ \rightarrow S = -\partial_T F(L, T) $$
$$ \rightarrow f = \partial_L F(L, T) $$
Because partial derivatives commute, we have:
$$ \partial_L \partial_T F = \partial_T \partial_L F$$
$$ -\partial_L S = \partial_T f$$
which is what you want. | {
"domain": "physics.stackexchange",
"id": 84497,
"tags": "thermodynamics"
} |
$n\textrm{ Hz}$ waveform sampled at $m\textrm{ Hz}$ per second | Question: Here is an example of plotting a square wave given in SciPy Documentation
A $5\textrm{ Hz}$ waveform sampled at $500\textrm{ Hz}$ for 1 second:
from scipy import signal
import matplotlib.pyplot as plt
import numpy as np

t = np.linspace(0, 1, 500, endpoint=False)
plt.plot(t, signal.square(2 * np.pi * 5 * t))
plt.ylim(-2, 2)
The documentation page states that
The square wave has a period 2*pi, has value +1 from 0 to 2*pi*duty
and -1 from 2*pi*duty to 2*pi. duty must be in the interval [0,1].
Why in the example 5*t is multipled by 2*np.pi?
Doesn't the signal.square() function take care of multiplying the frequency 5*t by 2*np.pi?
What is the meaning of sampled at $500\textrm{ Hz}$ for 1 second? I understand it is a $5\textrm{ Hz}$ waveform since 2*np.pi*5*t takes care of creating a $5\textrm{ Hz}$ wave. But I do not understand
sampled at 500 Hz for 1 second
Answer:
The basic square wave function signal.square() has a period of 2*pi. Multiplying each time by 2*pi*f rescales the wave along the time axis to get a wave with a period of 1/f.
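Both ideas can be reproduced with only the standard library; this is a sketch mirroring the SciPy example (not SciPy's actual implementation, just the same convention with duty=0.5):

```python
import math

def square(phase):
    """+1 on the first half of each 2*pi period, -1 on the second half
    (the scipy.signal.square convention with duty=0.5)."""
    return 1 if (phase % (2 * math.pi)) < math.pi else -1

fs, f = 500, 5                      # sample rate (Hz) and wave frequency (Hz)
t = [i / fs for i in range(fs)]     # 500 sample times in [0, 1): "sampled at 500 Hz for 1 s"
y = [square(2 * math.pi * f * ti) for ti in t]   # 2*pi*f*t gives period 1/f

print(y[5], y[75])   # 1 -1  (t=0.01 is in a +1 half-period, t=0.15 in a -1 half-period)
```

Plotting y against t with any plotting tool would reproduce the figure from the SciPy example.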
To understand what the sample rate refers to, look at the first line:
t=np.linspace(0,1,500,endpoint=False)
If you look at the numpy documentation for this function, you'll see this creates a list of 500 evenly spaced numbers between 0 and 1. This is an array of times. When you then call the signal.square() function with this array, it returns an array of points on the square wave for each of those times. Thus you have sampled the square wave at 500Hz for 1 second. | {
"domain": "dsp.stackexchange",
"id": 5027,
"tags": "discrete-signals, python, continuous-signals, scipy"
} |
Why are metals generally cooler compared to their surroundings? | Question: I have sometimes felt that steel boxes or steel utensils are cooler than the rest of the non-metal things when touched. Is it because metal is a good conductor, so our hands, being at a higher temperature, lose heat to it, and hence we feel it's cooler?
Answer: Yes, it is just as you explained. | {
"domain": "physics.stackexchange",
"id": 42576,
"tags": "thermodynamics, everyday-life"
} |
How to setup ROS for multi-robot domains (roscores, namespaces for nodes, tf frames) | Question:
Please help in writing up a ROS best practice.
Originally posted by mmwise on ROS Answers with karma: 8372 on 2011-11-07
Post score: 3
Answer:
For simulation you have a very complete answer here about how to manage node namespaces and tf issues.
Working with real robots makes it harder.
I suggest using a roscore in each robot, so that all data is private to each robot.
In order to share certain topics, you can use packages such as ros-rt-wmp, which provides a wireless communication protocol that propagate selective topics to other roscores.
Originally posted by Pablo Urcola with karma: 188 on 2012-09-21
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 7220,
"tags": "ros, best-practices, transform"
} |
Using a telescope to look at Earth's history in some detail | Question: The following paragraph is a scientific fact.
When we look distantly into the universe (with telescopes) we see galaxies
(or whatever else you see) as they were millions of years
ago; in other words, it is how they looked in the past (this is because of the finite speed of light).
So my question is: could we build a technology (using current science, or more speculative/theoretical science) exploiting this fact to see Earth's history or past to such a level of detail that we could see historical events (e.g. individual dinosaurs) and the faces of famous people from history books, as if we were standing face to face with them? What's the best level of detail we could theoretically see?
Answer: No.
Let's say we want to see Earth as it was 1000 years ago. Assume that someone has set up a perfect mirror 500 light-years away, so that we can actually see the light that left Earth 1000 years ago. (That's a really big assumption.)
The best telescopes in the world can't see the Apollo landing sites from Earth. We didn't get decent images of the descent stages, which were left on the surface, until the Lunar Reconnaissance Orbiter sent back photos it took from Lunar orbit. See http://www.nasa.gov/mission_pages/LRO/news/apollo-sites.html
1000 light-years is about 20 billion times as far away as the Moon. There's no way we could see people's faces at that distance with current technology -- or with any reasonable future technology. (There are physical limits on the resolution of an optical telescope of a given size.)
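To put a number on that parenthetical, the Rayleigh criterion $\theta \approx 1.22\,\lambda/D$ lets us estimate the aperture $D$ needed to resolve a feature of size $s$ at distance $d$; the wavelength, feature size, and distance below are my own illustrative choices:

```python
# Rayleigh criterion: theta = 1.22 * wavelength / aperture.
# Rearranged for the aperture needed to resolve size s at distance d:
#   D = 1.22 * wavelength * d / s
wavelength = 550e-9          # m, green light
ly = 9.461e15                # m per light-year
d = 1000 * ly                # viewing distance: 1000 light-years
s = 0.1                      # m, roughly the scale of a human face

D = 1.22 * wavelength * d / s
print(f"required aperture: {D:.2e} m")   # on the order of 1e13-1e14 m
```

That aperture is tens of billions of kilometers across, larger than the orbit of Neptune, which is why "see people's faces" is out of reach regardless of engineering cleverness.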
And all this assumes that we have that perfect mirror out there. As far as I know, nobody has set up such a mirror for us, and all the light that left Earth 1000 years ago is now 1000 light-years away, badly faded, and beyond our reach.
It's conceivable that we could develop faster-than-light travel (which may or may not be physically possible), go out there, build a telescope with a really big aperture, and point it back at Earth. But that's not likely to happen any time soon, and I wouldn't know how to determine how good an image we might be able to get.
And going back millions of years just makes the problem worse. | {
"domain": "astronomy.stackexchange",
"id": 1829,
"tags": "light"
} |
Does communication with new parts of the body require internal changes in the brain? | Question: I am not a biological scientist and have little biology knowledge in general, but I want to know something.
Most of us probably can't even imagine what it would feel like for a sixth finger to be touched, because we do not have one. We can't imagine what we should do to move a sixth finger, although we can imagine the same for the fingers we do have.
I can assume that people born with an anomalous additional body part, like a finger or even a hand, can fully control and feel it, independently and separately from the "trivial" ones.
Analogy
Now, let's consider another world for a while. You probably know about USB - it is a protocol for data communication. There can be a USB device with any type of functionality, and a USB port, situated on some other device (a PC, for example), where the USB device can be plugged in.
The number of USB ports on a host device (a PC, for example) is finite, so once a user has used them all, another device can't be connected without changing the PC's internals.
But the USB protocol provides the concept of a USB hub: a device that concentrates multiple USB devices and offers several USB ports of its own that can be used.
So, with such a device, a user can serve multiple devices from even a single built-in USB port. Although internally the PC provides only one port, the technology itself can support many devices without changing the internals of the host device (the PC).
Back to biology
I was thinking: does a person born with an additional body part that fully functions, can be felt, and can be controlled (and, importantly, separately and independently from the others) acquire during ontogenesis some "internal", "core" changes somewhere in the brain, or is it just peripheral stuff, i.e. does the brain provide some analogue of hub functionality?
If the second statement is true, that should mean two interesting things:
Theoretically, even for a person born with 5 fingers, we could emulate the feeling of a sixth finger.
Again, theoretically, there could be an infinite, or at least a pretty big, number of degrees of freedom for such feelings.
Answer: I think you'd benefit from reading about the concept of "critical periods" - basically, at different times during development, the brain is learning specific tasks, associating sensory stimuli and motor actions and more complex things, as well (like comprehending and producing language). Some of these things are best studied in model organisms where we can take it apart, but you can also watch a lot of this development in human infants since they are born so early relative to their developmental progress. A newborn really has no voluntary muscle control in their limbs and trunk at all. Arms and legs flail wildly, and only over time do their brains learn and refine how to control those limbs.
I would drop the USB analogy entirely, I don't think it's helping you at all in this case.
Experimentally, it's a bit of a challenge to just "add on" new limbs; it's not like you can tape a new digit on a hand because the developmental processes that connect the nervous system to a digit happen while that digit is growing. The nervous system connections are being made at the same time that the limb or digit is growing. However, there has been study of what happens when you take something away: taping digits together, for example, creates a unified representation in the brain. Occluding one eye causes the brain to use the remaining eye for the entire visual cortex. Some of these experiments are described in this Wikipedia article. These experiments tell us something about the relationship between the brain and the rest of the body: brain regions are dedicated to the parts of the body that sensory stimuli and motor connections are available from. They do not have a pre-ordained accounting of exactly how many or where those sources of information are at, when you take away things that are normally present, brain regions are proportionally dedicated to whatever inputs and outputs are available.
Conceptually, rather than thinking about extra parts, all of your same questions apply just as much to what is "normal". There is no evidence that brain development has any assumption built in that there will be 4 limbs or 5 digits on a hand, rather, connections are made with the nervous system as the limbs and digits develop, and these connections carry information between the CNS and sensory neurons and motor neurons, and the brain develops in response to those connections. | {
"domain": "biology.stackexchange",
"id": 12423,
"tags": "neuroscience, neurophysiology, neuroanatomy, neurology"
} |
What are the programming languages important to learn for a geneticist or bioinformatician? | Question: I am interested in learning more about both genomics and bioinformatics, with emphasis on genomics. I was told after taking an introductory course on genomics that the programming languages "R" and "Python" are widely used. However, having asked about learning genomics before on this forum, I was recommended a lot of books for learning genomics with "Perl". I was also told anecdotally that "Perl" is falling "out of favor" and "Python" and "R" are desired.
To give background on my coding skills, I flunked out of computer science due to poor coding skills and poor math skills. I still had an interest in the field and therefore started studying genomics and biology instead.
I would categorize myself as a beginner in Java and Python in terms of skills.
I was also told anecdotally that the best way to learn coding is by having a project. But I feel incompetent in my coding skills to even attempt anything.
I tried to learn Python via Codecademy and their 13-hour Python course. Having completed it, I felt lost enough to make this post, looking for further guidance. I tried to read a bioinformatics book based on Python, but I missed the hand-holding and directed exercises that Codecademy gave me.
Should one interested in genomics learn R or Perl? Also, what would be the best ways to learn these for someone who does not "excel" at coding?
If this post seems opinion based mods feel free to delete it or move it, I just was not sure how to phrase my concerns adequately.
Answer: I have found that this chapter by Lincoln Stein is a very easy, accessible, and useful introduction to writing a Perl script: Using Perl to facilitate biological analysis. It is Chapter 18 in "Bioinformatics: A Practical Guide to the Analysis of Genes and Proteins" 3rd edition, by Baxevanis and Ouellette. I use Perl and bash for almost everything, but most of my co-workers prefer Python. I only use R when I need to. One software engineer I know swears by Ruby. As long as your scripting language has lots of useful libraries I don't know if it makes a big difference which one you select. | {
"domain": "biology.stackexchange",
"id": 4220,
"tags": "bioinformatics, genomics"
} |
Why is Venus's atmospheric pressure 75 times that of Earth when carbon dioxide is only 1.5 times heavier than air? | Question: Obviously I have forgotten my basic college chemistry. I am getting carbon dioxide at 1.87 kg per cubic meter and compared it to nitrogen and oxygen, but a source says carbon dioxide is 1.5 times heavier than air. Let's go with that, and since the Earth is about the same size as Venus we should be able to use weight instead of density, I assume.
Venus's surface atmospheric pressure is 75 times that of earth. Some sources say 90. The atmosphere is mostly carbon dioxide. The numbers just seem not to work out here. 90 times is much more than 1 and 1/2 times which is how much heavier CO2 is relative to air.
P.S.
The next paragraph is not part of the question, just an interesting observation. If Venus's atmosphere really is 90 times that of Earth, the hull of the most advanced nuclear sub would be crushed like a pancake. 75 atm corresponds to about 750 meters of water, and crush depth is 730 meters for a nuclear sub. I mention this because it is so interesting that Venera survived in order to take the measurements at the surface.
Answer: Venus's atmosphere is very dense at the surface because Venus's atmosphere is very massive. The composition is nearly irrelevant. The pressure at Venus's surface is proportional to the mass of Venus's atmosphere and inversely proportional to Venus's surface area. The constant of proportionality is Venus's surface gravitational acceleration: $P \approx g_\text{Venus} \frac{m}{A}$. (I used $\approx$ rather than = because corrections are needed for Venus's uneven surface and to account for Venus being more or less spherical as opposed to a flat plane.)
That said, $P \approx g_\text{Venus} \frac{m}{A}$, or $m \approx \frac {PA}{g_\text{Venus}}$ is a very good approximation. What this means based on observations of Venus's surface pressure, surface area, and surface gravity is that Venus's atmosphere is very massive compared to that of the Earth.
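A back-of-the-envelope sketch with rough published values (Venus: $P\approx9.3\times10^6$ Pa, $g\approx8.87$ m/s², $R\approx6052$ km; Earth: $P\approx1.013\times10^5$ Pa, $g\approx9.81$ m/s², $R\approx6371$ km) makes the point numerically:

```python
import math

def atmosphere_mass(P, g, R):
    """m ~ P*A/g with A = 4*pi*R^2 (the flat-layer approximation above)."""
    return P * 4 * math.pi * R**2 / g

m_venus = atmosphere_mass(9.3e6, 8.87, 6.052e6)    # roughly 4.8e20 kg
m_earth = atmosphere_mass(1.013e5, 9.81, 6.371e6)  # roughly 5.3e18 kg
print(m_venus / m_earth)   # ~90x: Venus's atmosphere is about 90 times more massive
```

So the ~90x pressure ratio traces back to a ~90x mass ratio, not to the 1.5x density difference between CO2 and air.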
Regarding the Venera and Vega landers not being crushed, it was the high surface temperature rather than the high surface pressure that did them in. People have built vessels that have descended to the bottom of the Challenger Deep, where the pressure is 1000 atmospheres. Compare that with the 93 atmospheres Venus's surface. Nuclear submarines operate at one atmosphere so as to be able to rise to the surface quickly, which means they can indeed be crushed by pressure at a sufficient depth. The Venera and Vega landers didn't operate under those constraints. | {
"domain": "astronomy.stackexchange",
"id": 6877,
"tags": "planetary-atmosphere, venus"
} |
Building a machine learning model based on a set of timestamped features to predict/classify a label/value? | Question: I'm trying to apply machine learning to pharmaceutical manufacturing to predict whether batches of drug products manufactured are good or not. for the sake of relatability, let's use coffee brewing as an analogous process. Let's imagine that I'm trying to predict the acidity of the coffee that I've brewed.
The dataset that I have contains features such as water temperature, stirring speed and pressure that are constantly measured (say, on a per-second basis) over a variable amount of time (the first cup may be brewed in 5 minutes, the second in 10, etc.).
What kind of preprocessing should I perform on such a multidimensional dataset? One stumbling block is that for each observation, the duration is different, which may complicate dimension reduction? Once preprocessed, is there any specific model that would suit the task at hand? I'm looking at something like a regression but alternatively, classifiers seem to be fine as well if I split the acidity(pH) into "<5.5" or ">5.5"?
I hope to get some general directions and if you can paste a few links to texts or examples that'll be good! Also, I'm more familiar with python and scikit learn, so if you can point me in the right section in the documentation that'll be great too!
Answer: I don't know much about coffee or pharmaceuticals but I think the widely varying time samples is a problem. If I brewed one batch of coffee for a minute and another for 5 hours, I'm pretty sure the 5 hour batch would come out burnt-tasting in all cases.
Can you break the samples up into cohorts by duration and then train on each cohort? You'd end up with a model for the "1 minute batch", a model for the "1 hour batch", etc. | {
"domain": "datascience.stackexchange",
"id": 1599,
"tags": "machine-learning, classification, predictive-modeling, svm"
} |
Which strange fruit is this? | Question: Yesterday, my brother brought a strange fruit resembling Lychee (but much bigger in size). Here is a pic of it:
When I cut it, it smelled like banana and its fibers were like those of chicken meat, though the taste was more or less like banana (sorry, I have no pic of that).
Can someone kindly help me identify it?
Answer: This appears to be a jackfruit. Jackfruit are a large, tropical fruit, commonly reported as smelling similar to banana. | {
"domain": "biology.stackexchange",
"id": 8926,
"tags": "species-identification, botany"
} |
How can I use my current camera, whose zoom lens can't be removed, behind an f/6 refractor for astrophotography? | Question: I am looking at an Astrotech AT60ED (60 mm f/6 with a field flattener) and a Digi-Kit telescope adapter to do astrophotography, mainly deep-sky, and I'm wondering if I can use my current Panasonic DMC-FZ50 CCD camera body.
The problem is that it has a zoom lens that can't be detached which is why I am worried.
Answer: The AT60ED is a nice small refractor. However, its objective lens simply projects an image at the focal plane. Adding a field flattener minimizes curvature that degrades sharpness at the edges of the image. The flattener requires a specific back-focus, or distance (about 57 mm for the AT60 FF) between its (T2, 42 mm thread) flange and the image plane where the sensor should be located.
If your camera has a non-removable lens, its sensor cannot be placed directly at the focal plane. You cannot just attach your lens to the telescope or field flattener to take an image.
I believe that the only alternative for a camera with lens is to use eyepiece projection, also called digi-scoping. An eyepiece magnifies the image at the refractor's focal plane and projects it, usually so the eye can then focus the image on the retina. Instead, if a camera is attached to the eyepiece using a suitable adapter, its lens can then focus the image.
Eyepiece projection also has disadvantages, including a small field of view, vignetting, and often balance issues due to the camera weight and adapter attached to a small eyepiece, rather than directly to the focuser of the telescope. However, lots of people use eyepiece projection just by holding their cell phone camera at the exit pupil of the eyepiece.
Edit: Added some comments regarding deep-sky imaging.
Also, deep-sky imaging presents additional challenges. The Moon and planets can be photographed with short exposures. However, images of even the brightest nebulas and galaxies require long exposures. Astrophotographers usually take a large number of individual images, called sub-exposures, each typically lasting tens of seconds to many minutes. These are then aligned and stacked to improve signal to noise by averaging. Total integration time can often be several hours or more, depending on the brightness of the object.
A telescope mount that can accurately track a celestial object, compensating for the Earth's rotation, is necessary to prevent motion blur over these long exposures. The cost of a mount suitable for even a relatively short focal length refractor such as the AT60ED will be several times the cost of the telescope, and much more than a camera capable of prime focus astrophotography. | {
"domain": "astronomy.stackexchange",
"id": 4117,
"tags": "amateur-observing, photography, deep-sky-observing"
} |
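The stacking point in the answer above can be illustrated numerically. The following is a rough sketch in plain Python with invented signal and noise levels (not a real image pipeline): averaging N sub-exposures reduces the random noise roughly as 1/sqrt(N).

```python
import random
import statistics

random.seed(0)

def stacked_noise(n_frames, signal=100.0, noise_sigma=10.0, n_pixels=2000):
    """Average n_frames noisy 'exposures' of a constant signal and return
    the standard deviation (residual noise) of the stacked pixel values."""
    stacked = []
    for _ in range(n_pixels):
        frames = [signal + random.gauss(0.0, noise_sigma) for _ in range(n_frames)]
        stacked.append(statistics.fmean(frames))
    return statistics.stdev(stacked)

s1 = stacked_noise(1)    # noise of a single exposure, close to noise_sigma
s16 = stacked_noise(16)  # stacking 16 frames: roughly noise_sigma / 4
print(s1, s16)
```

With more frames the residual noise keeps shrinking as 1/sqrt(N), which is why total integration times of hours pay off.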
Could a cryo-volcano be the reason behind this colour difference in Iapetus's hemispheres? | Question: Iapetus's hemisphere facing Saturn is dark, whereas the opposite one is bright.
Could a cryo-volcano be the reason behind this colour difference in Iapetus's hemispheres?
Answer: Reposting my own answer from Space Exploration on nearly identical question:
The color dichotomy of Iapetus is due to the darker half, the Cassini Regio, being a result of the moon's accumulation of the dust in the Saturn's largest, yet extremely tenuous, diffuse dust ring called the Phoebe ring depositing onto the moon's leading hemisphere, while the trailing hemisphere's surface remains uninfluenced by this source of dust.
The Phoebe ring was first observed in late 2009 by the Spitzer telescope in the infrared range, and is believed to have been produced by micrometeorite bombardment of Phoebe, another of Saturn's moons, which prevailing theories suggest is a captured Kuiper-belt planetesimal orbiting Saturn in a retrograde orbit inclined 173° to the ecliptic. It is the largest discovered ring of Saturn, stretching from a calculated 59 to 300 Saturn radii (observed between 128 and 207 Saturn radii by the Spitzer telescope), is tilted 27° to Saturn's inner rings, and has an orbital inclination of 175°, again indicating its origin and retrograde orbit:
The Phoebe ring was discovered by NASA's Spitzer Space Telescope, detecting it in the infrared spectrum. (Source: Wikimedia)
Iapetus, being in a prograde orbit with a 17.28° inclination to the ecliptic, is directly in the path of this retrograde dust ring at an average distance from Saturn of 215 Saturn radii. Since it's tidally locked to Saturn, it would always face the same side towards any debris and dust in its path. Joseph A. Burns, Cornell's Irving Porter Church Professor of Engineering and professor of astronomy, said in a press release for the Cornell Chronicle:
"The ring of collisional debris that has come off Phoebe and its
companion moons is out there, and now we understand the process
whereby the stuff is coming in. When you see the coating pattern on
Iapetus, you know you've got the right mechanism for producing it."
So this is the theory. Iapetus is in a prograde orbit intersecting at inclination Phoebe dust ring in a retrograde orbit around Saturn, and since Iapetus is tidally locked and always hitting this dusty path head on, only one hemisphere (the darker one, called Cassini Regio) is covered in Phoebe's dust, while the other hemisphere (the lighter one, called Roncevaux Terra) remains free from it, unveiling this Saturn moon's true surface.
Artist's conception showing nearly invisible and largest Saturn's Phoebe ring (Source: Wikimedia)
The same Cornell Chronicle press release explains why the transition from the dark hemisphere to the whiter parts of the moon isn't seamless, but rather a mottled, patchy array of bright and dark spots:
Small, white craters that dot Iapetus' darker half indicate a veneer
of dark dust, only meters deep, covering a white, icy surface that
matches the rest of the satellite. The imaging data also revealed that
all the materials on the leading side are much redder than the
shielded and brighter trailing side -- another indication that the
leading side's dust came from elsewhere.
The pattern, the scientists say, supports a theory described in a
companion paper in Science that the darker parts of the moon tend to
heat up when struck by sunlight, encouraging the ice to evaporate
underneath.
This causes any dark spots to get even darker, creating the mottled
look.
Mosaic of Iapetus taken by Cassini probe during NASA's Cassini Solstice mission, showing the bright trailing Roncevaux Terra hemisphere with part of the dark area Cassini Regio appearing on the right. (Source: Wikimedia) | {
"domain": "astronomy.stackexchange",
"id": 142,
"tags": "natural-satellites, volcanism"
} |
Yukawa Potential in non-relativistic limit | Question: In Peskin's book "An Introduction to Quantum Field Theory", on page 121 (section 4.7) , it tries to recover the Yukawa Potential in the nonrelativistic limit, but there's a simplification that I don't understant. It says that:
$$
(p'-p)^2=-|{\bf{p'}}-{\bf{p}}|^2+\mathcal{O}({\bf{p}}^4)
$$
where $p'$ and $p$ are the 4-momenta of the incoming particles (and $\bf{p'}$ and $\bf{p}$ are the 3-momenta), both with the same mass $m$.
If I try this expansion, keeping terms only to lowest order, I get:
$$
(p'-p)^2=p'^2-2p'p+p^2=m^2-2(E'E-{\bf |p'||p|})+m^2
$$
Expanding $E=\sqrt{m^2+{\bf |p|}^2}=m+\frac{{\bf |p|}^2}{2m}+\mathcal{O}({\bf{p}}^4)$, I get:
$$
(p'-p)^2=-({\bf |p'|}-{\bf |p|})^2+\mathcal{O}({\bf{p}}^4)
$$
which is clearly different from what the book finds.
Any help appreciated. Thanks!
Answer: The four vector $p$ is given as $(E,{\bf p})$ so that
$$(p' -p)^2 = (E' - E)^2 - ({\bf p'} - {\bf p})^2 = (E'^2 - {\bf p'}^2) + (E^2 - {\bf p}^2)- 2E'E + 2 {\bf p'} \cdot {\bf p}$$
Then put $E'E \approx (m + \frac{{\bf p'}^2}{2m})(m + \frac{{\bf p}^2}{2m}) \approx m^2 + \frac{{\bf p'}^2 + {\bf p}^2}{2}$ (ignoring the term $\frac{{\bf p'}^2{\bf p}^2}{4 m^2}$). Using $p'^2 = E'^2- {\bf p'}^2 = p^2 = E^2- {\bf p}^2 = m^2$ leaves you with $-{\bf p'}^2 +2 {\bf p'} \cdot {\bf p} -{\bf p}^2= -|{\bf p'}-{\bf p}|^2$. | {
"domain": "physics.stackexchange",
"id": 31021,
"tags": "homework-and-exercises, quantum-field-theory, special-relativity"
} |
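The approximation discussed above is easy to sanity-check numerically. This sketch (plain Python, units with $m=1$, arbitrary small momenta) compares the Minkowski square $(p'-p)^2$ with $-|\mathbf{p}'-\mathbf{p}|^2$:

```python
import math

m = 1.0  # particle mass (natural units)

def energy(p3):
    """On-shell energy E = sqrt(m^2 + |p|^2) for a 3-momentum."""
    return math.sqrt(m**2 + sum(x * x for x in p3))

p = (0.01, 0.00, 0.02)    # |p| << m: non-relativistic regime
pp = (0.00, 0.03, 0.01)

dE = energy(pp) - energy(p)
dp = [a - b for a, b in zip(pp, p)]

lhs = dE**2 - sum(x * x for x in dp)   # Minkowski square (p' - p)^2
rhs = -sum(x * x for x in dp)          # -|p' - p|^2
print(lhs, rhs)  # they differ only by dE^2, which is O(p^4)
```

Here $dE \sim (|\mathbf{p}'|^2-|\mathbf{p}|^2)/2m$, so $dE^2$ is indeed of order $\mathbf{p}^4$, matching the answer's bookkeeping.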
Calculating difference in time | Question: In the table, the first value has datatype = Start.
I need to calculate the difference between that first value and midnight.
The last value has datatype = Stop.
I need to calculate the difference between midnight and that last item.
For the others I need to calculate the difference between each datatype = Stop entry and the previous datatype = Start entry.
Here is the ViewModel that I use for the repo method:
public class HeatMapViewModel
{
public decimal? Latitude2 { get; set; }
public decimal? Longitude2 { get; set; }
public int coeff = 2;
public int Difference;
public DateTime Date { get; set; }
}
I wrote this code in repo method
public List<HeatMapViewModel> GetStops()
{
using (TraxgoDB ctx = new TraxgoDB())
{
List<HeatMapViewModel> items = new List<HeatMapViewModel>();
var firstitem = ctx.Logging.Where(x => x.Datatype == Datatype.Start).AsEnumerable().FirstOrDefault();
var midnight = new DateTime(firstitem.CurrentDateTime.Year, firstitem.CurrentDateTime.Month, firstitem.CurrentDateTime.Day, 00, 00, 00);
TimeSpan difference = (firstitem.CurrentDateTime - midnight);
var difference_after_midnight = (int)difference.TotalMinutes;
items.Add(new HeatMapViewModel
{
Latitude2 = firstitem.Latitude,
Longitude2 = firstitem.Longitude,
Difference = difference_after_midnight,
Date = firstitem.CurrentDateTime
});
var lastItem = ctx.Logging.Where(x => x.Datatype == Datatype.Stop).AsEnumerable().LastOrDefault();
var before_midnight = new DateTime(lastItem.CurrentDateTime.Year, lastItem.CurrentDateTime.Month, lastItem.CurrentDateTime.Day, 23, 59, 00);
TimeSpan difference_before = (before_midnight - lastItem.CurrentDateTime);
var difference_before_midnight = (int)difference_before.TotalMinutes;
items.Add(new HeatMapViewModel
{
Latitude2 = lastItem.Latitude,
Longitude2 = lastItem.Longitude,
Difference = difference_before_midnight,
Date = firstitem.CurrentDateTime
});
var allitems = ctx.Logging;
var filteredQuery = allitems.Where(x => x.Datatype == Datatype.Start || x.Datatype == Datatype.Stop).OrderByDescending(x => x.LogID).ToList();
for (int i = 1; i < filteredQuery.Count; i++)
{
if (filteredQuery[i].Datatype == Datatype.Stop && filteredQuery[i - 1].Datatype == Datatype.Start)
{
TimeSpan differenceTicks = filteredQuery[i - 1].CurrentDateTime - filteredQuery[i].CurrentDateTime;
items.Add(new HeatMapViewModel
{
Latitude2 = filteredQuery[i].Latitude,
Longitude2 = filteredQuery[i].Longitude,
Difference = (int)differenceTicks.TotalMinutes,
Date = firstitem.CurrentDateTime
});
}
}
return items;
}
}
Here is the data from the database.
It works great, but I would like to know how I can improve it.
Answer: The main area where your code can be improved is in your database calls. Currently you're making three database calls. When you use
var lastItem = ctx.Logging.Where(x => x.Datatype == Datatype.Stop).AsEnumerable().LastOrDefault();
you're loading all the records into memory (via .AsEnumerable()) just to get the last one, throwing the rest away, and then making another database query using
var filteredQuery = allitems.Where(...)
which fetches those records all over again. You only need the one database call to materialize the records to memory, and then you can get the first and last records from that collection.
You're also not testing for null when you use .FirstOrDefault() and .LastOrDefault(), which has the potential to cause an exception in the code following those queries.
You can also simplify some code such as
var midnight = new DateTime(firstitem.CurrentDateTime.Year, firstitem.CurrentDateTime.Month, firstitem.CurrentDateTime.Day, 00, 00, 00);
which can be rewritten as
var midnight = firstitem.CurrentDateTime.Date;
You could re-write your method to have only one database call as
public List<HeatMapViewModel> GetStops()
{
using (TraxgoDB ctx = new TraxgoDB())
{
List<HeatMapViewModel> items = new List<HeatMapViewModel>();
var logs = ctx.Logging.Where(x => x.Datatype == Datatype.Start || x.Datatype == Datatype.Stop).OrderByDescending(x => x.LogID).ToList();
if (logs.Count < 2)
{
return items; // or return null;?
}
var first = logs.First();
var currentDateTime = first.CurrentDateTime;
double minutes = (first.CurrentDateTime - first.CurrentDateTime.Date).TotalMinutes;
items.Add(new HeatMapViewModel
{
Latitude2 = first.Latitude,
Longitude2 = first.Longitude,
Difference = (int)minutes,
Date = currentDateTime
});
var last = logs.Last();
minutes = (last.CurrentDateTime.Date.AddDays(1) - last.CurrentDateTime).TotalMinutes;
items.Add(new HeatMapViewModel
{
Latitude2 = last.Latitude,
Longitude2 = last.Longitude,
Difference = (int)minutes,
Date = currentDateTime
});
for (int i = 1; i < logs.Count; i++)
{
var previous = logs[i - 1];
var current = logs[i];
if (current.Datatype == Datatype.Stop && previous.Datatype == Datatype.Start)
{
minutes = (previous.CurrentDateTime - current.CurrentDateTime).TotalMinutes;
items.Add(new HeatMapViewModel
{
Latitude2 = current.Latitude,
Longitude2 = current.Longitude,
Difference = (int)minutes,
Date = currentDateTime
});
}
}
return items;
}
}
Note that I have assumed that if there were less than 2 items in the table, then your data would not make sense in a view which is the purpose of if (logs.Count < 2). That check will also prevent exceptions in the code that follows.
I have also assumed (based on the description in the question) that .First() will always return an item with DataType.Start and .Last() will always return an item with DataType.Stop so I have excluded checks for that in the code above | {
"domain": "codereview.stackexchange",
"id": 28348,
"tags": "c#, datetime, asp.net, entity-framework, asp.net-mvc"
} |
Most efficient FizzBuzz solution in Ruby | Question: I would appreciate any feedback concerning my analysis of the most efficient FizzBuzz solution programmed in Ruby. In my recent blog post I submit that a case-statement solution is more efficient than using a conditional if-statement, concluding as follows:
Consequently, the predictability pattern is the reason why a "Case" statement, or branchIf, is optimal and less expensive than a "Conditional If" statement, as clarified by Igor Ostrovsky's Blog Post: Fast and Slow If-Statements: Branch Prediction in Modern Processors
"If the condition is always true or always false, the branch prediction logic in the processor will pick up the pattern. On the other hand, if the pattern is unpredictable, the if-statement will be much more expensive."
Back to my optimized FizzBuzz solution- when the "Case" statement processes a number initializing the method, the constraint or case stops calculating when the condition is satisfied, and it will not continue to verify the constraint by branching unless divisible as it were for an IF/ELSIF construction, which saves time, performing faster, as the best solution possible, ultimately proving Danielle Sucher's point:
"I'd expect if/elsif to be faster in situations where one of the first few possibilities is a match, and for case to be faster in situations where a match is found only way further down the list (when if/elsif would have to make more jumps along the way on account of all those branchUnlesses)."
Furthermore, a real programmer can impress an interviewer by asking if the range invoked will be consecutive or random. Although a "Case" statement solution for FizzBuzz is generally more efficient, it will certainly be faster for a random range of numbers called.
I've approached several academics and professionals alike but nobody has challenged, so I would like to broaden the forum for additional contribution:
require 'benchmark'
def fizzbuzz(array)
array.map!{ |number|
divisibleBy3 = (number % 3 == 0)
divisibleBy5 = (number % 5 == 0)
case
when divisibleBy3 && divisibleBy5
puts "FizzBuzz"
when divisibleBy3
puts "Fizz"
when divisibleBy5
puts "Buzz"
else
puts number
end
}
end
puts Benchmark.measure{fizzbuzz(Array(1..10000))}
puts "fizzbuzz"
# puts RubyVM::InstructionSequence.disasm(method(:fizzbuzz))
def super_fizzbuzz(array)
array.map! { |number|
if(number % 3 == 0 && number % 5 == 0)
puts "FizzBuzz"
elsif(number % 3 == 0)
puts "Fizz"
elsif (number % 5 == 0)
puts "Buzz"
else
puts number
end
}
end
puts Benchmark.measure{super_fizzbuzz(Array(1..10000))}
puts "super_fizzbuzz"
# puts RubyVM::InstructionSequence.disasm(method(:super_fizzbuzz))
# Additional Documentation
# https://en.wikipedia.org/wiki/Assembly_language
# underTheHood => http://ruby-doc.org/core-2.0.0/RubyVM/InstructionSequence.html
# http://ruby-doc.org/stdlib-2.0.0/libdoc/benchmark/rdoc/Benchmark.html
Upon executing the above, the terminal output revealed the following when filtered with grep:
Desktop ruby fizzbuzz.rb | grep 0.0
0.010000 0.000000 0.010000 ( 0.010018)
0.010000 0.000000 0.010000 ( 0.011353)
Answer: The test setup seems unfair. Your case implementation precomputes the divisibleBy3 and divisibleBy5 booleans, while the if..elsif implementation does the modulo operation(s) and comparison(s) for each attempted branch. I haven't checked, but putting the implementations on equal footing might reduce them to identical instruction sets, since they really become functionally and structurally equivalent.
But I can't really speak to performance with much confidence. I get practically identical performance for 10 million numbers*. So this is really deep in "why bother?" territory, if you ask me.
I don't mean to be harsh; it's an interesting idea to ponder. And I can see that the case can be made for one version being theoretically more efficient than the other at a very, very low level. But Ruby is very, very high level. In practical terms, I highly, highly doubt it's something you'll ever actually need for anything - at least while coding Ruby. Again, I can appreciate the thought that went into it, but the phrase "overthinking things" also comes to mind.
My tests used plain Ruby by the way; have you tested on JRuby, which uses the JVM under the hood? No idea if it'll make a difference either way, but the point is that Ruby is so far removed from the metal of the CPU that lots of things might impact performance.
In short: If your main concern is the rawest of raw performance optimization, don't use Ruby to begin with.
Other than that, Caridorc's answer covers my main concern about this code (the use of map!, and {..} instead of do..end for multiline blocks).
I'd add that indentation is inconsistent: whens should be indented the same as their case, not more, and general indentation should be 2 spaces.
*) I changed the puts calls to call an empty method since the point here is testing the branching performance, not stream IO. I also precomputed the array of numbers so it wouldn't factor into the benchmarking, and used each instead of mapping. For style, I fixed indentation and removed trailing whitespace. Code below.
require 'benchmark'
N = Array(1..10_000_000)
def no_op(*a); end
def fizzbuzz(array)
array.each do |number|
divisibleBy3 = (number % 3 == 0)
divisibleBy5 = (number % 5 == 0)
case
when divisibleBy3 && divisibleBy5
no_op "FizzBuzz"
when divisibleBy3
no_op "Fizz"
when divisibleBy5
no_op "Buzz"
else
no_op number
end
end
end
puts Benchmark.measure{fizzbuzz(N)}
puts "fizzbuzz"
def super_fizzbuzz(array)
array.each do |number|
divisibleBy3 = (number % 3 == 0)
divisibleBy5 = (number % 5 == 0)
if divisibleBy3 && divisibleBy5
no_op "FizzBuzz"
elsif divisibleBy3
no_op "Fizz"
elsif divisibleBy5
no_op "Buzz"
else
no_op number
end
end
end
puts Benchmark.measure{super_fizzbuzz(N)}
puts "super_fizzbuzz"
Example result:
3.370000 0.000000 3.370000 ( 3.386060)
fizzbuzz
3.350000 0.010000 3.360000 ( 3.375098)
super_fizzbuzz
(Yes, the if..elsif implementation was actually faster on this run. I had another run where it ended up 0.2 seconds faster, and others where it lost out to the case implementation. I'd average that out to "no appreciable difference whatsoever".) | {
"domain": "codereview.stackexchange",
"id": 16386,
"tags": "algorithm, ruby, fizzbuzz"
} |
Methyl vs halogen : order of precedence | Question:
How would you name this compound?
4-bromo-2-methylpentan-3-one
or
2-bromo-4-methylpentan-3-one
Answer: It should be named as 2-bromo-4-methylpentan-3-one
The alphabetical order of substituent prefixes should be followed when naming this compound. Since "b" in bromo comes before "m" in methyl, it should be named as above.
| {
"domain": "chemistry.stackexchange",
"id": 8534,
"tags": "organic-chemistry, nomenclature"
} |
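The alphabetization rule can be demonstrated mechanically. A toy Python sketch (locants chosen by hand for this molecule; this is not a general IUPAC naming implementation):

```python
# Substituents on pentan-3-one, as (locant, prefix) pairs.
substituents = [(4, "methyl"), (2, "bromo")]

# Cite substituent prefixes in alphabetical order: "bromo" before "methyl".
ordered = sorted(substituents, key=lambda s: s[1])
name = "-".join(f"{locant}-{prefix}" for locant, prefix in ordered) + "pentan-3-one"
print(name)  # 2-bromo-4-methylpentan-3-one
```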
Methane Reservoir | Question: Analysis from the Curiosity Rover on Mars has detected tiny amounts of methane in the atmosphere. It is possible the origin of this detected methane is from methanogens deep below the subsurface.
My question is, how deep would we need to drill to reach a methane reservoir below the Martian soil? Once here, could this help us determine the origin of methane gas?
Answer: The origin of methane on Mars is still unknown. Methanogens may be the source, and if so this would prove that life exists on Mars, but no one yet knows. As to the depth of the source of methane on Mars, this too is unknown.
The presence of methane in Mars' atmosphere could be of geological origin.
One way to confirm the biological origin of methane would be to measure the isotope ratios of carbon and hydrogen, the two elements in methane. Life on Earth tends to use lighter isotopes, for example, more Carbon-12 than Carbon-13, because this requires less energy for bonding.
What is known is that the levels of methane in the atmosphere of Mars vary, but the variation is not due to seasonal changes. The reason for the variation is still unknown. | {
"domain": "earthscience.stackexchange",
"id": 1561,
"tags": "soil, mars"
} |
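The isotope-ratio idea mentioned in the answer is usually expressed in delta notation, $\delta^{13}\mathrm{C} = (R_{\text{sample}}/R_{\text{standard}} - 1)\times 1000$ per mil. A small Python sketch; the reference ratio and sample value below should be treated as illustrative only:

```python
R_STANDARD = 0.0112372  # commonly quoted VPDB 13C/12C reference ratio (illustrative)

def delta13c(r_sample):
    """delta-13C in per mil: (R_sample / R_standard - 1) * 1000."""
    return (r_sample / R_STANDARD - 1.0) * 1000.0

# A sample depleted in the heavy isotope gives a strongly negative delta,
# the kind of signature often associated with biological processing.
print(round(delta13c(0.0105), 1))      # strongly negative, tens of per mil
print(round(delta13c(R_STANDARD), 1))  # 0.0 by construction
```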
about to create a standing wave | Question: I think part of this question has been asked before, but I have some other doubts. See this post: how to add two plane waves if they are propagating in different direction?, in particular the third reply by Wouter. If we have two waves
$$
f_1 = \sin(\vec{k}\cdot\vec{r} + \omega t), f_2 = \sin(\vec{q}\cdot\vec{r} - \omega t)
$$
1) My first question: in one text, I saw that to create a standing wave, the two sinusoidal waves should propagate in opposite directions. I am confused by the wave functions above. I know that the phase velocity is defined as
$$\vec{v}_p = \hat{k}\omega/|\vec{k}|$$
so if we want two waves propagating in opposite directions, we could have one with $\omega$ and the other with $-\omega$, or one with $\vec{k}$ and the other with $-\vec{k}$. But in the waves above, the sign of $\omega$ is already opposite, so we should make sure $\vec{k}=\vec{q}$. But what is the physical significance of using $-\omega$? To me, it makes more sense to write
$$
f_1 = \sin(\vec{k}\cdot\vec{r} + \omega t), f_2 = \sin(\vec{q}\cdot\vec{r} + \omega t) \quad \mbox{with} \quad \vec{q}=-\vec{k}
$$
2) Let start from the last formulas, let $\vec{q}=-\vec{k}$ so to have the sum in the form
$$
f_1+f_2 = \sin(\vec{k}\cdot\vec{r})\cos(\omega t)
$$
What I understand from the text is that a standing wave should be stationary in space. But there is a modulated amplitude $\cos(\omega t)$ there, so is it really stationary (a standing wave)?
3) If I have two such "standing waves" oriented along different directions, say with $\cos^{-1}(\hat{p}\cdot\hat{q})=\pi/6$
$$
s_1 = \sin(\vec{p}\cdot\vec{r})\cos(\omega t), \quad
s_2 = \sin(\vec{q}\cdot\vec{r})\cos(\omega t)
$$
so will these two waves make a two-dimensional standing wave if $|\vec{q}|=|\vec{p}|$?
Answer:
It's not very useful to think of the frequency as negative. The wave vector $\vec{k}$ should be interpreted as the indicating the direction of the wave, so if the wave is going in the opposite direction, then $\vec{k}$ should be made negative, not $\omega$.
Here is an animation that may help you. It shows the two propagating waves, and the resulting standing wave. It's called a "standing" wave in the sense that the nodes (places where the wave function has a value of zero) of the wave do not move.
It's a little odd to say you have standing waves that are propagating, but yes, you will have a two dimensional standing wave, with a different wavevector in each dimension. Here is a visualization of it frozen in time (I used $|\vec{p}| = |\vec{q}| = 1$ for simplicity). | {
"domain": "physics.stackexchange",
"id": 18789,
"tags": "waves"
} |
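Point 2 of the answer can be checked numerically: with these phase conventions the sum $\sin(kx+\omega t)+\sin(-kx+\omega t)$ factors as $2\sin(\omega t)\cos(kx)$, so it is a function of position times a function of time and its nodes never move (the overall factor and the sin/cos placement depend on the sign conventions chosen). A plain-Python sketch with arbitrary $k$ and $\omega$:

```python
import math

k, w = 2.0, 3.0  # arbitrary wavenumber and angular frequency

def f(x, t):
    # Two waves with opposite wave vectors and the same frequency.
    return math.sin(k * x + w * t) + math.sin(-k * x + w * t)

# Separability: if f(x, t) = g(x) * h(t), then for any two points
# f(x1, t1) * f(x2, t2) == f(x1, t2) * f(x2, t1).
x1, x2, t1, t2 = 0.3, 1.1, 0.2, 0.9
assert math.isclose(f(x1, t1) * f(x2, t2), f(x1, t2) * f(x2, t1), rel_tol=1e-9)

# The nodes (here where cos(k x) = 0) vanish at every instant: they stand still.
node = math.pi / (2 * k)
for t in (0.0, 0.37, 1.4):
    assert abs(f(node, t)) < 1e-12
print("nodes are stationary")
```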
Purpose of converting continuous data to categorical data | Question: I was reading through a notebook tutorial working with the Titanic dataset, linked here, and noticed that they highly favored ordinal data over continuous data.
For example, they converted both the Age and Fare features into ordinal data bins.
I understand that categorizing data like this is helpful when doing data analytics manually, as fewer categories makes data easier to understand from a human perspective. But intuitively, I would think that doing this would cause our data to lose precision, thus leading to our model losing precision or accuracy.
Can someone explain when converting numerical data to ordinal data is appropriate, and the underlying statistics of why it is effective?
Answer: Your intuition is generally correct - in many cases, premature discretization of continuous variables is undesirable. Doing so throws away potentially meaningful data, and the result can be highly dependent on exactly how you bucket the continuous variables, which is usually done rather arbitrarily. Bucketing people by age decade, for example, implies that there is more similarity between a 50-year-old and a 59-year-old than there is between a 59-year-old and a 60-year-old. There can be some advantages in statistical power to doing this, but if your binning doesn't reflect natural cutpoints in the data, you may just be throwing away valuable information.
You can find a very similar question here:
https://stats.stackexchange.com/questions/68834/what-is-the-benefit-of-breaking-up-a-continuous-predictor-variable?noredirect=1&lq=1 | {
"domain": "datascience.stackexchange",
"id": 5447,
"tags": "machine-learning, data, categorical-data, numerical"
} |
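The decade-binning example from the answer, in a couple of lines of Python, makes the arbitrariness concrete:

```python
def age_bin(age, width=10):
    """Bucket a continuous age into an ordinal, decade-wide bin."""
    return age // width

# 50 and 59 land in the same bucket, while 59 and 60, only one year
# apart, do not: distances within vs. across bin edges are distorted.
assert age_bin(50) == age_bin(59)
assert age_bin(59) != age_bin(60)
print(age_bin(50), age_bin(59), age_bin(60))  # 5 5 6
```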
An object having a general plane motion exerting force on air | Question: assume that we have a ball that's both spinning and moving in a straight path through the air.
Does the ball exert a force on the air at right angles to the path it's moving along?
If the answer is yes, please explain how is that possible!
Answer: Yes, a spinning ball will produce a force perpendicular to the direction of it's path of travel through air.
This is called the Magnus effect. The fluid dynamics that lead to this are fairly complicated, and I don't know them well enough to do it justice. The simplified version of events is that due to those complicated fluid dynamics effects, the spinning ball causes the air stream to deflect based on the direction that the ball is spinning. That deflection requires a force from the ball, and the equal and opposite reaction to that force produces a force perpendicular to the direction of travel for the ball.
Something similar happens with airplanes. The wings deflect the air downwards, and the plane pushes up against that. | {
"domain": "physics.stackexchange",
"id": 59985,
"tags": "newtonian-mechanics"
} |
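The "equal and opposite reaction" step can be made semi-quantitative with a simple momentum balance: the force on the ball equals the mass flow rate of deflected air times the velocity change imparted to it. All numbers below are invented purely for illustration.

```python
# Newton's-third-law estimate of the sideways (Magnus) force.
mass_flow_rate = 0.8   # kg/s of air deflected by the spinning ball (made up)
delta_v = 2.5          # m/s of sideways velocity imparted to that air (made up)

force = mass_flow_rate * delta_v  # N, directed opposite to the air's deflection
print(force)  # 2.0
```

The same balance, with the deflection pointed downward, is the airplane-wing analogy at the end of the answer.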
JPEG compression steps after quantization | Question: I have a 3-channel (colour) PNG image that I opened, and:
I split the image into 8x8 blocks
I applied the discrete cosine transform to all of the blocks
And then applied quantization
I stored the values in an array by zigzag traversal
I do not know what to do to reduce the size in this step
Now I do not really understand what I am supposed to do after these steps. How did I compress this image? If I applied the inverse of these steps and put the values in a new image, I believe the size would not change at all. Am I supposed to save the values in a text file and decode them with my application, so that I have actually performed JPEG compression?
Answer:
And then applied quantization
That's the lossy part of your compression. You don't quantize all coefficients with the same bit depth, and that's how you save a lot of data.
Typically, higher frequency bins are quantized with fewer bits. Same goes for Chroma components, which are often sub-sampled (i.e. only 1/N as many values considered at all) from the start.
The zigzag pattern only serves the purpose of putting coefficients that are likely to be similar next to each other, so that run-length encoding (RLE) works well.
I actually found wikipedia's article on JPEG compression to be explaining this rather nicely.
I mean I am supposed to save the values in a txt and decode them with my application so that I actually made a JPEG compression?
Um, this is numerical data, not text, so I'm not sure where you're going with a text file (dumping numbers in a text file is almost never a good solution), but: You now have a run-length encoded stream of bytes. Compare the length of that string with the number of bytes your uncompressed image had. | {
"domain": "dsp.stackexchange",
"id": 7055,
"tags": "image-compression, quantization, jpeg"
} |
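To make the zigzag-plus-RLE point concrete, here is a small Python sketch with a made-up quantized block: the zigzag order puts the many trailing zeros together, so run-length encoding collapses them.

```python
from itertools import groupby

# Hypothetical zigzag-ordered, quantized coefficients of one block:
coeffs = [52, -3, 4, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]

# Run-length encode: (value, run length) pairs over consecutive equal values.
rle = [(value, len(list(run))) for value, run in groupby(coeffs)]
print(rle)  # [(52, 1), (-3, 1), (4, 1), (0, 1), (1, 1), (0, 11)]
print(2 * len(rle), "numbers instead of", len(coeffs))
```

Real JPEG uses a more specialized scheme (zero-run/value pairs followed by Huffman coding), but the payoff comes from the same long runs of zeros.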
Relation between heat and work | Question: If we assume the ideal gas law
$E=\frac{f}{2}pV$, and then differentiate get
$$dE=\frac{f}{2}dpV+\frac{f}{2}pdV$$
does $\frac{f}{2}dp\,V$ stand for $dQ$, and $\frac{f}{2}p\,dV$ for $dW$, in the first law of thermodynamics?
I have just started thermodynamics; is my understanding correct?
Answer: It’s not quite correct. I’ll denote the internal energy by $U$ rather than $E$. The following is true for any process:
$$
dU=\delta W +\delta Q=-pdV+TdS
$$
It is only for reversible processes that you can identify the terms namely:
$$
\delta W= -pdV \\
\delta Q=TdS
$$
Note the sign and overall factor that differs from your expression of work. In general, when considering internal energy, the natural variables to work with are volume and entropy i.e. $V,S$.
Your reasoning would work if you had rather said for a reversible process:
$$
\delta W=-pdV \\
\delta Q= \frac{f+2}{2}pdV+\frac{f}{2}Vdp
$$
The second equation is actually consistent with the formula of entropy of an ideal gas and the formula for heat.
Hope this helps. | {
"domain": "physics.stackexchange",
"id": 92763,
"tags": "thermodynamics, work"
} |
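The identification above can be sanity-checked with finite differences. A plain-Python sketch with arbitrary state values: for $U = \frac{f}{2}pV$, the heat $\delta Q = dU + p\,dV$ should match $\frac{f+2}{2}p\,dV + \frac{f}{2}V\,dp$.

```python
f = 3                # degrees of freedom (monatomic ideal gas)
p, V = 2.0, 5.0      # arbitrary state
dp, dV = 1e-6, 2e-6  # small reversible changes

dU = 0.5 * f * (V * dp + p * dV)  # d[(f/2) p V] to first order
dQ = dU + p * dV                  # first law with dW = -p dV
dQ_formula = 0.5 * (f + 2) * p * dV + 0.5 * f * V * dp

assert abs(dQ - dQ_formula) < 1e-15
print(dQ)
```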
Validate that a threaded binary tree works as intended | Question: I am attempting to validate that my threaded binary tree’s insertion and deletion works as intended.
Would it be safe to assume that the following procedure would have tested all corner cases at least once?
I have an array of integers S = {1, 2, …, n-1, n} for n = 1’000’000.
I then randomize the order of S to obtain S’ and S*.
S’ is then used to insert its elements sequentially, into the tree.
After all elements are inserted, I create an in order list of the tree, say A, and confirm that A = S.
This concludes the insert test.
For deletion, I pass elements of S* sequentially as arguments to be removed from the tree, testing that each call was successful. After all elements were removed, I confirm that the tree is empty.
Answer: In short: no, it is not safe.
A huge random test does not guarantee success (although, of course, when there is a bug, it finds it in most cases).
To test a threaded tree with insertion and deletion you need to saturate all nodes (with thread links) and make them change threads; the same goes for deletion.
To be sure it works, all corner cases must be covered for both insertion and deletion. With huge input there is a high probability that a bug would be found, but this is not conclusive. Making more passes (more tests) increases the probability of success, but nothing more.
To test it manually, case by case, you do not need a very big tree (I did tests on a double-threaded AVL tree with height at most 5). | {
"domain": "cs.stackexchange",
"id": 6164,
"tags": "data-structures, binary-trees, search-trees, correctness-proof, software-testing"
} |
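For reference, the questioner's insertion check can be sketched on a plain (unthreaded) binary search tree; as the answer stresses, passing it says nothing conclusive about threading corner cases. A Python sketch, scaled down from the question's n = 1'000'000, with the deletion phase omitted for brevity:

```python
import random

class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    """Standard BST insertion; returns the (possibly new) subtree root."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def inorder(root, out):
    if root is not None:
        inorder(root.left, out)
        out.append(root.key)
        inorder(root.right, out)
    return out

random.seed(1)
n = 1000
s_prime = list(range(1, n + 1))  # S, then shuffled into S'
random.shuffle(s_prime)

root = None
for key in s_prime:
    root = insert(root, key)

assert inorder(root, []) == list(range(1, n + 1))  # A == S
print("insert test passed")
```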
Anagram finder in F# | Question: I've been learning F# for a month or so and I'm wondering how good my "functional" aspect of coding is. When I first started, I did this using an iterative approach with a lot of <- and mutables etc. I tried to rewrite it as functional as possible. Any critique is appreciated.
This code prints out all anagrams in the english dictionary read from a "word.txt" file, the LONGEST anagram, and the anagram with the MOST permutations.
open System
let sortStringAsKey (listOfWords : string array) =
let getKey (str : string) =
str.ToCharArray()
|> Array.sort
|> String
listOfWords
|> Array.groupBy getKey
|> Array.filter (fun (sortedWord, originalWord) -> originalWord.Length > 1)
[<EntryPoint>]
let main argv =
let filename = "words.txt"
let listOfWords = System.IO.File.ReadAllLines(filename)
let listOfAnagrams = sortStringAsKey listOfWords
// Prints every single anagram combination
listOfAnagrams |> Array.iter (fun (_ , anagramList) -> anagramList |> Array.iter (fun str -> printfn "%s" str); printfn "")
// Gets the LONGEST anagram
let longestAnagrams = listOfAnagrams
|> Array.filter (fun (sortedWord, _) -> sortedWord.Length >= (listOfAnagrams
|> Array.maxBy (fun (sortedWord, _)-> sortedWord.Length)
|> fst
|> String.length))
// Gets the set that has the MOST anagrams
let mostAnagrams = listOfAnagrams
|> Array.filter (fun (_, originalWords) -> originalWords.Length >= (listOfAnagrams
|> Array.maxBy (fun (_, originalWords) -> originalWords.Length)
|> snd
|> Array.length))
// Prints the longest and most anagrams
longestAnagrams |> Array.iter (fun (_ , anagramList) -> anagramList |> Array.iter (fun str -> printfn "%s" str); printfn "")
mostAnagrams |> Array.iter (fun (_ , anagramList) -> anagramList |> Array.iter (fun str -> printfn "%s" str); printfn "")
0 // return an integer exit code
Answer: This looks like a good start, so the following is by no means a criticism, but there are various refactorings you can perform to make the code smaller, and more generic.
As a general comment, I'd advise against naming arrays listOfWhatever, since list is a concrete data type in F#, separate from arrays. I've renamed listOfWords to words, listOfAnagrams to anagrams, and so on.
Finding the anagrams
The first sortStringAsKey function doesn't need type annotations if you refactor it a bit:
// seq<'a> -> seq<string * seq<'a>> when 'a :> seq<char>
let sortStringAsKey words =
words
|> Seq.groupBy (Seq.sort >> Seq.toArray >> String)
|> Seq.filter (fun (_, originalWords) -> Seq.length originalWords > 1)
You'll notice that I inlined the getKey function, but perhaps that's taking it too far. If you think that having a named local function makes the code more readable, I wouldn't disagree.
I've also changed from working explicitly with arrays, to using the Seq module. This makes the function more generic, but it can still handle arrays.
You could even perform an eta reduction on the function, in order to make it even shorter, but I'm not sure it becomes more readable by it:
let sortStringAsKey =
Seq.groupBy (Seq.sort >> Seq.toArray >> String)
>> Seq.filter (fun (_, originalWords) -> Seq.length originalWords > 1)
Just for the fun, you can make it even more cryptic:
let sortStringAsKey =
Seq.groupBy (Seq.sort >> Seq.toArray >> String)
>> Seq.filter (snd >> Seq.length >> ((<) 1))
Personally, I don't even find that readable myself; I'd prefer the first, most verbose option.
Printing the anagrams
Since there's at least three places where the code prints out the anagrams, it's more reasonable to turn that into a function:
// seq<string> -> unit
let prints a =
a |> Seq.iter (printfn "%s")
printfn ""
Once again, you'll notice that I chose to use Seq.iter instead of Array.iter. It'll still be able to handle arrays, but there's no reason to constrain the input if it isn't necessary.
Loading anagrams
Loading the anagrams from file can be simplified a bit as well, since you don't need the intermediate listOfWords value:
// In main function:
let filename = "../../words.txt"
let anagrams = System.IO.File.ReadAllLines(filename) |> sortStringAsKey
Because of the more generic version of sortStringAsKey, the type of anagrams is seq<string * seq<string>>.
Printing all the anagrams
With the prints function, you can easily print all the anagrams
anagrams |> Seq.map snd |> Seq.iter prints
Instead of performing work inside of Seq.iter (which is possible), I often prefer to perform transformations etcetera first, because I can easily test such pure functions. Once you have data in the appropriate shape, you can always use Seq.iter to e.g. print it.
Finding the longest anagrams
The proposed solution suffers from calling Array.maxBy for every element, so it'd be quite inefficient.
You can make it more efficient by only doing it once:
let longestAnagrams =
let longest = anagrams |> Seq.map (fst >> String.length) |> Seq.max
anagrams |> Seq.filter (fst >> String.length >> ((=) longest))
This still requires two traversals of the array, so isn't as efficient as it could be, but is probably (but measure instead of assume, when it comes to performance) more efficient than doing Array.maxBy for every entry.
See below for an optional refactoring.
Finding the words with most anagrams
Likewise, you can find the largest collections of anagrams:
let mostAnagrams =
let most = anagrams |> Seq.map (snd >> Seq.length) |> Seq.max
anagrams |> Seq.filter (snd >> Seq.length >> ((=) most))
Notice how this is similar to longestAnagrams.
Printing
Both longestAnagrams and mostAnagrams can be printed using the prints function:
longestAnagrams |> Seq.map snd |> Seq.iter prints
mostAnagrams |> Seq.map snd |> Seq.iter prints
Alternative max and filter operation
The problem with even the refactored version of longestAnagrams is that it requires two traversals of the sequence in order to compute the results.
Once you realise that the operation is essentially a fold, you may want to optimise it to a single traversal:
let longestAnagrams =
let folder l (k, v) =
let xl = Seq.length k
match xl, l with
| _, [] -> [k, v]
        | _, (hk, _)::_ when xl > Seq.length hk -> [k, v]
        | _, (hk, hv)::t when xl = Seq.length hk -> (k, v)::(hk, hv)::t
| _ -> l
anagrams |> Seq.fold folder []
As you can tell, it's more code (so more complicated), but at least in theory more efficient, as it only uses a single traversal. It does, however, potentially cause more memory allocations, so as always when performance is involved: measure.
On my machine, though, it seems to be more than twice as fast...
You can refactor the computation of mostAnagrams in the same way, but I will leave that as an exercise ;) | {
"domain": "codereview.stackexchange",
"id": 19111,
"tags": "algorithm, strings, array, f#"
} |
Confused about definition of three dimensional position operator in QM | Question: My QM text defines the position operator as follows:
The position operator $X= (X_1,X_2,X_3)$ is such that for $j=1,2,3: \ X_j \psi(x,y,z)= x_j \psi(x,y,z)$.
To me this can mean two things.
1) $X$ is a vector and acts as $X \psi(x,y,z)= (x \psi(x,y,z), y \psi(x,y,z), z \psi(x,y,z))$. But this doesn't make sense as $X$ is an observable/operator and so must send vectors to vectors (here functions).
2)There are three position operators $X_1, X_2, X_3$ and each act as defined.
How does the postion operator act on a state? Could anyone help me out here? Thanks!
Answer: This is quite an odd way to introduce the position operator, I have to admit. Both definitions you have used are correct, they're just used in different ways in quantum mechanics.
In the first one, $X$ is what is technically called a vector operator: in this case it's a little like a vector, but the components are matrices (or operators). Sometimes it's useful to do this, and we can take a sort of dot product with other vector operators (which you will probably come across soon in QM).
$X$ is composed of the three operators you've defined in 2), and when we want to think about the position operator in three dimensions, definition 1) does actually work. It's a little bit odd, but like I said, $X$ isn't an ordinary operator, $X$ is a vector operator, and so the technical issue of mapping the wavefunction to a vector isn't actually an issue. If it's still puzzling, you can think of each component of the vector $X\psi$ as an individual function, and then notice that this is really just a way of putting three separate scalar equations into one vector equation. | {
"domain": "physics.stackexchange",
"id": 68295,
"tags": "quantum-mechanics, hilbert-space, operators, wavefunction, definition"
} |
Why does water break salt really | Question: A kid asked me this question just the other day, and I said that the positive part of $\ce{H2O}$ attracts the negative $\ce{Cl}$ ion and the negative part of the $\ce{H2O}$ attracts the positive Na ion.
Then he asked me several questions which stumped me:
Why doesn't $\ce{NaCl}$ dissolve water instead (break water into $\ce{H}$ and $\ce{O}$)?
If $\ce{H2O}$ is electrically neutral, why would it attract any ions?
Why doesn't $\ce{NaCl}$ stick together, how can one know if one molecule will attract the ions hard enough that the ions breaks away?
Why is $\ce{Na}$ positive and $\ce{Cl}$ negative?
Hopefully someone can help me answer the above questions as I have not done chemistry in years!
Answer: 1.) The covalent OH bond strength in water is slightly stronger than the NaCl ionic bond strength.
2.) Water might be neutral but each atom exhibits a partial charge. Oxygen sucks up most of the electron density away from the hydrogen atoms leaving O with a partial negative charge and hydrogen with a partial positive charge. This gives rise to the famous hydrogen bonding behavior of water.
3.) You would have to analyze the energies of each of the interacting components to know.
4.) Cl is more electronegative than Na due to the larger number of protons in the nucleus. This means that Cl will suck electron density away from Na as opposed to the other way around. | {
"domain": "chemistry.stackexchange",
"id": 1800,
"tags": "molecules, ions"
} |
Utilizing std::map and std::array for displaying a modulus grid | Question: I've written a modulus-type program (using std::vector). The user inputs a number, and the program displays a modulus grid pertaining to that number.
This is what it looks like (example program output with 12):
(0) 12 24 36 48 60 72 84 96 108 120
(1) 13 25 37 49 61 73 85 97 109 121
(2) 14 26 38 50 62 74 86 98 110 122
(3) 15 27 39 51 63 75 87 99 111 123
(4) 16 28 40 52 64 76 88 100 112 124
(5) 17 29 41 53 65 77 89 101 113 125
(6) 18 30 42 54 66 78 90 102 114 126
(7) 19 31 43 55 67 79 91 103 115 127
(8) 20 32 44 56 68 80 92 104 116 128
(9) 21 33 45 57 69 81 93 105 117 129
(10) 22 34 46 58 70 82 94 106 118 130
(11) 23 35 47 59 71 83 95 107 119 131
(12) 24 36 48 60 72 84 96 108 120 132
Now, I've decided to make it nicer by using std::map and std::array. I get the same output as before, but with the same alignment issues (fixing those could complicate my display function).
Is this an effective use of these containers (and am I using them correctly)? If not, what other containers could be used instead? I'm using std::array instead of std::vector here because I'm keeping the number of columns (array size) at 10.
#include <iostream>
#include <map>
#include <array>
const unsigned NUM_ARR_ELEMS = 10;
std::map<unsigned, std::array<unsigned, NUM_ARR_ELEMS>> getModGrid(unsigned);
void displayModGrid(const std::map<unsigned, std::array<unsigned, NUM_ARR_ELEMS>>&);
int main()
{
std::map<unsigned, std::array<unsigned, NUM_ARR_ELEMS>> modGrid;
unsigned mod;
std::cout << "\n\n> Mod: ";
std::cin >> mod;
std::cout << std::endl << std::endl;
modGrid = getModGrid(mod);
displayModGrid(modGrid);
std::cout << std::endl << std::endl;
std::cin.ignore();
std::cin.get();
}
std::map<unsigned, std::array<unsigned, NUM_ARR_ELEMS>> getModGrid(unsigned mod)
{
std::map<unsigned, std::array<unsigned, NUM_ARR_ELEMS>> modRow;
std::array<unsigned, NUM_ARR_ELEMS> modValues;
for (unsigned modIter = 0; modIter <= mod; ++modIter)
{
for (unsigned arrIter = 0; arrIter < NUM_ARR_ELEMS; ++arrIter)
modValues[arrIter] = modIter + (mod * (arrIter+1));
modRow[modIter] = modValues;
}
return modRow;
}
void displayModGrid(const std::map<unsigned, std::array<unsigned, NUM_ARR_ELEMS>> &modGrid)
{
for (auto rowIter = modGrid.cbegin(); rowIter != modGrid.cend(); ++rowIter)
{
std::cout << " (" << rowIter->first << ") ";
for (auto colIter = rowIter->second.cbegin(); colIter != rowIter->second.cend(); ++colIter)
{
std::cout << *colIter << " ";
}
std::cout << std::endl;
}
}
Answer: After looking through my code output and thinking about the comments, I've decided that a map of arrays isn't best. I can see how a map can still display it in order (first column numbers as the keys), but that's about it. It does make more sense to use a vector of vectors, after deciding that it's okay to let the user control the number of columns calculated and displayed. | {
"domain": "codereview.stackexchange",
"id": 8891,
"tags": "c++, c++11, matrix, stl"
} |
drcsim-nightly/sandia-hand-nightly dependency bug | Question:
When I try to install drcsim-nightly using
sudo apt-get install drcsim-nightly
having already installed gazebo-nightly and sandia-hand-nightly,
I get the following error message:
...
The following packages have unmet dependencies:
drcsim-nightly : Depends: sandia-hand-nightly (>= 5.1.10) but 5.1.10~hg20130325rb995efea5e65-1~precise is to be installed
E: Unable to correct problems, you have held broken packages.
Looks like the system can't figure out that version
5.1.10~hg20130325rb995efea5e65-1~precise is >= 5.1.10
Any suggestions?
Chris
Originally posted by cga on Gazebo Answers with karma: 223 on 2013-03-25
Post score: 0
Original comments
Comment by Stefan Kohlbrecher on 2013-03-26:
Just tried, happens for me too.
Answer:
This is a bug in versioning for drcsim nightly releases.
Debian versioning policy (and Ubuntu's) considers 5.1.10 greater than 5.1.10~hgxxx, so the system is complaining about not having a valid sandia-hand >=5.1.10 version.
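The tilde rule described above can be sketched in a few lines of Python. This is a deliberate simplification (real dpkg comparison also splits digit runs and handles epochs); only the '~' ordering is modelled here:

```python
# Simplified model of Debian's '~' rule: '~' sorts before every other
# character AND before the end of the string, so 5.1.10~anything < 5.1.10.
# (Real dpkg comparison is more involved; this only captures the tilde part.)
def tilde_key(version):
    return [(-1,) if ch == "~" else (0, ord(ch)) for ch in version] + [(0,)]

def debian_less_than(a, b):
    return tilde_key(a) < tilde_key(b)

print(debian_less_than("5.1.10~hg20130325rb995efea5e65-1~precise", "5.1.10"))  # True
print(debian_less_than("5.1.10", "5.1.11"))                                    # True
```

On a Debian/Ubuntu machine you can confirm the real ordering with dpkg --compare-versions '5.1.10~hg1' lt '5.1.10' && echo lower.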
A fix is already in the repo and should make tomorrow nightly build to not suffer from this error.
Workaround:
Download manually drcsim-nightly package from gazebosim.org and install it ignoring sandia-hand dependency:
sudo apt-get install sandia-hand-nightly # to be sure of have it installed
sudo dpkg -i --ignore-depends=sandia-hand-nightly drcsim-nightly_*.deb
Originally posted by Jose Luis Rivero with karma: 1485 on 2013-03-26
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by dcconner on 2013-04-01:
Has anyone gotten nightly build to work since this fix? Both times I tried it screwed up the install, and had to spend time cleaning up to re-install release version.
Comment by cga on 2013-04-01:
Working for me. Install gazebo-nightly, then sandia-hand-nightly (which installs osrf-common-nightly), then drcsim-nightly
Comment by Jose Luis Rivero on 2013-04-01:
For my system (precise with amd64) without any OSRF related package, an apt-get install drcsim-nightly is enough (it will pull gazebo, osrf-common, sandia-hand and drcsim). All fails can be consider bugs and reported in drcsim-release bitbucket repository.
Comment by dcconner on 2013-04-01:
Do you need to remove the current release installs prior to installing nightly?
Comment by cga on 2013-04-01:
Installing xxx-nightly automatically "uninstalls" xxx. I don't know if manually uninstalling xxx and then installing xxx-nightly is an approved procedure.
Comment by cga on 2013-04-01:
By the way, you can't mix drcsim/gazebo installations using apt-get and those where you compile source. Unless you intervene, apt-get installations install to /usr/share, bin, include, lib while compiling by source installs to /usr/local/share, bin, include, lib. Unless you are very careful with your $PATH, these two installations interfere with each other. So no compiling by source if you want to use the nightly versions. | {
"domain": "robotics.stackexchange",
"id": 3159,
"tags": "gazebo"
} |
taylor expansion of scale space function | Question: I see the following expression from http://en.wikipedia.org/wiki/Scale-invariant_feature_transform
The quadratic Taylor expansion of the Difference-of-Gaussian scale-space function, with the candidate keypoint as the origin is
$$D(\textbf{x}) = D + \frac{\partial D^T}{\partial \textbf{x}}\textbf{x} + \frac{1}{2}\textbf{x}^T \frac{\partial^2 D}{\partial \textbf{x}^2} \textbf{x}$$
where D and its derivatives are evaluated at the candidate keypoint and $\textbf x = (x,y,\sigma)$ is the offset from this point.
I'm very confused about this expression. I don't know why $D^T$ appears and don't understand everything. Is there someone to help me?
Answer: The quantity $\frac{\partial D}{\partial \textbf{x}}$ is a vector, since it is the derivative of the scalar function $D(\textbf{x})$ w.r.t. all the elements of $\textbf{x}$. In the formula it is assumed that all vectors are column vectors, so in order to compute the dot product of the derivative $\frac{\partial D}{\partial \textbf{x}}$ and the vector $\textbf{x}$, you need to transpose one of them, which gives you a matrix product row times column (= scalar). The expression $\frac{\partial D^T}{\partial \textbf{x}}$ is simply the transpose of the vector of derivatives. Maybe it would have been clearer to write it as $\left(\frac{\partial D}{\partial \textbf{x}}\right)^T$.
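A tiny numeric illustration of how the transposes collapse each term to a scalar (the numbers below are made up for this answer, not anything from SIFT):

```python
# Toy evaluation of D(x) ~ D + g^T x + (1/2) x^T H x for x = (x, y, sigma).
# g (the gradient dD/dx) is a column 3-vector; transposing it gives a row,
# so g^T x is row * column = scalar. Likewise x^T H x is
# row * matrix * column = scalar.
D0 = 2.0
g = [1.0, 0.0, -1.0]                       # dD/dx at the keypoint
H = [[2.0, 0.0, 0.0],
     [0.0, 2.0, 0.0],
     [0.0, 0.0, 2.0]]                      # d^2 D/dx^2 at the keypoint
x = [0.5, 0.25, 0.0]                       # offset from the keypoint

linear = sum(gi * xi for gi, xi in zip(g, x))                          # g^T x
quad = sum(x[i] * H[i][j] * x[j] for i in range(3) for j in range(3))  # x^T H x
D = D0 + linear + 0.5 * quad
print(D)  # 2.0 + 0.5 + 0.5 * 0.625 = 2.8125
```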
Similarly, for the last term you have a row vector times a matrix (second derivative of $D$ w.r.t. $\textbf{x}$) times a column vector, which again results in a scalar value. | {
"domain": "dsp.stackexchange",
"id": 1041,
"tags": "sift, scale-space, taylor-expansion"
} |
Plot a piecewise-defined function | Question: I would like to plot the following function:
$$
\begin{align}
\Lambda(\delta\tau) &\equiv\ \chi(\delta\tau, 0)
= \frac{1}{T_i} \int_0^{T_i} a(t_0+t')a(t_0+t'+\delta\tau) dt' \\
&= \begin{cases}
1 - \frac{\left|\delta\tau\right|}{\tau_c}, &\left|\delta\tau\right| \le \tau_c(1+\frac{\tau_c}{T_i}) \\
-\frac{\tau_c}{T_i}, &\left|\delta\tau\right| \gt \tau_c(1+\frac{\tau_c}{T_i}) \\
\end{cases}
\end{align}
$$
which represents a simple triangular shape.
There is a conditional statement. My implementation uses a for loop as follows:
def waf_delay(delay_increment):
for d in delay_increment:
if np.abs(d) <= delay_chip*(1+delay_chip/integration_time):
yield 1 - np.abs(d)/delay_chip
else:
yield -delay_chip/integration_time;
integration_time = 1e-3 # seconds
delay_chip = 1/1.023e6 # seconds
x=np.arange(-5.0, 5.0, 0.1)
y=list(waf_delay(x))
plt.plot(x, y)
plt.show()
Is there a more correct way to transform an array based on a condition rather than just looping through it? Instead of having something like this:
def f(x_array):
for x in x_array:
if np.abs(x) <= 3:
yield 1 - x/3
else:
yield 0
x=np.arange(-5.0, 5.0, 0.1)
y=list(f(x))
plt.plot(x, y)
plt.show()
I would like to write something like this:
def f(x):
if np.abs(x) <= 3:
yield 1 - x/3
else:
yield 0
x=np.arange(-5.0, 5.0, 0.1)
plt.plot(x, f(x))
plt.show()
that could take an array.
Answer: There are two ways to solve this problem. The first one is numpy.where, which takes two arrays and chooses from one wherever a condition is true and from the other wherever it is false. This only works if your piecewise function has only two possible states (as is the case here):
def waf_delay(delays):
return np.where(np.abs(delays) <= delay_chip*(1+delay_chip/integration_time),
1 - np.abs(delays)/delay_chip,
-delay_chip/integration_time)
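To gain confidence in the rewrite, you can compare it against the original generator on the same grid; this self-contained check repeats both versions with the constants from the question:

```python
import numpy as np

integration_time = 1e-3            # seconds, from the question
delay_chip = 1 / 1.023e6           # seconds, from the question

def waf_delay_loop(delay_increment):   # the original generator version
    for d in delay_increment:
        if np.abs(d) <= delay_chip * (1 + delay_chip / integration_time):
            yield 1 - np.abs(d) / delay_chip
        else:
            yield -delay_chip / integration_time

def waf_delay(delays):                 # the vectorised rewrite
    return np.where(np.abs(delays) <= delay_chip * (1 + delay_chip / integration_time),
                    1 - np.abs(delays) / delay_chip,
                    -delay_chip / integration_time)

x = np.arange(-5.0, 5.0, 0.1)
assert np.allclose(waf_delay(x), np.fromiter(waf_delay_loop(x), dtype=float))
```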
Another more general possibility is to use numpy.piecewise, but that is probably overkill here:
def f1(x):
return 1 - np.abs(x)/delay_chip
def f2(x):
return -delay_chip/integration_time
cut_off = delay_chip*(1+delay_chip/integration_time)
y = np.piecewise(x, [np.abs(x) <= cut_off, np.abs(x) > cut_off], [f1, f2])
Note that in both cases no for d in delays loop is needed, because all functions used are vectorized (the basic arithmetic operations on numpy arrays are, and so is numpy.abs). | {
"domain": "codereview.stackexchange",
"id": 33268,
"tags": "python, numpy, matplotlib"
} |
What is the computational complexity of the PageRank problem? | Question: I was just wondering what the complexity of the PageRank problem is. A description can be found here: http://en.wikipedia.org/wiki/PageRank . (I am referring to the problem that is solved by the PageRank algorithm, i.e., the question of how one should reasonably distribute "ranking" throughout a graph in a way that is consistent with the model presented in the article I've linked to.)
I've always been curious as to whether or not there is a deterministic polynomial time algorithm that can do as well as the actual PageRank algorithm that Google uses.
It's clear to me that the problem is FNP at worst, as it's easy to verify if a given "arrangement" of rankings for a series of nodes is consistent and valid. I'm not sure beyond that, though. Is it NP-hard? I cannot think of which NP-complete problem would reduce to it.
Thank you,
Philip
Answer: As pointed out on the wikipedia page you cite, PageRank is simply computing the eigenvector corresponding to the maximum eigenvalue of the (modified) adjacency matrix of the web graph. Since this is simple linear algebra, it should definitely be in FP, if not much smaller classes.
Part of the issue with Google's implementation is that the web is so large that its adjacency matrix doesn't fit into the memory of any single (or even any 10,000) computers. So although the complexity of PageRank is technically in FP, the scale at which Google is doing it doesn't really fit into the realm of traditional single-processor complexity. To discuss the complexity of actually computing PageRank on a graph as large as the whole web, you'd at the very least have to consider some sort of distributed computation model. | {
"domain": "cstheory.stackexchange",
"id": 223,
"tags": "ds.algorithms, graph-theory"
} |
What is the scientific name of this green insect/bug? | Question:
(Picture from Internet)
I want to know the scientific name of this insect. They are very small in size and they do not like to stay at one place so taking pictures is hard. Following are some information which will help in identification.
Location: India
Size: Miniscule. About some millimeters
Color: Green. Sometimes a black dot appears on its wings
Survive during the post-Monsoon duration i.e. Late July to October. Hence sometimes referred to as Post-Monsoon bugs.
Attracted to lights. Large number of those swarm around any source of light
Causes a great deal of damage to crops
Does not bite but causes local itching on contact with skin
The name in our native language roughly translates to "green insect" (because of green color)
Answer: From the Google search I did, I guess they are green leafhoppers.
To find more on the leafhopper of the second half, this link should help. Reach the site and search for green leafhoppers in the search bar.
Well, there are a large number of leafhoppers under the family Cicadellidae, and I believe the ones in the picture differ.
If my findings are correct, Cicadella viridis is the scientific name of the leafhopper in the first half, and the second one comes under Nephotettix sp.
Hope it has helped! | {
"domain": "biology.stackexchange",
"id": 11605,
"tags": "species-identification, entomology"
} |
At what text-based tasks are "dumb humans" still better than the best language models? | Question: I ran into this AI-SE question from 5 years ago and I believe that an updated version could be interesting to discuss nowadays: Is the smartest robot more clever than the stupidest human?
Today's best LLMs are displaying a lot of human-like abilities: proficiency in natural languages, ability to code, logical reasoning, role playing and so on. They can even solve CAPTCHAs, design games, answer questions about stories or write new ones: these were the "shortcomings of robots" in 2018, according to the answers to the question I linked.
Question
How do the best LLMs of today compare to a "dumb human"? In what tasks are all normal humans still better than AIs? Is there any test that every able-bodied human would pass, but top LLMs would still fail?
Definitions and clarifications
A "dumb human" is a person without recognized disabilities or obvious problems, who doesn't have particular skills and who is considered not very intelligent (low IQ).
Of course the LLMs available to the public have a number of objective limitations: they can only process text-to-text, they work with tokens rather than characters, context length is just few kilo-tokens, and they have no long-term memory. However a number of open source projects have shown various solutions to these problems, and the non-public version of the commercial LLMs already support much larger context windows, image input and similar features. Observations like "LLMs can't move arms as they don't have it", "LLMs fail to count characters because they're token based", "LLMs can't speak nor listen to speech" are not interesting.
Answer: LLMs seem to be limited at "compositional tasks." Have a look at this paper, in which the authors
investigate the limits of these models across three representative compositional tasks—multi-digit multiplication, logic grid puzzles, and a classic dynamic programming problem. These tasks require breaking problems down into sub-steps and synthesizing these steps into a precise answer.
I don't know if a "dumb human" can do dynamic programming problems, but
humans can solve 3-digit by 3-digit multiplication arithmetic after learning basic calculation rules. Yet, off-the-shelf ChatGPT and GPT4 achieve only 55% and 59% accuracies on this task, respectively.
I gave ChatGPT two tries ("what is 311 times 877" and "what is 513 times 799"), and it got them both wrong.
Another example from that paper is a "zebra" or "einstein" puzzle. Look at page 18 in the paper for an example. (I'll copy it below) It took me a couple minutes to solve. ChatGPT tries its best, but gets it quite wrong.
the puzzle
There are 3 houses (numbered 1 on the left, 3 on the right). Each has a different person in them. They have different characteristics:
Each person has a unique name: peter, eric, arnold
People have different favorite sports: soccer, tennis, basketball
People own different car models: tesla, ford, camry
clues:
The person who owns a Ford is the person who loves tennis.
Arnold is in the third house.
The person who owns a Camry is directly left of the person who owns a Ford.
Eric is the person who owns a Camry.
The person who loves basketball is Eric.
The person who loves tennis and the person who loves soccer are next to each other.
Match the person, favorite sports, and car models to each house.
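As an aside, the search space is tiny (6 x 6 x 6 orderings), so the puzzle can be brute-forced in a few lines of Python; this solver is just my illustration, not part of the paper:

```python
from itertools import permutations

names  = ("peter", "eric", "arnold")
sports = ("soccer", "tennis", "basketball")
cars   = ("tesla", "ford", "camry")

solutions = []
for N in permutations(names):          # N[i] lives in house i+1
    for S in permutations(sports):
        for C in permutations(cars):
            if S[C.index("ford")] != "tennis":                  continue  # clue 1
            if N.index("arnold") != 2:                          continue  # clue 2
            if C.index("camry") + 1 != C.index("ford"):         continue  # clue 3
            if C[N.index("eric")] != "camry":                   continue  # clue 4
            if S[N.index("eric")] != "basketball":              continue  # clue 5
            if abs(S.index("tennis") - S.index("soccer")) != 1: continue  # clue 6
            solutions.append((N, S, C))

assert len(solutions) == 1
print(solutions[0])
# house 1: eric / basketball / camry
# house 2: peter / tennis / ford
# house 3: arnold / soccer / tesla
```

Having the unique solution in hand makes it easy to check any proposed answer.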
ChatGPT's output (for me just now)
GPT's full output
<omitting the output where it "thinks">
Therefore, the matching of people, favorite sports, and car models to each house is as follows:
House 1:
Person: Peter
Favorite sport: Tennis
Car model: Tesla
House 2:
Person: Eric
Favorite sport: Basketball
Car model: Ford
House 3:
Person: Arnold
Favorite sport: Soccer
Car model: Camry
and it's easy to see that some of the constraints are violated (e.g. the first one: Ford goes with Tennis). | {
"domain": "ai.stackexchange",
"id": 3849,
"tags": "human-like, intelligence-testing, reasoning, intelligence"
} |
Subset sum with only two item types | Question: Suppose we have $r$ copies of the integer $a$ and $t$ copies of the integer $b$, and a capacity $C$. We would like to find the maximum sum of the given integers, that is at most $C$.
This is a special case of the subset sum problem. Since there are at most $(r+1)(t+1)$ possible sums, one can just check all of them and find the maximum sum that is at most $C$. But this takes time $O(r t)$, while the size of the input - if it is given in binary - is $\log{(r t a b)}$.
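For concreteness, the naive search described above looks like this (an illustrative sketch, assuming a and b are positive integers; it runs in time O(r t), which is exactly what we would like to beat):

```python
# Naive search over all achievable sums i*a + j*b <= C with
# 0 <= i <= r copies of a and 0 <= j <= t copies of b.
def best_sum(a, r, b, t, C):
    best = 0
    for i in range(r + 1):
        if i * a > C:
            break
        for j in range(t + 1):
            s = i * a + j * b
            if s > C:
                break
            best = max(best, s)
    return best

print(best_sum(3, 5, 7, 4, 20))  # 20: take 2 copies of 3 and 2 copies of 7
```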
Is there an algorithm that solves the problem in time polynomial in the size of the binary representation of the input?
Answer: As kcsquared notes, this is an instance of integer linear programming (ILP) in two dimensions. Lenstra showed that ILP can be solved in polynomial time when the number of dimensions is constant; the special case of two dimensions was apparently proven earlier by Scarf. This proves that your problem can be solved in polynomial time, as you were hoping. | {
"domain": "cs.stackexchange",
"id": 19302,
"tags": "polynomial-time, partitions, subset-sum"
} |
NodeJS APIs folder structure | Question: I am going to have multiple APIs in the routes folder. Should I put them all in the same API file or separate them?
├── app.js
├── src/
│ ├── contants/
│ ├── helpers/
│ ├── models/
│ ├── routes/
| | |___index.js
|___api.js
│ └── libs/
│ ├── backbone/
│ ├── underscore/
│ └── ...
api.js file contains all the APIs
const jwt = require("jsonwebtoken")
const axios = require("axios")
require("express-async-errors")
const bodyParser = require("body-parser")
const fs = require("fs")
const LOLTrackingSystem = require("../methods/onlineGamesTracking/LOLTracking")
const getUserData = require("../methods/leagueOfLegends/getUserData")
const isAuthenticated = require("../helpers/authenticated")
const apiRoute = (api) => {
api.use(bodyParser.json())
api.use(bodyParser.urlencoded({
extended: false
}));
api.post("/api/auth", (req, res) => {
//API Functions
})
api.post("/api/gizmo/memberProfile", isAuthenticated, (req, res) => {
//API Functions
})
api.post("/api/gizmo/memberState/:userId/:host/:state", async (req, res) => {
//API Functions
})
}
module.exports = apiRoute
Is what I am doing is right?
If it's wrong what is the right way to do it?
Answer: There is no right or wrong in this situation; it's a question of what's best for your particular case. If you are going to build many REST API endpoints, it's best to split them into separate files under routes like this so your code can be more maintainable:
│ ├── routes/
| | |___index.js
|___auth.js
|___gizmo.js
- | {
"domain": "codereview.stackexchange",
"id": 37066,
"tags": "javascript, node.js"
} |
Modern OpenGL shader wrapper v2 | Question: This is a fixed up version (following RAII principles) of my previous reviewal. I have moved the construction of the object entirely to the constructor to prevent any bad object states.
The only thing I am slightly concerned about is that if an exception is thrown in the constructor, the destructor will not be called, thus causing shader objects to leak (as glDeleteShader won't be called).
Maybe wrapping the shader handle ID (shader_id) in some sort of POD-wrapper could fix this?
compilation_error.h
#pragma once
#include <stdexcept>
#include <string>
class CompilationError : public std::runtime_error {
public:
CompilationError(const std::string &message, std::string info_log);
const std::string &get_info_log() const;
private:
const std::string info_log;
};
compilation_error.cpp
#include <render/shader/compilation_error.h>
CompilationError::CompilationError(const std::string &message, std::string info_log) :
runtime_error(message), info_log(std::move(info_log)) {}
const std::string &CompilationError::get_info_log() const {
return info_log;
}
shader.h
#pragma once
#include <engine.h>
#include <string>
class Shader {
public:
Shader(GLenum type, const std::string &path);
~Shader();
private:
const GLuint shader_id;
};
shader.cpp
#include <render/shader/shader.h>
#include <render/shader/compilation_error.h>
#include <fstream>
Shader::Shader(const GLenum type, const std::string &path) :
shader_id(glCreateShader(type)) {
if (!shader_id) {
throw std::runtime_error("Unable to create shader");
}
std::ifstream input_stream(path, std::ifstream::in | std::ifstream::ate);
if (!input_stream) {
glDeleteShader(shader_id);
throw std::ifstream::failure("Unable to open shader file");
}
const long source_length = input_stream.tellg();
std::vector<char> source((unsigned long) source_length);
input_stream.seekg(0);
input_stream.read(source.data(), source_length);
const GLchar *sources[] = {source.data()};
const GLint lengths[] = {(GLint) source_length};
glShaderSource(shader_id, 1, sources, lengths);
glCompileShader(shader_id);
GLint compile_status;
glGetShaderiv(shader_id, GL_COMPILE_STATUS, &compile_status);
if (!compile_status) {
GLint log_length;
glGetShaderiv(shader_id, GL_INFO_LOG_LENGTH, &log_length);
std::vector<GLchar> log_output((unsigned long) log_length);
glGetShaderInfoLog(shader_id, log_length, nullptr, log_output.data());
std::string info_log(log_output.data());
if (!info_log.empty() && info_log.back() == '\n') {
info_log.pop_back();
}
glDeleteShader(shader_id);
throw CompilationError("Unable to compile shader", info_log);
}
}
Shader::~Shader() {
glDeleteShader(shader_id);
}
Answer: This looks a lot better! Good job!
Nitty-gritty stuff:
better cleanup in the constructor
You have a lot of the following idiom in the constructor:
if(condition) {
glDeleteShader(shader_id);
throw something;
}
This is a little fragile, since should you make the class more complex, that's a lot of places to refactor. The following would be better:
try {
//most of the constructor in here
...
if(condition) throw something;
...
}
catch(...) {
glDeleteShader(shader_id);
throw;
}
ifstream can throw its own exceptions if you tell it to
std::ifstream input_stream;
input_stream.exceptions( std::ios::failbit );
input_stream.open(path, std::ifstream::in | std::ifstream::ate);
Use iterators when converting a vector to a string
std::string info_log(log_output.data());
vs
std::string info_log(log_output.begin(), log_output.end());
The second one is better because it works whether there is a null terminator or not. It also removes a strlen() call from the code, which is an O(N) operation.
That's actually all I've got. Good job once again. | {
"domain": "codereview.stackexchange",
"id": 27428,
"tags": "c++, c++11, opengl"
} |
Setting up your robot using tf, migration to tf2 | Question:
Hi all,
I am following the tutorial from this link here: http://wiki.ros.org/navigation/Tutorials/RobotSetup/TF
Noticed that it is using tf, however I would like to code it in tf2 seeing I have gone through the tf2 tutorials.
For the broadcaster file I had converted the original tf version which is this:
#include <ros/ros.h>
#include <tf/transform_broadcaster.h>
int main(int argc, char** argv){
ros::init(argc, argv, "robot_tf_publisher");
ros::NodeHandle n;
ros::Rate r(100);
tf::TransformBroadcaster broadcaster;
while(n.ok()){
broadcaster.sendTransform(
tf::StampedTransform(
tf::Transform(tf::Quaternion(0, 0, 0, 1), tf::Vector3(0.1, 0.0, 0.2)),
ros::Time::now(),"base_link", "base_laser"));
r.sleep();
}
}
To the tf2 version, which I wrote up here (not sure if I did it correctly):
#include <ros/ros.h>
#include <tf2/LinearMath/Quaternion.h>
#include <tf2_ros/transform_broadcaster.h>
#include <geometry_msgs/TransformStamped.h>
int main(int argc, char** argv){
ros::init(argc, argv, "robot_tf2_ros_publisher");
ros::NodeHandle n;
ros::Rate r(100);
tf2_ros::TransformBroadcaster broadcaster;
geometry_msgs::TransformStamped transformStamped;
while(n.ok()){
transformStamped.header.stamp = ros::Time::now();
transformStamped.header.frame_id = "base_link";
transformStamped.child_frame_id = "base_laser";
transformStamped.transform.translation.x = 0.1;
transformStamped.transform.translation.y = 0.0;
transformStamped.transform.translation.z = 0.2;
transformStamped.transform.rotation.x = 0;
transformStamped.transform.rotation.y = 0;
transformStamped.transform.rotation.z = 0;
transformStamped.transform.rotation.w = 1;
broadcaster.sendTransform(transformStamped);
r.sleep();
}
}
As for the listener file, I'm not sure how to convert it into the tf2 version :(
The original tf version of the listener file is as follows:
#include <ros/ros.h>
#include <geometry_msgs/PointStamped.h>
#include <tf/transform_listener.h>
void transformPoint(const tf::TransformListener& listener){
//we'll create a point in the base_laser frame that we'd like to transform to the base_link frame
geometry_msgs::PointStamped laser_point;
laser_point.header.frame_id = "base_laser";
//we'll just use the most recent transform available for our simple example
laser_point.header.stamp = ros::Time();
//just an arbitrary point in space
laser_point.point.x = 1.0;
laser_point.point.y = 0.2;
laser_point.point.z = 0.0;
try{
geometry_msgs::PointStamped base_point;
listener.transformPoint("base_link", laser_point, base_point);
ROS_INFO("base_laser: (%.2f, %.2f. %.2f) -----> base_link: (%.2f, %.2f, %.2f) at time %.2f",
laser_point.point.x, laser_point.point.y, laser_point.point.z,
base_point.point.x, base_point.point.y, base_point.point.z, base_point.header.stamp.toSec());
}
catch(tf::TransformException& ex){
ROS_ERROR("Received an exception trying to transform a point from \"base_laser\" to \"base_link\": %s", ex.what());
}
}
int main(int argc, char** argv){
ros::init(argc, argv, "robot_tf_listener");
ros::NodeHandle n;
tf::TransformListener listener(ros::Duration(10));
//we'll transform a point once every second
ros::Timer timer = n.createTimer(ros::Duration(1.0), boost::bind(&transformPoint, boost::ref(listener)));
ros::spin();
}
If anyone would be able to help it would be very much appreciated.
Edit: Followed your suggestions and changed the code. However, I am getting errors :(
Listed the errors below -
/home/gavin/catkin_ws/src/robot_setup_tf/src/tf_listener.cpp: In function ‘void transformPoint(const tf2_ros::TransformListener&)’:
/home/gavin/catkin_ws/src/robot_setup_tf/src/tf_listener.cpp:20:5: error: ‘tfBuffer’ was not declared in this scope
tfBuffer.transformPoint("base_link", laser_point, base_point);
^
/home/gavin/catkin_ws/src/robot_setup_tf/src/tf_listener.cpp:26:18: error: ‘TransformException’ in namespace ‘tf2_ros’ does not name a type
catch(tf2_ros::TransformException& ex){
^
In file included from /opt/ros/kinetic/include/ros/ros.h:40:0,
from /home/gavin/catkin_ws/src/robot_setup_tf/src/tf_listener.cpp:1:
/home/gavin/catkin_ws/src/robot_setup_tf/src/tf_listener.cpp:27:109: error: ‘ex’ was not declared in this scope
on trying to transform a point from \"base_laser\" to \"base_link\": %s", ex.what());
^
/opt/ros/kinetic/include/ros/console.h:346:165: note: in definition of macro ‘ROSCONSOLE_PRINT_AT_LOCATION_WITH_FILTER’
define_location__loc.level_, __FILE__, __LINE__, __ROSCONSOLE_FUNCTION__, __VA_ARGS__
^
/opt/ros/kinetic/include/ros/console.h:379:7: note: in expansion of macro ‘ROSCONSOLE_PRINT_AT_LOCATION’
ROSCONSOLE_PRINT_AT_LOCATION(__VA_ARGS__); \
^
/opt/ros/kinetic/include/ros/console.h:561:35: note: in expansion of macro ‘ROS_LOG_COND’
#define ROS_LOG(level, name, ...) ROS_LOG_COND(true, level, name, __VA_ARGS__)
^
/opt/ros/kinetic/include/rosconsole/macros_generated.h:214:24: note: in expansion of macro ‘ROS_LOG’
#define ROS_ERROR(...) ROS_LOG(::ros::console::levels::Error, ROSCONSOLE_DEFAULT_NAME
^
/home/gavin/catkin_ws/src/robot_setup_tf/src/tf_listener.cpp:27:5: note: in expansion of macro ‘ROS_ERROR’
ROS_ERROR("Received an exception trying to transform a point from \"base_laser\"
^
robot_setup_tf/CMakeFiles/tf_listener.dir/build.make:62: recipe for target 'robot_setup_tf/CMakeFiles/tf_listener.dir/src/tf_listener.cpp.o' failed
make[2]: *** [robot_setup_tf/CMakeFiles/tf_listener.dir/src/tf_listener.cpp.o] Error 1
CMakeFiles/Makefile2:2227: recipe for target 'robot_setup_tf/CMakeFiles/tf_listener.dir/all' failed
make[1]: *** [robot_setup_tf/CMakeFiles/tf_listener.dir/all] Error 2
Makefile:138: recipe for target 'all' failed
make: *** [all] Error 2
Invoking "make -j4 -l4" failed
My edited code looks like so; not sure what I'm doing wrong :( -
#include <ros/ros.h>
#include <geometry_msgs/PointStamped.h>
#include <tf2_ros/transform_listener.h>
void transformPoint(const tf2_ros::TransformListener& tfListener){
//we'll create a point in the base_laser frame that we'd like to transform to the base_link frame
geometry_msgs::PointStamped laser_point;
laser_point.header.frame_id = "base_laser";
//we'll just use the most recent transform available for our simple example
laser_point.header.stamp = ros::Time();
//just an arbitrary point in space
laser_point.point.x = 1.0;
laser_point.point.y = 0.2;
laser_point.point.z = 0.0;
try{
geometry_msgs::PointStamped base_point;
tfBuffer.transformPoint("base_link", laser_point, base_point);
ROS_INFO("base_laser: (%.2f, %.2f. %.2f) -----> base_link: (%.2f, %.2f, %.2f) at time %.2f",
laser_point.point.x, laser_point.point.y, laser_point.point.z,
base_point.point.x, base_point.point.y, base_point.point.z, base_point.header.stamp.toSec());
}
catch(tf2_ros::TransformException& ex){
ROS_ERROR("Received an exception trying to transform a point from \"base_laser\" to \"base_link\": %s", ex.what());
}
}
int main(int argc, char** argv){
ros::init(argc, argv, "robot_tf2_ros_listener");
ros::NodeHandle n;
tf2_ros::Buffer tfBuffer(ros::Duration(10));
tf2_ros::TransformListener tfListener(tfBuffer);
//we'll transform a point once every second
ros::Timer timer = n.createTimer(ros::Duration(1.0), boost::bind(&transformPoint, boost::ref(tfListener)));
ros::spin();
}
Edit: Made the changes that you suggested; there are fewer errors than before, though I still have one more error :\
/home/gavin/catkin_ws/src/robot_setup_tf/src/tf_listener.cpp: In function ‘void transform(const tf2_ros::TransformListener&)’:
/home/gavin/catkin_ws/src/robot_setup_tf/src/tf_listener.cpp:20:14: error: ‘const class tf2_ros::TransformListener’ has no member named ‘transform’
tfBuffer.transform("base_link", laser_point, base_point);
^
robot_setup_tf/CMakeFiles/tf_listener.dir/build.make:62: recipe for target 'robot_setup_tf/CMakeFiles/tf_listener.dir/src/tf_listener.cpp.o' failed
make[2]: *** [robot_setup_tf/CMakeFiles/tf_listener.dir/src/tf_listener.cpp.o] Error 1
CMakeFiles/Makefile2:2227: recipe for target 'robot_setup_tf/CMakeFiles/tf_listener.dir/all' failed
make[1]: *** [robot_setup_tf/CMakeFiles/tf_listener.dir/all] Error 2
Makefile:138: recipe for target 'all' failed
make: *** [all] Error 2
Invoking "make -j4 -l4" failed
My updated code is as follows:
#include <ros/ros.h>
#include <geometry_msgs/PointStamped.h>
#include <tf2_ros/transform_listener.h>
void transform(const tf2_ros::TransformListener& tfBuffer){
//we'll create a point in the base_laser frame that we'd like to transform to the base_link frame
geometry_msgs::PointStamped laser_point;
laser_point.header.frame_id = "base_laser";
//we'll just use the most recent transform available for our simple example
laser_point.header.stamp = ros::Time();
//just an arbitrary point in space
laser_point.point.x = 1.0;
laser_point.point.y = 0.2;
laser_point.point.z = 0.0;
try{
geometry_msgs::PointStamped base_point;
tfBuffer.transform("base_link", laser_point, base_point);
ROS_INFO("base_laser: (%.2f, %.2f. %.2f) -----> base_link: (%.2f, %.2f, %.2f) at time %.2f",
laser_point.point.x, laser_point.point.y, laser_point.point.z,
base_point.point.x, base_point.point.y, base_point.point.z, base_point.header.stamp.toSec());
}
catch(tf2::TransformException& ex){
ROS_ERROR("Received an exception trying to transform a point from \"base_laser\" to \"base_link\": %s", ex.what());
}
}
int main(int argc, char** argv){
ros::init(argc, argv, "robot_tf2_ros_listener");
ros::NodeHandle n;
tf2_ros::Buffer tfBuffer(ros::Duration(10));
tf2_ros::TransformListener tfListener(tfBuffer);
//we'll transform a point once every second
ros::Timer timer = n.createTimer(ros::Duration(1.0), boost::bind(&transform, boost::ref(tfBuffer)));
ros::spin();
}
Edit: Made the change to my code as suggested, though I got a set of errors -
/home/gavin/catkin_ws/src/robot_setup_tf/src/tf_listener.cpp: In function ‘void transform(const tf2_ros::Buffer&)’:
/home/gavin/catkin_ws/src/robot_setup_tf/src/tf_listener.cpp:20:60: error: no matching function for call to ‘tf2_ros::Buffer::transform(const char [10], geometry_msgs::PointStamped&, geometry_msgs::PointStamped&) const’
tfBuffer.transform("base_link", laser_point, base_point);
^
In file included from /opt/ros/kinetic/include/tf2_ros/buffer.h:35:0,
from /opt/ros/kinetic/include/tf2_ros/transform_listener.h:40,
from /home/gavin/catkin_ws/src/robot_setup_tf/src/tf_listener.cpp:3:
/opt/ros/kinetic/include/tf2_ros/buffer_interface.h:123:8: note: candidate: template<class T> T& tf2_ros::BufferInterface::transform(const T&, T&, const string&, ros::Duration) const
T& transform(const T& in, T& out,
^
/opt/ros/kinetic/include/tf2_ros/buffer_interface.h:123:8: note: template argument deduction/substitution failed:
/home/gavin/catkin_ws/src/robot_setup_tf/src/tf_listener.cpp:20:60: note: deduced conflicting types for parameter ‘T’ (‘char [10]’ and ‘geometry_msgs::PointStamped {aka geometry_msgs::PointStamped_<std::allocator<void> >}’)
tfBuffer.transform("base_link", laser_point, base_point);
^
In file included from /opt/ros/kinetic/include/tf2_ros/buffer.h:35:0,
from /opt/ros/kinetic/include/tf2_ros/transform_listener.h:40,
from /home/gavin/catkin_ws/src/robot_setup_tf/src/tf_listener.cpp:3:
/opt/ros/kinetic/include/tf2_ros/buffer_interface.h:143:7: note: candidate: template<class T> T tf2_ros::BufferInterface::transform(const T&, const string&, ros::Duration) const
T transform(const T& in,
^
/opt/ros/kinetic/include/tf2_ros/buffer_interface.h:143:7: note: template argument deduction/substitution failed:
/home/gavin/catkin_ws/src/robot_setup_tf/src/tf_listener.cpp:20:60: note: cannot convert ‘laser_point’ (type ‘geometry_msgs::PointStamped {aka geometry_msgs::PointStamped_<std::allocator<void> >}’) to type ‘const string& {aka const std::__cxx11::basic_string<char>&}’
tfBuffer.transform("base_link", laser_point, base_point);
^
In file included from /opt/ros/kinetic/include/tf2_ros/buffer.h:35:0,
from /opt/ros/kinetic/include/tf2_ros/transform_listener.h:40,
from /home/gavin/catkin_ws/src/robot_setup_tf/src/tf_listener.cpp:3:
/opt/ros/kinetic/include/tf2_ros/buffer_interface.h:168:8: note: candidate: template<class A, class B> B& tf2_ros::BufferInterface::transform(const A&, B&, const string&, ros::Duration) const
B& transform(const A& in, B& out,
^
/opt/ros/kinetic/include/tf2_ros/buffer_interface.h:168:8: note: template argument deduction/substitution failed:
/home/gavin/catkin_ws/src/robot_setup_tf/src/tf_listener.cpp:20:60: note: cannot convert ‘base_point’ (type ‘geometry_msgs::PointStamped {aka geometry_msgs::PointStamped_<std::allocator<void> >}’) to type ‘const string& {aka const std::__cxx11::basic_string<char>&}’
tfBuffer.transform("base_link", laser_point, base_point);
^
In file included from /opt/ros/kinetic/include/tf2_ros/buffer.h:35:0,
from /opt/ros/kinetic/include/tf2_ros/transform_listener.h:40,
from /home/gavin/catkin_ws/src/robot_setup_tf/src/tf_listener.cpp:3:
/opt/ros/kinetic/include/tf2_ros/buffer_interface.h:192:8: note: candidate: template<class T> T& tf2_ros::BufferInterface::transform(const T&, T&, const string&, const ros::Time&, const string&, ros::Duration) const
T& transform(const T& in, T& out,
^
/opt/ros/kinetic/include/tf2_ros/buffer_interface.h:192:8: note: template argument deduction/substitution failed:
/home/gavin/catkin_ws/src/robot_setup_tf/src/tf_listener.cpp:20:60: note: deduced conflicting types for parameter ‘T’ (‘char [10]’ and ‘geometry_msgs::PointStamped {aka geometry_msgs::PointStamped_<std::allocator<void> >}’)
tfBuffer.transform("base_link", laser_point, base_point);
^
In file included from /opt/ros/kinetic/include/tf2_ros/buffer.h:35:0,
from /opt/ros/kinetic/include/tf2_ros/transform_listener.h:40,
from /home/gavin/catkin_ws/src/robot_setup_tf/src/tf_listener.cpp:3:
/opt/ros/kinetic/include/tf2_ros/buffer_interface.h:220:7: note: candidate: template<class T> T tf2_ros::BufferInterface::transform(const T&, const string&, const ros::Time&, const string&, ros::Duration) const
T transform(const T& in,
^
/opt/ros/kinetic/include/tf2_ros/buffer_interface.h:220:7: note: template argument deduction/substitution failed:
/home/gavin/catkin_ws/src/robot_setup_tf/src/tf_listener.cpp:20:60: note: cannot convert ‘laser_point’ (type ‘geometry_msgs::PointStamped {aka geometry_msgs::PointStamped_<std::allocator<void> >}’) to type ‘const string& {aka const std::__cxx11::basic_string<char>&}’
tfBuffer.transform("base_link", laser_point, base_point);
^
In file included from /opt/ros/kinetic/include/tf2_ros/buffer.h:35:0,
from /opt/ros/kinetic/include/tf2_ros/transform_listener.h:40,
from /home/gavin/catkin_ws/src/robot_setup_tf/src/tf_listener.cpp:3:
/opt/ros/kinetic/include/tf2_ros/buffer_interface.h:251:8: note: candidate: template<class A, class B> B& tf2_ros::BufferInterface::transform(const A&, B&, const string&, const ros::Time&, const string&, ros::Duration) const
B& transform(const A& in, B& out,
^
/opt/ros/kinetic/include/tf2_ros/buffer_interface.h:251:8: note: template argument deduction/substitution failed:
/home/gavin/catkin_ws/src/robot_setup_tf/src/tf_listener.cpp:20:60: note: cannot convert ‘base_point’ (type ‘geometry_msgs::PointStamped {aka geometry_msgs::PointStamped_<std::allocator<void> >}’) to type ‘const string& {aka const std::__cxx11::basic_string<char>&}’
tfBuffer.transform("base_link", laser_point, base_point);
^
robot_setup_tf/CMakeFiles/tf_listener.dir/build.make:62: recipe for target 'robot_setup_tf/CMakeFiles/tf_listener.dir/src/tf_listener.cpp.o' failed
make[2]: *** [robot_setup_tf/CMakeFiles/tf_listener.dir/src/tf_listener.cpp.o] Error 1
CMakeFiles/Makefile2:2227: recipe for target 'robot_setup_tf/CMakeFiles/tf_listener.dir/all' failed
make[1]: *** [robot_setup_tf/CMakeFiles/tf_listener.dir/all] Error 2
Makefile:138: recipe for target 'all' failed
make: *** [all] Error 2
Invoking "make -j4 -l4" failed
Edit: The latest set of errors I got after updating the parameters of tfBuffer.transform()
CMakeFiles/tf_listener.dir/src/tf_listener.cpp.o: In function `geometry_msgs::PointStamped_<std::allocator<void> >& tf2_ros::BufferInterface::transform<geometry_msgs::PointStamped_<std::allocator<void> > >(geometry_msgs::PointStamped_<std::allocator<void> > const&, geometry_msgs::PointStamped_<std::allocator<void> >&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, ros::Duration) const':
tf_listener.cpp:(.text._ZNK7tf2_ros15BufferInterface9transformIN13geometry_msgs13PointStamped_ISaIvEEEEERT_RKS6_S7_RKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEN3ros8DurationE[_ZNK7tf2_ros15BufferInterface9transformIN13geometry_msgs13PointStamped_ISaIvEEEEERT_RKS6_S7_RKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEN3ros8DurationE]+0x69): undefined reference to `ros::Time const& tf2::getTimestamp<geometry_msgs::PointStamped_<std::allocator<void> > >(geometry_msgs::PointStamped_<std::allocator<void> > const&)'
tf_listener.cpp:(.text._ZNK7tf2_ros15BufferInterface9transformIN13geometry_msgs13PointStamped_ISaIvEEEEERT_RKS6_S7_RKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEN3ros8DurationE[_ZNK7tf2_ros15BufferInterface9transformIN13geometry_msgs13PointStamped_ISaIvEEEEERT_RKS6_S7_RKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEN3ros8DurationE]+0x7b): undefined reference to `std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const& tf2::getFrameId<geometry_msgs::PointStamped_<std::allocator<void> > >(geometry_msgs::PointStamped_<std::allocator<void> > const&)'
tf_listener.cpp:(.text._ZNK7tf2_ros15BufferInterface9transformIN13geometry_msgs13PointStamped_ISaIvEEEEERT_RKS6_S7_RKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEN3ros8DurationE[_ZNK7tf2_ros15BufferInterface9transformIN13geometry_msgs13PointStamped_ISaIvEEEEERT_RKS6_S7_RKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEN3ros8DurationE]+0xc8): undefined reference to `void tf2::doTransform<geometry_msgs::PointStamped_<std::allocator<void> > >(geometry_msgs::PointStamped_<std::allocator<void> > const&, geometry_msgs::PointStamped_<std::allocator<void> >&, geometry_msgs::TransformStamped_<std::allocator<void> > const&)'
collect2: error: ld returned 1 exit status
robot_setup_tf/CMakeFiles/tf_listener.dir/build.make:117: recipe for target '/home/gavin/catkin_ws/devel/lib/robot_setup_tf/tf_listener' failed
make[2]: *** [/home/gavin/catkin_ws/devel/lib/robot_setup_tf/tf_listener] Error 1
CMakeFiles/Makefile2:2313: recipe for target 'robot_setup_tf/CMakeFiles/tf_listener.dir/all' failed
make[1]: *** [robot_setup_tf/CMakeFiles/tf_listener.dir/all] Error 2
Makefile:138: recipe for target 'all' failed
make: *** [all] Error 2
Invoking "make -j4 -l4" failed
My CMakeLists.txt file is as follows -
cmake_minimum_required(VERSION 2.8.3)
project(robot_setup_tf)
find_package(catkin REQUIRED COMPONENTS
roscpp
tf2
tf2_ros
geometry_msgs
)
catkin_package(
)
include_directories(
${catkin_INCLUDE_DIRS}
)
add_executable(tf_broadcaster src/tf_broadcaster.cpp)
add_executable(tf_listener src/tf_listener.cpp)
target_link_libraries(tf_broadcaster ${catkin_LIBRARIES})
target_link_libraries(tf_listener ${catkin_LIBRARIES})
Originally posted by gavindvs on ROS Answers with karma: 11 on 2018-08-23
Post score: 1
Original comments
Comment by gvdhoorn on 2018-08-24:
@gavindvs: could I ask you to not post answers unless you are answering your own question? To interact with other posters, please use comments. To update us on things that require more characters (ie: code, console logs, etc), please edit your original question. Use the edit button/link.
Comment by PeteBlackerThe3rd on 2018-08-24:
See my update, there are three errors in your code you need to fix.
Comment by gavindvs on 2018-08-24:
@gvdhoorn: ah ok sorry about that
Comment by gavindvs on 2018-08-24:
@PeteBlackerThe3rd: did the updates you suggested, though I am still getting an error :\ Not sure what I'm doing wrong. Edited my original post.
Comment by PeteBlackerThe3rd on 2018-08-25:
Almost there, the type of the parameter passed to the transform function should be tf2_ros::Buffer as below.
void transform(const tf2_ros::Buffer& tfBuffer)
Then I think you'll be there.
Comment by gavindvs on 2018-08-25:
@PeteBlackerThe3rd: made the edit you suggested though got a set of errors, as listed in my latest edit :(
Comment by PeteBlackerThe3rd on 2018-08-26:
That's my mistake this time. The order of the parameters is different to the old transformPoint method. The transform line should be:
tfBuffer.transform(laser_point, base_point, "base_link");
Comment by gavindvs on 2018-08-28:
@PeteBlackerThe3rd changed the parameters, though I got a whole string of errors :\ Listed them in my latest edit.
Comment by gvdhoorn on 2018-08-28:
@gavindvs: these are all linker errors, so the compiler is now happy, but you appear to not be linking against the required libraries. If you can show your CMakeLists.txt perhaps we can help (be sure to remove all the boilerplate comments first though).
Comment by gavindvs on 2018-08-28:
That's great! have added my CMakeLists.txt file above @gvdhoorn
Comment by gvdhoorn on 2018-08-28:
I'm not sure, but I believe you should add tf2_geometry_msgs to the list of Catkin pkgs that you link against.
Answer:
There are four things you'll need to change. The creation of the listener will need to be this:
tf2_ros::Buffer tfBuffer(ros::Duration(10));
tf2_ros::TransformListener tfListener(tfBuffer);
You'll have to pass the tfBuffer object to the transformPoint function instead of the listener object.
The transform point operation can be done using the transform method of the buffer object:
tfBuffer.transform("base_link", laser_point, base_point);
Finally the type of the exception object will need to be changed to:
catch (tf2::TransformException &ex)
...
Hope this helps.
UPDATE :
Three things; if you look carefully, there are a few differences between your new code and what I suggested.
The namespace of TransformException should be tf2 not tf2_ros
You're still passing the listener object to transform point you need to change this to the Buffer object.
The method used to transform the point is now called transform not transformPoint.
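One more note for readers who follow the comment thread to the end: the remaining linker errors (undefined references to tf2::doTransform, tf2::getTimestamp, and tf2::getFrameId) come from missing geometry_msgs support in tf2. A sketch of the final pieces, based on the comments above:

```cpp
// In tf_listener.cpp: this header supplies the doTransform/getTimestamp/getFrameId
// template specializations for geometry_msgs types that the linker complains about.
#include <tf2_geometry_msgs/tf2_geometry_msgs.h>

// ... inside transform(), with the corrected argument order (in, out, target frame):
tfBuffer.transform(laser_point, base_point, "base_link");
```

In addition, add tf2_geometry_msgs next to tf2 and tf2_ros in the find_package(catkin REQUIRED COMPONENTS ...) list in CMakeLists.txt (and to package.xml), as suggested in the comments.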
Originally posted by PeteBlackerThe3rd with karma: 9529 on 2018-08-23
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 31625,
"tags": "ros, transform, ros-kinetic, tf2"
} |
Order of index in Lorentz transform | Question: I am reading Schwartz's "QFT and the standard model". On pg 13 he gives the Lorentz transform of a rotation around the x-axis:
$
\left(
\begin{array}{cccc}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & \cos \theta _x & \sin \theta _x \\
0 & 0 & -\sin \theta _x & \cos \theta _x \\
\end{array}
\right)
$
For a boost along the x-axis he gives:
$
\left(
\begin{array}{cccc}
\cosh \beta _x & \sinh \beta _x & 0 & 0 \\
\sinh \beta _x & \cosh \beta _x & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \\
\end{array}
\right)
$
I believe this corresponds to the Lorentz transform in the form $\Lambda^\alpha{}_\beta$, where the first index is the row and the second is the column. But at the bottom of the page he has
$
V^\mu=\Lambda_\nu^\mu V^\nu.
$
as though the order of the indices on $\Lambda$ doesn't matter. But surely they do matter as $\Lambda^\alpha{}_\beta\not=\Lambda_\beta{}^\alpha$ in general?
Answer: Short answer
His notation is not ambiguous because the expression
$$V^{'\mu} \equiv \Lambda^\mu_\nu V^\nu$$
can only mean summation over the $\nu$ index. Since $\Lambda$ is a representation of the Lorentz group, it is a linear operator, so it can only act on a vector in the usual way that matrices act on vectors. Hence the above is unambiguous.
Longer answer
I'll explain why in general this notation is unambiguous.
The convention is the following: the upper index denotes a vector index and the lower index denotes a dual vector index. That is, for a general rank $(p, q)$ tensor, one would usually write
$$
T^{\mu_1...\mu_p}_{\nu_1...\nu_q} \partial_{\mu_1} \otimes ... \otimes \partial_{\mu_p} \otimes dx^{\nu_1} \otimes ... \otimes dx^{\nu_q}
$$
where $\{\partial_\mu\}$ denotes a basis for your vector space and $\{dx^\nu\}$ denotes a basis for its dual vector space. (The notation I chose is from differential geometry, which is certainly useful when you define a field theory over a general spacetime manifold.)
As a special case, a linear transformation is a rank $(1,1)$ tensor. Namely $\Lambda^\mu_\nu$ unambiguously denotes components of the tensor:
$$\Lambda^\mu_\nu ~\partial_\mu \otimes dx^\nu$$
Acting on a vector $V \equiv V^\tau \partial_\tau$, we get:
$$\Lambda[V] \\
= \Lambda^\mu_\nu ~\partial_\mu \otimes dx^\nu [V^\tau \partial_\tau] \\
= \Lambda^\mu_\nu V^\tau \, dx^\nu(\partial_\tau) ~\partial_\mu \\
= \Lambda^\mu_\nu V^\tau \delta_\tau^\nu ~\partial_\mu \\
= \Lambda^\mu_\nu V^\nu ~\partial_\mu $$
Observe that no horizontal padding in the indices is needed to make these manipulations unambiguous.
"domain": "physics.stackexchange",
"id": 22586,
"tags": "special-relativity, soft-question, notation"
} |
rosdep update problem | Question:
Hello, I'm installing groovy now.
After running "rosdep update" it shows an error as follows:
$ rosdep update
reading in sources list data from /etc/ros/rosdep/sources.list.d
Hit*****************/rosdep/osx-homebrew.yaml
Hit*****************/rosdep/gentoo.yaml
Hit /rosdep/base.yaml
Hit/rosdep/python.yaml
Hit /rosdep/ruby.yaml
Hit/releases/fuerte.yaml
ERROR: error loading sources list:
HTTP Error 404: Not Found
I already installed fuerte and am using it. There was no such problem when I installed fuerte. What do I have to do now?
Thank you.
Originally posted by zieben on ROS Answers with karma: 118 on 2013-04-24
Post score: 2
Original comments
Comment by joq on 2013-04-25:
What does rosdep --version print?
Comment by Beel on 2013-04-29:
On my Ubuntu 12.04 system, loaded and updated yesterday and today, rosdep --version gives 0.10.14. ...a few hours later ...And now, after the repository update mentioned below, the version is 0.10.18, and running "rosdep update" does not give any error messages.
Answer:
You are not using the latest version of that package. Please update "rosdep" - on Ubuntu/Debian using "sudo apt-get update && sudo apt-get install python-rosdep" (at least version 0.10.18).
Rosdep has recently changed and has required a newer version since then (see http://code.ros.org/lurker/message/20130423.025734.7689bd35.en.html).
Originally posted by Dirk Thomas with karma: 16276 on 2013-04-29
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by Beel on 2013-04-29:
This error still occurs with the latest version of rosdep, after doing the updates and installs, a couple of hours ago. I should say, it still occurs after doing those commands to update python-rosdep. I am booting up my Ubuntu 12.04 on my Beagleboard to see - rosdep --version shows, 0.10.14
Comment by tfoote on 2013-04-29:
The current version of rosdep is 0.10.18
Comment by Beel on 2013-04-29:
@tfoote - see comment from ahendrix below. I will update my system tomorrow morning in hopes that it will be fixed. | {
"domain": "robotics.stackexchange",
"id": 13943,
"tags": "rosdep, ros-fuerte, ros-groovy"
} |
Spacing between adjacent planes of a given set in a simple cubic crystal | Question: Today I learned about Miller indices in a cubic crystal, and I learned that adjacent planes $(hkl)$ in a simple cubic crystal are spaced a distance $d_{hkl}$ from each other, with $d_{hkl}$ given by:
$$d_{hkl} = \frac{a}{\sqrt{h^2 + k^2 + l^2}}$$
For example, take the origin at any of the bottom vertices of the cube, with the z axis directed upwards: the $(0 0 2)$ plane cuts horizontally through the middle of the cube, and for this set of planes, $d_{hkl} = \frac{a}{2}$.
This may be a silly question, but if we followed the spacing rule, we would get that the next $(0 0 2)$ plane is the very top horizontal plane of the cube, which, I guess, is not true, since that plane has Miller indices $(0 0 1)$ and not $(0 0 2)$. However, if we considered that $(0 0 2)$ planes are only those passing horizontally through the middle of unit cells, then the distance between them would equal $a$ and not $\frac{a}{2}$ (using imagination and not rules).
I must be missing something very trivial here, but it is only today that I started learning this material, and I am really confused. Thanks for the help.
Answer: When Miller indices differ only by a common factor, the planes have the same orientation. The higher order reflections share some planes with the lower order reflections. In the case mentioned, every second plane of the (2 0 0) planes coincides with the (1 0 0) planes, whose spacing is double that of the former.
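As a concrete check with the spacing formula from the question:
$$d_{100} = \frac{a}{\sqrt{1^2+0^2+0^2}} = a, \qquad d_{200} = \frac{a}{\sqrt{2^2+0^2+0^2}} = \frac{a}{2}$$
so the (2 0 0) family is twice as dense: every second (2 0 0) plane coincides with a (1 0 0) plane, and the remaining ones pass through the midpoints of the cells.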
"domain": "chemistry.stackexchange",
"id": 13222,
"tags": "inorganic-chemistry, crystal-structure, miller-indices"
} |
Recursive flattening of Swift sequences - an overly complicated approach | Question: I recently read and answered Martin R's Recursive flattening of Swift sequences and continued to play around with the code until I arrived at something that was both pretty cool and possibly an abomination. So I figured I'd come back to CR with it and ask others' opinions.
If you want the full context, you should refer to Martin's question, but I'll reiterate the essential parts here:
The function sequentialFlatten(_:children:) takes a sequence and a function that maps an element of that sequence into a sequence of the same type, and lazily returns an AnySequence with all elements that can be generated by repeatedly mapping elements into sequences. One use case of this is getting a list of all subviews of a view:
let someView : UIView = ...
let views = sequentialFlatten([someView], children: { $0.subviews as! [UIView] })
What I've done in this approach is use a more functional style to implement this function, using map and reduce. To achieve this, I've implemented the + operator for Generators, made a generic struct to make it possible to have local lazy variables, and wrapped a whole bunch of stuff into AnyGenerators.
And therein lies the problem: I feel perhaps this bit of code introduces too many new concepts for not much gain. I'll first post the full code and then go through some of my concerns.
func +<G: GeneratorType>(lhs: G, rhs: G) -> AnyGenerator<G.Element> {
var leftGen = lhs
var leftEmpty = false
var rightGen = rhs
var rightEmpty = false
return AnyGenerator {
if !leftEmpty, let elem = leftGen.next() {
return elem
}
leftEmpty = true
if !rightEmpty, let elem = rightGen.next() {
return elem
}
rightEmpty = true
return nil
}
}
struct Lazy<E> {
private let get: () -> E
lazy var val: E = self.get()
init(@autoclosure(escaping) _ getter: () -> E) {
get = getter
}
}
func sequentialFlatten<S : SequenceType>(seq : S, children : S.Generator.Element -> S) -> AnySequence<S.Generator.Element> {
return AnySequence {
() -> AnyGenerator<S.Generator.Element> in
seq.map {
let firstGen = AnyGenerator([$0].generate())
var secondGen = Lazy(sequentialFlatten(children($0), children: children).generate())
return firstGen + AnyGenerator {
secondGen.val.next()
}
}.reduce(AnyGenerator { nil }, combine: +)
}
}
let gen = sequentialFlatten([0], children: { n in n < 50 ? [2*n+1, 2*n+2] : []})
for e in gen {
print(e)
}
Right off the bat, the implementation of the + operator doesn't sit right with me. It's overly verbose. At first I had it implemented as:
func +<G: GeneratorType>(lhs: G, rhs: G) -> AnyGenerator<G.Element> {
var leftGen = lhs
var rightGen = rhs
return AnyGenerator {
leftGen.next() ?? rightGen.next()
}
}
which is worlds nicer, but was a nightmare performance-wise. The problem was that, with complicated trees, all of the generators were gone through, because the + operator relied on the two generators both returning nil to return nil itself. I'd love to see a version of this that is more elegant without losing too much in the way of performance.
I'm worried there might be something like my Lazy struct already available and I'm reinventing the wheel.
I'm also worried about all the AnyGenerators I'm throwing around left and right; it seemed necessary, because I can't use my + operator on two different types of Generators. I have tried to rewrite the operator and succeeded, but this caused Swift problems with type inference in the reduce, as shown here:
func +<G1: GeneratorType, G2: GeneratorType where G1.Element == G2.Element>(lhs: G1, rhs: G2) -> AnyGenerator<G1.Element> {
var leftGen = lhs
var leftEmpty = false
var rightGen = rhs
var rightEmpty = false
return AnyGenerator {
if !leftEmpty, let elem = leftGen.next() {
return elem
}
leftEmpty = true
if !rightEmpty, let elem = rightGen.next() {
return elem
}
rightEmpty = true
return nil
}
}
Some general concerns:
Is the performance okay? (speed and memory usage)
Is the code Swifty enough?
Answer: That is an interesting approach. There are some simplifications and
performance improvements possible. To measure the performance I have used
the following simple code:
let d1 = NSDate()
let gen = sequentialFlatten([0], children: { n in n < 100_000 ? [2*n+1, 2*n+2] : []})
var c = 0
for _ in gen { c += 1 }
let d2 = NSDate()
print(c, d2.timeIntervalSinceDate(d1))
and with your sequentialFlatten method, that takes about 3.1 sec
on a 2.3 GHz MacBook Pro (compiled in Release mode).
First note that the explicit parameter list in the closure
which is passed to AnySequence { } is not needed; it can be
inferred:
func sequentialFlatten<S : SequenceType>(seq : S, children : S.Generator.Element -> S) -> AnySequence<S.Generator.Element> {
return AnySequence {
seq.map {
let firstGen = AnyGenerator([$0].generate())
var secondGen = Lazy(sequentialFlatten(children($0), children: children).generate())
return firstGen + AnyGenerator {
secondGen.val.next()
}
}.reduce(AnyGenerator { nil }, combine: +)
}
}
Next, the Lazy wrapping seems unnecessary to me. Perhaps I am overlooking something, but it works just as well without.
This also makes one AnyGenerator wrapping obsolete:
func sequentialFlatten<S : SequenceType>(seq : S, children : S.Generator.Element -> S) -> AnySequence<S.Generator.Element> {
return AnySequence {
seq.map {
let firstGen = AnyGenerator([$0].generate())
let secondGen = sequentialFlatten(children($0), children: children).generate()
return firstGen + secondGen
}.reduce(AnyGenerator { nil }, combine: +)
}
}
The execution time is now 2.5 sec.
Your more general
func +<G1: GeneratorType, G2: GeneratorType where G1.Element == G2.Element>(lhs: G1, rhs: G2) -> AnyGenerator<G1.Element>
operator can be used, but we have to assist the compiler with
an additional parameter list in the closure passed to seq.map { }:
func sequentialFlatten<S : SequenceType>(seq : S, children : S.Generator.Element -> S) -> AnySequence<S.Generator.Element> {
return AnySequence {
seq.map { elem -> AnyGenerator<S.Generator.Element> in
let firstGen = AnyGenerator([elem].generate())
let secondGen = sequentialFlatten(children(elem), children: children).generate()
return firstGen + secondGen
}.reduce(AnyGenerator { nil }, combine: +)
}
}
This allows us to use GeneratorOfOne() as the first generator
(and get rid of another AnyGenerator wrapping):
func sequentialFlatten<S : SequenceType>(seq : S, children : S.Generator.Element -> S) -> AnySequence<S.Generator.Element> {
return AnySequence {
seq.map { elem -> AnyGenerator<S.Generator.Element> in
let firstGen = GeneratorOfOne(elem)
let secondGen = sequentialFlatten(children(elem), children: children).generate()
return firstGen + secondGen
}.reduce(AnyGenerator { nil }, combine: +)
}
}
which reduces the execution time to 2.1 sec.
Your approach
func +<G: GeneratorType>(lhs: G, rhs: G) -> AnyGenerator<G.Element> {
var leftGen = lhs
var rightGen = rhs
return AnyGenerator {
leftGen.next() ?? rightGen.next()
}
}
is actually invalid, as the GeneratorType documentation states
for the next() method:
/// - Requires: `..., and no preceding call to `self.next()`
/// has returned `nil`.
So once the left generator is exhausted, we must not call its
next() method again. (Even if many generators tolerate that
and return nil again.)
However, using the ideas from https://stackoverflow.com/a/37665583/1187415
and http://ericasadun.com/2016/06/06/sneaky-swift-tricks-the-fake-boolean/
the + operator can be implemented as a sequence of
nil-coalescing operators, where the purpose of the middle closure
expression is to execute a statement if the previous expression "failed":
func +<G1: GeneratorType, G2: GeneratorType where G1.Element == G2.Element>(lhs: G1, rhs: G2) -> AnyGenerator<G1.Element> {
var leftGen: G1? = lhs
var rightGen = rhs
return AnyGenerator {
return leftGen?.next()
?? { _ -> G1.Element? in leftGen = nil; return nil }()
?? rightGen.next()
}
}
Interestingly, this is a bit faster: 1.9 sec.
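This never-advance-an-exhausted-generator contract is something Python's for-loop gives for free: each loop stops at its first StopIteration and never touches that iterator again. A minimal stdlib sketch (my own, just to illustrate the contract) of the equivalent concatenation:

```python
def concat(left, right):
    """Yield everything from `left`, then everything from `right`,
    without ever advancing an already-exhausted iterator."""
    for x in left:
        yield x
    for x in right:
        yield x

merged = list(concat(iter([1, 2]), iter([3, 4])))
```

In real Python code `itertools.chain` does the same thing and would normally be preferred.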
More improvements might be possible but that is what I have so far.
However, the recursive method from your answer https://codereview.stackexchange.com/a/131527/35991
is still much faster, it takes only 0.9 sec. | {
"domain": "codereview.stackexchange",
"id": 20404,
"tags": "recursion, swift, generics, generator, overloading"
} |
Why are ROS commands not recognised when I open a new terminal? | Question:
I know I'm a complete noob and it will be something stupid, but I've been going through the tutorials trying to get ROS installed and set up, and I've got to ROS\Tutorials\UnderstandingNodes, where one of the steps says to run the roscore command (which runs until you stop it, right?) and run something else in a new terminal while it's running. Every time I open a new terminal it won't recognise any ROS commands.
rosnode: command not found
roscd: command not found
etc...
I'm using Ubuntu 12.04 and I think I've tried to install both ROS Groovy and Hydro. Not going to lie, I have absolutely zero idea what I'm doing here...
Cheers.
Originally posted by fjleishman on ROS Answers with karma: 1 on 2013-10-16
Post score: 0
Answer:
It sounds like setup.bash hasn't been sourced in the new terminal window. When you open a new window try executing this command:
source /opt/ros/hydro/setup.bash
I personally include that in my .bashrc file, which will do that automatically every time you open a new terminal, however some people don't like that approach.
Originally posted by skiesel with karma: 549 on 2013-10-16
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by fjleishman on 2013-10-16:
Perfect, Thank you so much! I think I'll go and add it to my .bashrc file to save me from forgetting every time. | {
"domain": "robotics.stackexchange",
"id": 15888,
"tags": "ros"
} |
Does the potential energy related to a particle determine its rest mass? | Question: Would it be possible to determine the rest mass of a particle by computing the potential energy related to the presence (existence) of the particle, if this potential energy could be determined accurately enough?
I noticed from the answers to a recent question that I always assumed this to be true, without even thinking about it. However, it occurred to me that this concept was at least unfamiliar to the people who answered and commented on that question, and that it's even unclear whether this concept is true or meaningful at all.
Let me explain this concept for an idealized situation. Consider an idealized classical spherical particle with a charge $q$ and a radius $r$ at the origin. Assume that the particle generates an electrostatic field identical to the one of a point charge $q$ in the region outside of radius $r$ and vanishing inside the region of radius $r$. Now let's use a point charge $-q$ and move it to the origin in order to cancel this field in the region outside of radius $r$. Moving the point charge to the origin will generate a certain amount of energy, and that would be the energy which I mean by the potential energy related to the presence (existence) of this idealized classical spherical particle.
I'm well aware that really computing the potential energy related to the presence (existence) of any real particle is not practically feasible for a variety of reasons, but that never worried me with respect to this concept. What worries me now is whether this notion of potential energy is even well defined at all, and even if it is, whether it really accounts for the entire rest mass (not explained by other sources of kinetic, internal or potential energy) of a particle. After all, the rest mass of a particle might simply be greater than the mass explained by any sort of potential energy.
Answer: The answer is ultimately no, but this is a reasonable idea, although old. This idea was floating around in the late 19th century, that the mass of the electron is due to the energy in the field around the electron.
The concept of potential energy is refined in field theories to field energy. The fields have energy, and this energy is identified with the potential energy of a mechanical system, so that if you lift a brick up, the potential energy of the brick is contained in the gravitational field of the brick and the Earth together.
This is important, because unlike kinetic energy, it is difficult to say where the potential energy is. If you lift a brick, is the potential energy in the brick? In the Earth? In Newton's mechanics, the question is meaningless both because things go instantaneously to different places, and also because energy is a global quantity with no way to measure the location. But in relativistic physics, the energy gravitates, and the gravitational field produced by energy requires that you know where this energy is located.
The upshot of all this is that potential energy is field energy, and you are asking if all the mass-energy of a particle is due to the fields around it.
This model has a problem if you think of it purely electromagnetically. Using a model where the electron is a ball of charge, and all the mass is electromagnetic field energy, you would derive, along with Poincaré, Abraham and others, that the total mass is equal to 4/3 of E/c^2. The reason you don't get the right relativistic relation is the stresses you need to keep the ball of charge from exploding. The correct relation really needs relativity, and then you can't determine if the mass is all field.
The process of renormalization in quantum field theory tells you that part of the mass of the electron is due to the mass of the field it carries, but there are two regimes now. There is a long-distance regime, much longer than the Compton wavelength of the electron, where you get a contribution to the mass from the electric field which blows up as the reciprocal of the electron radius, and then there is the region inside the Compton wavelength, where you get the QED mass correction from electrons fluctuating into positrons, which softens the blowup to a log. The Compton wavelength of the electron is 137 times bigger than the classical electron radius, so even with a Planck scale cutoff, not all the mass of the electron is field, because the blow-up in field energy is so slow at high energy.
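As a numerical aside (my own check, not part of the original answer): the factor of 137 here is the inverse fine-structure constant, i.e. the ratio of the reduced Compton wavelength hbar/mc to the classical electron radius e^2/(4 pi eps0 m c^2):

```python
import math

# CODATA 2018 values, assumed here for illustration
hbar = 1.054571817e-34    # J*s
c = 2.99792458e8          # m/s
m_e = 9.1093837015e-31    # kg
e = 1.602176634e-19       # C
eps0 = 8.8541878128e-12   # F/m

r_classical = e**2 / (4 * math.pi * eps0 * m_e * c**2)  # ~2.82e-15 m
lambda_c = hbar / (m_e * c)     # reduced Compton wavelength, ~3.86e-13 m
ratio = lambda_c / r_classical  # = 1/alpha, about 137.04
```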
So in quantum field theory, the answer is no--- the field energy is not the entire mass of the particle. But in another sense it is yes, because if you include the electron field too, then the total mass of the electron is the mass in the electron field plus the electromagnetic field.
Within string theory, you can formulate the question differently--- is there a measure of a field at infinity which will tell you the mass of the particle? In this case, it is the gravitational field, so that the far-away gravitational field tells you the mass.
But you probably want to know--- is the mass due to the combination of gravitational and electromagnetic field together? In this sense, since this is a classical question, it is best to think in classical GR.
If you have a charged black hole, there is a contribution to the mass of the black hole from the field outside, and a contribution from the black hole itself. As you increase the charge of the black hole, there comes a point where the charge is equal to the mass, where the entire energy of the system is due to the external fields (gravitational and electromagnetic together), and the black hole horizon becomes extremal. The extremal limit of black holes can be thought of as a realization of this idea, that all the mass is due to the fields.
Within string theory, the objects made out of strings and branes are extremal black holes in the classical limit. So within string theory, although it is highly quantum, you can say the idea that all the mass-energy is field energy is realized. This is not very great in giving you what the mass should be, because in the cases of interest, you are finding particles which are massless, so that all their energy is the energy in infinitely boosted fields. But you can take comfort in the fact that this is just a quantum regime of a system where the macroscopic classical limit of the particles are classical gravitational systems where your idea is correct. | {
"domain": "physics.stackexchange",
"id": 4210,
"tags": "special-relativity, energy, mass, potential"
} |
Space-time tradeoff and the best algorithm | Question: Consider some language $L$ such that:
$L \in DTIME(O(f(n))) \cap DSPACE(O(g(n)))$
and so that
$L \not\in DTIME(o(f(n))) \cup DSPACE(o(g(n)))$
In other words, the fastest machine $M$ computes $L$ in time $O(f(n))$ and the most space efficient machine $M'$ computes $L$ while using space $O(g(n))$.
What can be said about the space efficiency of M or the time efficiency of M'? Or more precisely, if $\mathbb{M}_T$ is the set of all machines that compute $L$ in $O(f(n))$ then what can we say about the most space efficient machine in $\mathbb{M}_T$? What about the same thing for the obvious space version: $\mathbb{M}_S$.
Alternatively, can $f(n)$ and $g(n)$ be used to define some good space-time tradeoffs? Under what conditions is $TS \in o(f(n)g(n))$ or more generally for some space-time tradeoff $h(T,S)$ under what conditions is $h(T,S) \in h(o(f(n)),o(g(n)))$.
Answer: The prototypical f and g here would probably be poly-time and polylog space. The interesting problem here is connectivity (in directed graphs) which can be solved in polynomial time (using linear space) or in polylog space (using super-polynomial time). It is a famous open problem whether it can be solved in TIME-SPACE(poly,polylog), a class known as SC.
I.e. your question is a well-known open problem. I don't think that anything non-trivial is known here. | {
"domain": "cstheory.stackexchange",
"id": 1287,
"tags": "cc.complexity-theory, ds.algorithms, open-problem, space-time-tradeoff"
} |
Hydro - groovy - indigo | Question:
When I run rosdep update, it says:
reading in sources list data from /etc/ros/rosdep/sources.list.d
Hit https://github.com/ros/rosdistro/raw/master/rosdep/osx-homebrew.yaml
Hit https://github.com/ros/rosdistro/raw/master/rosdep/gentoo.yaml
Hit https://github.com/ros/rosdistro/raw/master/rosdep/base.yaml
Hit https://github.com/ros/rosdistro/raw/master/rosdep/python.yaml
Hit https://github.com/ros/rosdistro/raw/master/rosdep/ruby.yaml
Hit https://github.com/ros/rosdistro/raw/master/releases/fuerte.yaml
Ignore legacy gbpdistro "groovy"
Ignore legacy gbpdistro "hydro"
Query rosdistro index https://raw.github.com/ros/rosdistro/master/index.yaml
Add distro "groovy"
Add distro "hydro"
Add distro "indigo"
updated cache in /home/lempereur/.ros/rosdep/sources.cache
Is this an error, that it adds distro "groovy", "hydro", and "indigo"?
I'm using groovy (Ubuntu 12.04)
Originally posted by Moda on ROS Answers with karma: 133 on 2014-07-23
Post score: 0
Answer:
No, it's not an error; I'm using Hydro and I get the same output as you, so don't worry.
They are not installed on your system; rosdep just knows about the other distributions.
Originally posted by ROSkinect with karma: 751 on 2014-07-23
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by Moda on 2014-07-23:
Thank you!!! | {
"domain": "robotics.stackexchange",
"id": 18732,
"tags": "rosdep, ros-groovy, ros-indigo, ros-hydro"
} |
Where does this support reaction come from? | Question: I am looking at part a of the following question,
and I cannot figure out where this support reaction N is coming from. The solution is as follows.
The question and diagram make no mention of N. My understanding of the question is that there are three forces acting on the object: gravity and the two tensile forces from the cable. Where does N come from?
Answer: N is just the normal reaction from the smooth surface (the shaded thing in Figure 3) that the particle rests on. The pulleys aren't directly above the particle, so it's leaning on the surface. This is also stated in part (c) of the question.
"domain": "engineering.stackexchange",
"id": 4877,
"tags": "statics"
} |
Constructing a pure state in Bloch sphere using 3 gates | Question: We have the following 3 gates:
$$
H = \dfrac{1}{\sqrt{2}}\begin{bmatrix}1 & 1 \\ 1 & -1 \end{bmatrix}
$$
$$
R(\varphi) = \begin{bmatrix}1 & 0 \\ 0 & e^{-i\varphi} \end{bmatrix}
$$
$$
R(\psi) = \begin{bmatrix}1 & 0 \\ 0 & e^{i\psi} \end{bmatrix}
$$
and we want to construct a one-qubit circuit that produces the final state
$$
|\Xi\rangle = \cos {\varphi\over{2}} |0\rangle + e^{i\psi}\sin {\varphi\over{2}} |1\rangle
$$
I do not understand how a factor of $\cos {\varphi\over{2}}$ can appear in front of $|0\rangle$. Can someone help me?
Answer: Starting from $|0\rangle$: if you 'mix' the amplitudes with $H$, then rotate by $R(\phi)$, and then 'unmix' using $H$ again, you'll have transferred the phase $\phi$ to the amplitude of $|0\rangle$, i.e. starting with $H|0\rangle$:
$$|0\rangle\overset{H}{\to}\frac1{\sqrt{2}}\Bigl(|0\rangle + |1\rangle\Bigr)$$ then you rotate in $Z$ by $\phi$:
$$\frac1{\sqrt{2}}\Bigl(|0\rangle + |1\rangle\Bigr)\overset{R(\phi)}{\to}\frac1{\sqrt{2}}\Bigl(|0\rangle + e^{-i\phi}|1\rangle\Bigr)$$
then H again:
$$\frac1{\sqrt{2}}\Bigl(|0\rangle + e^{-i\phi}|1\rangle\Bigr)\overset{H}{\to}\frac12\Bigl((1+e^{-i\phi})|0\rangle + (1-e^{-i\phi})|1\rangle\Bigr)$$
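These three steps can be checked numerically; the sketch below (my own, stdlib only) applies $H$, then $R(\phi)$, then $H$ to $|0\rangle$ and confirms that the resulting amplitudes come out as $\cos(\phi/2)$ and $\sin(\phi/2)$:

```python
import cmath
import math

def apply(gate, state):
    """Multiply a 2x2 gate by a 2-component state vector."""
    return [gate[0][0] * state[0] + gate[0][1] * state[1],
            gate[1][0] * state[0] + gate[1][1] * state[1]]

phi = 0.7
s = 1 / math.sqrt(2)
H = [[s, s], [s, -s]]
R = [[1, 0], [0, cmath.exp(-1j * phi)]]

# |0>  ->  H  ->  R(phi)  ->  H
state = apply(H, apply(R, apply(H, [1, 0])))

amp0 = abs(state[0])   # should be cos(phi/2)
amp1 = abs(state[1])   # should be sin(phi/2)
rel_phase = (state[1] / amp1) / (state[0] / amp0)  # should be i = e^{i*pi/2}
```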
Now the squared amplitude of $|0\rangle$ is
\begin{align}
\frac14|1 + e^{-i\phi}|^2 &= \frac14\Bigl(1+\cos(\phi)-i\sin(\phi)\Bigr)
\Bigl(1+\cos(\phi)+i\sin(\phi)\Bigr)\\
&= \frac12\Bigl(\cos(\phi) + 1\Bigr)\\
&= \cos^2(\frac{\phi}2)
\end{align}
and similarly the squared amplitude of $|1\rangle$ is $\sin^2(\frac{\phi}2)$
so you can write this state as $$\cos\bigl(\frac{\phi}2\bigr)|0\rangle + e^{i\theta}\sin\bigl(\frac{\phi}2\bigr)|1\rangle$$
where $e^{i\theta}$ is their relative phase which ends up being $i = e^{i\frac{\pi}2}$. | {
"domain": "quantumcomputing.stackexchange",
"id": 3716,
"tags": "circuit-construction, quantum-circuit, hadamard"
} |
How do Physicists View Position? | Question: I started thinking about what it means to know the position of something. That thought lead me down this weird path that now makes me wonder if something can actually ever have a position at all.
For example, I have never stayed in a position. I have never stayed in the same place. My feet, the earth, the sun and the galaxy have always been moving me. This is true not just for me, but for everything: electrons, photons, etc. Then I thought to myself, "Well, you actually were in positions, but you were just always moving through them." Okay, so position is just a snapshot of where something is in time and space.
But when I did research on position on this site and watched a few videos as well, like this one: https://www.youtube.com/watch?v=uwEgbSU6omI, I never found info suggesting that a position is always fleeting. The definitions seemed to say that positions could be static.
Because position is connected to space and time I suspect that relativity comes into to play.
So my question is this: Can something have a position or is position always changing?
Note: There were no tags that matched the question.
Answer: All position is relative, and it's really just a label. You arbitrarily choose an origin and coordinate system (called an atlas) and a way to measure distances (called a metric), and then you use that to define your position relative to the origin. One of the fundamental rules when constructing theories in physics is that they shouldn't depend on how you define position, because it's entirely arbitrary. | {
"domain": "physics.stackexchange",
"id": 40827,
"tags": "spacetime, coordinate-systems, inertial-frames"
} |
Array slice type in Java | Question: Edit: See the next iteration at Array slice type in Java - follow-up.
I have this "slice" type for managing array subranges. It is kind of the same thing as Python's slice notation, yet I did not add negative indexing (since Java's Lists don't do it). So, what do you think?
Slice.java:
package net.coderodde.util;
import java.util.Iterator;
import java.util.NoSuchElementException;
import java.util.Scanner;
/**
* This utility class implements <b>cyclic</b> array slices. If you move the
* slice to the right long enough, the "head" of the slice will wrap around and
* emerge at the beginning of the array being sliced. Same applies to movement
* to the left.
*
* @author Rodion "rodde" Efremov
* @param <E> the actual array component type.
*/
public class Slice<E> implements Iterable<E> {
/**
* The actual array being sliced.
*/
private final E[] array;
/**
* The starting index of this slice within <code>array</code>.
*/
private int fromIndex;
/**
* The size of this slice.
*/
private int size;
/**
* Constructs a slice representing the entire array.
*
* @param array the array being sliced.
*/
public Slice(final E[] array) {
this(array, 0, array.length);
}
/**
* Constructs a slice representing everything starting at index
* <code>fromIndex</code>.
*
* @param array the array being sliced.
* @param fromIndex the starting index.
*/
public Slice(final E[] array, final int fromIndex) {
this(array, fromIndex, array.length);
}
/**
* Constructs a new slice for <code>array</code> starting at
* <code>fromIndex</code> and ending at <code>toIndex - 1</code>.
*
* @param array the array being sliced.
* @param fromIndex the starting (inclusive) index.
* @param toIndex the ending (exclusive) index.
*/
public Slice(final E[] array,
final int fromIndex,
final int toIndex) {
checkArray(array);
checkIndexForArray(array, fromIndex);
checkIndexForArray(array, toIndex);
this.array = array;
this.fromIndex = fromIndex;
// 100 10 9
this.size = fromIndex <= toIndex ?
toIndex - fromIndex :
array.length - fromIndex + toIndex;
}
/**
* Returns <code>true</code> if this slice is empty.
*
* @return a boolean value.
*/
public boolean isEmpty() {
return size == 0;
}
/**
* Returns the current size of this slice.
*
* @return the current size.
*/
public int size() {
return size;
}
/**
* Accesses an element. The indices wrap around to the beginning of the
* underlying array.
*
* @param index the target index element.
* @return the element at the specified index.
*/
public E get(final int index) {
checkIndex(index);
return array[(fromIndex + index) % array.length];
}
/**
* Sets a new value at slice index <code>index</code>.
*
* @param index the target component index.
* @param value the new value to set.
*/
public void set(final int index, final E value) {
checkIndex(index);
array[(fromIndex + index) % array.length] = value;
}
/**
* Moves this slice <code>steps</code> steps to the left. If the head of this
* slice, while moving to the left, leaves the beginning of the underlying
* array, it reappears at the right end of the array.
*
* @param steps the amount of steps to move.
*/
public void moveLeft(final int steps) {
if (array.length == 0) {
return;
}
fromIndex -= steps % array.length;
if (fromIndex < 0) {
fromIndex += array.length;
}
}
/**
* Moves this slice one step to the left.
*/
public void moveLeft() {
moveLeft(1);
}
/**
* Moves this slice <code>steps</code> amount of steps to the right. If the
* tail of this slice, while moving to the right, leaves the tail of the
* underlying array, it reappears at the beginning of the array.
*
* @param steps the amount of steps to move.
*/
public void moveRight(final int steps) {
if (array.length == 0) {
return;
}
fromIndex += steps % array.length;
if (fromIndex >= array.length) {
fromIndex -= array.length;
}
}
/**
* Moves this slice one step to the right.
*/
public void moveRight() {
moveRight(1);
}
/**
* Expands the front of this slice by <code>amount</code> array
* components. This slice may "cycle" the same way as with motion to the left
* or right.
*
* @param amount the expansion length.
*/
public void expandFront(final int amount) {
checkNotNegative(amount);
final int actualAmount = Math.min(amount, array.length - size());
fromIndex -= actualAmount;
size += actualAmount;
if (fromIndex < 0) {
fromIndex += array.length;
}
}
/**
* Expands the front of this slice by one array component.
*/
public void expandFront() {
expandFront(1);
}
/**
* Contracts the front of this slice by <code>amount</code> array
* components.
*
* @param amount the contraction length.
*/
public void contractFront(final int amount) {
checkNotNegative(amount);
final int actualAmount = Math.min(amount, size());
fromIndex += actualAmount;
size -= actualAmount;
if (fromIndex >= array.length) {
fromIndex -= array.length;
}
}
/**
* Contracts the front of this slice by one array component.
*/
public void contractFront() {
contractFront(1);
}
/**
* Expands the back of this slice by <code>amount</code> array
* components.
*
* @param amount the expansion length.
*/
public void expandBack(final int amount) {
checkNotNegative(amount);
size += Math.min(amount, array.length - size());
}
/**
* Expands the back of this slice by one array component.
*/
public void expandBack() {
expandBack(1);
}
/**
* Contracts the back of this slice by <code>amount</code> array components.
*
* @param amount the contraction length.
*/
public void contractBack(final int amount) {
checkNotNegative(amount);
size -= Math.min(amount, size());
}
/**
* Contracts the back of this slice by one array component.
*/
public void contractBack() {
contractBack(1);
}
/**
* Reverses the array range covered by this slice.
*/
public void reverse() {
for (int l = 0, r = size() - 1; l < r; ++l, --r) {
final E tmp = get(l);
set(l, get(r));
set(r, tmp);
}
}
/**
* Cycles the array range covered by this slice <code>steps</code> steps to
* the left.
*
* @param steps the amount of steps to cycle.
*/
public void cycleLeft(final int steps) {
if (size() < 2) {
// Trivially cycled.
return;
}
final int actualSteps = steps % size();
if (actualSteps == 0) {
return;
}
if (actualSteps <= size() - actualSteps) {
cycleImplLeft(actualSteps);
} else {
cycleImplRight(size() - actualSteps);
}
}
/**
* Cycles the array range covered by this slice one step to the left.
*/
public void cycleLeft() {
cycleLeft(1);
}
/**
* Cycles the array range covered by this slice <code>steps</code> steps to
* the right.
*
* @param steps the amount of steps to cycle.
*/
public void cycleRight(final int steps) {
if (size() < 2) {
// Trivially cycled.
return;
}
final int actualSteps = steps % size();
if (actualSteps == 0) {
return;
}
if (actualSteps <= size() - actualSteps) {
cycleImplRight(actualSteps);
} else {
cycleImplLeft(size() - actualSteps);
}
}
/**
* Cycles the array range covered by this slice one step to the right.
*/
public void cycleRight() {
cycleRight(1);
}
/**
* Returns the iterator over this slice.
*
* @return the iterator.
*/
@Override
public Iterator<E> iterator() {
return new SliceIterator();
}
/**
* Returns the textual representation of this slice.
*
* @return a string.
*/
@Override
public String toString() {
final StringBuilder sb = new StringBuilder();
int left = size();
for (final E element : this) {
sb.append(element);
if (--left > 0) {
sb.append(' ');
}
}
return sb.toString();
}
/**
* Implements the rotation of a slice to the left.
*
* @param steps the amount of steps.
*/
private void cycleImplLeft(final int steps) {
final Object[] buffer = new Object[steps];
int index = 0;
// Load the buffer.
for (; index < steps; ++index) {
buffer[index] = get(index);
}
for (int j = 0; index < size; ++index, ++j) {
set(j, get(index));
}
index -= steps;
for (int j = 0; index < size; ++index, ++j) {
set(index, (E) buffer[j]);
}
}
/**
* Implements the rotation of a slice to the right.
*
* @param steps the amount of steps.
*/
private void cycleImplRight(final int steps) {
final Object[] buffer = new Object[steps];
for (int i = 0, j = size - steps; i < steps; ++i, ++j) {
buffer[i] = get(j);
}
for (int i = size - steps - 1; i >= 0; --i) {
set(i + steps, get(i));
}
for (int i = 0; i < buffer.length; ++i) {
set(i, (E) buffer[i]);
}
}
/**
* Checks that the input array is not <code>null</code>.
*
* @param <E> the array component type.
* @param array the array.
*/
private static <E> void checkArray(final E[] array) {
if (array == null) {
throw new NullPointerException("Input array is null.");
}
}
/**
* Checks that <code>index</code> is legal for an <code>array</code>.
*
* @param <E> the actual array component type.
* @param array the array.
* @param index the index.
*/
private static <E> void checkIndexForArray(final E[] array,
final int index) {
if (index < 0) {
throw new IllegalArgumentException(
"The index (" + index + ") may not be negative.");
}
if (index > array.length) {
throw new IllegalArgumentException(
"The index (" + index + ") is too large. Should be at " +
"most " + array.length);
}
}
/**
* Checks the access indices.
*
* @param index the index to check.
*/
private void checkIndex(final int index) {
final int size = size();
if (size == 0) {
throw new NoSuchElementException("Reading from an empty slice.");
}
if (index < 0 || index >= size) {
throw new IndexOutOfBoundsException(
"The input index is invalid: " + index + ". Should be " +
"in range [0, " + (size - 1) + "].");
}
}
/**
* Checks that <code>number</code> is not negative.
*
* @param number the number to check.
*/
private static void checkNotNegative(final int number) {
if (number < 0) {
throw new IllegalArgumentException(
"The input number is negative: " + number);
}
}
/**
* This class implements an iterator over this slice's array components.
*/
private class SliceIterator implements Iterator<E> {
/**
* The index of the next slice component to return.
*/
private int index;
/**
* The number of components yet to iterate.
*/
private int toIterateLeft;
/**
* Constructs a new slice iterator.
*/
SliceIterator() {
toIterateLeft = Slice.this.size;
}
/**
* Returns <code>true</code> if there are components yet to iterate.
*
* @return a boolean value.
*/
@Override
public boolean hasNext() {
return toIterateLeft > 0;
}
/**
* Returns the next slice component.
*
* @return a component.
*/
@Override
public E next() {
if (toIterateLeft == 0) {
throw new NoSuchElementException("Iterator exceeded.");
}
--toIterateLeft;
return Slice.this.get(index++);
}
}
/**
* The entry point into a program.
* @param args the command line arguments.
*/
public static void main(final String... args) {
final Character[] array = new Character[10];
for (char c = '0'; c <= '9'; ++c) {
array[c - '0'] = c;
}
final Slice<Character> slice = new Slice<>(array);
final Scanner scanner = new Scanner(System.in);
System.out.println(slice);
while (scanner.hasNext()) {
final String line = scanner.nextLine().trim().toLowerCase();
final String[] parts = line.split("\\s+");
if (parts.length == 0) {
continue;
}
switch (parts[0]) {
case "left":
if (parts.length > 1) {
int steps = Integer.parseInt(parts[1]);
slice.moveLeft(steps);
} else {
slice.moveLeft();
}
break;
case "right":
if (parts.length > 1) {
int steps = Integer.parseInt(parts[1]);
slice.moveRight(steps);
} else {
slice.moveRight();
}
break;
case "exfront":
if (parts.length > 1) {
int steps = Integer.parseInt(parts[1]);
slice.expandFront(steps);
} else {
slice.expandFront();
}
break;
case "exback":
if (parts.length > 1) {
int steps = Integer.parseInt(parts[1]);
slice.expandBack(steps);
} else {
slice.expandBack();
}
break;
case "confront":
if (parts.length > 1) {
int steps = Integer.parseInt(parts[1]);
slice.contractFront(steps);
} else {
slice.contractFront();
}
break;
case "conback":
if (parts.length > 1) {
int steps = Integer.parseInt(parts[1]);
slice.contractBack(steps);
} else {
slice.contractBack();
}
break;
case "lcycle":
if (parts.length > 1) {
int steps = Integer.parseInt(parts[1]);
slice.cycleLeft(steps);
} else {
slice.cycleLeft();
}
break;
case "rcycle":
if (parts.length > 1) {
int steps = Integer.parseInt(parts[1]);
slice.cycleRight(steps);
} else {
slice.cycleRight();
}
break;
case "rev":
slice.reverse();
break;
case "help":
printHelp();
break;
case "quit":
System.exit(0);
}
System.out.println(slice);
}
}
private static void printHelp() {
System.out.println(
"----------------------------------------------\n" +
"quit - Quit the demonstration.\n" +
"help - Print this help list.\n" +
"left [N] - Move the slice to the left.\n" +
"right [N] - Move the slice to the right.\n" +
"exfront [N] - Expand the front.\n" +
"exback [N] - Expand the back.\n" +
"confront [N] - Contract the front.\n" +
"conback [N] - Contract the back.\n" +
"lcycle [N] - Cycle the slice to the left.\n" +
"rcycle [N] - Cycle the slice to the right.\n" +
"rev - Reverse the range covered by this slice.\n" +
"----------------------------------------------\n");
}
}
If you want to take a look at unit tests, you'll find them here.
The cycling logic may seem too elaborate, yet the point was to ensure that the buffer length is no more than half of the length of a slice. I am eager to see other possible implementations.
Answer: public Slice(final E[] array) {
For me, adding final to arguments means lengthening the argument list for hardly any gain, YMMV.
/**
* Constructs a slice representing everything starting at index
* <code>fromIndex</code>.
*
* @param array the array being sliced.
* @param fromIndex the starting index.
*/
public Slice(final E[] array, final int fromIndex) {
this(array, fromIndex, array.length);
}
You've commented it well, but this is pretty unexpected to me. Compare with substring and similar methods, which stretch to the end. I'd suggest dropping all constructors but one, making it private, and adding factory methods clearly stating what they create.
public Slice(final E[] array,
final int fromIndex,
final int toIndex) {
It's a bit strange to accept toIndex but work with size internally. You may have a reason.
fromIndex -= steps % array.length;
if (fromIndex < 0) {
fromIndex += array.length;
}
This may blow up when steps is negative (making fromIndex > array.length). Write a mod method so you can use it like
fromIndex = mod(fromIndex + steps, array.length);
or something like this.
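A minimal sketch of the suggested mod helper (shown in Python for brevity, since the surrounding review is about Java, where % can return negative values; in Python, % with a positive modulus is already non-negative):

```python
def mod(x, m):
    """Normalize x into the range [0, m), even when x is negative.

    In Java, (x % m) can be negative for negative x, so the
    double-remainder idiom below is the usual fix; in Python,
    x % m is already non-negative for positive m.
    """
    return ((x % m) + m) % m

print(mod(-3, 10))  # 7
print(mod(13, 10))  # 3
```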
public void moveRight(final int steps) {
This should call moveLeft(-steps). Or, better, be dropped, as swamping the user with that many methods does no good.
public void expandFront(final int amount) {
checkNotNegative(amount);
I'd allow negative amount and do contract.
/**
* Cycles the array range covered by this slice <code>steps</code> steps to
* the left.
*
* @param steps the amount of steps to cycle.
*/
public void cycleLeft(final int steps) {
The Javadoc restates the method name, but I'm still having no idea what's cycling.
Summary
I'm too lazy to go through all of it, but what I dislike most is the amount of methods. Other than that it's nice. | {
"domain": "codereview.stackexchange",
"id": 13558,
"tags": "java, array, iterator, circular-list, interval"
} |
Maximum product of 3 integers in an int array using Python - follow up | Question: This is a follow up to Maximum product of 3 integers in an int array using Python
Changes to the code include renamed variables for improved readability and I added more test cases.
import unittest
def highest_product(list_of_ints):
max_seen = [float("-inf"), float("-inf"), float("-inf")]
min_seen = [float("inf"), float("inf")]
for x in list_of_ints:
if x >= max_seen[0]:
max_seen[0], max_seen[1], max_seen[2] = x, max_seen[0], max_seen[1]
elif x >= max_seen[1]:
max_seen[1], max_seen[2] = x, max_seen[1]
elif x > max_seen[2]:
max_seen[2] = x
if x <= min_seen[0]:
min_seen[0], min_seen[1] = x, min_seen[0]
elif x < min_seen[1]:
min_seen[1] = x
max_product_candidate_one = min_seen[0] * min_seen[1] * max_seen[0]
max_product_candidate_two = max_seen[0] * max_seen[1] * max_seen[2]
return max(max_product_candidate_one, max_product_candidate_two)
class TestHighestProduct(unittest.TestCase):
def test_highest_product(self):
self.assertEqual(highest_product([6, -1, -1, -2, 0]), 12)
self.assertEqual(highest_product([-6, -1, -1, -2]), -2)
self.assertEqual(highest_product([0, 0, 0]), 0)
self.assertEqual(highest_product([0, 0, -2]), 0)
if __name__ == '__main__':
unittest.main(verbosity=2)
Answer: This version feels much more readable, especially since you dropped half of your comparisons.
However, I really liked @JoeWallis's use of heapq. I would, however, use heappushpop, which performs all the comparisons you are doing at once.
But you will need to manually extract the maximum value of max_seen, since you no longer have any guarantee about its position.
You can thus write:
from heapq import heappushpop
def highest_product(list_of_ints):
max_seen = [float("-inf")] * 3
min_seen = [float("-inf")] * 2
for x in list_of_ints:
heappushpop(max_seen, x)
heappushpop(min_seen, -x)
# No change of sign since we changed it twice (once for each element)
max_product_candidate_one = min_seen[0] * min_seen[1] * max(max_seen)
max_product_candidate_two = max_seen[0] * max_seen[1] * max_seen[2]
return max(max_product_candidate_one, max_product_candidate_two) | {
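For comparison, the same selection can also be done in one call each with heapq.nlargest and heapq.nsmallest; a sketch (the function name here is mine, not from the answer):

```python
from heapq import nlargest, nsmallest

def highest_product_nlargest(list_of_ints):
    # The three largest values and the two smallest values
    # cover both candidate products.
    top = nlargest(3, list_of_ints)
    bottom = nsmallest(2, list_of_ints)
    return max(top[0] * top[1] * top[2],
               bottom[0] * bottom[1] * top[0])

print(highest_product_nlargest([6, -1, -1, -2, 0]))  # 12
```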
"domain": "codereview.stackexchange",
"id": 22654,
"tags": "python, python-3.x, interview-questions, integer"
} |
Interaction of quantum fields | Question: Do quantum fields in QFT interact with each other constantly and continuously, or only from time to time?
Answer: Quantum fields interact constantly.
However, when you consider the perturbation theory, at each order in the coupling constant your amplitudes look as if they originate from a discrete and finite number of acts of interaction between free particles. These contributions are represented by Feynman diagrams, which may be the source of your question. You should understand that this is just an artifact of the perturbative description. The full amplitude, with continuous interaction, is given by the sum of an infinite number of such diagrams.
"domain": "physics.stackexchange",
"id": 72711,
"tags": "quantum-mechanics, quantum-field-theory, particle-physics, standard-model"
} |
re installing ros | Question:
How should I re-install my whole ros environment?
thanks,
vahid.
Originally posted by vahid on ROS Answers with karma: 31 on 2012-02-29
Post score: 0
Original comments
Comment by Dan Lazewatsky on 2012-02-29:
Please be more specific. What OS are you using? What version of ROS? Why do you need to reinstall? Are you using a source-based install, or binary?
Answer:
Actually, I messed up my ROS environment, so I just re-installed everything. Thank you.
Originally posted by vahid with karma: 31 on 2012-02-29
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by prince on 2012-02-29:
i am just curious what you really messed up which forced you to go for reinstall? | {
"domain": "robotics.stackexchange",
"id": 8439,
"tags": "ros, installation"
} |
What is the effect of isospin on the proton or neutron alone, i.e. not in a doublet? | Question: So I start with a proton $p$.
I extend my "physical" space by means of the internal degree of freedom of isospin, so that I now write $p$ in a higher-dimensional space:
$$ p = \left( \begin{array}{c}
1 \\
0
\end{array} \right). $$
I notice that in this representation I can also have the neutron $n$:
$$ n = \left( \begin{array}{c}
0 \\
1
\end{array} \right), $$
which is great because now in this isospin extended space $p$ and $n$ form an (approximate) $SU(2)$ doublet:
$$ \begin{equation}
\left( \begin{array}{c}
p \\
n
\end{array} \right) \xrightarrow{SU(2)} \exp \left( - \frac{ i }{ 2} \theta_a \sigma_a \right) \left( \begin{array}{c}
p \\
n
\end{array} \right).
\end{equation}$$
Question
Back in my "physical space" where $p$ is just one-dimensional, i.e. it is not embedded in the higher dimensional space derived from its internal state, what is the effect/consequence of the isospin symmetry? Does it appear as a phase factor?
Answer:
What I mean is: does the 1D representation of the proton carry any information of the fact that, in its 2D extended space (1,0) there is a SU(2) symmetry?
There is no one-dimensional representation. There is a projection to a one-dimensional state of the two dimensions of isospin. All $(x,y,z,t)$ points in four-dimensional space can be characterized by an SU(2) vector. They are independent mathematical spaces.
The effect in the space where measurements can be done, (x,y,z,t), is that a neutron should exist in that space too.
The existence of a neutron in the four dimensional space is a prediction of the SU(2) theoretical model which is validated experimentally. | {
"domain": "physics.stackexchange",
"id": 55645,
"tags": "particle-physics, nuclear-physics, neutrons, protons, isospin-symmetry"
} |
How does a computer determine the data type of a byte? | Question: For example, if the computer has 10111100 stored on one particular byte of RAM, how does the computer know to interpret this byte as an integer, ASCII character, or something else? Is type data stored in an adjacent byte? (I don't think this would be the case as this would result in using twice the amount of space for one byte.)
I suspect that perhaps a computer does not even know the type of data, that only the program using it knows. My guess is that because RAM is RAM and therefore not read sequentially, that a particular program just tells the CPU to fetch the info from a specific address and the program defines how to treat it. This would seem to fit with programming things such as the need for typecasting.
Am I on the right track?
Answer: Your suspicion is correct. The CPU doesn't care about the semantics of your data. Sometimes, though, it does make a difference. For example, some arithmetic operations produce different results when the arguments are semantically signed or unsigned. In that case you need to tell the CPU which interpretation you intended.
It is up to the programmer to make sense of her data. The CPU only obeys orders, blissfully unaware of their meaning or goals. | {
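This can be demonstrated directly: the same bit pattern from the question yields different values depending on which interpretation a program applies. A small illustration using Python's struct module:

```python
import struct

raw = bytes([0b10111100])  # the byte from the question

as_unsigned = struct.unpack("B", raw)[0]  # unsigned 8-bit integer
as_signed = struct.unpack("b", raw)[0]    # two's-complement signed 8-bit integer
as_char = raw.decode("latin-1")           # the same bits read as a character

print(as_unsigned, as_signed, as_char)  # 188 -68 ¼
```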
"domain": "cs.stackexchange",
"id": 9504,
"tags": "memory-hardware, memory-access, type-checking"
} |
Is it possible to download packages from the build farm? | Question:
Many of the packages are built at http://build.willowgarage.com/ before they are pushed to the repositories.
Is it possible to get the output debs of these builds?
(If not I'll probably file an enhancement request.)
Originally posted by Asomerville on ROS Answers with karma: 2743 on 2011-07-14
Post score: 0
Answer:
According to the release architecture here:
http://www.ros.org/wiki/release/Architecture
The Hudson builds are in the release pipeline; the last stage of which before being released publicly is:
http://packages.ros.org/ros-shadow-fixed/
EDIT by kwc: ros-shadow-fixed may not have a completed set of packages, so it's use is not recommended. If you do fetch from ros-shadow-fixed, you will have to update your entire set of ros-* debs at once as the debians are version-locked for binary compatibility.
Originally posted by Asomerville with karma: 2743 on 2011-07-14
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Asomerville on 2011-07-14:
Thanks for the link.
Comment by kwc on 2011-07-14:
I recommend that you read http://www.ros.org/wiki/release/Architecture if you're interested in the definition of what is contained there.
Comment by Asomerville on 2011-07-14:
Is it in fact always the latest from the build farm though?
Comment by kwc on 2011-07-14:
caveat emptor: they are held there before going public for a reason. You may find yourself with incomplete or broken packages if you regularly use that repository. | {
"domain": "robotics.stackexchange",
"id": 6140,
"tags": "debian, ubuntu"
} |
Why and how is rust forming on moon? | Question: note: @Mithron's proposed duplicate Why can't rust form without water? does not have anything about the conditions on the Moon, so no, it's not a duplicate.
Several popular news articles mention that "rust" has been found on the Moon, and this is surprising because rust requires the presence of both oxygen and liquid water.
nasaspaceflight.com: Rust on the Moon. How is that possible without oxygen and liquid water? "But how can rust form far from water ice deposits on a barren oasis devoid of oxygen?"
Axios: Researchers find rust on the Moon "Scientists were surprised by the findings because rust requires oxygen and water to form on Earth."
When I open a plastic-wrapped cake or muffin there is often an "O-buster" packet with finely powdered iron. When I open the packet (chemistry is more interesting than cake) and watch the powder, indeed it rusts. Of course air has plenty of both oxygen and water, so this is no surprise.
Question: But why and how is rust forming on moon? Why does it need both oxygen and water there, and more importantly, how is it getting them?
Answer: UPDATE:
Today I got a chance to read the abstract of the original paper that made news everywhere by reporting that the Moon has rust on the side which faces the Earth.
The authors propose that upper atmosphere oxygen from the Earth reaches the moon. In their words
"Oxygen delivered from Earth’s upper atmosphere could be the major oxidant that forms lunar hematite. Hematite at craters of different ages may have preserved the oxygen isotopes of Earth’s atmosphere in the past billions of years. Future oxygen isotope measurements can test our hypothesis and may help reveal the evolution of Earth’s atmosphere."
Widespread hematite at high latitudes of the Moon in Science Advances (open access)
Let us split your query into three parts:
a) Will iron oxidize without water and form oxides?
Iron would happily burn in oxygen to form iron oxides. But we cannot call it iron rust. So iron can oxidize without moisture. In fact, iron powder may spontaneously burn in air.
b) Will iron form rust without water?
Rust is more of a semantic issue. Iron rust may not have constant composition. As per the dictionary definition (OED) "
A red, orange, or yellowish-brown substance which forms progressively
as a flaking, permeable coating on the surface of iron and its alloys
as a result of oxidation, esp. through exposure to air and moisture.
Therefore, rust is a product of ambient environmental factors that affect iron at (approximately) typical room temperature. Rusting in sea water might be different from rusting on a mountain. Acidic gases such as carbon dioxide, sulfur dioxide, and nitrogen oxides certainly accelerate this process. Iron rust can be the green variety or the common brown version. Since, chemically, rust is a hydrated oxide of iron plus its hydroxide, water's presence is a must. More importantly, water is a necessary ingredient that works as the mediator of the electrochemical cell that forms between oxygen and the iron surface. If you search Google Scholar you will find tons of articles on the electrochemical mechanism of iron rusting.
Coming to the more interesting question of rust on the moon:
nasaspaceflight.com: Rust on the Moon. How is that possible without
oxygen and liquid water? "But how can rust form far from water ice
deposits on a barren oasis devoid of oxygen?" Axios: Researchers find
rust on the Moon "Scientists were surprised by the findings because
rust requires oxygen and water to form on Earth."
Since you would know more astronomy than I do, wasn't the Moon a part of the Earth? I recall this from a documentary. I do remember seeing layers of "rust" along mountainous paths, so the oceans had a lot of iron which settled as a layer in mountains. If the Moon were a part of the Earth, why is that surprising?
a) The question is what was the age of this rust?
Secondly, Mars is also "rusty"; its surface is highly oxidizing, with a huge amount of perchlorate in the soil (there is a long story of how that was discovered through the folly of analytical chemists). Who knows what the lunar soil is like. What about the shower of radiation on the Moon? That may promote oxidation as well.
"domain": "chemistry.stackexchange",
"id": 14499,
"tags": "everyday-chemistry, oxides"
} |
Failed to load r_cart | Question:
Dear All.
while I following the step from the website:
http://www.ros.org/wiki/pr2_simulator/Tutorials/Teleop%20PR2%20arm%20in%20simulation
roscore
roslaunch pr2_gazebo pr2_empty_world.launch
roslaunch jtteleop.launch
Then from the terminal I find the following information
administrator@ubuntu:~$ roslaunch jtteleop.launch
... logging to /home/administrator/.ros/log/ea179d4e-5728-11e1-af87-18f46a4fafa8/roslaunch-ubuntu-19389.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.
started roslaunch server http://ubuntu:58130/
SUMMARY
PARAMETERS
/r_cart/joint_max_effort/r_forearm_roll_joint
/r_cart/pose_command_filter
/r_cart/joint_max_effort/r_shoulder_pan_joint
/r_cart/joint_feedforward/r_shoulder_lift_joint
/r_cart/joint_feedforward/r_elbow_flex_joint
/r_cart/joint_max_effort/r_wrist_flex_joint
/r_cart/joint_max_effort/r_upper_arm_roll_joint
/r_cart/joint_feedforward/r_shoulder_pan_joint
/rosdistro
/r_cart/vel_saturation_trans
/r_cart/joint_feedforward/r_wrist_roll_joint
/r_cart/cart_gains/trans/d
/rosversion
/r_cart/joint_max_effort/r_shoulder_lift_joint
/r_cart/cart_gains/trans/p
/r_cart/type
/r_cart/tip_name
/r_cart/vel_saturation_rot
/r_cart/joint_feedforward/r_forearm_roll_joint
/r_cart/cart_gains/rot/p
/r_cart/root_name
/r_cart/jacobian_inverse_damping
/r_cart/joint_max_effort/r_elbow_flex_joint
/r_cart/k_posture
/r_cart/cart_gains/rot/d
/r_cart/joint_max_effort/r_wrist_roll_joint
/r_cart/joint_feedforward/r_upper_arm_roll_joint
/r_cart/joint_feedforward/r_wrist_flex_joint
NODES
/
stop_r_arm (pr2_controller_manager/pr2_controller_manager)
spawn_cart (pr2_controller_manager/spawner)
ROS_MASTER_URI=http://localhost:11311
core service [/rosout] found
process[stop_r_arm-1]: started with pid [19407]
process[spawn_cart-2]: started with pid [19408]
[stop_r_arm-1] process has finished cleanly.
log file: /home/administrator/.ros/log/ea179d4e-5728-11e1-af87-18f46a4fafa8/stop_r_arm-1*.log
[ERROR] [WallTime: 1329237265.196718] [254.818000] Failed to load r_cart
I am in ubuntu 11.10 , electric ros package.
I also rosmake pr2_controller_manager
and download the source code from svn teleop_controllers.
Thanks for any help.
Zhenli
Originally posted by zhenli on ROS Answers with karma: 287 on 2012-02-14
Post score: 1
Answer:
Hi,
in ROS Electric the controller moved to the robot_mechanism_controllers package.
In the launch file you had to create during the tutorial, change the controller type from "JTTeleopController" to "robot_mechanism_controllers/JTCartesianController". Then it should work.
Best,
Juergen
Originally posted by JuergenHess with karma: 86 on 2012-02-14
This answer was ACCEPTED on the original site
Post score: 3 | {
"domain": "robotics.stackexchange",
"id": 8232,
"tags": "microcontroller, simulation, pr2"
} |
In a neural network with tensor input X, it seems there are times when it will never learn... Why? | Question: import numpy as np
import keras.models as km
import keras.layers as kl
import keras.optimizers as ko
import keras.losses as kloss
# This will cause no learning
np.random.seed(1692585618)
def f(x):
a = x[0]* 3.141 + x[1]
return a;
# Create a sample dataset
# Input is (*, 2)
# Output is (*, )
x_train=np.array([[1,2], [3,4], [5,6]])
y_train=np.array([f(x) for x in x_train])
# These are required by the shape of x_train and y_train
in_dim = x_train.shape[1]
out_dim = 1
model = km.Sequential()
model.add(kl.Dense(units=3, activation='relu', input_shape=(in_dim,)))
model.add(kl.Dense(units=out_dim, activation='relu'))
model.compile(
loss=kloss.mean_squared_error
, optimizer=ko.Adam(lr=0.1)
)
model.fit(x_train, y_train, epochs=500, batch_size=1, verbose=True)
Output:
Epoch 1/50
3/3 [==============================] - 1s 221ms/step - loss: 225.9046
Epoch 2/50
3/3 [==============================] - 0s 3ms/step - loss: 225.9046
Epoch 3/50
3/3 [==============================] - 0s 3ms/step - loss: 225.9046
Epoch 4/50
3/3 [==============================] - 0s 2ms/step - loss: 225.9046
...and so on (hundreds more)
Answer: In short
You are waaaaay undertraining. Increase the number of times you show the network your data. I am guessing training may take longer than you expect because typically networks train best with 0-mean data, which yours is not.
ReLU seems to cause problems with such a shallow network. Try increasing depth or using elu activation instead.
I confirmed that having an activation in the last layer doesn't cause huge problems, but it is still a good idea to get in the habit of knowing when you should and should not have an activation in the last layer.
Undertraining
I handled this by increasing the number of training examples by 10,000 times (you could increase the number of epochs instead but this results in better printing):
x_train=np.array([[1,2], [3,4], [5,6]] * 10000)
y_train=np.array([f(x) for x in x_train])[:, np.newaxis]
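As an aside on the 0-mean remark above, inputs can be standardized before training; a hypothetical preprocessing sketch (not part of the original answer), shown for one feature column:

```python
from statistics import mean, pstdev

xs = [1.0, 3.0, 5.0]  # first feature column of the question's x_train
mu, sigma = mean(xs), pstdev(xs)

# Shift to zero mean, scale to unit variance
scaled = [(x - mu) / sigma for x in xs]
print(scaled)  # symmetric around 0, e.g. [-1.22..., 0.0, 1.22...]
```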
Problems with ReLU
The problems with ReLU can be handled in one of two ways: increase the number of layers when using ReLU, or use a different activation such as elu. Both trained just fine for me:
model = km.Sequential()
model.add(kl.Dense(units=3, activation='relu', input_shape=(in_dim,)))
model.add(kl.Dense(units=3, activation='relu'))
model.add(kl.Dense(units=out_dim))
model.compile(
loss=kloss.mean_squared_error, optimizer=ko.Adam(lr=0.1)
)
or
model = km.Sequential()
model.add(kl.Dense(units=3, activation='elu', input_shape=(in_dim,)))
model.add(kl.Dense(units=out_dim))
model.compile(
loss=kloss.mean_squared_error, optimizer=ko.Adam(lr=0.1)
)
Full working code
Below shows the code with elu, you can swap the block out for the ReLU version (shown above) instead and it prints very similar values.
import numpy as np
import keras.models as km
import keras.layers as kl
import keras.optimizers as ko
import keras.losses as kloss
# This will cause no learning
np.random.seed(1692585618)
def f(x):
a = x[0]* 3.141 + x[1]
return a;
# Create a sample dataset
# Input is (*, 2)
# Output is (*, )
x_train=np.array([[1,2], [3,4], [5,6]] * 10000)
y_train=np.array([f(x) for x in x_train])[:, np.newaxis]
# These are required by the shape of x_train and y_train
in_dim = x_train.shape[1]
out_dim = 1
model = km.Sequential()
model.add(kl.Dense(units=3, activation='elu', input_shape=(in_dim,)))
model.add(kl.Dense(units=out_dim))
model.compile(
loss=kloss.mean_squared_error, optimizer=ko.Adam(lr=0.1)
)
model.fit(x_train, y_train, epochs=3, verbose=True)
print('predicted: {}'.format(model.predict(x_train)[:3, 0]))
print('actual : {}'.format(y_train[:3, 0]))
prints
Epoch 1/3
30000/30000 [==============================] - 1s 29us/step - loss: 3.0553
Epoch 2/3
30000/30000 [==============================] - 1s 20us/step - loss: 5.0199e-06
Epoch 3/3
30000/30000 [==============================] - 1s 20us/step - loss: 5.4414e-06
predicted: [ 5.1426897 13.420667 21.707573 ]
actual : [ 5.141 13.423 21.705] | {
"domain": "datascience.stackexchange",
"id": 3608,
"tags": "neural-network, keras, tensorflow"
} |
Compute Distance Between Stars | Question: If I have the following information about star A and Star B, how can I compute the distance between A and B?
Distance from Sol for Star A
Right Ascension/Declination of Star A
Parallax/Absolute Magnitude of Star A
Distance from Sol for Star B
Right Ascension/Declination of Star B
Parallax/Absolute Magnitude of Star B
I can use the parallax and absolute magnitude to compute distance from Sol, but I don't know how to get the distance between A and B.
Obviously there will be errors in the parallax, but I'm looking for a best effort means to calculate this.
Edit
I've implemented this in Java and made it available via this link:
https://gist.github.com/fergusonjason/fa4794dc0dc5d45f7a7ed12296577ed5
I realize for actual science work most people wouldn't use Java, but this is for a project that is part of my Java portfolio.
Answer: If you know the right ascension and declination of the stars, then you know the angle between them (i.e. the A-Sun-B angle). Working this out is an exercise in spherical trigonometry. The cosine of the angular separation of the stars $\cos(C)$ is given by
$$\cos(C) = \sin(d_a)\sin(d_b) + \cos(d_a)\cos(d_b)\cos(r_a-r_b)$$
Where $d_i$ is the declension of the star $i$ and $r_i$ is the right ascension (in degrees or radians as necessary)
Then the distance between the stars is just an application of the cosine law.
$$c^2=a^2+b^2 -2ab\cos(C).$$ In which $a$ and $b$ are the distances to each star and $c$ is distance between the stars. | {
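Putting the two formulas together, a minimal sketch in Python (the function and variable names are mine, not from the answer; angles in degrees, distances in any consistent unit):

```python
import math

def star_separation(d_a, ra_a, dec_a, d_b, ra_b, dec_b):
    """Distance between stars A and B given heliocentric distances
    and equatorial coordinates (degrees)."""
    ra_a, dec_a, ra_b, dec_b = map(math.radians, (ra_a, dec_a, ra_b, dec_b))
    # Spherical law of cosines: angular separation C as seen from the Sun
    cos_c = (math.sin(dec_a) * math.sin(dec_b)
             + math.cos(dec_a) * math.cos(dec_b) * math.cos(ra_a - ra_b))
    # Planar law of cosines in the A-Sun-B triangle
    return math.sqrt(d_a ** 2 + d_b ** 2 - 2 * d_a * d_b * cos_c)

print(star_separation(3.0, 90.0, 0.0, 4.0, 0.0, 0.0))  # ≈ 5.0 (right-angle case)
```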
"domain": "astronomy.stackexchange",
"id": 6289,
"tags": "observational-astronomy, distances, positional-astronomy"
} |
Why is the Oort cloud presumed to be spherical? | Question: Most descriptions of the Oort cloud depict it as a mostly spherical distribution of planetesimals, with occasional allowance for an inner component that is more donut-shaped. This is slightly at odds with the fact that most protoplanetary clouds and their derivative objects - planets, asteroids, comets and dust - will collapse to a fairly well-defined plane relatively early in a stellar system's evolution.
What evidence is used to postulate this? Does it come from numerical simulations of the solar system? Or does it help account for observed orbital inclinations of real comets?
Answer: Nobody has "seen" the Oort-cloud (yet). The Oort cloud is simply a concept that can explain why long-period comets appear to come from random directions.
With the current instruments, we are not able to detect any of these comets "at the source". It is also not even possible to show with measurements that at that distance there might be a companion-object for the sun (long-period binary, with a brown dwarf companion going through the oort-cloud, which could explain some things about periodical mass-extinctions). We can however put some limits on the mass and distance of such an object, but we can not yet prove with measurements that it is impossible. This just to show how small the amount of information is that we have on these distances.
The only thing we know about objects that are there, is what we see from objects that come our way, and when we calculate the orbit, we notice that it comes from the same region of the solar system.
Edit: this publication shows that the WISE mission has been able to narrow it down, showing that if a Jupiter-mass brown dwarf exists in our solar system, it has to be at least at a distance of 26,000 AU, to stay under the detection limits of WISE.
The point is not to say whether such an object exists, but to point out that at those distances we can only detect things that are massive compared to the average comet. This shows that the only information we have about the Oort cloud is indirect information from objects that are on orbits going through the Oort cloud and near enough to Earth for us to detect them.
"domain": "astronomy.stackexchange",
"id": 599,
"tags": "oort-cloud"
} |
Safety VS. Liveness Property | Question: I have to prove whether a certain property is a safety or a liveness property. The property represents the absence of deadlock, so I expected it to be a safety property from what I read online.
The issue is that I seem to show that it is both, but this is impossible, as the only property that is both a safety and a liveness property would be $\left(2^{\Xi}\right)^{\omega}$, where $\Xi$ is the set of propositional symbols. I would like to understand where the mistake is, and naturally whether either of the solutions is correct. Also, I would like to do it in a formal way instead of using the "something bad never happens" intuitive idea.
The property in question is: $$ \mathsf{G}((P1E \wedge P2E) \rightarrow (\mathsf{F}\, (P1C \vee P2C))) $$
The definition I have for safety property goes as follows:
An LT property $P_{safe}$ over $\Xi$ is called a safety property if for all words $\sigma \in\left(2^{\Xi}\right)^{\omega} \setminus P_{safe}$ there exists a finite prefix $\hat\sigma$ of $\sigma$ such that
$$P_{safe} \cap \{\sigma^\prime\in \left(2^{\Xi}\right)^{\omega}\; |\; \hat\sigma \textrm{ is a finite prefix of }\sigma^\prime
\} = \emptyset $$
What I tried to do is as follows:
Let $\sigma\in \left(2^{\Xi} \right)^{\omega}$ be an arbitrary word such that $\sigma\not \Vdash \mathsf{G}((P1E \wedge P2E) \rightarrow (\mathsf{F}\, (P1C \vee P2C))) $, then there exists $i\geq 0$ such that $\sigma, i\,\not \Vdash (P1E \wedge P2E) \rightarrow (\mathsf{F}\, (P1C \vee P2C)) $. Let $\hat\sigma=\sigma[..i]$, this is a bad prefix of $\sigma$, and for any word $\sigma^\prime\in \left(2^{\Xi} \right)^{\omega}$ such that $\hat\sigma$ is a prefix of $\sigma^\prime$, we have that $\sigma^\prime,i \not \Vdash(P1E \wedge P2E) \rightarrow (\mathsf{F}\, (P1C \vee P2C))$, thus:
$\sigma^\prime\not \Vdash \mathsf{G}((P1E \wedge P2E) \rightarrow (\mathsf{F}\, ( P1C \vee P2C))) $. We conclude it is a safety property.
The definition I have for a liveness property goes as follows:
An LT property $P_{live}$ over $\Xi$ is called a liveness property if $pref(P_{live}) = \left(2^{\Xi} \right)^{*}$.
So, I can't see why I can't prove it is a liveness property, as:
Take $\hat\sigma\in\left(2^{\Xi} \right)^{*}$ of length $n+1$, $\hat\sigma=v_0...v_n$, let $\sigma=\hat\sigma.P1C.\varnothing^{\omega}$ .
For all $i> n$, we have that $\sigma,i \not \Vdash P1E \wedge P2E $ thus $\sigma,i \Vdash (P1E \wedge P2E) \rightarrow (\mathsf{F}\, (P1C \vee P2C))$.
For all $i\leq n$, we have $i<n+1$ and $\sigma, n+1 \Vdash P1C$ thus $\sigma, i\Vdash \mathsf{F}(P1C \vee P2C)$, hence $\sigma,i \Vdash (P1E \wedge P2E) \rightarrow (\mathsf{F}\, (P1C \vee P2C))$.
Therefore we have that $\forall i\geq 0, \;\; \sigma, i\Vdash (P1E \wedge P2E) \rightarrow (\mathsf{F}\, (P1C \vee P2C))$. Thus:
$$\sigma \Vdash \mathsf{G}((P1E \wedge P2E) \rightarrow (\mathsf{F}\, (P1C \vee P2C))) $$
Hence, it is a liveness property.
Note: This is my first time posting in this community, so any comments are appreciated. Also, if it would be better to post this on the math Stack Exchange, let me know!
Answer: I found the error on the safety proof.
Let $\hat\sigma=\sigma[..i]$, this is a bad prefix for our property.
This is of course wrong. If the property is not satisfied, i.e., $\sigma \not\Vdash \mathsf{G}((P1E \wedge P2E) \rightarrow (\mathsf{F}\, (P1C \vee P2C)))$ then indeed there is as $i\geq 0$ such that $\sigma, i\not \Vdash ((P1E \wedge P2E) \rightarrow (\mathsf{F}\, (P1C \vee P2C))$, that is:
$$\sigma, i\Vdash P1E \,\wedge\, P2E \;\;\textrm{ and } \sigma, i\not\Vdash \mathsf{F}\, (P1C \vee P2C)$$
The thing is that just because we have $\sigma, i\not\Vdash \mathsf{F}\, (P1C \vee P2C)$, it does not mean that it cannot become true in the future, therefore we cannot state that $\hat\sigma$ is a bad prefix for the property, as a word $\sigma^\prime$ with prefix $\hat\sigma$ can still be extended in order to satisfy the property.
Hence, this was wrong, and it is indeed a liveness property.
"domain": "cs.stackexchange",
"id": 20802,
"tags": "logic, model-checking, check-my-answer, linear-temporal-logic"
} |
Precision and Accuracy course review | Question: In the following course review question I need to choose all the answers that are true about precision and accuracy
1)Definitions of accuracy and precision depend on the type of measurement you are making.
2)Accuracy and precision have the same definition and occur when measurements are close to each other.
3)Accuracy occurs when a measurement is close to the true value and precision occurs when measurements are all close to each other.
4)Accuracy and precision have the same definition and occur when a measurement is close to the true value
5)Precision occurs when a measurement is close to the true value and accuracy occurs when measurements are all close to each other
6)The set of numbers 6.162 cm, 6.163 cm, 6.162 cm is more precise than 6.162 cm, 6.181 cm, 6.150 cm
My solution
I know that 3 is true, accuracy is close to the target and precision is all measurements are close to each other. Therefore number 5 would be false
I know 6 is true because the numbers in the first set are closer than the numbers in the second set
I do not think number 2 is correct, accuracy and precision have two different definitions
I do not think number 4 is correct those definitions are incorrect
Not sure if 1 is correct or not. Looking for some help with this question
Answer: Accuracy should always refer to the quality of being "correct," and Precision should always refer to the quality of "uniformity" or "consistency."
All else equal, precision is far more difficult to achieve than accuracy. Once your precision is acceptable, accuracy is typically a simple matter to account for.
For instance, if a marksman manages to hit a target repeatedly half a full meter under his target, but does so in a three centimeter radius circle, then it would be said that his accuracy was particularly poor, but his precision excellent. Being such a precise shot, all he would need to do is adjust his sight and suddenly he could be striking dead-center. However, if such a marksman was accurate but imprecise, it may mean that the average position of his shots were close to the center, but scattered like a random distribution across a particular area (a much more difficult problem to address).
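The same distinction can be put in numbers using the two measurement sets from option 6. A quick Python sketch (the "true value" of 6.162 cm here is my own assumption, made up purely for illustration):

```python
import statistics

set_a = [6.162, 6.163, 6.162]  # option 6, first set of measurements (cm)
set_b = [6.162, 6.181, 6.150]  # option 6, second set of measurements (cm)
true_value = 6.162             # hypothetical true length, assumed for illustration

def precision(xs):
    # precision: how tightly the readings cluster (smaller spread = more precise)
    return statistics.stdev(xs)

def accuracy_error(xs):
    # accuracy: how close the average reading sits to the true value
    return abs(statistics.mean(xs) - true_value)

print(precision(set_a) < precision(set_b))  # True: the first set is more precise
```

Precision is the spread of the readings, accuracy is the offset of their average from the truth, and the two are computed independently.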
Now, the majority of your responses appear accurate ;) but the first question sounds to be more a matter of semantics. The definitions themselves I have just described for you do not change based upon measurement, but the specific qualities of "being accurate/inaccurate" and or "being precise/imprecise" can and do change.
What it means to be accurate/precise
VS.
What is considered to be accurate/precise in a given context
If I were you, I would ask my instructor to clarify between these two possibilities, but make certain you show that you understand the distinction between the two terms. | {
"domain": "physics.stackexchange",
"id": 33777,
"tags": "homework-and-exercises, measurements, error-analysis"
} |
Proving 1 = 2 by POAC (Principle of Atom Conservation) | Question: I have been told that applying POAC in any equation means conserving the number of moles of an element in both reactant and product.
For example :
$$\ce{H2 + O2 -> H2O}$$
Here when we apply POAC on oxygen we write :
Equation 1 :
No. Of Atoms(or moles) of "O" in reactant = No. Of Atoms(or moles ) of "O" in product
Equation 2: (obtained from equation 1)
2(moles of $\ce{O2}$) = 1(moles of $\ce{H2O}$)
Where equation 2 gives us the necessary data required in solving questions of stoichiometric calculations .
I understand how equation 2 comes from equation 1 but I don't understand why equation 1 is correct even when the chemical equation is unbalanced ?
As I think equation 1 can be applied only when the chemical equation is balanced .
As here :
No. Of Atoms of "O" in reactant =2
No of Atoms of "O" in product = 1
Thus by equation 1 we can say that $1=2$ .
And my book says POAC can be applied even when equation is not balanced . So here equation is not balanced and when I apply POAC I get $1=2$ . So what's is wrong with my reasoning?
Answer: As said by many others, the equation 1 given above is simply wrong, as it can be applied only when a balanced chemical equation is given to us; otherwise we reach all sorts of contradictions, as in the question above.
However, many non-rigorous teachers and textbooks write equation 1 for POAC (which is always true in a balanced chemical equation) and then say that equation 2 is obtained from equation 1 (which is partially incorrect). It is true that equation 2 comes naturally from equation 1, but only for a balanced chemical equation.
So how come we apply equation 2 in POAC in all sorts of questions and still come up with the right answer?
The answer is that equation 2 is true in itself, whether the chemical equation given (in POAC) is balanced or not. So it does not rely on equation 1 for its validity.
How is equation 2 always valid?
As pointed out by @iammax in the comments, we have in effect essentially balanced the given chemical equation in equation 2 itself.
For example, in the $KClO_3$ question linked above, the moles of $O_2$ and the moles of $KClO_3$ given to us are actually in proportion to the stoichiometric coefficients, which enables us to use POAC without balancing the reaction (even though we have done that indirectly by using the numbers of moles of $O_2$ and $KClO_3$).
To conclude, after using the numbers of moles of reactant and product we have indirectly balanced the chemical equation, since the ratio of their numbers of moles equals the ratio of their respective stoichiometric coefficients. After that we just use conservation of atoms (as the name says) and multiply the number of moles of the molecule by the number of atoms of the element on which POAC is being applied (for example, we multiply the number of moles of $O_2$ by 2 when we apply POAC on the oxygen atom).
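As a small numeric illustration of applying equation 2 to the $\ce{H2 + O2 -> H2O}$ example from the question (a sketch of my own; the 0.5 mol figure is made up):

```python
# POAC on oxygen for H2 + O2 -> H2O, working with moles directly.
# Conservation of O atoms (equation 2): 2 * n(O2) = 1 * n(H2O).
moles_O2 = 0.5            # hypothetical amount of O2 consumed
O_per_O2, O_per_H2O = 2, 1

moles_H2O = moles_O2 * O_per_O2 / O_per_H2O
print(moles_H2O)  # 1.0 -- each mole of O2 carries enough oxygen for two moles of water
```

Note that the unbalanced equation never enters the calculation: only the atom counts per molecule and the moles do.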
"domain": "chemistry.stackexchange",
"id": 17417,
"tags": "physical-chemistry, mole"
} |
Clearing operation in costmap_2d explanation? | Question:
Can someone explain the following sentence or with an image maybe ?
A clearing operation, however, consists of raytracing through a grid from the origin of the sensor outwards for each observation reported.
Thank you !
Originally posted by 2ROS0 on ROS Answers with karma: 1133 on 2014-08-21
Post score: 0
Answer:
Sensor data gives you two things: Where an obstacle is and where an obstacle isn't. Imagine you have a single laser that starts at point A encounters an obstacle at point B.
The marking operation is putting a lethal value in the costmap at point B.
The clearing operation is taking all the points between A and B and marking them as free space.
The process of finding the line is often referred to as ray tracing, and is explained here: http://en.wikipedia.org/wiki/Bresenham's_line_algorithm
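A toy version of the two operations on a one-row grid (my own sketch, not costmap_2d's actual implementation; a real costmap traces the full Bresenham line in 2-D, here the ray is simplified to a horizontal one):

```python
# Cells along the ray from sensor A to hit point B are cleared,
# and B itself is marked lethal.
FREE, UNKNOWN, LETHAL = 0, -1, 100
grid = [[UNKNOWN] * 6 for _ in range(1)]  # one scan line of 6 cells

sensor_x, hit_x = 0, 4  # A at column 0, obstacle B at column 4
for x in range(sensor_x, hit_x):
    grid[0][x] = FREE       # clearing: the ray passed through these cells
grid[0][hit_x] = LETHAL     # marking: the endpoint is an obstacle

print(grid[0])  # [0, 0, 0, 0, 100, -1] -- cell 5, beyond the hit, stays unknown
```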
Originally posted by David Lu with karma: 10932 on 2014-08-21
This answer was ACCEPTED on the original site
Post score: 4 | {
"domain": "robotics.stackexchange",
"id": 19141,
"tags": "ros, navigation, clear-costmap, costmap-2d, costmap-2d-ros"
} |
Simple mathematical model for a bouncing ball | Question: I started coding a physics program that simulates gravity - specifically a bouncing ball - and I've already learned a lot researching the topic but the bounce still doesn't look quite right. I need a simple formula that captures a bounce. There may be questions similar to this, but I wanted advice on whether the following logic makes sense. I'm also not very math or physics-savvy, so if you can explain it in a way that a law student would understand or link me to some appropriate material, I'd appreciate it a lot.
So far I have something like this:
The downward fall from a stationary position:
Velocity (going down) = 0.5 * 9.8 * time^2
Where time = seconds counting up.
Bounce:
Velocity (going up) = Coefficient of Restitution * Velocity Just Before Bounce
Upward Rise:
I reversed the first formula so that the ball is essentially "rewinding time", so to speak.
Velocity (going up) = Coefficient of Restitution * 0.5 * 9.8 * time^2
Where time = seconds counting down from wherever the "counting up" stopped, so that
the velocity decreases to zero at the same rate that it increased during free fall.
Peak
Velocity (going down) = 0.5 * 9.8 * time^2
Where time = seconds counting up before the ball reaches the peak.
Step 1 and Step 2 seem to work fine, but Step 3 makes the balls look unnatural, like they're being pulled up and hovering a bit before dropping. And I haven't been able to find anything that explains the upwards movement of a bouncing ball (and how it differs from the fall).
The only way I've been able to smooth out the "hovering" effect of the ball when it reaches its post-bounce peak is to start the free fall counter (time) right before the ball reaches the peak so that it gains enough velocity to drop faster....if that makes sense. That's what I mean by my comment in Step 4.
P.S. I'm not planning on accounting for friction, ball shape, density, etc. unless it would be necessary to do so. The only "extra" thing I've added in so far is the ability to change the ball's elasticity.
Answer: Your formula's wrong. You've got $v=\frac12 at^2$, whereas that's the formula for $y$=height. Velocity's actually $v=at$ (with $a=9.8\mbox{m/sec}^2$). | {
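With that correction, a minimal time-stepped sketch of the bounce looks like this (all constants are illustrative, not taken from the question):

```python
g = 9.8     # m/s^2, gravitational acceleration
e = 0.7     # coefficient of restitution ("elasticity"), a made-up value
dt = 0.001  # time step in seconds

h, v = 2.0, 0.0             # drop from 2 m, starting at rest
for _ in range(5000):       # simulate 5 seconds
    v -= g * dt             # velocity accumulates acceleration: v = a*t
    h += v * dt             # height accumulates velocity
    if h <= 0.0:            # floor contact: bounce
        h = 0.0
        v = -e * v          # reverse and damp -- no separate "rise" formula needed

print(0.0 <= h < 2.0)  # True: the ball never climbs back above its drop height
```

Because the velocity is integrated continuously, the upward motion and the peak fall out of the same update rule, with no special-case "rewinding time" logic.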
"domain": "physics.stackexchange",
"id": 31034,
"tags": "homework-and-exercises, newtonian-mechanics, collision, projectile, free-fall"
} |
How do I measure loudness level of a device in environment? | Question: I want to measure the individual loudness level of a device in environment.
Firstly, I measure the loudness level when the device is ON (95.5 dB).
Then I measure the loudness level when the device is OFF (48.5 dB).
The engineer said that the device loudness level is 95.5 - 48.5 = 47 dB.
But I think it must be ~95.5 dB, because it still depends on many factors, such as the environment sound frequency and the device sound frequency...
Answer: Here's the math: If $\beta$ is the level in dB and I is the intensity (power/area) in watts per square meter, $$\beta=10\log\left(\frac{I}{10^{-12}}\right).$$
Inverting that we get that $$I=10^{-12}\cdot 10^{\beta/10}.$$
$$\beta = 95.5\ \mathrm{dB} \to I = 3.548\times 10^{-3}\ \mathrm{W/m}^2$$
$$\beta = 48.5\ \mathrm{dB} \to I = 7.079\times 10^{-8}\ \mathrm{W/m}^2$$
A reasonable assumption for non-coherent sound sources is that the intensities add, so subtracting the background intensity, we get that the device intensity is
$$I = 3.548\times 10^{-3}\ \mathrm{W/m}^2$$ which has a level of $95.5$ dB
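The same arithmetic in a few lines of Python (reference intensity $I_0 = 10^{-12}\ \mathrm{W/m^2}$):

```python
from math import log10

I0 = 1e-12  # reference intensity, W/m^2

def level_to_intensity(beta_db):
    return I0 * 10 ** (beta_db / 10)

def intensity_to_level(I):
    return 10 * log10(I / I0)

I_on = level_to_intensity(95.5)    # device ON:  ~3.548e-3 W/m^2
I_off = level_to_intensity(48.5)   # device OFF: ~7.079e-8 W/m^2
I_device = I_on - I_off            # incoherent sources: intensities add

print(round(intensity_to_level(I_device), 2))  # 95.5 -- not 47 dB
```

The background intensity is five orders of magnitude below the device's, so subtracting it barely moves the level: decibels subtract in intensity space, not in dB space.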
"domain": "physics.stackexchange",
"id": 40469,
"tags": "acoustics, measurements"
} |
qemu failure while making chroot for raspbian | Question:
Hi folks,
I tried to create a chroot environment in accordance with the ROS docs "groovy / installation / Raspbian" in Oracle VBox.
Host OS is debian-6.0.7-amd64 squeeze, 2 virtual CPU. Image file I dd'ed from SD card which has 2013-02-09 Raspbian wheezy (it's running OK on Raspberry).
Image mounted OK, I can see it under mounting point /mnt/sdb1/raspb, but when I run
sudo chroot /mnt/sdb1/raspb
qemu fails with
qemu: fatal: cp15 insn ee070fba
and then register dump...
any ideas how this can be fixed?
Thanks
Originally posted by tulumbas on ROS Answers with karma: 3 on 2013-03-03
Post score: 0
Original comments
Comment by tulumbas on 2013-03-09:
No ideas I guess
Answer:
Qemu chroot will not work on the latest Raspbian release. It does work on the release from December. You can still download it from here.
I will also add this information to the wiki since I had to find it out the hard way as well.
Originally posted by kalectro with karma: 1554 on 2013-03-10
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by tulumbas on 2013-03-14:
thanks for giving a clue - I've spent quite lot of time. Any ideas when fresh kernel will be supported? I appreciate you could be wrong person to ask, just wondering...
Comment by tulumbas on 2013-03-14:
Ah, found a branch of RasPi forum... | {
"domain": "robotics.stackexchange",
"id": 13159,
"tags": "ros, raspbian"
} |
What is the bond angle of water? | Question: I have been trying to find out the bond angle of $\ce{H2O}$, but every site I visit has a different answer.
So far, I have found the following angles listed:
Site 1: 104.4º
Site 2: 107.5º OR 104.5º, depending on where you are in the article.
Site 3: 104.5º
Answer: In your comment, you say you are interested in "liquid water".
The values in your self-answer are for gas phase water, and the second source is citing the first source, so it is really just one source. The value is determined from gas phase rotation spectroscopy.
For liquid water, the value is not as precisely known:
105.5° (calculated) and 106° (experimental) as reported in Structural, electronic, and bonding properties of liquid water from first principles J. Chem. Phys. 111, 3572. | {
"domain": "chemistry.stackexchange",
"id": 2598,
"tags": "bond, water, electrons, dipole"
} |
Debugging a bad topic subscription | Question:
Hi,
I just solve a situation I don't know how to debug. When subscribing to a topic, the prototype of my callback function was
void my_callback(const
std_msgs::String::ConstPtr& msg);
instead of
void my_callback(const
control_msgs::JointControllerState::ConstPtr& msg);
However, the code compiled fine and no error was reported at runtime. For future use, where should I look to get more debug info on topic subscriptions?
Originally posted by Stephane Caron on ROS Answers with karma: 5 on 2015-03-20
Post score: 0
Answer:
Actually, a WARNING will be logged at runtime on the publisher side:
E.g. if trying to publish a std_msgs/Empty to topic /camera/image where rosrun image_view image_view is listening:
rostopic pub /camera/image std_msgs/Empty -r 1
[WARN] [WallTime: 1426844976.659990] Could not process inbound connection: topic types do not match: [sensor_msgs/Image] vs. [std_msgs/Empty]{'topic': '/camera/image', 'tcp_nodelay': '0', 'md5sum': '060021388200f6f0f447d0fcd9c64743', 'type': 'sensor_msgs/Image', 'callerid': '/image_view_1426844958567455361'}
On the subscriber side, however, no Error/Warning is logged:
rosrun image_view image_view image:=/camera/image
[ INFO] [1426844958.672731010]: Using transport "raw"
So you have to look at the log msgs of the publisher.
Originally posted by Wolf with karma: 7555 on 2015-03-20
This answer was ACCEPTED on the original site
Post score: 3 | {
"domain": "robotics.stackexchange",
"id": 21183,
"tags": "ros, callback, topic"
} |
Why is an arrow pointing through a glass of water only flipped vertically but not horizontally? | Question: Look at the bottom picture for example. If I try to imagine the glass of water as a lens, shouldn't it be flipped in both axes? Why does it only flip in one axis?
Answer: Because it is a cylinder, which is a lens, along one axis, and homogeneous along the other. :)
As in @fectin's comment: if it were a sphere, rather than a cylinder, then it would flip through the central axis (as more-typical convex lenses do). | {
"domain": "physics.stackexchange",
"id": 96662,
"tags": "optics, everyday-life, geometric-optics, lenses"
} |
Scala Case Classes | Question: Please take a look at the following Scala program and give me suggestions for improvement. I'm sure there would be plenty. This is my very first Scala code, so please don't be frustrated because of its low quality.
abstract class Expression {
def eval() : List[List[String]] = this match {
case Identifier(token) => List(List(token))
case Union(exprs) => exprs.flatMap(e => e.eval)
case Sequence(exprs) => exprs.map(e => e.eval).reduceLeft(product)
case Iteration(min, max, expr) => {
val subResult = expr.eval;
(min to max toList)
.flatMap(card => List.fill(card)(subResult).foldLeft(List(List[String]()))(product))
}
}
def product(first: List[List[String]], second: List[List[String]]) : List[List[String]] = {
for { x <- first; y <- second} yield x ++ y
}
}
case class Identifier(token: String) extends Expression
case class Union(subExprs: List[Expression]) extends Expression
case class Sequence(subExprs: List[Expression]) extends Expression
case class Iteration(minCard: Int, maxCard: Int, subExpr: Expression) extends Expression
object App {
def main(args: Array[String]) = {
println(
Iteration(
1, 2,
Union(
List(
Identifier("cat"),
Sequence(
List(
Identifier("dog"),
Iteration(
0, 1, Identifier("pig")
),
Identifier("bird")
)
)
)
)
).eval
)
}
}
Answer: This is quite good. I have one suggestion, though:
Use traits instead of abstract classes, and since probably all your data of type Expression will be defined only in this file make it a sealed trait:
sealed trait Expression {
// same body
}
Sealing a trait (or abstract class for that matter) has the advantage that whenever you'll do a pattern match over a value the compiler can tell you if you omitted a case. Also, using a trait has two advantages over abstract classes:
traits can be used to express everything that an abstract class can, with little syntactic overhead (when expressing the equivalent of class parameters). While the converse is not true (you cannot inherit, or mixin, multiple abstract classes).
traits are a slight performance optimization, since for any non-abstract member of a trait, the compiler literally copies those definitions in the bodies of the subclasses (not that this optimization is ever truly useful, the knowledge of how the compiler works is more important though).
Second, as a response to all suggestions that you should use the OO style more, that's really a choice that depends on the situation. By using the functional design you leave yourself vulnerable to adding new data, i.e. whenever you add a new case class you have to update every pattern match, but adding new functionality does not require you to update any of the previously defined case classes. While in OO the opposite would be true. So choosing between the two styles is really a question about leaving your code open to easy extension with respect to new data (OO), or new functionality (functional). | {
"domain": "codereview.stackexchange",
"id": 12672,
"tags": "scala"
} |
Besides hemoglobin, what proteins are present in red blood cells? | Question: I knew that mature red blood cells (RBCs) lacked nuclei, but I wasn't aware until just now that they also lacked ribosomes and mitochondria. Most cells in the human body all contain a common laundry list of housekeeping proteins and RNAs (including mitochondrial proteins and ribosomal RNAs), but I guess RBCs lack a number of them. Do they still have all of the other organelles? Obviously hemoglobin (and to a lesser extent carbonic anhydrase) makes up a large portion of the dry weight of RBCs, but are other proteins still present? If so, what are their relative abundances?
For example, do red blood cells have any of the normal metabolic (i.e. ATP producing) proteins? Obviously they don't have any of the TCA cycle proteins, but do they still have the glycolysis ones?
Answer: The reticulocyte stage is when the ribosomes are still present; after that, no new protein synthesis occurs. However, RBCs have a lot of proteins, and the major proteins other than haemoglobin are cytoskeletal proteins and ion channels/pumps (in fact, cytoskeletal proteins are more abundant than haemoglobin). It is the Na+-K+-ATPase that consumes most of the ATP. As you correctly identified, the RBCs produce ATP via glycolysis, and glycolytic enzymes are also present. Note that deficiency of pyruvate kinase leads to haemolytic anaemia.
For details on the proteins present in human RBCs, see this paper. They have studied the RBC proteome by ion-trap MS. The top 5 proteins (from Table 1) are:
No. | Protein description | Molecular mass (Da) | Gi number | Sequence coverage (%) | No. of identified peptides
1 | Spectrin α chain, erythrocyte | 279,916.5 | 1174412 | 48.0 | 77*
2 | Spectrin β chain, erythrocyte | 246,468.1 | 17476989 | 48.0 | 76*
3 | Ankyrin 1, splice form 2 | 206,067.9 | 105337 | 45.0 | 55
4 | Ankyrin 1, isoform 4, erythrocytic | 203,416.6 | 10947036 | 45.0 | 50
5 | Ankyrin 1, isoform 2, erythrocytic | 189,011.2 | 10947042 | 46.0 | 48
Although in this table you cannot find glycolytic enzymes other than GAPDH and aldolase, other enzymes are also present; they are perhaps not detected in this experiment because of the overwhelming levels of structural proteins. You can check this old paper that shows a study of different glycolytic enzymes from erythrocytes. It is also to be noted that the glycolytic pathway flux is not as smooth as in other cells, so some accumulated metabolites are probably exported out of the cells to keep the flux smooth [ref Full text not found].
"domain": "biology.stackexchange",
"id": 3281,
"tags": "human-biology, biochemistry, hematology, red-blood-cell, human-physiology"
} |
How does adding a mass change fluid flow rate? | Question: I have this exercise:
I haven't had any trouble solving the first two parts, but regarding the third one, I was wondering if my reasoning is correct. I would say that for the first case in which the object floats, then $h_1$ would not change, but the pressure at A would. Thus, I calculated this new pressure knowing the mass and radius of the tank, and got the new speed at D. However, if the object sinks, the Pressure at A would not change but $h_1$ would.
Can I say anything else about this? Is it correct?
Any help will be greatly appreciated.
Answer: Consider how the mass of the weight will displace the fluid. How will that effect the level of the fluid? Will there be a difference in the level if 10kg floated on the surface or displaced fluid under the liquid level? | {
"domain": "physics.stackexchange",
"id": 40652,
"tags": "homework-and-exercises, fluid-dynamics, pressure, fluid-statics"
} |
Find out the second highest in array | Question: I want to find the first and second highest number in an array. I could come up with one solution, but I want to know the optimum solution to do the same. Can someone help me with some alternative solution to this?
int main()
{
int arr[10] = {0,1,2,13,4,5,9,8,11,6};
int first = arr[0];
int second = arr[0];
int i;
for(i=0;i<10;i++)
{
if(first < arr[i])
{
second = first;
first = arr[i];
}
else if(second < arr[i])
{
second = arr[i];
}
}
printf("First = %d\n", first);
printf("Second = %d\n", second);
return 0;
}
Output:
First = 13
Second = 11
Answer: There is a problem in your code. Assuming that input is [3, 2, 1], the program will work like this.
Set first and second as 3.
Iterate through elements noticing that nothing is larger than 3.
Claim that 3 is the second number.
To fix this, you can do something like this.
if (arr[0] < arr[1]) {
second = arr[0];
first = arr[1];
}
else {
second = arr[1];
first = arr[0];
}
for (i = 2; i < elems; i++) {
/* Your code */
}
Also, your program doesn't work well when NaN is involved in the first position. This probably doesn't really matter (currently this handles integers, not double floating point numbers), but it may still be relevant for you, as it would require some special code to handle NaN.
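For comparison, the same single-pass idea with the corrected initialisation can be sketched in Python (my own variant of the fix above, assuming the array has at least two elements):

```python
def top_two(arr):
    # initialise from the first two elements so the second-highest is never
    # wrongly reported as arr[0] (the [3, 2, 1] failure described above)
    first, second = (arr[0], arr[1]) if arr[0] >= arr[1] else (arr[1], arr[0])
    for x in arr[2:]:
        if x > first:
            first, second = x, first   # old maximum demotes to second place
        elif x > second:
            second = x
    return first, second

print(top_two([0, 1, 2, 13, 4, 5, 9, 8, 11, 6]))  # (13, 11)
print(top_two([3, 2, 1]))                          # (3, 2)
```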
"domain": "codereview.stackexchange",
"id": 21547,
"tags": "c"
} |
What does '%BZ' mean in materials science? | Question:
Also, for that matter, what does k_II mean?
Answer:
The abbreviation BZ stands for Brillouin Zone, almost universally.
The material depicted is not a uniform solid, and it has at least two components each with different sizes of unit cell and thus different sizes of Brillouin zone. The SiC subscript denotes that the axis is in units of the BZ for the bulk silicon carbide component.
The percentage sign means simply that $k$ is measured in percentage of the size of the SiC BZ.
The $||$ subscript indicates the parallel component of the vector $\vec k$. This presumably means the component parallel to the surface, but that could change depending on the details of the context provided by the text surrounding the figure. | {
"domain": "physics.stackexchange",
"id": 72900,
"tags": "material-science, notation, crystals, graphene, chemical-compounds"
} |
How to refactor / re-architect the components/state here in ui-router app | Question: https://plnkr.co/edit/bOZW1a9u62W1QA6cYjYj?p=preview
My goal has been to separate the Dashboard states from the Feed state.
Inside the Dashboard state these are the following views: Tickers, Tags, Social.
Changes in those 3 should not effect the Feed module. (However changes from the Feed will eventually need to update the other 3)
I finally achieved this by adding the <feed-module></feed-module> to the root of the index file. Right underneath the first <div ui-view></div>.
<!-- MAIN CONTENT -->
<div class="container">
<div ui-view></div>
<feed-module></feed-module>
</div>
This is of course not ideal, because (1): There should only be 1 ui-view on the index, and (2): This exposes the feedModule outside of the login state.
I have an idea of perhaps going from the login state to a container state that will hold the dash-module and feed-module together, but have gotten no success yet. You can see an attempt here.
Full code to top Plnkr link here:
// Feed module
////////////////////////////////////////////////////////////////////////////////
var feed = angular.module('feed', ['ui.router'])
feed.config(function($stateProvider) {
const feed = {
name: 'feed',
url: '/feed',
template: '<em>Feed items go here.</em>'
}
$stateProvider.state(feed);
})
feed.component('feedModule', {
templateUrl: 'feed-module-template.html',
controller: function($scope, $state) {
console.log('Feed init (only once)', $state.params);
}
})
// RouterApp module
////////////////////////////////////////////////////////////////////////////////
var routerApp = angular.module('routerApp', ['ui.router', 'feed']);
routerApp.config(function($stateProvider, $urlRouterProvider) {
$urlRouterProvider.otherwise('/login');
const login = {
name: 'login',
url: '/login',
templateUrl: 'login.html',
bindToController: true,
controllerAs: 'l',
controller: function($state) {
this.login = function() {
$state.go('dashboard', {});
}
}
}
const dashboard = {
name: 'dashboard',
url: '/dashboard',
params: {
ticker: {},
tags: {}
},
views: {
'' : {
templateUrl: 'dashboard.html',
},
'tickers@dashboard': {
templateUrl: 'tickers-module-template.html',
controller: function($scope, $state) {
console.log('Tickers init', $state.params);
$scope.tickers = [
{ id: 1, ticker: 'AAPL' },
{ id: 2, ticker: 'GOOG' },
{ id: 3, ticker: 'TWTR' }
];
$scope.clickTicker = function(ticker) {
console.log(' ')
console.log('Ticker clicked!')
$state.go('dashboard', { ticker: ticker });
}
}
},
'tags@dashboard' : {
templateUrl: 'tags-module-template.html',
controller: function($scope, $state) {
const tags_model = [
{
ticker: 'AAPL',
tags : [{ id: 1, term: 'iPhone 7' }, { id: 2, term: 'iPhone 8' }, { id: 3, term: 'Tim Cook' }]
},
{
ticker: 'GOOG',
tags : [{ id: 4, term: 'Pixel' }, { id: 5, term: 'Pixel XL' }, { id: 6, term: 'Chrome Book' }]
},
{
ticker: 'TWTR',
tags : [{ id: 7, term: 'tweet' }, { id: 8, term: 'retweet' }, { id: 9, term: 'moments' }]
}
];
function matchTags(ticker, model) {
return model.filter(function(obj){
if (obj.ticker === ticker) { return obj; }
});
}
$scope.tags_model = matchTags($state.params.ticker.ticker, tags_model)[0];
$scope.clickTag = function(tag) {
$state.go('tags', { tag: tag });
}
console.log('Tags init', $state.params);
// console.log(' Tags model', tags_model);
}
},
'social@dashboard' : {
templateUrl: 'social-module-template.html',
controller: function($state) {
console.log('Social init', $state.params);
}
}
}
}
$stateProvider
.state(login)
.state(dashboard);
});
Answer: I was able to accomplish this by creating a new module called container which you are redirected to from the login state.
https://plnkr.co/edit/ivpBrncRKAXxhoYlm6uY?p=preview
// Container module
////////////////////////////////////////////////////////////////////////////////
var container = angular.module('container', [ 'ui.router' ])
container.config(function($stateProvider) {
const container = {
name: 'container',
url: '/container',
templateUrl: 'app.container.html'
}
$stateProvider.state(container);
});
And the app.container.html looks like so:
<div>
<dashboard-module></dashboard-module>
<feed-module></feed-module>
</div>
I'm still having problems with sibling and parent -> child $state communication, but those are different questions. | {
"domain": "codereview.stackexchange",
"id": 24779,
"tags": "javascript, angular.js, user-interface, state-machine"
} |
Stellar mass of galaxies | Question: Given the magnitudes (in the i-band) of certain galaxies, I would like to calculate their stellar mass (in terms of solar masses). So far, I have calculated their absolute magnitudes and gotten to working out the mass-light ratio $M/L$ for each galaxy.
e.g. $M/L=0.563$
Values I have are the calculated $M/L$ for each galaxy, and the $i$-band apparent ($13.25$) and absolute ($-18.06$) magnitudes for the galaxy, as well as the distance ($18.44Mpc$).
From this I need to get the mass of the galaxy $M$ in terms of solar masses. Therefore I assume I first need to calculate the i-band luminosity of the galaxy in solar luminosities, $L_i$. This is where I am stuck.
However, once I have $L_i$ next step would be ...
$$M_g = 0.563 * L_i$$
Ultimately, given these values, how would I go about estimating the stellar mass of a galaxy in terms of solar masses?
Answer: The relation between absolute magnitude $M$ and luminosity $L$ for stars
$$\frac{L_{Star}}{L_{Sun}}=10^{(M_{Sun}-M_{Star})/2.5}$$
should also be appliable to i-band luminosities of galaxies.
Taking the absolute magnitude 4.08 of the sun on the I-band the luminosity of a galaxy with absolute magnitude −18.06 would be
$$\frac{L_{Galaxy}}{L_{Sun}}=10^{(4.08−(-18.06))/2.5}=0.7178\cdot 10^9.$$
An order of magnitude estimate for the mass of the galaxy would be
$0.563\cdot 0.7178\cdot 10^9=0.404\cdot 10^9$ solar masses.
But $M/L$ isn't necessarily biased relative to $M_i/L_i$ in the same way for the sun as for the galaxy.
Therefore you'll probably need to compare the spectrum of the sun with the spectrum of the galaxy to find out the ratios of the i-band fraction of the total emission. Absorption and extinction at different wavelengths may be different, therefore determining a kind of mean temperature of the stars in the galaxy could help in finding a more realistic estimate for the stellar mass.
A similar approach has been used in this paper.
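The computation above, collected into a short Python sketch (the solar i-band absolute magnitude 4.08 and the $M/L$ ratio 0.563 are the values quoted in the text):

```python
M_sun_i = 4.08       # absolute magnitude of the sun in the i-band
M_galaxy_i = -18.06  # absolute magnitude of the galaxy
ML_ratio = 0.563     # mass-to-light ratio from the question

# absolute magnitude -> luminosity ratio: L_gal / L_sun = 10^((M_sun - M_gal)/2.5)
L_ratio = 10 ** ((M_sun_i - M_galaxy_i) / 2.5)
stellar_mass = ML_ratio * L_ratio  # order-of-magnitude estimate, in solar masses

print(f"{L_ratio:.4g}")       # ~7.178e+08 solar luminosities
print(f"{stellar_mass:.3g}")  # ~4.04e+08 solar masses
```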
"domain": "astronomy.stackexchange",
"id": 455,
"tags": "mass, luminosity"
} |
Weekday+day validation | Question: The string I'd like to check is something like "abcSun24def". If any valid "xxxyy" (xxx = weekday and yy = day) is found, return the position inside the string. If "xxxyy" is not found, return -1.
The code works as desired, but I think it can be optimized.
/* -------------------------------------------------------------
FUNC : findxy (find pattern xxxyy)
xxx = weekday (e.g. "Mon01")
yy = day
roster specific formatting
PARAMS : c (char *), pointer to string
RETURNS : (int), if pattern found, pointer to found pattern in string c
-1 if pattern not found
REMARKS :
---------------------------------------------------------------- */
int findxy(char *c) {
const char *days[] = { "Sun", "Mon", "Tue", "Wed", "Thu", "Fri", "Sat"};
int i, j;
char bufw[4];
char bufd[3];
/* check if c is at least 5 chars long */
if (strlen(c) < 5)
return -1;
for (i = 0; i <= (int)strlen(c)-5; i++) {
memcpy(bufw, c+i, 3);
bufw[3]='\0';
/* check all 7 weekdays */
for (j = 0; j < 7; j++) {
/* find weekday matches */
if (!strcmp(bufw, days[j])) {
/* check if both chars following weekday are numerical */
if (isdigit(c[i+3]) && isdigit(c[i+4])) {
memcpy(bufd, c+i+3, 2);
bufd[2]='\0';
/* check if number after weekday is a valid day */
if (atoi(bufd) >= 1 && atoi(bufd) <= 31) {
return i;
}
}
}
}
}
return -1;
}
Answer: I see some things that may help you improve your code.
Use the required #includes
The code uses strlen and memcpy which means that it should #include <string.h>. It was not difficult to infer, but it helps reviewers if the code is complete. It's also an important part of the interface. I believe these are the required includes:
#include <string.h>
#include <stdlib.h>
#include <ctype.h>
Use const where practical
In your findxy routine, the passed string is never altered, which is just as it should be. You should indicate that fact by declaring it like this:
int findxy(const char *c)
Check for a null pointer
Things do not go well if the routine is passed a NULL pointer. On my machine, I get a segmentation fault and a crash. You can eliminate this hole by adding these lines near the top of the routine:
if (c == NULL) {
return -1;
}
Use better naming
The days array is well named because it's easy to guess from the name what it contains. Likewise i and j are commonly used as index variables as you have done in this code. However, findxy is a rather cryptic name for what this does and c is a poor name for the passed string. I'd recommend something like this instead:
int findWeekdayDate(const char *str)
Avoid copying if practical
It's not strictly necessary to make copies of portions of the passed string. With a bit of careful planning, it can be done in place. Here's one way to do it, although it's not very efficient:
int isValidWeekdayDate(const char *str) {
static const char *days[] = { "Sun", "Mon", "Tue", "Wed", "Thu", "Fri", "Sat", NULL }; /* NULL-terminated so the loop below can stop */
if (str == NULL || strlen(str) < 5) {
return 0;
}
for (const char **dayname = days; *dayname; ++dayname) {
char *pos = strstr(str, *dayname);
if (pos && pos == str) {
if (isdigit(pos[3]) && isdigit(pos[4])) {
int val = (pos[3]-'0') * 10 + (pos[4]-'0');
if (val > 0 && val <= 31) {
return 1;
}
}
}
}
return 0;
}
int findWeekdayDate(const char *str) {
for (const char *curr = str ; *curr; ++curr) {
if (isValidWeekdayDate(curr)) {
return curr-str;
}
}
return -1;
}
Use a finite state machine
We can create a much more efficient routine by creating a finite state machine. For a particular candidate string, we note that the first character must be one of {'F', 'M', 'S', 'T', 'W'}. If it is not one of those, then the candidate string can be immediately rejected. Now let's say the first character is 'S'. In that case the second character must be one of {'a', 'u'}. We can proceed like this, one character at a time to create a finite state machine. Here's a visualization of such a state machine:
This is how compiler tools like flex and bison and lex and yacc work. The code is very efficient but might not be as easy to understand, so it's a tradeoff you should be aware of.
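To make the idea concrete, here is a toy version of that state machine in Python (a hand-rolled trie of my own, not the output of flex/bison; a C version would use a lookup table of states instead):

```python
# A trie over the seven day names, walked one character at a time,
# followed by the two-digit day check.
DAYS = ["Sun", "Mon", "Tue", "Wed", "Thu", "Fri", "Sat"]

trie = {}
for day in DAYS:                    # build the character-by-character states
    node = trie
    for ch in day:
        node = node.setdefault(ch, {})
    node["accept"] = True           # reaching here means a full day name matched

def find_weekday_date(s):
    for i in range(len(s) - 4):
        node, j = trie, i
        while j < len(s) and s[j] in node:  # advance through trie states
            node = node[s[j]]
            j += 1
        if node.get("accept") and j + 2 <= len(s) \
                and s[j].isdigit() and s[j + 1].isdigit():
            if 1 <= int(s[j:j + 2]) <= 31:
                return i
    return -1

print(find_weekday_date("abcSun24def"))  # 3
print(find_weekday_date("abcSun99def"))  # -1
```

Each input character is examined at most a handful of times, and candidate positions are rejected as soon as a character falls outside the current state's transitions.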
A very pedantic note
Strictly speaking, this line:
int val = (pos[3]-'0') * 10 + (pos[4]-'0');
is guaranteed to be portable. The C standard requires that encoding of digits is contiguous, so this will work with any character encoding, including EBCDIC, Unicode and ASCII. | {
"domain": "codereview.stackexchange",
"id": 23549,
"tags": "c, datetime, validation"
} |