| anchor | positive | source |
|---|---|---|
Increase the speed of this Caesar Cipher | Question: Here's my attempt at a Caesar Cipher encoder/decoder.
If given a key, it will encrypt the given string. However, if you do not specify a key, it checks each of the 26 possible keys and returns the one with the highest percentage of words that appear in this file of English words (with a couple additions including 'a' and 'I'). The program also returns the 'assurance' in its choice; for a specified key it is always 100%, for an unspecified key it is the percentage of words that are in English.
How can I improve the speed of the program? I have gone through 5 versions and improved the method that I use to decrypt each time, but now I would like to improve the speed. As I am a beginner, I am not very sure on how to do this.
The code is fairly readable, I think.
CaesarCipher_v5.py [47 lines]
# CaesarCipher_v5.py
#
# CSTAICH 2014
from string import maketrans, ascii_uppercase, ascii_lowercase
import re
from operator import itemgetter
from itertools import chain
input = ''
key = ''
# variable input and processing
while input == '': input = raw_input('Input ciphertext: ')
if key == '': key = raw_input('Input key; leave blank for auto-detection: ')
if key != '':
    key = int(key)
    if key >= 26: raise Exception("invalid key, must be [0-25]")
alphabet = list(chain(*zip(ascii_uppercase, ascii_lowercase)))
english_words = re.sub(r'\r', '', open('sowpods.txt', 'r').read().lower()).split('\n')
# =================================-------------------------------
def key_shift(input, key):  # shifts an input by a key number of characters
    return input.translate(maketrans(str(alphabet), str(alphabet[key * 2:] + alphabet[:key * 2])))

def english(sentence):  # returns percentage of words in input that are english words
    sentence = re.sub(r'[?,.!:;/]', '', sentence).split(' ')  # strip punctuation and split into words
    number_english_words = 0
    number_words = len(sentence)
    for word in sentence:
        if word.lower() in english_words: number_english_words += 1
    return round(number_english_words / float(number_words) * 100, 2)
# begin body
if key != '':  # behavior for defined-key shift
    output = key_shift(input, key)
    assurance = '100%'
else:  # behavior for non-defined-key shift
    options = [key_shift(input, s) for s in xrange(1, 27)]  # list of 26 options for shift
    options_dict = zip(options, [english(s) for s in options])  # list of tuples: (option, percent eng)
    output, assurance = max(options_dict, key=itemgetter(1))[0:2]
print ' :: '.join([str(output), str(assurance) + '%'])
Here are a couple interactions with the code:
$ python CaesarCipher_v5.py
Input ciphertext: Here is some plain English that I would like to translate over by, let's say... 17 characters? Sound good?
Input key; leave blank for auto-detection: 17
Yviv zj jfdv gcrze Vexczjy kyrk Z nflcu czbv kf kirejcrkv fmvi sp, cvk'j jrp... 17 tyrirtkvij? Jfleu xffu? :: 100%%
$ python CaesarCipher_v5.py
Input ciphertext: Yviv zj jfdv gcrze Vexczjy kyrk Z nflcu czbv kf kirejcrkv fmvi sp, cvk'j jrp... 17 tyrirtkvij? Jfleu xffu?
Input key; leave blank for auto-detection:
Here is some plain English that I would like to translate over by, let's say... 17 characters? Sound good? :: 89.47%
$ python CaesarCipher_v5.py
Input ciphertext: Yberz vcfhz qbybe fvg nzrg, pbafrpgrghe nqvcvfpvat ryvg, frq qb rvhfzbq grzcbe vapvqvqhag hg ynober rg qbyber zntan nyvdhn. Hg ravz nq zvavz iravnz, dhvf abfgehq rkrepvgngvba hyynzpb ynobevf avfv hg nyvdhvc rk rn pbzzbqb pbafrdhng. Qhvf nhgr veher qbybe va erceruraqrevg va ibyhcgngr iryvg rffr pvyyhz qbyber rh shtvng ahyyn cnevnghe. Rkprcgrhe fvag bppnrpng phcvqngng aba cebvqrag, fhag va phycn dhv bssvpvn qrfrehag zbyyvg navz vq rfg ynobehz
Input key; leave blank for auto-detection:
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum :: 36.23%
That last one was an experiment to see if it could pick out Latin. Turns out it can simply because short, two- or three-letter words are often shared between languages even if they do not have the same meaning. Even though the assurance was only 36.23%, the next best was 13.04% and many of the keys returned only one or two recognized words.
Answer: Your code would be much clearer if split into multiple functions, with clear inputs and outputs, e.g.:
def encode(plaintext, key):
    ...

def decode(ciphertext, key=None, dictionary=None):
    ...

def get_int_input(prompt, max_=26):
    ...
This is much better than relying on scope for access to the objects your functions need.
You could then have a loop at the end to run it all, something like:
if __name__ == '__main__':
    english_words = create_dictionary()
    while True:
        choice = get_int_input("1. Encode\n2. Decode\n3. Exit\n", 3)
        if choice == 1:
            plaintext = raw_input("Enter the plain text: ")
            key = get_int_input("Enter the key: ", 26)
            print encode(plaintext, key)
        elif choice == 2:
            ciphertext = raw_input("Enter the cipher text: ")
            key = None
            if raw_input("Do you know the key? (y/n) ").lower() == "y":
                key = get_int_input("Enter the key: ", 26)
            plaintext, match = decode(ciphertext, key, english_words)
            print "{0} :: {1:.2f}".format(plaintext, match)
        else:
            break
This improves the reusability of your code by allowing you to import the functions elsewhere without actually running the interactive loop.
In terms of speed, one obvious improvement would be to use a set, which provides very fast membership testing using hashing, for your dictionary. Also, you seem to be building it in an awkward way, try:
def create_dictionary(filename='sowpods.txt'):
    with open(filename) as f:
        return set(line.strip().lower() for line in f)
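To sketch how the set pays off end to end, here is a compact decode in the same spirit (an illustrative rewrite, not the original program: `shift`, `decode`, and the tiny `words` set are stand-ins for the real functions and the sowpods list):

```python
from string import ascii_lowercase

def shift(text, key):
    # Map each letter forward by `key` positions; non-letters pass through.
    # Case is folded to lowercase to keep the sketch short.
    table = {c: ascii_lowercase[(i + key) % 26] for i, c in enumerate(ascii_lowercase)}
    return ''.join(table.get(c.lower(), c) for c in text)

def decode(ciphertext, words):
    # Try all 26 shifts; score each candidate by the fraction of its words
    # found in the set -- an O(1) average-case membership test per word.
    def score(text):
        tokens = [w.strip('?,.!:;/') for w in text.split()]
        return sum(w in words for w in tokens) / float(len(tokens))
    best = max((shift(ciphertext, k) for k in range(26)), key=score)
    return best, round(score(best) * 100, 2)

words = {'here', 'is', 'some', 'text', 'a', 'i'}   # stand-in for the sowpods set
print(decode('Ifsf jt tpnf ufyu', words))          # ('here is some text', 100.0)
```

Because set lookups are hashed, scoring all 26 candidates stays cheap even against the full word list.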
As you can now loop multiple times, you could also think about storing the translations in a dictionary {key: translation}, so you only build them once. | {
"domain": "codereview.stackexchange",
"id": 9238,
"tags": "python, performance, python-2.x, caesar-cipher"
} |
Hamiltonian formalism and the phase space | Question: In my book, it says that Hamilton's equations of motion are equations of the first order in the time and that they describe the motion of the system in the $2S$-dimensional phase space.
Could someone explain clearly what this means, and what exactly a phase space is?
Answer: If we have a set of $S$ generalized coordinates $q_i$ along with the corresponding conjugate momentum
$$p_i=\frac{\partial L}{\partial \dot {q_i}}$$
then we can obtain the Hamiltonian
$$H=\sum_ip_i\dot q_i-L$$
And the following equations can be obtained:
$$\dot p_i=-\frac{\partial H}{\partial q_i}$$
$$\dot q_i=\frac{\partial H}{\partial p_i}$$
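For a concrete feel for these two equations, here is a minimal numerical sketch (unit mass, spring constant and step size are arbitrary choices) that integrates them for a one-dimensional harmonic oscillator with $H = p^2/2m + kq^2/2$:

```python
import math

# Hamilton's equations for H = p**2/(2m) + k*q**2/2:
#   qdot =  dH/dp = p/m
#   pdot = -dH/dq = -k*q
m, k, dt = 1.0, 1.0, 1e-3
q, p = 1.0, 0.0                          # starting point in the (q, p) plane
H0 = p**2 / (2 * m) + k * q**2 / 2
for _ in range(int(2 * math.pi / dt)):   # roughly one oscillation period
    p -= k * q * dt                      # pdot = -dH/dq (semi-implicit Euler)
    q += p / m * dt                      # qdot =  dH/dp, using the updated p
H1 = p**2 / (2 * m) + k * q**2 / 2
print(abs(H1 - H0))                      # tiny: energy is (nearly) conserved
```

The trajectory traces a closed ellipse in the $(q,p)$ plane, i.e. a curve in the two-dimensional phase space of this $S=1$ system.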
The phase space is just a term used to describe all "coordinates" $\left(q_1,q_2,...,q_S,p_1,p_2,...,p_S\right)$. Therefore, the phase space consists of $2S$ dimensions ($S$ generalized coordinates and $S$ conjugate momenta), and we have $2S$ first order equations in time to describe our trajectories in this phase space. | {
"domain": "physics.stackexchange",
"id": 53782,
"tags": "classical-mechanics, coordinate-systems, hamiltonian-formalism, phase-space, poisson-brackets"
} |
Why table-driven LL (1) parser does not work with division and subtraction? | Question: Everywhere, one grammar is used as an example for a table-driven LL(1) parser.
Grammar
S -> E | (epsilon)
E -> TE'
E' -> +TE' | (epsilon)
T -> FT'
T' -> *FT' | (epsilon)
F -> NUM | (E)
With this grammar you can only add and multiply. I wanted a little more, so I added subtraction and division operations.
my Grammar
S -> E | (epsilon)
E -> TE'
E' -> +TE' | -TE' | (epsilon)
T -> FT'
T' -> *FT' | /FT' | (epsilon)
F -> NUM | (E)
Parse table
|-------------------------------------------------------------------------------------|
| | NUM | + | - | * | / | ( | ) | $ |
|-------------------------------------------------------------------------------------|
| S | S->E | | | | | S->E | | S->e |
|-------------------------------------------------------------------------------------|
| E | E->TE' | | | | | E->TE' | | |
|-------------------------------------------------------------------------------------|
| E' | | E'->+TE'| E'->-TE'| | | | E'->e | E'->e |
|-------------------------------------------------------------------------------------|
| T | T->FT' | | | | | T->FT' | | |
|-------------------------------------------------------------------------------------|
| T' | | T'->e | T'->e | T'->*FT'| T'->/FT'| | T'->e | T'->e |
|-------------------------------------------------------------------------------------|
| F | F->NUM | | | | | F->(E) | | |
|-------------------------------------------------------------------------------------|
But I have a problem: if I use this grammar to build a parse tree for the string 6*3/2, it turns out that the first operation is division and then the multiplication operation. I do not know why this is happening. Maybe this is because I have the wrong grammar, or I'm doing something wrong. Help me.
Answer: If you naively generate a parse tree from
$$\begin{align}T\to& FT'\\
T'\to& *FT' \\
\mid&\; /FT' \\
\mid&\; \epsilon\\
\end{align}$$
then you're going to end up with something like this:
        T
       / \
      /   \
     /     \
    F       T'
    |      /|\
    6     / | \
         /  |  \
        *   F   T'
            |  /|\
            3 / | \
             /  |  \
            ÷   F   T'
                |   |
                2   ε
What you really want is this:
          T
         /|\
        / | \
       /  |  \
      T   ÷   F
     /|\      |
    / | \     2
   /  |  \
  T   *   F
  |       |
  F       3
  |
  6
But that belongs to a different, left-recursive grammar:
$$\begin{align}T\to& T * F\\
\mid&\; T / F\\
\mid&\; F
\end{align}$$
To use the second grammar, you'd need to use a different parsing technique (not the end of the world :-). To get the second parse tree, you need to do a little surgery while constructing it (or in a post-parse tree-walk, but that seems like overkill).
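One way to do that surgery is to replace the $T'$ recursion with a loop that folds each operator leftwards as it arrives; a sketch in Python (the `parse_term` helper and its token-list input are inventions for this example, not part of the table-driven parser):

```python
def parse_term(tokens):
    # T -> F T' with T' -> *FT' | /FT' | epsilon, but iterated: each time an
    # operator appears, the tree built so far becomes the *left* child, which
    # produces the left-associative shape of the second grammar.
    left = tokens.pop(0)                  # F -> NUM (numbers only, for brevity)
    while tokens and tokens[0] in ('*', '/'):
        op = tokens.pop(0)
        right = tokens.pop(0)
        left = (op, left, right)          # the "surgery": fold leftwards
    return left

print(parse_term(['6', '*', '3', '/', '2']))   # ('/', ('*', '6', '3'), '2')
```

The multiplication now sits below the division for 6*3/2, matching the left-recursive grammar's parse tree.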
Because LL grammars cannot deal with left-recursive grammars, they cannot directly represent left-associative operators (which is most operators). Without left-recursion, you only have one way to represent a repetition, which means you have to use the same style to represent both left-associative and right-associative operators. That's not an issue in practice -- you certainly know the associativity of all operators in your language -- but you might find it annoying that you need to annotate the parser in order to get the right parse. If you use a bottom-up parsing technique, this annoyance vanishes. | {
"domain": "cs.stackexchange",
"id": 15642,
"tags": "parsers"
} |
ROS navigation AMCL with extended kalman filter | Question:
Hi! I am using the ROS navigation stack with my robot. I have odometry information from the wheel encoders, and a laser range finder. I am currently using AMCL to account for odometry drift. I was wondering if it is beneficial to use an Extended Kalman Filter (EKF node) that takes in information from the wheel odometry and publishes out "odometry_filtered", which I send to AMCL and the move_base node? The odometry isn't that good with just AMCL. The robot turns left and right a lot since AMCL corrects its pose because of the odometry drift.
Originally posted by SigurdRB on ROS Answers with karma: 3 on 2017-05-20
Post score: 0
Answer:
robot_localization provides a nicely configurable Kalman Filter implementation and is probably the most frequently used package for such tasks. That being said, if you only have wheel odometry available do not expect major improvements, as KF-based estimators are most useful for fusing multiple different sources of information. A common setup would be fusing angular yaw rates from an IMU with linear velocities from wheel odometry.
If you just send wheel odometry to an EKF and nothing else, then by adjusting the process and measurement noise you could add some smoothing/adjust for noise, but to see noticeable improvements, adding for instance an IMU to measure angular rates is recommended.
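To see why fusing two rate sources helps, the heart of such a filter can be boiled down to a single scalar Kalman update (a toy sketch with invented numbers, not robot_localization's actual code):

```python
def fuse(est, var_est, meas, var_meas):
    # One scalar Kalman update: blend a prediction (e.g. wheel odometry yaw
    # rate) with a measurement (e.g. IMU yaw rate), weighted by variances.
    gain = var_est / (var_est + var_meas)
    fused = est + gain * (meas - est)
    fused_var = (1 - gain) * var_est
    return fused, fused_var

# Wheel odometry says 0.10 rad/s (noisy), the IMU says 0.14 rad/s (cleaner).
rate, var = fuse(0.10, 0.04, 0.14, 0.01)
print(rate, var)   # ~0.132, 0.008: pulled toward the lower-variance IMU reading
```

The fused variance is smaller than either input's, which is the whole point of fusing.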
Originally posted by Stefan Kohlbrecher with karma: 24361 on 2017-05-20
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 27949,
"tags": "navigation, ekf, odometry, extended-kalman-filter, kalman"
} |
MySQL data fetching without page refresh | Question: I've recently finished a prototype for a little Raspberry Pi website. The main page of the site displays the current users found in the room (through Bluetooth). I wanted this list updated regularly from data in a MySQL table, so no refresh is needed. When someone walks into or out of the room, the webpage reflects it almost instantly.
This is the solution I created:
index.html
<html>
<head>
<!-- This page uses jQuery to insert PHP files into HTML divs -->
</head>
<body>
<div class="list-group">
<a href="#" class="list-group-item active">
<h4 class="list-group-item-heading"><u>Present:</u></h4>
<div class="list-group" id="list1">
<!-- php will be injected here, and it will create html -->
</div>
</a>
</div>
<div class="list-group">
<a href="#" class="list-group-item active">
<h4 class="list-group-item-heading"><u>Absent:</u></h4>
<div class="list-group" id="list2">
<!-- php will be injected here, and it will create html -->
</div>
</a>
</div>
</body>
<script>
<!-- references :) -->
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.3/jquery.min.js"></script>
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/css/bootstrap.min.css">
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/js/bootstrap.min.js"></script>
var timer = setInterval(listLoad, 1000);
<!-- Every 1s this function is called... -->
function listLoad(){
    $(function(){
        $("#list1").load("herelist.php");
        $("#list2").load("notherelist.php");
    });
}
</script>
</html>
herelist.php
<!DOCTYPE html>
<html>
<body>
<?php
$servername = "localhost";
$username = "xxxx";
$password = "xxxx";
$dbname = "xxxx";
// Create connection
$conn = new mysqli($servername, $username, $password, $dbname);
// Check connection
if ($conn->connect_error) {
    die("Connection failed: " . $conn->connect_error);
}
$sql = "SELECT firstname, lastname FROM room_Data WHERE attendance = 1";
$result = $conn->query($sql);
if ($result->num_rows > 0) {
    // output data of each row
    echo("<div id=\"list1\">");
    while ($row = $result->fetch_assoc()) {
        echo ("<li><h4 class=\"list-group-item-heading\">" . $row["firstname"] . " " . $row["lastname"] . "</h4></li>");
    }
    echo("</div>");
} else {
    echo ("");
}
$conn->close();
?>
</body>
</html>
notherelist.php
Same as herelist.php, just the where clause is = 0 instead of = 1
This project isn't for some major production scale, which is why I don't really mind hitting my MySQL server every second requesting a read. I actually like this solution a lot since it was my first time ever injecting PHP through jQuery and I thought it was a neat idea.
Is this a good solution for a personal project/school project? Or, is there something I should look into to improve this?
Answer:
Personally, I hate having a setInterval in JS code, especially to fetch live updates. I hate it even more when it is used for hitting the server for a database read/write operation, irrespective of the size of the project.
The code you have is quite good, considering that it was your first time. There are quite a few suggestions though. Read on:
Instead of having 2 separate files to fetch data of attendance = 0 and attendance = 1, use a single file with a parameter passed via a GET or POST request.
Instead of dumping the entire data as HTML, I'd suggest outputting the results as JSON so that it might be of use to other applications, without having to resort to HTML parsers. This helps if you think/plan on providing an API for other users to develop on.
Since the data for room_data gets updated with an underlying python application, you can modify it to write the output to a static JSON file and hit this JSON content instead of executing a MySQL query every second. This will help as the browser will get a 304 response status from the server if the JSON was not updated since last fetch. Caching FTW ^_^
Put the external script/stylesheets in head.
Since all you need for the MySQL to return is concatenated name string, do so in MySQL itself:
SELECT CONCAT(firstname, ' ', lastname) AS 'name'
FROM room_Data
WHERE attendance = :something
Do not use h4 tags for list items.
If you follow (1) above, you won't need the (3). I strongly recommend using (3) though. | {
"domain": "codereview.stackexchange",
"id": 16208,
"tags": "javascript, php, jquery, html, mysql"
} |
How to reload visualisation_canvas without exit prolog shell? | Question:
I followed KnowRob_basics.
I found when I run
visualisation_canvas(C).
I only can run it once.
if I run again, it just shows:
?- visualisation_canvas(C).
C = @'J#00000140499119889776'.
?-
How to call visualisation_canvas again without restart prolog?
Thank you~
Originally posted by sam on ROS Answers with karma: 2570 on 2012-08-07
Post score: 0
Answer:
Actually, there is a way to kill the old canvas and start a new one, but it's a bit hacky (which does not have to be bad, though...)
visualisation_canvas(C).
retractall(mod_vis:v_canvas(_)), assert(mod_vis:v_canvas(fail)).
visualisation_canvas(_).
cheers
moritz
Originally posted by moritz with karma: 2673 on 2012-08-07
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by sam on 2012-08-08:
Thank you~~ In my case I should run: retractall(mod_vis:v_canvas(C)), assert(mod_vis:v_canvas(fail)).
visualisation_canvas(C). It works!! | {
"domain": "robotics.stackexchange",
"id": 10509,
"tags": "knowrob"
} |
What is "quantization"? Give one example | Question: I just want to know the definition/explanation of quantization in layman's terms. Also an example would be very helpful if provided (not necessary).
Answer: Before quantum theory we had classical theories. There was no notion of energy in particles being stored in "a lump". Instead, classical theories (in general) allow energy to be split up into arbitrary parts.
Quantum theories don't make that assumption. It turns out that to match what we see in experiments you have to assume that energy comes in lumps (quanta). Changes in energy can't be arbitrary but have to obey rules. Stable systems (like a Hydrogen atom) have specific energy levels and can only switch from one level to another, not from any energy to any other energy. This is why you get spectral lines, not a continuous distribution - they're changes in discrete energy levels.
Classical theories don't predict that.
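The hydrogen example can be made concrete with the textbook level formula $E_n = -13.6\,\text{eV}/n^2$: the allowed jumps between levels fix the photon energies, and hence the spectral lines (a small sketch using standard constants):

```python
h_ev = 4.135667696e-15     # Planck constant in eV*s
c = 2.99792458e8           # speed of light in m/s

def level(n):
    # Bohr/quantum energy of hydrogen level n, in eV
    return -13.6 / n ** 2

delta_e = level(3) - level(2)            # the quantum released by the 3 -> 2 jump
wavelength = h_ev * c / delta_e * 1e9    # photon wavelength in nm
print(round(wavelength))                 # ~656: the red Balmer (H-alpha) line
```

A classical model would allow any energy release here; the fixed level spacing is what makes the line sharp.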
Quantization (in theoretical physics) covers any mathematical method of developing a theory which incorporates quantum theoretical ideas from a classical concept.
The most dramatic might be Quantum Electrodynamics (QED) which is developing a theory for the electromagnetic field from quantum ideas. The resulting theory can get us back to Maxwell's laws (which are purely classical). QED is horrendously complex for a layman (and not at all easy for anyone else).
The grand-daddy of all quantization models was Einstein's explanation of the photoelectric effect. That's what he got the Nobel prize for, by the way, not relativity. To explain the observations required a model that required energy to be passed in discrete quanta, not like previous classical models. | {
"domain": "physics.stackexchange",
"id": 44525,
"tags": "quantum-mechanics, discrete, quantization"
} |
Water Jet Cutter and Laser | Question: I watched a video of "glass cutting" which uses a water jet cutter. It was said that cutting glass simply by machines would eventually crack it... so they're using grains of sand (by placing sandpaper beneath) to cut glass using a water jet. It also has enough force to cut through steel.
What exactly is the property of GLASS so that it requires a "Water Jet" to cut it..?
Also, Why can't we use a powerful laser and some kinda coolant to cut through glass..?
Answer: Cutting glass with a water jet uses grains of hard material (such as sand, but occasionally harder minerals or diamond) to grind away at the glass one particle at a time. The water jet is just used to carry the sand to the glass at high speed and remove it and the eroded glass. As the poster says, it also cools the glass and so prevents cracking.
Cutting glass with a laser is harder. Laser cutting in most materials involves heating a region until it melts or evaporates - unless you have a material where there is a very strong absorption for the laser wavelength and you can use very short pulses to break bonds efficiently.
So in glass or metal this means a lot of energy is absorbed by the surrounding material - which will heat up and expand - and in the case of glass, crack. | {
"domain": "physics.stackexchange",
"id": 4564,
"tags": "water, laser"
} |
Is there an explicit connection between rolling-shutter images of rotating propellers and interference patterns with optical vortices? | Question: The rolling shutter effect is a neat fact of the geometry of modern CCD cameras and how they interact with objects that move faster than the camera can handle, and it's been beautifully explained by a couple of youtube videos, one at SmarterEveryDay (with a cool behind-the-scenes video to back that up) and one at standupmaths.
These videos provide what I think is a good deal of a breakthrough on how we think about what we can do with the rolling-shutter effect, and the techniques they pioneer let you take (or simulate) exceptionally clean pictures like this one:
This is a simulated rolling-shutter picture of a rotating four-blade propeller, but what I notice is that it is incredibly close to the interference pattern that you get if you superpose an optical vortex with a plane wave, which can look kind of like this:
So, given this uncanny resemblance: can this similarity be traced to some deeper analogy between the mathematical description of the two phenomena? If so, how?
Answer: Yes indeed, there is a connection. And, as one can imagine, it is a geometrical one.
To make the connection, one needs to define three-dimensional spaces for the two scenarios. For the optical vortices, the space is simply the normal three-dimensional space, represented by the $x$, $y$ and $z$ coordinates, and we'll assume that the beam propagates in the $z$-direction. The phase factor of the vortex beam, which defines the wavefronts (surfaces of constant phase), is given (in cylindrical coordinates) by
$$ \psi_{\rm vort} = \exp( i\ell\phi-i k z) , $$
where $\ell$ is the order of the vortex (azimuthal index). For the figures in the question above $\ell=4$.
For the rotating four-blade propeller, one replaces the $z$-coordinate with time. The number of blades takes over the role of the azimuthal index $\ell$. In the process, we assume that the thickness of the propeller in its $z$-direction is of such a nature that it does not play a significant role in the observed pattern.
We'll start by describing the situation for the optical vortex first. In three dimensions, the wavefront of the optical vortex beam describes a (higher order) helical surface - single helix for first order vortex; double helix for second order vortex; and so forth. This can be expressed by
$$ \phi-\frac{k z}{\ell}={\rm constant}. $$
To observe the interference pattern shown in the figure above, one needs to let the vortex beam interfere with another beam - a reference beam - typically a plane wave. The fringes will only appear as in the figure, if this plane wave is tilted with respect to the plane of observation, which is perpendicular to the propagation direction. (The tilt needs to be larger than the largest tilt in the helical surface.) Otherwise the fringes will form spirals, which we don't observe. So the plane wave would have the expression
$$ \psi_{\rm pw} = \exp[i (k_y y + k_z z)] . $$
So now the planar wavefronts of the plane wave slice through the helical wavefronts of the vortex beam. Every point where these two beams are in-phase produces constructive interference, leading to a high intensity. The image in the figure only shows the intensity pattern of this interference in a particular plane, at say $z=0$:
$$ {\rm intensity}_{z=0} = |\psi_{\rm vort}+\psi_{\rm pw}|^2 = \frac{1}{2} + \frac{1}{2} \cos( \ell\phi- k_y y) . $$
So the fringes are observed for
$$ \ell\phi- k_y y = {\rm constant} . $$
Now for the propeller. Here the motion of the propeller also produces a helix in the three-dimensional space that we defined (where $z$ is replaced by time):
$$ \phi-\frac{t\omega}{\ell}={\rm constant}, $$
where $\omega$ is the rotation speed. Moreover, the rolling shutter defines planar surfaces in this three-dimensional space that are tilted with respect to a plane of constant time.
$$ y + v t={\rm constant}, $$
where $v$ is the shutter speed. So this is exactly analogous to the plane wave. In the image, one would only see the red of the propeller if the shutter-opening coincided with the location of a blade of the propeller. This is analogous to the constructive interference between the helical wavefront and the planar wavefronts. Again we only see one frame of this movie. Hence, a slice of the three-dimensional space for a fixed value of the time, say at $t=0$ (taking care to match the dimensions):
$$ \ell\phi - \frac{\omega}{v} y = {\rm constant}. $$
As a result, the two scenarios have precisely the same geometrical construction, provided that we replace the spatial propagation direction ($z$-direction) for the optical vortex beam with the time-dimension in the case of the propeller.
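The correspondence can also be checked numerically: once the tilt is matched, $k_y = \omega/v$, any point on the locus $\ell\phi - k_y y = \text{const}$ is simultaneously a bright fringe and a blade position (a sketch with invented parameter values):

```python
import math

l, omega, v = 4, 10.6, 1.0   # blade count / vortex order, rotation rate, shutter speed
k_y = omega / v              # plane-wave tilt chosen to match the shutter

for y in [0.1, 0.4, -0.3, 0.8]:
    phi = (k_y * y) / l                          # a point on the locus l*phi - k_y*y = 0
    intensity = 0.5 + 0.5 * math.cos(l * phi - k_y * y)   # fringe brightness there
    t = y / v                                    # the instant the shutter exposes row y
    blade_phase = math.cos(l * phi - omega * t)  # 1.0 exactly when a blade sits at phi
    print(intensity, blade_phase)                # both 1.0: bright fringe == blade
```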
EDIT (by Frobenius under flippiefanus' permission)
An image of the rolling shutter effect for a rotating four-blade propeller was produced (with GeoGebra software, using the tools ''Animation On'' and ''Trace On''). On this image (red color), the fringe curves of the optical vortex-plane wave interference (blue color) were superimposed according to the above equation $\:\ell\,\phi\!-\!k_{y}\,y=c\:$, for $\:\ell=4=\text{number of blades}\:$, $\:k_{y}=-10.6\:$ and three values of the constant c $=0,-6.2,-12.8$.
Note that the previous equation in cartesian $\:x,y−$coordinates is
$$
x=\dfrac{y}{\tan\left(\dfrac{k_{y}\,y\!+\!c}{\ell}\right)}
$$ | {
"domain": "physics.stackexchange",
"id": 42922,
"tags": "optics, kinematics, vortex"
} |
What is unsolved in CP Violation? | Question: I learnt about CP Violation in my Quantum Information Theory and Particle Physics courses. However I have not understood what exactly has not been solved yet about CP Violation. I thought that the standard model is able to predict CP Violation perfectly well.
Answer: The standard model does indeed describe/predict CP violation in the quark sector perfectly well (dammit!) But it does not explain why the universe we're in is full of matter and empty of antimatter. | {
"domain": "physics.stackexchange",
"id": 48915,
"tags": "particle-physics, cp-violation"
} |
Error while parsing the URDF file | Question:
While making a URDF file, when I run ./bin/parser my_robot.urdf I get the following error:
[ERROR] [1379365363.499555623]: Could not open file [my_robot.urdf] for parsing.
[ERROR] [1379365363.499662656]: Failed to parse urdf file
Please tell me how to solve it.
Originally posted by pavanpatel on ROS Answers with karma: 61 on 2013-09-16
Post score: 1
Original comments
Comment by gustavo.velascoh on 2013-09-16:
Please, explain more detailed your problem in the description, not in the title of your question.
Comment by gustavo.velascoh on 2013-09-16:
What distro of ROS are you using?
Answer:
There is a dot missing in the second tutorial.
From your workspace, run: ~/catkin_ws$ ./devel/lib/testbot_description/parser ./src/testbot_description/urdf/my_robot.urdf
It worked for me.
Originally posted by jaimerv with karma: 96 on 2014-02-07
This answer was ACCEPTED on the original site
Post score: 3 | {
"domain": "robotics.stackexchange",
"id": 15536,
"tags": "urdf"
} |
Which USB interface for Android device I can use for motor driver | Question: I am new to robotics.
I will be controlling DC motors from an Android device through USB.
For this I have selected the L298N motor controller (after watching YouTube videos)
and got some DC motors.
I have no idea how to connect this to an Android device via a USB cable.
Help appreciated.
Ref:
https://www.bananarobotics.com/shop/L298N-Dual-H-Bridge-Motor-Driver
https://youtu.be/XRehsF_9YQ8
PS: All I know is programming android
Answer: What you want cannot be simply done by using an Android device and a USB cable. You will need a microcontroller to process the instructions sent from your mobile device and then control the motor movement.
The Android device will communicate with the Arduino using, say, Bluetooth (although there are many other alternatives you can select); the Arduino will process your instruction sent over Bluetooth and then, as per your logic, it will drive the motor.
To control a single motor using the L298N, you will use input terminals 1 & 2. These will be connected to your microcontroller. The motor terminals will be connected to Output 1 & 2.
Check this image of a micro controller being connected to a motor driver.
L298N is usually used for motors which have a higher current requirement, so you will also have to look into alternate power supply arrangements.
P.S. If your motors don't require a lot of power you may consider using L293D which is simpler to use than L298N. | {
"domain": "robotics.stackexchange",
"id": 1549,
"tags": "motor, usb"
} |
Characteristic length for the diffusion equation (temperature) | Question: The background: I'm doing some simulation work involving the diffusion equation in 1D. Specifically I have some temperature profile, constant thermal conductivity and fixed temperature at each end of the system.
I know that we can write:
$$
\tau = \frac{L^2}{\kappa}
$$
where $\tau$ is the characteristic time scale, $L$ is the characteristic length scale and $\kappa$ is the thermal conductivity. In this case, $\kappa = 1$ so the time scale is equal to the square of the length scale.
I know that in a gas, the time scale corresponds to something like the amount of time it takes a particle to diffuse over the length scale of interest, but I'm not sure what it means in the context of temperature.
Could anyone enlighten me? Thanks!
Answer: Short answer: $\tau$ is the typical time it takes for heat (energy) to be transported over the distance $L$.
I'll try to elaborate a bit on your analogy to particle diffusion.
For particle diffusion in one dimension, you may think of the particle as jumping around on the x-axis. Some times it jumps to the right, and some times to the left. The end result is that it typically takes $\tau = L^2/\mathcal D$ to cover the distance $L$, when the diffusion constant is $\mathcal D$. The diffusion constant is a measure of how large the jumps are (in fact, how large the variance of the jumps is).
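The jumping-particle picture can be checked directly with a short simulation (a quick sketch; the distances and trial counts are arbitrary): the mean number of unit steps needed to first wander a distance $L$ grows like $L^2$, which is exactly the $\tau = L^2/\mathcal D$ statement.

```python
import random

random.seed(0)

def mean_first_passage(L, trials=400):
    # Average number of +/-1 steps for a walker starting at 0 to reach +/-L.
    total = 0
    for _ in range(trials):
        x, n = 0, 0
        while abs(x) < L:
            x += random.choice((-1, 1))
            n += 1
        total += n
    return total / float(trials)

t10, t20 = mean_first_passage(10), mean_first_passage(20)
print(t10, t20, t20 / t10)   # ratio close to 4: doubling L quadruples the time
```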
But for heat transport you may instead think of a chain of beads on the x-axis. Each bead is wiggling around its spot on the axis, and the more it wiggles, the higher the temperature at that position. Every now and then a wiggling bead will smack its neighbor, and exchange some energy with it. Some times the energy transfers from left to right, and some times the transfer is from right to left. The "thermal diffusivity" $\kappa$ is a measure of how often the beads collide, and how willing they are to exchange their energy with each other. The end result is that $\tau = L^2/\kappa$ is the typical time it takes for a "packet" of energy to travel over the distance $L$. | {
"domain": "physics.stackexchange",
"id": 13701,
"tags": "temperature, diffusion"
} |
how to publish real robot-arm joint_states by ROS | Question:
Hello,
I have a real six-DOF robot arm and its URDF model; the URDF model displays well in rviz.
Now I want to publish the robot's joint_states so that rviz can show the real robot pose.
But I find that the joint_state_publisher node will also publish the joint_states topic, and that node is needed when visualizing the URDF in rviz, so I don't know how to resolve the conflict between my own node and the joint_state_publisher node. Thanks for any tips!
Originally posted by yin on ROS Answers with karma: 58 on 2016-10-27
Post score: 0
Answer:
This looks like a duplicate of How to visualize a real robot in Rviz, but just to reiterate:
If you have a node that already publishes joint states for your real robot, then you don't need to start joint_state_publisher.
An instance of joint_state_publisher is only needed if:
you don't have another node that publishes JointState messages and you want to fake them (use_gui:=true)
you have multiple publishers of JointState messages and want to aggregate them (source_list:=[..])
[..] I don't know how to resolve the conflict between my own node and joint_state_publisher node [..]
Just don't start joint_state_publisher (but see the points above).
Originally posted by gvdhoorn with karma: 86574 on 2016-10-27
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by yin on 2016-10-27:
OK, that's it; just not starting joint_state_publisher works. Thank you!
"domain": "robotics.stackexchange",
"id": 26068,
"tags": "joint-state-publisher"
} |
Popping from a list in state while a condition is true | Question: I'm dealing with data that stores its state as a String, treating the string like a stack. I also have to combine that with error handling.
To that end, I'm using the type StateT String Maybe a. I have a function to pop and to push a Char from and to the string:
pop :: StateT String Maybe Char
pop = do
x:xs <- get
put xs
return x
push :: Char -> StateT String Maybe ()
push x = do
xs <- get
put (x:xs)
return ()
I wrote a function to repeatedly pop from the string while the characters being popped fulfilled a condition. It behaves as follows:
> runStateT (popWhile (<'a')) "HELLO world"
Just ("HELLO ","world")
> runStateT (popWhile (>'a')) "HELLO world"
Just ("","HELLO world")
My implementation is the following:
popWhile :: (Char -> Bool) -> StateT String Maybe [Char]
popWhile f = do
s <- get
if null s
then return []
else popAgain
where
popAgain = do
x <- pop
if f x
then liftM (x:) (popWhile f)
else push x >> return []
But that seems pretty bulky, and has two if then else's in it. Is there a better way to write this function?
Answer: You can simplify the code by using span:
span :: (a -> Bool) -> [a] -> ([a], [a])
span, applied to a predicate p and a list xs, returns a tuple where
first element is longest prefix (possibly empty) of xs of elements
that satisfy p and second element is the remainder of the list
popWhile :: (Char -> Bool) -> StateT String Maybe String
popWhile p = do
s <- get
let (xs, ys) = span p s
put ys
return xs
Thanks to @bisserlis for the suggestion to use state
popWhile = state . span | {
"domain": "codereview.stackexchange",
"id": 11812,
"tags": "haskell, functional-programming, stack, monads"
} |
Use of tungsten as an insulator in an induction heater | Question: One problem in induction heating is that energy is lost because the object being heated radiates energy, that energy then heats the coils (which are water cooled) and the coils suck away the energy.
One idea is to insulate the object being heated. For example, a thin-walled cylinder of polished tungsten could be placed between the object being heated and the coil. This will reflect some of the infrared energy being emitted from the object and reduce the heat loss. Another potential material is gold. So, for example, silica plated with gold might be possible. The issue with gold is that although it is a good infrared reflector, it melts at about 1064 °C and the object can reach that temperature, so it would be at a borderline temperature for melting. Also, gold is not paramagnetic.
The problem with this idea is that tungsten is paramagnetic so it will absorb some of the energy from the coil.
How can I compute whether using a tungsten cylinder reflector would have a net benefit without doing an actual experiment?
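One back-of-envelope starting point (an editorial sketch, not part of the original question or answer; every number below is an assumed placeholder): compare the Stefan-Boltzmann radiative loss with and without the shield, and then weigh the saving against the induction power the shield itself absorbs.

```python
# Back-of-envelope comparison (all numbers are illustrative assumptions):
# radiative loss from a hot workpiece with and without a reflective shield.
SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W m^-2 K^-4

def radiated_power(area_m2, temp_k, emissivity):
    """Gray-body radiative loss to cold surroundings (surroundings neglected)."""
    return emissivity * SIGMA * area_m2 * temp_k**4

area = 0.01               # m^2, assumed workpiece surface area
T = 1500.0                # K, assumed workpiece temperature
eps_work = 0.8            # assumed emissivity of the hot workpiece

p_unshielded = radiated_power(area, T, eps_work)

# A single reflective shield of reflectivity r returns roughly a fraction r
# of the radiation; only (1 - r) escapes (crude single-bounce estimate).
r_shield = 0.5            # assumed IR reflectivity of polished tungsten
p_shielded = (1 - r_shield) * p_unshielded

# The shield only pays off if the induction power it absorbs (eddy currents
# in the tungsten, set by skin depth and resistivity at the drive frequency)
# is smaller than this radiative saving; that term must be estimated separately.
saving = p_unshielded - p_shielded
print(f"unshielded: {p_unshielded:.0f} W, shielded: {p_shielded:.0f} W, saving: {saving:.0f} W")
```

The missing piece is the eddy-current absorption in the shield, which depends on the coil frequency, the shield's wall thickness, and tungsten's resistivity at temperature; the net benefit is the radiative saving minus that absorbed power.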
Answer: After doing more research I found the following table on IR reflectivity:
From this it appears silver is by far the best reflector, rhodium is #2 and platinum is number 3. Silver melts at forging temperatures so it may or may not be able to hold up in the environment. For that reason rhodium may be the best selection. | {
"domain": "physics.stackexchange",
"id": 32879,
"tags": "thermodynamics, magnetic-fields, reflection, induction, infrared-radiation"
} |
Quantum Fourier Transform and Entropy | Question: QFT is a nonlocal unitary transformation and so can generate entanglement in a system. It means a separable pure state can be converted into an entangled pure state. Now, the presence of entanglement can be witnessed via an increase in the entropy of the subsystems. Since all the subsystems witness a positive entropy change, does the entropy of the complete system also increase (it seems to, since entropy is additive)? If it does increase, it seems to violate the reversible nature of quantum algorithms. I am very confused.
Answer: Just because the entropy of the subsystems increases, that doesn't mean that the entropy of the whole system increases. This is possible here because of entanglement: an entangled pure state has zero overall entropy, but the subsystems have nonzero entropy. A simple example is the state
\begin{equation}
\frac{1}{\sqrt{2}}(|00\rangle+|11\rangle).
\end{equation}
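The entropies quoted in the rest of the answer ($\log 2$ for each qubit, zero for the whole system) can be checked directly; here is a NumPy sketch (an editorial addition):

```python
# Numerical check: the Bell state has zero total entropy, but each
# qubit's reduced state has entropy log(2).
import numpy as np

def von_neumann_entropy(rho):
    """S = -tr(rho log rho), natural log, ignoring (near-)zero eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log(evals)))

# |psi> = (|00> + |11>)/sqrt(2) in the basis {|00>, |01>, |10>, |11>}
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())          # full density matrix (pure state)

# Partial trace over the second qubit -> reduced state of qubit 1
rho_1 = np.einsum('ikjk->ij', rho.reshape(2, 2, 2, 2))

print(von_neumann_entropy(rho))    # ~0: the whole system stays pure
print(von_neumann_entropy(rho_1))  # ~log(2): the subsystem is maximally mixed
```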
Both qubits have subsystem states
\begin{equation}
\frac{1}{2}(|0\rangle\langle 0|+|1\rangle\langle 1|),
\end{equation}
and, if you compute the entropy of these states, via
\begin{equation}
S=-\textrm{tr}[\rho\log(\rho)],
\end{equation}
you get $\log(2)$ as the entropy for both qubits. But, if you compute the entropy of the full system, you get zero. | {
"domain": "physics.stackexchange",
"id": 13248,
"tags": "quantum-mechanics, thermodynamics, statistical-mechanics, quantum-information"
} |
Float64MultiArray message type does not show up in published topic | Question:
I am trying to do a test where I publish an array to a topic
/chatters2. Here is my publisher below:
#!/usr/bin/python3
import rospy
from std_msgs.msg import Float64MultiArray
#Float64MultiArray
def do_it():
data_to_send = Float64MultiArray()
rospy.init_node("array_publisher")
pub=rospy.Publisher("chatters2", Float64MultiArray, queue_size=10)
data_to_send.data = [1.2354567, 99.7890]
#ata_to_send.data_offset = 0
pub.publish(data_to_send)
rospy.spin()
do_it()
Here is my listener:
#!/usr/bin/python3
import rospy
from std_msgs.msg import Float64MultiArray
def callback(msg):
rospy.loginfo(msg)
rospy.init_node("array_listener")
rospy.Subscriber("chatters2",Float64MultiArray, callback)
#rospy.sleep(3.0)
rospy.spin()
The problem is, when I run these two scripts in their respective terminals, they seem to run, but nothing shows up for the listener. I do a rostopic echo /chatters2 and don't see anything coming out for that either.
I check the rqt_graph and I see both nodes as well as the chatters2 topic, so these nodes seem to be active. What's going on?
Originally posted by distro on ROS Answers with karma: 167 on 2022-02-06
Post score: 0
Answer:
The rospy.spin() in your listener is waiting for a callback to be executed. By writing that line, you are essentially asking ROS to handle that for you. In your publisher, you don't want to wait for callbacks. So you need to omit the spin. You will also need a loop to continually publish your data. From the ROS simple publisher tutorial your example would look like this:
import rospy
from std_msgs.msg import Float64MultiArray
#Float64MultiArray
def do_it():
data_to_send = Float64MultiArray()
rospy.init_node("array_publisher")
pub=rospy.Publisher("chatters2", Float64MultiArray, queue_size=10)
data_to_send.data = [1.2354567, 99.7890]
#ata_to_send.data_offset = 0
r = rospy.Rate(10) # 10hz
while not rospy.is_shutdown():
pub.publish(data_to_send)
# rospy.spin()
r.sleep()
do_it()
Originally posted by Akhil Kurup with karma: 459 on 2022-02-06
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by Akhil Kurup on 2022-02-06:
Also, since you mention ROS-melodic, you may want to change the python3 to python2
Comment by distro on 2022-02-08:
@Akhil Kurup Thanks! Could you please look at a related question I asked here if possible. | {
"domain": "robotics.stackexchange",
"id": 37422,
"tags": "ros, ros-melodic"
} |
Ballistic Electrons in Ballistic Electron Emission Microscopy (BEEM) | Question: This is for an exam that I have in the near future.
Would someone be able to explain what a ballistic electron is in terms of BEEM? From my understanding, "ballistic" would mean acted upon only by gravity. Can someone either correct me, or expand on what I said if it's right?
Thank you
Answer: I use BEEM every day. "Ballistic electron" in this sense means that the electron travels between two scattering events and remains unscattered in the metal. There is a lot of high-level theory that can go into describing the motion of these electrons in a BEEM measurement, but generally you should think of it as an electron that is not scattered in the metal as it conducts through the metal to the semiconductor interface.
What you describe as "ballistic" motion is really just projectile motion: an object given some energy that falls in a parabolic arc.
In-depth answer: In a metal, electronic states are occupied up to the Fermi level. This means that electrons, as they conduct through the metal, must scatter into unoccupied states with energies within a few $k_BT$ (about 25 meV at room temperature) above or below the Fermi level, so conducting electrons generally all have about this much energy available for scattering. When you perform BEEM measurements, though, you give the electrons a boost to their kinetic energy by applying a bias between the STM tip and the metal; these electrons are now around $1~\mathrm{eV}$ above the Fermi level of the metal and are considered "hot" electrons. "Hot" refers to the fact that, measured against the thermal energy scale $E = k_BT$ (which at room temperature, 300 K, is about 25 meV), an electron with $1~\mathrm{eV}$ of extra energy corresponds to a temperature $T = E/k_B \approx 11600~\mathrm{K}$, where $k_B$ is the Boltzmann constant in eV/K. This is not a real temperature of the electron or of the material, but a guideline for its kinetic energy relative to electrons conducting within a few $k_BT$ of the Fermi level.
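The "hot electron" temperature scale is a one-line conversion (an editorial sketch):

```python
# Quick check of the "hot electron" temperature scale: T = E / k_B for a
# 1 eV electron, with k_B expressed in eV/K.
k_B = 8.617e-5          # Boltzmann constant, eV/K

E_thermal = k_B * 300   # thermal energy at room temperature (300 K)
T_hot = 1.0 / k_B       # temperature equivalent of a 1 eV electron

print(f"k_B * 300 K = {E_thermal * 1000:.1f} meV")   # ~25.9 meV
print(f"1 eV / k_B  = {T_hot:.0f} K")                # ~11600 K (the rough
                                                     # "12000 K" figure)
```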
So these "hot" electrons have a lot of energy, and because of this their scattering interactions are different. An electron with energy much greater than the Fermi level will have more inelastic scattering events, such as electron-electron scattering, and less energy loss from phonon interactions, since electron-phonon energy loss is on the order of $k_BT$. If these electrons travel through the metal with very few scattering events, they are considered "ballistic" electrons.
de Andres, P. L., F. J. Garcia-Vidal, K. Reuter, and F. Flores, 2001, Prog. Surf. Sci. 66, 3.
D. A. Pearson and L. J. Sham, Theory of ballistic electron emission microscopy. Phys. Rev. B 2001, 64, 125408. http://dx.doi.org/10.1103/PhysRevB.64.125408
http://www.rug.nl/research/portal/files/2372254/Chapter_2.pdf | {
"domain": "physics.stackexchange",
"id": 31353,
"tags": "nanoscience"
} |
Why is $\Delta x \Delta k \approx 1$ in any pulse? | Question: In my physics textbook, it says that for any pulse, if $\Delta x$ becomes smaller, $\Delta k$ becomes larger where $k$ refers to $2\pi/\lambda$ and $x$ is x-axis displacement, as described by $\Delta x \Delta k \approx 1$. Why is it like this?
Answer: It's not clear whether you mean a pulse of a wave, i.e. a short section of a wave, or whether you mean a top hat function, but in both cases the principle is the same.
If you Fourier transform a pulse of a wave or a top hat function you get the frequencies that make it up. If you decrease the length of the pulse or reduce the width of the top hat function you'll find that the width of your Fourier transform increases i.e. it spreads across more frequencies. That means it's harder and harder to pin down what you mean by the frequency of the pulse. In the limit of reducing your pulse to a delta function you find the Fourier transform now inludes all frequencies from zero to infinity at an equal amplitude so it's impossible to define even an average frequency.
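This spreading is easy to demonstrate numerically (an editorial sketch): Fourier-transforming Gaussian pulses of several widths shows the r.m.s. width in $k$ growing as the width in $x$ shrinks, with the product pinned near $1/2$, the Gaussian minimum under the r.m.s.-width convention.

```python
# Numerical sketch: shrink a Gaussian pulse and watch its power spectrum
# widen, with the r.m.s. width product sigma_x * sigma_k staying ~1/2.
import numpy as np

def rms_width(axis, density):
    """Root-mean-square width of a (non-negative) density on the given axis."""
    density = density / density.sum()
    mean = (axis * density).sum()
    return float(np.sqrt(((axis - mean) ** 2 * density).sum()))

N, L = 2 ** 14, 200.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)       # angular wavenumbers

for sigma in (0.5, 1.0, 2.0):
    pulse = np.exp(-x ** 2 / (2 * sigma ** 2))
    spectrum = np.abs(np.fft.fft(pulse)) ** 2    # power spectrum |F(k)|^2
    sx = rms_width(x, pulse ** 2)                # width in x
    sk = rms_width(k, spectrum)                  # width in k
    print(f"sigma={sigma}: sigma_x * sigma_k = {sx * sk:.3f}")  # ~0.500
```

Non-Gaussian pulses give a larger (but still order-one) product, which is the $\Delta x\,\Delta k \approx 1$ statement.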
This is the sense in which $\Delta k$ becomes larger. In real life we often find the Fourier transform is approximately gaussian and we can define $\Delta k$ as the 1/e half width (i.e. the standard deviation). | {
"domain": "physics.stackexchange",
"id": 4191,
"tags": "quantum-mechanics, waves, heisenberg-uncertainty-principle"
} |
Is there a heuristic argument for the expression $ \textbf{g} = \frac {\mathbf{S}}{c^2}$? | Question: Electromagnetic momentum density and the Poynting vector are related by the simple expression:
$$ \textbf{g} = \frac {\mathbf{S}}{c^2}$$
It can be rigorously derived from Maxwell's equations, but is there a more heuristic derivation?
Answer: Light follows null geodesics, which means that photons have no rest mass. Setting $m=0$ in the relativistic formula $m^{2}c^{4} = E^{2} - p^{2}c^{2}$ gives $E = pc$, i.e. $p = E/c$. Applied per unit volume, the momentum density is the energy density over $c$, $g = u/c$; and since the energy flux of light is $S = uc$, the result $\mathbf{g} = \mathbf{S}/c^{2}$ follows. | {
"domain": "physics.stackexchange",
"id": 9728,
"tags": "electromagnetism, momentum, classical-electrodynamics, poynting-vector"
} |
Arranging Buttons on a Process Enumerating Application | Question: I had the following piece of code for populating a list of currently running processes in an IT Self-Help application :
var proclist = ProcessHelper.GetProcList();
var i = 0;
var j = 0;
foreach (var process in proclist)
{
if (process.ButtonText == "") continue;
var button = new Button();
if (i < 7)
{
button.Location = new Point(10, 30 * i + 65);
}
else if ((i > 6) & (i < 14))
{
button.Location = new Point(150, 30 * j + 65);
j++;
}
else
{
button.Location = new Point(290, 30 * j + 65);
j++;
}
i++;
button.Click += (x, y) => button.Visible = false;
button.Click += (x, y) => process.Kill();
button.Text = process.ButtonText;
button.Image = process.ThumbNail;
button.ImageAlign = ContentAlignment.MiddleLeft;
processTab.Controls.Add(button);
button.Width = 130;
button.Height = 25;
button.TextAlign = ContentAlignment.MiddleLeft;
}
And today I realised, that if a user has more than 14 applications running, my logic falls apart on the third column and displays the items like this :
I managed to fix this with the following amendment (added a new variable for the third column) :
var proclist = ProcessHelper.GetProcList();
// For placing each button - need a var for each column
var i = 0;
var j = 0;
var k = 0;
foreach (var process in proclist)
{
if (process.ButtonText == "") continue;
var button = new Button();
if (i < 7)
{
button.Location = new Point(10, 30 * i + 65);
}
else if ((i > 6) & (i < 14))
{
button.Location = new Point(150, 30 * j + 65);
j++;
}
else
{
button.Location = new Point(290, 30 * k + 65);
k++;
}
i++;
button.Click += (x, y) => button.Visible = false;
button.Click += (x, y) => process.Kill();
button.Text = process.ButtonText;
button.Image = process.ThumbNail;
button.ImageAlign = ContentAlignment.MiddleLeft;
processTab.Controls.Add(button);
button.Width = 130;
button.Height = 25;
button.TextAlign = ContentAlignment.MiddleLeft;
}
And now the items appear correctly :
However, I couldn't help but think that this is not a very clean fix, because I just hard-coded in another column, and if a user ever has 21+ applications open, the issue will occur again!
Finally, I came up with this:
var i = 0;
var j = 0;
var column = 10;
foreach (var process in proclist)
{
if (process.ButtonText == "") continue;
var button = new Button();
if ((i > 0) & ((i % 7) == 0))
{
j = 0;
column = column + 140;
}
button.Location = new Point(column, 30 * j + 65);
j++;
i++;
button.Click += (x, y) => button.Visible = false;
button.Click += (x, y) => process.Kill();
button.Text = process.ButtonText;
button.Image = process.ThumbNail;
button.ImageAlign = ContentAlignment.MiddleLeft;
processTab.Controls.Add(button);
button.Width = 130;
button.Height = 25;
button.TextAlign = ContentAlignment.MiddleLeft;
}
Which (as far as I can tell), will arrange the buttons correctly as long as there aren't so many that they go out of the window (which is highly unlikely in this environment). Here is a screenshot of the new version in action :
Hopefully this is the right place to post this - any feedback is extremely welcome!
Answer: What will happen when there are more processes than you can fit buttons for on your form?
You're doing the hard work by hand, reinventing a wheel that's already there for you... and as you saw, it's not exactly scaling very well.
The wheel in question is called a FlowLayoutPanel, a winforms layout control that automatically handles control positioning for you, taking overflowing controls to a new row or column, depending on how you configured it - it even handles scrollbars for you, if you have more items than can fit into the panel.
That said, your wheel has a number of squeaky parts:
if (process.ButtonText == "") continue;
Comparing a string against an empty string, would usually be written like this:
if (string.IsNullOrWhiteSpace(process.ButtonText))
{
continue;
}
In all versions/attempts of your layout code, you hard-coded the key values.
140, the column width, should be computed from the width of a button
7 is arbitrary and should be allowed to change if/when the form is maximized or resized. Your screenshots crop the top part, but if you haven't removed the min/maximize buttons and allow resizing of the form, the user can easily wreck your layout; it doesn't take the button height into consideration either.
BUG?
It's a little beyond the scope of this review, but this line can potentially blow everything up:
button.Click += (x, y) => process.Kill();
There's a chance your application will not-so-gracefully crash and burn if you bring up your app, then start Task Manager and kill a process, and then click the button for that process in your app.
The process to kill won't exist anymore, and if ProcessHelper.Kill() doesn't handle that situation, boom! Make sure your wrapper wraps the operation in a try..catch block. | {
"domain": "codereview.stackexchange",
"id": 19979,
"tags": "c#, algorithm, winforms"
} |
No-S-$B^+$ Theorem like Results | Question: Classification of multipartite quantum state is an interesting topic in quantum information and there have been many accomplishments in the field. For example, according to the result of Thapliyal, for tripartite pure state case, there can't be a tripartite pure state with a pair of subsystems in separable marginal state and another pair in PPT (Positive Partial Trace) bound entangle state. This result can be summed up as the following figure. S stands for 'Separable', $B^\pm$ is for 'Bound entangled PPT(NPT)' and D is for 'Distillable'.
[Figure: classification diagram summarizing the No-S-$B^+$ theorem; S = separable, $B^\pm$ = bound entangled PPT (NPT), D = distillable.]
However, is there any progress in ruling out other categories in the catalog given above? What do we know more about tripartite pure states in 2018?
Answer: Hayashi and Chen did an exhaustive classification of tripartite pure states. Let the following letters denote the marginal bipartite states: S: separable, P: PPT-entangled, R: non-PPT reduction, and N: non-reduction. Then there are essentially 8 cases: SSS, SSN, SNN, PNN, RRR, RRN, RNN, NNN. These are all physically possible quantum states, hence the classification is complete. | {
"domain": "physics.stackexchange",
"id": 54896,
"tags": "quantum-information, quantum-entanglement, quantum-states"
} |
Custom countdown second timer | Question: I'm making a card game and I found it useful to include a 60-second countdown timer. Here is my approach:
import java.util.Timer;
import java.util.TimerTask;
public class SecondTimer {
private Timer timer;
private int countDown;
private int secondsLeft;
public SecondTimer() {
timer = new Timer();
}
public void reset() {
secondsLeft = countDown;
// Decrease seconds left every 1 second.
timer.schedule(new TimerTask() {
@Override
public void run() {
secondsLeft--;
if (secondsLeft == 0) {
timer.cancel();
}
}
}, 0, 1000);
}
public void setCountDown(int seconds) {
this.countDown = seconds;
}
public int getSecondsLeft() {
return secondsLeft;
}
}
Is this considered clean? How can I test such classes?
Answer:
Avoid using java.util.Timer. See Checking if a Java Timer is cancelled and java.util.Timer: Is it deprecated? Instead use a ScheduledExecutorService.
When changing to a ScheduledExecutorService, don't cancel or shutdown your ScheduledExecutorService, instead cancel the FutureTask that you will get when scheduling a task.
Don't decrease a counter once a second. Nothing guarantees that it will be called with exactly 1000 ms periodicity.
In your reset method, use System.nanoTime() (not System.currentTimeMillis) to calculate when the time is up, such as:
this.targetTime = System.nanoTime() + seconds * 1_000_000_000L;
Then calculate in getSecondsLeft, by using System.nanoTime() again, how many seconds remain until the target nanosecond value has been reached. | {
"domain": "codereview.stackexchange",
"id": 18292,
"tags": "java, object-oriented, timer"
} |
Should I perform cross validation only on the training set? | Question: I am working with a dataset that I downloaded from Kaggle. The data set is already divided into two CSVs for Train and Test.
I built a model using the training set because I imported the train CSV into a Jupyter Notebook. I predicted using the Train CSV itself. I would like to perform cross validation. Should I perform cross validation on the train CSV and split it into two parts again Train and Test? Or, should I import a new CSV file Test and combine both CSVs into ONE?
Answer: You shouldn't touch test data until you finish. To do cross validation you have to split the training dataset into train and validation sets or you can do k-fold cross validation (or any other method).
Here is some information: https://en.wikipedia.org/wiki/Cross-validation_(statistics)
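A minimal sketch (an editorial addition, standard-library Python only) of what "split only the training data, leave the test data alone" looks like as k-fold cross-validation:

```python
# Minimal k-fold cross-validation over the *training* rows; the test CSV
# is never touched until the very end.
import random

def k_fold_indices(n_samples, k, seed=0):
    """Yield (train_idx, val_idx) pairs that partition range(n_samples)."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    fold_size, rem = divmod(n_samples, k)
    folds, start = [], 0
    for i in range(k):
        stop = start + fold_size + (1 if i < rem else 0)
        folds.append(idx[start:stop])
        start = stop
    for i in range(k):
        val = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, val

# Usage: tune/compare models across the folds, then score ONCE on the test set.
splits = list(k_fold_indices(100, k=5))
print(len(splits), [len(v) for _, v in splits])  # 5 [20, 20, 20, 20, 20]
```

Each row of the training CSV appears in exactly one validation fold, so every model configuration is scored on data it was not fitted to, while the test CSV stays untouched.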
But never use the test data to train or tune your model; if you do, your model will be fitted to the test data and your results won't be valid. | {
"domain": "datascience.stackexchange",
"id": 5848,
"tags": "cross-validation, kaggle"
} |
how to check if the experimental performance of the oscillation amplitudes follows the predicted trend? | Question: Once I have my set of amplitudes, how can I compare the experimental amplitudes with damped harmonic motion?
I know that
$F = -kx + mg -(C_1 + C_2 |v|)v $
supposing $C_2 = 0$ and $\cos \approx 1$:
$x(t) = Ae^{-\gamma t} $
supposing $C_1 = 0$:
$x(t) = \frac{A_0}{1+A_0 \alpha t} , \alpha = \frac{4}{3\pi}C_2 \frac{w_0^2}{k}$
Answer: In essence you are trying to decide whether the friction forve is proportional to the velocity or velocity squared.
Assume that the period of a swing $T$ stays constant.
If $x(t)=A_oe^{-\gamma t}$ then if after $n$ swings the amplitude is $A_n$
$A_n = A_o e^{-\gamma T n} \Rightarrow \ln A_n = -\gamma T n + \ln A_o$, so plot a graph of $\ln A_n$ against $n$.
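Both straight-line tests can be automated and compared; a sketch (an editorial addition, with synthetic data standing in for the measured amplitudes; the $1/A_n$ transform comes from the quadratic-drag case treated below):

```python
# Fit a straight line to ln(A_n) vs n and to 1/A_n vs n; the transform
# with the smaller residual identifies the damping model.
# (Synthetic data here; replace `A` with your measured amplitudes.)
import numpy as np

n = np.arange(20)
A = 0.30 * np.exp(-0.08 * n)          # fake data from the *linear*-drag model

def linearity_residual(y, x):
    """RMS residual of the best straight-line fit of y against x."""
    slope, intercept = np.polyfit(x, y, 1)
    return float(np.sqrt(np.mean((y - (slope * x + intercept)) ** 2)))

res_log = linearity_residual(np.log(A), n)   # linear drag: ln(A_n) is linear in n
res_inv = linearity_residual(1.0 / A, n)     # quadratic drag: 1/A_n would be

print(res_log, res_inv)   # the smaller residual picks the model
```

With real data, also inspect both plots by eye: measurement noise can make the residual comparison ambiguous if only a few swings were recorded.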
If $x(t) = \dfrac {A_o}{1+ A_o \alpha t}$ then if after $n$ swings the amplitude is $A_n$
$A_n = \dfrac {A_o}{1+ A_o \alpha T n} \Rightarrow \dfrac {1}{A_n} = \alpha T n + \dfrac {1}{A_o} $, so plot a graph of $\dfrac {1}{A_n}$ against $n$. | {
"domain": "physics.stackexchange",
"id": 30773,
"tags": "experimental-physics"
} |
Signal approximation using linear combination of functions | Question: How I can approximate the signal $x(t)=0.001\,t^3 \exp(-0.1t)$ in the interval $[0,100]$ using a linear combination of the following functions:
$f_1(t)=A_1$
$f_2(t)=A_2\cos(0.05t)$
$f_3(t)=A_3\cos(0.1t)$
$f_4(t)=A_4\cos(0.2t+1)\exp(-0.2t)$
$f_5(t)=A_5\,t^3$
I tried to write some MATLAB code. Is this correct?
t=[0:100];
x_t=0.001*(t.^3).*exp(-0.1*t); %signal given for approximation
f1_t=t.^0;
f2_t=cos(0.05*t);
f3_t=cos(0.1*t);
f4_t=cos(0.2*t+1).*exp(-0.2*t);
f5_t=t.^3;
M=[f1_t' f2_t' f3_t' f4_t' f5_t']; %matrix with linear components
A=M\x_t'; %matrix with coefficients
f_t=(A(1)*f1_t)+(A(2)*f2_t)+(A(3)*f3_t)+(A(4)*f4_t)+(A(5)*f5_t);
figure(1),plot(f_t);
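For cross-checking outside MATLAB, the same least-squares fit can be sketched in Python with NumPy (an editorial addition; note the basis functions are evaluated at t, the time axis, not at the signal values):

```python
# Least-squares fit of x(t) = 0.001 t^3 exp(-0.1 t) on [0, 100] against the
# five basis functions from the problem statement.
import numpy as np

t = np.arange(0, 101, dtype=float)
x_t = 0.001 * t**3 * np.exp(-0.1 * t)           # signal to approximate

M = np.column_stack([
    np.ones_like(t),                            # f1: constant
    np.cos(0.05 * t),                           # f2
    np.cos(0.10 * t),                           # f3
    np.cos(0.2 * t + 1) * np.exp(-0.2 * t),     # f4
    t**3,                                       # f5
])

A, *_ = np.linalg.lstsq(M, x_t, rcond=None)     # least-squares coefficients
x_fit = M @ A
err = np.linalg.norm(x_t - x_fit) / np.linalg.norm(x_t)
print(A)                                        # one coefficient per basis fn
print(f"relative fit error: {err:.3f}")
```

`np.linalg.lstsq` solves the same overdetermined system as MATLAB's backslash operator `M\x_t'`.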
Answer: The method you chose is a standard least squares approximation, i.e. you minimize the sum of squared errors on the chosen grid. You obtain your coefficients by solving an overdetermined system of linear equations in a least squares sense. I think that's a very sane approach. | {
"domain": "dsp.stackexchange",
"id": 939,
"tags": "matlab, homework, continuous-signals, approximation"
} |
What is the name of the smallest self-replicating thing? | Question: Some time last year, I found an article on Wikipedia about the smallest something to be able to reproduce.
I don't remember exactly what it was, but I am fairly certain that after the initial discovery another of the previous organism (this one slightly smaller) was discovered.
I think that the smallest something might have been the smallest self-replicating protein, or smallest self-replicating molecule, or something like that.
It was not mentioned in this thread: Which organism has the smallest genome length?
It had a strange, stand-out name and I believe it was discovered in the 90s.
Answer: You're probably thinking of the Spiegelman Monster. It was actually discovered in 1965, but it was discovered that it became shorter over time in 1997.
It also wasn't included in that thread, and it has a strange name.
http://en.wikipedia.org/wiki/Spiegelman_Monster | {
"domain": "biology.stackexchange",
"id": 9871,
"tags": "life, replication"
} |
Can we derive Ampere's Circuital Law from Gauss's Law or vice versa? | Question: I was curious if it is possible to derive Ampere's Circuital Law from Gauss's Law as they are very similar and both can be applied for highly symmetrical problems $(Infinite\space wires,Rings..etc)$ and also because they look very similar.
$$\oint B\space dl=\sum \mu I$$
$$\oint E \space dA=\frac{\sum Q}{\epsilon}$$
Answer: No you can't.
The most important reason is that they're equivalent to two Maxwell equations, respectively Maxwell-Gauss and Maxwell-Ampere. Since the Maxwell equations are independent of each other, they can't be derived from one another.
Also, the resemblance between those two laws isn't as deep as it seems:
Ampere law is a line integral.
Gauss Law is a surface integral.
There are strong mathematical parallels, but nothing more. | {
"domain": "physics.stackexchange",
"id": 88564,
"tags": "electromagnetism, magnetic-fields, electric-fields, gauss-law, maxwell-equations"
} |
How to understand Preskill's argument for degeneration of eigenstates? | Question: In his notes on topological quantum computation on page 18, Preskill uses the "commutator"
$T_2^{-1}T_1^{-1}T_2T_1 = e^{-2 i \vartheta}$ to show that the eigenstates of $T_1$ are degenerate. But I don't understand his argument, which goes as follows:
Take an eigenstate $T_1|\alpha\rangle = e^{i \alpha}|\alpha\rangle$ of $T_1$
Evaluate: $T_1(T_2|\alpha\rangle) = e^{i2\vartheta}e^{i\alpha}T_2|\alpha\rangle$, interpret this as $T_2$ advancing the value of $\alpha$ by $2\vartheta$
Now suppose $\vartheta$ is a rational multiple of $2\pi$, e.g. $\vartheta = \pi \frac{p}{q}$ where $p<2q$ and $(p,q)=1$
Then he concludes that $T_1$ must have at least $q$ distinct eigenvalues, and $T_1$ acting on $\alpha$ generates an orbit with $q$ distinct values
$$\alpha_k \equiv \alpha + \left(2\pi \frac{p}{q}\right) k\quad (\textsf{mod }2\pi) $$
My questions:
In the second step, does he mean something like $T_2|\alpha\rangle = |(\alpha + 2\vartheta)\textsf{mod }2\pi\rangle$, i.e. that applying $T_2$ to $|\alpha\rangle$ gives us the eigenstate $|\alpha + 2\vartheta\rangle$ of $T_1$?
In the fourth step, how do we conclude this? And how exactly does $T_1$ generate said orbit? I don't see it.
Answer:
Basically yes, up to your inaccurate articles. When you say "gives us the eigenstate $|\alpha+2\vartheta\rangle$", the word "the" indicates that you believe that the eigenstate with this eigenvalue has to be unique. But you haven't proven so and it doesn't have to be the case in general. So you wanted to say "an eigenstate with the eigenvalue that would be represented by the phase $\alpha+2\vartheta$".
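The structure can be made concrete with the $q$-dimensional clock and shift matrices, a standard pair obeying exactly this commutation relation (an editorial sketch, not from Preskill's notes):

```python
# Clock and shift matrices: T2^{-1} T1^{-1} T2 T1 is the scalar e^{-2i theta},
# and T1 has q distinct eigenvalues, as in the argument above.
import numpy as np

q, p = 5, 2                                      # theta = pi * p / q
omega = np.exp(2j * np.pi * p / q)               # e^{2 i theta}

T1 = np.diag(omega ** np.arange(q))              # "clock": T1|k> = omega^k |k>
T2 = np.roll(np.eye(q), 1, axis=0)               # "shift": T2|k> = |k+1 mod q>

# The commutator is e^{-2 i theta} times the identity:
comm = np.linalg.inv(T2) @ np.linalg.inv(T1) @ T2 @ T1
print(np.allclose(comm, omega.conjugate() * np.eye(q)))   # True

# T2 advances a T1-eigenstate's phase by 2*theta, so repeated application
# generates an orbit of q points, and T1 has q distinct eigenvalues:
phases = np.round(np.angle(np.linalg.eigvals(T1)), 6)
print(len(set(phases)))                                   # 5, i.e. q
```

Here $T_2$ maps the eigenstate with eigenvalue $\omega^k$ to one with eigenvalue $\omega^{k+1}$, which is exactly the "advance $\alpha$ by $2\vartheta$" step.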
The relevant states that are said to exist in the fourth step are simply states $(T_2)^j |\alpha\rangle$ where $j=0,1,2,\dots q-1$. All these $q$ states have to be nonzero and distinct from each other because they have different eigenvalues of the normal operator $T_1$. But once you pick $j=q$, you may get the same state as for $j=0$ up to an overall normalization (phase), the states may be periodic in $j$ with the periodicity $q$. | {
"domain": "physics.stackexchange",
"id": 31831,
"tags": "quantum-mechanics, eigenvalue"
} |
"what(): can't subtract times with different time sources [2 != 1]" error when spawning diff_drive_controller on ros2_control | Question:
I'm using ros2_control to control my robot.
After executing controller_manager, when I spawn "diff_drive_controller" I get this error.
[ros2_control_node-1] [INFO] [1632907663.808010997] [controller_manager]: update rate is 50 Hz
[ros2_control_node-1] [INFO] [1632907695.226861708] [controller_manager]: Loading controller 'diff_drive_controller'
[ros2_control_node-1] [INFO] [1632907695.255700365] [controller_manager]: Configuring controller 'diff_drive_controller'
[ros2_control_node-1] terminate called after throwing an instance of 'std::runtime_error'
[ros2_control_node-1] what(): can't subtract times with different time sources [2 != 1]
another terminal shown as
$ ros2 control load_controller --set-state start diff_drive_controller
Sucessfully loaded controller diff_drive_controller into state active
I tried adding the parameter use_sim_time: True to controller_manager and diff_drive_controller, but the result is the same.
How can I solve this error?
Originally posted by byeongkyu on ROS Answers with karma: 126 on 2021-09-29
Post score: 0
Answer:
It was solved by updating ros2_control packages.
Originally posted by byeongkyu with karma: 126 on 2021-10-06
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by destogl on 2021-11-29:
This should not actually be solved by solely updating the packages. This can still be an issue, depending which time-source is used in a commanding package.
Comment by afrixs on 2021-12-08:
Building ros2_control and (for simulation) gazebo_ros2_control from source solved the issue for me too (diff_drive_controller can be installed via apt)... Crash didn't occur no matter whether controller_manager spawner node was launched with use_sim_time:=True or False. Looks like some kind of incompatibility between packages in current Galactic sync. | {
"domain": "robotics.stackexchange",
"id": 36966,
"tags": "ros, diff-drive-controller"
} |
Finding exceptions and optimisations in weather forecast service | Question: When creating a new Blazor project, there is a page called FetchData which gives an example of a razor page using a service to pull data to a page.
I set myself a challenge to improve on the service, so that instead of a random label being assigned to a temperature for a forecast, it separates the label categories evenly across the range of temperatures.
Here is the original code for the service:
namespace MyBlazorApp.Data
{
public class WeatherForecastService
{
private static readonly string[] Summaries = new[]
{
"Freezing", "Bracing", "Chilly", "Cool", "Mild", "Warm", "Balmy", "Hot", "Sweltering", "Scorching"
};
public Task<WeatherForecast[]> GetForecastAsync(DateTime startDate)
{
return Task.FromResult(Enumerable.Range(1, 5).Select(index => new WeatherForecast
{
Date = startDate.AddDays(index),
TemperatureC = Random.Shared.Next(-20, 55),
Summary = Summaries[Random.Shared.Next(Summaries.Length)]
}).ToArray());
}
}
}
And here is my new code:
namespace MyBlazorApp.Data
{
public class WeatherForecastService
{
private static readonly string[] Summaries = new[]
{
"Freezing", "Bracing", "Chilly", "Cool", "Mild", "Warm", "Balmy", "Hot", "Sweltering", "Scorching"
};
public Task<WeatherForecast[]> GetForecastAsync(DateTime startDate)
{
var minTemp = -10;
var maxTemp = 40;
return Task.FromResult(Enumerable.Range(0, 14).Select(index => getWeatherForecast(startDate, minTemp, maxTemp, index)).ToArray());
}
public WeatherForecast getWeatherForecast(DateTime startDate, int minTemp, int maxTemp, int index)
{
maxTemp += (maxTemp - minTemp) % Summaries.Length;
var rng = Random.Shared.Next(minTemp, maxTemp);
var categorySize = (maxTemp - minTemp) / (Summaries.Length);
var pointer = (rng + (0 - minTemp)) / categorySize;
if (pointer >= Summaries.Length)
{
pointer = Summaries.Length - 1;
}
return new WeatherForecast
{
Date = startDate.AddDays(index),
TemperatureC = rng,
Summary = Summaries[pointer]
};
}
}
}
The code seems to work fine, but I feel that the error handling as well as the code optimization, readability and simplicity can be improved. Any tips on how I can look to find these improvements myself would also be extremely helpful. I changed some of the numbers to feel more appropriate to the context, but ideally the solution should work no matter what variables are set.
In summary, I am trying to find a solution to the problem in the absolute best way possible, as a way of giving an example of what perfect code looks like.
Answer: The GetForecastAsync method
Since you always call it with DateTime.Now from FetchData.razor, you should consider removing startDate as a parameter
Nothing indicates that this method should be async, so you could convert it back to sync
You could replace the OnInitializedAsync override with the following code:
protected override void OnInitialized()
=> forecasts = ForecastService.GetForecastForTwoWeeks();
The GetForecastAsync could be replaced with this:
public WeatherForecast[] GetForecastForTwoWeeks()
=> Enumerable.Range(0, 14)
.Select(daysSinceStart => GetWeatherForecast(DateTime.Now.AddDays(daysSinceStart)))
.ToArray();
Since 14 is hard-coded inside the method, I would suggest encoding this restriction in the method name (GetForecastForTwoWeeks)
Since minTemp and maxTemp are constants and are used only inside GetWeatherForecast, I've moved them out of here
The startDate and index are always used together inside the GetWeatherForecast to calculate the new Date
So, rather than passing them separately you could pass the calculated date
The getWeatherForecast method
I would suggest following the standard C# naming guidelines and naming it in Pascal case (GetWeatherForecast)
I would also suggest making it private if it is used only by the GetForecastForTwoWeeks method
You could rewrite the getWeatherForecast like this
const int MinTemperature = -10, MaxTemperature = 40;
static readonly int AdjustedMaxTemperature = MaxTemperature + (MaxTemperature - MinTemperature) % Summaries.Length;
static readonly int CategorySize = (AdjustedMaxTemperature - MinTemperature) / Summaries.Length;
private static WeatherForecast GetWeatherForecast(DateTime date)
{
var temperature = Random.Shared.Next(MinTemperature, AdjustedMaxTemperature);
var summaryIndex = (temperature - MinTemperature) / CategorySize;
return new ()
{
Date = date,
TemperatureC = temperature,
Summary = Summaries[summaryIndex]
};
}
Temp can abbreviate multiple terms (temporary, temperature, etc.), so it might make sense to avoid the abbreviation here
I would advise you to avoid overwriting a method's parameter value (maxTemp += ...)
The adjusted maximum temperature and the category size are always the same
So, they could be calculated only once
BTW the AdjustedMaxTemperature is the same as the MaxTemperature since (MaxTemperature - MinTemperature) % Summaries.Length is 0
+ (0 - minTemp) can be simplified to -minTemp
This pointer >= Summaries.Length condition check is not needed, it will never be true
rng does not mean anything, whereas temperature is more meaningful/expressive IMHO
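As a quick sanity check that the bucketing arithmetic above behaves as claimed, here is a throwaway sketch in Python (not part of the reviewed C# code; all names below are mine):

```python
# Re-check the bucketing arithmetic from the review, in Python.
MIN_T, MAX_T = -10, 40
SUMMARIES = ["Freezing", "Bracing", "Chilly", "Cool", "Mild",
             "Warm", "Balmy", "Hot", "Sweltering", "Scorching"]

# Same adjustment as the review: pad the max so the range divides evenly.
adjusted_max = MAX_T + (MAX_T - MIN_T) % len(SUMMARIES)   # 40 here, remainder is 0
category_size = (adjusted_max - MIN_T) // len(SUMMARIES)  # 5 degrees per label

def summary_for(t):
    return SUMMARIES[(t - MIN_T) // category_size]

# Random.Shared.Next(min, max) excludes max, so temperatures span [-10, 39]:
counts = {}
for t in range(MIN_T, adjusted_max):
    counts[summary_for(t)] = counts.get(summary_for(t), 0) + 1

print(counts)  # every label covers exactly category_size temperatures
```

Every bucket comes out the same size, and the highest generated temperature (39) still maps to the last index, which is why the pointer >= Summaries.Length guard never fires.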
The most concise version I can think of now (which is still readable):
private const int MinTemperature = -10, MaxTemperature = 40;
private static readonly int AdjustedMaxTemperature = MaxTemperature + (MaxTemperature - MinTemperature) % Summaries.Length;
private static readonly int CategorySize = (AdjustedMaxTemperature - MinTemperature) / Summaries.Length;
public WeatherForecast[] GetForecastForTwoWeeks()
=> (from daysSinceStart in Enumerable.Range(0, 14)
let temperature = Random.Shared.Next(MinTemperature, AdjustedMaxTemperature)
let summaryIndex = (temperature - MinTemperature) / CategorySize
select new WeatherForecast
{
Date = DateTime.Now.AddDays(daysSinceStart),
TemperatureC = temperature,
Summary = Summaries[summaryIndex]
})
.ToArray(); | {
"domain": "codereview.stackexchange",
"id": 43855,
"tags": "c#, .net, blazor"
} |
Confusion about the solution and inner product properties of Dirac equation | Question: I've just read the solutions of the Dirac equation and don't yet know anything about the quantization of the equation, so my confusion is at this level. The problem is trivial.
The solutions of Dirac equation are as follows:
$$\left|\psi ^1\right\rangle =\sqrt{\frac{m+E}{2 m}}\, e^{-ipx} \begin{pmatrix} 1 \\ 0 \\ \frac{p^3}{m+E} \\ \frac{p^1+ip^2}{m+E} \end{pmatrix}=u_1 e^{-ipx};$$
$$\left|\psi ^2\right\rangle =\sqrt{\frac{m+E}{2 m}}\, e^{-ipx} \begin{pmatrix} 0 \\ 1 \\ \frac{p^1-ip^2}{m+E} \\ -\frac{p^3}{m+E} \end{pmatrix}=u_2 e^{-ipx};$$
$$\left|\psi ^3\right\rangle =\sqrt{\frac{m+E}{2 m}}\, e^{ipx} \begin{pmatrix} \frac{p^3}{m+E} \\ \frac{p^1+ip^2}{m+E} \\ 1 \\ 0 \end{pmatrix}=v_2 e^{ipx};$$
$$\left|\psi ^4\right\rangle =\sqrt{\frac{m+E}{2 m}}\, e^{ipx} \begin{pmatrix} \frac{p^1-ip^2}{m+E} \\ -\frac{p^3}{m+E} \\ 0 \\ 1 \end{pmatrix}=v_1 e^{ipx}$$
The corresponding inner product relations are as follows:
$$u_{s \vec p}^{\dagger }\, u_{r \vec p}=v_{s \vec p}^{\dagger }\, v_{r \vec p}=\frac{E\, \delta ^{rs}}{m};$$
$$v_{s \left(-\vec p\right)}^{\dagger }\, u_{r \vec p}=0;$$
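For concreteness, the first relation can be verified directly from the explicit form of the first spinor, using the mass-shell relation $E^2=m^2+|\vec p\,|^2$ (here $E$ denotes the energy):

$$u_1^{\dagger}u_1=\frac{m+E}{2m}\left(1+\frac{(p^3)^2+(p^1)^2+(p^2)^2}{(m+E)^2}\right)=\frac{(m+E)^2+|\vec p\,|^2}{2m(m+E)}=\frac{2E(m+E)}{2m(m+E)}=\frac{E}{m}.$$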
My question is:
For the two particle states $\left|\psi ^1\right\rangle$ and $\left|\psi ^4\right\rangle$: if they have the same momentum and opposite energies, the condition $v_{s \left(-\vec p\right)}^{\dagger } u_{r \vec p}=0$ is satisfied. This shows that the two states are orthogonal, so the particles do not interact. But what if I choose to give these two particles opposite momentum directions (i.e. $v_{s \vec p}^{\dagger } u_{r \vec p}\neq 0$)? Then the inner product of the two states does not vanish. Does this mean that under such circumstances the two particles can interact? If I am guessing right, what are the interactions? Annihilation? Sorry for being so lengthy and long-winded.
Answer: You are confusing the negative energy solutions of Dirac's equation (your third and forth wavefunction) with legitimate particle (or antiparticle wavefunctions). You can't learn anything about physical interactions by studying the inner products of these solutions with the positive energy solutions (which do represent real particle wavefunctions).
Dirac realized that these negative energy solutions represented a serious pathology of his theory and overcame this pathology by postulating that these states must be fully occupied and inaccessible due to the Pauli principle. This enabled him to formulate his hole theory that antiparticle states would result if one of these states in the infinite Dirac sea became unoccupied by the absorption of an energy quanta greater than twice the rest mass of an electron. Particle-antiparticle annihilation would then occur when a positive energy electron undergoes a transition that occupies the hole state. For a summary of this you might Google Dirac hole theory. | {
"domain": "physics.stackexchange",
"id": 34690,
"tags": "quantum-mechanics, hilbert-space, antimatter, dirac-equation, spinors"
} |
How can centripetal acceleration change the speed? | Question: In the circular motion of a particle there are two acceleration components: the tangential one, which is responsible for the change (positive or negative) in speed and is $$ \frac{d \omega}{dt} \times r,$$ and the centripetal acceleration, which is responsible for the change in direction and is $$\omega \times \frac{dr}{dt}.$$ Now imagine that you are rotating a stone tied to a string and you slowly start to increase the centripetal acceleration, i.e. you start pulling the string in with your other hand. According to the kinematics of circular motion, the magnitude of the velocity shouldn't be affected; but according to a new law that I studied, the conservation of angular momentum, the object must increase the magnitude of its velocity in order to conserve angular momentum, and I don't get that. Aren't the laws contradicting each other, and how does increasing the central force even result in an increase in speed?
Answer: When you say
according to a new law that I studied which is the conservation of angular momentum, the object must increase its magnitude of velocity in order to conserve angular momentum and I don't get that
Remember: What is angular momentum? Well, it's really just like linear momentum, right?
So if you have your rock out in outer space rotating around a pole via a string: What happens to the speed of the rock if you start to lengthen the string some?
Well, it's going to slow down a bit, right? But how can that be? There is no tangential force pushing it backwards to slow it down, right? Actually wrong. There is a tangential force. Not because you added a rocket booster to the rock, but because while the string is extending, the rock is no longer going in a circle, and the string is pulling on it with a tangential component
How is that?
Consider this corner case to help make it more clear: Suppose instead of gradually lengthening the string, you suddenly add 1 foot in length to the string instantaneously (string has no weight and does not stretch). Well, the rock is gonna go straight for a while until it hits the new string length again and start going in a circle
But when the string length is hit, there is a triangle between the straight path line segment of the rock (side a), the string when it first let go of the rock (side b), and the longer string when it catches the rock (side c, the hypotenuse of the right triangle)
So side c is the direction of the force that catches the rock again. Now, since the angle between side c and side a is not 90 degrees, there is a component of force pulling back against the movement of the rock, which slows it down
Now, instead of lengthening by 1 foot, lengthen by 0.1 foot ten times in one second. Then lengthen by 0.01 foot a hundred times, again in one second, and so forth. Keep making the lengths smaller and the number of steps larger; in the limit you get a continuous lengthening motion, and that shows you how it happens in the continuous case
When you shorten the string, the opposite happens: The string pulls in the direction of the motion of the rock and speeds it up
So it turns out that when you double the length of the rope, the speed gets cut in half. And if you cut the length of the rope in half, the speed doubles
You can work out the math yourself and verify this is true. But in the simple case, suppose you double the length of the string instantaneously. Well, then the triangle's side b divided by side c is 0.5, right? Well, that's the sine of 30 degrees
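The claim can also be checked numerically. Below is a rough Python sketch (my own toy model, not part of the original discussion): the string is treated as a very stiff one-sided spring pulling toward the hand at the origin whenever the rock is farther out than the current string length. Since that force is always radial (central), the angular momentum L = x·vy − y·vx should stay constant while the string is reeled in, and the tangential speed L/r should double when r is halved.

```python
import math

k = 1.0e4                 # spring stiffness (assumed large: "almost inextensible" string)
dt = 1.0e-4
x, y = 1.0, 0.0           # rock starts on a circle of radius 1
vx, vy = 0.0, 1.0         # ...moving tangentially at speed 1

def accel(x, y, length):
    r = math.hypot(x, y)
    f = -k * max(r - length, 0.0) / r   # tension only when the string is taut
    return f * x, f * y

L0 = x * vy - y * vx
steps = 200_000
for i in range(steps):
    length = 1.0 - 0.5 * i / steps      # reel in slowly from 1.0 to 0.5
    ax, ay = accel(x, y, length)
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay   # velocity-Verlet half kick
    x += dt * vx;        y += dt * vy          # drift
    ax, ay = accel(x, y, length)
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay   # second half kick

L1 = x * vy - y * vx
r1 = math.hypot(x, y)
print(f"L: {L0:.6f} -> {L1:.6f}, final r = {r1:.3f}, tangential speed = {L1 / r1:.3f}")
```

The radial force exerts no torque, so L is conserved to rounding error, and the tangential speed at r = 0.5 comes out very close to 2: halving the rope doubles the speed, as claimed.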
Yeah, if physics is something you want to really have fun with, it is best to really try to understand why all these formulas and rules are true. | {
"domain": "physics.stackexchange",
"id": 75941,
"tags": "newtonian-mechanics, rotational-dynamics, rotational-kinematics"
} |
Project Euler #2 (classic) - Sum of even fibonacci numbers below 4 million | Question: I'm looking to use LISP as best I can, not just get the right answer. This is very early on in my LISP career, so feedback is welcome and exciting! I recently asked about Project Euler #1, got some feedback, and incorporated it into this attempt at #2, but I know there's a LOT left to be learned!
Description of Problem #2
Each new term in the Fibonacci sequence is generated by adding the
previous two terms. By starting with 1 and 2, the first 10 terms will
be:
1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ...
By considering the terms in the Fibonacci sequence whose values do not
exceed four million, find the sum of the even-valued terms.
(defun sum (L)
"sum a list"
(reduce #'+ L))
(defun filter (L predicate)
"filters a list"
(remove-if (complement predicate) L))
(defun fiboNums (maxNum)
"list of fibonacci numbers that are below maxNum"
(setq A 1)
(setq B 2)
(setq next (+ A B))
(loop while (< B maxNum) do
(setq next (+ A B))
(setq A B)
(setq B next)
collecting A))
(defvar answer (sum (filter (fiboNums (* 4 1000 1000)) #'evenp)))
(print answer)
Answer: Notation
It is a common convention in Lisp to avoid uppercase (or mixed lowercase and uppercase) letters in identifiers (symbols), so for instance use l instead of L, or fibo-nums instead of fiboNums, etc.
Variables
If you type your function into a Common Lisp interpreter/compiler, you will receive "undefined variable" warnings for A, B and next. This is because you should introduce local variables before using them, with the let or let* special operators (let* is needed if you initialize variables with the values of other variables introduced earlier). You can then assign these variables, like any other kind of variable, with setf, which can assign several variables in sequence. So, for instance:
(defun fibo-nums (max-num)
"list of fibonacci numbers that are below max-num"
(let* ((a 1)
(b 2)
(next (+ a b)))
(loop while (< b max-num)
do (setf next (+ a b)
a b
b next)
collecting a)))
Primitive operators
The definition of filter is not necessary; your definition is equivalent to the primitive function remove-if-not.
Algorithm
In this case there is no need to generate a list of Fibonacci numbers and then traverse it; it is sufficient to sum the numbers while generating them:
(defun sum-of-even-fibo (max-num)
(loop for a = 1 then b
for b = 1 then next
for next = (+ a b)
while (< a max-num)
when (evenp a)
sum a))
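The result of this computation is easy to cross-check outside Lisp; here is a throwaway sketch of the same algorithm in Python (purely for verification, not a suggestion to leave Lisp):

```python
def sum_of_even_fib(max_num):
    """Sum the even Fibonacci numbers strictly below max_num (sequence 1, 2, 3, 5, ...)."""
    total, a, b = 0, 1, 2
    while a < max_num:
        if a % 2 == 0:
            total += a
        a, b = b, a + b
    return total

print(sum_of_even_fib(4_000_000))  # 4613732
```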
Finally, since loop is a powerful macro with a complex syntax, here is an alternative version with the simpler macro do*, whose syntax is also more “Lisp-style”:
(defun sum-of-even-fibo (max-num)
(do* ((sum 0)
(a 1 b)
(b 1 next)
(next (+ a b) (+ a b)))
((> a max-num) sum)
(when (evenp a)
(incf sum a)))) | {
"domain": "codereview.stackexchange",
"id": 23553,
"tags": "programming-challenge, lisp, common-lisp"
} |
Kinect + Openni/Freenect + Multiple machines | Question:
I've learned that ROS allows publishing/subscribing to topics over the network by using ROS_MASTER_URI. I'm trying to get my Kinect data to work in a similar fashion.
Senario:
Computer (10.0.0.22) "Master" runs roscore and rviz
Computer (10.0.0.4) "Bot" runs roslaunch openni_launch openni.launch OR roslaunch freenect_launch freenect.launch
Both computers have export ROS_MASTER_URI=http://10.0.0.22:11311
So roscore and roslaunch openni_launch openni.launch are both running fine, but things get weird with the introduction of rosrun rviz rviz. On "Bot" I see the terminal print out the following immediately when rviz boots up:
$ roslaunch openni_launch openni.launch
... logging to /home/ubuntu/.ros/log/b945d67e-f8fd-11e5-a7b3-0800270c17fb/roslaunch-ubuntu-3091.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.
started roslaunch server http://ubuntu:43440/
SUMMARY
========
PARAMETERS
* /camera/camera_nodelet_manager/num_worker_threads: 4
* /camera/depth_rectify_depth/interpolation: 0
* /camera/depth_registered_rectify_depth/interpolation: 0
* /camera/disparity_depth/max_range: 4.0
* /camera/disparity_depth/min_range: 0.5
* /camera/disparity_registered_hw/max_range: 4.0
* /camera/disparity_registered_hw/min_range: 0.5
* /camera/disparity_registered_sw/max_range: 4.0
* /camera/disparity_registered_sw/min_range: 0.5
* /camera/driver/depth_camera_info_url:
* /camera/driver/depth_frame_id: camera_depth_opti...
* /camera/driver/depth_registration: False
* /camera/driver/device_id: #1
* /camera/driver/rgb_camera_info_url:
* /camera/driver/rgb_frame_id: camera_rgb_optica...
* /rosdistro: indigo
* /rosversion: 1.11.16
NODES
/camera/
camera_nodelet_manager (nodelet/nodelet)
debayer (nodelet/nodelet)
depth_metric (nodelet/nodelet)
depth_metric_rect (nodelet/nodelet)
depth_points (nodelet/nodelet)
depth_rectify_depth (nodelet/nodelet)
depth_registered_hw_metric_rect (nodelet/nodelet)
depth_registered_metric (nodelet/nodelet)
depth_registered_rectify_depth (nodelet/nodelet)
depth_registered_sw_metric_rect (nodelet/nodelet)
disparity_depth (nodelet/nodelet)
disparity_registered_hw (nodelet/nodelet)
disparity_registered_sw (nodelet/nodelet)
driver (nodelet/nodelet)
points_xyzrgb_hw_registered (nodelet/nodelet)
points_xyzrgb_sw_registered (nodelet/nodelet)
rectify_color (nodelet/nodelet)
rectify_ir (nodelet/nodelet)
rectify_mono (nodelet/nodelet)
register_depth_rgb (nodelet/nodelet)
/
camera_base_link (tf/static_transform_publisher)
camera_base_link1 (tf/static_transform_publisher)
camera_base_link2 (tf/static_transform_publisher)
camera_base_link3 (tf/static_transform_publisher)
ROS_MASTER_URI=http://10.0.0.22:11311
core service [/rosout] found
process[camera/camera_nodelet_manager-1]: started with pid [3100]
process[camera/driver-2]: started with pid [3101]
process[camera/debayer-3]: started with pid [3102]
process[camera/rectify_mono-4]: started with pid [3103]
process[camera/rectify_color-5]: started with pid [3107]
process[camera/rectify_ir-6]: started with pid [3112]
process[camera/depth_rectify_depth-7]: started with pid [3113]
process[camera/depth_metric_rect-8]: started with pid [3121]
process[camera/depth_metric-9]: started with pid [3122]
process[camera/depth_points-10]: started with pid [3126]
process[camera/register_depth_rgb-11]: started with pid [3127]
process[camera/points_xyzrgb_sw_registered-12]: started with pid [3128]
process[camera/depth_registered_sw_metric_rect-13]: started with pid [3138]
process[camera/depth_registered_rectify_depth-14]: started with pid [3145]
process[camera/points_xyzrgb_hw_registered-15]: started with pid [3147]
process[camera/depth_registered_hw_metric_rect-16]: started with pid [3155]
process[camera/depth_registered_metric-17]: started with pid [3157]
process[camera/disparity_depth-18]: started with pid [3161]
[ INFO] [1459620448.933287052]: Initializing nodelet with 4 worker threads.
process[camera/disparity_registered_sw-19]: started with pid [3162]
process[camera/disparity_registered_hw-20]: started with pid [3174]
process[camera_base_link-21]: started with pid [3182]
process[camera_base_link1-22]: started with pid [3183]
process[camera_base_link2-23]: started with pid [3184]
process[camera_base_link3-24]: started with pid [3185]
Warning: USB events thread - failed to set priority. This might cause loss of data...
Warning: USB events thread - failed to set priority. This might cause loss of data...
Warning: USB events thread - failed to set priority. This might cause loss of data...
Warning: USB events thread - failed to set priority. This might cause loss of data...
Warning: USB events thread - failed to set priority. This might cause loss of data...
Warning: USB events thread - failed to set priority. This might cause loss of data...
[ INFO] [1459620459.529921904]: Number devices connected: 2
Warning: USB events thread - failed to set priority. This might cause loss of data...
[ INFO] [1459620459.701988258]: 1. device on bus 001:09 is a SensorKinect (2ae) from PrimeSense (45e) with serial id '0'
[ INFO] [1459620459.702705081]: 2. device on bus 001:09 is a SensorV2 (2ae) from PrimeSense (45e) with serial id 'A00365806688049A'
[ INFO] [1459620459.721341696]: Searching for device with index = 1
[FATAL] [1459620461.020970916]: Service call failed!
[FATAL] [1459620461.021271801]: Service call failed!
[FATAL] [1459620461.022621593]: Service call failed!
[FATAL] [1459620461.022621645]: Service call failed!
[FATAL] [1459620461.023807947]: Service call failed!
[FATAL] [1459620461.024677791]: Service call failed!
[FATAL] [1459620461.026253780]: Service call failed!
[FATAL] [1459620461.026253780]: Service call failed!
[FATAL] [1459620461.026506905]: Service call failed!
[FATAL] [1459620461.028658520]: Service call failed!
[FATAL] [1459620461.028924718]: Service call failed!
[FATAL] [1459620461.029465030]: Service call failed!
[FATAL] [1459620461.030274978]: Service call failed!
[FATAL] [1459620461.031086384]: Service call failed!
[FATAL] [1459620461.032245707]: Service call failed!
[FATAL] [1459620461.033092739]: Service call failed!
[FATAL] [1459620461.034566228]: Service call failed!
[camera/camera_nodelet_manager-1] process has died [pid 3100, exit code -11, cmd /opt/ros/indigo/lib/nodelet/nodelet manager __name:=camera_nodelet_manager __log:=/home/ubuntu/.ros/log/b945d67e-f8fd-11e5-a7b3-0800270c17fb/camera-camera_nodelet_manager-1.log].
log file: /home/ubuntu/.ros/log/b945d67e-f8fd-11e5-a7b3-0800270c17fb/camera-camera_nodelet_manager-1*.log
[camera/depth_points-10] process has died [pid 3126, exit code 255, cmd /opt/ros/indigo/lib/nodelet/nodelet load depth_image_proc/point_cloud_xyz camera_nodelet_manager --no-bond image_rect:=depth/image_rect_raw points:=depth/points __name:=depth_points __log:=/home/ubuntu/.ros/log/b945d67e-f8fd-11e5-a7b3-0800270c17fb/camera-depth_points-10.log].
log file: /home/ubuntu/.ros/log/b945d67e-f8fd-11e5-a7b3-0800270c17fb/camera-depth_points-10*.log
[camera/depth_metric-9] process has died [pid 3122, exit code 255, cmd /opt/ros/indigo/lib/nodelet/nodelet load depth_image_proc/convert_metric camera_nodelet_manager --no-bond image_raw:=depth/image_raw image:=depth/image __name:=depth_metric __log:=/home/ubuntu/.ros/log/b945d67e-f8fd-11e5-a7b3-0800270c17fb/camera-depth_metric-9.log].
log file: /home/ubuntu/.ros/log/b945d67e-f8fd-11e5-a7b3-0800270c17fb/camera-depth_metric-9*.log
[camera/depth_registered_sw_metric_rect-13] process has died [pid 3138, exit code 255, cmd /opt/ros/indigo/lib/nodelet/nodelet load depth_image_proc/convert_metric camera_nodelet_manager --no-bond image_raw:=depth_registered/sw_registered/image_rect_raw image:=depth_registered/sw_registered/image_rect __name:=depth_registered_sw_metric_rect __log:=/home/ubuntu/.ros/log/b945d67e-f8fd-11e5-a7b3-0800270c17fb/camera-depth_registered_sw_metric_rect-13.log].
log file: /home/ubuntu/.ros/log/b945d67e-f8fd-11e5-a7b3-0800270c17fb/camera-depth_registered_sw_metric_rect-13*.log
[camera/depth_registered_rectify_depth-14] process has died [pid 3145, exit code 255, cmd /opt/ros/indigo/lib/nodelet/nodelet load image_proc/rectify camera_nodelet_manager --no-bond image_mono:=depth_registered/image_raw image_rect:=depth_registered/hw_registered/image_rect_raw __name:=depth_registered_rectify_depth __log:=/home/ubuntu/.ros/log/b945d67e-f8fd-11e5-a7b3-0800270c17fb/camera-depth_registered_rectify_depth-14.log].
log file: /home/ubuntu/.ros/log/b945d67e-f8fd-11e5-a7b3-0800270c17fb/camera-depth_registered_rectify_depth-14*.log
[camera/points_xyzrgb_hw_registered-15] process has died [pid 3147, exit code 255, cmd /opt/ros/indigo/lib/nodelet/nodelet load depth_image_proc/point_cloud_xyzrgb camera_nodelet_manager --no-bond rgb/image_rect_color:=rgb/image_rect_color rgb/camera_info:=rgb/camera_info depth_registered/image_rect:=depth_registered/hw_registered/image_rect_raw depth_registered/points:=depth_registered/points __name:=points_xyzrgb_hw_registered __log:=/home/ubuntu/.ros/log/b945d67e-f8fd-11e5-a7b3-0800270c17fb/camera-points_xyzrgb_hw_registered-15.log].
log file: /home/ubuntu/.ros/log/b945d67e-f8fd-11e5-a7b3-0800270c17fb/camera-points_xyzrgb_hw_registered-15*.log
[camera/depth_registered_hw_metric_rect-16] process has died [pid 3155, exit code 255, cmd /opt/ros/indigo/lib/nodelet/nodelet load depth_image_proc/convert_metric camera_nodelet_manager --no-bond image_raw:=depth_registered/hw_registered/image_rect_raw image:=depth_registered/hw_registered/image_rect __name:=depth_registered_hw_metric_rect __log:=/home/ubuntu/.ros/log/b945d67e-f8fd-11e5-a7b3-0800270c17fb/camera-depth_registered_hw_metric_rect-16.log].
log file: /home/ubuntu/.ros/log/b945d67e-f8fd-11e5-a7b3-0800270c17fb/camera-depth_registered_hw_metric_rect-16*.log
[camera/depth_registered_metric-17] process has died [pid 3157, exit code 255, cmd /opt/ros/indigo/lib/nodelet/nodelet load depth_image_proc/convert_metric camera_nodelet_manager --no-bond image_raw:=depth_registered/image_raw image:=depth_registered/image __name:=depth_registered_metric __log:=/home/ubuntu/.ros/log/b945d67e-f8fd-11e5-a7b3-0800270c17fb/camera-depth_registered_metric-17.log].
log file: /home/ubuntu/.ros/log/b945d67e-f8fd-11e5-a7b3-0800270c17fb/camera-depth_registered_metric-17*.log
[camera/disparity_registered_hw-20] process has died [pid 3174, exit code 255, cmd /opt/ros/indigo/lib/nodelet/nodelet load depth_image_proc/disparity camera_nodelet_manager --no-bond left/image_rect:=depth_registered/hw_registered/image_rect_raw right:=projector left/disparity:=depth_registered/disparity __name:=disparity_registered_hw __log:=/home/ubuntu/.ros/log/b945d67e-f8fd-11e5-a7b3-0800270c17fb/camera-disparity_registered_hw-20.log].
log file: /home/ubuntu/.ros/log/b945d67e-f8fd-11e5-a7b3-0800270c17fb/camera-disparity_registered_hw-20*.log
[camera/driver-2] process has died [pid 3101, exit code 255, cmd /opt/ros/indigo/lib/nodelet/nodelet load openni_camera/driver camera_nodelet_manager --no-bond ir:=ir rgb:=rgb depth:=depth depth_registered:=depth_registered projector:=projector __name:=driver __log:=/home/ubuntu/.ros/log/b945d67e-f8fd-11e5-a7b3-0800270c17fb/camera-driver-2.log].
log file: /home/ubuntu/.ros/log/b945d67e-f8fd-11e5-a7b3-0800270c17fb/camera-driver-2*.log
[camera/rectify_color-5] process has died [pid 3107, exit code 255, cmd /opt/ros/indigo/lib/nodelet/nodelet load image_proc/rectify camera_nodelet_manager --no-bond image_mono:=rgb/image_color image_rect:=rgb/image_rect_color __name:=rectify_color __log:=/home/ubuntu/.ros/log/b945d67e-f8fd-11e5-a7b3-0800270c17fb/camera-rectify_color-5.log].
log file: /home/ubuntu/.ros/log/b945d67e-f8fd-11e5-a7b3-0800270c17fb/camera-rectify_color-5*.log
[camera/rectify_ir-6] process has died [pid 3112, exit code 255, cmd /opt/ros/indigo/lib/nodelet/nodelet load image_proc/rectify camera_nodelet_manager --no-bond image_mono:=ir/image_raw image_rect:=ir/image_rect_ir __name:=rectify_ir __log:=/home/ubuntu/.ros/log/b945d67e-f8fd-11e5-a7b3-0800270c17fb/camera-rectify_ir-6.log].
log file: /home/ubuntu/.ros/log/b945d67e-f8fd-11e5-a7b3-0800270c17fb/camera-rectify_ir-6*.log
[camera/depth_rectify_depth-7] process has died [pid 3113, exit code 255, cmd /opt/ros/indigo/lib/nodelet/nodelet load image_proc/rectify camera_nodelet_manager --no-bond image_mono:=depth/image_raw image_rect:=depth/image_rect_raw __name:=depth_rectify_depth __log:=/home/ubuntu/.ros/log/b945d67e-f8fd-11e5-a7b3-0800270c17fb/camera-depth_rectify_depth-7.log].
log file: /home/ubuntu/.ros/log/b945d67e-f8fd-11e5-a7b3-0800270c17fb/camera-depth_rectify_depth-7*.log
[camera/depth_metric_rect-8] process has died [pid 3121, exit code 255, cmd /opt/ros/indigo/lib/nodelet/nodelet load depth_image_proc/convert_metric camera_nodelet_manager --no-bond image_raw:=depth/image_rect_raw image:=depth/image_rect __name:=depth_metric_rect __log:=/home/ubuntu/.ros/log/b945d67e-f8fd-11e5-a7b3-0800270c17fb/camera-depth_metric_rect-8.log].
log file: /home/ubuntu/.ros/log/b945d67e-f8fd-11e5-a7b3-0800270c17fb/camera-depth_metric_rect-8*.log
[camera/points_xyzrgb_sw_registered-12] process has died [pid 3128, exit code 255, cmd /opt/ros/indigo/lib/nodelet/nodelet load depth_image_proc/point_cloud_xyzrgb camera_nodelet_manager --no-bond rgb/image_rect_color:=rgb/image_rect_color rgb/camera_info:=rgb/camera_info depth_registered/image_rect:=depth_registered/sw_registered/image_rect_raw depth_registered/points:=depth_registered/points __name:=points_xyzrgb_sw_registered __log:=/home/ubuntu/.ros/log/b945d67e-f8fd-11e5-a7b3-0800270c17fb/camera-points_xyzrgb_sw_registered-12.log].
log file: /home/ubuntu/.ros/log/b945d67e-f8fd-11e5-a7b3-0800270c17fb/camera-points_xyzrgb_sw_registered-12*.log
[camera/disparity_depth-18] process has died [pid 3161, exit code 255, cmd /opt/ros/indigo/lib/nodelet/nodelet load depth_image_proc/disparity camera_nodelet_manager --no-bond left/image_rect:=depth/image_rect_raw right:=projector left/disparity:=depth/disparity __name:=disparity_depth __log:=/home/ubuntu/.ros/log/b945d67e-f8fd-11e5-a7b3-0800270c17fb/camera-disparity_depth-18.log].
log file: /home/ubuntu/.ros/log/b945d67e-f8fd-11e5-a7b3-0800270c17fb/camera-disparity_depth-18*.log
[camera/register_depth_rgb-11] process has died [pid 3127, exit code 255, cmd /opt/ros/indigo/lib/nodelet/nodelet load depth_image_proc/register camera_nodelet_manager --no-bond rgb/camera_info:=rgb/camera_info depth/camera_info:=depth/camera_info depth/image_rect:=depth/image_rect_raw depth_registered/image_rect:=depth_registered/sw_registered/image_rect_raw __name:=register_depth_rgb __log:=/home/ubuntu/.ros/log/b945d67e-f8fd-11e5-a7b3-0800270c17fb/camera-register_depth_rgb-11.log].
log file: /home/ubuntu/.ros/log/b945d67e-f8fd-11e5-a7b3-0800270c17fb/camera-register_depth_rgb-11*.log
[camera/disparity_registered_sw-19] process has died [pid 3162, exit code 255, cmd /opt/ros/indigo/lib/nodelet/nodelet load depth_image_proc/disparity camera_nodelet_manager --no-bond left/image_rect:=depth_registered/sw_registered/image_rect_raw right:=projector left/disparity:=depth_registered/disparity __name:=disparity_registered_sw __log:=/home/ubuntu/.ros/log/b945d67e-f8fd-11e5-a7b3-0800270c17fb/camera-disparity_registered_sw-19.log].
log file: /home/ubuntu/.ros/log/b945d67e-f8fd-11e5-a7b3-0800270c17fb/camera-disparity_registered_sw-19*.log
With roslaunch freenect_launch freenect.launch things seem better, but shortly after it starts (and the Kinect's laser projector turns on) it shuts off with the terminal output [ INFO] [1459620527.170462997]: Stopping device RGB and Depth stream flush.
$ roslaunch freenect_launch freenect.launch
... logging to /home/ubuntu/.ros/log/b945d67e-f8fd-11e5-a7b3-0800270c17fb/roslaunch-ubuntu-3254.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.
started roslaunch server http://ubuntu:37085/
SUMMARY
========
PARAMETERS
* /camera/camera_nodelet_manager/num_worker_threads: 4
* /camera/depth_rectify_depth/interpolation: 0
* /camera/depth_registered_rectify_depth/interpolation: 0
* /camera/disparity_depth/max_range: 4.0
* /camera/disparity_depth/min_range: 0.5
* /camera/disparity_registered_hw/max_range: 4.0
* /camera/disparity_registered_hw/min_range: 0.5
* /camera/disparity_registered_sw/max_range: 4.0
* /camera/disparity_registered_sw/min_range: 0.5
* /camera/driver/data_skip: 0
* /camera/driver/debug: False
* /camera/driver/depth_camera_info_url:
* /camera/driver/depth_frame_id: camera_depth_opti...
* /camera/driver/depth_registration: False
* /camera/driver/device_id: #1
* /camera/driver/diagnostics_max_frequency: 30.0
* /camera/driver/diagnostics_min_frequency: 30.0
* /camera/driver/diagnostics_tolerance: 0.05
* /camera/driver/diagnostics_window_time: 5.0
* /camera/driver/enable_depth_diagnostics: False
* /camera/driver/enable_ir_diagnostics: False
* /camera/driver/enable_rgb_diagnostics: False
* /camera/driver/rgb_camera_info_url:
* /camera/driver/rgb_frame_id: camera_rgb_optica...
* /rosdistro: indigo
* /rosversion: 1.11.16
NODES
/camera/
camera_nodelet_manager (nodelet/nodelet)
debayer (nodelet/nodelet)
depth_metric (nodelet/nodelet)
depth_metric_rect (nodelet/nodelet)
depth_points (nodelet/nodelet)
depth_rectify_depth (nodelet/nodelet)
depth_registered_hw_metric_rect (nodelet/nodelet)
depth_registered_metric (nodelet/nodelet)
depth_registered_rectify_depth (nodelet/nodelet)
depth_registered_sw_metric_rect (nodelet/nodelet)
disparity_depth (nodelet/nodelet)
disparity_registered_hw (nodelet/nodelet)
disparity_registered_sw (nodelet/nodelet)
driver (nodelet/nodelet)
points_xyzrgb_hw_registered (nodelet/nodelet)
points_xyzrgb_sw_registered (nodelet/nodelet)
rectify_color (nodelet/nodelet)
rectify_ir (nodelet/nodelet)
rectify_mono (nodelet/nodelet)
register_depth_rgb (nodelet/nodelet)
/
camera_base_link (tf/static_transform_publisher)
camera_base_link1 (tf/static_transform_publisher)
camera_base_link2 (tf/static_transform_publisher)
camera_base_link3 (tf/static_transform_publisher)
ROS_MASTER_URI=http://10.0.0.22:11311
core service [/rosout] found
process[camera/camera_nodelet_manager-1]: started with pid [3263]
process[camera/driver-2]: started with pid [3264]
process[camera/debayer-3]: started with pid [3265]
process[camera/rectify_mono-4]: started with pid [3266]
process[camera/rectify_color-5]: started with pid [3270]
process[camera/rectify_ir-6]: started with pid [3272]
process[camera/depth_rectify_depth-7]: started with pid [3282]
process[camera/depth_metric_rect-8]: started with pid [3283]
process[camera/depth_metric-9]: started with pid [3288]
process[camera/depth_points-10]: started with pid [3293]
process[camera/register_depth_rgb-11]: started with pid [3298]
process[camera/points_xyzrgb_sw_registered-12]: started with pid [3299]
process[camera/depth_registered_sw_metric_rect-13]: started with pid [3303]
[ INFO] [1459620521.731632004]: Initializing nodelet with 4 worker threads.
process[camera/depth_registered_rectify_depth-14]: started with pid [3309]
process[camera/points_xyzrgb_hw_registered-15]: started with pid [3313]
process[camera/depth_registered_hw_metric_rect-16]: started with pid [3318]
process[camera/depth_registered_metric-17]: started with pid [3326]
process[camera/disparity_depth-18]: started with pid [3330]
process[camera/disparity_registered_sw-19]: started with pid [3331]
process[camera/disparity_registered_hw-20]: started with pid [3335]
process[camera_base_link-21]: started with pid [3336]
process[camera_base_link1-22]: started with pid [3341]
process[camera_base_link2-23]: started with pid [3342]
process[camera_base_link3-24]: started with pid [3343]
[ INFO] [1459620523.932327943]: Number devices connected: 1
[ INFO] [1459620523.933402839]: 1. device on bus 000:00 is a Xbox NUI Camera (2ae) from Microsoft (45e) with serial id 'A00365806688049A'
[ INFO] [1459620523.954331484]: Searching for device with index = 1
[ INFO] [1459620524.168844141]: Starting a 3s RGB and Depth stream flush.
[ INFO] [1459620524.169588412]: Opened 'Xbox NUI Camera' on bus 0:0 with serial number 'A00365806688049A'
[ INFO] [1459620525.224897840]: rgb_frame_id = 'camera_rgb_optical_frame'
[ INFO] [1459620525.225280027]: depth_frame_id = 'camera_depth_optical_frame'
[ WARN] [1459620525.505361798]: Camera calibration file /home/ubuntu/.ros/camera_info/rgb_A00365806688049A.yaml not found.
[ WARN] [1459620525.505736642]: Using default parameters for RGB camera calibration.
[ WARN] [1459620525.506143569]: Camera calibration file /home/ubuntu/.ros/camera_info/depth_A00365806688049A.yaml not found.
[ WARN] [1459620525.506394975]: Using default parameters for IR camera calibration.
[ INFO] [1459620527.170462997]: Stopping device RGB and Depth stream flush.
Am I doing something wrong? How can I make sure my topic is being published and is reachable over the network?
Originally posted by jacksonkr_ on ROS Answers with karma: 396 on 2016-04-02
Post score: 0
Answer:
** EDIT **
I didn't have all my facts and figures straight before writing this answer. The fact is that it doesn't matter where the master is or where the camera is. Once your settings are correct, everything should work. Make sure the following two items are addressed on ALL machines.
ROS_MASTER_URI - the IP of the machine acting as the "server"; needs to be the same on all machines
ROS_IP - the IP of the current machine, needs to be different on all machines
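A quick way to sanity-check both variables on a given machine is a small stand-alone Python sketch like the following (the function name and IP addresses are illustrative only, not part of ROS):

```python
import os

def ros_network_settings_ok(master_ip):
    """Return True if ROS_MASTER_URI points at the master and ROS_IP is set."""
    master_uri = os.environ.get("ROS_MASTER_URI", "")
    ros_ip = os.environ.get("ROS_IP", "")
    return master_ip in master_uri and ros_ip != ""

# Simulate a correctly configured machine (addresses are examples only)
os.environ["ROS_MASTER_URI"] = "http://10.0.0.22:11311"
os.environ["ROS_IP"] = "10.0.0.50"
print(ros_network_settings_ok("10.0.0.22"))  # True
```

Run it (or the equivalent `echo $ROS_MASTER_URI; echo $ROS_IP`) on every machine before launching nodes.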
--- Original Answer Below ---
I had the right idea and found out where I was wrong. The "master" is really the robot, which seems a little backward to me but now that I know that roscore needs to run on the robot I'm set straight. Here's the article that put me on the right track:
Getting RVIZ to Work Over Multiple Computers
And in case the link ever dies, here's the content of the link since it's short:
README on getting RVIZ to work over multiple computers
A common task is to SSH into the robot's computer and run RVIZ to get the laser output and other visualization. Running RVIZ directly on the remote computer will not work due to the way RVIZ is implemented. The workaround is to run RVIZ locally. To do this we need to set the local computer to locate the remote MASTER NODE in order to display the right information.
Assume:
IP: 192.168.1.0 // remote computer (robot)
IP: 192.168.1.1 // local computer (host)
*** ssh into remote computer ***
ssh -X erratic@192.168.1.0
At the remote terminal:
export ROS_MASTER_URI=http://192.168.1.0:11311 // this ensures that we do not use localhost, but the real IP address, as the master node
export ROS_IP=192.168.1.0 // this ensures that ROS knows that we cannot use the hostname directly (due to DHCP firewall issues)
roscore
At the local terminal:
export ROS_MASTER_URI=http://192.168.1.0:11311
//tells local computer to look for the
remote here
export ROS_IP=192.168.1.1 //this ensures that ROS knows that we cannot
use hostname directly (due to DHCP
firewall issues)
rosrun rviz rviz // fires up rviz on local computer. It will attach
to the master node of the remote
computer
** to check, open a remote terminal **
rxgraph
** Note: every time a new terminal is opened on the local/remote computer, we have to call the two export commands. To make this permanent, edit the ~/.bashrc file:
sudo gedit ~/.bashrc // add the two export commands at the end of the file
source ~/.bashrc // and restart the terminal
Originally posted by jacksonkr_ with karma: 396 on 2016-04-09
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 24297,
"tags": "ros, ros-master-uri, roscore, freenect-launch, openni.launch"
} |
Bit flip error correction syndrome measurements | Question: I'm running into some confusion in chapter 10.1.1 of Nielsen and Chuang. In terms of the 'recovery' procedure, how can the result of the syndrome measurement be 0, 2 or 3?
I am assuming that, for example, if the state is $a|010\rangle$ + $b|101\rangle$ (bit flip on the second qubit), then will we not see that $\langle\psi|P_2|\psi\rangle = 1$ and the rest of the projection operators will give a syndrome measurement of 0 (i.e $\langle\psi|P_1|\psi\rangle = 0$)? From this example, and working through the rest in the same manner, I am struggling to see where/how an error syndrome of 2 will be measured.
I hope my current working makes sense!
Answer: The outcome of measuring $P_2$ will not be 2, but rather the outcome of measuring the operator with a spectral decomposition given by the four syndromes and corresponding eigenvalues. Remember that $\langle \psi | P_n | \psi \rangle$ gives the probability of measuring the eigenvalue of $M$ corresponding to $P_n$, not the eigenvalue itself. As such, an outcome of $n$ is not saying that measuring $P_n$ will give $n$, but just refers to the fact that $\langle \psi | P_n | \psi \rangle = 1$, so we are certain that the syndrome is $P_n$.
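To make this concrete, here is a small pure-Python sanity check (with made-up amplitudes $a = 0.6$, $b = 0.8$, chosen so $a^2 + b^2 = 1$) computing all four syndrome probabilities $\langle\psi|P_n|\psi\rangle$ for the state $a|010\rangle + b|101\rangle$:

```python
# Amplitudes of |psi> = a|010> + b|101> over the 3-qubit basis strings;
# a and b are made-up values with a^2 + b^2 = 1.
a, b = 0.6, 0.8
psi = {"010": a, "101": b}

# Each syndrome outcome n corresponds to a projector onto a 2-dim subspace
syndromes = {
    0: ("000", "111"),  # no error
    1: ("100", "011"),  # bit flip on qubit one
    2: ("010", "101"),  # bit flip on qubit two
    3: ("001", "110"),  # bit flip on qubit three
}

# <psi|P_n|psi> = sum of |amplitude|^2 over the strings P_n projects onto
probs = {n: sum(psi.get(s, 0.0) ** 2 for s in strings)
         for n, strings in syndromes.items()}
print(probs)  # syndrome 2 has probability 1, the rest 0
```

Only $\langle\psi|P_2|\psi\rangle$ equals 1, so the measured syndrome is 2 with certainty, exactly as the working in the question suggests.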
As for how to perform this syndrome measurement in practice, look at page 430. | {
"domain": "quantumcomputing.stackexchange",
"id": 4140,
"tags": "textbook-and-exercises, error-correction, nielsen-and-chuang"
} |
Origin of Virtual Molecular Orbitals in Hartree-Fock equations | Question: Let's look at this scheme:
Do I understand correctly that the origin of the virtual orbitals is that they are eigenfunctions of the (for example, restricted) HF equations?
Answer: Yes, they result from the HF equations just like the occupied orbitals. The diagram should also show them as present in the initial set of equations, unless the basis (used for the LCAO) was changed between the two states shown. The number of occupied orbitals plus the number of virtual orbitals equals the basis set size, because they are the eigensolutions of
$$
\mathbf{F}\mathbf{C} = \mathbf{\epsilon}\mathbf{S}\mathbf{C},
$$
where $\mathbf{F}$ is the Fock matrix, $\mathbf{C}$ is the matrix of orbital coefficients, $\mathbf{\epsilon}$ are the eigenvalues which are the orbital energies, and $\mathbf{S}$ is the atomic orbital overlap matrix (because the basis set used is typically not orthogonal). The dimension of all matrices involved is determined by the size of the basis set, which will be finite because otherwise the computational effort becomes infinite. | {
"domain": "chemistry.stackexchange",
"id": 16592,
"tags": "quantum-chemistry, molecular-orbital-theory"
} |
OFDM Query : Why In Phase and Quadrature Phase RF Conversion is needed? | Question: OFDM basic query: x[n] produced at the output of the IFFT is in discrete time, so we can use a DAC and then RF upconversion (multiply it with an LO frequency, say f) and send it. Why are we doing the RF upconversion in in-phase and quadrature phase, then adding the two and sending the sum? Can someone please explain.
Answer: Because the output of the IFFT operation is a vector of complex numbers. An RF signal is a real thing, so you can't just multiply a sine wave by a complex number and get something sensible.
However, you can map a complex number onto an RF carrier by multiplying the real part by $\cos \omega t$, and the imaginary part by $\sin \omega t$. This is I/Q modulation. The result is not complex -- it's all real. But the resulting arithmetic that you do on it, at the transmit and the receive end, is the same as if it were a complex number.
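A tiny numerical sketch (the sample value and carrier frequency are made up) shows that this I/Q mapping is exactly taking the real part of $x\,e^{j\omega t}$, so the transmitted signal is purely real even though the arithmetic is complex. Note it uses the common sign convention $I\cos\omega t - Q\sin\omega t$:

```python
import cmath
import math

fc = 5.0          # illustrative carrier frequency in Hz (made-up value)
x = 0.6 - 0.8j    # one complex baseband sample, e.g. an IFFT output
t = 0.123         # an arbitrary time instant

# I/Q upconversion: real part rides on cos, imaginary part on sin
passband = (x.real * math.cos(2 * math.pi * fc * t)
            - x.imag * math.sin(2 * math.pi * fc * t))

# Same thing in complex notation: passband = Re{ x * exp(j*2*pi*fc*t) }
equiv = (x * cmath.exp(2j * math.pi * fc * t)).real

print(abs(passband - equiv) < 1e-12)  # True: the transmitted signal is real
```

The receiver undoes this by mixing with the same cosine and sine and low-pass filtering, recovering I and Q separately.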
That's the beauty and convenience of I/Q modulation and demodulation. | {
"domain": "dsp.stackexchange",
"id": 10785,
"tags": "digital-communications, ofdm, lte, communication-standard"
} |
Can context-free grammar generate $a^{2^n}$? | Question: A context-free grammar can generate the strings $a^{2^n}$ for $n \geq 0$.
The production rule P is S → SS | a.
The derivations is, for example:
1) S⇒a (this is when n = 0)
2) S⇒SS⇒aa (this is when n = 1)
3) S⇒SS⇒SSSS⇒aaaa (this is when n = 2)
4) S⇒SS⇒SSSS⇒SSSSSSSS⇒aaaaaaaa (this is when n = 3).
Am I right?
Someone told me yes, but that it takes a very long time for big enough n;
someone told me no - that this grammar actually generates $a^+$ because it can be $S \Rightarrow SS \Rightarrow SSS \Rightarrow aaa$.
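That second claim is easy to confirm by brute force. This little sketch (added for illustration) enumerates every terminal string the grammar $S \to SS \mid a$ can derive up to length 5, and $a^3$ shows up alongside the powers of two:

```python
from collections import deque

# Brute-force enumeration of terminal strings derivable from S -> SS | a.
# Expansions never shorten a sentential form, so pruning at max_len is safe.
def derivable(max_len=5):
    seen, terminal = set(), set()
    queue = deque(["S"])
    while queue:
        form = queue.popleft()
        if form in seen or len(form) > max_len:
            continue
        seen.add(form)
        if "S" not in form:
            terminal.add(form)
            continue
        i = form.index("S")  # expand the leftmost nonterminal
        for rhs in ("SS", "a"):
            queue.append(form[:i] + rhs + form[i + 1:])
    return sorted(terminal)

print(derivable())  # ['a', 'aa', 'aaa', 'aaaa', 'aaaaa']
```

So the grammar generates all of $a^+$, not the language $\{a^{2^n}\}$.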
So, does this mean a context-free grammar cannot generate the strings $a^{2^n}$?
Does there exist any other context-free grammar that can generate this language?
Answer: It is well known (and not very difficult to prove) that a context-free language over a unary alphabet $\{a\}$ is regular.
Thus, your question is essentially, "is $\{a^{2^n}:n\in \mathbb N\}$ regular?"
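For intuition on why the answer will be no: in a unary language only string lengths matter, and the gaps between consecutive lengths of $\{a^{2^n}\}$ grow without bound, so pumping a fixed-length block would produce lengths outside the language. A one-liner makes the growth visible:

```python
# Lengths in the language are 1, 2, 4, 8, ...; the gaps between consecutive
# lengths grow without bound, so no fixed pump length can work.
powers = [2 ** n for n in range(10)]
gaps = [b - a for a, b in zip(powers, powers[1:])]
print(gaps)  # [1, 2, 4, 8, 16, 32, 64, 128, 256]
```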
And the answer to that is no (easy to prove using the pumping lemma). | {
"domain": "cs.stackexchange",
"id": 5359,
"tags": "formal-languages, context-free, formal-grammars"
} |
Lambda: The Ultimate Imperative - who is Jensen? | Question: One of the notes in the classical paper LAMBDA: The Ultimate Imperative says:
{Jensensdevice}
The technique of repeatedly modifying a variable passed call-by-name in order to produce side effects on another call-by-name parameter is commonly known as Jensen's device, particularly in the case where call-by-name parameters are j and a[j]. We cannot find any reference to Jensen or who he was, and offer a reward for any information leading to the identification, arrest, and conviction of said Jensen.
Today, after almost 40 years since the memo was published, do we know the identity of Jensen? I'm not hoping for the reward, just curious.
Answer: Jensen's device was developed by Jørn Jensen, who worked on one of the earliest ALGOL 60 compilers (this answer is based on a comment by Kristoffer Arnsfelt Hansen).
"domain": "cstheory.stackexchange",
"id": 1891,
"tags": "ho.history-overview"
} |
How do muscle relaxants work? | Question: Do they act directly on the muscle and actually relax muscle tissue and ease spasms, or do they just prevent your brain from receiving signals that inform you of tight muscles?
In the latter case, it seems like your brain would sort of become "immune" to the pain even though the muscle is still in a spasm.
I'm obviously not a biologist, just curious about this.
How do muscle relaxants work?
Answer: Check out the muscle relaxant article on Wikipedia; it's pretty straightforward. In short, there are two main types: neuromuscular blockers, which act at the junction between the neuron and the muscle; and spasmolytics/antispasmodics, which (mainly) act on the central nervous system to reduce excitation or increase inhibition. Most of the ones I've heard of, such as diazepam, are in the latter category. | {
"domain": "biology.stackexchange",
"id": 1354,
"tags": "human-anatomy, neurotransmitter, muscles, pain, tissue"
} |
how to interlink arduino with gazebo? | Question:
Hi, I have just completed the rosserial_arduino tutorials. In time, I realised that the answer to my question above is through a flow like arduino -> rosserial_arduino -> ros -> rqt -> rviz -> gazebo. However, I'm not quite sure whether this is the right track or not. Is it? If not, how can I do it?
I found the rosserial_arduino tutorials very helpful (including the example mentioned under it). But when I tried to go for rqt or tf or gazebo, I sometimes found myself losing track. So can anybody tell me the proper way to do it? Aren't there simpler tutorials available?
My final goal is to move a 2-DOF robot built using an Arduino (and simple servo motors, let's say) and get its complete geometric simulation in real time as well as offline. I'm using Ubuntu 14, ROS Indigo, Gazebo 2.2.
Originally posted by baiju on ROS Answers with karma: 71 on 2015-06-13
Post score: 0
Answer:
First of all I'm sorry that you found some tutorials not so simple, but I suggest you split questions into smaller pieces. Otherwise I'm afraid that we don't know exactly what problem you're facing.
To point out one thing,
arduino --> rosserial_arduino --> ros --> rqt --> rviz --> gazebo
I'd say the link between arduino and Gazebo can become like the following (despite the fact I don't exactly know what you're trying):
arduino --> rosserial_arduino --> ros --> gazebo
--> RViz
--> rqt
You don't necessarily have to go through rqt and RViz in order to connect to Gazebo, since they are GUIs.
Originally posted by 130s with karma: 10937 on 2015-06-14
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by baiju on 2015-06-15:
I have: Arduino connected to a stepper motor; pen attached to the motor shaft; I control the rotation of the shaft through the rosserial_arduino monitor.
I want: simulation of a single rod/bar rotating in real time & control of it in a GUI.
Future: I will convert it to PID control, feedback, more links...
Which should I follow?
"domain": "robotics.stackexchange",
"id": 21917,
"tags": "arduino, gazebo, rviz, rosserial, rqt"
} |
Why is it so common to initialize weights with a Gaussian distribution divided by the square root of the number of neurons in a layer? | Question: I have seen in several jupyter notebooks people initializing the NN weights using:
np.random.randn(D, M) / np.sqrt(D)
Other times they just do:
np.random.randn(D, M)
What is the advantage of dividing the Gaussian distribution by the square root of the number of neurons in the layer?
Thanks
Answer: I think they use the Xavier/Glorot's initialization method. You can read from the original paper:
We initialized the biases to be 0 and the weights $W_{ij}$ at each layer with the following commonly used heuristic:
$W_{ij} \sim U [ -\frac{1}{\sqrt{n}}, \frac{1}{\sqrt{n}}] $
where $U[−a, a]$ is the uniform distribution in the interval $(−a, a)$ and $n$ is the size of the previous layer (the number of columns of $W$)
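The motivation behind such fan-in scaling is easy to see numerically: without a $1/\sqrt{n}$ factor, the pre-activation standard deviation of each unit grows like the square root of the fan-in, which saturates activations like tanh or sigmoid. A quick pure-Python sketch (layer sizes are made up for illustration) comparing the two choices from the question:

```python
import random
import statistics

random.seed(1)
D = 400                                  # fan-in of the layer (made up)
x = [random.gauss(0, 1) for _ in range(D)]

def preactivations(weight_std, n_units=500):
    # Each output unit computes y = sum_i w_i * x_i, w_i ~ N(0, weight_std^2)
    return [sum(random.gauss(0, weight_std) * xi for xi in x)
            for _ in range(n_units)]

unscaled = statistics.stdev(preactivations(1.0))           # roughly sqrt(D) = 20
scaled = statistics.stdev(preactivations(1.0 / D ** 0.5))  # roughly 1
print(round(unscaled), round(scaled, 2))
```

Dividing by $\sqrt{D}$ keeps the pre-activation standard deviation near 1 regardless of the layer width.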
Some people use this because some reports found that this initialization method leads to better results. | {
"domain": "ai.stackexchange",
"id": 1129,
"tags": "neural-networks"
} |
transfer learning with sentiment analysis? | Question: The question is how good and what are some things to keep in mind when sentiment analysis models are tested on different datasets than they are trained on.
Say the task is to perform sentiment analysis on product reviews (an unlabeled dataset) - to classify positive, negative or neutral. Because the data is unlabeled, a model can be trained (perhaps using logistic regression or a NN) on a similar labeled dataset (say movie reviews, or product reviews) and tested on the original unlabeled dataset.
Will something like this work? Because the product names that occur in the unlabeled dataset will not be words that the model was exposed to during training, will these words possibly throw off the model at test time?
Answer: I can not fully answer your questions, but would like to offer a couple of my thoughts here:
1) Transfer learning for sentiment analysis can be hard, given that knowledge learned from one topic may not be broad or general enough to perform well on the target or downstream tasks. For example, I recently trained a neural network along with a Word2Vec embedding using Twitter airline customer review data and got a prediction accuracy of 77%. However, when I used the same Word2Vec and neural network to classify some general customer review data, I got a prediction accuracy of only 35%.
2) Transfer learning in natural language processing is a hot topic and many researchers have been working on it recently. 2018 saw some breakthroughs in transfer learning, e.g., the Google Universal Sentence Encoder, the BERT algorithm, etc. I cannot give you a comprehensive list here since I'm also learning. I would suggest you dive into some blog articles or even the original research articles to get a better understanding.
Hope it helps. | {
"domain": "datascience.stackexchange",
"id": 4510,
"tags": "word-embeddings, sentiment-analysis, nlp"
} |
Why is friction zero when wheel slip is zero? | Question: In most graphs, when the tire doesn't slip, the friction is zero; for example, see the image below
Why is there no friction when there is no slip? As a car stands still on a slope, there is no slip, but there is still friction holding the car against gravity
Answer: For an (idealized) perfectly round wheel on a perfectly smooth road, there is only a single point of contact between the wheel and the road at any given time. If you were to plot the motion of a single point on the wheel's surface as it goes around and then touches the ground, you would see that it follows a curve called a cycloid. The picture in that wikipedia article explains it better than I possibly could.(*) As you can see from the image, the point on the wheel's surface is actually changing directions as it touches the road, so at that point in time its instantaneous velocity is zero. Because it is stationary relative to the road, there is no kinetic friction.
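Differentiating the cycloid makes this concrete. Here's a small sketch (the radius and spin rate are made-up values) evaluating the rim point's velocity; at the instant it touches the road its velocity is exactly zero, while at the top of the wheel it moves at twice the car's speed:

```python
import math

R, omega = 0.3, 10.0   # wheel radius (m) and spin rate (rad/s), made-up values
v = R * omega          # forward speed under rolling without slipping

def rim_point_velocity(t):
    # Velocity of a point on the rim tracing the cycloid
    # x(t) = v*t - R*sin(omega*t), y(t) = R - R*cos(omega*t)
    vx = v - R * omega * math.cos(omega * t)
    vy = R * omega * math.sin(omega * t)
    return vx, vy

print(rim_point_velocity(0))                # (0.0, 0.0): at the contact point
print(rim_point_velocity(math.pi / omega))  # top of the wheel, speed 2*v
```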
However, there can still be static friction, such as if you're driving the car around a curve. In that case, it's static friction on the wheel that prevents you from slipping and keeps you following the curved path. (Or static friction plus a contribution from gravity if the curve is banked.)
There is also static friction between the wheels and the road that causes the car to accelerate in the first place. (I'm assuming it starts from rest.) If friction between wheels and ground were zero, the wheels would spin in place but the car would never go anywhere.
(*) The picture makes it very clear, but if you prefer a verbal explanation: The wheel as a whole is moving forward (relative to the road), but when the point on the wheel's surface is at the bottom of its rotation, it's moving backward relative to the center of the wheel. The result is that the point on the surface of the wheel is stationary (relative to the road) when it's at the bottom of its rotation. | {
"domain": "physics.stackexchange",
"id": 23412,
"tags": "everyday-life, friction"
} |
TF2 lookupTransform problem in Qt5 | Question:
When I was writing a Qt5 widget using tf2, I got all zeros when printing the Translation and Rotation.
I reviewed some similar questions and modified my program; it is written inside the UI's constructor, like this:
tf2_ros::TransformListener tfListener(tfBuffer, nh_);
connect(ui->testButton, SIGNAL(clicked()), this, SLOT(testTf2()));
try{
if(tfBuffer.canTransform("map", "base_link", ros::Time::now(), ros::Duration(1)))
ts = tfBuffer.lookupTransform("map", "base_link", ros::Time::now(), ros::Duration(0));
} catch (tf2::TransformException &ex) {
ROS_WARN("%s", ex.what());
}
And in testTf2(), I print ts.
qDebug() << ts.transform.translation.x;
qDebug() << ts.transform.rotation.w;
etc.
I had set the target_time and timeout parameters and, just in case, added a canTransform call, but I still got 0.
With rosrun tf tf_echo map base_link print:
- Translation: [-0.478, 0.148, 0.017]
- Rotation: in Quaternion [0.000, 0.000, -0.802, 0.597]
in RPY (radian) [0.000, 0.000, -1.862]
in RPY (degree) [0.000, 0.000, -106.708]
I also ran rosrun tf2_tools view_frames.py, and it seems good.
I want to know if my method is wrong, or the parameters set are wrong?
Originally posted by TifferPelode on ROS Answers with karma: 96 on 2018-06-28
Post score: 1
Answer:
I see two issues.
The first is that you appear to be constructing the Listener immediately before trying to query transforms from it, but you're not giving it any time to populate before you query the buffer. Without that, your lookup will likely not be lucky enough to get exactly the right transform data during the timeout for your canTransform call.
It makes sense to have the TransformListener initialized in a constructor, but it should be persistent and continue in the background, not live on the stack. Otherwise it will stop listening when your constructor completes. And secondly, your queries for the value should be done in a way that will retry, or fail and try again the next time around.
If you do get lucky enough for the canTransform call to succeed you most likely won't be able to execute the lookup transform because you're querying it with a different timestamp. If you want to use this construct you should compute the time at which you want to run your query and then use that same timestamp in the subsequent lookup.
However, I'll suggest that you use the lookupTransform method with the timeout integrated, to avoid this issue altogether.
Originally posted by tfoote with karma: 58457 on 2018-06-29
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by TifferPelode on 2018-06-30:
Thank you for your detailed explanation.
In fact, if I change the Duration parameter in canTransform, it throws a time error just like you described.
I'm trying to create a thread to keep the listener running. I was using waitForTransform to wait in tf1, so it may take some time to understand tf2.
Comment by TifferPelode on 2018-06-30:
lookupTransform("map", "base_link", ros::Time::now(), ros::Duration(1));
It delays for 0.15 s; how can I deal with that?
Lookup would require extrapolation into the past. Requested time 1530356857.364972440 but the earliest data is at time 1530356857.509407043 | {
"domain": "robotics.stackexchange",
"id": 31112,
"tags": "ros, ros-kinetic, qt5, tf2, transform"
} |
DataTable to given type T simple mapper | Question: I threw this together and would love any suggestions on how to improve on this simple DataTable to given Type T mapper. Anything from coding conventions to speed optimizations or "you're doing something stupid".
public class test {
static ISXLog log = new XLog();
public void runTest() {
var data = RetrieveDataSet(/*parameters not important*/);
AddressType[] addr = mapObjects<AddressType>(data.Tables[0]);
}
// overload for optional function parameter
public static T[] mapObjects<T>(DataTable dt ) where T : new(){
return mapObjects<T>(dt, (p => p.IsCollection()));
}
// mapObjects from DataTable to Type T
public static T[] mapObjects<T>(DataTable dt, Func<PropertyInfo,bool>propRestriction) where T : new() {
T[] newobjs;
var Rows = dt.Rows;
newobjs = new T[Rows.Count];
var MField = typeof(DataRowExtensions)
.GetMethod(@"Field", new[] { typeof(DataRow), typeof(string) });
for (int i = 0; i < Rows.Count; i++) {
DataRow dbobj = Rows[i];
var obj = newobjs[i] = new T();
var objProps = obj.GetType().GetProperties();
foreach (var prop in objProps) {
try {
if (!dbobj.Table.Columns.Contains(prop.Name) || propRestriction(prop)) {
log.Debug("Nothing to set for property: {0}", prop.Name);
}else{
MField = MField.GetGenericMethodDefinition()
.MakeGenericMethod(prop.PropertyType);
var objval = MField.Invoke(null, new object[] { dbobj, prop.Name });
prop.SetValue(obj, objval, null);
log.Debug("Set property '{0}' to value '{1}'", prop.Name, objval);
}
}
catch (Exception e) {
log.DebugException("Error occured while trying to set prop: " + prop.Name, e);
}
}
}
return newobjs;
}
public static class PropertyInfoExtensions
{
public static bool IsCollection(this PropertyInfo property) {
return (!typeof(String).Equals(property.PropertyType) &&
typeof(IEnumerable).IsAssignableFrom(property.PropertyType));
}
}
Answer: Naming
Method names should use PascalCase based on the Method Naming Guidelines, so
public static T[] mapObjects<T>()
should be
public static T[] MapObjects<T>()
or better, as the method is mapping datarows of a datatable to objects T
public static T[] MapTo<T>()
Variables should use camelCase and should be as descriptive as possible, so they are readable/understandable for Mr. Maintainer.
T[] newobjs;
var Rows = dt.Rows;
DataRow dbobj = Rows[i];
should be, e.g.:
T[] newObjs;
var rows = dt.Rows;
DataRow row = rows[i];
Bug
var Rows = dt.Rows;
newobjs = new T[Rows.Count];
You are creating a new array of T whose dimension is one too big. This should be
var Rows = dt.Rows;
newobjs = new T[Rows.Count -1];
but now you also need to check whether the Rows.Count property is 0.
Avoidable problems
As the method taking a Func<PropertyInfo, bool> parameter is public, you should check whether that parameter is null
A retrieved column value can also be DBNull, so insert a DBNull check
Optimization
As the columns of the datatable won't change you don't need to query them for each
datarow
This should be an extension method for DataTable
Taking all this into account, the refactored methods result in
public class TestDataTableExtensions
{
public static void runTest()
{
var data = RetrieveDataSet(/*parameters not important*/);
AddressType[] addr = data.Tables[0].MapTo<AddressType>();
}
}
public static class DataTableExtensions
{
private static ILogger log = Logger.GetLogger();
public static T[] MapTo<T>(this DataTable dt) where T : new()
{
return dt.MapTo<T>(p => p.IsCollection());
}
public static T[] MapTo<T>(this DataTable dt, Func<PropertyInfo, bool> propertyRestriction) where T : new()
{
T[] mappedObjects = null;
var rows = dt.Rows;
if (rows.Count == 0)
{
return mappedObjects;
}
mappedObjects = new T[rows.Count - 1];
var methodInfo = typeof(DataRowExtensions)
.GetMethod(@"Field", new[] { typeof(DataRow), typeof(string) });
DataColumnCollection columns = dt.Columns;
for (int i = 0; i < rows.Count; i++)
{
DataRow row = rows[i];
var currentObject = mappedObjects[i] = new T();
var properties = currentObject.GetType().GetProperties();
foreach (var property in properties)
{
String propertyName = property.Name;
try
{
if (!columns.Contains(propertyName) ||
(propertyRestriction != null && propertyRestriction(property)))
{
log.Debug("Nothing to set for property: {0}", propertyName);
}
else
{
methodInfo = methodInfo.GetGenericMethodDefinition()
.MakeGenericMethod(property.PropertyType);
var objval = methodInfo.Invoke(null, new object[] { row, propertyName });
property.SetValue(currentObject,
!DBNull.Value.Equals(objval) ? objval : null,
null);
log.Debug("Set property '{0}' to value '{1}'", propertyName, objval);
}
}
catch (Exception e)
{
log.DebugException("Error occured while trying to set prop: " + property.Name, e);
}
}
}
return mappedObjects;
}
}
public static class PropertyInfoExtensions
{
public static bool IsCollection(this PropertyInfo property)
{
return (!typeof(String).Equals(property.PropertyType) &&
typeof(IEnumerable).IsAssignableFrom(property.PropertyType));
}
}
public interface ILogger
{
void Debug(String message, params Object[] objects);
void DebugException(String message, Exception e);
}
public class Logger:ILogger
{
private static ISXLog logger = new XLog();
public static ILogger GetLogger()
{
return new Logger();
}
void ILogger.Debug(string message, params object[] objects)
{
logger.Debug(message, objects);
}
void ILogger.DebugException(string message, Exception e)
{
logger.DebugException(message, e);
}
}
I just couldn't resist beautifying this, and therefore I borrowed the following parts from paritosh's answer.
If you, the OP, are considering accepting my answer because of the parts borrowed from paritosh's answer, then don't do it.
var mappedObjectProperties = typeof(T).GetProperties()
.Where(elem => !propRestriction(elem)).ToList();
var mappedObjectCollection = new List<T>();
After renaming mappedObjectProperties to properties (though it is also a good name), adjusting propRestriction to fit the parameter name of our implementation, and renaming mappedObjectCollection to mappedObjects, we need to add a check for propertyRestriction != null.
IEnumerable<PropertyInfo> properties =
((propertyRestriction != null) ?
typeof(T).GetProperties().Where(elem => !propertyRestriction(elem)) :
typeof(T).GetProperties());
But as always we can do better. Why shouldn't we restrict the properties to only the column names of the datatable?
IEnumerable<PropertyInfo> properties =
((propertyRestriction != null) ?
typeof(T).GetProperties().Where(elem => !propertyRestriction(elem)) :
typeof(T).GetProperties())
.Where(prop => dt.Columns.Contains(prop.Name));
Next we add a check for either !properties.Any() or dt.Rows.Count == 0, and if one of these is true, we are finished.
But why shouldn't we also add an extension method for a DataRow? We just do it.
As we need the name of the property to access the value of the data column and also for logging, we add a String propertyName
public static T MapTo<T>(this DataRow dataRow, IEnumerable<PropertyInfo> properties) where T : new()
{
T currentObject = new T();
foreach (PropertyInfo property in properties)
{
String propertyName = property.Name;
try
{
var value = dataRow[propertyName];
property.SetValue(currentObject,
!DBNull.Value.Equals(value) ? value : null,
null);
log.Debug("Set property '{0}' to value '{1}'", propertyName, value);
}
catch (Exception e)
{
log.DebugException("Error occured while trying to set prop: " + property.Name, e);
}
}
return currentObject;
}
}
Now we can simplify the DataTable's extension method to
public static T[] MapTo<T>(this DataTable dt, Func<PropertyInfo, bool> propertyRestriction) where T : new()
{
IList<T> mappedObjects = new List<T>();
IEnumerable<PropertyInfo> properties =
((propertyRestriction != null) ?
typeof(T).GetProperties().Where(elem => !propertyRestriction(elem)) :
typeof(T).GetProperties())
.Where(prop => dt.Columns.Contains(prop.Name));
if (!properties.Any() || dt.Rows.Count == 0) { return mappedObjects.ToArray(); }
foreach (DataRow dataRow in dt.Rows)
{
mappedObjects.Add(dataRow.MapTo<T>(properties));
}
return mappedObjects.ToArray();
}
Here we go. The complete class:
public static class DataMappingExtensions
{
private static ILogger log = Logger.GetLogger();
public static T[] MapTo<T>(this DataTable dt) where T : new()
{
return dt.MapTo<T>(null);
}
public static T[] MapTo<T>(this DataTable dt, Func<PropertyInfo, bool> propertyRestriction) where T : new()
{
IList<T> mappedObjects = new List<T>();
IEnumerable<PropertyInfo> properties =
((propertyRestriction != null) ?
typeof(T).GetProperties().Where(elem => !propertyRestriction(elem)) :
typeof(T).GetProperties())
.Where(prop => dt.Columns.Contains(prop.Name));
if (!properties.Any() || dt.Rows.Count == 0) { return mappedObjects.ToArray(); }
foreach (DataRow dataRow in dt.Rows)
{
mappedObjects.Add(dataRow.MapTo<T>(properties));
}
return mappedObjects.ToArray();
}
public static T MapTo<T>(this DataRow dataRow, IEnumerable<PropertyInfo> properties) where T : new()
{
T currentObject = new T();
foreach (PropertyInfo property in properties)
{
String propertyName = property.Name;
try
{
var value = dataRow[propertyName];
property.SetValue(currentObject,
!DBNull.Value.Equals(value) ? value : null,
null);
log.Debug("Set property '{0}' to value '{1}'", propertyName, value);
}
catch (Exception e)
{
log.DebugException("Error occurred while trying to set prop: " + property.Name, e);
}
}
return currentObject;
}
} | {
"domain": "codereview.stackexchange",
"id": 8965,
"tags": "c#, performance, asp.net, properties"
} |
On covariant derivative | Question: Let us denote a 1 form on manifold M with $\eta$ which in a chart looks like $\eta=\eta_{\mu}dx^{\mu}$ where $\eta_{\mu}$ are smooth functions on M. Now given the coordinate vector fields $\frac{\partial}{\partial x^{\mu}}$, $$\nabla_{\nu}\eta\equiv\nabla_{\frac{\partial}{\partial x^{\nu}}}\eta$$ is a (0,1) tensor field, i.e, another 1 form. so, it makes sense to talk about $(\nabla_{\nu}\eta)_{\mu}$ and I can see that (after acting $\nabla_{\nu}\eta$ on a coordinate vector field) the following holds
$$(\nabla_{\nu}\eta)_{\mu}=\nabla_{\nu}\eta_{\mu}-\Gamma^{\rho}_{\mu\nu}\eta_{\rho}.$$ Note that since $\eta_{\mu}$ are smooth functions on M, $\nabla_{\nu}\eta_{\mu}=\eta_{\mu,\nu}$ are ordinary partial derivatives of $\eta_{\mu}$. The above equation can be rewritten as: $$\eta_{\mu;\nu}=\eta_{\mu,\nu}-\Gamma^{\rho}_{\mu\nu}\eta_{\rho}.$$ My question is why in the physics/general relativity literature then $\nabla_{\nu}\eta_{\mu}$ denotes the covariant derivative of the 1 form $\eta$ along the coordinate vector field $\frac{\partial}{\partial x^{\nu}}$? The covariant derivative of $\eta$ along $\frac{\partial}{\partial x^{\nu}}$, denoted by $\nabla_{\nu}\eta$ is a (0,1) tensor field whose components are denoted by $(\nabla_{\nu}\eta)_{\mu}$ (the left hand side of the second equation above) where as $\nabla_{\nu}\eta_{\mu}$ are mere partial derivatives of the component functions $\eta_{\mu}$.
Answer: Usually $\nabla_\mu$ is not taken to be an operator by itself. Or, you can take it to be an operator, with the understanding that certain indices are tensor indices and certain indices are counting indices, and when $\nabla_\mu$ acts on something with tensor indices, it means "take the abstract covariant derivative, then take the components".
So $\nabla_\mu A^\nu$ is shorthand notation for $(\nabla A)_{\mu}^{\ \nu}$.
A convention that is probably not widespread, but what I personally use (if I have to mix Ricci calculus with modern differential geometry), is to use two separate notations for covariant derivatives: the $\nabla_X Y$ notation for invariant expressions and $\nabla_\mu A^\nu$ for component expressions. In this sense, $$ \nabla_\mu A^\nu=\partial_\mu A^\nu +\Gamma^\nu_{\mu\sigma}A^\sigma $$ but $$ \nabla_{\partial_\mu}A^\nu=\partial_\mu A^\nu. $$
The two don't mix, in the sense that I'll never write something such as $$ \nabla_\mu A $$ where $ A=A_\mu dx^\mu $, and I'll never write something such as $\nabla_X A^\mu$, unless I actually mean $X(A^\mu)=X^\nu\partial_\nu A^\mu$.
"domain": "physics.stackexchange",
"id": 43956,
"tags": "general-relativity, differential-geometry, tensor-calculus, notation, differentiation"
} |
Is model order of a model class (for example, polynomial regression class) a hyperparameter or a tuning parameter? | Question: We know that in ML we have tuning parameters and hyperparameters.
Is model order of a model class (for example, polynomial regression class) a hyperparameter or a tuning parameter?
Answer: I assume that by "tuning parameter" you mean, e.g., the weights of a neural net. These are parameters that can be learned from data. A hyperparameter, however, cannot (or can only indirectly) be estimated from your training data. Still, during hyperparameter optimization you can determine which of these hyperparameters results in a model that is more (or less) appropriate to describe the data -- or you can simply make an educated guess about that parameter. In this sense, the polynomial degree of a regression model is a hyperparameter that needs to be determined before you train the model and, e.g., make predictions or do inference with it.
"domain": "ai.stackexchange",
"id": 3999,
"tags": "machine-learning, validation"
} |
Why can't a single photon produce an electron-positron pair? | Question: In reading through old course material, I found the assignment (my translation):
Show that a single photon cannot produce an electron-positron pair, but needs additional matter or light quanta.
My idea was to calculate the wavelength required to contain the required energy ($1.02$ MeV), which turned out to be $1.2\times 10^{-3}$ nm, but I don't know about any minimum wavelength of electromagnetic waves. I can't motivate it with the conservation laws for momentum or energy either.
How to solve this task?
Answer: Another way of solving such problems is to go to another reference frame, where you obviously don't have enough energy.
For example you've got a $5 MeV$ photon, so you think that there is plenty of energy to make $e^-e^+$ pair. Now you make a boost (a change by a constant velocity to another inertial reference frame) along the direction of the photon momentum with $v=0.99\,c$ and you get a $0.35 MeV$ photon. That is not enough even for one electron. | {
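The boost argument can be checked numerically. A minimal sketch of the Doppler-shift computation (the 5 MeV photon and $v=0.99\,c$ are the figures quoted above; the pair-production threshold $2m_ec^2\approx 1.022$ MeV comes from the question):

```python
import math

m_e = 0.511    # electron rest energy in MeV
E_lab = 5.0    # photon energy in the original frame, MeV (figure from the answer)
beta = 0.99    # boost speed along the photon's direction, in units of c

# Boosting along the photon's direction of motion red-shifts its energy
# by the relativistic Doppler factor sqrt((1 - beta) / (1 + beta)).
E_boosted = E_lab * math.sqrt((1 - beta) / (1 + beta))

print(round(E_boosted, 2))   # 0.35 (MeV), matching the answer
print(E_boosted < 2 * m_e)   # True: below the pair-production threshold
```

Since a boost can always push a single photon's energy below threshold, pair production from one photon alone cannot happen in any frame.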
"domain": "physics.stackexchange",
"id": 73412,
"tags": "homework-and-exercises, particle-physics, photons, conservation-laws, pair-production"
} |
X-Y axes for UR5 appear reversed | Question:
I'm trying to test a kinematic controller that relies on UR5 DH parameters. I'm using the UR5 model in universal_robot and the classical DH parameters. The controller works, but there is just one problem: the x-y axes are reversed. If I send 0.5i+0.5j+0.5k, RViz shows -0.5i-0.5j+0.5k; see the picture below. You can see that the arm is going in the opposite direction of the x-y axes. How can I rectify this issue? I'm using Gazebo 9 and Melodic to simulate the robot. I really appreciate your help and feedback.
(RViz screenshot omitted)
Originally posted by CroCo on ROS Answers with karma: 155 on 2022-09-02
Post score: 0
Answer:
There are many different "families" of quaternion (not sure if family is the right word.) The family used by DH is different from the one that is used by ros, so they are not directly compatible. You need to convert the DH quaternion to get the equivalent ros quaternion.
Originally posted by Mike Scheutzow with karma: 4903 on 2022-10-17
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by CroCo on 2022-10-17:
DH method represents the orientation as a Rotation matrix not quaternion, but I think you are right in the way tf tree is structured differs from the DH method. This deprives me from visualizing the frame of the end-effector sadly since modelling manipulators heavily rely on DH. Do you know how the tf internally constructs the tree? Thank you. | {
"domain": "robotics.stackexchange",
"id": 37954,
"tags": "ros, gazebo, ros-melodic, ur5"
} |
Physical interpretation of one-particle density, as expectation value of finding a particle with a specific momentum at a location at a time | Question: I was reading Statistical Physics of Particles by Mehran Kardar. In unit 3.3 of chapter 3 named Kinetic theory of gases, the author derived the BBGKY hierarchy. Now in the derivation, the author defines one-particle density $f_1(\vec{p}, \vec{q}, t)$ as the expectation value of finding any of the $N$ particles at location $\vec{q}$, with momentum $\vec{p}$, at time $t$. Then the author calculates the one-particle density using phase space density $\rho(\vec{p}_1, \vec{q}_1, \vec{p}_2, \vec{q}_2, \cdots, \vec{p}_N, \vec{q}_N, t)$ as
$$f_1(\vec{p}, \vec{q}, t)=\left\langle\sum_{i=1}^N\delta^3(\vec{p}-\vec{p_i})\delta^3(\vec{q}-\vec{q_i})\right\rangle$$
$$f_1(\vec{p}, \vec{q}, t)=N\int\prod_{i=2}^N d^3\vec{p}_id^3\vec{q}_i\rho(\vec{p}_1=\vec{p}, \vec{q}_1=\vec{q}, \vec{p}_2, \vec{q}_2, \cdots, \vec{p}_N, \vec{q}_N, t)$$
Here, $\langle\mathcal{O}(\vec{p}, \vec{q})\rangle=\int\prod_{i=1}^Nd^3\vec{p}_id^3\vec{q}_i\mathcal{O}(\vec{p}, \vec{q})\rho(\vec{p}, \vec{q})$
Now, here I don't understand what the author exactly means by "the expectation value of finding a particle"; I mean, how do I quantify "finding a particle"? How is it related to the probability of finding a particle? Also, what does the Dirac delta function in the angle brackets represent? Can someone explain it to me? Thanks for considering my question.
Answer: A gas consists of $N$ particles. What is the probability of finding any particle, but only one of those $N$ particles, with exactly the momentum and position I am looking for? Translating this statement into mathematics is exactly what a delta distribution does.
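The step from the delta-function average to the reduced integral can be made explicit (a sketch, using only the exchange symmetry of $\rho$). Writing out the average term by term,
$$f_1(\vec{p}, \vec{q}, t)=\sum_{i=1}^N\int\prod_{j=1}^N d^3\vec{p}_jd^3\vec{q}_j\,\delta^3(\vec{p}-\vec{p}_i)\delta^3(\vec{q}-\vec{q}_i)\,\rho=\sum_{i=1}^N\int\prod_{j\neq i} d^3\vec{p}_jd^3\vec{q}_j\,\rho(\ldots,\vec{p}_i=\vec{p}, \vec{q}_i=\vec{q},\ldots,t)$$
where each delta function simply evaluates $\rho$ at $\vec{p}_i=\vec{p}$, $\vec{q}_i=\vec{q}$. Since $\rho$ is symmetric under exchange of particle labels, all $N$ terms are equal, which produces the factor $N$ in the quoted result. So "finding a particle" is quantified by a probability density: $f_1\,d^3\vec{p}\,d^3\vec{q}$ is $N$ times the probability that particle 1 occupies the phase-space volume element around $(\vec{p},\vec{q})$.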
"domain": "physics.stackexchange",
"id": 93279,
"tags": "statistical-mechanics, kinetic-theory"
} |
Maximal anticommuting sets of Dirac matrices | Question: At the end of this webpage, it is said that there exist 6 maximal anticommuting sets each consisting of 5 Dirac $\Gamma$-matrices. I couldn't find anything more in the book cited there, either.
I found that the very common Weyl, Dirac, Majorana bases are actually of a same set. So I begin to wonder about the relation between these 6 sets. Are they related by unitary transformations or else?
For example, we can think of the following two sets. What transformation could possibly relate them?
$$\alpha_0=\sigma_z\otimes\tau_z,\alpha_1=\sigma_x\otimes\tau_0,\alpha_2=\sigma_y\otimes\tau_0,\alpha_3=\sigma_z\otimes\tau_x,\alpha_4=\sigma_z\otimes\tau_y$$
$$\beta_0=\sigma_z\otimes\tau_0,\beta_1=\sigma_x\otimes\tau_z,\beta_2=\sigma_y\otimes\tau_0,\beta_3=\sigma_x\otimes\tau_x,\beta_4=\sigma_x\otimes\tau_y$$
Answer: First, let's derive the 6 sets. Note that order doesn't matter so instead of getting 6 sets we'll have 6x5! = 720 sets.
We need to find 5 gamma products that anticommute. There are 16 products available but we can't use 1, so there are 15 choices. Without loss of generality (because the gamma matrices are equivalent, up to multiplication by $\pm 1, \pm i$ to change the square from +1 to -1 or back), we can assume we chose $\gamma^0$.
The second choice needs to anticommute with our first choice $\gamma^0$. There are 8 gamma products that anticommute and 8 that commute. (Of the 8 that commute, one is our choice and another is 1.) For the first choice of $\gamma^0$, the eight we have available to choose for the second anticommuting matrix is given by the eight products in $\{1,\gamma^0\} \;\times\; \{\gamma^1,\gamma^2,\gamma^3,\gamma^1\gamma^2\gamma^3\}$, i.e. choose one of the first two (that commute with $\gamma^0$) and multiply it by one of the last four (that anticommute) so the product will anticommute. The eight cases give the 8 gamma products that anticommute with $\gamma^0$. Our choice doesn't matter, we will make the canonical choice of $\gamma^1$.
The third choice is one of those same 8 but now we can't use $\gamma^1$ and we also can't use the product of our first two choices $\gamma^1\gamma^0$. Looking through the 8 possibilities, there are 3 that anticommute with the first two choices and are also not a product of them. They are $\{\gamma^2,\gamma^3,\gamma^0\gamma^1\gamma^2\gamma^3\}$. Our choice doesn't matter, we will make the canonical choice of $\gamma^2$.
The fourth choice has to be one of the remaining two possibilities $(\gamma^3,\gamma^0\gamma^1\gamma^2\gamma^3)$ the canonical choice is $\gamma^3$ and that defines the fifth choice as the product of the other four.
The number of choices for these 4 stages was 15,8,3,2 and the product of these is 720 so the number of ways of defining sets of 5 anticommuting gamma products. Taking into account ordering, we have 720/5! = 720/120 = 6 cases which are listed in the reference.
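The count can be verified by brute force. The following sketch builds the gamma matrices in the Dirac basis (any representation works, since (anti)commutation relations are representation-independent), forms the 15 nontrivial products, and counts the unordered 5-element subsets that mutually anticommute:

```python
import itertools
import numpy as np

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Dirac-basis gammas: gamma^0 = diag(1,1,-1,-1), gamma^k = [[0, s_k], [-s_k, 0]]
gammas = [np.kron(sz, I2)] + [np.kron(1j * sy, s) for s in (sx, sy, sz)]

# The 15 nontrivial products of distinct gamma matrices
products = []
for r in range(1, 5):
    for idx in itertools.combinations(range(4), r):
        m = np.eye(4, dtype=complex)
        for i in idx:
            m = m @ gammas[i]
        products.append(m)

def anticommute(a, b):
    return np.allclose(a @ b + b @ a, 0)

# Count unordered 5-element subsets whose members pairwise anticommute
count = sum(
    all(anticommute(products[i], products[j])
        for i, j in itertools.combinations(subset, 2))
    for subset in itertools.combinations(range(15), 5)
)
print(count)  # 6 = (15 * 8 * 3 * 2) / 5!
```
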
Now the OP question was "what is the relation between these 6 sets?" Since each of these sets has an arbitrary ordering, it might be more productive to ask instead, "what is the relation between these 720 ordered sets?"
For standard physics, these 720 sets of 5 anticommuting matrices are all equivalent, so it doesn't matter which we choose. However, if we make the assumption that the gamma matrices are a part of physical reality, (and are not just an arbitrary mathematical choice used to obtain a symmetry compatible with experimental observation), it's possible that we can discover something that is a consequence of our choice of which 4 we choose as $\gamma^0,\gamma^1,\gamma^2,\gamma^3$. For example, imagine physics where different elementary particles are modeled with different gamma matrices.
If we are going to do that sort of thing, then it might be also useful to note that we can multiply gamma products by $-1$ without making much of a change to them. Proper rotations cannot introduce a single minus sign so if we want to add minus signs we need to bring them in 0, 2 or 4 at a time. The number of ways of doing that are 1, 10 and 5 which will increase the number of gamma matrix sets by a further factor of 1+10+5 = 16.
A simpler problem is to consider these sorts of transformations on the Pauli spin matrices. For these, it turns out that there are 24 choices for proper rotations and they make up the point symmetry 432 which the crystallographers call gyroidal. That finite group has 24 elements and is the same as the permutation group for 4 elements called $S_4$. It is a finite subgroup of the spin-1 rep of SU(2) or SO(3). You will find $S_4$ used in the published phenomenology literature quite a lot (but not in reference to gamma matrix symmetries discussed here), for example https://doi.org/10.1016/j.nuclphysb.2018.12.016
In looking for a transformation from one of the six sets to another, we're working in 4x4 matrices so the transformation is going to use unitary 4x4 matrices. I've indicated that with the Pauli matrices these sorts of things fit into SU(2) so I expect the gamma matrix calculations to be SU(4) calculations and indeed that happens, and since SU(4) is a subgroup of U(4) the transformations are indeed unitary.
The first two example lines of the reference are:
$\alpha_1,\alpha_2,\alpha_3,\alpha_4,\alpha_5$ and $y_1,y_2,y_3,y_4,y_5$. We only need to do the first four of each as the fifth is just the product of the first four. These are defined in terms of $\sigma_j$ and $\rho_j$ as:
\begin{array}{rclrcl}
\alpha_1 &=& \rho_1\sigma_1,&y_1&=&\rho_2\sigma_1\\
\alpha_2 &=& \rho_1\sigma_2,&y_2&=&\rho_2\sigma_2\\
\alpha_3 &=& \rho_1\sigma_3,&y_3&=&\rho_2\sigma_3\\
\alpha_4 &=& \rho_3\sigma_0,&y_4&=&\rho_3\sigma_0
\end{array}
so the transformation we want needs to take $\rho_1$ to $\rho_2$, and it needs to leave the $\sigma_j$ and $\rho_3$ alone. Since the $\rho_j$ and $\sigma_k$ commute, this means our transformation will depend on $\rho_k$ alone. And since we want to leave $\rho_3$ alone, our basis set should commute with $\rho_3$. That leaves only two elements in the basis set of the transformation: $\{1,\rho_1\rho_2\}$. Note that $\rho_1\rho_2 = i\rho_3$ but I'll leave it as $\rho_1\rho_2$ cause I think it makes the transformation more obvious.
Let $U(\theta) = \exp(\theta\rho_1\rho_2)$ be a unitary matrix. Here we are writing the unitary matrix as $\exp(iH)$ where $H$ is an Hermitian matrix. Check that indeed $-i\rho_1\rho_2 = \rho_3$ is Hermitian. Yep. Now compute.
\begin{equation}
\exp(\theta\rho_1\rho_2) = 1 + \theta\rho_1\rho_2 + \theta^2(-1)/2! + \theta^3(-\rho_1\rho_2)/3! ...
\end{equation}
where we simplify using $(\rho_1\rho_2)^2=-1$ by anticommutativity of $\rho_1$ and $\rho_2$ and this simplifies to
\begin{equation}
U(\theta) = \cos(\theta) +\rho_1\rho_2\sin(\theta)
\end{equation}
It's clear that $U(\theta)$ commutes with $\rho_3$ so let's see what it does to $\rho_1$. Here we will use commutation rules between $\rho_1$ and $\rho_2$ to move the exponential around the object being transformed ($\rho_1$). This transformation negates the exponential:
\begin{equation}
\begin{array}{rcl}
U(-\theta)\;\rho_1\;U(\theta) &=&
\exp(-\theta\rho_1\rho_2)\;\rho_1\;\exp(\theta\rho_1\rho_2),\\
&=&\rho_1\;\exp(+\theta\rho_1\rho_2)\exp(\theta\rho_1\rho_2),\\
&=&\rho_1\;\exp(2\theta\rho_1\rho_2),\\
&=&\rho_1(\cos(2\theta)+\sin(2\theta)\rho_1\rho_2),\\
&=&\cos(2\theta)\rho_1+\sin(2\theta)\rho_2.
\end{array}
\end{equation}
Now to get this equal to the desired $\rho_2$ we need $\sin(2\theta)=1$ and $\cos(2\theta)=0$ So $2\theta = \pi/2$ and $\theta = \pi/4$ and the desired $U$ is
\begin{equation}
\begin{array}{rcl}
U(\pi/4) &=& \exp((\pi/4)\rho_1\rho_2) = (1+\rho_1\rho_2)/\sqrt{2},\\
U(-\pi/4) &=&(1-\rho_1\rho_2)/\sqrt{2}
\end{array}
\end{equation}
And the reader is invited to verify that this $U$ transforms $\rho_1$ to $\rho_2$. In fact, thinking of the $\rho$ as Pauli spin matrices, what we've done is derived the unitary transformation corresponding to a rotation by 90 degrees around the z axis. This leaves $\rho_3$ alone and takes $\rho_1$ to $\rho_2$, as desired. | {
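As invited, here is a quick numerical check (a sketch using 2×2 Pauli matrices for the $\rho_j$, which is all the calculation depends on):

```python
import numpy as np

rho1 = np.array([[0, 1], [1, 0]], dtype=complex)
rho2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
r12 = rho1 @ rho2            # equals i * rho3

# U(theta) = exp(theta rho1 rho2) = cos(theta) + rho1 rho2 sin(theta)
t = np.pi / 4
U_plus = np.cos(t) * np.eye(2) + np.sin(t) * r12    # U(+pi/4) = (1 + rho1 rho2)/sqrt(2)
U_minus = np.cos(t) * np.eye(2) - np.sin(t) * r12   # U(-pi/4) = (1 - rho1 rho2)/sqrt(2)

# U(-pi/4) rho1 U(pi/4) = rho2, as claimed
print(np.allclose(U_minus @ rho1 @ U_plus, rho2))   # True

# U commutes with rho3 = -i rho1 rho2, so rho3 is left untouched
rho3 = -1j * r12
print(np.allclose(U_minus @ rho3 @ U_plus, rho3))   # True
```
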
"domain": "physics.stackexchange",
"id": 57344,
"tags": "dirac-equation, dirac-matrices, clifford-algebra"
} |
Total number of stereoisomers of truxillic acid | Question:
Total number of stereoisomers of the compound will be:
This was a question asked in our mock test. I've tried considering pseudo-chirality on the carbon atoms, but I don't know where to start.
I need help finding the different stereoisomers. It would be appreciated if you could draw the different isomers.
Answer: At first glance, not only does it seem like there are no stereogenic centres in truxillic acid, it is also highly symmetrical. However, if we consider different conformations of the chemically identical units (H, COOH or Ph), which give rise to different isomers, some carbons at the 4-membered ring can possibly become stereogenic.
Stereogenic centres are labelled with an asterisk in the original figure: ε-truxillic acid and peri-truxillic acid do not have stereogenic carbon centres, as their 4-membered ring carbons have chemically and conformationally identical neighbours.
It took me a while to get this. Thanks to @andselisk and Wikipedia.
"domain": "chemistry.stackexchange",
"id": 13866,
"tags": "organic-chemistry, stereochemistry, isomers, conformers"
} |
How to calculate exact angle to the moon? | Question: I am building some device as a gift for my girlfriend.
For this device to work, I need to be able to calculate the angle (a 3D direction) I should look towards in order to see the moon.
This angle is affected by (and maybe not only):
The time of year.
The hour.
The location on earth from which I look.
I'm sure there's some formula that approximates the angle.
I don't need an exact value, I can live with +-1 degree.
Answer: For a one-off calculation, the easiest solution would be to use planetarium software, such as Stellarium.
For a programming solution, a python package such as pyephem is an efficient way to calculate the position of the moon at multiple times, in a way that can be imported into a spreadsheet.
The apparent motion of the moon is more complex than you might think. The moon moves about 15 degrees per hour due to the rotation of the Earth. But on top of this fairly simple motion is the Moon's actual orbit around the Earth, which is elliptical (and so this motion is not even, the moon moves faster when it is close to the Earth) and the position and inclination of the orbit is perturbed by the Sun. The perturbations are fairly regular, and can be calculated, but these effects (rotation of the earth, the orbit of the moon, the eccentricity of the orbit, perturbation of orbit) combine to make the actual calculation more complex than "a formula" that can fit easily in a spreadsheet, which is why a package like pyephem is recommended. | {
"domain": "astronomy.stackexchange",
"id": 3194,
"tags": "the-moon, amateur-observing, observational-astronomy"
} |
Why do we fuse Odom, IMU, GPS and other pose messages? | Question:
I'm working with navigation stack lately, and I have a indoor location system to get the absolute pose of the robot.
As we all know, the odometry is only right for a short time, so I want to correct the pose with my indoor location system.
I know there is a package robot_pose_ekf that can fuse /odom, /imu_data and /vo. But my question is: why do we have to fuse these pose messages? Why don't we just correct /odom with the indoor location system, GPS, or something else?
EDIT
Hi mig,
Actually, I tried to adapt the tf map->odom before, but I found it hard to deal with the orientation. You know, to adapt the position (x,y,z) you just need to do some additions and subtractions; however, when it comes to the orientation, if you change it (Quaternion: x, y, z, w), the position of base_link relative to map will change as well. I also tried to do some 'cos' and 'sin' calculations, but I didn't get the right tf result.
So, is there any packages I can use or refer for this tf map->odom? I know amcl did such thing, but its source code is hard to understand.
Thank you.
Originally posted by Shay on ROS Answers with karma: 763 on 2016-08-17
Post score: 0
Answer:
The difference between those is how/what they measure.
Odometry, IMU and Visual Odometry (I guess this is what you mean by vo) just measure the internal state of the robot, and thus only deliver relative measurements towards a starting pose; they cannot correct for long-term drifts. However, you can fuse those using the robot_pose_ekf to achieve a more stable "fused" odometry guess.
Then, you need a localization that is providing measurements with respect to "world". This can be GPS, IPS, cameras with stored, localized features or laserscanners with a given map. With those, you can correct the drift of the "fused" odometry.
There are several packages providing this or parts of this functionality, e.g. robot_localization or amcl, to name just two.
EDIT
You are right, I did not think of adding a GPS sensor like this. Seems like I misunderstood how they define visual odometry here. However, a world-fixed frame does not mean that this is fixed over multiple runs. Typically, any odometry starts off from the pose of the robot where it is turned on. In contrast, there are fixed frames (like map coordinates) that are the same whether you turn this on or not.
Thus, vo provides measurements to the vo frame which can different any time you launch the robot, depending on what you use for input.
EDIT 2
Typically, when you add a sensor providing "global corrections", you don't correct the odom frame. The tf odom->base_link is what is typically provided by internal sensors, i.e. wheel encoders, IMU and visual odometry.
If you have another sensor (GPS, laserscanner, ...) I would prefer to adapt the tf map->odom such that the tree map->odom->base_link is correct. This is how it is typically done for mobile robots in ROS, thus I'd prefer this solution.
EDIT 3
This is where the magic happens in amcl.
You can use the TransformPose function of TF to get map->odom (called odom_to_map therein) from the map->base_link that you estimate, and broadcast this (after you bring it into the correct format...)
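The relation amcl computes can be sketched in 2-D with homogeneous transforms (illustrative numbers; the real computation is 3-D and uses TF, but the algebra is the same):

```python
import numpy as np

def transform(x, y, theta):
    """2-D homogeneous transform: rotation by theta, then translation (x, y)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0., 0., 1.]])

# Hypothetical estimates:
T_map_base = transform(2.0, 1.0, np.pi / 4)   # map->base_link from global localization
T_odom_base = transform(1.5, 0.8, np.pi / 6)  # odom->base_link from wheel odometry

# The correction to broadcast: map->odom = map->base_link * (odom->base_link)^-1
T_map_odom = T_map_base @ np.linalg.inv(T_odom_base)

# Sanity check: chaining map->odom->base_link reproduces map->base_link
print(np.allclose(T_map_odom @ T_odom_base, T_map_base))  # True
```

Working with full transforms like this, rather than adjusting position and quaternion components separately, is what avoids the coupling between orientation and position described in the question's edit.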
Originally posted by mgruhler with karma: 12390 on 2016-08-18
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Shay on 2016-08-18:
Thanks for your answer. But I'm still confused.
I thought the robot_pose_ekf not just fuses relative measurements, because it now supports GPS too. And I thought /vo is also the global pose relative to "world" | {
"domain": "robotics.stackexchange",
"id": 25543,
"tags": "navigation, robot-pose-ekf"
} |
Eclipse and Android Tutorial PubSub Errors | Question:
I have been battling through installing a development environment to use rosjava to communicate with a robot (PC) from an Android device. I am relatively new to ROS and Android programming.
I have managed to finally get pubsub installed on my phone and functioning (from the command line), thanks to a lot of tidbits from everyone on ROS Answers.
I now seem to be having issues getting the project to run from an eclipse environment. I created a new Android Project and created from existing source, pointing to android_tutorial_pubsub.
This left me with some unresolved errors.
In my first attempt to fix this, I added all the jars I could from android_gingerbread and rosjava until the errors were gone. I was then able to get the app on my phone, but it would fail every time I tried to run it. I did notice that under Properties->Android the android_gingerbread reference had a big red X next to it.
My next attempt was to remove all the library references and to create a new Android project from existing source, pointing to android_gingerbread to bring it into Eclipse. This removed all the errors and resulted in a green check mark beside the android_gingerbread reference.
But now when I try to compile I simply get a java heap error, crashing eclipse. I've increased the size of the heap but it just keeps happening.
Is importing android_gingerbread as an Android project not the right way to import it into eclipse? Is there another environment that I can use to modify the pubsub code?
Originally posted by Brainmuck on ROS Answers with karma: 1 on 2012-04-03
Post score: 0
Original comments
Comment by Brainmuck on 2012-04-04:
I pulled the new files and compiled the rosjava_core fine. But the Android_core won't compile now. It fails on UpdateProject. It can't seem to find the command android in the folder to 'update project' But if I call android from the directory it opens the SDK manager fine. So the path is there.
Comment by Brainmuck on 2012-04-04:
Ok, now parts of android_core compile; I had to add -target 1 to the gradle build file to target android-10 based on my target list. I still get a java heap problem when trying to run in Eclipse, though. android_honeycomb_mr2 won't compile; do I need it for pubsub?
Answer:
There were some significant changes in the past couple days that left android_core broken. However, android_core and its documentation have been updated as of today. I suggest pulling and trying again.
http://docs.rosjava.googlecode.com/hg/rosjava_core/html/building.html
http://docs.rosjava.googlecode.com/hg/android_core/html/building.html
Originally posted by damonkohler with karma: 3838 on 2012-04-03
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 8848,
"tags": "rosjava, eclipse, android"
} |
Vowels and consonants in Java-exercises with strings | Question: I feel like my code is too messy; I'm looking for possible ways to make it more readable/shorter.
Goals:
1) Get three strings from user input. They have to be the same length.
a) First must contain only lowercase letters.
b) Second must contain only uppercase letters.
c) Third must have an even length and the same number of vowels and consonants.
Example 1:
Input 1: aaaa
Input 2: BBBB
Input 3: baba
2) Add to the first string all vowels from the third word (at the end of the word).
Example 2:
Output: aaaaaa
3) Add to the second string all consonants from the third word (at the beginning of the word).
Example 3:
Output: bbBBBB
4) Print the words:
a) with the most vowels.
b) with the most consonants.
I was thinking about using the ternary operator in the "print the word with most vowels/consonants" phase, but I failed.
Thank you for any advice.
import java.util.Scanner;
public class Main {
public static int countVowels(String str) {
str.toLowerCase();
int count = 0;
for (int i = 0; i < str.length(); i++) {
if (str.charAt(i) == 'a' || str.charAt(i) == 'e' || str.charAt(i) == 'i'
|| str.charAt(i) == 'o' || str.charAt(i) == 'u' || str.charAt(i) == 'y') {
count++;
}
}
return count;
}
public static int countConsonants(String str) {
str.toLowerCase();
String stringWithoutVowels = str.replaceAll("[aeiouyAEIOUY]", "");
int numberOfConsonants = stringWithoutVowels.length();
return numberOfConsonants;
}
public static void main(String[] args) {
Scanner scn = new Scanner(System.in);
while (true) {
System.out.println("Provide 3 strings");
String first = scn.next();
String second = scn.next();
String third = scn.next();
String thirdWithoutVowels = third.replaceAll("[aeiouyAEIOUY]", "");
String thirdWithoutConsonants = third.replaceAll("[BCDFGHJKLMNPQRSTVXZbcdfghjklmnpqrstvxz]", "");
StringBuffer firstForAdding = new StringBuffer(first);
StringBuffer thirdForAdding = new StringBuffer(thirdWithoutVowels);
String firstRegex = "[a-z]+";
String secondRegex = "[A-Z]+";
if (third.length() % 2 == 0) {
if (thirdWithoutVowels.length() == thirdWithoutConsonants.length())
if ((first.matches(firstRegex) == true) &&
(second.matches(secondRegex)) == true) {
if (first.length() == second.length() && second.length() == third.length()) {
System.out.println("__________________________1.First with vowels from the third one(at the end).__________________________");
System.out.println(firstForAdding.append(thirdWithoutConsonants));
System.out.println("__________________________2.Second with consonants from the third one(at the begining).__________________________");
System.out.println(thirdForAdding.append(second));
if (countVowels(first) > countVowels(second) && countVowels(first) > countVowels(third)) {
System.out.println("Word with most vowels: " + first);
} else if (countVowels(second) > countVowels(first) && countVowels(second) > countVowels(third)) {
System.out.println("Word with most vowels: " + second);
} else {
System.out.println("Word with most vowels: " + third);
}
if (countConsonants(first) > countConsonants(second) && countConsonants(first) > countConsonants(third)) {
System.out.println("Word with most consonants: " + first);
} else if (countConsonants(second) > countConsonants(first) && countConsonants(second) > countConsonants(third)) {
System.out.println("Word with most consonants: " + second);
} else {
System.out.println("Word with most consonants: " + third);
}
break;
}
}
} else {
System.out.println("Wrong");
}
}
}
}
Answer: The functions countVowels() and countConsonants() do very similar things, but are implemented in entirely different ways. Could they not be written using the same method? Could they be defined one in terms of the other, such as consonants = string_length - vowels?
The statement str.toLowerCase(); does nothing. Well, it does do something ... it computes a lowercase version of str ... but doesn’t assign the result to anything, so the result is lost. You probably wanted str = str.toLowerCase();.
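Combining the two points above, a possible sketch (the class and method names are illustrative; it assumes, as the exercise guarantees, that the input contains only letters, so every non-vowel is a consonant):

```java
public class LetterCounter {
    private static final String VOWELS = "aeiouy";

    public static int countVowels(String str) {
        int count = 0;
        // the result of toLowerCase() must be used, not discarded
        for (char c : str.toLowerCase().toCharArray()) {
            if (VOWELS.indexOf(c) >= 0) {
                count++;
            }
        }
        return count;
    }

    public static int countConsonants(String str) {
        // every letter that is not a vowel is a consonant
        return str.length() - countVowels(str);
    }

    public static void main(String[] args) {
        System.out.println(countVowels("baba"));      // 2
        System.out.println(countConsonants("baba"));  // 2
    }
}
```

Defining one count in terms of the other removes the duplicated vowel list and keeps the two methods from drifting apart.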
The class StringBuffer is a legacy, synchronized class; StringBuilder should be used in its place in single-threaded code.
The StringBuffer#append() function returns itself to facilitate chained operation, like sb.append(x).append(y).append(x). The value returned should not be used for other purposes, such as printing. System.out.println(thirdForAdding.append(second)) is a side effect inside a print statement, a dangerous practice; unlearn it.
Constructing a StringBuffer (or a StringBuilder) to append one value is overkill. Just use regular string addition.
Explicit equality tests with true are unnecessary. You could simply write:
if (first.matches(firstRegex) && second.matches(secondRegex)) { …
"Wrong" is only printed if the first test fails. If the second, third, or fourth test fails, nothing is output.
The Scanner (and other Closeable resources) should be closed to release resources immediately, instead of when garbage collected. The try-with-resources statement does this for you automatically and safely:
try (Scanner scn = new Scanner(System.in)) {
... use scanner here ...
}
... scanner has been closed after this point. | {
"domain": "codereview.stackexchange",
"id": 38043,
"tags": "java, strings"
} |
Are there any double stars that I can actually see orbit each other? | Question: If I had a nice amateur telescope†, are there any multiple star systems that I could observe over a few years or a few decades and actually see the movement of one or both of them over time?
My short human lifespan and limited telescope put heavy constraints on the orbital distance, brightness, and distance from the Sun, so I am guessing that if there are any at all, the number is probably small.
†arbitrarily defined as say 8-inch (20 cm) aperture, with a good set of eyepieces, and a sketch pad.
Answer: Vir (12h 42m, –01° 27′)
Probably Porrima, $\gamma$ Vir, is the best candidate for most observers in the Northern Hemisphere to see changes in a binary orbit, particularly using a small telescope. It is a pair of stars with similar size and visual magnitude, of about 3.6. Their orbital period is about 169 years, but the orbit is eccentric, e = 0.88. They are also relatively close at about 40 ly. Periapsis was in 2005, so the stars are now moving away from each other, but their rate of separation is decreasing. Separation at periapsis was about 0.4 arcsec, so would not have been resolved using a small telescope in 2005. By 2015 their separation was ~2.5 arcsec, and will increase to ~3 arcsec by 2020. I estimated a position angle change between 2015-2020 of ~7 degrees. These changes should be detectable with a 100-200 mm (4-8 inch) telescope.
Since most short period binaries are close together, with nearly circular orbits, and often more distant, they are very difficult or impossible to resolve with a small telescope.
Sirius B (06h 45m, −16° 43′)
As mentioned in @MichaelWalsby's answer, it is also possible to observe the orbit of the white dwarf binary companion to Sirius, the brightest star visible in the night sky. Sirius is only 8.6 ly away, and their orbit has a semi-major axis of about 7.5 arcsec, an eccentricity of e = 0.59, and a period of about 50 years. If this pair were similar in brightness, they would be an easy answer to this question. Unfortunately, Sirius B, or the Pup (as the companion is known to amateur astronomers), is ~10 magnitudes dimmer than Sirius, and usually lost in its glare. Seeing it takes a night with excellent seeing, i.e. a stable, non-turbulent atmosphere, especially since Sirius never gets much above 30 degrees elevation at the mid-northern latitude where I live. I have only seen the Pup 4 or 5 times (one view was probable but not certain) over almost 6 decades of observing, and I have never seen it in a telescope with aperture under 300 mm. I know other amateurs who have seen Sirius B in 150-200 mm telescopes, but mostly at lower latitudes. However, by seeing Sirius B at intervals separated by decades, I have observed its polar angle change.
I believe the separation of Sirius B is now over 10 arcsec, and still increasing slightly. So for the next couple of decades, observing it might be a bit easier. In recent winters I have tried with telescopes from a 120 mm refractor to a 250 mm Dobsonian, and occasionally larger, but still have not seen it for several years. This Hubble photo of Sirius gives some idea why Sirius B is hard to observe in small telescopes.
Indirect methods
Also, many eclipsing binaries are often observed, and their light curves measured. By analyzing light curves, orbital elements can be estimated. However, these indirect orbital observations are probably stretching the intent of the original question. | {
"domain": "astronomy.stackexchange",
"id": 3849,
"tags": "observational-astronomy, amateur-observing, binary-star"
} |
Is the chalk really needed in the "chalk and string labyrinth" analogy for depth-first search? | Question: I came across the chalk-and-string labyrinth analogy for depth-first search in Algorithms by Dasgupta et al. I have seen the code for the depth first search.
Everybody knows that all you need to explore a labyrinth is a ball of string and a piece of chalk. The chalk prevents looping, by marking the junctions you have already visited. The string always takes you back to the starting place, enabling you to return to passages that you previously saw but did not yet investigate.
How a computer does it in code makes sense, but as a human, I think that the string is sufficient. Why would I need the chalk? Wouldn't seeing the string indicate that I have already passed a junction?
Answer: First of all keep in mind that this is just an analogy meant to convey the intuition behind the DFS algorithm on graphs.
Now graphs can be directed, which would correspond to having a labyrinth with junctions/rooms and one-way pathways between them.
Then it is possible for you to be in some room $r_0$ and to travel to some other room $r_1 \neq r_0$ as a part of your labyrinth exploration.
Maybe all paths from $r_1$ lead to a dead end, so you backtrack to $r_0$. From there you continue to explore until you reach a third room $r_2 \not\in \{r_0, r_1\}$.
The problem arises if $r_2$ has a pathway to $r_1$: without any record that you already visited $r_1$, you'd visit it again.
To stick with your analogy, you don't need strings if the floor plan of your labyrinth can be represented as a tree (rather than a general graph). Indeed, if you have a tree and some (arbitrary) order among the edges from a vertex to their children (as it is usually the case in actual implementations), you can implement a DFS without keeping any information other than the current and previous vertices of the search (i.e., an edge of the tree). | {
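The directed-graph scenario above ($r_0 \to r_1$, $r_0 \to r_2$, $r_2 \to r_1$) is easy to sketch in code; the visited set plays the role of the chalk, and the recursion stack plays the role of the string (names are illustrative):

```python
# The "chalk" is the visited set; the recursion stack is the "string".
# The graph below realizes the scenario from the answer: two routes into r1.
graph = {"r0": ["r1", "r2"], "r1": [], "r2": ["r1"]}

def dfs(node, visited, order):
    if node in visited:          # chalk mark: this junction was already explored
        return
    visited.add(node)
    order.append(node)
    for nxt in graph[node]:      # string: returning from the call backtracks
        dfs(nxt, visited, order)

order = []
dfs("r0", set(), order)
print(order)                     # ['r0', 'r1', 'r2'] -- r1 explored only once
```

Without the `if node in visited` check, the edge from r2 back to r1 would make the search explore r1 twice, and on a graph with a cycle it would never terminate.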
"domain": "cs.stackexchange",
"id": 17751,
"tags": "depth-first-search"
} |
What’s wrong with this Nordström-like scalar theory of gravity? | Question: I got very perplexed while reading a few papers on the old Nordström theory of relativistic scalar gravity. I would like to know what's wrong with the following, which isn't exactly the same as Nordström old theory (AFAIK, since there appears to be several inconsistencies with the description on the Wikipedia's page: https://en.wikipedia.org/wiki/Nordstr%C3%B6m%27s_theory_of_gravitation).
I consider gravity as a pure scalar field in Minkowski spacetime. The test-particle equation of motion is the following (I use units so $c \equiv 1$ and metric signature $\eta = (1, -1, -1, -1)$):
$$\tag{1}
\frac{d u^a}{d \sigma} = (u^a \, u^b - \eta^{ab}) \, \partial_b \, \phi,
$$
where $\phi$ represents the gravitational potential, and $u_a \, u^a \equiv 1$ is the usual four-velocity norm in Minkowski spacetime. The four-force on the right side is orthogonal to the four-velocity, so $u_a \, \dot{u}^a = 0$ as it should. The relativistic Poisson equation is assumed to be this:
$$\tag{2}
\square \, \phi = -\, 4 \pi G T,
$$
where $T \equiv \eta^{ab} \, T_{ab}$ is the trace of the energy-momentum tensor (including a possible non-linear contribution from the scalar field itself).
I'm not interested in the experimental failure of this theory, which predicts that light would not produce any gravity (since the trace of the electromagnetic contribution vanishes).
So what are the theoretical issues with these equations, in a special-relativity classical field context? What are the contradictions, or inconsistencies? What are the non-experimental concerns with this theory? As an example, maybe these equations couldn't be found from an action, and this could be raised as an objection (even in a classical field context), since it would be hard to find the scalar field energy-momentum without the lagrangian density (unless the energy-momentum is already God-given...).
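As a quick numerical sanity check (not part of the original question), the claim following Eq. (1), that the four-force $(u^a u^b - \eta^{ab})\,\partial_b\phi$ is orthogonal to $u^a$, can be verified with a few lines of NumPy; the velocity and gradient values below are arbitrary illustrations:

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])    # Minkowski metric, signature (+,-,-,-)
eta_inv = np.linalg.inv(eta)              # eta^{ab} (numerically equal to eta)

v = np.array([0.1, 0.2, -0.3])            # an arbitrary sub-luminal three-velocity
gamma = 1.0 / np.sqrt(1.0 - v @ v)
u = gamma * np.array([1.0, *v])           # four-velocity with u_a u^a = 1

dphi = np.array([0.5, -1.2, 0.3, 2.0])    # arbitrary gradient d_b phi (lower index)
udot = (np.outer(u, u) - eta_inv) @ dphi  # RHS of Eq. (1)

assert np.isclose(u @ eta @ u, 1.0)       # normalization u_a u^a = 1
assert np.isclose(u @ eta @ udot, 0.0)    # four-force orthogonal to the velocity
```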
Answer: An interesting analysis can be found in the following paper:
Giulini, D. (2008). What is (not) wrong with scalar gravity? Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics, 39(1), 154-180, doi:10.1016/j.shpsb.2007.09.001, arXiv:gr-qc/0611100.
Abstract:
On his way to General Relativity, Einstein gave several arguments as to why a special-relativistic theory of gravity based on a massless scalar field could be ruled out merely on grounds of theoretical considerations. We re-investigate his two main arguments, which relate to energy conservation and some form of the principle of the universality of free fall. We find such a theory-based a priori abandonment not to be justified. Rather, the theory seems formally perfectly viable, though in clear contradiction with (later) experiments.
OP's model is discussed in this paper as a “naive theory”. The trouble with it is that the equation of motion for a test particle is incompatible with the coupling of matter to the scalar field implied by the field equation for $\phi$. This requirement is referred to as the Principle of universal coupling:
All forms of matter (including test particles) couple to the gravitational field in a universal fashion.
The theory can be remedied by deriving the equations of motion from the joint action of the scalar field and matter (including test point particles) satisfying this principle of universal coupling, taking into account that a point particle has the following stress-energy tensor: \begin{equation}
\label{eq:T-PointParticle}
T^{\mu\nu}_p(x)=mc\,\int
{\dot x}^\mu(\tau){\dot x}^\nu(\tau)\ \delta^{(4)}(x-x(\tau))\ d\tau\,.
\end{equation}
The resulting equation for particle motion in this improved theory of scalar gravity:
\begin{align}
\ddot x^\mu(\tau)&=
P^{\mu\nu}(\tau)\partial_\nu φ(x(\tau))\,,&\\
\text{where}\quad
P^{\mu\nu}(\tau)&=
\eta^{\mu\nu}-{\dot x}^\mu(\tau){\dot x}^\nu(\tau)/c^2&\\
\text{and}\qquad
φ&:=c^2\ln(1+\phi/c^2)\,,&
\end{align}
differs from that of the naive theory in that it is now expressed in terms of $φ$, a nonlinear function of the scalar field $\phi$.
This improved model is actually internally consistent at least as a classical field theory, and in particular, the arguments by Einstein used to dismiss scalar theories of gravity do not work against it. So it is only experiments, such as the absence of light deflection by a gravitating body and the wrong prediction for the perihelion precession (which is different by a factor of $-1/6$ from value predicted by GR) that rule this theory out. | {
"domain": "physics.stackexchange",
"id": 86163,
"tags": "general-relativity, special-relativity, classical-field-theory, modified-gravity"
} |
Doubt about definition of creation operator | Question: I am a past physics student and wanted to revise the rudiments of many body theory, in particular as related to materials physics.
I have a doubt about the definition of creation and annihilation operators. Let's call them $C^{+}_{\lambda}$ and $C_{\lambda}$.
We consider the fermionic (electronic) case, where the creation operator creates an electron with wavefunction $\psi_{\lambda}(x)$. We have in mind a Hamiltonian $H$ with interactions.
Now, calling $S(*,..,*)$ the Slater determinant operation, I think that apart from normalization we know that $C^{+}_{\lambda}$ acts something like:
$$C_{\lambda}^{+} S(\psi_{\lambda_1},...,\psi_{\lambda_N}) \sim
S(\psi_{\lambda},\psi_{\lambda_1},...,\psi_{\lambda_N}) \tag{1}$$
(depending on where we insert $\psi_{\lambda}$ we would have a different sign).
Now my question is:
Suppose that we want to understand how $C^{+}_{\lambda}$ acts on a generic antisymmetric function $f(x_1,..,x_n)$ that is not provided as a Slater determinant (e.g. the ground-state wavefunction of $H$ generally falls in this case). I guess the way to go would be to expand:
$$f(x_1,..,x_n)=\sum_{\lambda_1,..,\lambda_n} \alpha_{\lambda_1,..,\lambda_n} S(\psi_{\lambda_1},...,\psi_{\lambda_n})$$
and proceed by linearity:
$$C_{\lambda}^{+} f(x_1,..,x_n)=\sum_{\lambda_1,..,\lambda_n} \alpha_{\lambda_1,..,\lambda_n} S(\psi_{\lambda},\psi_{\lambda_1},...,\psi_{\lambda_n}) \tag{2}$$
But the wavefunction $f$ does not know about the wavefunctions $\psi$, so we could also have expanded in another single-particle basis:
$$f(x_1,..,x_n)=\sum_{\mu_1,..,\mu_n} \beta_{\mu_1,..,\mu_n} S(\phi_{\mu_1},...,\phi_{\mu_n})$$
and defining:
$$C_{\lambda}^{+} f(x_1,..,x_n)=\sum_{\mu_1,..,\mu_n} \beta_{\mu_1,..,\mu_n} S(\psi_{\lambda},\phi_{\mu_1},...,\phi_{\mu_n}) \tag{3}$$
and this result looks different, so we would choose the first definition (Eq. (2)). Expanding in another basis seems to lead to a different result. Any relevant mistake up to now?
How do I have to interpret the fact that we need to expand $f$ in the same single-particle basis as $C_{\lambda}$ in order to evaluate the operator? Maybe I have to consider that in the same wavefunction there are "hidden" single-particle states, according to the representation that I use, and that $C^+_{\lambda}$ is somehow "probing" the "hidden" single-particle states of a certain type (the ones associated with the $\lambda$ quantum numbers). Does this interpretation make sense?
Maybe once upon the time I knew the answer to these doubts, sorry if the question is too trivial but wanted to know if I am missing something important...
EDIT: actually, now looking at the formulas, maybe it is not impossible that the results of (2) and (3) are equal. Maybe the Slater determinant changes so that the coefficients balance... I will try to check and will update the question if I find something more convincing in one direction or the other... I guess that depending on the result the physical interpretation of the formulas may differ...
Answer: I think the formulas (2) and (3) of the original post are equivalent as suggested in the comments.
I change slightly the notation for the indexes and use Einstein notation. If we define the matrix $O$ so that:
$$\phi_i=O_{i,j} \psi_j \tag{1}$$
then we have for the Slater determinants:
$$S(\phi_{i_1},..,\phi_{i_n})=O_{i_1j_1}...O_{i_nj_n}S(\psi_{j_1},..,\psi_{j_n}) \tag{2}$$
Using this we can show that if $f(x_1,..,x_n)=\alpha_{i_1,..,i_n}S(\phi_{i_1},..,\phi_{i_n})=\beta_{j_1,..,j_n}S(\psi_{j_1},..,\psi_{j_n})$, then:
$$\beta_{j_1,..,j_n}=\alpha_{i_1,..,i_n}O_{i_1j_1}...O_{i_nj_n}\tag{3}$$
Now we start from the definition of the creation operator associated to a wavefunction $\rho$:
$$C_\rho^+ f \equiv\beta_{j_1,..j_n} S(\rho,\psi_{j_1},..,\psi_{j_n}) \tag{4}$$
We have for the Slater determinants:
$$S(\rho,\psi_{j_1},..,\psi_{j_n})=O^{-1}_{j_1i_1}...O^{-1}_{j_ni_n}S(\rho,\phi_{i_1},..,\phi_{i_n}) \tag{5}$$
Inserting in [4] the relations [5] and [3] we obtain:
$$C_\rho^+ f = \alpha_{i_1,..i_n} S(\rho,\phi_{i_1},..,\phi_{i_n})$$
so that the expression/form of the creation operator is independent of the basis chosen to expand the function $f$.
NB1: some prefactors are missing probably but I hope the arguments are not affected.
NB2: in theory this works only for fermions. But probably the multilinearity used here is valid also for bosons?
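The multilinearity relation (2) used above can be checked numerically; here is a sketch with NumPy for two particles on a discrete grid (all values random and purely illustrative):

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
n, npts = 3, 4                       # basis size, number of grid points
psi = rng.normal(size=(n, npts))     # psi[j, x]: value of psi_j at grid point x
O = rng.normal(size=(n, n))          # basis-change matrix (any matrix works here)
phi = O @ psi                        # phi_i = sum_j O_ij psi_j

def slater2(f, g, x1, x2):
    # unnormalized two-particle Slater determinant evaluated at (x1, x2)
    return f[x1] * g[x2] - f[x2] * g[x1]

i1, i2, x1, x2 = 0, 1, 1, 3
lhs = slater2(phi[i1], phi[i2], x1, x2)
rhs = sum(O[i1, j1] * O[i2, j2] * slater2(psi[j1], psi[j2], x1, x2)
          for j1, j2 in product(range(n), repeat=2))
assert np.isclose(lhs, rhs)          # relation (2) holds
```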
UPDATE RELATING TO ANSWER OF @mike stone:
We try to use the relation between creation operators and see if we get a similar picture. Here I am not using Einstein notation. Let's call $a_i^+$ the creation operators for the $\phi$ functions. Then for the function $\rho$ we have:
$$C^+_{\rho}=\sum_i \langle \phi_i|\rho \rangle a^+_i$$
Further we have the Slater determinant expansion for $f$ (NB: we have to restrict the indices, otherwise the determinants are not independent. I will not be careful with this issue in the following, but I guess it can be fixed/formalized with some effort):
$$f=\sum_{i_1,..,i_n} \alpha_{i_1,..,i_n}a_{i_1}^+...a_{i_n}^+|0\rangle$$
This means that:
$$C^+_{\rho} f=\sum_{i,i_1,..,i_n} \langle \phi_i|\rho \rangle \alpha_{i_1,..,i_n} a^+_i a_{i_1}^+...a_{i_n}^+|0\rangle$$
Going back to Slater:
$$C^+_{\rho} f=\sum_{i,i_1,..,i_n} \langle \phi_i|\rho \rangle \alpha_{i_1,..,i_n} S(\phi_i,\phi_{i_1},..\phi_{i_n})$$
and by linearity:
$$C^+_{\rho} f=\sum_{i_1,..,i_n} \alpha_{i_1,..,i_n} S(\sum_i \langle \phi_i|\rho \rangle \phi_i,\phi_{i_1},..\phi_{i_n})=\sum_{i_1,..,i_n} \alpha_{i_1,..,i_n} S(\rho,\phi_{i_1},..\phi_{i_n}) \tag{6}$$
Formula [6] generalizes the definition of the creation operator often found in books:
$$a_i^+ S(\phi_{i_1},..\phi_{i_n})=S(\phi_i,\phi_{i_1},..\phi_{i_n})$$
[6] is valid for every basis set $\phi$, and independently of whether the wavefunction $\rho$ belongs to this set or not. In particular, when $f$ is a Slater determinant we have:
$$C_\rho^+ S(\phi_{i_1},..\phi_{i_n})=S(\rho,\phi_{i_1},..\phi_{i_n}) \tag{7}$$ | {
"domain": "physics.stackexchange",
"id": 93485,
"tags": "quantum-mechanics, operators, hilbert-space, many-body"
} |
Why doesn't Faraday's Law include speed of EM wave | Question: $$\nabla\times \overrightarrow{E} = -\frac{\partial\overrightarrow{B}}{\partial t} $$
Faraday's Law says: any change in the magnetic field causes circulation of electric field.
$$\mathbf{\nabla \times B} = \mu_0 \mathbf{j} + \frac{1}{c^2}\frac{\partial \mathbf{E}}{\partial t}$$
Maxwell's Law says: any change in the electric field causes circulation of magnetic field.
Right?
1-) If it is right, then why doesn't Faraday law have a speed of light parameter but Maxwell has?
2-) Is it because Faraday is all about the electric current in the wire and Maxwell's Law is about vacuum?
3-) Is that the difference between Maxwell's and Faraday's Law?
Answer: If you work in Gaussian units, where the electric and magnetic field appear on the same footing and have the same units, Faraday's law does contain a factor of $1/c$
$\nabla \times \mathbf{E} = -\frac{1}{c}\frac{\partial \mathbf{B}} {\partial t}$.
In Gaussian units, Ampere's equation takes the form
$\nabla \times \mathbf{B} = \frac{4\pi}{c}\mathbf{J} + \frac{1}{c}\frac{\partial \mathbf{E}} {\partial t}$,
where there is now single factor of $c$ in the denominator of both terms on the right-hand side. The advantage of Gaussian units is that they emphasize the fact that $\mathbf{E}$ and $\mathbf{B}$ come together to form the single electromagnetic field (sometimes written $F_{\mu\nu}$).
The real difference between Faraday's and Ampere's equations is the lack of a "magnetic monopole current" in Faraday's law. Despite the symmetry that such a term would add to the equations, no compelling experimental evidence for monopoles has ever been found. | {
"domain": "physics.stackexchange",
"id": 73825,
"tags": "electromagnetism"
} |
Writing a program to read a maximum of 99 elements in an iteration from a list | Question: I want to read a list of elements and pass a maximum of 99 elements to a function for some logical operations.
I have tried this with an array as an example and this code was successful in achieving my purpose.
I just want someone to review it and help optimize it.
/**
*/
package com.review.code.java;
/**
* i> Read a List Input,
* ii> Call a Function with maximum of 99 elements in one iteration Eg:(0..98; 99..198; 199..297; .....)
*
*
*/
public class OptimizeReadingList {
public static int list_size=107;//252,543,... etc. - Input List Size
public static int a[] = new int[list_size];
public static void main(String[] args) {
int read_max_size = 99;
int input_list_size = list_size;
// Add Elements to the Array
for(int j =0;j<input_list_size;j++){
a[j]=j+1;
}
// Print all the Elements in the List
for(int j =0;j<input_list_size;j++){
System.out.print(a[j]+",");
}
System.out.println("\n Print only a max of 99 elements in one iteration using the printElements method");
// Should Call a function printElements with start_index and end_index
for(int i = 0,ele = (read_max_size-1); i<input_list_size ; ){
printElements(i,ele);
// Increment Operations for the next elements(maybe another 99 or less than 99)
i=(ele+1);
if((ele+read_max_size > input_list_size)){
ele=input_list_size;
}else{
ele=ele+ele;
}
}
}
// Consider that this method can read only a maximum of 99 elements in range.
private static void printElements(int i, int ele) {
for(int j = i;j<ele;j++){
System.out.print(a[j]+",");
}
System.out.println("\n");
}
}
Answer: Bugs
Your code doesn't actually do what it says it will do. The problems are here:
for(int i = 0,ele = (read_max_size-1); i<input_list_size ; ){
printElements(i,ele);
// Increment Operations for the next elements(maybe another 99 or less than 99)
i=(ele+1);
if((ele+read_max_size > input_list_size)){
ele=input_list_size;
}else{
ele=ele+ele;
}
}
Problem one
On the first iteration, only 98 items will be printed instead of 99. This is because ele is set to read_max_size-1 which is 98, so printElements() will end up printing the elements from 0..97 instead of 0..98.
Problem two
After each iteration, you do ele=ele+ele;. This is incorrect because it doubles the end index instead of adding 99 to it. So while the 2nd iteration will be ok, the third iteration will end up printing 198 elements instead of 99. The fourth iteration will print 396 elements, etc.
You should replace that line with ele += read_max_size;.
Problem three
On the first iteration, there is no check to see if ele exceeds the array bounds. So if the array is smaller than read_max_size, you will get an out of bounds exception.
Problem four
You are setting i=(ele+1); after each iteration. By doing that, you are skipping one element. It should be i = ele; instead.
Rewritten loop
Here is how the loop could have been written to avoid the problems:
for (int start = 0; start < input_list_size; start += read_max_size) {
int end = start + read_max_size;
if (end > input_list_size) {
end = input_list_size;
}
printElements(start, end);
} | {
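For completeness, here is a self-contained sketch of the same corrected chunking logic, using Math.min to clamp the final chunk (class and method names are illustrative, not from the original post):

```java
import java.util.ArrayList;
import java.util.List;

public class ChunkDemo {
    // Returns the [start, end) index pairs covering [0, size) in chunks
    // of at most chunkSize elements each.
    static List<int[]> chunkRanges(int size, int chunkSize) {
        List<int[]> ranges = new ArrayList<>();
        for (int start = 0; start < size; start += chunkSize) {
            int end = Math.min(start + chunkSize, size); // clamp the last chunk
            ranges.add(new int[]{start, end});
        }
        return ranges;
    }

    public static void main(String[] args) {
        for (int[] r : chunkRanges(107, 99)) {
            System.out.println(r[0] + ".." + (r[1] - 1)); // 0..98 then 99..106
        }
    }
}
```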
"domain": "codereview.stackexchange",
"id": 14368,
"tags": "java, beginner, array, collections"
} |
imu_filter_madgwick not publishing on imu/data | Question:
Running the sample bag-file "ardrone_imu.bag", I am remapping the topic "/ardrone/imu" to "/imu/data_raw" since according to http://wiki.ros.org/imu_filter_madgwick, that is the nodes subscribed topic. I run the filter with:
rosrun imu_filter_madgwick imu_filter_node _use_magnetic_field_msg:=false _publish_debug_topics:=true
Looking at rostopic list -v and rostopic info on the topics I see that /ImuFilter is subscribing to /imu/data_raw, and the /imu/data_raw topic seems fine when echoing it. But when I echo /imu/data, where the fused orientation is supposed to be, nothing appears. I can't see any debug topics publishing either. What am I doing wrong here?
$ rostopic list -v
Published topics:
* /imu/rpy/raw [geometry_msgs/Vector3Stamped] 1 publisher
* /ardrone/mag [geometry_msgs/Vector3Stamped] 1 publisher
* /imu/data [sensor_msgs/Imu] 1 publisher
* /rosout [rosgraph_msgs/Log] 2 publishers
* /tf [tf2_msgs/TFMessage] 1 publisher
* /ImuFilter/parameter_descriptions [dynamic_reconfigure/ConfigDescription] 1 publisher
* /clock [rosgraph_msgs/Clock] 1 publisher
* /rosout_agg [rosgraph_msgs/Log] 1 publisher
* /ImuFilter/parameter_updates [dynamic_reconfigure/Config] 1 publisher
* /imu/magnetic_field [sensor_msgs/MagneticField] 1 publisher
* /imu/rpy/filtered [geometry_msgs/Vector3Stamped] 1 publisher
* /imu/data_raw [sensor_msgs/Imu] 1 publisher
Subscribed topics:
* /imu/magnetic_field [sensor_msgs/MagneticField] 1 subscriber
* /imu/mag [geometry_msgs/Vector3Stamped] 1 subscriber
* /rosout [rosgraph_msgs/Log] 1 subscriber
* /imu/data_raw [sensor_msgs/Imu] 1 subscriber
and
$ rostopic echo /imu/data_raw
header:
seq: 3691
stamp:
secs: 1352907263
nsecs: 373922350
frame_id: ardrone_base_link
orientation:
x: 0.149071718242
y: 0.115987049271
z: -0.981395641479
w: 0.0344560895747
orientation_covariance: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
angular_velocity:
x: -0.265703596544
y: 0.275929386036
z: 0.0895630139048
angular_velocity_covariance: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
linear_acceleration:
x: 0.448720367998
y: 0.806759922206
z: 8.29265825748
linear_acceleration_covariance: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
and
$ rostopic info /imu/data_raw
Type: sensor_msgs/Imu
Publishers:
* /player (http://user:44850/)
Subscribers:
* /ImuFilter (http://user:38581/)
Originally posted by hashten on ROS Answers with karma: 11 on 2016-06-14
Post score: 1
Answer:
Sorry for the late reply. You forgot to remap the magnetometer topic. When use_mag is set to true (default), the node waits for matching pairs of messages on topics /imu/data_raw and either /imu/mag or /imu/magnetic_field. If the user does not provide one of the magnetic field topics, no data will be published on /imu/data. There should be a warning in this case; I've opened a ticket for that.
You probably started everything like this:
roscore
rosparam set use_sim_time true
rosrun imu_filter_madgwick imu_filter_node _use_magnetic_field_msg:=false _publish_debug_topics:=true
rosbag play --clock $(rospack find imu_filter_madgwick)/sample/ardrone_imu.bag /ardrone/imu:=/imu/data_raw
Option 1:
rosbag play ... /ardrone/imu:=/imu/data_raw /ardrone/mag:=/imu/mag
Option 2: (if you don't have or don't want to use magnetometer data)
rosrun imu_filter_madgwick imu_filter_node ... _use_mag:=false
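For repeatability, the same configuration can go in a roslaunch file. This is an untested sketch using the parameter and topic names from the answer above; the remap makes the node subscribe directly to the bag's /ardrone/imu topic:

```xml
<launch>
  <!-- madgwick filter with the magnetometer disabled -->
  <node pkg="imu_filter_madgwick" type="imu_filter_node" name="imu_filter">
    <param name="use_mag" value="false"/>
    <param name="publish_debug_topics" value="true"/>
    <remap from="/imu/data_raw" to="/ardrone/imu"/>
  </node>
</launch>
```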
Originally posted by Martin Günther with karma: 11816 on 2016-07-28
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by aaron cheng on 2016-07-28:
Hi Martin, I just set use_mag to false and it works like a charm. Many thanks.
Comment by 2ROS0 on 2016-07-28:
What additional benefits does the magnetic field message provide? | {
"domain": "robotics.stackexchange",
"id": 24930,
"tags": "ros"
} |
How difficult it is to build simple robots (for example Line follower) using raspberry pi and ROS? | Question: I want to build a low cost robot, running ROS for educational purposes. It can be a simple line follower using raspberry pi and an IR sensor. Is it overambitious as a beginner project? How difficult is it to make ROS run on custom hardware?
P.S. I am a newbie in both robotics and programming and I am more interested in building actual robots than running simulations. Also, I can't afford to buy ROS-compatible robots.
Answer: It is not extremely difficult to achieve this. It is probably a great way to learn after you complete the main ROS tutorials. Using ROS will get you free joystick or keyboard teleoperation of your bot, along with potential integration with the ROS navigation stack.
You have the choice of either interfacing with your robot hardware using the GPIO onboard your Raspberry Pi or using a microcontroller and interfacing that with the computer using rosserial. I would recommend using rosserial and an Arduino, because controlling a small robot and reading analog sensors is better documented on the Arduino platform. Either way is totally doable though. Good luck!
"domain": "robotics.stackexchange",
"id": 993,
"tags": "ros, raspberry-pi, electronics"
} |
What is meant by increase in information embodied in the system? | Question: From Ecosystem Ecology
edited by Sven Erik Jørgensen
After the initial capture of energy across a boundary, ecosystem growth and development is possible by
an increase of the physical structure (biomass),
an increase of the network (more cycling)
an increase in the information embodied in the system.
What is meant by increase in information embodied in the system?
Does it mean increase in diversity of the genes (genetic information)?
Answer: Table 1 on the next page (p. 36) lists a number of properties for each of the three "growth forms".
For information, properties include:
Life history types
Diversity (taxonomic & ecological)
Body size
Stability
I have to admit that some of these properties don't really make sense to me under these headings (at least from the common use of these terms). However, I suspect that they are used in a very specific sense that might make sense if you read the entire book. And systems ecology isn't really my thing.
Note also that exergy (which is used in relation to these three growth forms in the book) is a term borrowed from thermodynamics (ie physics), and it describes potential/available work. Exergy is also negatively related to entropy, so when exergy decreases, entropy increases. Since higher entropy is related to disorder, the opposite goes for exergy, so high exergy means a higher amount of order (within the system), which can be translated into "structural information". I suspect that this is the form of information that the book is referring to, as one of the "growth forms". | {
"domain": "biology.stackexchange",
"id": 6488,
"tags": "ecology, energy, ecosystem"
} |
Is natural gas transported as a liquid or supercritical fluid? | Question: is natural gas transported as a liquid or supercritical fluid?
I am curious as to what phase natural gas is in when being transported in pipelines over long distances. Both liquid and supercritical fluid phases seem to have their own set of unique difficulties involving temperature pressure and feasibility of use. Which is preferred and why?
Answer: In industrial applications, natural gas is sent through pipelines at ambient temperature and pressures somewhat higher than ambient pressure. Thus, it is transported through pipelines as a gas. Note that ambient temperatures are higher than the critical temperature of natural gas, indicating that for this application, natural gas is indeed transported as a "supercritical fluid".
When shipped overseas, natural gas is liquefied at low temperature in order to maximize its density such that the maximum mass of natural gas can be placed in the constant-volume tanks of the LNG ship.
Regarding which is preferred, it takes a lot of energy to liquefy natural gas, and a LOT of insulation on long pipelines to keep it liquid. Since the natural gas must be in the gaseous state to use it as a fuel (its normal use), it is not economic or desirable to attempt to send natural gas through pipelines as a liquid.
"domain": "physics.stackexchange",
"id": 94774,
"tags": "thermodynamics, fluid-dynamics, phase-transition, physical-chemistry, gas"
} |
Problems with robot modeling in model.sdf | Question:
Hello
I am somewhat new to Gazebo
I have modelled a biped humanoid about 5+ feet tall in standard sdf format required for gazebo.
The robot consists of simple boxes for links for collision/visual and I have assigned the joints so i can connect these links together.
There are no errors with the model or the actual XML description. However, I have the following issue: there is no way to set the joints so they are stiff (especially at the hip and knee level). When I let the robot stand, it collapses to the ground shortly after due to wobbling.
I have set the "effort", "friction", and "damping" values to keep the robot from collapsing, but it has been useless.
I have seen the DRC robot's model.sdf, but it does not help.
Is there a way to keep the robot from collapsing?
Any help is greatly appreciated. Thank you for your time
Originally posted by nordegren on ROS Answers with karma: 1 on 2013-01-03
Post score: 0
Answer:
It appears the joints of your robot do not have controllers that apply torques/efforts to them. You could write your own plugin for applying such forces (see this gazebo tutorial).
A simpler option is the use of existing controller plugins, similar to how it is done on the drc robot. That model is a URDF model however, so I am not totally sure you can apply things to your pure SDF.
The simplest option is to use per joint position PID controllers as it is done in the atlas_position_controllers.launch launch file. This gives you one '/[joint_controller_name]/command' topic per joint that listens to std_msgs/Float64 position commands.
Another option is the use of a whole-body trajectory action controller, as it is currently done in the default atlas.launch file, which in turn includes atlas_bringup.launch.
It's recommended you have a look at the drcsim tutorials, those show some basics of how a robot like yours can be controlled (no fancy walking or whole body control available though, all drc teams are busy working on that ;) ).
Originally posted by Stefan Kohlbrecher with karma: 24361 on 2013-01-04
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 12266,
"tags": "ros, gazebo, sdf, robot"
} |
Rotations in space-time | Question: In Landau's Classical Theory of Fields, one finds the statement:
Every rotation in the four-dimensional space can be resolved into six
rotations, in the planes $xy,zy,xz,tx,ty,tz$ (just as every rotation
in ordinary space can be resolved into three rotations in the planes
$xy,zy,xz$).
How can I prove this statement? Thanks.
Answer: Revised answer:
I think Landau is referring to the extrinsic Euler angle parametrization of rotations (also known as the method of Givens rotations or Jacobi rotations). Basically, there is an explicit algorithm by which one can achieve any orientation-preserving orthogonal transformation as a (highly non-unique) sequence of rotations in pairs of coordinates. You can prove the existence of this decomposition in general by induction on the number of coordinates, and this is essentially what Philip Gibbs did in his answer, for the case of dimension 4.
Original answer:
The only way I know how to make Landau's statement both precise and correct is to say that the vector space of first-order infinitesimal rotations in 4 dimensions is spanned by infinitesimal rotations in the 6 pairs of axes. (In particular, the word "resolved" here is a bit of a puzzle to me.)
Any linear transformation in $n$ dimensions (including any rotation) can be written as an $n \times n$ matrix, where for each $k$ between $1$ and $n$, the $k$th column of the matrix gives the coordinates of where the $k$th basis vector goes. In order for a transformation to be a rotation, we need the lengths of the vectors to be preserved, and we need the angles between them to stay the same. We can encode these conditions in a succinct equation asserting that our matrix times its transpose is the identity. The set of such transformations is given by the solutions to the matrix equation, so it forms an algebraic subset of the $n^2$-dimensional space of matrices. In fact, it has the structure of a Lie group, called the orthogonal group $O(n)$.
We can describe infinitesimal rotations by adding an infinitesimal element $\epsilon$ to our number system, which satisfies the properties that $\epsilon \neq 0$ and $\epsilon^2 = 0$. An infinitesimal transformation is a matrix of the form $I + \epsilon M$, where $I$ is the $n \times n$ identity matrix, and $M$ is any matrix with real (non-infinitesimal) entries. In order for this to be a rotation, it is necessary and sufficient that the matrix equation $(I + \epsilon M)(I + \epsilon M)^T = I$ is satisfied. The left side can be expanded as $I + \epsilon M + \epsilon M^T$, so the equation becomes $\epsilon (M + M^T) = 0$. Since the entries of $M$ and $M^T$ are non-infinitesimal, this is equivalent to $M$ being skew-symmetric, i.e., $M = -M^T$. That is, the space of first-order infinitesimal rotations is the space of matrices of the form $I + \epsilon M$, where $M$ is skew-symmetric - this is also called the Lie algebra of the group $O(n)$.
It remains to find a set that spans the space of skew-symmetric matrices. A natural method is given by taking all pairs of distinct coordinates, and for each pair, choosing an antisymmetric combination, i.e., $e_{ij} - e_{ji}$, where $e_{ij}$ is the matrix that has a 1 in the $i$th row and $j$th column and zeroes elsewhere. This forms a linearly independent set of size $\binom{n}{2}$, which is the dimension of the space of skew-symmetric matrices. If a matrix $M$ has the form $e_{ij} - e_{ji}$, we can think of it as an infinitesimal rotation in the $x_i x_j$ direction, since exponentiating yields the rotation: $$e^{tM} = (e_{ii} + e_{jj}) \cos t + (e_{ij} - e_{ji}) \sin t + \sum_{k \not \in \{i,j \} } e_{kk}.$$
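The closed form above can be checked numerically; the sketch below (plain Python, no libraries) builds the 4-dimensional rotation in one coordinate plane and verifies the defining property $RR^T = I$:

```python
# Numerical check of the closed form: for M = e_ij - e_ji in n = 4 dimensions,
# R(t) = (e_ii + e_jj) cos t + (e_ij - e_ji) sin t + sum of e_kk over k
# outside {i, j} satisfies R R^T = I, i.e. it is a rotation in the x_i x_j plane.
import math

def plane_rotation(n, i, j, t):
    """Rotation by angle t in the x_i x_j coordinate plane of R^n."""
    R = [[1.0 if r == c else 0.0 for c in range(n)] for r in range(n)]
    R[i][i] = R[j][j] = math.cos(t)
    R[i][j] = math.sin(t)
    R[j][i] = -math.sin(t)
    return R

def times_transpose(R):
    """Compute R @ R^T using plain nested lists."""
    n = len(R)
    return [[sum(R[r][k] * R[c][k] for k in range(n)) for c in range(n)]
            for r in range(n)]

R = plane_rotation(4, 0, 3, 0.7)   # rotation mixing coordinates 0 and 3 (e.g. the tx plane)
P = times_transpose(R)
is_identity = all(abs(P[r][c] - (1.0 if r == c else 0.0)) < 1e-12
                  for r in range(4) for c in range(4))
```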
In dimensions 3 and 4, the rotations you listed are precisely those given by pairs of distinct coordinates. | {
"domain": "physics.stackexchange",
"id": 1295,
"tags": "special-relativity, spacetime"
} |
tflite_convert a Keras h5 model which has a custom loss function results in a ValueError, even if I add it in the Keras losses import | Question: I have written a SRGAN implementation. In the entry point class of the Python program, I declare a function which returns a mean square using the VGG19 model:
# <!--- COST FUNCTION --->
def build_vgg19_loss_network(ground_truth_image, predicted_image):
loss_model = Vgg19Loss.define_loss_model(high_resolution_shape)
return mean(square(loss_model(ground_truth_image) - loss_model(predicted_image)))
import keras.losses
keras.losses.build_vgg19_loss_network = build_vgg19_loss_network
# <!--- /COST FUNCTION --->
(Vgg19Loss class shown further below)
As you can see, I have added this custom loss function in the import keras.losses. Why? Because I thought it could solve the following problem...: When I execute the command tflite_convert --output_file=srgan.tflite --keras_model_file=srgan.h5, the Python interpreter raises this error:
raise ValueError('Unknown ' + printable_module_name + ':' + object_name)
ValueError: Unknown loss function:build_vgg19_loss_network
However, it didn't solve the problem. Any other solution which could work?
Here is the Vgg19Loss class:
from keras import Model
from keras.applications import VGG19
class Vgg19Loss:
def __init__(self):
pass
@staticmethod
def define_loss_model(high_resolution_shape):
model_vgg19 = VGG19(False, 'imagenet', input_shape=high_resolution_shape)
model_vgg19.trainable = False
for l in model_vgg19.layers:
l.trainable = False
loss_model = Model(model_vgg19.input, model_vgg19.get_layer('block5_conv4').output)
loss_model.trainable = False
return loss_model
Answer: I tried the code you posted the following way:
from keras import Model
from keras.applications import VGG19
import keras.backend as K
class Vgg19Loss:
def __init__(self):
pass
@staticmethod
def define_loss_model(high_resolution_shape):
model_vgg19 = VGG19(False, 'imagenet', input_shape=high_resolution_shape)
model_vgg19.trainable = False
for l in model_vgg19.layers:
l.trainable = False
loss_model = Model(model_vgg19.input, model_vgg19.get_layer('block5_conv4').output)
loss_model.trainable = False
return loss_model
def build_vgg19_loss_network(ground_truth_image, predicted_image):
loss_model = Vgg19Loss.define_loss_model(high_resolution_shape) # where is this variable coming from?
return K.mean(K.square(loss_model(ground_truth_image) - loss_model(predicted_image)))
import keras.losses
keras.losses.build_vgg19_loss_network = build_vgg19_loss_network
print(keras.losses.build_vgg19_loss_network) # <function build_vgg19_loss_network at 0x7f05e8e1cbf8>
I get no error messages and the function is assigned to the losses module. That means the problematic lines are probably not part of what you posted. It would be nice to know which line of code raises the error that you quoted.
However, I'm not sure where this high_resolution_shape argument on line 22 in your build_vgg19_loss_network function is coming from. If this is a global constant, it should be written in all uppercase letters separated by underscores to prevent confusion. If it is not defined it will throw a NameError sooner or later.
If I execute keras.losses.build_vgg19_loss_network(None, None) after running the code above, I get the following error message:
NameError: name 'high_resolution_shape' is not defined
Edit: If this error happens only during TFLite conversion, it does so because custom objects are not yet supported by the TFLiteConverter in tensorflow 1.x. However, there is a commit on the tensorflow Github repo that addresses this issue and adds support for custom objects (see also the related pull request). It should be part of the official tensorflow v2.0.0-beta1. | {
"domain": "datascience.stackexchange",
"id": 5821,
"tags": "python, keras, tensorflow"
} |
How to create an LR(k) grammar for an arbitrary k | Question: Is there a simple procedure for constructing a grammar that is LR(k) but not LR(k-1), for any k?
Answer: Here's one pattern for an LR(k) grammar which is not LR(k-1).
I didn't fill in the definition of $A$; there's nothing particularly special about it. It might have an empty right-hand side, or it might match any LR(k) subgrammar. $a^k$ represents $k$ instances of $a$.
$$
\begin{align}S&\to B a^{k-1} b\\
S&\to C a^{k-1} c\\
B&\to A\\
C&\to A\\
A&\to \text{see above}
\end{align}
$$
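For concreteness, instantiating the pattern with $k = 2$ and $A \to \varepsilon$ (my own instantiation) gives:
$$
\begin{align}S&\to B a b\\
S&\to C a c\\
B&\to A\\
C&\to A\\
A&\to \varepsilon
\end{align}
$$
Before shifting the first $a$, the parser must already reduce the empty $A$ to either $B$ or $C$, and that choice depends on the second upcoming token ($b$ or $c$): exactly two tokens of lookahead.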
Clearly, it is not possible to determine whether $A$ should be reduced to $B$ or to $C$ without knowing what the $k$th following token is. So the grammar is not LR(i) for any $i \lt k$. The reduction can be determined using $k$ lookahead tokens, so the grammar is LR(k) provided that $A$ is. | {
"domain": "cs.stackexchange",
"id": 15531,
"tags": "context-free, parsers, lr-k"
} |
Simulating an orbit, primary is not at focus | Question: I've been toying around with some -very- simple orbital simulators, mostly using preexisting physics libraries (I took a layman's stab at doing it with vectors too). The thing that is confusing me is that my orbits do not behave as in reality - the primary is always at the center of the ellipse rather than at one of the foci. I get the same result regardless of the engine or library I use.
I've simply been putting in a primary and an orbiter, with the primary at the center of the layout. I plug in the formula $$F = G\frac{m_1m_2}{r^2}$$ with the force directed towards the primary. I've tried adjusting the time step, but I get the same result. I'm simply confused as to what would cause this.
Update
You're right, I was multiplying, I just didn't know it (something about the way Construct 2 interpreted my commands). I had it configured to apply a force that was essentially G(arbitrary) / distance * distance (it didn't like it when I tried ^2). When I put distance squared in a variable and then divided by it, things worked.
Now I had tried this before in Panda3d with the same problem, so I'll have to go back and look at that.
Answer: A few basic checks:
What size time steps are you taking? Way too big will lead to wild errors. Way too small and changes in velocity and position will be incorrect due to roundoff errors (finite precision).
Is the primary mass much larger than the orbiter? In real life, the Moon and Earth orbit around a common inertial center. Since the Earth is much heavier, we can declare it stationary at the risk of small errors. I don't think this explains your odd results, however. But it could be a helpful clue to know who is not guilty. (What was the famous Sherlock quote?)
What initial velocity are you giving the orbiter?
Do you have a choice of integration method? I can't imagine the wrong choice giving the result you get, but still, clues help...
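As a sanity check, here is a hedged sketch (not the asker's Construct 2 or Panda3D code) of a leapfrog integration of the inverse-square law. With the force correctly divided by $r^2$, an orbit started at perihelion $r = 1$ reaches aphelion at $2a - 1$, so the primary sits at a focus of the ellipse rather than at its center:

```python
# Hedged sketch: leapfrog (kick-drift-kick) integration of F = -GM/r^2
# directed toward a primary fixed at the origin.
import math

GM = 1.0
dt = 0.001
x, y = 1.0, 0.0        # start at perihelion, primary at the origin
vx, vy = 0.0, 1.2      # tangential speed above circular speed (1.0)

def accel(x, y):
    r2 = x * x + y * y
    r = math.sqrt(r2)
    a = -GM / r2       # divide by r^2; multiplying here gives the wrong orbit
    return a * x / r, a * y / r

r_min = r_max = 1.0
ax, ay = accel(x, y)
for _ in range(20000):                        # 20 time units, > 1 full period
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay  # kick
    x += dt * vx;        y += dt * vy         # drift
    ax, ay = accel(x, y)
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay  # kick
    r = math.hypot(x, y)
    r_min, r_max = min(r_min, r), max(r_max, r)

# vis-viva at the start: 1/a = 2/r - v^2/GM = 0.56, so r_max = 2a - 1
focus_offset = (r_max - r_min) / 2.0          # center-to-focus distance > 0
```

If the ellipse were centered on the primary, `r_min` and `r_max` would coincide with the semi-axes measured from the origin and `focus_offset` would be zero; instead it comes out well away from zero.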
You really are telling it inverse square, right? And it's inverse, right? If you multiply rather than divide by r squared, that will explain the centered ellipses. | {
"domain": "physics.stackexchange",
"id": 7033,
"tags": "newtonian-gravity, orbital-motion, simulations"
} |
ROS time across machines | Question:
When there are several machines involved, how are timestamps created? It looks to me like they're being generated by each machine's individual time, so if one machine is out of sync with another then all the messages will be in the past or the future. Is this actually the case or am I missing something, and how do people deal with this limitation?
Originally posted by John Hoare on ROS Answers with karma: 765 on 2011-10-17
Post score: 9
Answer:
You're correct, they just use the system's current time. The typical way I've seen to synchronize time across computers is to use Chrony. With Ubuntu, you should be able to just run apt-get install chrony.
Here are some resources I've found on a quick search:
http://answers.ros.org/question/2140/chrony-configuration-and-limitations
http://pr2support.willowgarage.com/wiki/PR2%20Manual/Chapter13#Clock_Synchronization
The important information is that your ROS system computers are tightly coupled in time, but loosely tied to the outside world's time.
Originally posted by Chad Rockey with karma: 4541 on 2011-10-17
This answer was ACCEPTED on the original site
Post score: 16
Original comments
Comment by Den on 2022-03-28:
If anyone requires, the link for: http://pr2support.willowgarage.com/wiki/PR2%20Manual/Chapter13#Clock_Synchronization has been changed to: https://www.clearpathrobotics.com/assets/downloads/pr2/pr2_manual_r321.pdf since PR2 has changed hands from Willow Garage to Clear Path Robotics | {
"domain": "robotics.stackexchange",
"id": 6992,
"tags": "ros, multiplemachines, time"
} |
Holonomic constraints on an upright wheel | Question: A wheel moving in free space has the six degrees of freedom of a rigid body. If we constrain it to be upright on a plane and to roll without slipping, how many holonomic and nonholonomic constraints is the wheel subject to?
This is a question from NWU's course on Foundations of Robot Motion.
According to my understanding,
the wheel has 2 non-holonomic constraints:
wheel is rolling and not slipping and
the wheel cannot slide along its axis of rotation.
But I don't understand how many holonomic constraints it would have.
Isn't the wheel being unable to slide along its axis of rotation also a holonomic constraint?
I hope someone can help me with the number of holonomic constraints on the wheel.
Answer: Holonomic constraints are ones that are based on the position only, and they restrict the system to an envelope.
You've identified one stated constraint that is non-holonomic and broken it down into two components, as both are constraints related to the velocity.
The remaining constraints from the problem statement are then holonomic:
The wheel is upright
The wheel is on the plane
You've decided above that not sliding is a non-holonomic constraint, so why question that? Because it restricts the change in position from one state to the next, it is a constraint on the derivative of the position, not on the value of the position itself.
If you create a state space representation of the wheel and write the constraints out you may find it more revealing that way such that you can reveal the structure of the constraints and apply the definition of a holonomic constraint to each one.
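As a sketch of that state-space bookkeeping (my notation, not from the course): with configuration $(x, y, z, \psi, \phi, \theta)$, tilt $\psi$, heading $\phi$, spin angle $\theta$, and wheel radius $r$, the four constraints are
$$
\begin{align}
z &= r && \text{(wheel center one radius above the plane)}\\
\psi &= 0 && \text{(upright)}\\
\dot x &= r\dot\theta\cos\phi && \text{(rolling without slipping)}\\
\dot y &= r\dot\theta\sin\phi && \text{(together with the above, no sideways sliding)}
\end{align}
$$
The first two involve only configuration variables, so they are holonomic; the last two involve velocities and cannot be integrated into position-only relations, so they are nonholonomic: two of each, reducing the six rigid-body freedoms to a four-dimensional configuration space $(x, y, \phi, \theta)$ with only two velocity freedoms.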
There's also a good discussion of the difference here: https://physics.stackexchange.com/questions/409951/what-are-holonomic-and-non-holonomic-constraints | {
"domain": "robotics.stackexchange",
"id": 2456,
"tags": "kinematics, dynamics, navigation"
} |
What is this liriope looking plant? | Question: They look like monkey grass, in the midst, a tall, skinny stem with bright orange flowers on top.
One of the plants, although it looks just like the others, has pink, purplish flowers instead of orange.
I would like to know what these are so I can start to take care of them properly.
Answer: Your question is better suited to Gardening SE, where members can identify plants. As it is, I'm an active member there and the identity of these flowers is straightforward. They're all different cultivars of the daylily (plants in the Hemerocallis genus). If you use Google image search, you'll see photos of various cultivars. Basic structure:
Notice the flowers on a branched (or unbranched) scape (tall stem); leaves arise from the base of the plant. Flowers can be single or double (sometimes more); single flowers (as in your lower left photo) have six petals.
While Asian and Oriental lilies are toxic to pets (even the pollen is toxic to cats if they lick it off their fur), daylilies aren't. They're easy to tell apart, as the daylily has long narrow strap-like leaves with flowers on stems with no leaves. Asian and Oriental lilies have small leaves that continue up the length of the stem like this -
This includes Easter lilies.
Daylilies are so named because individual flowers last only one day. But each stem has many buds on it that continue to mature over a few weeks, so the plant will produce flowers for about a month (as some stems don't have open flowers yet when the first stems start flowering). | {
"domain": "biology.stackexchange",
"id": 7390,
"tags": "botany, species-identification, flowers"
} |
Transforming digitized noisy signal before applying cross-correlation | Question: I'm trying to grasp the concept of cross-correlation as it applies to CDMA in the GPS C/A signal where noise is involved and the SNR is low.
My understanding is that before calculating the cross-correlation with the C/A code one has to transform (just shift by its average value?) the digitized and demodulated received signal so that there are both positive and negative values in the sequence and the sum of these on average should be zero, is this correct?
(I understand that there's much more to acquiring the GPS C/A signal, but I'm trying to grasp how cross-correlation can be applied to a digitized signal)
Answer: Whether to offset or not is simply a choice in how you define the math involved, and one choice can be more convenient for further processing. Subtracting a constant, or otherwise scaling the waveform, does not change the signal to noise ratio.
Correlation is to multiply and accumulate, and the cross-correlation and auto-correlation functions show this correlation as the sequences are shifted in time relative to each other. Cross-correlation is between two different functions, and auto-correlation is between a function and itself.
The cross-correlation of x[n] and y[n], with a shift index m, can be given as:
$$\rho_{xy}[m] = \Sigma_n x[n]y[n+m]$$
(Note: For complex signals, the product must be complex conjugate product)
With that said, consider this very simple example using an 11-chip Barker code to demonstrate my first statement. This Barker code has a similar property to C/A codes in that a correlation with shifted versions of itself (autocorrelation) will have a very low correlation relative to when the code is aligned.
1 0 1 1 0 1 1 1 0 0 0
This code written with 1's and 0's has a mean of approximately 1/2, so you can optionally remove the mean and similarly for convenience scale by a factor of two by mapping 0 to -1 such that the code is:
1 -1 1 1 -1 1 1 1 -1 -1 -1
This results in the following correlation when the sequences are aligned in time (which could be used in direct-sequence-spread-spectrum, DSSS, to send a data symbol with value = "1"):
And the following if the code was inverted (which could be used in DSSS to send a data symbol with value = "0"):
And the following for any other shift (I did a rotational shift of one sample here, corresponding to m = -1):
With the following autocorrelation result as a plot:
We could similarly instead perform the correlation with the following operations for multiplication (basically the XNOR function; we could do the same with XOR and get an inverted result):
1 x 0 = 0
0 x 1 = 0
1 x 1 = 1
0 x 0 = 1
Resulting in a similar result, with an offset and scaling in the correlation output instead of an offset and scaling of the correlation input. So in this case the maximum correlation would be 11, the maximum correlation when inverted is 0, and the autocorrelation for any other offset would be 5. Double this and subtract 11 and you get the same result as before.
(Note: unlike maximum-length pseudo-random sequences, which will also correlate to -1 at all non-zero offsets when you use +1 and -1 values, C/A codes have cross-correlation values of either -1, +63 or -65, which is still significantly less than the 1023 value when aligned).
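The numbers above are easy to verify in a few lines of Python; this sketch computes the circular autocorrelation of the length-11 code in its ±1 form (0s of the chip sequence mapped to -1):

```python
# Circular autocorrelation of the length-11 Barker code in +/-1 form:
# it peaks at 11 when aligned and is -1 at every other shift.
code = [1, -1, 1, 1, -1, 1, 1, 1, -1, -1, -1]   # 0 chips mapped to -1
N = len(code)

def circular_autocorr(x, m):
    """Multiply-and-accumulate of x against a rotation of itself by m chips."""
    return sum(x[n] * x[(n + m) % N] for n in range(N))

peak = circular_autocorr(code, 0)                      # aligned: 11
sidelobes = [circular_autocorr(code, m) for m in range(1, N)]  # all -1
```

In the 0/1 "count the matches" formulation, the same shifts give 11 and 5 matches, and doubling and subtracting 11 recovers these ±1 values.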
The reason this works for GPS in the case of low SNR is that when you add each of the 1023 chips with noise present, if the noise on each chip is independent ("white noise"), then the signal magnitude will accumulate by a factor of $1023$, but the standard deviation of the noise will only accumulate by a factor of $\sqrt{1023}$, thus in terms of SNR you get a $10\log_{10}(N)$ processing gain. Consider a simpler case of adding two independent Gaussian random variables with equal distribution and equal but non-zero mean: the mean will double but the standard deviation will only increase by $\sqrt{2}$.
"domain": "dsp.stackexchange",
"id": 8159,
"tags": "noise, cross-correlation"
} |
Why does this object periodically turn itself? | Question: See below gif image taken from here.
Or see this Youtube video about 30 sec in.
Is this a real effect?
Why does it seem to turn periodically?
Can it be explained by classical mechanics alone?
Is there a simple equation that models this behaviour?
Answer: It's a classical mechanics effect for sure, although a really interesting one. Following links on the "Dzhanibekov effect" one gets to Marsden and Ratiu's "Introduction to Mechanics and Symmetry", Chapter 15, Section 15.9, "Rigid Body Stability", treating this with use of the Casimir functions.
From remark 1: A rigid body tossed about its middle axis will undergo an interesting half twist when the opposite saddle point is reached.
Here is another and more profound example under weightless conditions.
http://www.youtube.com/watch?v=L2o9eBl_Gzw
This seems to be a home experiment where a guy throws the spinning object upwards.
http://www.youtube.com/watch?v=3VwS5ykAUHI
And this seems to be a computer simulation.
http://www.youtube.com/watch?v=LR5hkgfRPno
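A minimal version of such a simulation is sketched below (my own sketch, not the linked one): Euler's equations for a torque-free rigid body, $I_1\dot\omega_1 = (I_2 - I_3)\omega_2\omega_3$ and cyclic permutations, integrated with RK4. Starting the spin almost entirely about the intermediate axis ($I_1 < I_2 < I_3$) with a tiny wobble, the $\omega_2$ component periodically flips sign, which is the half twist seen in the videos:

```python
# Sketch: free rigid body with principal moments I1 < I2 < I3, spun almost
# exactly about the intermediate axis. The tiny transverse wobble grows and
# the body flips: omega_2 swings from +1 to near -1.
I1, I2, I3 = 1.0, 2.0, 3.0

def deriv(w):
    """Euler's equations for the body-frame angular velocity (w1, w2, w3)."""
    w1, w2, w3 = w
    return ((I2 - I3) * w2 * w3 / I1,
            (I3 - I1) * w3 * w1 / I2,
            (I1 - I2) * w1 * w2 / I3)

def rk4_step(w, dt):
    k1 = deriv(w)
    k2 = deriv(tuple(w[i] + 0.5 * dt * k1[i] for i in range(3)))
    k3 = deriv(tuple(w[i] + 0.5 * dt * k2[i] for i in range(3)))
    k4 = deriv(tuple(w[i] + dt * k3[i] for i in range(3)))
    return tuple(w[i] + dt / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
                 for i in range(3))

w = (0.01, 1.0, 0.01)          # almost pure intermediate-axis spin
min_w2 = w[1]
for _ in range(30000):         # 30 time units at dt = 0.001
    w = rk4_step(w, 0.001)
    min_w2 = min(min_w2, w[1])
# min_w2 dips well below zero: the periodic flip about the middle axis
```

Repeating the run with the spin about the largest or smallest axis instead shows no such flip, which is the stability statement in Marsden and Ratiu.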
There is a related unstable orbit effect which you can try out easily yourself with a tennis racket. A treatment due to Ashbauch Chicone and Cushman is here:
Mark S. Ashbaugh, Carmen C. Chicone and Richard H. Cushman, The Twisting Tennis Racket, Journal of Dynamics and Differential Equations, Volume 3, Number 1, 67-85 (1991). (One time found at http://math.ucalgary.ca/files/publications/cushman/tennis.pdf which is no longer a working link.)
http://www.youtube.com/watch?v=4dqCQqI-Gis | {
"domain": "physics.stackexchange",
"id": 1969,
"tags": "newtonian-mechanics, rotational-dynamics, rigid-body-dynamics, stability"
} |
Develop a flight control system | Question:
Hi all! I'm currently building a quadcopter (accelerometer + gyroscope + motors + props + ESCs...).
I would like to develop my own flight controller.
I just discovered ROS.
Could ROS be interesting for developing and testing a flight controller?
Thanks all
Originally posted by jasomo on ROS Answers with karma: 1 on 2015-01-21
Post score: 0
Answer:
Yes, ROS can be used to build flight controllers. There are many people using ROS on quadcopters. A common pattern is to use ROS for higher level functionality and dedicated microcontrollers at the autopilot level. With ROS 2.0 we will be working toward supporting ROS down onto microcontrollers.
Originally posted by tfoote with karma: 58457 on 2015-03-12
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 20641,
"tags": "control, ros, quadcopter, drone"
} |
How to get the stereo_image_proc package? | Question:
I want to use the stereo_image_proc package to process stereo images and get point clouds, but I get the warning "stereo_image_proc is not running". What should I do? I can't find stereo_image_proc on my local computer.
Originally posted by Sally on ROS Answers with karma: 1 on 2016-06-03
Post score: 0
Answer:
Hi there,
Assuming that you are using ROS indigo under Ubuntu, you can install stereo_image_proc package by doing:
sudo apt-get install ros-indigo-stereo-image-proc
You can check out stereo_image_proc's wiki page to find out how to play with it.
I hope this helps
Originally posted by Martin Peris with karma: 5625 on 2016-06-03
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 24798,
"tags": "ros"
} |
No water from hole of the cup because of weightlessness? | Question: Let's say I have a cup with water, but with a little hole somewhere in the bottom of the cup.
Galileo worked out that if you just let something fall, it will accelerate towards the ground at the same rate whatever it is.
The above seems to be the reason why the water doesn't come out of the hole during free fall: both the water and the cup accelerate towards the earth at the same rate.
Can someone explain the above phenomenon in terms of weightlessness?
Answer: When in free fall, the water and the cup both experience the same acceleration. Therefore, they both move together. Therefore, there is no "force" that wants to separate the water from the cup.
How does weightlessness come into the picture?
OK, forget the cup and forget the water. Let's consider a bathroom scale instead.
When you stand on a bathroom scale it measures your "weight". That is, it measures the force between the soles of your feet and the ground. Your body wants to "freely fall" toward the center of the Earth, but the ground gets in the way: The ground pushes up on your feet exactly as hard as is necessary to stop you from falling, and the scale measures that force.
Here's what happens if we move both you and the scale to the International Space Station. Your body doesn't just want to freely fall toward the center of the Earth, it actually does freely fall. And, the scale also freely falls, and the space station surrounding you also freely falls. There's nothing up there to push up on your feet to stop you from falling because everything is falling together. Even if your feet are touching the scale, and the scale is touching the wall, there's no force acting because there's nothing that gets in the way of anything else falling.
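The same point in two lines of arithmetic (a hedged sketch, not part of the original answer): a scale reads the normal force $N = m(g - a)$, where $a$ is the downward acceleration of the floor and scale under you:

```python
# A scale reads the normal force N = m * (g - a), where a is the downward
# acceleration of the scale-plus-person system. On the ground a = 0; in
# free fall a = g, so N = 0 and the scale reads "weightless".
def scale_reading(mass_kg, g=9.81, frame_accel=0.0):
    """Normal force in newtons; frame_accel is the downward acceleration
    of the floor/scale (equal to g in free fall)."""
    return max(0.0, mass_kg * (g - frame_accel))

on_ground = scale_reading(70.0)                      # the usual 'weight'
free_fall = scale_reading(70.0, frame_accel=9.81)    # weightless
```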
Weight is force. No force is no weight... Weightlessness. | {
"domain": "physics.stackexchange",
"id": 42876,
"tags": "newtonian-gravity, free-fall, weight"
} |
Implementation of a Location object, to be used in building a text-based adventure game | Question: Based on responses to my question posted here, I've built a full implementation of a Location object. Such an object can be used to build a map for a text-based adventure game.
Feel free to review the code, leave some comments about the good and the bad of what you see.
Things to notice
Objects of this class can only be constructed via its static factory method.
This class is immutable, yet has circular references to other objects of the same type (causing a 'chicken or the egg' conundrum). The circular references are made possible via lazy evaluation: the factory method accepts a Func<Location> delegate to map those references.
Each circular reference is hidden behind a public property (North, South, East, West). Each call to the delegate is tucked within a try-catch block, just in case each delegate contains an error condition, so that a meaningful exception message can be thrown.
using System;
using System.Diagnostics.Contracts;
/// <summary>
/// Represents a location within the game. Instances of this class are immutable.
/// </summary>
public sealed class Location
{
/// <summary>
/// The description of this location.
/// </summary>
private readonly string description;
/// <summary>
/// The delegate that returns the location that is to the east of this location. -or- The delegate that returns <c>null</c> if there is no location to the
/// east.
/// </summary>
private readonly Func<Location> east;
/// <summary>
/// The name of this location.
/// </summary>
private readonly string name;
/// <summary>
/// The description of the location that is displayed to the game player.
/// </summary>
private readonly string narration;
/// <summary>
/// The delegate that returns the location that is to the north of this location. -or- The delegate that returns <c>null</c> if there is no location to the
/// north.
/// </summary>
private readonly Func<Location> north;
/// <summary>
/// The delegate that returns the location that is to the south of this location. -or- The delegate that returns <c>null</c> if there is no location to the
/// south.
/// </summary>
private readonly Func<Location> south;
/// <summary>
/// The delegate that returns the location that is to the west of this location. -or- The delegate that returns <c>null</c> if there is no location to the
/// west.
/// </summary>
private readonly Func<Location> west;
/// <summary>
/// Initializes a new instance of the <see cref="Location"/> class.
/// </summary>
private Location()
: this(name: "[Empty]", description: "[Empty]", narration: "[Empty]", north: () => null, south: () => null, east: () => null, west: () => null)
{
}
/// <summary>
/// Initializes a new instance of the <see cref="Location"/> class.
/// </summary>
/// <param name="name">
/// The name of the location.
/// </param>
/// <param name="description">
/// The description of the location.
/// </param>
/// <param name="narration">
/// The description of the location that is displayed to the game player.
/// </param>
/// <param name="north">
/// A delegate that returns the location that is to the north of this location. -or- A delegate that returns <c>null</c> if there is no location to the
/// north.
/// </param>
/// <param name="south">
/// A delegate that returns the location that is to the south of this location. -or- A delegate that returns <c>null</c> if there is no location to the
/// south.
/// </param>
/// <param name="east">
/// A delegate that returns the location that is to the east of this location. -or- A delegate that returns <c>null</c> if there is no location to the east.
/// </param>
/// <param name="west">
/// A delegate that returns the location that is to the west of this location. -or- A delegate that returns <c>null</c> if there is no location to the west.
/// </param>
private Location(
string name, string description, string narration, Func<Location> north, Func<Location> south, Func<Location> east, Func<Location> west)
{
this.name = name;
this.description = description;
this.narration = narration;
this.north = north;
this.south = south;
this.east = east;
this.west = west;
}
/// <summary>
/// Gets an object that represents an empty location. This property is a pure function. Never returns <c>null</c>.
/// </summary>
[Pure]
public static Location Empty { get; } = new Location();
/// <summary>
/// Gets the description of the location that is displayed to the game player. This property is a pure function. Never returns <c>null</c>.
/// </summary>
[Pure]
public string Narration
{
get
{
Contract.Ensures(null != Contract.Result<string>());
return this.narration;
}
}
/// <summary>
/// Gets the location that is to the north of the this location.
/// </summary>
public Location North
{
get
{
return this.north();
}
}
/// <summary>
/// Gets the location that is to the south of this location.
/// </summary>
public Location South
{
get
{
return this.south();
}
}
/// <summary>
/// Gets the location that is to the east of this location.
/// </summary>
public Location East
{
get
{
return this.east();
}
}
/// <summary>
/// Gets the location that is to the west of this location.
/// </summary>
public Location West
{
get
{
return this.west();
}
}
[Pure]
public static bool operator !=(Location x, Location y)
{
return !(x == y);
}
[Pure]
public static bool operator ==(Location x, Location y)
{
if (Object.ReferenceEquals(x, y))
return true;
if (((object)x == null) || ((object)y == null))
return false;
return x.Equals(y);
}
/// <summary>
/// Creates a new instance of the <see cref="Location"/> class. Never returns <c>null</c>.
/// </summary>
/// <param name="name">
/// The name of the location.
/// </param>
/// <param name="description">
/// The description of the location.
/// </param>
/// <param name="north">
/// A delegate that returns the location that is to the north of this location.
/// </param>
/// <param name="south">
/// A delegate that returns the location that is to the south of this location.
/// </param>
/// <param name="east">
/// A delegate that returns the location that is to the east of this location.
/// </param>
/// <param name="west">
/// A delegate that returns the location that is to the west of this location.
/// </param>
/// <exception cref="ArgumentNullException">
/// <paramref name="name"/> or <paramref name="description"/> is <c>null</c>.
/// </exception>
/// <returns>
/// A new instance of <see cref="Location"/>.
/// </returns>
public static Location Create(
string name, string description, Func<Location> north = null, Func<Location> south = null, Func<Location> east = null, Func<Location> west = null)
{
if (null == name)
{
throw new ArgumentNullException(
message: $"Context: Creating a new instance of {nameof(Location)}.{Environment.NewLine}" +
$"Problem: Attempted to create a new instance of {nameof(Location)} with a {nameof(name)} that is a null reference.",
paramName: nameof(name));
}
Contract.Ensures(null != Contract.Result<Location>());
try
{
north?.Invoke();
}
catch (Exception e)
{
throw new GameException(
message: $"Context: Creating a new instance of {nameof(Location)}.{Environment.NewLine}" +
$"Problem: Attempted to create a new instance of {nameof(Location)} by passing via the {nameof(north)} parameter a delegate that " +
$"contains an error condition.{Environment.NewLine}" +
$"Possible Solution: Examine the initialization of the game map and ensure that delegates used to create {nameof(Location)} " +
$"instances do not contain error conditions.{Environment.NewLine}" +
$"Remarks: See inner exception.",
inner: e);
}
try
{
south?.Invoke();
}
catch (Exception e)
{
throw new GameException(
message: $"Context: Creating a new instance of {nameof(Location)}.{Environment.NewLine}" +
$"Problem: Attempted to create a new instance of {nameof(Location)} by passing via the {nameof(south)} parameter a delegate that " +
$"contains an error condition.{Environment.NewLine}" +
$"Possible Solution: Examine the initialization of the game map and ensure that delegates used to create {nameof(Location)} " +
$"instances do not contain error conditions.{Environment.NewLine}" +
$"Remarks: See inner exception.",
inner: e);
}
try
{
east?.Invoke();
}
catch (Exception e)
{
throw new GameException(
message: $"Context: Creating a new instance of {nameof(Location)}.{Environment.NewLine}" +
$"Problem: Attempted to create a new instance of {nameof(Location)} by passing via the {nameof(east)} parameter a delegate that " +
$"contains an error condition.{Environment.NewLine}" +
$"Possible Solution: Examine the initialization of the game map and ensure that delegates used to create {nameof(Location)} " +
$"instances do not contain error conditions.{Environment.NewLine}" +
$"Remarks: See inner exception.",
inner: e);
}
try
{
west?.Invoke();
}
catch (Exception e)
{
throw new GameException(
message: $"Context: Creating a new instance of {nameof(Location)}.{Environment.NewLine}" +
$"Problem: Attempted to create a new instance of {nameof(Location)} by passing via the {nameof(west)} parameter a delegate that " +
$"contains an error condition.{Environment.NewLine}" +
$"Possible Solution: Examine the initialization of the game map and ensure that delegates used to create {nameof(Location)} " +
$"instances do not contain error conditions.{Environment.NewLine}" +
$"Remarks: See inner exception.",
inner: e);
}
string northExit, southExit, eastExit, westExit, exits, separator, narration;
northExit =
null != north
? " North"
: string.Empty;
southExit =
null != south
? " South"
: string.Empty;
eastExit =
null != east
? " East"
: string.Empty;
westExit =
null != west
? " West"
: string.Empty;
exits =
string.Empty != string.Concat(northExit, southExit, eastExit, westExit)
? $"{Environment.NewLine}Exits:{northExit}{southExit}{eastExit}{westExit}"
: string.Empty;
separator = new string('-', name.Length);
narration =
$"{name}{Environment.NewLine}" +
$"{separator}{Environment.NewLine}" +
$"{description}{Environment.NewLine}" +
$"{separator}" +
$"{exits}";
Func<Location> emptyLocationDelegate = () => Location.Empty;
return new Location(
name: name,
description: description ?? string.Empty,
narration: narration,
north: north ?? emptyLocationDelegate,
south: south ?? emptyLocationDelegate,
east: east ?? emptyLocationDelegate,
west: west ?? emptyLocationDelegate
);
}
[Pure]
public override bool Equals(object obj)
{
var other = obj as Location;
if (ReferenceEquals(other, null))
return false;
return other.name == this.name
&& other.description == this.description
&& other.narration == this.narration
&& other.North == this.North
&& other.South == this.South
&& other.East == this.East
&& other.West == this.West;
}
[Pure]
public override int GetHashCode()
{
int hashCode = new {
Name = this.name,
Description = this.description,
Narration = this.narration,
North = this.North,
South = this.South,
East = this.East,
West = this.West
}.GetHashCode();
return hashCode;
}
/// <summary>
/// Returns a <see cref="string"/> that represents the current object. This method is a pure function. Never returns <c>null</c>.
/// </summary>
/// <returns>
/// A <see cref="string"/> that represents the current object.
/// </returns>
[Pure]
public override string ToString()
{
Contract.Ensures(null != Contract.Result<string>());
return this.name;
}
/// <summary>
/// Used by Microsoft Code Contracts to verify object invariants.
/// </summary>
[ContractInvariantMethod]
private void ObjectInvariants()
{
Contract.Invariant(null != this.name);
Contract.Invariant(null != this.description);
Contract.Invariant(null != this.narration);
Contract.Invariant(null != this.north);
Contract.Invariant(null != this.south);
Contract.Invariant(null != this.east);
Contract.Invariant(null != this.west);
}
}
And here's how client code would initialize Location objects:
public static class GameMap
{
/// <summary>
/// Creates the game map and returns the starting <see cref="Location"/>.
/// </summary>
/// <returns>
/// The <see cref="Location"/> that is the start of the map.
/// </returns>
[Pure]
public static Location Initialize()
{
Location kitchen = null;
Location library = null;
Location office = null;
kitchen = Location.Create(
name: "The kitchen",
description: "You are in a messy kitchen.",
north: () => library
);
library = Location.Create(
name: "The old library",
description: "You are in a large library. The walls are lined with old, dusty books.",
north: () => office,
south: () => kitchen
);
office = Location.Create(
name: "The office",
description: "You are in an office",
south: () => library
);
return kitchen;
}
}
Answer: It's been a while since I've done .NET dev, but I'll take a stab at it.
Style
Line length. Some of the lines are over 150 characters in length. While I can't find a concrete guideline in Microsoft's Coding Conventions, I find this hard to read on a 13" screen. I suggest limiting lines to 100 character width by splitting where possible. Example:
private Location()
: this(
name: "[Empty]",
description: "[Empty]",
narration: "[Empty]",
north: () => null,
south: () => null,
east: () => null,
west: () => null
)
Minor style nitpick. Throughout you have null checking in the form: if (null == name). This sticks out as odd and distracting. Prefer the more common if (name == null).
OO/Design
Unnecessary member? The following:
/// <param name="description">
/// The description of the location.
/// </param>
/// <param name="narration">
/// The description of the location that is displayed to the game player.
is suspect. If narration is just a variation of description, then why is it needed as a member? What you've submitted doesn't contain enough usage context to say for sure, but it seems that narration is better suited as some kind of getter method that operates on description to return a string.
Too many arguments. Your constructor signature is immediately suspect:
private Location(
string name,
string description,
string narration,
Func<Location> north,
Func<Location> south,
Func<Location> east,
Func<Location> west)
From Robert Martin's book Clean Code:
"Functions should have a small number of arguments. No argument is best, followed by one, two, and three. More than three is very questionable and should be avoided with prejudice." (Martin, 288)
So how to fix? Wrap them in a new class (or some other structure). Assuming we can remove narration from the previous suggestion, this leaves us with name, description, and the four directions. The directions really stick out here, especially with all the repetition going on in Create(). Wrap it up.
Don't Repeat Yourself (DRY). As previously mentioned, Create() has a ton of repetition and is way too long. Repetitive code is a smell and should indicate that improvements can be made. All of your direction related code does the same thing - wrap it in a class and eliminate all the duplication.
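To make "wrap it up" concrete, here is one possible shape, sketched in Java (which maps almost token-for-token to C#). All names here are illustrative, not from the reviewed code, and `Supplier<String>` stands in for the original `Func<Location>`:

```java
import java.util.Arrays;
import java.util.List;
import java.util.StringJoiner;
import java.util.function.Supplier;

// Illustrative sketch: bundling the four direction delegates into one value
// object means the null-defaulting and the "Exits: ..." narration are each
// written once instead of four times in Create().
public class Exits {
    private static final String[] NAMES = {"North", "South", "East", "West"};
    private final List<Supplier<String>> dirs;

    public Exits(Supplier<String> north, Supplier<String> south,
                 Supplier<String> east, Supplier<String> west) {
        // Arrays.asList tolerates nulls; a null delegate simply means "no exit".
        this.dirs = Arrays.asList(north, south, east, west);
    }

    // One loop replaces the four near-identical ternaries in Create().
    public String narrationLine() {
        StringJoiner joiner = new StringJoiner(" ", "Exits: ", "");
        boolean any = false;
        for (int i = 0; i < dirs.size(); i++) {
            if (dirs.get(i) != null) {
                joiner.add(NAMES[i]);
                any = true;
            }
        }
        return any ? joiner.toString() : "";
    }

    public static void main(String[] args) {
        Exits exits = new Exits(() -> "library", null, null, () -> "hall");
        System.out.println(exits.narrationLine()); // prints: Exits: North West
    }
}
```

With this in place, Create() takes name, description, and a single Exits argument, which also resolves the too-many-arguments point above.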
Methods should be short and do one thing. Create() is too long. Extract helper methods (ref) as appropriate. There's a lot of cruft in setting strings; this should probably be abstracted somewhere with default initializers.
Misc
Code Contracts. I wasn't aware of this feature prior to this. I'm probably not experienced enough to have a full view but my initial impression is one of skepticism. They seem likely to introduce much clutter as well as possible performance issues at higher scale. Everything related to how you've used them currently can be handled by unit tests. I would recommend being mindful of potential future consequences when deciding to use such a feature. I'm not saying don't use them, but rather make sure you have a good understanding before going down that path.
There's more to be said but unfortunately I'm out of time. | {
"domain": "codereview.stackexchange",
"id": 22678,
"tags": "c#, adventure-game, immutability"
} |
Surface in the sun vs. surface in the shade | Question: What is the difference in temperature between a surface in the sun as supposed to one in the shade and how is it calculated?
For instance:
What will the temperature be if a metal sheet lies in the sun and then what will its temperature be if the same sheet lies in a shaded area?
Answer: As Ben Goldacre of Bad Science fame says, "I think you'll find it's a bit more complicated than that." It's not hard to dig up solar spectral power density data -- which depends heavily on latitude and season, of course -- but how a "metal" sheet behaves depends on tons of parameters.
Different metals (elemental or alloyed) have different spectral absorptivities as well as different volumetric heat capacities (how much energy per unit volume per unit temperature change). Further, for example, a roughened surface may have a higher net absorption.
In summary, you'll need a lot more specific info, including sheet thickness.
In the shade, any material will reach thermal equilibrium with the local atmosphere. | {
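To make "a lot more specific info" concrete, a steady-state energy balance on the sunlit sheet takes the form below (generic symbols, not from the question):

$$\alpha G=\varepsilon\sigma\left(T_s^4-T_{\mathrm{surr}}^4\right)+h\left(T_s-T_{\mathrm{air}}\right),$$

where $\alpha$ is the solar absorptivity, $G$ the incident irradiance, $\varepsilon$ the emissivity, $h$ the convection coefficient, and $T_s$ the sheet temperature. In the shade $G\to 0$ and the balance drives $T_s$ toward $T_{\mathrm{air}}$, which is the equilibrium statement above.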
"domain": "engineering.stackexchange",
"id": 947,
"tags": "temperature, solar-energy, radiation"
} |
Ways to improve my coding test FizzBuzz solution for a TDD role? | Question: I had an interview recently where I was asked to produce the traditional FizzBuzz solution:
Output a list of numbers from 1 to 100.
For all multiples of 3 and 5, the number is replaced with "FizzBuzz"
For all remaining multiples of 3, the number is replaced with "Fizz"
For all remaining multiples of 5, the number is replaced with "Buzz"
My solution was written in Java because of the role, but this was not a requirement. The interviewer was keen to see some evidence of TDD, so in that spirit I went about producing a FizzBuzz unit test:
public class FizzBuzzTest {
@Test
public void testReturnsAnArrayOfOneHundred() {
String[] result = FizzBuzz.getResultAsArray();
assertEquals(100, result.length);
}
@Test
public void testPrintsAStringRepresentationOfTheArray() {
String result = FizzBuzz.getResultAsString();
assertNotNull(result);
assertNotSame(0, result.length());
assertEquals("1, 2", result.substring(0, 4));
}
@Test
public void testMultiplesOfThreeAndFivePrintFizzBuzz() {
String[] result = FizzBuzz.getResultAsArray();
// Check all instances of "FizzBuzz" in array
for (int i = 1; i <= 100; i++) {
if ((i % 3) == 0 && (i % 5) == 0) {
assertEquals("FizzBuzz", result[i - 1]);
}
}
}
@Test
public void testMultiplesOfThreeOnlyPrintFizz() {
String[] result = FizzBuzz.getResultAsArray();
// Check all instances of "Fizz" in array
for (int i = 1; i <= 100; i++) {
if ((i % 3) == 0 && !((i % 5) == 0)) {
assertEquals("Fizz", result[i - 1]);
}
}
}
@Test
public void testMultiplesOfFiveOnlyPrintBuzz() {
String[] result = FizzBuzz.getResultAsArray();
// Check all instances of "Buzz" in array
for (int i = 1; i <= 100; i++) {
if ((i % 5) == 0 && !((i % 3) == 0)) {
assertEquals("Buzz", result[i - 1]);
}
}
}
}
My resulting implementation became:
public class FizzBuzz {
private static final int MIN_VALUE = 1;
private static final int MAX_VALUE = 100;
private static String[] generate() {
List<String> items = new ArrayList<String>();
for (int i = MIN_VALUE; i <= MAX_VALUE; i++) {
boolean multipleOfThree = ((i % 3) == 0);
boolean multipleOfFive = ((i % 5) == 0);
if (multipleOfThree && multipleOfFive) {
items.add("FizzBuzz");
}
else if (multipleOfThree) {
items.add("Fizz");
}
else if (multipleOfFive) {
items.add("Buzz");
}
else {
items.add(String.valueOf(i));
}
}
return items.toArray(new String[0]);
}
public static String[] getResultAsArray() {
return generate();
}
public static String getResultAsString() {
String[] result = generate();
String output = "";
if (result.length > 0) {
output = Arrays.toString(result);
// Strip out the brackets from the result
output = output.substring(1, output.length() - 1);
}
return output;
}
public static final void main(String[] args) {
System.out.println(getResultAsString());
}
}
Self-examining what I originally submitted:
Early on I decided to merge my "multiple of" calculation into the generate() method to avoid over-engineering, which I now think was a mistake
The separate getResultAsArray/generate methods were clearly OTT.
The getResultAsString could also be merged with the main() method, since one just delegates to the other.
@APC on StackOverflow pointed out that my approach is not scalable, perhaps I should have better separated the logic from the string building?
I'm still fairly inexperienced with TDD and I feel this may have let me down in this case. I'm looking for other ways I might have improved on this approach, particularly with regard to TDD practices?
This is cross-referenced from my SO question, since you can't move SO questions to Code Review, and someone pointed out that this is a better forum for this type of request. In retrospect, I should have come here in the first place!
Based on the very useful suggestions on StackOverflow, I've reworked my answer to something I now consider would have been more "TDD-friendly". This is my second attempt at a solution:
Separated the FizzBuzz logic from the output generation to make the solution more scalable
Just one assertion per test, to simplify them
Only testing the most basic unit of logic in each case
A final test to confirm the string building is also verified
The code:
public class FizzBuzzTest {
@Test
public void testMultipleOfThreeAndFivePrintsFizzBuzz() {
assertEquals("FizzBuzz", FizzBuzz.getResult(15));
}
@Test
public void testMultipleOfThreeOnlyPrintsFizz() {
assertEquals("Fizz", FizzBuzz.getResult(93));
}
@Test
public void testMultipleOfFiveOnlyPrintsBuzz() {
assertEquals("Buzz", FizzBuzz.getResult(10));
}
@Test
public void testInputOfEightPrintsTheNumber() {
assertEquals("8", FizzBuzz.getResult(8));
}
@Test
public void testOutputOfProgramIsANonEmptyString() {
String out = FizzBuzz.buildOutput();
assertNotNull(out);
assertNotSame(0, out.length());
}
}
public class FizzBuzz {
private static final int MIN_VALUE = 1;
private static final int MAX_VALUE = 100;
public static String getResult(int input) {
boolean multipleOfThree = ((input % 3) == 0);
boolean multipleOfFive = ((input % 5) == 0);
if (multipleOfThree && multipleOfFive) {
return "FizzBuzz";
}
else if (multipleOfThree) {
return "Fizz";
}
else if (multipleOfFive) {
return "Buzz";
}
return String.valueOf(input);
}
public static String buildOutput() {
StringBuilder output = new StringBuilder();
for (int i = MIN_VALUE; i <= MAX_VALUE; i++) {
output.append(getResult(i));
if (i < MAX_VALUE) {
output.append(", ");
}
}
return output.toString();
}
public static final void main(String[] args) {
System.out.println(buildOutput());
}
}
Answer: public class FizzBuzzTest {
@Test
public void testMultipleOfThreeAndFivePrintsFizzBuzz() {
assertEquals("FizzBuzz", FizzBuzz.getResult(15));
}
@Test
public void testMultipleOfThreeOnlyPrintsFizz() {
assertEquals("Fizz", FizzBuzz.getResult(93));
}
This doesn't feel like a TDD test. If you wrote this in a test-first manner I'd expect you to start with 3, not 93. Not that testing 93 is bad, but I'd expect a test for the trivial cases as well.
@Test
public void testMultipleOfFiveOnlyPrintsBuzz() {
assertEquals("Buzz", FizzBuzz.getResult(10));
}
@Test
public void testInputOfEightPrintsTheNumber() {
assertEquals("8", FizzBuzz.getResult(8));
}
I'd expect more tests in general. I'd probably consistently test the first 15 numbers. I'd store the correct results in an array and use a loop to assert accuracy in each case.
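A sketch of that table-driven test. To keep the snippet runnable on its own it uses a plain main() and inlines the getResult logic rather than depending on JUnit; in the real test class the loop body would sit inside an @Test method calling FizzBuzz.getResult:

```java
// Table-driven check of the first 15 FizzBuzz results. The getResult logic
// is inlined here (same behavior as the reviewed code) so the sketch
// compiles and runs standalone.
public class FizzBuzzFirstFifteenTest {
    static String getResult(int input) {
        boolean multipleOfThree = input % 3 == 0;
        boolean multipleOfFive = input % 5 == 0;
        if (multipleOfThree && multipleOfFive) {
            return "FizzBuzz";
        } else if (multipleOfThree) {
            return "Fizz";
        } else if (multipleOfFive) {
            return "Buzz";
        }
        return String.valueOf(input);
    }

    static final String[] EXPECTED = {
        "1", "2", "Fizz", "4", "Buzz", "Fizz", "7", "8", "Fizz",
        "Buzz", "11", "Fizz", "13", "14", "FizzBuzz"
    };

    public static void main(String[] args) {
        for (int i = 1; i <= EXPECTED.length; i++) {
            String actual = getResult(i);
            if (!EXPECTED[i - 1].equals(actual)) {
                throw new AssertionError(
                    "n=" + i + ": expected " + EXPECTED[i - 1] + ", got " + actual);
            }
        }
        System.out.println("All 15 cases pass"); // prints: All 15 cases pass
    }
}
```

The expected-values array doubles as documentation of the spec, and adding a new case is a one-line change.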
@Test
public void testOutputOfProgramIsANonEmptyString() {
String out = FizzBuzz.buildOutput();
assertNotNull(out);
assertNotSame(0, out.length());
}
This test seems insufficient. You check for a non-null, non-empty string, but have no checks to make sure the actual output makes any sense.
}
public class FizzBuzz {
private static final int MIN_VALUE = 1;
private static final int MAX_VALUE = 100;
I'd make these parameters to buildOutput, not class constants.
public static String getResult(int input) {
boolean multipleOfThree = ((input % 3) == 0);
boolean multipleOfFive = ((input % 5) == 0);
I think you can probably get rid of some of those parentheses.
if (multipleOfThree && multipleOfFive) {
return "FizzBuzz";
}
else if (multipleOfThree) {
return "Fizz";
}
else if (multipleOfFive) {
return "Buzz";
}
return String.valueOf(input);
I'd put this in an else; I think it gives a clearer idea of what you are doing and looks more parallel with the rest of the function.
}
public static String buildOutput() {
StringBuilder output = new StringBuilder();
for (int i = MIN_VALUE; i <= MAX_VALUE; i++) {
output.append(getResult(i));
if (i < MAX_VALUE) {
output.append(", ");
}
}
return output.toString();
}
public static final void main(String[] args) {
System.out.println(buildOutput());
}
} | {
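For completeness, here is getResult with the suggested else applied, wrapped in a small class so the sketch compiles and runs standalone:

```java
// The "put it in an else" suggestion, applied: every outcome of getResult
// now sits in a parallel branch of the same if/else chain.
public class FizzBuzzElse {
    public static String getResult(int input) {
        boolean multipleOfThree = input % 3 == 0;
        boolean multipleOfFive = input % 5 == 0;
        if (multipleOfThree && multipleOfFive) {
            return "FizzBuzz";
        } else if (multipleOfThree) {
            return "Fizz";
        } else if (multipleOfFive) {
            return "Buzz";
        } else {
            return String.valueOf(input);
        }
    }

    public static void main(String[] args) {
        System.out.println(getResult(15)); // prints: FizzBuzz
        System.out.println(getResult(8));  // prints: 8
    }
}
```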
"domain": "codereview.stackexchange",
"id": 1440,
"tags": "java, unit-testing, interview-questions, fizzbuzz"
} |
Difference between Non-Polar and Dipole moment $\vec\mu$=0 | Question: Is there any difference between a molecule having $\vec\mu=0$ and being Non-Polar?
Answer: $\vec\mu$ is just the electric dipole moment. However, a molecule can be polar with $\vec\mu=0$, as polarity has to do with charge separation, so a particle with any form of multipole moment is polar.
In chemistry, polarity refers to a separation of electric charge leading to a molecule or its chemical groups having an electric dipole or multipole moment.
Molecules like methane, carbon dioxide, and perchlorate have $\vec\mu=0$, but still have some level of charge separation, making them polar (carbon dioxide has a quadrupole moment; for tetrahedral species like methane and perchlorate, the first nonvanishing moment is of even higher order).
Actually, all molecules are polar by this definition, just that many aren't polar enough for this to matter. Generally, when we call a molecule "polar", we are talking about only $\vec\mu$. | {
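A quick check for carbon dioxide along its molecular axis (an illustrative point-charge model): put $-\delta$ on each oxygen at $z=\pm d$ and $+2\delta$ on the carbon at $z=0$. The dipole moment cancels while the quadrupole moment survives:

$$\mu_z=\sum_i q_i z_i=(-\delta)(d)+(-\delta)(-d)+(2\delta)(0)=0,\qquad Q_{zz}\propto\sum_i q_i z_i^2=-2\delta d^2\neq 0.$$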
"domain": "physics.stackexchange",
"id": 7111,
"tags": "physical-chemistry, molecules, dipole-moment"
} |
If $A$ is context-free then $A^*$ is regular | Question: I am currently studying for my exam and I am having trouble to solve this question:
Right or wrong: If $A$ is context-free then $A^*$ is regular.
I think it's wrong because if $A$ is context-free then $A$ can be a non-regular language, and the non-regular languages are not closed under the Kleene star operation (at least I think so). I am not sure how to write this in a more formal way.
Maybe like this?
Let $A=\{a^nb^n \mid n \in \mathbb{N}\}$. Then we know that $A$ is non-regular and context-free. However, I'm not sure what $A^*$ is.
Answer: Let $A=\{a^nb^n \mid n \in \mathbb{N}\}$. Then we know that $A$ is non-regular and context-free. Also, we can see that $A^*\cap a^*b^*=A$. Since $a^*b^*$ is a regular expression, we know that the language it denotes is regular. Let's assume that $A^*$ is regular.
The regular languages are closed under intersection. Therefore $A^*\cap a^*b^*$ must also be regular (because we assumed that $A^*$ is regular). This would imply that $A$ is regular, because $A^*\cap a^*b^*=A$. This is a contradiction, because we know that $A$ is not regular. Therefore $A^*$ can't be regular.
q.e.d.
"domain": "cs.stackexchange",
"id": 16660,
"tags": "formal-languages, regular-languages, context-free"
} |
What uses the Grasp Message? | Question:
I'd like to use the PR2_grasp_adjust package and associated service call GraspAdjust. However, I can't figure out what to do with the Grasp message once the service returns. Is there a particular service that takes Grasp messages?
Originally posted by David Lu on ROS Answers with karma: 10932 on 2012-04-03
Post score: 0
Answer:
A Grasp can be passed on to the PickupAction, serviced by the object_manipulator. You can see an example of calling that action here:
http://www.ros.org/wiki/pr2_tabletop_manipulation_apps/Tutorials/Writing%20a%20Simple%20Pick%20and%20Place%20Application
Note that, unlike the example, you will be populating the desired_grasps field of the PickupGoal with your computed grasp.
Another example can be found in the pr2_interactive_manipulation package, in the file that Adam mentioned above.
Originally posted by Matei Ciocarlie with karma: 586 on 2012-04-03
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 8840,
"tags": "ros, object-manipulation"
} |
NP-completeness of a spanning tree problem | Question: I was reviewing some NP-complete problems on this site, and I came across one interesting problem from
NP completeness proof of a spanning tree problem
In this problem, I am interested in the original version, in which the leaf set is precisely $S$. The author said that this can be proved by a reduction from Hamiltonian Path. However, I still cannot figure it out. Could anybody help me with the details?
Answer: Seems this question has been bumped by the system because it has no answer yet:
The idea JeffE proposed is to reduce the Hamiltonian Path problem (a known NP-complete problem) to this version of the spanning tree problem.
This is not hard to do: given a graph and two nodes between which we want to find a Hamiltonian path, we can set $S$ to the set containing those two nodes and ask for a spanning tree with those nodes as leaves. Since a tree with just two leaves is a path, and a spanning path (one that goes through all the nodes) is a Hamiltonian path, we can see how it is possible to use an algorithm for the spanning tree problem to solve any Hamiltonian Path problem.
This proves that the spanning tree problem is NP-hard. Since the problem is also clearly in NP, we can thus conclude that it is NP-complete.
"domain": "cs.stackexchange",
"id": 228,
"tags": "complexity-theory, np-complete, graphs, spanning-trees"
} |
Will electroplating a ventilated brake rotor with copper improve its cooling capacity and maximize its lifespan? | Question: Since copper is excellent at dissipating heat and does not rust, I am surprised that brake manufacturers do not electroplate cast iron/steel brake rotors with a thin layer of copper (~0.5 mm). There are brake manufacturers who offer brake rotors with cadmium- or zinc-plated coatings at an additional cost, but these coatings usually do not last long, especially in areas that use a lot of rock salt.
I think copper plating would be the most ideal coating for ventilated brake rotors since it should keep the cooling vanes from rusting and becoming clogged with rust. Clogged cooling vanes reduce the rotor's cooling capacity resulting in higher temperatures, faster rotor wear, premature warping, and a shortened lifespan.
Moreover, since copper is very good at dissipating heat, air flowing through copper-plated cooling vanes should draw away more heat from the brake rotor, resulting in a cooler ventilated brake rotor, maximizing or perhaps even extending its intended lifespan.
Also, since there should be almost no rust on a copper-plated brake rotor, removing a worn-out, copper-plated brake rotor should be no problem. Anyone who has worked on brakes knows how time-consuming it is to remove a rusted-on brake rotor or drum.
The brake manufacturer should not electroplate the friction areas (as shown in the picture below) since the high pressure and friction from contact with the brake pads would most likely warp the copper coating, but all the other surfaces would have a copper coating, most importantly the surfaces of the cooling vanes.
Will electroplating a ventilated brake rotor with copper improve its cooling capacity and maximize its lifespan?
Answer: Copper is better than cast iron at conducting heat, and is used as a heat spreader in some high power density electronics products. However, I don't believe that it is better at dissipating heat from a surface. You might get marginally better heat dissipation if you made the whole rotor out of copper, but it'd be weaker, heavier, more expensive, and you'd have to reengineer your brake pads.
Any time you've got a hot thing that needs to be cooled by convection, the limitation is the interface between the air and the object to be cooled. A thin copper plating -- or even making the whole rotor out of copper -- isn't going to help this. | {
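One way to quantify this: the Biot number compares internal conduction to surface convection,

$$\mathrm{Bi}=\frac{hL}{k}.$$

With rough, order-of-magnitude numbers for a rotor -- $h\sim 10\text{–}100\ \mathrm{W/m^2K}$ for air, thickness $L\sim 10^{-2}\ \mathrm{m}$, and $k\approx 50\ \mathrm{W/mK}$ for cast iron -- we get $\mathrm{Bi}\ll 1$, so the rotor is already nearly isothermal. Raising $k$ toward copper's $\approx 400\ \mathrm{W/mK}$ therefore changes little; heat rejection stays limited by $hA\,\Delta T$ at the air interface.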
"domain": "engineering.stackexchange",
"id": 2760,
"tags": "mechanical-engineering, materials, automotive-engineering"
} |
Blackbody radiation and the quantization of energy? | Question: If the energy spectrum is continuous, a blackbody would radiate shorter wavelengths with higher intensity, with no upper limit (the "ultraviolet catastrophe"), at any temperature.
How can the correct behaviour of a blackbody be explained by quantization of energy? I know that emission from a blackbody occurs because its molecules receive energy, and these molecules emit waves which can be in any mode.
I have read that, according to non-quantum theory, each mode of such a standing wave is formed with equal probability. Is that correct (according to non-quantum theory, that is)?
Does each mode of such a standing wave contribute to the intensity of a certain wavelength?
If the energy were not quantized, would the molecules "collect" energy (i.e. heat from the surroundings) until they have enough to release another wave in a higher mode (and would this go on and on, leading to the "UV catastrophe")?
Answer: The spectrum is continuous, but the energy (for each frequency of the spectrum) is emitted in discrete chunks. That is the energy stored in each mode is $n\hbar\omega$ rather than $\propto |E|^2$, which gives very different results when substituted into the Boltzmann distribution. The rest is math. | {
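Spelling out "the rest is math": with Boltzmann weights $e^{-E/k_BT}$, a continuous energy per mode gives equipartition, while energies restricted to $n\hbar\omega$ give the Planck form:

$$\left<E\right>_{\mathrm{classical}}=\frac{\int_0^\infty E\,e^{-E/k_BT}\,\mathrm{d}E}{\int_0^\infty e^{-E/k_BT}\,\mathrm{d}E}=k_BT,\qquad \left<E\right>_{\mathrm{quantum}}=\frac{\sum_{n=0}^\infty n\hbar\omega\,e^{-n\hbar\omega/k_BT}}{\sum_{n=0}^\infty e^{-n\hbar\omega/k_BT}}=\frac{\hbar\omega}{e^{\hbar\omega/k_BT}-1}.$$

For $\hbar\omega\gg k_BT$ the quantum average is exponentially suppressed: the high-frequency modes "freeze out," which is exactly what removes the ultraviolet catastrophe.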
"domain": "physics.stackexchange",
"id": 92867,
"tags": "quantum-mechanics, thermal-radiation"
} |
Why are crystals so useful for quantum isolation? | Question: Some implementations of quantum gates (in the hopes of building a quantum computer one day) use crystals to isolate the qubits (to prevent decoherence). Why is a crystal so much better than an amorphous solid? My reasoning is that crystals tend to pack atoms more tightly than other substances, which means that lattice vibrations are less of an issue (vibrational excitations get "frozen out"). On the other hand, atoms near defects or atoms in a non-crystal are more loosely bound. Is this correct?
Crystals also contain an ordered arrangement of atoms. Does this perfect ordering help with isolation as well?
Answer: A crystal is one of the macroscopic manifestations of quantum mechanics, like superconductivity and superfluidity. It is a coherent whole.
In principle it could be described by a single wave function for the whole crystal. There are wavefunctions modeled for the electrons in an ordered lattice.
As quantum computing is based on the coherence of the wave functions it seems a good idea to start with a coherent environment before introducing changes to be used in computing.
In contrast, amorphous material can only be described statistically, because the phases are lost even though the basic units might be small crystals. | {
"domain": "physics.stackexchange",
"id": 18647,
"tags": "solid-state-physics, quantum-computer, crystals"
} |
Constrained motion of the vertices of a quadrilateral | Question: There is this square of side length $a$. Its opposite vertices are being pulled in opposite directions with a constant velocity $u$. The question here is: what is the velocity of the remaining pair of opposite vertices?
I tried to use the rules of constrained motion and set the velocity components of the two endpoints of each edge equal along that edge, since there is never any relative velocity along that line. I ended up getting that the velocity of the remaining two vertices would also be $u$. But I then realized that I had completely neglected the role of the velocity of the opposite vertex (whose velocity is given) in calculating the velocity of the other pair of vertices. Then I tried to use the center-of-mass concept, since the center of mass is stationary, but was confused about how to plug in the various values.
I'm really confused and would appreciate some help with this question. Thanks.
Answer: Here is how to solve problems:
Always draw a sketch first that includes all relevant information.
Find what is invariant (remains constant) and what may be symmetric. Here the vertices of the square move along the diagonal lines.
Rephrase the problem in terms of the stated symmetries. Here pretend the diagonals are immovable walls that are 90° apart and draw only one side as it slides against the walls. One end with $u$ and the other with $v$.
Use geometry (hint similar triangles) to find $v$ as a function of $u$. | {
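Carrying step 4 through, so you can check your own result (the notation here is mine): let $p$ and $q$ be the half-diagonals of the rhombus, with the side length $a$ fixed. Then

$$p^2+q^2=a^2 \quad\Rightarrow\quad p\dot p+q\dot q=0 \quad\Rightarrow\quad v=|\dot q|=\frac{p}{q}\,u=u\cot\theta,$$

where $\dot p=u$ and $\theta$ is the angle a side makes with the pulled diagonal. For the initial square $p=q$, so the other pair of vertices starts at speed $u$ (moving inward), and the speed grows as the pulled diagonal lengthens.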
"domain": "physics.stackexchange",
"id": 60533,
"tags": "homework-and-exercises, newtonian-mechanics"
} |
Link between wave functions $\psi$ and state vectors $|\psi \rangle$ in quantum mechanics? | Question: I'm currently following a course in quantum mechanics that uses Griffiths's textbook.
Griffiths shows that wave functions are members of a Hilbert space. Since this is an abstract vector space, we can assign a state vector $|{\psi}>$ to each wave function $\psi(x)$. Since our course on linear algebra runs pretty deep, this I understand.
Griffiths subsequently defines the inner product between two state vectors $<\phi|\psi> = \int\phi^*(x)\psi(x)dx$, and he derives all sorts of fun things from this inner product; that the function $\psi(x)$ corresponding to $|\psi>$ is the wave function as seen previously in wave mechanics, and that its Fourier transform determines the probability density of momentum.
At this point, something happens in both Griffiths and my course that I don't quite comprehend. Both my instructor and the book mention that at this point, the vectors will take precedence above the functions; all quantum mechanical information is from now on fully and completely recorded into the vector. The functions are simply a representation of this vector; one of the many possible representations. Am I correct in saying this?
But Griffiths's proof that the coefficients in the position basis of $|\psi>$ form the values of the function $\psi(x)$ is completely based on the definition $<\phi|\psi> = \int\phi^*(x)\psi(x)dx$. How should I then read this? Does he assume that there is a function representation $\psi(x)$ corresponding to the state vector $|\psi>$ (which is within his rights, since $L^2$ is isomorphic to the vector space), and does he subsequently derive the properties of this function?
To me it just seems really strange that we have these abstract state vectors, only to define the inner product in terms of one very specific representation of that vector.
Edit 6 February 2019: I can (for obvious reasons) not unmark this question as a duplicate, but I can "edit to explain why my question hasn't been answered before." My question is not about whether the ket psi and the wave function are the exact same--I have written in my question that I understand that the wave function is a representation of the ket (in other words, I've used the answer to the duplicate question to ask my own question). My question is why in so many derivations, it appears that the wave function formulation takes precedence (in e.g. the definition of the inner product). In the answers below, it has been explained that there is no precedence, and that this is simply a shortcut in the theory.
Answer:
The functions are simply a representation of this vector; one of the many possible representations. Am I correct in saying this?
Yes. You are probably familiar with the position basis $\left|x\right>$ which satisfies
$$\hat{x}\left|x\right>=x\left|x\right>$$
The wavefunction is then merely a representation of the state vector in the position basis
$$\psi\left(x\right)=\left<x| \psi\right>$$
Try to use the completeness relation given below and the fact that $\left<x|k\right>=\frac{1}{\sqrt{2\pi}}e^{ikx}$ to show that the momentum representation of the state vector is the Fourier transform of the wavefunction.
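Carrying that exercise out explicitly, as a quick sketch (writing $\tilde{\psi}\left(k\right)$ for the momentum-space representation):
$$\tilde{\psi}\left(k\right)=\left<k|\psi\right>=\int{\rm d}x\left<k|x\right>\left<x|\psi\right>=\frac{1}{\sqrt{2\pi}}\int{\rm d}x\,e^{-ikx}\psi\left(x\right)$$
which is indeed the Fourier transform of the wavefunction, since $\left<k|x\right>=\left<x|k\right>^{\ast}$.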
But Griffith's proof that the coefficients in the position basis of $\left|\psi\right>$ form the values of the function $\psi(x)$ is completely based on the definition $\left<\phi|\psi\right> = \int\phi^{\ast}\left(x\right)\psi\left(x\right){\rm d}x$. How should I then read this? Does he assume that there is a function representation $\psi(x)$ corresponding to the state vector $\left|\psi\right>$ (which is in his right, since $L^{2}$ is isomorphic to the vector space), to subsequently derive the properties of this function?
You don't have to define the inner product like this. The inner product is an invariant, and it does not depend on the specific choice of basis. The specific formula from Griffith's is just a consequence of the basis $\{\left|x\right>\}$ being complete. To elaborate, just use the completeness relation
$$\hat{I}=\int{\rm d}x\left|x\right>\left<x\right|$$
to get
$$\left<\phi|\psi\right>=\left<\phi\right|\hat{I}\left|\psi\right>=\left<\phi\right|\int{\rm d}x\left|x\right>\left<x|\psi\right>=\\=\int{\rm d}x\left<\phi|x\right>\left<x|\psi\right>=\int{\rm d}x\phi^{\ast}\left(x\right)\psi\left(x\right)$$
Nevertheless, there is something beneficial about the position basis: in it the Hamiltonian operator assumes a simple form, which is the usual Schrödinger equation in terms of wavefunctions. | {
"domain": "physics.stackexchange",
"id": 53683,
"tags": "quantum-mechanics, hilbert-space, wavefunction"
} |
Duration of 1 second during while(ros::ok()) | Question:
Hi All,
Below is the snippet of the code I have.
int main()
{
//Declaration part
ros::Rate rate(50);
while(ros::ok())
{
//I need to run one condition or function at each second
//Modify the message based on the condition in previous line
//Publish the message
rate.sleep();
ros::spinOnce();
}
}
Can anybody please tell me how I can check for some condition each second?
Thank you,
KK
Originally posted by kk2105 on ROS Answers with karma: 262 on 2018-07-04
Post score: 0
Answer:
You are looking for Timers, see the wiki
#include <ros/ros.h>

void callback(const ros::TimerEvent&)
{
  ROS_INFO("Callback triggered");
}

int main(int argc, char **argv)
{
  ros::init(argc, argv, "name");
  ros::NodeHandle n;
  ros::Timer timer = n.createTimer(ros::Duration(1.0), callback);
  ros::spin(); // keep the node alive, otherwise the timer callbacks are never processed
  return 0;
}
Here the callback will be triggered every second. You can put your condition or function inside the callback to have it called every second.
Originally posted by Delb with karma: 3907 on 2018-07-04
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by kk2105 on 2018-07-05:
@Delb Thanks for suggestion.. I will check this option and get back to you .. | {
"domain": "robotics.stackexchange",
"id": 31179,
"tags": "ros, rostime, ros-indigo"
} |
What does memorylessness mean as a postulate of special relativity? | Question: I was reading the wiki page on special relativity postulates. And wiki says,
The two-postulate basis for special relativity is the one historically used by Einstein, and it remains the starting point today. As Einstein himself later acknowledged, the derivation tacitly makes use of some additional assumptions, including spatial homogeneity, isotropy, and memorylessness.
I understand homogeneity is necessary to be able choose the origin anywhere and isotropy is necessary to be able to choose directions of axis of the coordinate system randomly.
What does memorylessness mean?
Answer:
what does memorylessness mean?
Essentially, it means that the length of a rod and the rate of a clock depend on their current state only.
The alternative would require that, e.g., two otherwise identical clocks at rest with respect to each other may run at different rates if their histories differed. | {
"domain": "physics.stackexchange",
"id": 19427,
"tags": "special-relativity, terminology"
} |
Run a command at every step of a relaxed scan | Question: In Gaussian, I'm trying to do a relaxed scan and for each step obtain the volume. However, when I use the volume keyword, it seems to only output the volume for the initial and final steps. Is there a way to ensure a keyword runs for every step of a scan? A similar question (How to obtain the Raman spectrum along every coordinate of a scan in Gaussian?) was asked earlier this year, but I want to confirm whether Gaussian doesn't allow any other keywords to run during a scan.
Answer: Guess what: It is possible (in your specific case).
I was wrong. At least a little bit. Most of what is described below is true for the general purpose, and I will not change that part of the answer.
However, in the specific case where you only want to compute the volume, you are able to do that because I think it is part of the population analysis module. You can actually request a population analysis at every step of the optimisation, but be careful what you wish for (running NBO will take forever). That way the optimisation will be a little slower, but that is the most convenient way. Therefore the solution to your very specific problem is adding the following line to the route section:
pop=always volume
Be aware, that the output will be an even bigger nightmare than before.
Here is a complete input of the $\ce{H3B-NH3}$ I have used previously.
%chk=df-bp86-d3.def2svp.chk
%nproc=4
%mem=16000MB
#P BP86/def2SVP/W06 ! Density Functional Theory Calculation
DenFit ! Use density fitting
empiricaldispersion=GD3 ! Use Grimme Dispersion
opt(MaxCycle=100,Loose,ModRed) ! Use more optimisation cycles (Loose only for speed)
scf(xqc,MaxConventionalCycle=500) ! If necessary, resort to quadratic convergence
int(ultrafinegrid) ! Larger Grid
scrf(pcm,solvent=water) ! Use solvent
gfinput gfoldprint iop(6/7=3) ! For molden
symmetry(loose) ! Loosen symmetry requirements
pop=always ! Perform a population analysis at every step
Volume ! Report vdW volume
Water H3B-NH3 DF-BP86-D3(PCM)/def2-SVP
Scan
0 1
N 0.00000 0.00000 -0.80852
H 0.92634 0.00000 -1.36025
H -0.46317 -0.80223 -1.36025
H -0.46317 0.80223 -1.36025
B 0.00000 0.00000 0.19148
H 0.52348 0.90670 0.85928
H -1.04696 0.00000 0.85928
H 0.52348 -0.90670 0.85928
B 1 5 S 5 0.2
For comparison (I'm not 100% sure I used the right values):
Energies
Step1 SCF Done: E(RB-P86) = -82.7531561505 A.U. after 8 cycles
Step2 SCF Done: E(RB-P86) = -83.0511089272 A.U. after 7 cycles
Step3 SCF Done: E(RB-P86) = -83.1459883448 A.U. after 7 cycles
Step4 SCF Done: E(RB-P86) = -83.1661412429 A.U. after 8 cycles
Step5 SCF Done: E(RB-P86) = -83.1606918387 A.U. after 8 cycles
Step6 SCF Done: E(RB-P86) = -83.1485268431 A.U. after 8 cycles
Volumes
Step1 Molar volume = 491.032 bohr**3/mol ( 43.819 cm**3/mol)
Step2 Molar volume = 415.148 bohr**3/mol ( 37.047 cm**3/mol)
Step3 Molar volume = 401.060 bohr**3/mol ( 35.790 cm**3/mol)
Step4 Molar volume = 406.785 bohr**3/mol ( 36.301 cm**3/mol)
Step5 Molar volume = 514.428 bohr**3/mol ( 45.907 cm**3/mol)
Step6 Molar volume = 453.278 bohr**3/mol ( 40.450 cm**3/mol)
The energies differ only very slightly (from the approach below), but the volumes do differ at some points. Now that could be that I just used the wrong population analysis, or something else. I don't have the time to investigate. (Leave a comment or edit if you find out.)
I've again looked into that, but without some serious hacking via the external keyword interface, I don't think it is possible. I also couldn't find a suitable IOp to control what gets punched to the checkpoint file and when, and then use the result from that for a property/analysis run.
The punch keyword is equally disappointing, only outputting the last state.
While the manual suggests that you can access a given step of a scan in the checkpoint file, I found this not to be true. The documentation is fuzzy on that part and it also warns that not everything is stored. Since it is a binary file, there is also no real way of checking what is contained.
I guess the real reason for the existence of that option is to restart a calculation that went rogue with different parameters from an intermediate calculation.
I was equally unable to find a way to set a given redundant coordinate to a specific value and then freezing it in order to perform a partial optimisation at that point, and then manipulate that coordinate while reading in the guess and perform another calculation.
Even with the new and generally awesome way to define general internal coordinates in Gaussian 16 (GIC) I was unsuccessful.
From all that trying, the probably easiest (= lazy), though not computationally most efficient, way I came up with to do what you want is to perform the scan, extract the (p'optimised) coordinates and run a series of single point calculations.
A slightly less computationally heavy approach would probably be to perform an initial scan on a very low level (like pm6, if your molecule is well-behaved), and then do p'optimisations on the resulting steps. Usually pm6 is able to offer reasonable starting geometries, which in many other cases may also help you reduce computational effort. So this is probably easy and quite efficient.
There are various options to extract geometries from a scan, the following question deals exactly with that: Extract all structures of Gaussian 09 molecular dynamics calculation using babel?
There certainly are more efficient ways to do this kind of task, but in the end, most of them require manual labour.
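For the record, pulling the 'SCF Done' energies and 'Molar volume' values back out of the log files can also be scripted; here is a minimal Python sketch (the line formats assumed are the ones shown in the comparison blocks below, and the file name is of course whatever your job produced):

```python
import re

def scan_results(log_text):
    """Collect SCF energies (hartree) and molar volumes (bohr**3/mol)
    from Gaussian log output, in order of appearance."""
    energies = [float(e) for e in re.findall(
        r"SCF Done:\s+E\(\S+\)\s+=\s+(-?\d+\.\d+)", log_text)]
    volumes = [float(v) for v in re.findall(
        r"Molar volume\s+=\s+(\d+\.\d+)\s+bohr\*\*3/mol", log_text)]
    return energies, volumes

# usage (hypothetical file name):
# energies, volumes = scan_results(open("df-bp86-d3.def2svp.rescan.log").read())
```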
N.B.: I cannot believe it, but the pm6/popt approach is actually a halfway decent application of compound jobs. Although I have to admit I have not tried it yet.
It works as the following example demonstrates. (I used Gaussian 16 Rev. A.03, but it should work with Gaussian 09 Rev D.01, too. Maybe even earlier.)
I set up an initial scan on the pm6 level of theory for $\ce{H3B-NH3}$ and scanned the $\ce{B-N}$ bond:
%chk=pm6.scan.chk
%nproc=2
%mem=8000MB
#P PM6
OPT(MaxCycle=100)
SYMMETRY(loose)
GEOM(ModRedundant)
Volume
title
0 1
N 0.000000 0.000000 -0.764994
H 1.019690 0.000000 -1.164250
H -0.509845 -0.883078 -1.164250
H -0.509845 0.883078 -1.164250
B -0.000000 0.000000 0.235006
H 0.509845 0.883078 0.634262
H -1.019691 0.000000 0.634262
H 0.509845 -0.883078 0.634262
B 1 5 S 5 0.2
I have extracted the pre-optimised geometries with Chemcraft and added a header section and made use of the --Link1-- separator. After the first optimisation I read the MO for a slight speedup. I'm sure some of this can be further automatised, but for the sake of demonstrating, I did it by hand. The commented input:
%chk=df-bp86-d3.def2svp.rescan.chk
#P BP86/def2SVP/W06 ! Density Functional Theory Calculation
DenFit ! Use density fitting
empiricaldispersion=GD3 ! Use Grimme Dispersion
opt(MaxCycle=100,Loose,ModRed) ! Use more optimisation cycles (Loose only for speed)
scf(xqc,MaxConventionalCycle=500) ! If necessary, resort to quadratic convergence
int(ultrafinegrid) ! Larger Grid
scrf(pcm,solvent=water) ! Use solvent
gfinput gfoldprint iop(6/7=3) ! For molden
symmetry(loose) ! Loosen symmetry requirements
Volume ! Report vdW volume
Water H3B-NH3 DF-BP86-D3(PCM)/def2-SVP
Step 1
0 1
N 0.00000 0.00000 -0.80852
H 0.92634 0.00000 -1.36025
H -0.46317 -0.80223 -1.36025
H -0.46317 0.80223 -1.36025
B 0.00000 0.00000 0.19148
H 0.52348 0.90670 0.85928
H -1.04696 0.00000 0.85928
H 0.52348 -0.90670 0.85928
B 1 5 F
--Link1--
%chk=df-bp86-d3.def2svp.rescan.chk
#P BP86/def2SVP/W06 ! Density Functional Theory Calculation
DenFit ! Use density fitting
empiricaldispersion=GD3 ! Use Grimme Dispersion
guess(read) ! Use the MO from the previous step
opt(MaxCycle=100,Loose,ModRed) ! Use more optimisation cycles (Loose only for speed)
scf(xqc,MaxConventionalCycle=500) ! If necessary, resort to quadratic convergence
int(ultrafinegrid) ! Larger Grid
scrf(pcm,solvent=water) ! Use solvent
gfinput gfoldprint iop(6/7=3) ! For molden
symmetry(loose) ! Loosen symmetry requirements
Volume ! Report vdW volume
Water H3B-NH3 DF-BP86-D3(PCM)/def2-SVP
Step 2
0 1
N 0.00000 0.00000 -0.90852
H 0.94043 0.00000 -1.36414
H -0.47021 -0.81444 -1.36414
H -0.47021 0.81444 -1.36414
B 0.00000 0.00000 0.29148
H 0.53853 0.93276 0.86317
H -1.07706 0.00000 0.86317
H 0.53853 -0.93276 0.86317
B 1 5 F
--Link1--
%chk=df-bp86-d3.def2svp.rescan.chk
#P BP86/def2SVP/W06 ! Density Functional Theory Calculation
DenFit ! Use density fitting
empiricaldispersion=GD3 ! Use Grimme Dispersion
guess(read) ! Use the MO from the previous step
opt(MaxCycle=100,Loose,ModRed) ! Use more optimisation cycles (Loose only for speed)
scf(xqc,MaxConventionalCycle=500) ! If necessary, resort to quadratic convergence
int(ultrafinegrid) ! Larger Grid
scrf(pcm,solvent=water) ! Use solvent
gfinput gfoldprint iop(6/7=3) ! For molden
symmetry(loose) ! Loosen symmetry requirements
Volume ! Report vdW volume
Water H3B-NH3 DF-BP86-D3(PCM)/def2-SVP
Step 3
0 1
N 0.00000 0.00000 -0.99853
H 0.95023 0.00000 -1.38525
H -0.47511 -0.82292 -1.38525
H -0.47512 0.82292 -1.38525
B 0.00000 0.00000 0.40147
H 0.55372 0.95907 0.87762
H -1.10743 0.00000 0.87762
H 0.55372 -0.95907 0.87762
B 1 5 F
--Link1--
%chk=df-bp86-d3.def2svp.rescan.chk
#P BP86/def2SVP/W06 ! Density Functional Theory Calculation
DenFit ! Use density fitting
empiricaldispersion=GD3 ! Use Grimme Dispersion
guess(read) ! Use the MO from the previous step
opt(MaxCycle=100,Loose,ModRed) ! Use more optimisation cycles (Loose only for speed)
scf(xqc,MaxConventionalCycle=500) ! If necessary, resort to quadratic convergence
int(ultrafinegrid) ! Larger Grid
scrf(pcm,solvent=water) ! Use solvent
gfinput gfoldprint iop(6/7=3) ! For molden
symmetry(loose) ! Loosen symmetry requirements
Volume ! Report vdW volume
Water H3B-NH3 DF-BP86-D3(PCM)/def2-SVP
Step 4
0 1
N 0.00000 0.00000 -1.07752
H 0.95590 0.00000 -1.42049
H -0.47795 -0.82783 -1.42049
H -0.47795 0.82783 -1.42049
B 0.00000 0.00000 0.52248
H 0.56746 0.98288 0.89885
H -1.13493 0.00000 0.89885
H 0.56746 -0.98288 0.89885
B 1 5 F
--Link1--
%chk=df-bp86-d3.def2svp.rescan.chk
#P BP86/def2SVP/W06 ! Density Functional Theory Calculation
DenFit ! Use density fitting
empiricaldispersion=GD3 ! Use Grimme Dispersion
guess(read) ! Use the MO from the previous step
opt(MaxCycle=100,Loose,ModRed) ! Use more optimisation cycles (Loose only for speed)
scf(xqc,MaxConventionalCycle=500) ! If necessary, resort to quadratic convergence
int(ultrafinegrid) ! Larger Grid
scrf(pcm,solvent=water) ! Use solvent
gfinput gfoldprint iop(6/7=3) ! For molden
symmetry(loose) ! Loosen symmetry requirements
Volume ! Report vdW volume
Water H3B-NH3 DF-BP86-D3(PCM)/def2-SVP
Step 5
0 1
N 0.00000 0.00000 -1.14730
H 0.95649 0.00000 -1.46941
H -0.47824 -0.82834 -1.46941
H -0.47824 0.82834 -1.46941
B 0.00000 0.00000 0.65270
H 0.57849 1.00197 0.92763
H -1.15698 0.00000 0.92763
H 0.57849 -1.00197 0.92763
B 1 5 F
--Link1--
%chk=df-bp86-d3.def2svp.rescan.chk
#P BP86/def2SVP/W06 ! Density Functional Theory Calculation
DenFit ! Use density fitting
empiricaldispersion=GD3 ! Use Grimme Dispersion
guess(read) ! Use the MO from the previous step
opt(MaxCycle=100,Loose,ModRed) ! Use more optimisation cycles (Loose only for speed)
scf(xqc,MaxConventionalCycle=500) ! If necessary, resort to quadratic convergence
int(ultrafinegrid) ! Larger Grid
scrf(pcm,solvent=water) ! Use solvent
gfinput gfoldprint iop(6/7=3) ! For molden
symmetry(loose) ! Loosen symmetry requirements
Volume ! Report vdW volume
Water H3B-NH3 DF-BP86-D3(PCM)/def2-SVP
Step 6
0 1
N 0.00000 0.00000 -1.21413
H 0.95402 0.00000 -1.53258
H -0.47701 -0.82621 -1.53258
H -0.47701 0.82621 -1.53258
B 0.00000 0.00000 0.78587
H 0.58528 1.01374 0.96868
H -1.17057 0.00000 0.96868
H 0.58528 -1.01374 0.96868
B 1 5 F
For comparison:
Energies
Step1 SCF Done: E(RB-P86) = -82.7531561507 A.U. after 8 cycles
Step2 SCF Done: E(RB-P86) = -83.0511087360 A.U. after 8 cycles
Step3 SCF Done: E(RB-P86) = -83.1459860378 A.U. after 7 cycles
Step4 SCF Done: E(RB-P86) = -83.1661471840 A.U. after 8 cycles
Step5 SCF Done: E(RB-P86) = -83.1606709617 A.U. after 8 cycles
Step6 SCF Done: E(RB-P86) = -83.1484921220 A.U. after 8 cycles
Volumes
Step1 Molar volume = 459.690 bohr**3/mol ( 41.022 cm**3/mol)
Step2 Molar volume = 479.017 bohr**3/mol ( 42.747 cm**3/mol)
Step3 Molar volume = 401.060 bohr**3/mol ( 35.790 cm**3/mol)
Step4 Molar volume = 406.785 bohr**3/mol ( 36.301 cm**3/mol)
Step5 Molar volume = 570.344 bohr**3/mol ( 50.897 cm**3/mol)
Step6 Molar volume = 498.606 bohr**3/mol ( 44.495 cm**3/mol)
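Assembling the repeated --Link1-- sections above by hand gets tedious; a rough Python templating sketch (the route section is abbreviated here, and the geometries list would come from whatever extraction tool you use):

```python
# Hypothetical helper: build a multi-step Gaussian input from a list of
# geometry blocks (one string of Cartesian coordinates per step).
JOB = """%chk=df-bp86-d3.def2svp.rescan.chk
#P BP86/def2SVP/W06 DenFit empiricaldispersion=GD3{guess}
opt(MaxCycle=100,Loose,ModRed) scf(xqc,MaxConventionalCycle=500)
scrf(pcm,solvent=water) symmetry(loose) Volume

Water H3B-NH3 DF-BP86-D3(PCM)/def2-SVP Step {step}

0 1
{geometry}

B 1 5 F
"""

def build_input(geometries):
    jobs = [JOB.format(step=i + 1,
                       guess="" if i == 0 else " guess(read)",  # read MOs after step 1
                       geometry=geom.strip())
            for i, geom in enumerate(geometries)]
    return "\n--Link1--\n".join(jobs)
```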
Animation | {
"domain": "chemistry.stackexchange",
"id": 9456,
"tags": "computational-chemistry, software"
} |
Adding fade to jquery background position animate | Question: I've just implemented a background animate on some social media icons where the image goes from grey to color on :hover.
I wanted to know if there's a better way to write the following but also implement a fade, so as the background animates, it's also fading in on hover.
<script type="text/javascript">
$(function(){
$('#facebook')
.css( {backgroundPosition: "0 0"} )
.mouseover(function(){
$(this).stop().animate({backgroundPosition:"(-63px 0px)"}, {duration:150})
})
.mouseout(function(){
$(this).stop().animate({backgroundPosition:"(0 0)"}, {duration:150})
})
$('#twitter')
.css( {backgroundPosition: "0 0"} )
.mouseover(function(){
$(this).stop().animate({backgroundPosition:"(-63px 0)"}, {duration:150})
})
.mouseout(function(){
$(this).stop().animate({backgroundPosition:"(0 0)"}, {duration:150})
})
});
</script>
Answer:
I wanted to know if there's firstly a better way to write the following
Well, not much to be improved but you can simplify things a little bit using hover()
$('#facebook')
.css('background-position', '0 0')
.hover(function () {
$(this).stop().animate({
'background-position' : '(-63px 0px)'
}, 150);
}, function () {
$(this).stop().animate({
'background-position' : '(0 0)'
}, 150);
});
implement a fade so as the background animates it's also fading in on hover
Your best bet is to implement it with two separate images instead of a sprite. You absolutely position the images on top of each other, animate their top for the moving effect, and then fadeIn()|fadeOut(). | {
"domain": "codereview.stackexchange",
"id": 1350,
"tags": "javascript, jquery, css"
} |
complexity of the half language | Question: For any language $L \subseteq \Sigma^*$, define
$$L_{1/2} = \{x \in \Sigma^* : \exists y\in\Sigma^{|x|},\ xy\in L \}.$$
In words, $L_{1/2}$ consists of all $x$ for which there is a $y$ of equal length such that $xy\in L$.
An exercise in Sipser's book asks to show that $L_{1/2}$ is regular whenever $L$ is. I have seen two distinct solutions, and both involve an exponential blow-up of states.
Question: can anyone construct a family of languages $\{L_n\}$ such that the canonical automaton for $(L_n)_{1/2}$ is significantly (say, exponentially) larger than that for $L_n$? My best efforts so far only increase the state size by $+1$!
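(For concreteness, here is the brute-force decision procedure I have in mind, as a quick Python sketch with an ad-hoc DFA encoding: run the DFA forward on $x$, then iterate the predecessor relation $|x|$ times backward from the accepting states.)

```python
def in_half(x, delta, start, accepting, states):
    """Decide x in L_{1/2} for a DFA with transition dict
    delta: (state, symbol) -> state."""
    p = start
    for c in x:                      # forward: state reached on x
        p = delta[(p, c)]
    alphabet = {c for (_, c) in delta}
    R = set(accepting)               # states reaching `accepting` in 0 steps
    for _ in x:                      # |x| backward steps of the predecessor relation
        R = {q for q in states
             if any(delta[(q, c)] in R for c in alphabet)}
    return p in R                    # exists y with |y| = |x| and xy in L

# L = strings over {a, b} ending in 'b'
delta = {(0, 'a'): 0, (0, 'b'): 1, (1, 'a'): 0, (1, 'b'): 1}
print(in_half('ab', delta, 0, {1}, {0, 1}))  # True: take y = 'bb'
```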
Answer: See Mike Domaratzki's paper, State complexity of proportional removals
http://dl.acm.org/citation.cfm?id=782471
http://www.cs.umanitoba.ca/~mdomarat/pubs/sc_jalc.ps | {
"domain": "cstheory.stackexchange",
"id": 1395,
"tags": "fl.formal-languages, automata-theory"
} |
p-cycles and Fluxes | Question: I would like to ask why the existence of a non-trivial p-cycle leads to a non-trivial flux. I would say that e.g. for a five-form $F_{(5)}$ field strength , the flux is: $$\int\limits_{\mathcal{C}^{5}}F_{(5)} $$ so in general: $$\int\limits_{\mathcal{C}^{p}}F_{(p)} $$
Is this correct? And why the geometry should be a p-cycle? Couldn't it be some other topology?
Answer: A p-cycle is a differential form that lives in $ker(\partial_p)$ for the differential $\partial_p$ (in grading $p$), and such a form is nontrivial if it is not in the image of $\partial_{p+1}$. Mathematically we can see this as a cycle that is not the boundary of anything, picture a circle around a torus that bounds no area on the torus. If one has a boundary we can have the Stokes' rule that
$$
\int_{\partial M}\omega = \int_M d\omega.
$$
This is seen in Gauss' law. For a cocycle we then have a closed form, $d\omega = 0$, with $\omega \ne d\xi$, i.e. one that is not the result of a coboundary. Physically this means the field content is not due to another field. This has some bearing on gauge invariance: under ${\bf A}\rightarrow {\bf A} + d\xi$, the property $d^2\xi = 0$ is what guarantees gauge invariance. This is a topological form of a similar thing, and is seen in BRST quantization.
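Spelled out for an abelian field strength $F = d{\bf A}$, that one-line check is:
$$F = d{\bf A} \rightarrow d\left({\bf A} + d\xi\right) = d{\bf A} + d^{2}\xi = d{\bf A}$$
so the shifted potential carries exactly the same physics.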
The argument for p-cycles is that fields are not due to special conditions on a boundary, but are purely topological. This removes the need for auxiliary conditions in the theory. | {
"domain": "physics.stackexchange",
"id": 31259,
"tags": "string-theory, topology"
} |