anchor | positive | source |
|---|---|---|
Gamma random variable, need to find the approximate 90th percentile of X? | Question: A colleague defines a random variable $X = \frac{Z}{Y^2}$, where $Z$ is a known normal random variable, $Y$ is a known gamma random variable, and $Z$ and $Y$ are independent of each other.
You are not able to get an analytical form for the cumulative distribution function for $X$, but you need to find the approximate 90th percentile of $X$. How do you go about doing this?
Answer: In R:
> Z = function(n){rnorm(n)} # normal(0,1) here
> Y = function(n){rgamma(n,10,10)} # some gamma variable
> quantile(Z(100000)/Y(100000)^2,.9) # 100000 from your target dist
90%
1.635033
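The same Monte Carlo estimate can be sketched in Python/NumPy (my translation, not part of the original answer; it assumes the same Normal(0,1) and Gamma(shape 10, rate 10) choices, and note that NumPy's gamma sampler takes a scale parameter equal to 1/rate):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
z = rng.standard_normal(n)                      # Z ~ Normal(0, 1)
y = rng.gamma(shape=10, scale=1 / 10, size=n)   # Y ~ Gamma(10, rate=10)
q90 = np.quantile(z / y**2, 0.9)                # empirical 90th percentile, close to the R estimate
```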
So 90% of the values are less than 1.635 | {
"domain": "datascience.stackexchange",
"id": 1435,
"tags": "distribution"
} |
How to teleop pioneer3? | Question:
Hi ROS-users,
I have some..
Hardware: Pioneer3, SICK-lms200, laptop, 2 usb-serial cables
Software: ros-hydro-desktop-full on laptop running ubuntu-12.04. sicktoolbox(works fine)
Objectives: 1. To control robot by laptop keyboard and log laser data.
2. To build map from that logged data using slam_gmapping.(I know how to achieve objective2)
I can connect my robot and sick laser using rosaria.
Can anyone tell me how I can move my robot using the keyboard?
Thanks in advance
Hossain
Originally posted by cognitiveRobot on ROS Answers with karma: 167 on 2013-10-17
Post score: 0
Answer:
I don't see any sort of launch or nodes in the rosaria stack that offer keyboard teleop.
RosAria does subscribe to the "cmd_vel" topic, so if you want you can write a simple node that listens for keyboard input and translates it into geometry_msgs/Twist messages (http://docs.ros.org/api/geometry_msgs/html/msg/Twist.html).
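To make that suggestion concrete, here is just the key-to-velocity translation such a node needs, as plain Python (the key bindings and speeds are invented for illustration; in a real node you would read raw key presses, copy these values into a geometry_msgs/Twist, and publish it on cmd_vel with rospy):

```python
# Hypothetical key bindings: key -> (linear.x, angular.z) for a Twist message.
KEY_BINDINGS = {
    'i': (0.5, 0.0),    # forward
    ',': (-0.5, 0.0),   # backward
    'j': (0.0, 1.0),    # turn left
    'l': (0.0, -1.0),   # turn right
    'k': (0.0, 0.0),    # stop
}

def key_to_velocities(key):
    """Translate one key press into (linear.x, angular.z) velocities."""
    return KEY_BINDINGS.get(key, (0.0, 0.0))   # unknown keys mean "stop"
```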
I personally have been using the p2os stack for a while and it also lacks such a node. I have been getting by using the erratic robot stack and its keyboard teleop launch file. You can certainly look there to find out what needs to happen in your own version or simply use it as is.
Originally posted by skiesel with karma: 549 on 2013-10-23
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by cognitiveRobot on 2013-10-23:
Thanks. i tried with p2os. failed:) Can you be more specific what i should follow? or can u please share what you used?
Comment by skiesel on 2013-10-23:
I'm suggesting that you check out the keyboard teleop in the erratic stack as a model to follow.
Comment by cognitiveRobot on 2013-10-23:
$ sudo apt-get install ros-hydro-erratic-robot
[sudo] password for albot1:
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package ros-hydro-erratic-robot
should i download and make their package in ws?
Comment by skiesel on 2013-10-23:
Sorry, to be more clear, I would recommend that you check out this file as an example of how to write your own keyboard teleop node:
https://github.com/arebgun/erratic_robot/blob/master/erratic_teleop/src/keyboard.cpp
Comment by cognitiveRobot on 2013-10-23:
Thanks. That link isn't active anymore. i guess https://github.com/arebgun/erratic_robot/blob/master/erratic_teleop/src/keyboard.cpp is same. so if i build this cpp file somehow and then run with $roscore and $ rosrun rosaria RosAria (to connect my robot) i should be able to control my robot, right? or do i need anything else running?
Comment by skiesel on 2013-10-23:
Sorry that's frustrating. Try this: https://github.com/arebgun/erratic_robot/ Then click on "erratic_teleop", then click on "src" then "keyboard.cpp". Rosaria listens on the cmd_vel topic. You need something that translates keyboard input into a message of this type (geometry_msgs/Twist). | {
"domain": "robotics.stackexchange",
"id": 15906,
"tags": "control, ros, laser, data"
} |
Cylindrical wave | Question: I know that a wave depending only on the radius (cylindrical symmetry) is well approximated, when $r$ is big, by $$u(r,t)=\frac{a}{\sqrt{r}}[f(r-vt)+f(r+vt)]$$ I would like to know how to deduce that approximation from the wave equation, which is this (after making symmetry simplifications):
$$u_{tt}-v^2\left(u_{rr}+\frac{1}{r}u_r\right)=0$$
Proving that it's a good approximation is easy (just plug it into the equation); I want to know how to deduce it from the above equation.
I've been searching and I found this: http://vixra.org/abs/0908.0045, which actually solved me a couple of problems, but the way they do it looks a bit clumsy to me, saying for example that "assuming the function $g$ depends on $r$ so some terms just go away..."
Thanks in advance.
Answer: Use the following identity:
$$ r^{-\alpha} \partial^2_{rr} \left( r^\alpha f(r) \right) = f_{rr} + \frac{2\alpha}{r} f_r + \frac{\alpha(\alpha - 1)}{r^2} f $$
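This identity follows from the product rule; as a sanity check, it can be verified symbolically (an illustrative sympy sketch, not part of the original answer):

```python
import sympy as sp

r = sp.symbols('r', positive=True)
alpha = sp.symbols('alpha')
f = sp.Function('f')

# Left-hand side: r^{-alpha} d^2/dr^2 ( r^alpha f(r) )
lhs = r**(-alpha) * sp.diff(r**alpha * f(r), r, 2)
# Right-hand side: f'' + (2 alpha / r) f' + alpha (alpha - 1) / r^2 f
rhs = (sp.diff(f(r), r, 2) + 2*alpha/r * sp.diff(f(r), r)
       + alpha*(alpha - 1)/r**2 * f(r))
assert sp.simplify(lhs - rhs) == 0
```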
Now, by inspection and comparing the above equation to the cylindrical wave equation you have that
$$ u_{tt} - \nu^2(u_{rr} + \frac1r u_r) = u_{tt} - \nu^2 \left( \frac{1}{\sqrt{r}}\partial^2_{rr} \left[ \sqrt{r} u\right] + \frac{1}{4r^{5/2}} \sqrt{r} u\right) $$
So writing $U = \sqrt{r} u$ we have that
$$ U_{tt} - \nu^2 U_{rr} - \frac{\nu^2}{4r^2} U = 0 $$
So that $U = \sqrt{r} u$ solves the 1 dimensional wave equation up to a term that decays quickly (as inverse square). Hence $u$ is approximated by $1/\sqrt{r}$ times a solution of the 1 dimensional wave equation when $r$ is large. | {
"domain": "physics.stackexchange",
"id": 6639,
"tags": "waves, wavefunction"
} |
Reason for why a physical quantity is zero in the below description | Question: If we assume the centre of mass to be the origin and the frame is the centre of mass frame, then we know that the total linear momentum of system from centre of mass frame is always zero.
But what about the total angular momentum?
Suppose that we have a body rotating about a fixed axis. Consider the $i$-th particle. I have posted the picture. Let the position vector be $\vec r$, the linear momentum vector be $\vec P$, and the angular momentum be $L$.
Then
$$L = \sum \vec r\times \vec P = \sum(\vec r\,' + \vec r_c)\times(\vec P' + \vec P_c) = \sum \vec r\,'\times \vec P' + \sum \vec r_c\times \vec P' + \ldots$$
Here $\vec r_c$ means the position vector of the centre of mass. There are also two other terms which I'm not writing, because I know the meaning of those two remaining terms. The first term I wrote is the angular momentum of the body with respect to the centre of mass, as told by our teacher. But we know that the linear momentum in the centre-of-mass frame is zero, so in this case it is zero times something, which should be zero. So the total angular momentum about the centre of mass should be zero, but our teacher said it is non-zero. Why?!
Also, if possible, kindly tell me why the second term is zero, as the teacher said?
Answer: I will show you something that maybe helps. Look at the total angular momentum as measured from the axis of rotation: $\vec{L} = \sum_i \vec{r}_i \times \vec{p}_i$. Now you can rewrite $\vec{r}_i$ as $\vec{r}_i = \vec{r}_{CM} + \vec{r'}_i$. Doing so, you can see that:
$\begin{align} \sum_i \vec{r}_i \times \vec{p}_i &= \sum_i (\vec{r}_{CM} + \vec{r'}_i) \times \vec{p}_i = \vec{r}_{CM} \times \sum_i \vec{p}_i + \sum_i \vec{r'}_i \times \vec{p}_i = \sum_i \vec{r'}_i \times \vec{p}_i \end{align}$.
This means that the total angular momentum stays the same, no matter which reference you choose (as you should expect). The reason this is true is because the sum of the linear momenta is zero, as you pointed out!
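This invariance is easy to check numerically. A small NumPy sketch (random illustrative data, not from the problem): when the momenta sum to zero, the total angular momentum is the same about any origin.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
r = rng.normal(size=(n, 3))          # particle positions in some frame
p = rng.normal(size=(n, 3))
p -= p.mean(axis=0)                  # enforce total linear momentum = 0

L_origin = np.cross(r, p).sum(axis=0)
shift = np.array([1.0, -2.0, 0.5])   # arbitrary change of origin
L_shifted = np.cross(r - shift, p).sum(axis=0)
assert np.allclose(L_origin, L_shifted)   # sum a x p_i vanishes since sum p_i = 0
```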
Now looking at what you wrote, you also split up the linear momenta:
$\begin{align} \sum_i \vec{r}_i \times \vec{p}_i &= \sum_i (\vec{r}_{CM} + \vec{r'}_i) \times (\vec{p}_{CM} + \vec{p'}_i) = \sum_i \vec{r'}_i \times \vec{p'}_i + \sum_i \vec{r}_{CM} \times \vec{p'}_{i} + \ldots \end{align}$
These two last terms are the ones you wrote down. The second term is zero for the reason I already explained: $\vec{r}_{CM}$ is independent of the index $i$, so you can pull it out of the summation so that $\sum_i \vec{r}_{CM} \times \vec{p'}_{i} = \vec{r}_{CM} \times \sum_i \vec{p'}_{i}$ and your object only rotates, it does not move, so $\sum_i \vec{p'}_{i} = 0$. That is why the second term is zero! Now the total angular momentum of the center of mass is non zero, as your teacher pointed out correctly. This is because your rigid body is spinning around an axis which is not going trough the center of mass, so your center of mass is rotating. That is why it SHOULD have a non zero angular momentum! I hope this makes it all clear, if not please ask again :) | {
"domain": "physics.stackexchange",
"id": 85367,
"tags": "newtonian-mechanics, angular-momentum, reference-frames, vectors"
} |
If we had three eyes, would our visual perspective be fourth dimensional? | Question: If one covers up one eye, then he loses depth perception (two dimensional perspective). When we uncover that eye, we can now see depth (three dimensional perspective). My question is if we had four eyes, would we be able to see from a four dimensional perspective?
Answer: No.
The world we observe with our five senses is three dimensional. Two independent measurements are enough to calculate the three dimensional position of everything, which is what our brain does with the input of two eyes.
More eyes would only over constrain the solution, and might help in low lighting or long distance estimates when the errors are large. | {
"domain": "physics.stackexchange",
"id": 25003,
"tags": "spacetime-dimensions, perception, vision, visualization"
} |
Encode text to Baudot (teletype) code | Question: After watching Computerphile videos on WW2 cryptography and radioteletype I decided to code a translator to Baudot code. I wonder if it can be somehow improved?
def translate_to_baudot(untranslatedtext):
    from textwrap import wrap
    def getdic():
        chars=list('ABCDEFGHIJKLMNOPQRSTUVWXYZ ∆-:$3!\'().,9014‽57;2/6"') #‽ is bell and ∆ is new line
        baud=["11000","10011","01110","10010","10000","10110","01011","00101","01100","11010","11110","01001","00111","00110","00011","01101","11101","01010","10100","00001","11100","01111","11001","10111","10101","10001","00100","01000","11000","10011","01110","10010","10000","10110","01011","00101","01100","11010","11110","01001","00111","00110","00011","01101","11101","01010","10100","00001","11100","01111","11001","10111","10101","10001"]
        baudic={chars[i]: baud[i] for i in range(len(chars))}
        return baudic
    def add_shift(text,charnum):
        if charnum>0:
            if text[charnum-1] in 'ABCDEFGHIJKLMNOPQRSTUVWXYZ ∆' and text[charnum] not in 'ABCDEFGHIJKLMNOPQRSTUVWXYZ':
                return '11111'
            elif text[charnum-1] not in 'ABCDEFGHIJKLMNOPQRSTUVWXYZ ∆' and text[charnum] in 'ABCDEFGHIJKLMNOPQRSTUVWXYZ':
                return '11011'
            else:
                return ""
        elif charnum==0 and text[0] not in 'ABCDEFGHIJKLMNOPQRSTUVWXYZ ':
            return '11111'
        else:
            return ""
    def bell_check(text):
        text=text.replace('\a','‽')
        return text
    def line_check(text):
        text=text.replace('\n','∆')
        return text
    def strip_non_baud(text):
        for char in text:
            if char not in 'ABCDEFGHIJKLMNOPQRSTUVWXYZ-:$3!\'().,9014‽57;2/6" ∆':
                text=text.replace(char,'')
        return text
    def text_to_baud(text):
        baudic=getdic()
        baudtext=''
        charnum=0
        for char in text:
            baudtext+=add_shift(text,charnum)
            baudtext+=baudic[char]
            charnum+=1
        return baudtext
    text=untranslatedtext
    text=bell_check(text)
    text=line_check(text)
    text=text.upper()
    text='‽‽‽∆'+strip_non_baud(text)+'∆‽‽‽'
    print(text)
    baudtext=text_to_baud(text)
    baudint=int(baudtext,2)
    print(baudint)
    return baudtext

if __name__=='__main__':
    text=input('enter your text here: ')
    translate_to_baudot(text)
I made this when I just had started coding so it's probably not even close to the best approach.
Answer: You need blank lines between your functions for both PEP8 and general sanity.
This:
chars[i]: baud[i] for i in range(len(chars))
should not use range, and should use zip instead. However, I am going to suggest that you rework this database so that your alphabets are expressed in binary order, not in alphabetic order. (If you stuck to alphabetic order you'd also want to sort your figure characters 3!'().,9014‽57;2/6, which you have not.) If you hold these in binary order, you can simply enumerate() over them to get each character's code as its index.
I don't understand why you use the pseudo-printable substitutes ∆, ‽ for newline and bell. You're going from plaintext to an encoding. If someone wants a bell, they can write the alarm character \a within the input string; I don't see why bell_check would be useful. Also note that (like Windows, and traditional printers) this encoding relies on CRLF pairs, so you might want to do some translation from single '\n' characters to such pairs.
I had a lot of difficulty in reconciling your alphabet data with the alphabets described on Wikipedia. In my example code I have assumed that Wikipedia is right.
The Baudot shift is stateful; that is, the terminal is either in letter mode or figure mode for a string of potentially multiple characters. Your add_shift checks both the previous and current character, but this is more complicated than it needs to be: you can just hold a state variable for whether you're in letter or figure mode. Rather than returning a blank string, consider writing an iterator function that either yields or doesn't.
Consider an optional feature that throws if there are non-encodable characters in the input.
Representing the output as one enormous integer is not useful. If anything, you would want to form a binary-packed bytes object; but I have not shown how to do this. Instead I have demonstrated an easy way to show the binary output with separation spaces.
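For completeness, one way such binary packing could look (a hypothetical helper of my own, not part of the answer below): concatenate the 5-bit code strings, pad the tail with zeros to a whole number of bytes, and convert each 8-bit slice.

```python
def pack_codes(codes):
    """Pack 5-bit code strings into bytes, zero-padding the final byte.
    Illustrative sketch; the answer deliberately omits this step."""
    bits = ''.join(codes)
    bits += '0' * (-len(bits) % 8)   # pad to a multiple of 8 bits
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
```

For example, the two codes 10000 11011 concatenate to 1000011011, which pads to 10000110 11000000.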
Suggested
# Assume US TTY variant of ITA2, LSB on right
# https://en.wikipedia.org/wiki/Baudot_code#ITA2
from typing import Iterator

LETTER_SERIES = (
    '\0'  # null
    'E'
    '\n'  # linefeed
    'A SIU'
    '\r'  # carriage return
    'DRJNFCKTZLWHYPQOBG'
    '\x0E'  # shift-to-figure (shown as "shift-out")
    'MXV'
    '\x0F'  # letter page extension (shown as "shift-in")
)

FIGURE_SERIES = (
    '\0'  # null
    '3'
    '\n'  # linefeed
    '- '
    '\a'  # bell (shown as "alarm")
    '87'
    '\r'  # carriage return
    '$4'
    "'"
    ',!:(5")2#6019?&'
    '\x0E'  # figure page extension (shown as "shift-out")
    './;'
    '\x0F'  # shift-to-letter (shown as "shift-in")
)

def series_to_codes(series: str) -> Iterator[tuple[str, str]]:
    for i, c in enumerate(series):
        yield c, f'{i:05b}'

LETTER_CODES = dict(series_to_codes(LETTER_SERIES))
FIGURE_CODES = dict(series_to_codes(FIGURE_SERIES))

def text_to_baudot_codes(text: str, strict: bool = False) -> Iterator[str]:
    i_codeset = 0
    codesets = LETTER_CODES, FIGURE_CODES
    shifts = '\x0E\x0F'
    for orig in text:
        if orig in shifts:
            if strict:
                raise ValueError()
            continue
        orig = orig.upper()
        current_code = codesets[i_codeset].get(orig)
        if current_code is not None:
            yield current_code
            continue
        other_code = codesets[1 - i_codeset].get(orig)
        if other_code is not None:
            yield codesets[i_codeset][shifts[i_codeset]]
            yield other_code
            i_codeset = 1 - i_codeset
            continue
        if strict:
            raise ValueError()

def text_to_baudot(text: str, strict: bool = False) -> str:
    return ' '.join(text_to_baudot_codes(text, strict))

if __name__ == '__main__':
    text = input('Text: ')
    print(text_to_baudot(text))
Output
Text: t3rd.
10000 11011 00001 11111 01010 01001 11011 11100 | {
"domain": "codereview.stackexchange",
"id": 43649,
"tags": "python, beginner, strings"
} |
Naive string compression | Question: Description:
Implement a method to perform basic compression using counts of repeated characters. If the compressed string is not smaller then return the original string.
Code:
class Main {
    public static String compress(String in) {
        if (in == null) {
            throw new IllegalArgumentException("Input string cannot be null");
        }
        if (in.length() <= 1) {
            return in;
        }
        StringBuffer out = new StringBuffer();
        int count = 1;
        for (int i = 1; i < in.length(); i++) {
            char current = in.charAt(i);
            char previous = in.charAt(i - 1);
            if (current == previous) {
                count++;
            } else {
                out.append(previous);
                out.append(count);
                count = 1;
            }
        }
        out.append(in.charAt(in.length() - 1));
        out.append(count);
        return out.toString().length() < in.length() ? out.toString() : in;
    }

    public static void main(String[] args) {
        try {
            System.out.println("Should not happen: " + compress(null));
        } catch (IllegalArgumentException e) {
            System.out.println("Got expected exception for null");
        }
        System.out.println(compress("").equals(""));
        System.out.println(compress("a").equals("a"));
        System.out.println(compress("ab").equals("ab"));
        System.out.println(compress("aa").equals("aa"));
        System.out.println(compress("aabcccccaaa").equals("a2b1c5a3"));
        System.out.println(compress("aabb").equals("aabb"));
    }
}
Question:
The solution was just based on intuition. The last append was by trial and error which I didn't like (I may be more nervous during the real interview). I would like to know if there is any way to avoid such mistakes, possibly using loop invariants?
Answer: The main problem IMO is, that you cram it all into a single method which is not helpful to organize your thoughts.
When I tried this for comparison, I came up with the following in pseudo code:
// outer logic
compress(s)
    rle = doRunLengthEncoding(s)
    if rle.length < s.length return rle
    else return s

// base algorithm
doRunLengthEncoding(s)
    res = empty String buffer
    char[] characters = s.toCharArray
    int startIndex = 0
    while startIndex < characters.length
        int endIndex = findEndOfCurrentSequence(startIndex, characters)
        res.append(currentChar).append(endIndex - startIndex)
        startIndex = endIndex
    return res

// return the first index i, where c[i] != c[start], return
// c.length if there is no such index
findEndOfCurrentSequence(start, c)
    ... // simple enough
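For concreteness, the pseudocode could be rendered in Python roughly like this (an illustrative sketch of my own; the names mirror the pseudocode):

```python
def compress(s):
    """Return the run-length encoding of s if it is shorter, else s."""
    rle = run_length_encode(s)
    return rle if len(rle) < len(s) else s

def run_length_encode(s):
    out = []
    start = 0
    while start < len(s):
        end = end_of_current_sequence(s, start)
        out.append(s[start] + str(end - start))
        start = end
    return ''.join(out)

def end_of_current_sequence(s, start):
    """First index i > start with s[i] != s[start], or len(s)."""
    end = start + 1
    while end < len(s) and s[end] == s[start]:
        end += 1
    return end
```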
This way, you divide the problem into manageable pieces, give your thoughts a place to stop every once in a while, and should get rid of this nervousness.
BTW: from an interviewer's point of view (at least if you want into my team in my country ;-)) a well thought-out pseudocode algorithm is worth more than working code. If you apply for a programmer's job, I just assume that you have the basic skills to use an IDE and kick against some piece of code until it compiles and produces the correct result. What I am looking for is organized thoughts and problem-solving skills.
One last thing: there is one and only one correct exception to throw for a null parameter that must not be null: NullPointerException. Ideally by simply using Objects.requireNonNull. | {
"domain": "codereview.stackexchange",
"id": 29993,
"tags": "java, strings, programming-challenge, interview-questions, compression"
} |
Name of the ball screw nut used in pistons? | Question: I have seen several products on the internet marketed as 'ball screws' that have both the screw itself and the special ball bearing nut. They are intended to have a motor mounted on one end of the screw to rotates it, causing the nut to be moved.
However, I'm looking for the particular kind of ball screw where the nut is part of a gear, and mounted via axial bearings, so that as it is rotated it remains stationary and the screw is extended.
Are these things sold in complete units without any machining necessary? What are they called/where can I find one?
Answer: The thing is called a traveling screw (TS) linear actuator.
You should be able to get them as complete units from the companies making them. | {
"domain": "engineering.stackexchange",
"id": 392,
"tags": "mechanical-engineering"
} |
Determine whether a list contains a particular sequence | Question: Allowing wraparound, the sequence [2, 3, 5, 7, 11] contains the sequence [7, 11, 2].
I'm looking for compact/pythonic way to code this, rather than an optimally-efficient way. (Unless I need the efficiency, I usually gravitate to the path of least ASCII).
An obvious method would be as follows (I provide three different ways to code it):
def Contains(L, s):
    L_extended = L+L[0:len(s)]
    for i in xrange( len(L) ):
        if L_extended[i:i+len(s)] == s:
            return True
    return False

def Contains2(L, s):
    L_extended = L+L[0:len(s)]
    return next( \
        ( True for i in xrange(len(L)) if L_extended[i:i+len(s)] == s ), \
        False \
    )

def Contains3(L, s):
    L_extended = L+L[0:len(s)]
    try:
        next( i for i in xrange(len(L)) if L_extended[i:i+len(s)] == s )
    except StopIteration:
        return False
    return True

print Contains3( [1,2,3,4,5], [1,3] ) # False
print Contains3( [1,2,3,4,5], [2,3] ) # True
print Contains3( [1,2,3,4,5], [4,5,1] ) # True
My preference is for the first, but Python is full of pleasant surprises -- can anyone see a cleaner way to do it?
Answer: As you said in your comments above, the first is the cleanest way you came up with.
Instead of writing your own logic, Python has a built-in library difflib that contains the SequenceMatcher class. You can use this to find the longest match between two iterables:
from difflib import SequenceMatcher

def contains(main_list, sub_list, junk=None):
    sub_len = len(sub_list)
    extended = main_list + main_list[:sub_len]
    sequence = SequenceMatcher(junk, extended, sub_list)
    return sequence.find_longest_match(0, len(extended), 0, sub_len).size == sub_len
My only qualm with find_longest_match is that its parameters are ALL positional arguments and are thus required. I would prefer the API assume we want the entire length of both iterables searched. Instead of using find_longest_match you could instead use the get_matching_blocks function (which returns tuples) then iterate through those until the corresponding tuple value is equal to sub_len.
By using SequenceMatcher you also get the flexibility to declare what 'junk' you want the class to ignore. So say you want to find the longest match between the two lists, but you don't care about 1 or 2. You would simply pass the lambda:
lambda num: num in [1, 2]
as the junk parameter.
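Putting it together, here is the approach as a runnable whole, exercised on the wraparound example from the question (note that the difflib method is spelled find_longest_match):

```python
from difflib import SequenceMatcher

def contains(main_list, sub_list, junk=None):
    # Does main_list contain sub_list as a contiguous run, allowing wraparound?
    sub_len = len(sub_list)
    extended = main_list + main_list[:sub_len]
    matcher = SequenceMatcher(junk, extended, sub_list)
    return matcher.find_longest_match(0, len(extended), 0, sub_len).size == sub_len

print(contains([1, 2, 3, 4, 5], [4, 5, 1]))  # wraparound match -> True
print(contains([1, 2, 3, 4, 5], [1, 3]))     # not contiguous -> False
```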
Now that I've given my structure recommendation, I want to give some style pointers.
First of all, take a look at PEP8, the official Python style guide. It gives a great reference to how your code should look. Here are some places your code strays from the Python conventions:
Use descriptive variable names. Single letter variable names are typically frowned-upon. When naming, err on the side of being too descriptive instead of too brief.
Use underscores_in_names for basically everything in Python. The only aberration from this rule is class names which are written in PascalCase.
Also, variable and function/method names should be lowercase. Conventionally, uppercase names are saved for constants. So by convention, your L variable is considered constant.
Your spacing is quite nice; except for a few occasions where you place a single space after an opening paren and before the close paren. PEP8 actually labels that style as a pet peeve.
Multi-line statements happen. PEP8 purposefully takes a neutral stance on how those should be formatted. However, Python is quite lenient in terms of newlines in statements, so more than likely, there is a better way to format your multi-line statements without escaping the newlines.
def Contains2(L, s):
    L_extended = L+L[0:len(s)]
    return next((True for i in xrange(len(L))
                 if L_extended[i:i+len(s)] == s), False) | {
"domain": "codereview.stackexchange",
"id": 7882,
"tags": "python"
} |
Stacked bodies, no applied forces, internal forces accelerate the whole system | Question: A block P of mass $m_{1}$ is on a frictionless horizontal plane and a block Q of mass $m_{2}$ is always on top of P. Initially P and Q are at rest. At time t=0, an initial speed $v_{0}$ is given to P in the rightward direction. Then Q also starts to move. When a time T is passed after P is given an
initial speed, the velocity of P coincides with the velocity of Q. A coefficient
of kinetic friction between P and Q is denoted as µ. Treat the rightward
direction as positive, and the acceleration of gravity is denoted as $g$.
Diagram
Find the expression of T using some other suitable quantities.
This is a question of the MEXT scholarships selection test of physics 2016
Can you please help me out with the solution? I tried to use kinematic equations but I'm missing something.
Answer: From the diagram, P has mass M and Q has mass m. Then the final common velocity is
$V = v_0 - \frac{\mu m g}{M}T = 0 + \frac{\mu m g}{m}T = \mu g T$. Solve for T. | {
"domain": "physics.stackexchange",
"id": 69508,
"tags": "homework-and-exercises, newtonian-mechanics, friction, free-body-diagram"
} |
Demodulation with MATLAB | Question: I am trying to listen VLF radio signals. I have a recorded wave file (download) and here is the frequency spectrum:
I have produced this spectrum using this matlab code:
[signal, fs] = wavread('/Users/ecabuk/Downloads/DR0000_0165.wav'); % wavread also returns the sample rate
N = length(signal); % number of samples, needed below
X_mags = abs(fft(signal));
bin_vals = [0 : N-1];
fax_Hz = bin_vals*fs/N;
N_2 = ceil(N/2);
plot(fax_Hz(1:N_2), 10*log10(X_mags(1:N_2)))
xlabel('Frequency (Hz)')
ylabel('Magnitude (dB)');
title('Single-sided Magnitude spectrum (Hertz)');
axis tight
It has two clear peaks near ~24 kHz and ~25 kHz.
How can I crop the 24 kHz component, return to the time domain, and save it as a wave file, so I can hear it? (And the same for 25 kHz, of course.)
I am not sure, but am I trying to do a demodulation process?
Answer: You are asking 2 questions:
How to extract those labeled frequencies and remove the rest of the data?
How to do Demodulation?
Well, regarding your first question, a dirty solution would be to work on the DFT data.
Just zero all frequencies but those you're after and apply Inverse DFT (ifft).
Leave some margin around your wanted frequencies.
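Concretely, the "zero the unwanted bins and invert" idea looks like this (an illustrative NumPy sketch using synthetic tones instead of the recording; the frequencies and the 5 Hz margin are made up for the example):

```python
import numpy as np

fs, n = 1000, 1000
t = np.arange(n) / fs
# Two tones standing in for the two spectral peaks.
x = np.sin(2 * np.pi * 100 * t) + np.sin(2 * np.pi * 200 * t)

X = np.fft.fft(x)
freqs = np.fft.fftfreq(n, d=1 / fs)
# Keep only bins within 5 Hz of +/-100 Hz, zero everything else.
keep = np.abs(np.abs(freqs) - 100) <= 5
y = np.fft.ifft(np.where(keep, X, 0)).real

# Only the 100 Hz component survives the round trip.
assert np.allclose(y, np.sin(2 * np.pi * 100 * t), atol=1e-8)
```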
About the second question, look at Wikipedia - Demodulation.
Basically, in order to shift data in the frequency domain you multiply by a pure harmonic signal in the time domain.
You choose the frequency of this harmonic signal such that (at least in the classic case) the frequency you want moves to DC.
Then you apply a Low Pass Filter to leave only the data you wanted.
This is the opposite of the Modulation process. | {
"domain": "dsp.stackexchange",
"id": 1990,
"tags": "matlab, frequency-spectrum, demodulation"
} |
My first graph traversal code | Question: I'm trying to solve a graph traversal problem I found, and was wondering how I could improve my implementation. Currently it seems a little convoluted and long.
The problem is as follows:
I have a matrix of size (rows x cols). This matrix has some cells that are empty (designated by a 0) and some cells that are blocked off (designated by a 1). Given a length, I would like to find a consecutive set of empty cells (ie, a path) of size length.
For this specific code instance, I actually hardcoded the matrix, as well as the length. My approach was:
to loop through the matrix;
at each position try to find a path, if that position was empty and not yet visited;
if so, then find all adjacent positions that fit the same condition (ie, empty and not visited);
if adjacent positions exist, continue finding further adjacent positions until my path is complete in size;
if a path is not possible, then print "impossible".
My code is as follows:
def positionInGraph(pos):
    return ((pos[0] >= 0) and (pos[1] >= 0)) and ((pos[0] < rows) and (pos[1] < cols))

def positionNotVisited(pos):
    return not(pos in visited)

def positionIsEmpty(pos):
    return (graph[pos[0]][pos[1]]==0)

def getAdjacent(posx, posy):
    adjQueue = [(posx-1,posy), (posx, posy+1), (posx+1, posy), (posx, posy-1)]
    for i in adjQueue:
        if (positionInGraph(i) and positionNotVisited(i) and positionIsEmpty(i)):
            yield i

def getOnePath(pos):
    counter = 0
    pathList = []
    processQueue = []
    if (not(positionIsEmpty(pos)) or (pos in visited)):
        return (pathList, counter)
    else:
        processQueue.append(pos)
    while (processQueue and (counter < length)):
        currentPos = processQueue.pop()
        visited.add(currentPos)
        pathList.append(currentPos)
        counter = counter + 1
        adjQueue = getAdjacent(currentPos[0],currentPos[1])
        if (not(adjQueue) and counter < length):
            pathList.pop(currentPos)
            counter -= 1
        for i in adjQueue:
            processQueue.append(i)
    print pathList
    return (pathList, counter)

def findFinalPath():
    #5 4 8
    # oxoo
    # xoxo
    # ooxo
    # xooo
    # oxox
    for i in xrange(0,rows):
        for j in xrange(0,cols):
            path = getOnePath((i,j))
            if path[1] == length:
                return path[0]
    return "impossible"

visited = set()
graph = [[0,1,0,0],[1,0,1,0],[0,0,1,0],[1,0,0,0],[0,1,0,1]]
rows = 5
cols = 4
length = 8
findFinalPath()
Answer: 1. Bug
I tried a different test case:
graph = [[1, 0, 0, 0, 1], [1, 1, 0, 1, 0], [0, 0, 0, 0, 0], [1, 1, 0, 1, 1], [1, 0, 0, 0, 0]]
rows = 5
cols = 5
length = 8
and then findFinalPath returned the following path:
[(0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (3, 2), (4, 2)]
This is not a legal path, since (2, 0) is not adjacent to (3, 2).
I give an explanation for this bug in 2.18 below, but it would be a good exercise for you to try to figure out for yourself what's going wrong here. How does your search manage to step from (2, 0) to (3, 2)?
2. Comments on your code
No docstrings! What do your functions do, and how do you call them?
This kind of program is an excellent candidate for doctests.
The use of global variables makes your code difficult to re-use. For example, the global variable graph means that it would be awkward if you ever wanted to find paths in two different graphs.
When you have persistent state (like your graph) with associated operations (like your function positionInGraph) it's often a good idea to organize your code into classes and methods respectively. See section 3 below for how I would do this in your case.
You don't guard the execution of the test program with if __name__ == '__main__':. This means that when I import your program into an interactive Python interpreter, it immediately starts running, which makes it harder for me to test. Put your test code into test functions or doctests.
Returning the string "impossible" when no path is found isn't the best way to design the interface. Returning an exceptional value to indicate failure tends to be error-prone: it's too easy for the caller to forget to check. It's usually better to raise an exception when exceptional circumstances are encountered.
You represent your coordinates as a pair (y, x). It would be slightly easier to understand the output of your program if you represented coordinates in the usual way (x, y). This would of course require a corresponding change, either looking up cells with graph[pos[1]][pos[0]] (preferred) or transposing your matrix so that it is in column-major order.
You set the number of rows and columns by hand but these numbers are easy to determine: they are len(graph) and len(graph[0]) respectively. (I prefer to call these numbers height and width myself.)
Your functions positionInGraph, positionNotVisited and positionIsEmpty all take a pos as their argument, but getAdjacent takes two arguments posx and posy (which are wrongly named: posx is the y-coordinate and vice versa). You should generally strive to be consistent in little details like this: it makes it easier to remember how to call functions.
The Python style guide (PEP8) says that "Function names should be lowercase, with words separated by underscores as necessary to improve readability" and the same for variable names. So you should consider changing positionIsEmpty to position_is_empty and so on. (You're not obliged to follow PEP8 but it makes it easier for other Python programmers to read your code.)
You can use itertools.product to avoid nested loops.
The loop
for i in adjQueue:
    processQueue.append(i)
can be written
processQueue.extend(adjQueue)
You have a variable called processQueue but it is not a queue. A queue is a data structure where you add elements to one end of a list and remove them from the other end (in Python you would use collections.deque). A data structure where you add and remove elements at the same end is called a stack.
You have a variable counter that you increment and decrement in parallel with adding and removing positions to pathList, so that counter is always the length of pathList. It would be better to drop counter and call len(pathList) instead: one less thing to go wrong. (If you were worried that len(pathList) might be an O(n) operation as it is in some languages, you can look at the TimeComplexity page on the Python wiki for reassurance.)
If you have an algorithm that uses a stack, it's often easiest and clearest to implement it as a recursive function (you can implicitly use the function call stack instead of having to explicitly push and pop your own stack). See section 3 below for how to do this.
You give the maze once in a comment:
# oxoo
# xoxo
# ooxo
# xooo
# oxox
and then again in code:
graph = [[0,1,0,0],[1,0,1,0],[0,0,1,0],[1,0,0,0],[0,1,0,1]]
This violates the DRY principle (Don't Repeat Yourself): it would be easy for you to make a mistake when encoding your maze as a matrix of 1s and 0s. Why not get the computer to do it for you?
Similarly, it's hard for you to check the results of your program, because the path comes out as a list of coordinates, which is tedious and error-prone to check by eye. Even in the buggy example I gave in section 1 above, it would be easy to give it a quick look and miss the error. It would be better to present the results in a form that's easy for you to check. (See section 3 below for how I did this.)
Your code has much unnecessary use of parentheses. The lines
return ((pos[0] >= 0) and (pos[1] >= 0)) and ((pos[0] < rows) and (pos[1] < cols))
return not(pos in visited)
return (graph[pos[0]][pos[1]]==0)
if (not(positionIsEmpty(pos)) or (pos in visited)):
while (processQueue and (counter < length)):
return (pathList, counter)
can be written
return 0 <= pos[0] < rows and 0 <= pos[1] < cols
return pos not in visited
return graph[pos[0]][pos[1]] == 0
if not positionIsEmpty(pos) or pos in visited:
while processQueue and counter < length:
return pathList, counter
respectively. "Program as if you know the language"!
Explanation for the bug in section 1: your back-tracking implementation doesn't backtrack all your search state, so that the search state becomes inconsistent. When your search reaches a dead end, it pops the last position from pathList and decrements counter, but does not update the rest of the search state. This means that the next currentPos that gets popped from processQueue will not (in general) be adjacent to the end of the path. When you backtrack in a search, you need to backtrack all your state: in your case pathList, counter, processQueue, and visited all need to be backtracked, but you only update the first two of these, and so it goes wrong.
See the revised code for how I keep the search state consistent. Every addition to the state as the search goes forward:
path.append(p)
visited.add(p)
is matched by a corresponding deletion as the search backtracks:
visited.remove(p)
path.pop()
(There's more to say about this issue, but I've moved this to section 4 below.)
3. Revised code
from itertools import product
class Found(Exception): pass
class NotFoundError(Exception): pass
class Maze(object):
"""A rectangular matrix of cells representing a maze.
Describe the maze to the constructor in the form of a string
containing dots for empty cells and any other character for walls.
>>> m = Maze('''.#..
... #.#.
... ..#.
... #...
... .#.#''')
>>> print(m.format(m.path(8)))
.#.o
#.#o
oo#o
#ooo
.#.#
>>> print(m.format(m.path(10)))
... # doctest: +IGNORE_EXCEPTION_DETAIL
Traceback (most recent call last):
...
NotFoundError: no path of length 10
"""
def __init__(self, maze):
self.matrix = [[c != '.' for c in line] for line in maze.split()]
self.height = len(self.matrix)
self.width = len(self.matrix[0])
assert all(len(row) == self.width for row in self.matrix)
def in_bounds(self, p):
"""Return True if `p` is a legal position in the matrix."""
return 0 <= p[0] < self.width and 0 <= p[1] < self.height
def empty(self, p):
"""Return True if `p` is an empty position in the matrix."""
return not self.matrix[p[1]][p[0]]
def neighbours(self, p):
"""Generate legal positions adjacent to `p`."""
for dx, dy in ((1, 0), (0, 1), (-1, 0), (0, -1)):
q = p[0] + dx, p[1] + dy
if self.in_bounds(q):
yield q
def path(self, n):
"""Find a path of length `n` in the maze, or raise NotFoundError if
there is no such path.
"""
path = []
visited = set()
def search(p):
path.append(p)
visited.add(p)
if len(path) == n:
raise Found()
for q in self.neighbours(p):
if self.empty(q) and q not in visited:
search(q)
visited.remove(p)
path.pop()
try:
for p in product(range(self.width), range(self.height)):
if self.empty(p):
search(p)
except Found:
return path
else:
raise NotFoundError('no path of length {}'.format(n))
def format(self, path=()):
"""Format the maze as a string. If optional argument path (an
iterable) is given, highlight cells in the path.
"""
path = set(path)
def row(y):
for x in range(self.width):
p = x, y
if not self.empty(p): yield '#'
elif p in path: yield 'o'
else: yield '.'
return '\n'.join(''.join(row(y)) for y in range(self.height))
4. Extra credit: encapsulation of search state
In 2.18 above I discussed the need to update your search state properly when implementing a back-tracking search. As with any kind of data consistency problem, it's good practice to encapsulate operations that must be performed together.
An easy way to do this in Python is via a context manager. A "context manager" is just an object with __enter__ and __exit__ methods: when the context manager is the subject of a with statement, the __enter__ method is called on entry to the with statement, and the __exit__ method on exit. By putting your forward operations in the __enter__ method and the back-tracking operations in the __exit__ method you can be sure that these pair up properly.
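A hand-written version for the search state in this answer might look like the following sketch (the class name Visit is illustrative). Note that __exit__ backtracks only on normal exit, so the completed path survives when Found is raised out of the search:

```python
class Visit:
    """Pair the forward operations with their backtracking counterparts."""
    def __init__(self, path, visited, p):
        self.path, self.visited, self.p = path, visited, p

    def __enter__(self):
        self.path.append(self.p)  # forward: add p to path
        self.visited.add(self.p)  # forward: mark p as visited

    def __exit__(self, exc_type, exc, tb):
        if exc_type is None:             # backtrack only on normal exit,
            self.visited.remove(self.p)  # so the state is left intact when
            self.path.pop()              # Found propagates out of the search
        return False                     # never suppress exceptions

path, visited = [], set()
with Visit(path, visited, (0, 0)):
    assert path == [(0, 0)] and (0, 0) in visited
assert path == [] and not visited
```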
But an even easier way to do this is to use contextlib.contextmanager to build the context manager for you, like this:
from contextlib import contextmanager
def path(self, n):
"""Find a path of length `n` in the maze, or raise NotFoundError if
there is no such path.
"""
path = []
visited = set()
@contextmanager
def visit(p):
path.append(p) # Forward: add p to path.
visited.add(p) # Forward: mark p as visited.
yield # Rest of search happens here.
visited.remove(p) # Backtrack: mark p not visited.
path.pop() # Backtrack: remove p from path.
def search(p):
with visit(p):
if len(path) == n:
raise Found()
for q in self.neighbours(p):
if self.empty(q) and q not in visited:
search(q)
try:
for p in product(range(self.width), range(self.height)):
if self.empty(p):
search(p)
except Found:
return path
else:
raise NotFoundError('no path of length {}'.format(n)) | {
"domain": "codereview.stackexchange",
"id": 3452,
"tags": "python, graph"
} |
Special relativity version of Feynman's "Space-Time Approach to Non-Relativistic Quantum Mechanics" | Question: I'm looking for an article that sets up the framework described by Feynman in Space-Time Approach to Non-Relativistic Quantum Mechanics, but in Special Relativity.
Answer: Dear mtrencseni,
the special relativistic counterpart of the 1948 Feynman paper you mentioned is the 1949 Feynman paper
http://web.ihep.su/dbserv/compas/src/feynman49b/eng.pdf
called "Spacetime Approach to Quantum Electrodynamics". Note that I used the same Russian server haha. Well, more precisely, Quantum Electrodynamics is one important example of a quantum field theory but the general methods in the Feynman paper were only updated in their "technical details" when people needed to get the similar description for any quantum field theory.
If you were expecting that the special relativistic version of the 1948 paper would still be essentially the same thing, with some $p^2/2m$ replaced by $mc^2/\sqrt{1-v^2/c^2}$, you must feel disappointed. But in fact, it is a reason for a huge happiness.
The fact is that when special relativity is added to the quantum mechanical framework, one immediately encounters many effects that force us to use a fundamentally different classical starting point - field theory instead of quantum mechanics. Why?
Well, if you work with quantum mechanics - whether in an operator framework or in the path-integral approach - it doesn't respect the Lorentz symmetry. To respect the Lorentz symmetry, you need to switch from the non-relativistic one-particle Schrödinger equation to something like the Klein-Gordon equation or the Dirac equation. Both of them have a path-integral description as well although it is a bit subtle.
Take the latter - the Dirac equation - because it's relevant for the same electron that used to be described by the non-relativistic Schrödinger equation. One can show that because of relativity, the equation inevitably predicts solutions with negative energy. While $E=p^2/2m$ only has non-negative values of energy, the condition $E^2-p^2c^2-m^2c^4=0$ has both positive and negative values of $E$ as solutions. You can't avoid it - the squaring of $E$ is a fundamental feature of special relativity.
And indeed, the Dirac equation may be shown to have negative-energy solutions as well. If particles could have arbitrarily low energies, there would be an instability: you could extract an unlimited amount of energy from an electron as it fell to arbitrarily low, negative energy levels.
Nature avoids it because it tries to find the lowest-energy state, dissipate all the excess energy, and call the lowest-energy state "the vacuum" or "the ground state". That's the only way to guarantee that it will be stable. So Nature actually fills all these negative-energy states - it has no choice. There is this "Dirac sea" of negative-energy electrons everywhere. By the Pauli exclusion principle, there can be at most one electron in each state. So the vacuum has 0 electrons in positive energy states and 1 electron in each negative energy state.
If there is a hole missing in this sea of negative-energy electron states, it will look like minus one electron with negative energy, i.e. it will be a positively charged particle, the positron, with positive energy and positive charge. That's how Dirac predicted antimatter, which was discovered shortly afterwards. He got his Nobel prize for that.
Moreover, in relativity, you will find out that the pair creation of particles is inevitable. The number of particles can't be conserved. In some sense, it's because particles can move back and forth in spacetime - and the particle moving backwards in time is an antiparticle.
For all these reasons, you need to study quantum field theory if you want to combine special relativity with quantum mechanics. By quantizing a classical field, you get a system that looks like a system of particles once again. The number of particles coming from the quantum fields turns out to be integer (in the non-interacting limit) because of the same reason why the energy of a quantum harmonic oscillator is equally spaced. You will be able to recover the mechanics in the non-relativistic limit. But you won't be able to get rid of the phenomena that are implied by the fields - such as the pair creation and pair annihilation. They're real and important.
Relativistic non-field quantum mechanics is kind of inconsistent - or at least, it's not a viable approach to describe the real world. So you will only spend a very limited time with this concept - and you should eventually move to quantum field theory. This is true regardless of the formalism - Schrödinger's equation for the wave function, Heisenberg's equations for the operators, or Feynman's path integral approach. What I say about the fields is a physical insight and physical insights are independent of conventions and the choice of the mathematical machinery.
Best wishes
Lubos | {
"domain": "physics.stackexchange",
"id": 5574,
"tags": "quantum-mechanics, quantum-field-theory, special-relativity"
} |
Is the moon inside earth’s atmosphere? If so, what are the consequences? | Question: Recently, a paper was published claiming that the Earth’s atmosphere extends far beyond the Moon. This has been reported on by a number of news sources and websites, including the ESA, Business Insider, Space.com, and EarthSky.org.
If these findings are true, how does this affect the current understanding related to the measured atmospheric pressure, the currently accepted pressure in space between the Earth and the Moon, the pressure on the Moon, the effect of Earth's gravity on the atmosphere, and Earth's ability to maintain this extensive atmosphere while traveling through the Solar System? Is the atmosphere gradually lost or replenished?
Image credit: European Space Agency
Answer: It's long been known that Earth's atmosphere gradually reduces in density, and is detectable to very high altitude.
A 1976 atmospheric model has data up to 1000 km of altitude. At that altitude, atmospheric density is on the order of 10^-15 kg/m^3. This is already better than the best vacuum we can create on Earth's surface. So we can call this an atmosphere (because it contains gases also found on Earth at ground level), but its consequences are limited.
It's sparse enough that at this altitude, atmospheric drag on satellites is negligible for the lifetime of a satellite.
Earth's atmosphere loses some mass to space: at high altitude some atoms achieve escape velocity. This is replenished by evaporation and outgassing on Earth. For example, helium gas starts its life as alpha particles from uranium decay, the gas seeps through the ground, rises through the atmosphere (because it's much lighter than air) and eventually escapes.
We already know with very high accuracy that the Moon slowly moves away from us as it's slowed down. Most of that slowdown is caused by tidal effects, but there will also be an atmospheric component. Due to the very low density, this component will be small. | {
"domain": "astronomy.stackexchange",
"id": 3565,
"tags": "the-moon, gravity, earth, planetary-atmosphere"
} |
Inferring the mean distance from density | Question: A simple calculation surely, but how can we infer the mean distance $l_{\text{mean}}$ between particles from their density $n$, i.e.:
$$l_{\text{mean}}=\left(\dfrac{1}{n}\right)^{1/3}$$
?
I tried to visualize it, for example in a cube: by taking $n=8$, we would have $l_{\text{mean}}=0.5$, equal to half the side length of the cube, but we must have 8 particles. How do I choose a correct example?
Maybe I should consider not $n$ but $\bar{n}$, or maybe the representation by a cube is not appropriate; I don't know. Is there a mathematical justification for the mean distance $l_{\text{mean}}$?
Any help is welcome.
Answer: $$Density=\frac{mass\,or\,no.\,of\,particles}{Total\,Volume\,occupied\,by\,all\,these\,particles}$$
Hence if I invert it I get the Volume occupied by a single particle. So assuming these particles occupy a cubical volume with cube side $l_{mean}$ on average,
$$l_{mean}^3=\frac{Total\,volume}{Total\,Number}=\frac{1}{Density}$$ | {
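A quick numeric check also resolves the cube example from the question: with $n=8$ particles per unit volume, the unit cube splits into a $2\times2\times2$ grid of cells of side $0.5$, one particle per cell, so $l_{mean}=0.5$ is the side of one cell, not half the cube:

```python
n = 8                        # particles per unit volume
l_mean = (1 / n) ** (1 / 3)  # side of the cell occupied by one particle
print(l_mean)                # ~0.5: eight cells of side 0.5 tile the unit cube
```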
"domain": "physics.stackexchange",
"id": 76964,
"tags": "density, mean-free-path"
} |
Sleeping thread after publishing a message into a topic | Question:
Hi,
I've been using rosbridge and jroslib as an API to it. Testing jroslib with the turtlesim to publish messages, I'm finding some errors. I have to use Thread.sleep(500) to put the thread to sleep so the message can be published to the topic; otherwise the message is not published and the turtle does not move.
public static void main(String[] args) throws InterruptedException {
Ros ros = new Ros("localhost");
final int[] answer = {0};
ros.connect();
Topic turtlePub = new Topic(ros,"/turtle1/cmd_vel", "geometry_msgs/Twist", 3);
turtlePub.advertise();
turtlePub.subscribe(new TopicCallback() {
@Override
public void handleMessage(Message message) {
System.out.println(message);
}
});
Message toSend = new Twist(new Vector3(2.0,0,0), new Vector3(0,0,1.8));
turtlePub.publish(toSend);
Thread.sleep(500);
ros.disconnect();
System.out.println("finished");
}
Originally posted by pnakibar on ROS Answers with karma: 1 on 2015-02-26
Post score: 0
Answer:
I'm not completely familiar with jroslib, but in general ROS communications are asynchronous and run in multiple threads. If you publish a message and then immediately terminate the process, the message may not have enough time in the other thread which actually does the transmission, with its acknowledgement and retry logic.
Originally posted by tfoote with karma: 58457 on 2015-02-26
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by tfoote on 2015-02-26:
PS can you link to documentation for jroslib?
Comment by pnakibar on 2015-02-26:
It probably is exactly this!
The documentation is pretty lacking, although the API is very straightforward.
Their git: https://github.com/WPI-RAIL/jrosbridge | {
"domain": "robotics.stackexchange",
"id": 21001,
"tags": "ros, thread, rosbridge, turtlesim, java"
} |
What is & how to solve File error: my.xml.state (Remote I/O error)? | Question: I encountered the following exception during my phylogeographic analysis in BEAST 2 with GEO_SPHERE. What could be the reason, & how can I avoid this in the future?
...
856000000 -3662.2647 5969.5577 -9631.8225 42m16s/Msamples
857000000 -3522.3540 6105.9849 -9628.3389 42m5s/Msamples
858000000 -3621.7922 6022.9359 -9644.7282 41m54s/Msamples
859000000 -3463.1050 6107.9766 -9571.0817 41m43s/Msamples
860000000 -3470.6562 6160.7143 -9631.3706 41m35s/Msamples
File error: VP1_test.xml.state (Remote I/O error)
java.lang.RuntimeException: An error was encounted. Terminating BEAST
at beast.app.util.ErrorLogHandler.publish(Unknown Source)
at java.logging/java.util.logging.Logger.log(Logger.java:979)
at java.logging/java.util.logging.Logger.doLog(Logger.java:1006)
at java.logging/java.util.logging.Logger.log(Logger.java:1029)
at java.logging/java.util.logging.Logger.severe(Logger.java:1776)
at beast.app.beastapp.BeastMain.<init>(Unknown Source)
at beast.app.beastapp.BeastMain.main(Unknown Source)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at beast.app.beastapp.BeastLauncher.run(Unknown Source)
at beast.app.beastapp.BeastLauncher.main(Unknown Source)
Unfortunately, googling this error together with "BEAST" gave me nothing, and without it there are a lot of irrelevant results.
Update
I have just found out that this error happened simultaneously on all my nodes (different processors & RAM, but common hard drives & SSDs). All logs of the BEAST 2 analyses have it. What is stranger, some of the analyses pressed on after this & some were broken. At this point I am less interested in the broken ones: could this exception have spoiled those which continued their work?
Answer: You can restart this at the place you left off, because this is BEAST 2.
860000000 -3470.6562 6160.7143 -9631.3706 41m35s/Msamples
However, this is almost 1 billion iterations and should be sufficient for convergence. The fact it ends on a round number might suggest an algorithm limit is in place. If BEAST 2 was restarted from this point and it abruptly ended, this would be a very likely reason.
The alternative is an error external to the algorithm such as a RAM error - it has been running a long time.
Whatever the reason, I would simply perform ESS statistics and convergence diagnostics on what you have now. There's no real reason to continue if it has converged; 10e6 iterations is usually more than enough, and this run is much, much bigger than that.
The terminated processes have not spoiled the work; again, you can simply input the terminated chains into the restart for BEAST 2. This is because BEAST 2 uses checkpoints. Brief information on how to resume from the last checkpoint is given here: https://www.beast2.org/2014/04/07/checkpointing-tricks.html. Convergence checks are important.
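For reference, resuming from the .state checkpoint is a single flag on the BEAST 2 launcher (a command sketch using the file name from the log above; check the flag against your BEAST 2 version):

```
beast -resume VP1_test.xml
```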
The situation is only catastrophic for BEAST 1, where you would have needed to start from the beginning.
"domain": "bioinformatics.stackexchange",
"id": 2143,
"tags": "phylogenetics, phylogeny, beast"
} |
The "nose" of the periodic table | Question: My teacher said that on the periodic table there is a "nose" formed by Al, Zn, Ag, and Cd. She said that they all have fixed charges (+3, +2, +1, and +2 respectively), and said that if I write them in ionic equations, I should just say Silver Nitrate instead of Silver (I) Nitrate. She also said to put all Al as +3 charge in all cases, etc. But I did some research and found out that you do say Silver (I) Nitrate (actually both are fine, but my teacher specifically said never to put the (I) in such cases). So is my teacher wrong? And are there cases where the charges of the elements are not what my teacher says?
Answer: All of these elements can form compounds in other oxidation states. Aluminium forms some compounds in the +1 state (e.g. see the section in https://en.wikipedia.org/wiki/Aluminium_iodide), as does Zinc (see the section in https://en.wikipedia.org/wiki/Compounds_of_zinc) and Cadmium (e.g. https://en.wikipedia.org/wiki/Cadmium(I)_tetrachloroaluminate). But Silver is the element that shows this most often in the list given - for instance it forms fluorides in the +1, +2 and +3 oxidation states (https://en.wikipedia.org/wiki/Silver_fluoride). As such I wouldn't say including the oxidation states in the formula is wrong, but on the other hand if you omit it everybody who knows there is an ambiguity will understand what is implied. | {
"domain": "chemistry.stackexchange",
"id": 12807,
"tags": "ions, ionic-compounds, elements"
} |
How would I determine the relative acidity of substances? | Question:
Which of the following salts will produce a neutral solution (if any)? A) $\ce{KI}$ (potassium iodide) or B) $\ce{SrS}$ (strontium sulfide)
I tried to solve this problem by using the table of relative strengths of Brønsted–Lowry acids and bases in aqueous solution at room temperature. From what I know, each of these two substances will dissociate into ions, giving the ions $\ce{K^+, I^-, Sr^{2+}, S^{2-}}$.
Since $\ce{K+}$ does not hydrolyze, and $\ce{I-}$ is the conjugate base of $\ce{HI}$, a strong acid, it won't hydrolyze either. I also know that the only cations that hydrolyze are those of Al, Fe and Cr, hence $\ce{Sr^{2+}}$ will not hydrolyze. However, how do I know whether $\ce{S^{2-}}$ will hydrolyze? The other question I had difficulty with was "Order the following from most acidic to least: SO3, CO3 and CO2".
Answer: In my homework one of the questions was: which of the following salts will produce a neutral solution (if any)? A) KI (potassium iodide) or B) SrS (strontium sulfide)
For (A) both K+ and I- will essentially stay as the ions. So assuming the solution is neutral to start, it will remain neutral.
But for (B), in a neutral solution, $\ce{Sr^{2+}}$ will stay as the ion, $\ce{S^{2-}}$ will react as:
$\ce{S^{2-} + H2O <=> HS^- + OH-}$
so the solution will become slightly more basic.
"Order the following from most acidic to least. $\ce{SO3}$, $\ce{CO3}$ and $\ce{CO2}$".
I can't really make sense of this question since $\ce{CO3}$ is highly unstable, and there are three isomers.
Bubbling $\ce{SO3}$ and $\ce{CO2}$ gases into water would produce acidic solutions via the reactions
$\ce{SO3 + H2O -> H2SO4}$
$\ce{CO2 + H2O -> H2CO3}$ | {
"domain": "chemistry.stackexchange",
"id": 8590,
"tags": "acid-base"
} |
Calculating the electric field produced by a line on a point | Question: Here's the question:
The electric field of a point charge can be obtained from Coulomb's law, but since here we have the charge distributed continuously over some region, the sum becomes an integral:
$$E(r) = \frac{1}{4\pi \epsilon_0} \int \frac{1}{r^2} \hat{r}dq$$
Now for my solution, I use the integral above to integrate $\frac{1}{x^2}$ from $0$ to the right end of $L_1$. The issue is that after evaluation I have a division by zero. In an attempt to fix this, I took the midpoint of $L_1$ as a reference and then integrated over the range $-\frac{L_1}{2}$ to $\frac{L_1}{2}$. This integral does not converge. Should I integrate from $a$ to $b$, $a$ being the left end of $L_1$ and $b$ being the right end?
For part b, how do I integrate over the changing field points associated with locations along wire $L_2$?
Answer: The small contribution to the $x$-component $dE_x$ at a point $x>L_1$ created by a small amount of charge $dq$ located at position $x_s$ is simply
$$
dE_x=\frac{dq}{4\pi\epsilon_0}\frac{1}{(x-x_s)^2}\, .
$$
For your line with total charge $Q_1$, you have $dq=\frac{Q_1}{L_1}dx_s$ for the piece of wire located at $x_s$, and the net field produced by this line will be the continuous sum (i.e. the integral) from $0$ to $L_1$ of the $dE_x$'s:
$$
E_{1x}=\frac{Q_1}{4\pi\epsilon_0 L_1}\displaystyle\int_0^{L_1}
\frac{dx_s}{(x-x_s)^2}\, .
$$
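As a sanity check, evaluating the antiderivative $\frac{1}{x-x_s}$ at the endpoints gives the closed form $E_{1x}=\frac{Q_1}{4\pi\epsilon_0 L_1}\left(\frac{1}{x-L_1}-\frac{1}{x}\right)$ for $x>L_1$; a short numeric sketch (with $\frac{Q_1}{4\pi\epsilon_0}$ set to 1 and illustrative values of $x$ and $L_1$) confirms it:

```python
# Midpoint-rule approximation of (1/L1) * integral_0^L1 dxs/(x - xs)^2,
# compared against the closed form obtained from the antiderivative.
L1 = 1.0     # length of the charged line
x = 3.0      # field point, to the right of the line (x > L1)
N = 100_000  # number of integration slices

dxs = L1 / N
numeric = sum(dxs / (x - (i + 0.5) * dxs) ** 2 for i in range(N)) / L1
closed_form = (1 / (x - L1) - 1 / x) / L1

assert abs(numeric - closed_form) < 1e-9
```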
You can deal with part b) in much the same way, by calculating the small force $dF_x$ on a small amount of charge in line two located at $x_2$, then integrate over the whole of line $2$. | {
"domain": "physics.stackexchange",
"id": 38017,
"tags": "homework-and-exercises, electromagnetism, electrostatics, electric-fields"
} |
Snake++ game (in C++ with SDL) | Question: Yes, it's called Snake++. I built it to learn C++.
What features and techniques can be improved? I used SDL for some basic rendering, but my concern is more about the language use.
Things I'm very concerned about:
Generating a new food means trying random positions over and over again till one is free. This is going to become a problem very soon. What data structures can I use here?
Am I using references to their full potential and avoiding unnecessary copying?
Main.cpp
#include <iostream>
#include "Game.hpp"
using namespace std;
int main(int argc, char * argv[])
{
Game game = Game();
Game().Run();
cout << "Game has terminated successfully, score: " << game.GetScore()
<< ", size: " << game.GetSize() << endl;
return 0;
}
Game.hpp
#pragma once
#include <vector>
#include "SDL.h"
#include "SDL_image.h"
class Game
{
public:
Game();
void Run();
int GetScore();
int GetSize();
private:
bool running = false;
bool alive = false;
int fps = 0;
static const int FRAME_RATE = 1000 / 60;
static const int SCREEN_WIDTH = 640;
static const int SCREEN_HEIGHT = 640;
static const int GRID_WIDTH = 32;
static const int GRID_HEIGHT = 32;
SDL_Window * window = nullptr;
SDL_Renderer * renderer = nullptr;
enum class Block { head, body, food, empty };
enum class Move { up, down, left, right };
Move last_dir = Move::up;
Move dir = Move::up;
struct { float x = GRID_WIDTH / 2, y = GRID_HEIGHT / 2; } pos;
SDL_Point head = { static_cast<int>(pos.x), static_cast<int>(pos.y) };
SDL_Point food;
std::vector<SDL_Point> body;
Block grid[GRID_WIDTH][GRID_HEIGHT];
float speed = 0.5f;
int growing = 0;
int score = 0;
int size = 1;
void ReplaceFood();
void GrowBody(int quantity);
void UpdateWindowTitle();
void GameLoop();
void Render();
void Update();
void PollEvents();
void Close();
};
Game.cpp
#include <iostream>
#include <string>
#include <ctime>
#include "SDL.h"
#include "Game.hpp"
using namespace std;
Game::Game()
{
for (int i = 0; i < GRID_WIDTH; ++i)
for (int j = 0; j < GRID_HEIGHT; ++j)
{
grid[i][j] = Block::empty;
}
srand(static_cast<unsigned int>(time(0)));
}
void Game::Run()
{
// Initialize SDL
if (SDL_Init(SDL_INIT_VIDEO) < 0)
{
cerr << "SDL could not initialize! SDL_Error: " << SDL_GetError() << endl;
exit(EXIT_FAILURE);
}
// Create Window
window = SDL_CreateWindow("Snake Game", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
SCREEN_WIDTH, SCREEN_HEIGHT, SDL_WINDOW_SHOWN);
if (window == NULL)
{
cout << "Window could not be created! SDL_Error: " << SDL_GetError() << endl;
exit(EXIT_FAILURE);
}
// Create renderer
renderer = SDL_CreateRenderer(window, -1, SDL_RENDERER_ACCELERATED);
if (renderer == NULL)
{
cout << "Renderer could not be created! SDL_Error: " << SDL_GetError() << endl;
exit(EXIT_FAILURE);
}
alive = true;
running = true;
ReplaceFood();
GameLoop();
}
void Game::ReplaceFood()
{
int x, y;
while (true)
{
x = rand() % GRID_WIDTH;
y = rand() % GRID_HEIGHT;
if (grid[x][y] == Block::empty)
{
grid[x][y] = Block::food;
food.x = x;
food.y = y;
break;
}
}
}
void Game::GameLoop()
{
Uint32 before, second = SDL_GetTicks(), after;
int frame_time, frames = 0;
while (running)
{
before = SDL_GetTicks();
PollEvents();
Update();
Render();
frames++;
after = SDL_GetTicks();
frame_time = after - before;
if (after - second >= 1000)
{
fps = frames;
frames = 0;
second = after;
UpdateWindowTitle();
}
if (FRAME_RATE > frame_time)
{
SDL_Delay(FRAME_RATE - frame_time);
}
}
}
void Game::PollEvents()
{
SDL_Event e;
while (SDL_PollEvent(&e))
{
if (e.type == SDL_QUIT)
{
running = false;
}
else if (e.type == SDL_KEYDOWN)
{
switch (e.key.keysym.sym)
{
case SDLK_UP:
if (last_dir != Move::down || size == 1)
dir = Move::up;
break;
case SDLK_DOWN:
if (last_dir != Move::up || size == 1)
dir = Move::down;
break;
case SDLK_LEFT:
if (last_dir != Move::right || size == 1)
dir = Move::left;
break;
case SDLK_RIGHT:
if (last_dir != Move::left || size == 1)
dir = Move::right;
break;
}
}
}
}
int Game::GetSize()
{
return size;
}
void Game::GrowBody(int quantity)
{
growing += quantity;
}
void Game::Update()
{
if (!alive)
return;
switch (dir)
{
case Move::up:
pos.y -= speed;
pos.x = floorf(pos.x);
break;
case Move::down:
pos.y += speed;
pos.x = floorf(pos.x);
break;
case Move::left:
pos.x -= speed;
pos.y = floorf(pos.y);
break;
case Move::right:
pos.x += speed;
pos.y = floorf(pos.y);
break;
}
// Wrap
if (pos.x < 0) pos.x = GRID_WIDTH - 1;
else if (pos.x > GRID_WIDTH - 1) pos.x = 0;
if (pos.y < 0) pos.y = GRID_HEIGHT - 1;
else if (pos.y > GRID_HEIGHT - 1) pos.y = 0;
int new_x = static_cast<int>(pos.x);
int new_y = static_cast<int>(pos.y);
// Check if head position has changed
if (new_x != head.x || new_y != head.y)
{
last_dir = dir;
// If we are growing, just make a new neck
if (growing > 0)
{
size++;
body.push_back(head);
growing--;
grid[head.x][head.y] = Block::body;
}
else
{
// We need to shift the body
SDL_Point free = head;
vector<SDL_Point>::reverse_iterator rit = body.rbegin();
for ( ; rit != body.rend(); ++rit)
{
grid[free.x][free.y] = Block::body;
swap(*rit, free);
}
grid[free.x][free.y] = Block::empty;
}
}
head.x = new_x;
head.y = new_y;
Block & next = grid[head.x][head.y];
// Check if there's food over here
if (next == Block::food)
{
score++;
ReplaceFood();
GrowBody(1);
}
// Check if we're dead
else if (next == Block::body)
{
alive = false;
}
next = Block::head;
}
int Game::GetScore()
{
return score;
}
void Game::UpdateWindowTitle()
{
string title = "Snakle++ Score: " + to_string(score) + " FPS: " + to_string(fps);
SDL_SetWindowTitle(window, title.c_str());
}
void Game::Render()
{
SDL_Rect block;
block.w = SCREEN_WIDTH / GRID_WIDTH;
block.h = SCREEN_WIDTH / GRID_HEIGHT;
// Clear screen
SDL_SetRenderDrawColor(renderer, 0x1E, 0x1E, 0x1E, 0xFF);
SDL_RenderClear(renderer);
// Render food
SDL_SetRenderDrawColor(renderer, 0xFF, 0xCC, 0x00, 0xFF);
block.x = food.x * block.w;
block.y = food.y * block.h;
SDL_RenderFillRect(renderer, &block);
// Render snake's body
SDL_SetRenderDrawColor(renderer, 0xFF, 0xFF, 0xFF, 0xFF);
for (SDL_Point & point : body)
{
block.x = point.x * block.w;
block.y = point.y * block.h;
SDL_RenderFillRect(renderer, &block);
}
// Render snake's head
block.x = head.x * block.w;
block.y = head.y * block.h;
if (alive) SDL_SetRenderDrawColor(renderer, 0x00, 0x7A, 0xCC, 0xFF);
else SDL_SetRenderDrawColor(renderer, 0xFF, 0x00, 0x00, 0xFF);
SDL_RenderFillRect(renderer, &block);
// Update Screen
SDL_RenderPresent(renderer);
}
void Game::Close()
{
SDL_DestroyWindow(window);
SDL_Quit();
}
Answer: Object Usage
This code:
Game game = Game();
Game().Run();
cout << "Game has terminated successfully, score: " << game.GetScore()
<< ", size: " << game.GetSize() << endl;
...isn't doing what I'm pretty sure you think it is. This part: Game game = Game(); creates an object named game which is of type Game. But, I'd prefer to use just Game game;, which accomplishes the same thing more easily.
Then you do: Game().Run();. This creates another (temporary) Game object, and invokes the Run member function on that temporary Game object (so the Game object named game that you just created sits idly by, doing nothing).
Then you do:
cout << "Game has terminated successfully, score: " << game.GetScore()
<< ", size: " << game.GetSize() << endl;
...which tries to print the score accumulated in the object named game--but game hasn't run. Only the temporary object has run (so by rights, the score you display should always be 0).
If I were doing this, I'd probably do something more like:
Game game;
game.run();
cout << "Game has terminated successfully, score: " << game.GetScore()
<< ", size: " << game.GetSize() << endl;
using namespace std; isn't just using; it's abusing!
I'd (strongly) advise against using namespace std;. A using directive for another namespace can be all right, but std:: contains a huge amount of stuff, some of it with very common names that are likely to conflict with other code. Worse, every new release of the C++ standard adds still more "stuff" to std. It's generally preferable to just qualify names when you use them, so (for example) the cout shown above would be more like:
std::cout << "Game has terminated successfully, score: " << game.GetScore()
<< ", size: " << game.GetSize() << std::endl;
Avoid std::endl
I'd advise avoiding std::endl in general. Along with writing a new-line to the stream, it flushes the stream. You want the new-line, but almost never want to flush the stream, so it's generally better to just write a \n. On the rare occasion that you actually want the flush, do it explicitly: std::cout << '\n' << std::flush;.
Avoid the C random number generation routines
C's srand()/rand() have quite a few problems. I'd generally advise using the new routines in <random> instead. This is kind of a pain (seeding the new generators well is particularly painful) but they generally produce much higher quality randomness, are much more friendly to multi-threading, and using them well will keep the cool C++ programmers (now there's an oxymoron) from calling you names.
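For illustration, here is a minimal sketch (not part of the original review) of what the `<random>` replacement might look like; the function name `UniformInt` is made up for this example, and seeding from `std::random_device` is the common shortcut rather than the thorough seeding mentioned above:

```cpp
#include <random>

// Sketch: a uniformly distributed integer in [lo, hi], e.g. for placing
// the food on the grid. One engine is created once and reused; the
// distribution object avoids the modulo bias of rand() % n.
int UniformInt(int lo, int hi)
{
    static std::mt19937 engine{std::random_device{}()};
    std::uniform_int_distribution<int> dist(lo, hi); // inclusive range
    return dist(engine);
}
```

A call like `UniformInt(0, GRID_WIDTH - 1)` could then replace the `rand()`-based coordinate picking.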
avoid exit()
When writing C++, it's generally better to avoid using exit. Calling it generally prevents destructors for objects on the stack from running, so you can't get a clean shutdown.
As a general rule, I'd add a try/catch block in main, and where you're currently calling exit(), throw an object derived from std::exception. In your case, std::runtime_error probably makes sense.
if (renderer == NULL)
{
throw std::runtime_error("Renderer could not be created!");
}
In main:
try {
game.Run();
std::cout << "Game has terminated successfully, score: " << game.GetScore()
<< ", size: " << game.GetSize() << '\n';
}
catch (std::exception const &e) {
std::cerr << e.what();
}
Prefer nullptr to NULL
Pretty much self-explanatory. In C++, NULL is required to be an integer constant with the value 0 (e.g., either 0 or 0L). nullptr is a bit more special--it can convert to any pointer type, but can't accidentally be converted to an integer type. So, anywhere you might consider using NULL, you're almost certainly better off using nullptr:
if (renderer == nullptr)
Some also prefer to reverse those (giving "Yoda conditions"):
if (nullptr == renderer)
This way, if you accidentally use = where you meant ==:
if (nullptr = renderer)
...the code won't compile, because you've attempted to assign to a constant (whereas if (renderer = nullptr) could compile and do the wrong thing, though most current compilers will at least give a warning about it). | {
"domain": "codereview.stackexchange",
"id": 33358,
"tags": "c++, beginner, snake-game, sdl"
} |
Basic command line REPL calculator | Question: I'm pretty confident in this code but I want to be sure I haven't missed anything to do with optimization or error handling. The program takes input in the form of {number} {operator} {number} and prints the result. If the input is invalid it prints nothing.
Example input: 1.0 + 2.0
Output: 3
#include <iostream>
#include <string>
#include <stdexcept>
#include <regex>
double calculate(const double a, const char op, const double b)
{
switch (op)
{
case '+':
return a + b;
case '-':
return a - b;
case '*':
return a * b;
case '/':
if (b == 0.0)
throw std::invalid_argument {"Divide by zero"};
return a / b;
default:
throw std::invalid_argument {{op}};
}
}
int main()
{
const std::regex valid_input {
"([0-9]*\\.?[0-9]*) ([\\+\\-\\*\\/]) ([0-9]*\\.?[0-9]*)"};
std::string line;
std::smatch terms;
double a;
double b;
char op;
while (std::getline(std::cin, line))
{
if (!std::regex_match(line, terms, valid_input))
continue;
a = std::stod(terms[1]);
b = std::stod(terms[3]);
op = terms[2].str().front();
std::cout << calculate(a, op, b) << '\n';
}
}
Answer: Escaping escape characters becomes confusing.
Use the appropriate string literal.
// Your String
"([0-9]*\\.?[0-9]*) ([\\+\\-\\*\\/]) ([0-9]*\\.?[0-9]*)"
// A Raw String (no escape characters)
R"RAW(([0-9]*\.?[0-9]*) ([\+\-\*\/]) ([0-9]*\.?[0-9]*))RAW"
Don't think your regular expressions are good enough.
([0-9]*\.?[0-9]*)
// Can map
.
<blank>
I would do:
// If you have the decimal point then you require at least one digit after it.
// If there is no decimal point then you require at least one digit.
(([0-9]*(\.[0-9]+))|[0-9]+)
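As a quick sketch (not from the original answer), the stricter pattern can be exercised with std::regex; the helper name IsNumber is made up here:

```cpp
#include <regex>
#include <string>

// The stricter number pattern from above, in a raw-string literal so
// the backslashes need no escaping.
bool IsNumber(const std::string& s)
{
    static const std::regex number(R"re((([0-9]*(\.[0-9]+))|[0-9]+))re");
    return std::regex_match(s, number);
}
```

With this pattern, "42", "3.14" and ".5" match, while a lone "." or an empty string are rejected.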
Don't think regular expressions are the best way to capture number operator number. The stream operators support this already so prefer to use those.
while (std::getline(std::cin, line))
{
std::stringstream lineStream(line);
double a;
double b;
char op;
char emptyTest;
if (lineStream >> a >> op >> b && !(lineStream >> emptyTest)) {
// Enter here if:
// 1: We correctly read a op and b
// 2: There are zero extra non-space characters on the line
// We find this because the read to emptyTest fails.
}
Declare your variables as close to the location that you use them. This also allows you to initialize them on declaration (which is always nice).
// Don't do this.
double a;
double b;
char op;
// Now you can see the type and know that they are initialized.
double a = std::stod(terms[1]);
double b = std::stod(terms[3]);
char op = terms[2].str().front();
Sure you can use a switch that is totally fine. But you can practice using the command pattern in this situation.
// requires <map> and <functional>
static std::map<char, std::function<double(double, double)>> const actionMap = {
    {'+', [](double lhs, double rhs){return lhs + rhs;}},
    {'-', [](double lhs, double rhs){return lhs - rhs;}},
    {'*', [](double lhs, double rhs){return lhs * rhs;}},
    {'/', [](double lhs, double rhs){
        if (rhs == 0.0) {
            throw std::invalid_argument {"Divide by zero"};
        }
        return lhs / rhs;
    }}
};
// Now use the action map to find the function you want to call.
auto find = actionMap.find(op);
if (find == actionMap.end()) {
throw std::invalid_argument {{op}};
}
return find->second(a, b); | {
"domain": "codereview.stackexchange",
"id": 26591,
"tags": "c++, regex, math-expression-eval"
} |
Can't Install ROS Indigo on the Raspberry Pi 2 model B(jessie) | Question:
I'm installing ROS Indigo on the Raspberry Pi 2 model B (jessie). I'm following this tutorial, but I can't install ROS!
In section 3.2.1 "Unavailable Dependencies, libconsole-bridge-dev" of the tutorial, I execute the command "$ apt-get source -b console-bridge". Then I get the following error message.
Linking CXX static library libgtest.a
cd /home/pi/ros_catkin_ws/external_src/console-bridge-0.3.2/obj-arm-linux-gnueabihf/test && /usr/bin/cmake -P CMakeFiles/gtest.dir/cmake_clean_target.cmake
CMake Error: Error in cmake code at
/home/pi/ros_catkin_ws/external_src/console-bridge-0.3.2/obj-arm-linux-gnueabihf/test/CMakeFiles/gtest.dir/cmake_clean_target.cmake:1:
Parse error. Expected a command name, got unquoted argument with text "ick]r�
".
CMake Error: Error processing file: CMakeFiles/gtest.dir/cmake_clean_target.cmake
test/CMakeFiles/gtest.dir/build.make:88: recipe for target 'test/libgtest.a' failed
make[3]: *** [test/libgtest.a] Error 1
make[3]: Leaving directory '/home/pi/ros_catkin_ws/external_src/console-bridge-0.3.2/obj-arm-linux-gnueabihf'
CMakeFiles/Makefile2:154: recipe for target 'test/CMakeFiles/gtest.dir/all' failed
make[2]: *** [test/CMakeFiles/gtest.dir/all] Error 2
make[2]: Leaving directory '/home/pi/ros_catkin_ws/external_src/console-bridge-0.3.2/obj-arm-linux-gnueabihf'
Makefile:130: recipe for target 'all' failed
make[1]: *** [all] Error 2
make[1]: Leaving directory '/home/pi/ros_catkin_ws/external_src/console-bridge-0.3.2/obj-arm-linux-gnueabihf'
dh_auto_build: make -j1 returned exit code 2
debian/rules:7: recipe for target 'build' failed
make: *** [build] Error 2
dpkg-buildpackage: error: debian/rules build gave error exit status 2
Build command 'cd console-bridge-0.3.2 && dpkg-buildpackage -b -uc' failed.
E: Child process failed
I have no idea how I can solve the error. Can you please give me information on that?
Originally posted by NKCT on ROS Answers with karma: 11 on 2016-12-07
Post score: 1
Answer:
Parse error. Expected a command name, got unquoted argument with text "ick]r�
".
This looks like your CMake files have been corrupted. It's not clear why or how that happened during your build. I'd look for filesystem issues.
Originally posted by tfoote with karma: 58457 on 2017-12-20
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 26432,
"tags": "raspberrypi, ros-indigo"
} |
Can we apply transfer learning between any two different CNN architectures? | Question: There are many types of CNN architectures: LeNet, AlexNet, VGG, GoogLeNet, ResNet, etc. Can we apply transfer learning between any two different CNN architectures? For instance, can we apply transfer learning from AlexNet to GoogLeNet, etc.? Or even just from a "conventional" CNN to one of these other architectures, or the other way around? Is this possible in general?
EDIT: My understanding is that all machine learning models have the ability to perform transfer learning. If this is true, then I guess the question is, as I said, whether we can transfer between two different CNN architectures – for instance, what was learned by a conventional CNN to a different CNN architecture.
Answer: No, transfer learning cannot be applied "between" different architectures, as transfer learning is the practice of taking a neural network that has already been trained on one task and retraining it on another task with the same input modality, which means that only the weights (and other trainable parameters) of the network change during transfer learning but not the architecture.
In my understanding, transfer learning is also only really effective in deep learning, but I could be wrong, considering that this Google search seems to yield some results.
You might otherwise be thinking of knowledge distillation, which is a related but different concept, where an already trained network acts as a teacher and teaches another network (a student network) with possibly a different architecture (or a machine learning model not based on neural networks at all) the correct outputs for a bunch of input examples. | {
"domain": "ai.stackexchange",
"id": 2778,
"tags": "convolutional-neural-networks, transfer-learning"
} |
Why is it called a Seq2Seq model if the output is just a number? | Question: Why is it called a Seq2Seq model if the output is just a number?
For example, if you are trying to predict a movie's recommendation, and you are inputting a sequence of users and their ratings, shouldn't it be a Seq2Number model since you're only predicting 1 rating at a time?
Answer: The premise of your question is wrong. A model that goes from a sequence to a single prediction is simply NOT called a sequence to sequence model.
The model type you are describing is called a sequence encoder.
An example would be sentiment prediction, where we input a sequence of text and output a number.
Similarly, a model that goes from a fixed size value to a sequence is called a sequence decoder. An example would be image captioning.
If a model inputs one sequence and outputs another, as in machine translation, it is called a sequence to sequence model, and consists of both an encoder and a decoder.
If you saw different terminology, it was either mislabeled or misunderstood. | {
"domain": "ai.stackexchange",
"id": 3686,
"tags": "transformer, seq2seq"
} |
Could ball lightning be a form of plasma? | Question: With regard to the recent arXiv article:
J. D. Shelton, Eddy Current Model of Ball Lightning
http://arxiv.org/abs/1102.1224
I wonder if this is a reasonable explanation of ball lightning, or if there is such an explanation. The paper is somewhat technical and E&M is one of my worst subjects.
Please feel free to edit this question to one better suited, or if you don't have the rep, add a comment suggesting changes.
Answer: Ball lightning could definitely be some atmospheric pressure plasma phenomenon. You can make a pretty impressive ball plasma by discharging a kilojoule-scale capacitor bank into a bucket of salt water. Check out Free-Floating Atmospheric Pressure Ball Plasma. In most of those pictures they're using a copper sulfate solution, but that's not essential (sodium chloride also works). These ones only last a (significant) fraction of a second, but I'm sure if you made a larger one (e.g. by a lightning strike), they could last longer.
BTW, this was the subject of a killer science fair project: http://www.youtube.com/watch?v=SE6sbaNsKoc | {
"domain": "physics.stackexchange",
"id": 59732,
"tags": "electromagnetism, plasma-physics"
} |
rtabmap localization | Question:
Hi
I'm trying to run rtabmap in localization mode.
This is my launch file:
<launch>
<arg name="localization" value="true"/>
<arg if="$(arg localization)" name="rtabmap_args" default=""/>
<arg unless="$(arg localization)" name="rtabmap_args" default="--delete_db_on_start"/>
<node pkg="tf2_ros" type="static_transform_publisher" name="camera_broadcaster" args="0.613 0.012 0.425 0.002 0.008 -0.010 1.000 base_link camera_link" />
<node pkg="tf2_ros" type="static_transform_publisher" name="footprint_broadcaster" args="0 0 0.125 0 0 0 1 base_footprint base_link" />
<param name="robot_description" textfile="$(find robot_slam)/urdf/robot.urdf" />
<node pkg="robot_slam" name="control" type="control.py" output="screen">
</node>
<!-- ROS navigation stack move_base -->
<group ns="planner">
<remap from="point_cloud" to="/planner/planner_cloud"/>
<remap from="map" to="/rtabmap/grid_map"/>
<remap from="move_base_simple/goal" to="/move_base_simple/goal"/>
<node pkg="move_base" type="move_base" respawn="false" name="move_base" output="screen">
<rosparam file="$(find robot_slam)/launch/config/move_base/costmap_common.yaml" command="load" ns="global_costmap" />
<rosparam file="$(find robot_slam)/launch/config/move_base/costmap_common.yaml" command="load" ns="local_costmap" />
<rosparam file="$(find robot_slam)/launch/config/move_base/costmap_local.yaml" command="load" />
<rosparam file="$(find robot_slam)/launch/config/move_base/costmap_global.yaml" command="load"/>
<rosparam file="$(find robot_slam)/launch/config/move_base/base_local_planner.yaml" command="load" />
</node>
</group>
<!-- encoder odometry -->
<node pkg="robot_slam" name="encoder_odometry" type="odometry.py" output="screen">
<remap from="odom" to="/planner/odom"/>
<param name="odom_frame_id" type="string" value="odom"/>
<param name="robot_frame_id" type="string" value="base_footprint"/>
</node>
<group ns="rtabmap">
<!-- RGB-D Odometry -->
<node pkg="rtabmap_ros" type="rgbd_odometry" name="rgbd_odometry" output="log">
<remap from="rgb/image" to="/camera/color/image_raw"/>
<remap from="depth/image" to="/camera/aligned_depth_to_color/image_raw"/>
<remap from="rgb/camera_info" to="/camera/color/camera_info"/>
<remap from="rgbd_image" to="rgbd_image"/>
<remap from="odom" to="rgbd_odom"/>
<param name="frame_id" type="string" value="base_footprint"/>
<param name="odom_frame_id" type="string" value="odom"/>
<param name="publish_tf" type="bool" value="true"/>
<param name="ground_truth_frame_id" type="string" value=""/>
<param name="ground_truth_base_frame_id" type="string" value=""/>
<param name="wait_for_transform" type="bool" value="true"/>
<param name="wait_for_transform_duration" type="double" value="0.2"/>
<param name="approx_sync" type="bool" value="false"/>
<param name="config_path" type="string" value=""/>
<param name="queue_size" type="int" value="10"/>
<param name="subscribe_rgbd" type="bool" value="false"/>
<param name="guess_frame_id" type="string" value=""/>
<param name="guess_min_translation" type="double" value="0"/>
<param name="guess_min_rotation" type="double" value="0"/>
<param name="Odom/ResetCountdown" type="int" value="10"/>
<param name="Odom/Holonomic" type="bool" value="false"/>
<param name="Vis/MaxFeatures" type="string" value="1000"/>
<param name="Reg/Force3DoF" type="bool" value="false"/>
<param name="RGBD/SavedLocalizationIgnored" type="bool" value="true"/>
</node>
<node pkg="rtabmap_ros" type="rtabmap" name="rtabmap" args="$(arg rtabmap_args)" output="log">
<param if="$(arg localization)" name="Mem/IncrementalMemory" type="string" value="false"/>
<param if="$(arg localization)" name="Mem/InitWMWithAllNodes" type="string" value="true"/>
<remap from="odom" to="/rtabmap/rgbd_odom"/>
<remap from="rgb/image" to="/camera/color/image_raw"/>
<remap from="depth/image" to="/camera/aligned_depth_to_color/image_raw"/>
<remap from="rgb/camera_info" to="/camera/color/camera_info"/>
<param name="frame_id" type="string" value="base_footprint"/>
<param name="odom_frame_id" type="string" value="odom"/>
<param name="map_frame_id" type="string" value="map"/>
<param name="subscribe_depth" type="bool" value="true"/>
<param name="subscribe_scan" type="bool" value="false"/>
<param name="queue_size" type="int" value="10"/>
<!-- RTAB-Map's parameters -->
<param name="RGBD/ProximityBySpace" type="string" value="false"/>
<param name="RGBD/AngularUpdate" type="string" value="0.01"/>
<param name="RGBD/LinearUpdate" type="string" value="0.01"/>
<param name="RGBD/OptimizeFromGraphEnd" type="string" value="false"/>
<param name="Reg/Strategy" type="string" value="1"/> <!-- 0=vis, 1=ICP, 2=both. 1 works best -->
<param name="Reg/Force3DoF" type="string" value="false"/>
<param name="Vis/MinInliers" type="string" value="20"/>
<param name="Vis/InlierDistance" type="string" value="0.05"/>
<param name="Rtabmap/TimeThr" type="string" value="700"/>
<param name="Mem/RehearsalSimilarity" type="string" value="0.45"/>
<param name="Icp/CorrespondenceRatio" type="string" value="0.5"/>
<param name="Grid/MaxGroundHeight" type="double" value="0.2"/>
<param name="Grid/MaxObstacleHeight" type="double" value="1.0"/>
<param name="Grid/NormalsSegmentation" type="bool" value="false"/>
<param name="Grid/RangeMax" type="double" value="4"/>
</node>
<node pkg="nodelet" type="nodelet" name="voxel_grid" args="load pcl/VoxelGrid /camera/realsense2_camera_manager" output="screen">
<remap from="~input" to="/camera/depth/color/points" />
<remap from="~output" to="/camera/depth/color/points_downsampled" />
<rosparam>
filter_field_name: z
filter_limit_min: 0.01
filter_limit_max: 10
filter_limit_negative: False
leaf_size: 0.05
</rosparam>
</node>
<!-- obstacle detection -->
<node pkg="nodelet" type="nodelet" name="obstacles_detection" args="load rtabmap_ros/obstacles_detection /camera/realsense2_camera_manager" output="screen">
<remap from="cloud" to="/camera/depth/color/points_downsampled"/>
<remap from="obstacles" to="/planner/planner_cloud"/>
<param name="frame_id" type="string" value="base_footprint"/>
<param name="wait_for_transform" type="bool" value="true"/>
<!-- should be the same as leaf_size in voxelfilter above -->
<param name="Grid/CellSize" type="double" value="0.05"/>
<param name="Grid/NormalsSegmentation" type="bool" value="false"/>
<param name="Grid/MaxGroundHeight" type="double" value="0.2"/>
<param name="Grid/MaxObstacleHeight" type="double" value="1.0"/>
<param name="Grid/RangeMax" type="double" value="4"/>
</node>
</group>
When I start the launch file with a presaved database, map to odom gets published (I can see them in RVIZ) but it's wrong. It's not updated after the start. And the info topic shows high enough hypothesis values (>0.11) but there's never a loop closure.
When I move the robot around, it moves along correctly in RVIZ, but there's that offset that is not corrected, i.e. the preloaded map is useless. Do I understand correctly that the rtabmap node should detect the loop closures and send out the map to odom transform? Does that also mean that the rgbd_odom node doesn't use the map to localize?
Thanks in advance,
rembert
Originally posted by rembert on ROS Answers with karma: 3 on 2018-09-07
Post score: 0
Answer:
Hi,
<param name="RGBD/SavedLocalizationIgnored" type="bool" value="true"/> should be used under rtabmap node, not rgbd_odometry node.
When RGBD/SavedLocalizationIgnored is set to false, rtabmap assumes that the robot is starting from where it stopped mapping, providing the map on start. If the initial localization is wrong (kidnapped robot problem), RTAB-Map will still relocalize the robot on loop closure / global localization. When RGBD/SavedLocalizationIgnored is set to true, rtabmap won't publish the map until it can be localized against the map saved in the database.
The problem here seems that RTAB-Map cannot accept a loop closure / global localization. Can you show warnings on terminal when a loop closure is rejected? <param name="Reg/Strategy" type="string" value="1"/> should be 0 as you don't subscribe to any laser scan inputs. Loop closures may be rejected because there are no scans used. Without a real lidar, I recommend using Reg/Strategy = 0.
I see you have "encoder odometry", make sure there is no conflict with TF sent already by rgbd_odometry. It looks like encoder_odometry and rgbd_odometry are both publishing the same TF /odom -> /base_footprint. This would likely cause strange flickering on TF as both would not give the same transform. You may want to show the tf tree too:
$ rosrun tf view_frames
$ evince frames.pdf
rgbd_odometry is independent of the map, it publishes /odom->/base_footprint. rtabmap publishes the odometry correction on /map -> /odom.
cheers,
Mathieu
Originally posted by matlabbe with karma: 6409 on 2018-09-07
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by rembert on 2018-09-24:
Hi Mathieu
Thanks a lot for your quick response, and most of all for developing this wonderful package.
It works much better now, with Reg/Strategy on 0.
Detecting more loop closures and refining everything in your GUI tool offline seems to help also.
Comment by rembert on 2018-09-24:
We're not publishing other odometry on the tf tree, though I tried before to fuse the encoder signals with the rgdb odometry but than the map was not correct. I might be looking further into that in the future. | {
"domain": "robotics.stackexchange",
"id": 31734,
"tags": "localization, navigation, ros-kinetic, rgbd, rtabmap-odometry"
} |
Is there any definition of life which makes viruses undeterminable? | Question: There are many different definitions of life (RNA, something that comes through evolution) but not one I have seen which could not determine whether viruses are living things (even though there are many definitions both for YES and NO). Are there any such definitions (I'm looking for cases where it's really fundamental debate, not only struggling for the correct dictionary definition)?
Thank you.
Answer: Your last sentence is the key: defining life really is just finding a dictionary definition that we can agree upon. Biology is something that defies discrete definitions at times: "What is it to be alive?" "What is a species?" maybe even "What is the wild type allele of a gene?"
I would recommend not looking at viruses as a challenge to determine if they are alive or not so much as an excellent opportunity to discuss what we think are important characteristics of life.
Life can alternately be described as:
"Comprised of self-replicating cells" (a paraphrase of the "Cell Theory of Life"
or
As things that embody at least most of the following characteristics:
1. Self-Replicating
2. Metabolizing
3. Growing
4. Showing signs of adaptation
5. Being organized
6. Respond to their environment
7. Being comprised of cells
I like to think that we should focus on extraterrestrial forms when we define life. i.e. what would we want to see in an extraterrestrial in order to call it 'life'? While some are troubled by calling viruses alive here on earth, the same people might be willing to say that we have found extraterrestrial life on another planet if it was similar (granted, it's hard to imagine this kind of life existing without a host...)
as an aside: You might also ask whether this question is fit for this stack as it can not be supported by literature references (at least none that would actually support a conclusion). So should this be posted as 'Biology' or 'Philosophy'? | {
"domain": "biology.stackexchange",
"id": 4704,
"tags": "virus, life, philosophy-of-science"
} |
$q\mathbf{A}\cdot\mathbf{v}$ term in potential energy | Question: In the famous Goldstein mechanics book, there is an example about a single (non-relativistic) particle of mass m and charge q moving in an E&M field.
It says the force on the charge can be derived from the following velocity-dependent potential energy
$$U=q\phi-q\mathbf{A}\cdot\mathbf{v} .\tag{1.62}$$
(eq 1.62 of 3rd ed.)
I can see where the expression came from my E&M knowledge. So far it's OK. Next
$$L=T-V=\frac{1}{2}mv^2-q\phi+q\mathbf{A}\cdot\mathbf{v}.$$
(p.341) (It changed notation from $U$ to $V$ without mention.)
It says that because of the $q\mathbf{A}\cdot\mathbf{v}$ term in $V$, the Hamiltonian is not $T+V$. However, it says it's still the total energy since the "potential" energy in an E&M field is determined by $\phi$ alone.
I'm confused by the sentence. Is it insisting that potential energy is only $V=\phi$? Then why it introduced velocity-dependent potential earlier?
What's the role of $q\mathbf{A}\cdot\mathbf{v}$ term?
Answer:
Firstly, Goldstein uses the letters $V$ and $U$ for velocity-independent and velocity-dependent potentials, respectively, as explained in the beginning of section 1.5,
Both the 2nd edition (p. 346) & the 3rd edition (p. 341) wrongly state that the Lagrangian for a point charge in an E&M field is
$$L~=~T-V$$
rather than
$$L~=~T-U. $$
It seems that Goldstein forgets his own notation convention from Section 1.5!
The 2nd edition states (p. 346)
Because of this linear term in $U$, the Hamiltonian is not $T+U$.
While the 3rd edition states (p. 342)
Because of this linear term in $V$, the Hamiltonian is not $T+V$.
The 2nd edition is here correct, while the 3rd edition is wrong, as the Hamiltonian $H$ is indeed the sum of the kinetic energy $T$ and the electric potential energy $V=q\phi$. It seems that the initial error in the 2nd edition caused a new error in the 3rd edition!
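For completeness, here is the standard textbook computation (a sketch in the notation above) showing why the $q\mathbf{A}\cdot\mathbf{v}$ term drops out of the Hamiltonian:

```latex
\mathbf{p} \;=\; \frac{\partial L}{\partial \mathbf{v}} \;=\; m\mathbf{v} + q\mathbf{A},
\qquad
H \;=\; \mathbf{p}\cdot\mathbf{v} - L
  \;=\; \bigl(m v^2 + q\mathbf{A}\cdot\mathbf{v}\bigr)
      - \Bigl(\tfrac{1}{2} m v^2 - q\phi + q\mathbf{A}\cdot\mathbf{v}\Bigr)
  \;=\; \tfrac{1}{2} m v^2 + q\phi \;=\; T + V.
```

The $q\mathbf{A}\cdot\mathbf{v}$ terms cancel, so $H = T + V$ even though $L = T - U \neq T - V$.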
References:
H. Goldstein, Classical Mechanics, 2nd edition, p. 346.
H. Goldstein, Classical Mechanics, 3rd edition, p. 341-342. | {
"domain": "physics.stackexchange",
"id": 50979,
"tags": "electromagnetism, energy, lagrangian-formalism, hamiltonian-formalism, hamiltonian"
} |
Help me prove Dalton's law of partial pressure formula | Question: How do I prove
$${{\displaystyle p_{i}=p_{\text{total}}y_{i}}}$$
where $y_i$ is the mole fraction of the $i^{th}$ component in the total mixture of $n$ components ?
Here is what I know
Mathematically, the pressure of a mixture of non-reactive gases can be defined as the summation:
${\displaystyle p_{\text{total}}=\sum _{i=1}^{n}p_{i}}$
or
${\displaystyle p_{\text{total}}=p_{1}+p_{2}+p_{3}+...p_{n}}$
And mole fraction $y_i= \dfrac{n_i}{n_{\text {total}}}$
I don't see how to relate these; please help.
Answer: After having some discussion with @JohnRennie
Let $P_i=n_i \dfrac{RT}{V}$ then $P_i=\dfrac{n_i}{n_{\text{total}}}\dfrac{n_{\text{total}}RT}V$
$P_i=x_i P_{\text{total}}$
Where $x_i$ is mole fraction of $i^{\text{th}}$ particle. | {
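As a quick sanity check (not in the original answer), summing these partial pressures recovers the total pressure, since the mole fractions sum to one:

```latex
\sum_{i=1}^{n} p_i
  \;=\; \sum_{i=1}^{n} x_i\, p_{\text{total}}
  \;=\; p_{\text{total}} \sum_{i=1}^{n} \frac{n_i}{n_{\text{total}}}
  \;=\; p_{\text{total}}.
```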
"domain": "chemistry.stackexchange",
"id": 8022,
"tags": "vapor-pressure"
} |
Are astronomers waiting to see something in an image from a gravitational lens that they've already seen in an adjacent image? | Question: @RobJeffries' answer to the question Does gravitational lensing provide time evolution information? points out that there can be a substantial different in arrival times of light from a given source seen in different images from a gravitational lens.
The linked paper there shows "$\Delta t$" values of the order of 30 days, but it is hard for me to understand what the actual observable is.
What I'm asking for here is (ideally) if there is a well defined event that a lay person could understand, something blinking or disappearing or brightening substantially that has already been seen in one image produced by a gravitational lens that has not yet been seen in one of the other images, and is expected to be seen in the (presumably near) future.
If something like this does not exist, a substitute could be a case where this has happened, and the second sighting of the same event was predicted, waited for, and observed on time.
I have no idea if this happens all the time, or has never happened yet.
Answer: What you do is cross-correlate the observational datasets for the multiple sources and look for the "lag" that maximises the cross-correlation function. Generally speaking, the "events" are not really individual flares or dips, but the summation of all the time variability that is seen.
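As a hedged illustration (my own sketch, not from the answer), finding the lag that maximises a discrete cross-correlation between two equally sampled light curves could look like this; real analyses handle uneven sampling with estimators such as the discrete correlation function:

```cpp
#include <cstddef>
#include <vector>

// Returns the lag (in samples) at which the discrete cross-correlation
// of two equally sampled series is largest. Only the basic idea: no
// mean subtraction, normalisation, or uneven-sampling handling.
int BestLag(const std::vector<double>& a, const std::vector<double>& b, int maxLag)
{
    int bestLag = 0;
    double bestScore = -1e300;
    for (int lag = -maxLag; lag <= maxLag; ++lag)
    {
        double score = 0.0;
        for (std::size_t i = 0; i < a.size(); ++i)
        {
            long j = static_cast<long>(i) + lag;
            if (j >= 0 && j < static_cast<long>(b.size()))
                score += a[i] * b[j];
        }
        if (score > bestScore) { bestScore = score; bestLag = lag; }
    }
    return bestLag;
}
```

Feeding in two light curves where one is a time-shifted copy of the other recovers the shift as the best lag.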
The variability in question usually comes about from the central portions of the "central engine" of a quasar or active galactic nucleus. For a supermassive black hole at the centre of a quasar, the innermost stable circular orbit is at 3 times the Schwarzschild radius ($= 6GM_{\rm BH}/c^2$). This basically defines the inner edge of any accretion disk and if we divide this by $c$ then we get a timescale for the most rapid variations in luminosity output. So this is very nearly the same formula as presented in the linked question
$$\tau \sim 3\times 10^{-5} \left(\frac{M_{\rm BH}}{M_{\odot}}\right)\ {\rm sec}\, ,$$
except that the supermassive black holes are much less massive than entire foreground lensing galaxies (usually). Thus the timescale of variation is much shorter than the potential delay time due to gravitational lensing. It is this difference in timescales that means there is plenty of "structure" within the light curves that can be locked onto by the cross-correlation.
There is a notable example, however, of a type Ia supernova being seen in a multiply lensed image (Goobar et al. 2017), but the predicted delay in the light curves was $<35$ hours and the light curves are not good enough to measure this. This technique is an active area of research and a major bit of science that is expected to be achieved by the Large Synoptic Survey Telescope (Huber et al. 2019).
Finally, the thing you are really looking for has happened in terms of SN "Refsdal". This was a type II supernova seen to "go off" in a multiply imaged galaxy, seen through/around a galaxy cluster. A prediction was made, based on a model for the cluster gravitational potential, that another image ought to appear within a year or two. This further image was then detected by Kelly et al. (2016) in a paper entitled "Deja vu all over again".
From Kelly et al. (2016) ("Deja vu all over again"). See "SX" in the third panel:
Figure 1. Coadded WFC3-IR F125W and F160W exposures of the MACS J1149.5+2223 galaxy-cluster field taken with HST. The top panel shows images acquired in 2011 before the SN appeared in S1–S4 or SX. The middle panel displays images taken on 2015 April 20 when the four images forming the Einstein cross are close to maximum brightness, but no flux is evident at the position of SX. The bottom panel shows images taken on 2015 December 11 which reveal the new image SX of SN Refsdal. Images S1–S3 in the Einstein cross configuration remain visible in the 2015 December 11 coadded image (see Kelly et al. 2015a and Rodney et al. 2015b for analysis of the SN light curve).
Kelly, P. L., Brammer, G., Selsing, J., et al. 2015a, ApJ, submitted
(arXiv:1512.09093)
Rodney, S. A., Strolger, L.-G., Kelly, P. L., et al. 2015b, ApJ, in press
(arXiv:1512.05734) | {
"domain": "astronomy.stackexchange",
"id": 3625,
"tags": "observational-astronomy, gravitational-lensing"
} |
Force between 2 plates of a capacitors , which approach is correct? | Question: If I have a parallel plate capacitor with charge $Q$, to calculate the force on first plate we say the electric field of $\frac{Q}{2\epsilon_0 A}$ is created by the second plate and since the charge on the first plate is Q the total force experienced by it is $\frac{Q^2}{2\epsilon_0 A}$
But if I consider an elemental charge $dQ$ on the first plate, then it has no contribution to the net electric field, so the net field is $\frac{Q}{\epsilon_0 A}$ and the force on that element is $\frac{Q\,dQ}{\epsilon_0 A}$; thus the total force should be $\frac{Q^2}{\epsilon_0 A}$
What am I missing here ?? Where am I going wrong ??
Answer: What you are missing here is that the field contribution of plate 1 begins only beyond its own charge sheet, so the net field experienced by the charges of either plate is just half of the total field between the plates; thus the force in expression 1 is correct. If you are not convinced, you can also see that in the second approach you have effectively included the action-reaction pairs in the expression as well. | {
"domain": "physics.stackexchange",
"id": 59375,
"tags": "homework-and-exercises, electrostatics, coulombs-law"
} |
The Space of an Unsatisfiable Formula | Question: I am reading the paper Measuring the hardness of SAT instances by Ansótegui, Bonet, Levy and Manyà (Proc. 23rd AAAI Conf. on AI, pp. 222–228, 2008) (PDF). I am trying to understand the proof of the last part of Lemma 3 (in bold). To this end, I worked out an example. Let $\Gamma = (a+b)(a+b')(a'+c)(a'+c')$; then its tree-like refutation is:
Following the demonstration of the last part of Lemma 3, $[b\rightarrow 1]\Gamma=(1)(a)(a'+c)(a'+c')$, and adding the literal $b'$ where $[b\rightarrow 1]$ has removed it, we get $\Gamma' = (1)(a+b')(a'+c)(a'+c')$. According to the paper, the tree-like refutation of $\Gamma'$ is a proof of $\Gamma \vdash b'$. The natural question, I think, is: how is it possible that the tree-like refutation of $\Gamma'$ is a proof of $\Gamma \vdash b'$ if $\Gamma \neq \Gamma'$?
Lemma 3 The space satisfies the following three properties:
$s(\Gamma \cup \{\Box\})$ = 0
For any unsatisfiable formula $\Gamma$, and any partial truth assignment $\phi$, we have $s(\phi(\Gamma))\leq s(\Gamma)$.
For any unsatisfiable formula $\Gamma$, if $\Box\notin\Gamma$, then there exists a variable $x$ and an assignment $\phi\colon\{x\}\to\{0,1\}$, such that $s(\phi(\Gamma))\leq s(\Gamma)-1$.
The space of a formula is the minimum measure on formulas that satisfies (1), (2) and (3). In other words, we could define the space as:
$$s(\Gamma) = \min_{x, \overline{x}\in\Gamma, b\in\{0,1\}} \big\{
\max\{s([x\mapsto b](\Gamma))+1, s([x\mapsto\overline{b}](\Gamma))\}\;\big\}$$
when $\Box\notin\Gamma$, and $s(\Gamma\cup\{\Box\}) = 0$.
Answer: Here is the original refutation:
From $a'+c$ and $a'+c'$ deduce $a'$.
From $a$ and $a'$ deduce $\square$.
When you add $b'$ back, you get:
From $a'+c$ and $a'+c'$ deduce $a'$.
From $a+b'$ and $a'$ deduce $b'$.
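The two derivations can be mechanized with a tiny resolution helper. This is only an illustrative sketch, not code from the paper; the encoding of negative literals with a "~" prefix is my own convention:

```python
def resolve(c1, c2, lit):
    """Resolve clause c1 (which contains lit) with clause c2 (which contains its negation)."""
    neg = lit[1:] if lit.startswith("~") else "~" + lit
    assert lit in c1 and neg in c2
    return (c1 - {lit}) | (c2 - {neg})

# Refutation of Gamma' as above (literals written with "~" for negation):
a_bar = resolve(frozenset({"~a", "c"}), frozenset({"~a", "~c"}), "c")  # derives {~a}
b_bar = resolve(frozenset({"a", "~b"}), a_bar, "a")                    # derives {~b}, i.e. b'
```

Running the two steps reproduces exactly the proof of $b'$ described above.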
More generally, since the only difference between $\Gamma$ and $\Gamma'$ is the addition of $b'$, it is not hard to prove by induction that for every line $\ell$ proved in the refutation of $\Gamma$, the corresponding line in the corresponding $\Gamma'$ proof is either $\ell$ or $\ell+b'$. Hence the $\Gamma'$ proof proves either $\square$ or $b'$. It can happen that it indeed proves $\square$, in which case we can deduce $b'$ by weakening, which is an admissible rule for resolution space (we can get rid of it without increasing the space). | {
"domain": "cs.stackexchange",
"id": 5296,
"tags": "satisfiability"
} |
Power/torque ratio vs acceleration | Question: Let there an object with determined mass which needs to be moved with a motor (lets say electrical). We have two available motors, which have exactly the same power (and other characteristics), but different torque / rated speed. Which motor will accelerate the object faster, motor with high torque, but low rated speed, or motor with low torque, but high rated speed? There is no transmission.
Answer: Depends on the transmission and rotating masses. With ideal transmissions and identical rotating masses, both will achieve the exact same acceleration of the "object". If there's no transmission, then the question is what the power output of the motors look like as a function of speed. | {
"domain": "physics.stackexchange",
"id": 44888,
"tags": "acceleration, torque, power"
} |
Are there cats with black skin? | Question: Some mammals can have a black, pink or spotted skin, depending on race - see for example humans or pigs. But I recently learned that this is not the case with mice. Even black furred mice have pink skin, and there is a rare mutation which produces black skinned mice.
I was talking with a friend about cats and we got to wonder if (domestic) cats are the type of animal which can have dark skin, like pigs, or the type which doesn't, like mice. The tons of cat related sites on the Internet seem to all talk about black furred cats only, or about diseases presenting with skin discoloration, but they don't mention the skin color. So, which type of animal are cats?
Answer: Easy: look at images of hairless cats. You will see they can be not only all black, but also grey, spotted, pink, and a few other rarer colors.
Also, take an average cat - and shave for surgery:
Note pigmented skin matching dark stripes. | {
"domain": "biology.stackexchange",
"id": 3537,
"tags": "zoology, pigmentation"
} |
Why do electricity generators have to work harder for higher loads? | Question: Maybe this is a silly question, but why does, say, a gasoline-powered AC generator have to use more gas depending on the load?
Let's say I have a 120VAC generator and either a 1A or a 10A load, and assume it can handle 1200W without issue. For 10A, the generator uses more gas and works harder compared to 1A.
Is this because the 10A is passively causing a magnetic field that makes the generator shaft physically more difficult to spin? If so, why does the magnetic field oppose the generators movement if the current is flowing in the direction the generator is trying to make it go to begin with? (Sorry, I know that is probably silly but I have a very limited understanding of electricity and magnetism.)
Or is it because the circuitry in the generator is actively sensing voltage drop due to higher load, and increasing the throttle to maintain a constant output voltage? Sort of like I'd imagine a water pump would have to work harder to maintain a constant pressure in a system with a leak in it or with flow. (Related: If so, is this type of sensing necessary for a generator to limit the voltage to low loads, or is it just a bonus feature to increase efficiency by not running at full throttle all the time?)
Or is it some combination of both? Or are both of those somehow the same thing? (It seems like it must be a little bit of the former; because the generator stalls under too high loads, which I guess means something is resisting the spin - unless the generator just actively shuts itself down?)
What is the process from "higher load" => "higher fuel usage"?
Answer: From a simple energy conversion perspective, and assuming the output voltage is constant, increasing the current by a factor of 10 means the generator must supply 10 times more electric power (electric power is the product of voltage and current).
Since the generator only converts mechanical power to electric power, the motor driving the generator shaft must supply at least 10 times more power in order for the generator to supply 10 times more power.
This is fundamental and inescapable.
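To make the scaling concrete, here is a minimal sketch. The voltage and currents come from the question; the 3600 RPM shaft speed is an assumed example value, not something stated in the question:

```python
import math

V = 120.0            # generator output voltage (V)
rpm = 3600.0         # assumed shaft speed, for illustration only
omega = rpm * 2 * math.pi / 60   # shaft angular speed (rad/s)

for amps in (1.0, 10.0):
    p_elec = V * amps        # electric power drawn by the load (W)
    torque = p_elec / omega  # minimum shaft torque, ignoring losses (N*m)
    print(f"{amps:4.0f} A -> {p_elec:6.0f} W, torque >= {torque:.2f} N*m")
```

Ten times the current means ten times the power and, at fixed speed, ten times the shaft torque the engine must supply.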
Your suspicion that the generator is more difficult to turn when the current increases is correct.
I have an old Lifecycle exercise bike. When I pedal, I'm turning the shaft of a standard automotive alternator. The control circuitry adjusts how difficult it is to pedal by changing the load connected to the alternator - the more current supplied to the load, the more difficult it is to pedal the 'bike'. | {
"domain": "physics.stackexchange",
"id": 94894,
"tags": "electromagnetism, electricity, energy-conservation"
} |
hector_navigation_node problem compilation | Question:
Hello,
I downloaded the hector_navigation package from GitHub but don't succeed in using the hector_navigation_node
the error is : "ResourceNotFound: hector_exploration_node".
I downloaded the hector_gazebo and hector_slam packages too and don't have any problem to use them. Is there anything I have to add in a CMakeList.txt to compile this package?
Thank you for your answer
Originally posted by est_CEAR on ROS Answers with karma: 170 on 2014-10-21
Post score: 0
Answer:
Well, I used only catkin_make but had to use :
$rosmake hector_exploration_node
which is from the old version of ROS.
Now it works, thank you "ingcavh". Can you modify or complete your answer so that I can validate it, or validate mine because I can't from my own?
Originally posted by est_CEAR with karma: 170 on 2014-11-12
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by ingcavh on 2014-11-12:
Your welcome :D | {
"domain": "robotics.stackexchange",
"id": 19798,
"tags": "ros, compilation, hector"
} |
Frames and rviz visualization | Question:
I have the following frame transformation chain:
laser_frame -> base_link -> base_stabilized -> base_footprint -> odom -> map -> world
Where
laser_frame is where the scans are generated
odom is the odometry regarding the map
map is where the 2D map is created based on scans
world is the frame in the middle of the stage, which is meant to be (0,0,0) and the very first coordinate axis
I want the map to be created over the world while scans/world/map moving regarding the quadrotor. I mean something intuitive, the quadrotor moves inside the stage so rviz must show how the stage (world, scans and map) move relative to it, keeping it in the center of view, fixed.
What must be the fixed frame? What must be the target frame? Why? Must I do some weird transformation or it's just simply the setup?
Originally posted by Chipiron on ROS Answers with karma: 93 on 2014-08-21
Post score: 0
Answer:
The answer to this question is not simple.
It depends on how you want your frames and what transforms are you doing and how are you doing them.
Each case will require different solutions.
I recommend making a drawing of your frames and reference axes: what you want, what you have, and asking yourself whether the transformations you are doing make any sense by themselves.
Originally posted by Chipiron with karma: 93 on 2014-08-22
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 19131,
"tags": "navigation, mapping, rviz, frames, world"
} |
Searching for pairs of integers that satisfy primality and divisibility conditions | Question: I wrote a program in Python 3 to check all tuples of the form (s,t) where s and t are positive integers satisfying three conditions:
The inequalities s <= t^2 and t <= s^2 hold, and the value s+t divides st(s+1)(t+1);
The value of s+1 is prime;
The inequalities s * [math.ceil( math.ceil( (t^2)/(s+1) ) * ((s+1)/t) )] > t(t+s) and s > t both hold.
I'm interested in how often a tuple (s,t) satisfies each of these conditions in turn; that is, how often a tuple meets condition (1), versus meeting both (1) and (2), versus meeting (1) - (3).
import math
def IsPrime(n):
# This returns True if a number is prime, and False if the number is composite.
if n % 2 == 0 and n > 2:
return False
return all(n % i for i in range(3, int(math.sqrt(n)) + 1, 2))
def IsGQ(s,t):
# This checks the divisibility condition.
return (s*t*(s+1)*(t+1)) % (s+t) == 0 and s <= t**2 and t <= s**2
def IsNotTransitive(s,t):
n = math.ceil((t**2)/(s+1))
k = math.ceil(n*((s+1)/t))
return (s*k) > t*(t+s) and s > t
rng = 1000 # The upper limit that `t` will iterate up to
quads = 0 # Counter for tuples (s,t) that satisfy conditions (1) and (2)
prime_quads = 0 # Counter for tuples (s,t) that satisfy conditions (1) - (3)
intransitive_quads = 0 # Counter for tuples (s,t) that satisfy conditions (1) - (4)
# The next 5 lines place all prime numbers up to rng^2 into a list.
# Ideally, this cuts down on how many times s+1 has to be checked to be a prime.
primes = []
for i in range(1, rng**2):
if IsPrime(i):
primes.append(i)
for t in range(4, rng + 1): # Due to project details, I don't care about t<4.
if t % 50 == 0:
print("We have reached t = " + str(t) + ".")
for s in range(2, t**2 + 1): # To satisfy condition (1), I don't want s to iterate above t^2
if IsGQ(s,t): # Condition (1) and (2)?
quads += 1
if s+1 in primes: # Condition (3)?
prime_quads += 1
if IsNotTransitive(s,t): # Condition (4)?
intransitive_quads += 1
print(str(quads))
print(str(prime_quads))
print(str(intransitive_quads))
Currently, this runs to completion in over 10 minutes for rng = 1000. Ideally, running this for rng = 10000 in a reasonable amount of time is my current goal; computing these numbers for the value rng = 100000 is my most optimistic goal.
Is there any way to reorder my iteration or change my method of checking for primeness that could be more efficient? It seems that the time for s+1 in primes to complete grows quadratically with rng. Since, after a certain point, s+1 will be less than every value in primes that hasn't been checked yet, would it be better to write my own "in" function along the lines of
def CheckPrime(s, primes):
for n in primes:
if s+1 == n:
return True
elif s+1 < n:
return False
or would this still be slower than the current code?
Note: IsPrime comes from this Stack Overflow answer.
Answer: You are correct that the largest time sink in this function is your IsPrime function. Your CheckPrime function should in theory be faster, however, there is a better way.
First, a relatively small optimization: Instead of using the IsPrime from the linked SO question, use the Sieve of Eratosthenes to generate a list of True / False values indexed by number. Here's a simple implementation:
def sieve(limit):
primes = [True] * limit
primes[0] = primes[1] = False
for i in range(limit):
if primes[i]:
for n in range(i ** 2, limit, i):
primes[n] = False
return primes
If you cache the result of this function, checking if a number is prime becomes as simple as cache[n], O(1) instead of roughly O(n).
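As a quick sanity check, the sieve can be exercised like this (the definition is repeated so the snippet runs on its own):

```python
def sieve(limit):
    # Sieve of Eratosthenes: primes[n] is True iff n is prime, for n < limit.
    primes = [True] * limit
    primes[0] = primes[1] = False
    for i in range(limit):
        if primes[i]:
            for n in range(i * i, limit, i):
                primes[n] = False
    return primes

cache = sieve(100)
assert cache[2] and cache[97]            # 2 and 97 are prime
assert not cache[1] and not cache[91]    # 91 = 7 * 13
assert [n for n, p in enumerate(cache) if p][:5] == [2, 3, 5, 7, 11]
```

Each membership test is now a single list index instead of a scan of the primes list.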
This change certainly helps, with rng = 1000, on my system it took 4 minutes, 3 seconds to run. *
The next best change I see is reducing the range that you iterate over s. If condition 1 does not pass, there's no need to even touch that value for s. With this in mind, I changed s in range(2, t**2 + 1) to s in range(int(t ** 0.5), t**2 + 1) and ran the code again. While this will certainly help with larger ranges, it didn't help too much with rng = 1000, resulting in a time of (again) 4 minutes and 3 seconds. I suspect that this is because at the upper bounds of the limit, we only get to skip a few dozen iterations. It will make a difference if you run this for rng = 100000.
Next, I remembered reading that the cost of a function call in Python is fairly high [source] - 150ns per call. That might not seem like much, but when you are calling a function roughly 300 million times (the IsGQ function, with rng = 1000), it can be significant. With this in mind, I inlined the IsGQ function, dropped the range checks since they are now handled by the range function, and moved the s > t check for condition 3 (4?) into the if statement. This helped a bit, though less than I expected, resulting in a run time of 3 minutes and 54 seconds.
There's probably more optimization that can be done here, but I am no expert in Python, and nothing else is jumping out to me.
Here's the code I used for the final run:
import math
rng = 1000
def sieve(limit):
primes = [True] * limit
primes[0] = primes[1] = False
for i in range(limit):
if primes[i]:
for n in range(i ** 2, limit, i):
primes[n] = False
return primes
print('Populating prime cache')
primeCache = sieve(rng ** 2 + 2)
print('Prime cache populated')
def IsNotTransitive(s,t):
n = math.ceil((t**2)/(s+1))
k = math.ceil(n*((s+1)/t))
return (s*k) > t*(t+s)
quads = 0
prime_quads = 0
intransitive_quads = 0
for t in range(4, rng + 1):
if t % 50 == 0:
print("We have reached t = " + str(t))
for s in range(int(t ** 0.5), t**2 + 1):
if s * t * (s + 1) * (t + 1) % (s + t) == 0:
quads += 1
if primeCache[s + 1]:
prime_quads += 1
if s > t and IsNotTransitive(s,t):
intransitive_quads += 1
print(str(quads))
print(str(prime_quads))
print(str(intransitive_quads))
* All times given are CPU times, not real time. Real time was generally only a few seconds higher. 4m7s for the first run. | {
"domain": "codereview.stackexchange",
"id": 30248,
"tags": "python, performance, primes"
} |
Integration of Bloch equation in magnetic resonance | Question: From Bloch equation we have
\begin{equation}\label{bloch_01}
\tag{1}
\frac{d M_z}{dt} = \frac{M_0-M_z}{T_1}
\end{equation}
from there we can integrate and we get
\begin{equation}\label{bloch_02}
\tag{2}
M_0-M_z = e^{-\frac{t}{T_1}}e^c
\end{equation}
where $c$ is an integration constant. For $\alpha = 90^\circ$
$$
M_z = M_0(1-e^{-\frac{t}{T1}})
$$
Before integrating, I tried to go from (1) to (2) by separating variable s.t.:
\begin{align*}
\frac{d M_z}{dt} &= \frac{M_0-M_z}{T_1}\\
\frac{T_1}{dt} &= \frac{M_0-M_z}{d M_z}\\
\frac{dt}{T_1} &= \frac{d M_z }{M_0-M_z}\\
\ln{T_1} &= \ln{M_0-M_z}\\
\end{align*}
but I suspect it is wrong and could not go further. What am I doing wrong?
Edit (thanks to answer below from @mikestone)
\begin{align*}
\frac{d M_z(t)}{dt} &= \frac{M_0-M_z(t)}{T_1}\\
\frac{dt}{T_1} &= \frac{d M_z(t)}{M_0-M_z(t)}\\
\frac{t}{T_1} &= -\ln(M_0-M_z(t)) + c\\
-\frac{t}{T_1} &= \ln(M_0-M_z(t)) - c\\
e^{-\frac{t}{T_1}} &= (M_0-M_z(t))\,e^{-c}\\
e^{-\frac{t}{T_1}}e^{c} &= M_0-M_z(t)\\
M_0-M_z(t) &= e^{-\frac{t}{T_1}}e^{c}\\
\end{align*}
Answer: $T_1$ is a constant, so your $\ln T_1$ is wrong. Also there is a minus sign before $M_z$ in the penultimate line, so your last line should read
$$
t/T_1= -\ln (M_0-M_z(t))+ c.
$$
Exponentiating then gives
$$
M_0-M_z(t)= e^{-t/T_1} e^{c}.
$$ | {
"domain": "physics.stackexchange",
"id": 78757,
"tags": "integration, differential-equations, nuclear-magnetic-resonance"
} |
Why marbles don't shatter like a glass panel does? | Question: Both are made of the same material, not talking about the tempered glass. But I don't see marbles shatter the way glass panel does, why is that? If I could scale up the marble to the size of a car and strike a hammer on it, would it shatter?
Answer: The difference is geometry, both in shape and size.
First, consider that the smaller something is, the stiffer it is in general. Take a large rubber eraser and squeeze it (in compression, not bending) and then cut it in half and squeeze again. You need double the force to get the same deflection with half the size.
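The size effect can be sketched with the textbook axial-stiffness relation k = E*A/L; the modulus and geometry below are assumed illustrative values, not measurements:

```python
def axial_stiffness(E, area, length):
    """Stiffness of a bar in pure compression: k = E * A / L (N/m)."""
    return E * area / length

E = 70e9        # Young's modulus, roughly that of glass (Pa)
area = 1e-4     # cross-section (m^2)

k_full = axial_stiffness(E, area, 0.10)   # a 10 cm bar
k_half = axial_stiffness(E, area, 0.05)   # the same bar cut in half
assert k_half == 2 * k_full               # half the length -> twice as stiff
```

Same material, same cross-section: halving the length doubles the stiffness, mirroring the eraser experiment.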
Next is the shape, where something flat like a glass pane is allowed to bend, which puts the most strain into the material, compared to a sphere that mostly compresses. The details here are complex, but certain shapes are stiffer and certain ones are more compliant. A sphere is exceptional at resisting loading because most of the internal stresses are compressive.
Brittle shattering occurs when the bonds between molecules in a solid break (in tension) causing a dislocation, which then loads up neighboring molecules which in turn break also. In the end, there is a runaway process of crack propagation until the object is fully cracked. | {
"domain": "physics.stackexchange",
"id": 85224,
"tags": "collision, material-science, glass"
} |
Water pouring challenge | Question: I'm trying to write code to solve water pouring problem in Clojure. The following code is strongly influenced by Martin Odersky's solution lectured in his coursera course, Functional Programming Principle in Scala. He first created a class Pouring with parameter capacity, which is accessible everywhere in the class. Though he made the best use of case class in his solution, I just used plain clojure map to represent the moves.
I tried to solve the problem in the same way, and end up with the following code:
(ns pouring)
(declare capacity init-state)
(defn empty [state glass]
(assoc state glass 0))
(defn fill [state glass]
(assoc state glass (capacity glass)))
(defn pour [state [from to]]
(let [amt (min (state from) (- (capacity to) (state to)))]
(-> state
(assoc from (- (state from) amt))
(assoc to (+ (state to) amt)))))
(defn change [state move]
(if state
(cond (:empty move) (empty state (:empty move))
(:fill move) (fill state (:fill move))
(:pour move) (pour state (:pour move)))
(change init-state move)))
(defn moves [capacity]
(let [glasses (range (count capacity))]
(lazy-cat
(map (fn [g] {:empty g}) glasses)
(map (fn [g] {:fill g}) glasses)
(for [from glasses, to glasses :when (not= from to)]
{:pour [from to]}))))
(defn extend-path [path move]
{:history (conj (:history path) move)
:end-state (change (:end-state path) move)})
(defn extend
([paths explored]
(if-let [more (for [path paths
next-path (map #(extend-path path %) (moves capacity))
:when (not (contains? explored (:end-state next-path)))]
next-path)]
(lazy-cat paths (extend more (conj explored (map #(:end-state %) more))))))
([] (extend #{{:history [], :end-state init-state}} #{init-state})))
(defn init [c]
(def capacity c)
(def init-state (vec (repeat (count c) 0))))
(defn solve [capacity target]
(init capacity)
(first (for [path (extend)
:when (some #(= % target) (:end-state path))]
path)))
The part I don't like is the init function. Most Clojurians say that using def inside a function is bad. If I don't use the init function, I would perhaps have to add a capacity parameter to almost every function in the code, which doesn't look pretty.
I considered using nested functions:
(defn solve [capacity target]
(defn empty ...)
(defn fill ...)
(defn pour ...)
(defn extend ...)
...)
However, using defn inside function is also bad, and the functions defined inside are actually not local functions, which are accessible outside the outer function.
Yes, there's letfn in clojure. I can put every function except solve into letfn. But in this case, the code is not very readable. I think letfn is only for short (which can be expressed in 1 or 2 lines) functions.
Perhaps there are better ways to do this. Any suggestions?
Answer: I won't try to be comprehensive:
The part I don't like is init function.
Your instincts are right.
If I don't use the init function, perhaps I have to add a capacity parameter to almost every function in the code, which doesn't look pretty.
Clojure supports dynamic binding, which you can use instead of misusing def; although I'm not sure it's warranted here.
Another problem I see is that even though what you should do for each move (say [:pour 1 2]) is fixed, it first needs to be computed each time the extend function is called, and then looked up in a cond again in change. A move is a function from a state to a new state. Why not just use that function? Because we also need the description for generating the history. So a move has a state-transition function and a description.
(defrecord Move [description change])
And we can generate moves like this:
(defn moves [capacity]
(let [glasses (range (count capacity))]
(concat
(for [g glasses] (Move. [:empty g] #(empty capacity % g)))
(for [g glasses] (Move. [:fill g] #(fill capacity % g)))
(for [from glasses, to glasses :when (not= from to)]
(Move. [:pour from to] #(pour capacity % [from to]) )))))
We don't need change function anymore:
(defn extend-path [path move]
(-> path
(update-in [:history] #(conj % (:description move)))
(update-in [:end-state] (:change move))))
If you pay attention, the change function was dependent on what the specific moves were. The previous version of extend-path was dependent on them also, transitively, as it was dependent on change. We will return to this later.
We don't need to pass capacity to extend anymore; instead of generating moves, we can just pass them in:
(defn extend
([moves paths explored]
(if-let [more (for [path paths
next-path (map #(extend-path path %) moves)
:when (not (contains? explored (:end-state next-path)))]
next-path)]
(lazy-cat paths (extend moves more (conj explored (map #(:end-state %) more)))))))
Thus extend is not dependent on the specifics of this problem.
If we inline init we get something like this:
(defn solve [capacity target]
(let [init-state (vec (repeat (count capacity) 0))
moves (moves capacity)
init-path #{{:history [], :end-state init-state}}]
(first (for [path (extend moves init-path #{})
:when (some #(= % target) (:end-state path))]
path))))
I initialize the explored states as empty set, as it made more sense to me.
I notice that the body of solve has only the test in the :when clause left specific to this problem. The rest of the body is actually some search algorithm. (I assume it is breadth-first)
we can extract it, too:
(defn breadth-first [moves init-path final?]
  (first (for [path (extend moves init-path #{})
               :when (final? (:end-state path))]
           path)))

(defn solve [capacity target]
  (let [init-state (vec (repeat (count capacity) 0))
        moves (moves capacity)
        init-path #{{:history [], :end-state init-state}}
        final? #((set %) target)]
    (breadth-first moves init-path final?)))
This enhanced separation of concerns, as breadth-first, extend, extend-path, Move are no longer dependent on the specifics of this problem. You can move them to another source file, which would not change much as the problem requirements evolve. This also makes it easier to change the search algorithm being used. | {
"domain": "codereview.stackexchange",
"id": 12154,
"tags": "clojure"
} |
Applying Time Dilation Twice | Question: I have recently been thinking about the equations of special relativity and have run across something interesting. I was thinking about the following scenario: one person is standing stationary on a road, another person is driving at 0.125c, and a third person is driving at 0.25c.
For the rest of this post, I will use $\gamma_v$ to denote the relativistic factor for a speed v and $t_v$ to represent the time measured by the person moving at speed v.
So, I thought of two ways of analyzing the time dilation when comparing the stationary person and the person moving at 0.25c. The simple way is:
$t_0= \gamma_{0.25c}t_{0.25c}$ (1)
I then thought about using the time measured by the person moving at 0.125c to determine the time dilation. According to the person moving at 0.125c, the person moving at 0.25c is only moving at 0.125c, so:
$t_{0.125c}=\gamma_{0.125c}t_{0.25c}$ (2)
Additionally, the following should be true:
$t_{0}=\gamma_{0.125c}t_{0.125c}$ (3)
Rearranging equation 3 gives:
$t_{0.125c}=t_{0} / \gamma_{0.125c}$ (4)
Plugging equation 4 into equation 2 gives:
$t_{0} / \gamma_{0.125c} = \gamma_{0.125c}t_{0.25c} $ (5)
Rearranging equation 5 gives:
$t_{0} = {\gamma_{0.125c}}^2 t_{0.25c}$ (6)
Finally, plugging equation 6 into equation 1 gives:
${\gamma_{0.125c}}^2 t_{0.25c} = \gamma_{0.25c}t_{0.25c}$
${\gamma_{0.125c}}^2 = \gamma_{0.25c}$ (7)
However, in carrying out the calculations, equation 7 appears to be false. It appears as if I have made a mistake, but I am not sure where. Any help would be very much appreciated. Thank you!
Answer: There are two errors:
Relative-velocity is not subtractive--instead, it's $$v_{BA}=\frac{v_{BO}-v_{AO}}{1-v_{BO}v_{AO}}.$$
Time-dilation factors are not multiplicative--there is an extra factor $$\gamma_{BA}=\gamma_{BO}\gamma_{AO}(1-v_{BO}v_{AO}).$$
These are easier to recognize if you work with rapidities (Minkowski angles), where $v_{BO}=\tanh\theta_{BO}$, $\gamma_{BO}=\cosh\theta_{BO}$, etc...
Relative-rapidity is $\theta_{BA}=\theta_{BO}-\theta_{AO}$.
Relative-velocity is $v_{BA}=\tanh(\theta_{BA})=\tanh(\theta_{BO}-\theta_{AO})=\frac{\tanh\theta_{BO}-\tanh\theta_{AO}}{1-\tanh\theta_{BO}\tanh\theta_{AO}}$.
Relative-time-dilation-factor is $\begin{align}\gamma_{BA}=\cosh(\theta_{BA})=\cosh(\theta_{BO}-\theta_{AO})&=\cosh\theta_{BO}\cosh\theta_{AO}-\sinh\theta_{BO}\sinh\theta_{AO}\\&= \cosh\theta_{BO}\cosh\theta_{AO}(1-\tanh\theta_{BO}\tanh\theta_{AO})\end{align}$.
Here's an example with a spacetime diagram on rotated graph paper.
Here $v_{CA}=-3/5$ and $v_{BA}=3/5$--so $\gamma_{CA}=5/4$ and $\gamma_{BA}=5/4$.
So, $v_{BC}=\frac{v_{BA}-v_{CA}}{1-v_{BA}v_{CA}}=15/17$.
Note that $\gamma_{BC}=17/8$ (graphically)
and $\gamma_{BA}\gamma_{CA}(1-v_{BA}v_{CA})=(\frac{5}{4})(\frac{5}{4})(1-(\frac{3}{5})(\frac{-3}{5}))=17/8$.
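These relations are easy to check numerically, working in units where c = 1 as the rapidity formulas assume (a minimal sketch using the example's numbers):

```python
import math

def gamma(v):
    """Time-dilation factor for speed v, in units where c = 1."""
    return 1.0 / math.sqrt(1.0 - v * v)

v_ba, v_ca = 3 / 5, -3 / 5
# Relative velocity of B as seen by C (velocity-composition formula).
v_bc = (v_ba - v_ca) / (1 - v_ba * v_ca)
# The "extra factor" form of the relative time-dilation factor.
g_bc = gamma(v_ba) * gamma(v_ca) * (1 - v_ba * v_ca)

assert abs(v_bc - 15 / 17) < 1e-12   # matches the diagram
assert abs(g_bc - 17 / 8) < 1e-12    # matches the graphical value
assert abs(g_bc - gamma(v_bc)) < 1e-12
```

Both routes to the relative time-dilation factor agree, which is exactly the consistency the questioner's naive multiplication of gammas was missing.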
postscript: This diagram shows that although Alice says that events N and Q are simultaneous, they are not simultaneous according to Bob or Carol. | {
"domain": "physics.stackexchange",
"id": 39553,
"tags": "special-relativity, relativity, time-dilation"
} |
Why does my self-made air pump not blow air in the correct direction? | Question: I am embarking on a project for building a low-noise air pump for use with things such as air mattresses or pool toys. I just find that the pumps sold at stores are way too noisy, plus it looked like a fun project.
As a first step, I am trying to first build an air pump that works. I have designed and 3d printed the air pump below. It is powered by a brushless DC motor. My idea was that by using a lower-RPM, higher-torque motor I could use a bigger fan to provide the same amount of air in a much quieter way. I opted for an axial design, rather than centrifugal, as from what I can understand, centrifugal pumps are significantly noisier.
The motor is rated for 7030 RPM in a no load setting. The air inlet is on top, and the outlet is on the bottom. The outlet is significantly smaller than the inlet. The fan rotates counter-clockwise.
When turning on the pump I find that barely any air comes out of the outlet, but instead most of the air actually comes out of the inlet. If I attach an inflation nozzle to the outlet, most of the air actually stops coming out from the outlet, and more air comes out from the inlet. Why does this happen and how can I avoid it?
From what I could figure out, the problem is probably related to the static pressure the fan is able to produce, since the outlet is smaller than the inlet, so the pump must be able to compress the air enough to push it through the outlet. How do I increase the static pressure produced by the fan (preferably without increasing rpm due to noise)?
Answer: If your fan is throwing back air out the inlet, you have choked the fan with an exit hole that is too small. Note that what is actually happening is that part of the fan disc is pulling air into the pump while another part of the fan disc is allowing that air to escape, at the same time.
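A rough continuity-equation sketch (Q = A*v) shows why a small outlet chokes an axial fan. All numbers below are assumed for illustration; they are not taken from the question:

```python
rho = 1.2                          # air density (kg/m^3)
a_inlet, a_outlet = 50e-4, 2e-4    # assumed areas: 50 cm^2 in, 2 cm^2 out (m^2)
v_inlet = 5.0                      # assumed fan exit velocity (m/s)

flow = a_inlet * v_inlet           # volumetric flow if nothing chokes (m^3/s)
v_outlet = flow / a_outlet         # velocity needed to push that flow out the small hole
dp = 0.5 * rho * v_outlet ** 2     # dynamic pressure that velocity implies (Pa)
# v_outlet = 125 m/s and dp is roughly 9 kPa, far beyond the few hundred
# pascals an axial fan can develop, so the flow stalls and spills back
# out the inlet instead.
```

With these (made-up but plausible) numbers, the outlet would demand orders of magnitude more static pressure than an axial fan delivers, which is consistent with air escaping back through the inlet.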
Note that using a diffuser downstream of an axial-flow fan to convert kinetic energy in the airflow into static pressure is the wrong way to design an air pump for this application. The entire point of an axial fan is to slightly increase the pressure of a large volume of airflow, and not to develop high pressure in a small airflow.
This fact is chapter one, verse one of the art of turbomachinery and is the reason that high pressure is generated either by centrifugal pumps or piston/diaphragm pumps and not by axial fans. | {
"domain": "engineering.stackexchange",
"id": 5219,
"tags": "fan"
} |
Is color confinement detected? | Question: I'm a graduate student studying QFT. I'm quite interested that is color confinement detected or proved? (both directly and indirectly) Or it is just an assumption?
Answer: Here is an experimentalist's answer.
Color confinement is a theoretical concept arising from the plethora of experimental observations that are summed up theoretically in the Standard Model. We have no free quarks or gluons, we do have quark jets and gluon jets. So confinement as predicted by the ${\rm SU}(3)\times {\rm SU}(2)\times \rm U(1)$ SM is consistent with all the existing data.
One has to keep in mind that a theory applying to experimental data can be falsified, or can be found consistent with the data; but consistent is not proof, it is a temporary validation.
A theory of course has axioms and mathematical proofs, so a theorist should answer whether the theory of QCD allows unconfined manifestations of color. These should be in phase spaces not explored by present experiments. | {
"domain": "physics.stackexchange",
"id": 85532,
"tags": "quantum-field-theory, standard-model, quantum-chromodynamics, elementary-particles, confinement"
} |
Struct to web style string | Question: I've had to make several functions to turn some structures into strings. I am still green when it comes to C, so I am unsure if I am doing this in a very awkward way. The system I am coding for does not have snprintf; I know that would be far more elegant, but I cannot use it.
Any advice?
int device_to_string(char* const asString, pDevice dev, size_t maxLength)
{
char* ipAsString;
size_t actualLength;
struct in_addr addr;
if (dev == NULL)
{
return NULL_ERROR;
}
addr.s_addr = dev->ip;
ipAsString = inet_ntoa(addr);
actualLength = strlen("name=") + strlen(dev->name) +
strlen("&ip=") + strlen(ipAsString) +
strlen("&mac=") + strlen(dev->mac) +
strlen("&type=") + strlen(dev->type) + 1;
if (actualLength > maxLength)
{
return SIZE_ERROR;
}
strncat(asString, "name=", strlen("name="));
strncat(asString, dev->name, strlen(dev->name));
strncat(asString, "&ip=", strlen("&ip="));
strncat(asString, ipAsString, strlen(ipAsString));
strncat(asString, "&mac=", strlen("&mac="));
strncat(asString, dev->mac, strlen(dev->mac));
strncat(asString, "&type=", strlen("&type="));
strncat(asString, dev->type, strlen(dev->type));
asString[actualLength] = '\0';
return NO_ERROR;
}
Answer: Yeah, without snprintf and sprintf it gets a bit tedious, but I think this code is actually quite clear. You use your horizontal and vertical whitespace very well, and it's clear what you're doing with each block of code. You have also controlled for any possible issues that might come up (null pointer, insufficient buffer length, etc). Maybe there's a more concise way to do it, but in terms of clarity and maintainability I think this code will suffice. | {
"domain": "codereview.stackexchange",
"id": 34,
"tags": "c, strings"
} |
Electricity generation from sand particles in airstream hitting surface | Question: I read an experiment which demonstrated that metal filings fired at a surface using air blasts charges the particles, surface and airstream. I understand this is triboelectricity, resulting from friction.
For reference: https://royalsocietypublishing.org/doi/pdf/10.1098/rspa.1929.0004
Experiments show that the charge increases if
airstream is faster
airstream is hotter
particles are finer (greater surface area)
particles are shot at surface obliquely
What if we use concentrated sunlight to heat air (e.g. to 300° C) and blow that air at high speeds to fire fine sand on some surface and continuously do so to generate electricity? Why isn't this feasible? Is there some physical limit that comes into play which makes this idea irrelevant?
EDIT 1: I'm frustrated as to why someone is assuming I meant to make some perpetual mumbo jumbo machine. It's not! This is a conversion of heat (in solar) to electricity. The device being so simple, I think it should be useful for charging cell phones and such, with lack of efficiency being a non-issue, since solar is mostly wasted anyway!
I am using the concentrated sunlight to generate a temperature (and thus, pressure) gradient in the setup, so that airflow is generated. There is no "pump" in my modified system. I am trying to heat air to generate "chimney" effect airflow.
Note: Generally I don't like talk about efficiency when it comes to simple easy to make, use and maintain devices like this. Unless it's too low, don't bring it up.
Answer: This seems to be your question:
What if we use concentrated sunlight to heat air (e.g. to 300° C) and blow that air at high speeds to fire fine sand on some surface and continuously do so to generate electricity? Why isn't this feasible?
It isn't feasible because you have to generate power to blow this air around to capture some tiny electric charge potential. You are converting energy too many times to be useful: solar to heat, solar to pressurized/moving air, sand charges to electricity. You may not like talking about efficiency, but efficiency is what makes things feasible. A machine that mostly heats up air and sand can certainly be built, but these things have to make money. Make a statement to investors that you don't care about efficiency and they'll throw you out of the room. | {
"domain": "engineering.stackexchange",
"id": 5350,
"tags": "electrical-engineering, materials, thermodynamics, experimental-physics"
} |
General formulation of time reversal symmetry action on fermions | Question: I'm wondering about a general way to define the action of time reversal on a fermion field $\psi$. From a few sources I've read (e.g. appendix A of Witten's paper on fermion path integrals), it seems like in general, a reflection of the coordinate $x^\mu$ in space time corresponds to the transformation
$$ R_\mu:\psi(x) \mapsto \gamma_\mu \psi(R_\mu(x)),$$
where $R_\mu(x^\nu) = -x^\nu$ if $\mu=\nu$ and $R_\mu(x^\nu)=x^\nu$ otherwise. If this is true, we should be able to choose time reversal so that it acts on fermions as
$$T: \psi(t,x) \mapsto \gamma_0 \psi(-t,x).$$
Indeed, I have seen papers (e.g. this paper by Cordova et al) where such a choice is made, at least for theories in 2+1 dimensions.
I'm curious if this is a satisfactory definition of time reversal in general (which seems to be implied by the references mentioned), and if it's not, whether there is a simple modification that makes it into a general definition.
One reason why I don't think it can be general is that it seems to be dependent on the signature we choose. For example, the Dirac mass in 2+1D is supposed to be $T$-odd. If we are in $(-,+,+)$ signature then we can choose e.g. $\gamma_0 = i\sigma^y$, in which case one checks that the Hermitian term $im\bar\psi\psi$ is indeed $T$-odd. However, if we are in $(+,-,-)$ signature where we can choose e.g. $\gamma_0=\sigma^x$, then the Hermitian term $m\bar\psi\psi$ is $T$-even. So clearly something with our definition of $T$ is not right.
Answer:
A spinor, in the sense of transforming in a representation of $\mathrm{Spin}(1,3)$, the double cover of the proper orthochronous Lorentz transformations $\mathrm{SO}^+(1,3)$, does not have a transformation behaviour under parity or time reversal transformations, since neither parity nor time reversal are in $\mathrm{SO}^+(1,3)$ - they are not continuously connected to the identity and therefore can't be generated by exponentiating the Lorentz algebra. What we really need is the notion of a pinor, something that transforms in a representation of the pin group, the double cover of the full Lorentz group $\mathrm{O}(1,3)$.
Very annoyingly, while the spinor representations are generally insensitive to the metric sign convention of our spacetime (mostly + or mostly -) because $\mathrm{Spin}(1,3)$ and $\mathrm{Spin}(3,1)$ are isomorphic, $\mathrm{Pin}(1,3)$ and $\mathrm{Pin}(3,1)$ are not. It is therefore rather easy to get tangled in inconsistent sign conventions when talking about the action of parity or time reversal on fermions.
In the end, it turns out that time reversal acting on a standard Dirac fermion will come out to be proportional to $\gamma^1 \gamma^3$, but the sign depends on several sign choices along the way. In order to derive this for your particular conventions, you need to consider the basic solutions of the Dirac equation in terms of a Weyl spinor $\xi$ which will look something like
$$ u(\vec p) = \begin{pmatrix}\sqrt{p\cdot \sigma}\xi \\\sqrt{p\cdot\bar\sigma}\xi\end{pmatrix}$$
and now apply time reversal to this expression, then figure out what sign you need to give $\gamma^1\gamma^3$ so that $\pm\gamma^1\gamma^3u(\vec p)$ is the same as this time-reversed expression. Then take the full mode expansion of the fermion field, consider that the creation/annihilation operators also flip momentum and spin, and work out what the factor of $\gamma^1\gamma^3$ is that ends up in front of the expansion. This method is applied e.g. in Blagoje's notes on CPT symmetry. | {
"domain": "physics.stackexchange",
"id": 55871,
"tags": "field-theory, fermions, time-reversal-symmetry, clifford-algebra"
} |
Green's Function Integration by Parts | Question:
On page 41 (on Green's functions) of "Quantum Field Theory and the Standard Model" by Matthew D. Schwartz there is an equation
$$-\int d^4y[\Box_y\Pi(x,y)]h(y)=-\int d^4 y \Pi(x,y)\Box_y h(y) \tag{3.81}$$
that he says is derived using integration by parts. The same calculation also comes up again on page 81 (on Position-space Feynman rules).
I could not figure out that derivation, so to at least gain plausibility I tried using integration by parts on the two sides separately but in a grossly simplified form:
\begin{align*}
\int f(y)g''(y)dy &= f(y)g'(y)| - \int g'(y)f'(y)dy\\
\int g(y)f''(y)dy &= g(y)f'(y)| - \int f'(y)g'(y)dy\quad\text{so}\\
\int f(y)g''(y)dy - \int g(y)f''(y)dy &= f(y)g'(y)| - g(y)f'(y)|\\
\end{align*}
The best would be to forget about plausibility and provide the exact details for his derivation, but it would be good to know whether the plausibility is on the right track, and just needs more detail on, say, the limits above and below that are omitted. Maybe that's where I am not going far enough.
Answer: Your proof strategy is basically correct; the final ingredient is we usually assume functions vanish fast enough at infinity that boundary terms are zero (e.g. you use this in deriving the Euler-Lagrange equation), which simplifies each use of integration by parts. | {
"domain": "physics.stackexchange",
"id": 42255,
"tags": "field-theory, integration"
} |
Salmon tximport | Question: I ran a bulk RNA-seq experiment and got a quant.sf file. Now I am struggling to understand what the tximport package does and how to use it correctly. My ultimate goal is to feed the data into DESeq2 for differential expression analysis. In the documentation:
http://bioconductor.org/packages/release/bioc/manuals/tximport/man/tximport.pdf
We have:
samples <- read.table(file.path(dir,"samples.txt"), header=TRUE)
files <- file.path(dir,"salmon", samples$run, "quant.sf")
I do not understand what this samples.txt is. It is not generated by salmon, so I suppose that I need to write it myself. How then? What is the correct format for the file? And what if I want to feed just one sample in? That is actually what I want initially: to process each sample separately. Could anyone explain step by step how we could utilize tximport when we have just one quant.sf file, to get a raw count matrix to feed into DESeq2?
Answer: You are right, samples.txt is not generated by Salmon (or any other transcript abundance quantifier). In the documentation, you can find a link to an example of what samples.txt should look like.
Note: At the moment, the server is unavailable (I got an Error 503: Service Temporarily Unavailable) - I'd suggest you get directly in contact with the owners / authors.
Luckily, there is a call to the function importing the samples.txt file in the doc itself (variable samples), so you can get an idea of how you have to structure it:
> samples
## pop center assay sample experiment run
## 1 TSI UNIGE NA20503.1.M_111124_5 ERS185497 ERX163094 ERR188297
## 2 TSI UNIGE NA20504.1.M_111124_7 ERS185242 ERX162972 ERR188088
## 3 TSI UNIGE NA20505.1.M_111124_6 ERS185048 ERX163009 ERR188329
## 4 TSI UNIGE NA20507.1.M_111124_7 ERS185412 ERX163158 ERR188288
## 5 TSI UNIGE NA20508.1.M_111124_2 ERS185362 ERX163159 ERR188021
## 6 TSI UNIGE NA20514.1.M_111124_4 ERS185217 ERX163062 ERR188356
As an alternate workaround, find the samples.txt format directly from tximportData Bioconductor page:
download source code
extract
navigate to: [...]/tximportData 2/inst/extdata
read content of samples.txt. | {
"domain": "bioinformatics.stackexchange",
"id": 498,
"tags": "rna-seq, deseq2, tximport"
} |
Calculate co-occurrences of some words in documents dataset | Question: It's a function to calculate the co-occurrences of some words in a news dataset, such as Techcrunch, Wired. The words list could be locations, products or people names.
Words list example:
["New York","Los Angeles","Chicago"]
Return result:
{"Chicago": {"New York" : 1, "Los Angeles": 2}}
The co-occurrence between "Chicago" and "New York" is 1.
The problems:
The code below will calculate the same co-occurrence of two words twice. And processing a test dataset with 5 articles takes 13.5 s.
So a dataset of 100k articles will take about 75 hours. Is there any better solution to improve the performance? Thanks!
"13.5 s ± 415 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)"
from collections import defaultdict  # needed by the snippet below

def get_co_occurrences(data):
com = defaultdict(dict)
for temp in data:
for i in range(len(city_list)-1):
for j in range(i+1, len(city_list)):
w1, w2 = city_list[i], city_list[j]
if " " + w1 + " " in temp and " " + w2 + " " in temp:
print(w1,w2)
if com[w1].get(w2) is None:
com[w1][w2] = 1
else:
com[w1][w2] += 1
return com
Edited: Python version
Python 3.6.2 | packaged by conda-forge | (default, Jul 23 2017, 22:59:30)
[GCC 4.8.2 20140120 (Red Hat 4.8.2-15)] on linux
Answer:
There is no docstring. What does the function do? What argument does it take? What does it return?
city_list is a global variable. It would be better if this were passed as a parameter to the function: this would make it easier to reuse the function or write unit tests.
The algorithm has nothing to do with cities: it would work just as well for countries, or people, or any other set of search terms. It would make this generality clearer if city_list were named something like terms.
The variable names are vague and the code would be easier to understand if they were more specific. data is a collection of documents, so documents would be clearer. temp is a document from the collection, so document would be clearer. com is a data structure containing counts of co-occurrences that will be returned as the result of the function, so a name like result would be clearer.
There are two loops over i and j such that 0 <= i < j < len(city_list). These could be combined into a single loop using itertools.combinations:
for i, j in combinations(range(len(city_list)), 2):
The only purpose of these indexes is to select the two cities. It would be simpler to iterate over the cities directly, avoiding the need for their indexes.
for w1, w2 in combinations(city_list, 2):
The data structure com is a mapping from first city to mappings from second cities to counts of co-occurrences. Unless you really need all those mappings, it would be simpler if the data structure were a mapping from pair of cities to count of co-occurrences.
When you have a data structure containing counts of items, using collections.Counter often makes the code simpler. First, create the counter object:
result = Counter()
and then increment the count:
if " " + w1 + " " in temp and " " + w2 + " " in temp:
result[w1, w2] += 1
Searching for terms with spaces before and after will miss matches at the beginnings and ends of documents. It would be more reliable to use a regular expression match together with the \b (word boundary) code:
if all(re.search(r'\b{}\b'.format(re.escape(term)), temp) for term in (w1, w2)):
(See the documentation for re.search and re.escape.)
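To see the difference concretely, here is a tiny runnable check (the sample document is invented):

```python
import re

# Why \b helps: the original space-padded test misses a term at the very
# start (or end) of a document, while a word-boundary search does not.
document = "Chicago hosted the summit with New York"

space_test = " " + "Chicago" + " " in document          # False: no leading space
boundary_test = re.search(r'\bChicago\b', document)      # matches at position 0

print(space_test, boundary_test is not None)  # False True
```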
The implementation in the post searches for every pair of cities in every document. If there are \$k\$ cities, \$n\$ documents, and \$w\$ words per document, then the overall runtime is \$Θ(nk^2w)\$.
An alternative approach is to check each word in each document to see if it is a city, and then to iterate over all pairs of cities found in the document. This has runtime \$O(k + n(w \log w + \min(w, k)^2))\$ since there can be \$O(w)\$ cities found in each document. This is more efficient than the original code in all cases, and much faster if (as one might expect) most words in the document are not search terms.
This could be implemented like this, by joining the search terms together into a single regular expression and then using the findall method:
from collections import Counter
from itertools import combinations
import re
def co_occurrences(documents, terms):
"""Return mapping from pairs of search terms to a count of documents
containing both terms.
"""
# Ensure that longer terms match in preference to shorter terms
# that happen to be initial substrings.
terms = sorted(terms, key=len, reverse=True)
# Regular expression matching any of the terms, as complete words.
search = re.compile(r'\b(?:{})\b'.format('|'.join(map(re.escape, terms))))
result = Counter()
for document in documents:
matches = sorted(set(search.findall(document)))
result.update(combinations(matches, 2))
return result | {
"domain": "codereview.stackexchange",
"id": 29158,
"tags": "python, performance, python-3.x, matrix"
} |
*low level* messages in ROS (Indigo) | Question:
Hello everyone,
I'm beginner in ROS and I have a question about the messages in ROS.
Are there low level messages in ROS?
For example, if I want to move the robot to location (x,y), I need to send a MoveBaseGoal message. I want to know if there is a way to see messages that go directly to the motors (if they exist)?
Thanks.
Originally posted by matansar on ROS Answers with karma: 43 on 2017-04-03
Post score: 0
Answer:
Most robots implement a cmd_vel topic with the geometry_msgs/Twist type which commands the linear and angular velocities of your mobile base. This isn't quite raw motor commands, but it's much closer to an actuator command.
Some robots perform the conversion from cmd_vel to raw motor commands in the driver node or on the embedded motor controller board, and some robots do the conversion to raw motor commands in a ROS node and publish additional topics that are then passed to the motor control node. You'll need to refer to the documentation for your specific robot to find out how it does this. (You may also be able to figure it out by inspecting the list of topics when your robot is running)
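As a rough sketch of the logic a keyboard-teleop node puts between key presses and cmd_vel (pure Python here so it runs standalone; in a real node these tuples would fill twist.linear.x / twist.angular.z of a geometry_msgs/Twist and go through a rospy publisher; the key bindings and speeds are invented):

```python
# Invented example bindings: key -> (linear.x in m/s, angular.z in rad/s).
KEY_BINDINGS = {
    'i': (0.5, 0.0),   # forward
    ',': (-0.5, 0.0),  # backward
    'j': (0.0, 1.0),   # rotate left
    'l': (0.0, -1.0),  # rotate right
    'k': (0.0, 0.0),   # stop
}

def key_to_twist(key):
    """Return (linear.x, angular.z) for a pressed key; stop on unknown keys."""
    return KEY_BINDINGS.get(key, (0.0, 0.0))

print(key_to_twist('i'))  # (0.5, 0.0)
```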
Originally posted by ahendrix with karma: 47576 on 2017-04-03
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by matansar on 2017-04-04:
I have tried to find some documentation about my robot.
I work on the Armadillo robot (robotican, Indigo). Do you know some links for documentation?
Thanks!
Comment by ahendrix on 2017-04-04:
A quick search on the ROS wiki finds http://wiki.ros.org/robotican_armadillo?distro=indigo and http://wiki.ros.org/robotican , and this tutorial for sending cmd_vel: http://wiki.ros.org/robotican/Tutorials/Command%20you%20robot%20with%20simple%20motion%20commands | {
"domain": "robotics.stackexchange",
"id": 27504,
"tags": "ros"
} |
Why does potential drop exist? | Question: I'm sorry that the question is likely to sound stupid but I just can't seem to be able to wrap my head around it.
I think I am a bit comfortable with the mathematical idea of it but I still can't seem to wrap my head around it. I understand that in moving from the point of lower potential to the point of higher potential, the electrons in a solid conductor are continually accelerated by the electric field but also keep losing this energy in the form of heat to the conductor (and hence to the surroundings).
What I don't understand is the fact that more energy is dissipated within a resistor of length l than would be dissipated in a wire of length l. It just doesn't make sense.
I understand the collisions should change the direction of the electrons to any random direction, but the thing is that there is a general movement towards the source of higher potential, and the kinetic energy of the electrons is increased with the distance they travel from the start of the resistor to the end of it (and hence the amount of energy they lose in the resistor should be a function of the distance travelled within the resistor, independent of resistivity).
If the collisions are more frequent within the resistor, that means that the mean time to a collision is lower. That simply means that the electron travelled less distance before another collision, and that means that while it lost energy more frequently, it also lost a proportionately smaller amount of energy.
Of course this doesn't make sense because I have been to my school's physics lab and I know how hot the resistors can get. But after two days of pondering and googling I still can't seem to find a reasonable explanation for why I'm wrong. I understand that this likely sounds silly and I apologize for that, and I sincerely thank anybody who takes their time to answer this silly question.
Answer: The dissipation in a resistor or wire of the same length $\ell$ doesn't have very much to do with the length - it has to do with the resistance.
For a given current (number of electrons per second crossing a particular point), there will be a voltage drop associated with a given resistance - this is Ohm's law.
A nice intuitive way to think about this is with a bundle of very thin wires. Each wire is so thin that it has quite high resistance; but thousands of these wires in parallel look like a low resistance wire.
If I have that bundle of wires with a certain voltage V across it, and a (low) resistance $R_w$, then we will get a rather large current. Many electrons travel the wire per second, and each of them loses a little bit of energy. But because there are many electrons, the total energy lost is quite large.
Now if I look at just one strand of the bundle, I have a resistor. If I had 1000 strands in the bundle above, then the one strand carries 1/1000th of the current - and 1/1000th of the energy is dissipated since the voltage difference is the same but there are 1000x fewer electrons in this one strand than in the entire bundle.
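Putting rough numbers on that picture (all values invented): the bundle and a single strand see the same voltage, but the bundle carries 1000 times the current and therefore dissipates 1000 times the power.

```python
# Numeric sketch of the bundle analogy above (all values invented).
V = 10.0           # volts across the bundle (and across each strand)
R_strand = 1000.0  # ohms, one thin strand
n = 1000           # strands connected in parallel

R_bundle = R_strand / n      # parallel combination: 1 ohm
P_bundle = V**2 / R_bundle   # total power dissipated by the bundle
P_strand = V**2 / R_strand   # power dissipated by a single strand

print(P_bundle, P_strand)  # 100.0 0.1
```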
Does that clear it up for you? | {
"domain": "physics.stackexchange",
"id": 20177,
"tags": "electric-current, voltage"
} |
Same code to parse int, float, decimal? | Question: I have a method to parse strings. It can be used to parse a float using a specific culture and then get a new string with a specific out-culture. The same applies to ints, doubles and decimals.
The code that I've written is quite repetitive for each of the different parse methods which makes it hard to maintain (especially as I am just about to make the method a lot more complex).
Is it possible to make this code less repetitive?
if (mapping.ParseDecimal)
{
decimal i;
if (decimal.TryParse(value, numberStyle, inCulture, out i))
{
return i.ToString(outCulture);
}
else
{
if (logger.IsDebugEnabled)
logger.DebugFormat("Could not parse value \"{0}\" to a decimal using the culture \"{1}\".", value, inCulture);
}
}
else if (mapping.ParseDouble)
{
double i;
if (double.TryParse(value, numberStyle, inCulture, out i))
{
return i.ToString(outCulture);
}
else
{
if (logger.IsDebugEnabled)
logger.DebugFormat("Could not parse value \"{0}\" to a double using the culture \"{1}\".", value, inCulture);
}
}
else if (mapping.ParseFloat)
{
float i;
if (float.TryParse(value, numberStyle, inCulture, out i))
{
return i.ToString(outCulture);
}
else
{
if (logger.IsDebugEnabled)
logger.DebugFormat("Could not parse value \"{0}\" to a float using the culture \"{1}\".", value, inCulture);
}
}
else if (mapping.ParseInt)
{
int i;
if (int.TryParse(value, numberStyle, inCulture, out i))
{
return i.ToString(outCulture);
}
else
{
if (logger.IsDebugEnabled)
logger.DebugFormat("Could not parse value \"{0}\" to an int using the culture \"{1}\".", value, inCulture);
}
}
Answer: If repetition is your primary concern, you could try doing something like this:
public delegate string ParserMethod(string value, NumberStyles numberStyle, CultureInfo inCulture, CultureInfo outCulture);
public static class NumericParser
{
public static readonly ParserMethod ParseInt = Create<int>(int.TryParse);
public static readonly ParserMethod ParseFloat = Create<float>(float.TryParse);
public static readonly ParserMethod ParseDouble = Create<double>(double.TryParse);
public static readonly ParserMethod ParseDecimal = Create<decimal>(decimal.TryParse);
public static Logger Logger { get; set; }
delegate bool TryParseMethod<T>(string s, NumberStyles style, IFormatProvider provider, out T result);
static ParserMethod Create<T>(TryParseMethod<T> tryParse) where T : IFormattable
{
return (value, numberStyle, inCulture, outCulture) =>
{
T result;
if (tryParse(value, numberStyle, inCulture, out result))
{
return result.ToString(null, outCulture);
}
else
{
if (Logger != null && Logger.IsDebugEnabled)
Logger.DebugFormat("Could not parse value \"{0}\" to a {1} using the culture \"{2}\".", value, typeof(T).Name, inCulture);
return "";
}
};
}
}
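(Aside: the same close-over-a-converter idea can be sketched in Python; the converter functions, names, and error handling below are illustrative and not part of the .NET answer.)

```python
def make_parser(convert, type_name, logger=None):
    """Build a parse-and-reformat function around one conversion routine."""
    def parse(value, out_format):
        try:
            result = convert(value)
        except ValueError:
            if logger:
                logger('Could not parse value "%s" to a %s.' % (value, type_name))
            return ""
        return format(result, out_format)
    return parse

# One factory, many parsers: the repetition lives in one place.
parse_int = make_parser(int, "int")
parse_float = make_parser(float, "float")

print(parse_int("42", "d"))       # 42
print(parse_float("3.5", ".2f"))  # 3.50
```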
This way, you only have to pass around the appropriate ParserMethod you want to use. In your case, you could map your different mapping values to the appropriate ParserMethod. And call it when needed. | {
"domain": "codereview.stackexchange",
"id": 519,
"tags": "c#, parsing"
} |
Doubt in displacement time graph for a body moving with constant, negative velocity | Question: This is a displacement-time graph of a body having constant, negative velocity.
As we can see, the angle $θ$ (in the anti-clockwise direction) is greater than $270^\circ$ and less than $360^\circ$, indicating that the slope $m$ or velocity $v$ is $-ve$ (as $\tan(θ)$ is $-ve$).
However, upon doing further research, the agreed upon graph looks like this:
In this case the slope $m$ is $-ve$ as well. Are both graphs correct? Or is the first graph wrong?
Answer: The only essential difference between the two graphs is that the first one is showing displacement $x$ against time, so it starts with $x=0$ at $t=0$ (because displacement is measured from the starting position), whereas the second is showing position against time, and the point at which the position is counted as zero is arbitrary. So neither graph is wrong - they are just measuring different things on the vertical axis.
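A quick numeric check of this point (numbers invented):

```python
# Two descriptions of the same motion: displacement measured from the start
# (x(0) = 0) versus position with an arbitrary zero (x(0) = 5). The slope,
# i.e. the velocity, comes out identical.
def slope(f, t0, t1):
    return (f(t1) - f(t0)) / (t1 - t0)

v = -2.0

def displacement(t):
    return v * t          # first graph: starts at 0

def position(t):
    return 5.0 + v * t    # second graph: starts at an arbitrary 5

print(slope(displacement, 0.0, 3.0), slope(position, 0.0, 3.0))  # -2.0 -2.0
```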
The important similarity is that both graphs have the same slope, so give you the same value for the velocity of the object. | {
"domain": "physics.stackexchange",
"id": 96734,
"tags": "kinematics, velocity, displacement"
} |
Why is the intensity of an alpha ray constant along a material? | Question: I'm taking a course in radiation physics and I've come across the following problem:
A thin beam of alpha particles of intensity $I_0$ and energy $E_0$ is incident on a material. What are the intensity and the energy spectrum after it has travelled a distance $d$ into the material?
The part I am interested in is the one regarding the intensity. I was expecting some kind of attenuation depending on the properties of the material but my professor sent us the solution by email and states that the solution is that the intensity at any point in the interior of the material is $I_0$ and then $0$ outside the material.
Why is this so? Physically I can't understand it. The energy is a function of the position and I though that it should be the same for the intensity.
Answer: Intensity in this context often refers to only the number of alpha particles incident on a unit area per unit time. If you assume that the alpha particles only slow down and none of them are stopped completely, then the number passing through any area does not change with depth and the intensity in this sense is unchanged. Naturally the intensity in terms of energy per unit area per unit time would drop as the alpha particles lose energy.
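A toy calculation of the distinction (values invented; real alphas have a stopping power that rises near the end of their range, which is ignored here to keep the sketch simple):

```python
# Two meanings of "intensity": the particle flux (alphas per unit area per
# unit time) stays constant with depth, while each alpha loses energy, so
# the ENERGY flux drops.
n_flux = 1e6      # particles / (m^2 s), unchanged inside the material
E0 = 5.0          # MeV at the surface
stopping = 1.0    # MeV lost per mm (crude constant stopping power)

def energy(d_mm):
    return max(E0 - stopping * d_mm, 0.0)

depths = (0, 1, 2, 3)  # mm
particle_intensity = [n_flux for d in depths]            # constant
energy_intensity = [n_flux * energy(d) for d in depths]  # decreasing

print(particle_intensity)
print(energy_intensity)
```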
This is one of those unfortunate areas in physics where the same word can mean very different things depending on context. | {
"domain": "physics.stackexchange",
"id": 28371,
"tags": "energy, particle-physics, radiation"
} |
irobot create package/stack | Question:
HI
I have an iRobot Create and a BAM so that I can connect my laptop to the Create via Bluetooth.
I browsed software on the ROS site and found a few packages/stacks for the iRobot Create. But they are all for some kind of modified Create (such as an add-on camera, laser range finder, etc.).
Is there any package that is purely for the barebone irobot Create that I can try?
Any recommendations.
Thank you.
Jack
Originally posted by Jackie on ROS Answers with karma: 103 on 2012-09-28
Post score: 1
Answer:
Check out: http://wiki.ros.org/irobot_create_2_1
For the iRobot Create 2, I'm actually working on a barebones here: https://github.com/lucbettaieb/create_2_drivers
Originally posted by luc with karma: 350 on 2015-06-30
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 11172,
"tags": "create"
} |
Can decoherence occur just by simple reflection? | Question: I create an entangled photon pair. One photon is sent inside a resonant cavity which reflects the photon many number of times. Will there be decoherence between photon in the resonant cavity and its entangled partner?
If yes, why should it happen? Also, is decoherence affected by the type of mirror surface, i.e. glass vs. metal?
Answer: No, since mirrors don't cancel the waviness and they preserve phase.
Same for simple transport in a medium (having a refraction index), BTW. | {
"domain": "physics.stackexchange",
"id": 29180,
"tags": "quantum-mechanics, optics, quantum-entanglement, reflection, decoherence"
} |
Speed of waves in vacuum | Question: All electromagnetic waves travel at speed of light in vacuum. Gravitional waves also travel at speed of light.
I am having this vague notion that all waves in vacuum must travel at speed of light.
Is there any theorem like that?
If yes, please elaborate.
If not, please give some counterexamples.
Answer: That all zero-mass particles travel at the speed of light in vacuum is not a theorem, but a result of imposing special relativity on the description of particles in quantum mechanics.
Electromagnetic waves are emergent from zillions of photons, and the same would be true for gravitational waves, emergent from zillions of gravitons, once gravity is definitively quantized.
In the standard model of particle physics before symmetry breaking, all particles have zero mass, acquiring mass by mechanisms such as the Higgs mechanism of symmetry breaking for the electroweak case. So in the cosmological models, before electroweak symmetry breaking one could have the table of particles at zero mass, and in this sense the emergent "radiations" would travel at the speed of light in vacuum. The same holds for GUTs symmetry breaking. By construction, as long as special relativity holds.
"domain": "physics.stackexchange",
"id": 55824,
"tags": "general-relativity, waves, spacetime, speed-of-light, vacuum"
} |
Is charge density zero in a dielectric material and why? | Question: I'm trying to solve a problem involving a parallel-plate capacitor. I can't decide whether to use Poisson's equation or Laplace's equation.
The question is, is there $\rho_v$ in a piece of dielectric and why?
Answer: The charge density in the bulk of the dielectric is zero, but the net result of the electric polarization is that charge builds up on the surfaces.
You need to include this charge if you use Maxwell's equations for vacuum. You do not need to include this charge if you use Maxwell's equations in a medium, as it is already accounted for. (This is the version with $D$ and $H$ as well as $E$ and $B$.)
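A minimal numeric sketch of the parallel-plate case (values invented): with $\rho_v = 0$ in the slab you solve Laplace's equation, the field inside is uniform, and the polarization shows up only as bound surface charge.

```python
# Parallel-plate capacitor with a dielectric slab filling the gap.
# Zero volume charge inside the slab -> Laplace's equation -> uniform E.
eps0 = 8.854e-12            # vacuum permittivity, F/m
V, d, eps_r = 100.0, 1e-3, 4.0   # volts, slab thickness (m), relative permittivity

E = V / d                        # uniform field from the Laplace solution, V/m
P = eps0 * (eps_r - 1.0) * E     # polarization magnitude, C/m^2
sigma_bound = P                  # bound surface charge density |P . n|, C/m^2

print(E, sigma_bound)
```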
Of course you don't need the whole of Maxwell's equations to solve the problem of the parallel plate capacitor, so long as you know how to modify Gauss's law appropriately for a dielectric. | {
"domain": "physics.stackexchange",
"id": 16184,
"tags": "electromagnetism"
} |
Function to store data in MySQL database (using OurSQL) | Question: I am parsing a huge XML file. It contains some million article entries like this one:
<article key="journals/cgf/HaeglerWAGM10" mdate="2010-11-12">
<author>Simon Haegler</author>
<author>Peter Wonka</author>
<author>Stefan Müller Arisona</author>
<author>Luc J. Van Gool</author>
<author>Pascal Müller</author>
<title>Grammar-based Encoding of Facades.</title>
<pages>1479-1487</pages>
<year>2010</year>
<volume>29</volume>
<journal>Comput. Graph. Forum</journal>
<number>4</number>
<ee>http://dx.doi.org/10.1111/j.1467-8659.2010.01745.x</ee>
<url>db/journals/cgf/cgf29.html#HaeglerWAGM10</url>
</article>
I step through the file and parse those articles with lxml. If I run the code without storing the items into my database, it makes some 1000 entries in ~3 seconds. But if I activate the storage, which is done by the function below, it makes some 10 entries per second. Is this normal? I remember parsing the file once upon a time and the database was not such a bottleneck. But I had a different approach... (looking through my files to find it)
def add_paper(paper, cursor):
questionmarks = str(('?',)*len(paper)).replace("'", "") # produces (?, ?, ?, ... ,?) for oursql query
keys, values = paper.keys(), paper.values()
keys = str(tuple(keys)).replace("'", "") # produces (mdate, title, ... date, some_key)
query_paper = '''INSERT INTO dblp2.papers {0} VALUES {1};'''.\
format(keys, questionmarks)
values = tuple(v.encode('utf8') for v in values)
cursor.execute(query_paper, values)
paper_id = cursor.lastrowid
return paper_id
def populate_database(paper, authors, cursor):
paper_id = add_paper(paper, cursor)
query_author ="""INSERT INTO dblp2.authors (name) VALUES (?) ON DUPLICATE KEY UPDATE id=LAST_INSERT_ID(id)"""
query_link_table = "INSERT INTO dblp2.author_paper (author_id, paper_id) VALUES (?, ?)"
for author in authors:
cursor.execute(query_author, (author.encode('utf8'),))
author_id = cursor.lastrowid
cursor.execute(query_link_table, (author_id, paper_id))
Edit:
I was able to narrow the problem to those three cursor.executes. Perhaps it is a database problem. I will ask over at Stack Overflow, if someone has an idea, why it is that slow. Meanwhile I would be interested if the code can be refactored to be more pythonic. Any ideas?
Edit 2:
If you use a similar approach to mine, storing row by row into the database, don't use the InnoDB engine. It's slower by orders of magnitude. The code sped up again after I changed the engine.
Answer: You might want to try doing everything inside a transaction. It may be faster that way. I think with InnoDB you might be creating and committing a transaction for every statement, which will be slow.
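As a sketch of the single-transaction idea (using sqlite3 as a stand-in so the snippet is self-contained and runs anywhere; the table and column names are made up, and the same batched-commit pattern applies to oursql/MySQL):

```python
import sqlite3

# Stand-in for the MySQL connection; with oursql/MySQL the pattern is identical.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE papers (id INTEGER PRIMARY KEY, title TEXT, year TEXT)")

papers = [{"title": "Paper %d" % i, "year": "2010"} for i in range(1000)]

keys = ("title", "year")
query = "INSERT INTO papers (%s) VALUES (%s)" % (
    ", ".join(keys), ", ".join("?" * len(keys)))

# All inserts run inside one transaction, followed by a single commit,
# instead of the engine implicitly committing after every statement.
cur.executemany(query, [tuple(p[k] for k in keys) for p in papers])
conn.commit()
```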
keys = str(tuple(keys)).replace("'", "")
That's an odd way of doing that, use
keys = '(%s)' % ','.join(keys) | {
"domain": "codereview.stackexchange",
"id": 8161,
"tags": "python, performance, mysql"
} |
Fibonacci Sequence using Recursion with Memoisation | Question: File fibonacci.aec:
syntax GAS ;We are, of course, targeting GNU Assembler here, rather than FlatAssembler, to be compatible with GCC.
verboseMode on ;Tells ArithmeticExpressionCompiler to output more comments into the assembly code it produces (fibonacci.s).
AsmStart
.global fibonacci #We need to tell the linker that "fibonacci" is the name of a function, and not some random label.
fibonacci:
AsmEnd
If not(mod(n,1)=0) ;If 'n' is not a integer, round it to the nearest integer.
n := n + ( mod(n,1) > 1/2 ? 1-mod(n,1) : (-mod(n,1)))
EndIf
If n<2 ;The 1st Fibonacci number is 1, and the 0th one is 0.
returnValue := n > -1 ? n : 0/0 ;0/0 is NaN (indicating error), because negative Fibonacci numbers don't exist
AsmStart
.intel_syntax noprefix
ret #Far return (to the other section, that is, to the C++ program). The way to do a same-section return depends on whether we are in a 32-bit Assembler or a 64-bit Assembler, while the far return is the same (at least in the "intel_syntax mode").
.att_syntax
AsmEnd
ElseIf not(memoisation[n]=0) ;Has that Fibonacci number already been calculated?
returnValue:=memoisation[n]
AsmStart
.intel_syntax noprefix
ret
.att_syntax
AsmEnd
EndIf
;And now comes the part where we are tricking ArithmeticExpressionCompiler into supporting recursion...
topOfTheStackWithLocalVariables := topOfTheStackWithLocalVariables + 2 ;Allocate space on the stack for 2 local variables ('n', the argument passed to the function, and the temporary result).
temporaryResult := 0 ;The sum of fib(n-1) and fib(n-2) will be stored here, first 0 then fib(n-1) then fib(n-1)+fib(n-2).
stackWithLocalVariables[topOfTheStackWithLocalVariables - 1] := temporaryResult ;Save the local variables onto the stack, for the recursive calls will corrupt them (as they are actually global variables, because ArithmeticExpressionCompiler doesn't support local ones).
stackWithLocalVariables[topOfTheStackWithLocalVariables] := n
n:=n-1
AsmStart
.intel_syntax noprefix
call fibonacci
.att_syntax
AsmEnd
temporaryResult := stackWithLocalVariables[topOfTheStackWithLocalVariables - 1]
temporaryResult := temporaryResult + returnValue ;"returnValue" is supposed to contain fib(n-1).
;And we repeat what we did the last time, now with n-2 instead of n-1...
stackWithLocalVariables[topOfTheStackWithLocalVariables - 1] := temporaryResult
n := stackWithLocalVariables[topOfTheStackWithLocalVariables]
n := n - 2
AsmStart
.intel_syntax noprefix
call fibonacci
.att_syntax
AsmEnd
temporaryResult := stackWithLocalVariables[topOfTheStackWithLocalVariables - 1]
temporaryResult := temporaryResult + returnValue
stackWithLocalVariables[topOfTheStackWithLocalVariables - 1] := temporaryResult
n := stackWithLocalVariables [topOfTheStackWithLocalVariables]
returnValue := temporaryResult
memoisation[n] := returnValue
topOfTheStackWithLocalVariables := topOfTheStackWithLocalVariables - 2
AsmStart
.intel_syntax noprefix
ret
.att_syntax
AsmEnd
File let_gcc_setup_gas.cpp:
/*The C++ wrapper around "fibonacci.aec". Compile this as:
node aec fibonacci.aec #Assuming you've downloaded aec.js from the releases.
g++ -o fibonacci let_gcc_setup_gas.cpp fibonacci.s
*/
#include <algorithm> //The "fill" function.
#include <cmath> //The "isnan" function.
#include <iostream>
#ifdef _WIN32
#include <cstdlib> //system("PAUSE");
#endif
extern "C" { // To the GNU Linker (which comes with Linux and is used by GCC),
// AEC language is a dialect of C, and AEC is a C compiler.
float n, stackWithLocalVariables[1024], memoisation[1024],
topOfTheStackWithLocalVariables, temporaryResult, returnValue,
result; // When using GCC, there is no need to declare variables in the same
// file as you will be using them, or even in the same language. So,
// no need to look up the hard-to-find information about how to
// declare variables in GNU Assembler while targeting 64-bit Linux.
// GCC and GNU Linker will take care of that.
void fibonacci(); // The ".global fibonacci" from inline assembly in
// "fibonacci.aec" (you need to declare it, so that the C++
// compiler doesn't complain: C++ isn't like JavaScript or AEC
// in that regard, C++ tries to catch errors such as a
// mistyped function or variable name in compile-time).
}
int main() {
std::cout << "Enter n:" << std::endl;
std::cin >> n;
topOfTheStackWithLocalVariables = -1;
if (n >= 2)
std::fill(&memoisation[0], &memoisation[int(n)],
0); // This is way more easily done in C++ than in AEC here,
// because the AEC subprogram doesn't know if it's being
// called by C++ or recursively by itself.
fibonacci();
if (std::isnan(returnValue)) {
std::cerr << "The AEC program returned an invalid decimal number."
<< std::endl;
return 1;
}
std::cout << "The " << n
<< ((int(n) % 10 == 3)
? ("rd")
: (int(n) % 10 == 2) ? ("nd")
: (int(n) % 10 == 1) ? ("st") : "th")
<< " Fibonacci number is " << returnValue << "." << std::endl;
#ifdef _WIN32
std::system("PAUSE");
#endif
return 0;
}
The executable files for Windows and Linux are available here, and the assembly code that my compiler for AEC generates is available here.
So, what do you think about it?
Answer: Why would you do such a thing?
I understand that you wrote the Arithmetic Expression Compiler, and perhaps want to show it off. But who would ever want to write a function as simple as a Fibonacci sequence generator using three programming languages (AEC, Intel assembly, and C++) mixed together, and type way more code than it would take in either C++ or even pure Intel assembly itself to implement it?
AEC doesn't provide any benefits here. Looking at the generated assembly, AEC does not perform any kind of optimization.
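For comparison, memoised recursive Fibonacci is a few lines in any single high-level language. A minimal sketch in Python, just to show the baseline being compared against:

```python
def fib(n, _memo={0: 0, 1: 1}):
    # The shared dict plays the role of the AEC program's "memoisation" array:
    # each Fibonacci number is computed at most once.
    if n not in _memo:
        _memo[n] = fib(n - 1) + fib(n - 2)
    return _memo[n]

print(fib(30))  # 832040
```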
fibonacci.aec syntax
The syntax in fibonacci.aec looks quite bad. There's assembly code mixed with AEC's own language. It seems AEC generates ATT syntax, and your inline assembly uses Intel syntax, and you have to manually switch between the two. Also, the instructions you do have to add manually seem very trivial: call and ret. It would be much nicer if the AEC language allowed you to express these operations, so you wouldn't need to add assembly.
Comments about your C++ code
Use of global variables
I suppose it is a limitation of AEC that you have to use global variables to communicate between the generated assembly code and the C++ code. However, now you have the problem that you cannot call fibonacci() from different threads simultaneously. There's also a compile-time limit on how many elements of the Fibonacci sequence you can generate, due to the size of stackWithLocalVariables[] and memoisation[].
Floats vs. ints
Your AEC only deals with 32-bit floating point values, but the C++ program deals with integers, and now has to convert to and from floating point variables to satisfy the assembly code. But a lot of conversions are there only because you are reusing float n to store the user's input, even if you clearly expect an integer. Far better would be to declare an int variable in main(), and copy it to n to satisfy fibonacci(), but avoid all the int(n) casts.
Elevenst, twelfnd, thirteenrd
The suffix you add to print out "The n-th Fibonacci number is" is calculated using an expression that doesn't catch all the edge cases. I suggest you just do not try to add such a suffix at all, and instead write something like:
std::cout << "Element " << n << " in the Fibonacci sequence is equal to " << returnValue << ".\n";
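If a suffix is really wanted, the teens (11th, 12th, 13th) need special-casing, which the ternary chain above misses. A minimal sketch in Python, purely as an illustration of the edge cases:

```python
def ordinal(n):
    # 11, 12 and 13 take "th" even though they end in 1, 2 and 3.
    if 10 <= n % 100 <= 13:
        return "%dth" % n
    return "%d%s" % (n, {1: "st", 2: "nd", 3: "rd"}.get(n % 10, "th"))

print(ordinal(11), ordinal(21), ordinal(103))  # 11th 21st 103rd
```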
Use "\n" instead of std::endl
I strongly suggest you use "\n" instead of std::endl; the latter is equivalent to "\n", but it also forces a flush of the output stream. That is usually unnecessary and can be detrimental to performance.
Avoid using std::system() for trivial things
Using std::system() is usually wrong. It has a huge overhead: it has to create a new shell process, that process has to parse the command you gave, and if that command is not a built-in function of the shell, then it has to start yet another process. As you have already noticed, it also is not portable between different operating systems. And something as trivial as std::system("PAUSE") can be replaced by a simple C++ statement like:
std::cin.get(); | {
"domain": "codereview.stackexchange",
"id": 38759,
"tags": "c++, recursion, assembly, fibonacci-sequence, memoization"
} |
how can I control the motors or joints in the gazebo | Question:
I have one URDF file, where I have declared all the links, joints, actuators, etc. I have imported the model into the Gazebo simulator. Now I want to control the motors or joints. I have tried pr2_controller_manager and pr2_interface, but nothing is working.
Originally posted by Robot on ROS Answers with karma: 180 on 2014-10-01
Post score: 0
Answer:
You might want to be a little more specific about what is not working....
In any case: If you have a urdf with joints in the most general case you need the robot_state_publisher to be up and running http://wiki.ros.org/robot_state_publisher and a node that publishes your joint_states, see http://wiki.ros.org/joint_state_publisher
Originally posted by Wolf with karma: 7555 on 2014-10-01
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Robot on 2014-10-01:
Thanks for the reply. Actually my URDF is a vehicle model, which has four links for four wheels, and each is connected to an actuator (motor) in the URDF model. I want to run those motors. Which method will be the simplest? Should I do this with pr2_controller_manager or pr2_controller_interface?
Comment by Wolf on 2014-10-02:
You might have a look at ros_control: http://wiki.ros.org/ros_control. However, I have no experience using ros_control. Clearly, you will still need to create or find a ROS driver for your low-level motor controllers...
"domain": "robotics.stackexchange",
"id": 19591,
"tags": "urdf"
} |
Which gate based quantum computer is the best? | Question: I am interested in Drug Discovery and Materials Simulation.
I was told that the IBM 433 qubit quantum computer can only do a circuit of 11 qubits x 11 gates. And Honeywell's quantum computer can only do 12 qubits x 12 gates.
I have two questions...
Is that true?
Is it better to go for a 30-40 qubit quantum simulator instead?
Answer: As of 2023, except for random circuit sampling experiments, there have been no experiments of any kind on a quantum computer where the computation couldn't have been done faster and cheaper and more accurately by using a simulator.
Go into any paper, ignore what it says in the title or the abstract, and just count how many gates and qubits appear in the circuit diagrams and look at how noisy the results are. Simulators are still hands-down better. And I don't even mean big simulators, just... simulators. Stuff you can run in milliseconds on a laptop. That won't remain true forever, hardware is improving and the fault tolerant era is coming, but it's true for now. | {
"domain": "quantumcomputing.stackexchange",
"id": 4503,
"tags": "qiskit"
} |
When a planet has a high gravity, is it impossible to build and launch a successful chemical rocket to space? | Question: Just recently a large rocky planet was discovered. "Astronomers have discovered a new type of rocky planet beyond the solar system that weighs more than 17 times as much as Earth while being just over twice the size"
http://www.reuters.com/article/2014/06/02/us-astronomy-exoplanet-idUSKBN0ED29V20140602
Then I was thinking, if aliens lived on this planet, could they get off the planet with a normal chemical rocket? Or would the rocket have to be so massive that it could not be built, or so heavy that the amount of fuel required would push the weight too far so that it could never reach space?
I wonder if there are sad planets where aliens never made it into space due to the large amount of gravity.
Answer: It never becomes impossible per se, but at some point there could be so much gravity that construction of a working rocket would be beyond our current ability to engineer something that could work. That is, it might take impracticably huge quantities of fuel, or require materials stronger than we can construct.
There are just a couple of amazingly simple equations that govern this (under some reasonably idealized conditions). First, understand that to break free of the gravity of any planet, a craft must reach escape velocity ($v_e$). For a spherical body, this is given by
$$ v_e = \sqrt{2GM\over r} $$
Where:
$G$ is the gravitational constant
$M$ is the mass of the body
$r$ is the distance from the center of gravity (assuming we start from the planet's surface, this is just the radius of the planet).
Now if we want to leave this planet, we need to go from not moving at all to moving at least this fast. Otherwise, we either fall back to the planet and crash, or (if we are lucky) get stuck in an orbit around the planet. This change in velocity is called "delta v" or $\Delta v$.
A craft's available $\Delta v$ is given by the Tsiolkovsky rocket equation:
$$ \Delta v = v_x \ln \frac{m_0}{m_1} $$
Where:
$v_x$ is the effective exhaust velocity (essentially, the fuel efficiency of the rocket)
$m_0$ is the initial total mass of the craft with fuel
$m_1$ is the final mass of the craft with no fuel
So, we must engineer our craft to have at least enough $\Delta v$ to reach escape velocity (plus enough to do something interesting afterwards, like land on the destination planet, or overcome any atmospheric drag...), but at a minimum we require:
$$ \begin{align}
\Delta V &> v_e \\
v_x \ln \frac{m_0}{m_1} &> \sqrt{2GM\over r}
\end{align}$$
This means:
launching a smaller payload reduces fuel required
more efficient engines help a lot
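As a rough worked example, assuming 17 Earth masses and ~2.3 Earth radii for the planet in the article, and an effective exhaust velocity of 4.4 km/s for a good chemical engine (all three figures are illustrative assumptions):

```python
import math

G = 6.674e-11                          # gravitational constant, m^3 kg^-1 s^-2
M_earth, r_earth = 5.972e24, 6.371e6   # Earth's mass (kg) and radius (m)

def v_escape(M, r):
    # v_e = sqrt(2 G M / r)
    return math.sqrt(2 * G * M / r)

def mass_ratio(delta_v, v_x):
    # Tsiolkovsky: m0/m1 = exp(delta_v / v_x)
    return math.exp(delta_v / v_x)

v_x = 4400.0  # assumed effective exhaust velocity, m/s
print(v_escape(M_earth, r_earth))              # ≈ 1.12e4 m/s (11.2 km/s)
print(mass_ratio(v_escape(M_earth, r_earth), v_x))              # fuelled/dry mass ≈ 13
print(mass_ratio(v_escape(17 * M_earth, 2.3 * r_earth), v_x))   # ≈ 1000: almost all fuel
```

So the big planet's rocket is not impossible, just absurdly dominated by fuel mass.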
In any case, there's no way to increase the mass $M$ of the planet such that this equation can not be solved by carrying more fuel, making more efficient engines, making a lighter payload, etc. You just might end up with an absurd solution. | {
"domain": "physics.stackexchange",
"id": 14934,
"tags": "newtonian-gravity, rocket-science"
} |
Ferrofluid between glass plates: what’s going on? | Question: There’s a phenomenon I ran across recently and I’m trying to understand what’s going on with it. When you have a thin layer of ferrofluid between two glass plates and move a magnet closer and farther away from it, you get a really interesting transition between two states. When the magnet’s at a distance, the fluid breaks up into large droplets like this:
…but when the magnet’s closer, the droplets merge together into thin tendrils, a lot like the kind you get out of a reaction-diffusion process.
My question is: what forces are at play here? I assume the blobs/tendrils are held together by surface tension, but they seem to be repelling each other—are they acting as individual magnets? If so, what causes them to stay together at all instead of the intra-blob repulsion breaking them up?
Answer: I think that the behavior can be understood by just considering the energies of the various configurations. In the initial state you have relatively large blobs because the ferrofluid is just mainly trying to minimize surface tension energy. But when you subject the sandwiched ferrofluid to a strong magnetic field, then a large amount of magnetization is induced in the ferrofluid since ferrofluids, like all ferromagnetic materials, have a very high magnetic permeability. So then we have the situation shown on the left side of the diagram below.
The problem is that this is not a very energetically favorable configuration, because all of those "N" poles are trying to repel other nearby "N" poles, and all those "S" poles are trying to repel other nearby "S" poles. It is much more energetically favorable for the fluid to break up into tendrils, as shown on the right side of the diagram.
Notice that the same sort of thing tends to happen with solid ferromagnetic materials. The lowest energy state of a solid piece of ferromagnetic material is for the magnetic domains to arrange themselves into randomly oriented small domains. This arrangement minimizes the magnetic energy of the material. It's highly energetically unfavorable for a solid piece of ferromagnetic material to all magnetize as one big magnetic domain oriented in a single direction. The reason that such highly magnetized ferromagnets can exist is that the material is heavily doped with pinning sites which prevent the big magnetic domain(s) from breaking up and rearranging themselves into smaller, randomly oriented magnetic domains. | {
"domain": "physics.stackexchange",
"id": 52113,
"tags": "electromagnetism, fluid-dynamics, surface-tension"
} |
Does water in big cities get heated (boiled) for treatment? | Question: I live in Bangkok and read how water comes to customers here:
I understand from the chart that water go through Thon Buri treatment plant (west Bangkok) and Mahasawat treatment plant (east Bangkok) before being pumped and transferred to the customer.
In the chart I clicked the links for these two treatment plants and read information in other parts of the site, and understood that water is treated with chloride-, fluoride- and sulfate-based chemicals, but it wasn't clear to me whether water is purified from bacteria by heating (boiling).
Does treated water in big cities get heated (boiled) to destroy bacteria as well as proteins, and if so, where does all the amino acid residue go to?
Answer: No. Heating the water would be extraordinarily expensive. In water treatment plants all over the world, bacteria are destroyed by adding chlorine gas (Cl2) or similar substances that release Cl2 when dissolved in water. The bacteria are killed by the chlorine. So the customer drinks water containing dead bacteria. It does not hurt. | {
"domain": "chemistry.stackexchange",
"id": 13070,
"tags": "water, toxicity, proteins, amino-acids, filtering"
} |
Does a higher velocity make a collision more or less elastic? Does it have any impact on it at all? | Question: Basically, if you increase the velocity before the collision, does the collision become more elastic? If you used conservation of energy as proof, (i.e the faster the velocity, and the less percent of energy lost), does that work?
Answer: One measure of the elasticity of a collision is the coefficient of restitution, which is given by
$$e=\sqrt \frac{KE_{after}}{KE_{before}}$$
Where $e$ ranges from 0 to 1: $e=1$ for a perfectly elastic collision and $e=0$ for a perfectly inelastic collision.
If $e$ is a constant, then increasing the initial velocity should not change the elasticity of the collision. However, it is only a constant for a limited range of speeds.
In general, for a given object the greater the deformation the greater the loss in kinetic energy dissipated as heat. And the greater the impact velocity the greater the deformation. For small deformations behavior may approach Hooke's law (linear elastic), depending on the material, and the collision becomes more elastic.
Bottom line: Increasing the velocity before the collision generally makes the collision more inelastic.
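A quick numeric illustration of that definition, with made-up kinetic energy values:

```python
import math

def restitution(ke_before, ke_after):
    # e = sqrt(KE_after / KE_before); e = 1 is perfectly elastic, e = 0 perfectly inelastic
    return math.sqrt(ke_after / ke_before)

print(restitution(100.0, 100.0))  # 1.0  (no kinetic energy lost)
print(restitution(100.0, 64.0))   # 0.8
print(restitution(100.0, 0.0))    # 0.0  (all kinetic energy dissipated)
```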
Hope this helps. | {
"domain": "physics.stackexchange",
"id": 64398,
"tags": "energy, velocity, collision"
} |
How long does an electron stay on a given orbital? | Question: Was wondering what the average time is for an electron on any given orbital, or how often they change energy levels.
Thanks in advance.
Answer: Short answer: of the order of a nanosecond for hydrogen for "allowed" transitions, and the emission rate scales roughly as $Z^2$, where $Z$ is the atomic number. For an oxymoronically named "forbidden" transition, these times increase to tens of milliseconds or fractions of a second.
So let's elaborate: what sets these times?
A point not made enough is that "orbital" is usually taken to mean an energy eigenstate of the Hamiltonian $H_0$ for the electron in a "bare" atom. So, if the "bare" atom were a true model of the atom's nature, an electron in a non-ground-state orbital could not decay at all (the state is an eigenstate of the Hamiltonian), so its lifetime would be infinite!
The decay happens because atom systems are not "bare" like this: these systems are always coupled to quantized electromagnetic field and so the eigenstates of $H_0$ (the orbitals) are not eigenstates of the whole system.
So, to do the full calculation, one needs to know the full Hamiltonian $H_0 + H^\prime$ of combined, coupled atom and EM field. A feeling for this kind of calculation is given in my answer here; the co-efficients in that analysis must in turn emerge from full quantum electrodynamics. But it turns out that a semiclassical analysis for hydrogen yields the right results. One thinks of the excited hydrogen as a classical dipole antenna with dipole moment amplitude $\mu$, which radiates power $P=2\,\omega^4\,\mu^2/(3\,c^3)$ where $\hbar\,\omega$ is the energy level difference for the transition, whence the transition rate in photons per second is $\Gamma = 2\,\omega^3\,\mu^2/(3\,\hbar\,c^3)$. The reciprocal $(3\,\hbar\,c^3)/(2\,\omega^3\,\mu^2)$ estimates how long the dipole takes to radiate its photon. We need now to estimate the dipole moment. This is done (rather crudely) by multiplying the electron's charge $q$ by the mean distance between the electron in the two orbitals; this expectation is $e\,\langle \psi_1\mid \vec{x}\mid\psi_2\rangle$. Amazingly, plugging this value into our classical rate estimate in fact gives us the exact transition rate as calculated from QED; it is:
$$\Gamma_{1\to2} = \frac{4\,\omega^3\,q^2}{3\,\hbar\,c^3} \left|\langle \psi_1\mid \vec{x}\mid\psi_2\rangle\right|^2$$
For example, for the $2\,p\to1\,s$ transition, this formula gives a rate of $6.25\times 10^8{\rm s^{-1}}$, or a lifetime of about $1.4{\rm ns}$. For the $3\,s\to2\,p$ with a much, much smaller overlap $\left|\langle \psi_1\mid \vec{x}\mid\psi_2\rangle\right|^2$ the formula gives $6.3\times 10^6{\rm s^{-1}}$, or a very long lifetime of about $150{\rm ns}$.
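Plugging numbers into the SI form of this rate, $\Gamma = \omega^3 q^2 \left|\langle \psi_1\mid \vec{x}\mid\psi_2\rangle\right|^2 / (3\pi\varepsilon_0\hbar c^3)$, for the $2\,p\to1\,s$ transition (taking the standard hydrogen value $\left|\langle 1s\mid\vec{x}\mid 2p\rangle\right|^2 = (2^{15}/3^{10})\,a_0^2$) reproduces the quoted rate; a quick check:

```python
import math

hbar = 1.0546e-34   # J s
c = 2.998e8         # m / s
q = 1.602e-19       # electron charge, C
eps0 = 8.854e-12    # vacuum permittivity, F / m
a0 = 5.292e-11      # Bohr radius, m

omega = 10.2 * q / hbar          # Lyman-alpha energy gap of 10.2 eV, in rad/s
d2 = (2**15 / 3**10) * a0**2     # |<1s|x|2p>|^2 for hydrogen

gamma = omega**3 * q**2 * d2 / (3 * math.pi * eps0 * hbar * c**3)
print(gamma)      # ≈ 6.3e8 transitions per second
print(1 / gamma)  # lifetime ≈ 1.6e-9 s, i.e. of the order of a nanosecond
```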
The above are all "allowed" transitions, where the overlap $\left|\langle \psi_1\mid \vec{x}\mid\psi_2\rangle\right|^2$ is nonzero. However, all orbitals have a definite parity, being odd or even under spatial inversion; this means that $\psi(-\vec{x}) = \pm\psi(\vec{x})$, so that the quantity $\left|\langle \psi_1\mid \vec{x}\mid\psi_2\rangle\right|^2$ can only be nonzero if one of the orbitals has even, the other odd symmetry to cancel out the odd symmetry of the $\vec{x}$ operator in middle.
"Forbidden" transitions between orbitals of the same parity still happen, but they are much, much slower: they must decay in a two stage process, each stage flipping the parity, where there are two photons whose energy sums to the transition energy. For example, state $2\,s$ lands in the opposite parity (but higher energy) intermediate $2\,p$ state flipping the parity on a "borrowed" quantity of energy and then "pays back" the energy in making the second parity flipping transition $2\,p\to1\,s$. The transition time for this process is $0.12{\rm s}$. | {
"domain": "physics.stackexchange",
"id": 31829,
"tags": "quantum-mechanics, atomic-physics, atoms"
} |
Hokuyo laser not publishing any data | Question:
I have a Jackal with a Hokuyo laser and I am wanting the hokuyo laser to publish information to /front/scan so that I am able to use the Nav stack with my jackal robot, but I am unsure how to do this. The hokuyo is properly connected to the jackal but is not publishing any data for some reason.
Originally posted by dkrivet on ROS Answers with karma: 19 on 2018-07-12
Post score: 0
Original comments
Comment by PeteBlackerThe3rd on 2018-07-12:
Which hokuyo LiDAR are you using? It is connected via USB or ethernet? What node are you running to connect with the LiDAR?
Comment by dkrivet on 2018-07-12:
I am using the Hokuyo UTM-30LX connected via USB. I am unsure of what node to use to connect to the LiDAR, that is the issue. Any help would be much appreciated, I am very new to ROS.
Comment by dkrivet on 2018-07-12:
Ideally I want to publish the LiDAR data to a topic named /front/scan so I can use the nav stack to do autonomous navigation with the jackal
Answer:
You need to install and run the URG node (URG is the protocol the hokuyo sensors use) which will connect to the hardware and publish the laser scan messages to the ROS system. You should be able to install this easily using apt-get.
By default this node publishes laser scan messages on the /scan topic, you can change this using a remap in your launch file to any topic name you want.
Hope this helps.
Originally posted by PeteBlackerThe3rd with karma: 9529 on 2018-07-12
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by dkrivet on 2018-07-13:
Thank you for the help. I have done this and got the laser to publish data correctly, however it is not publishing to my desired frame. I have actually used the hokuyo_node as the jackal is running ROS indigo and I think the hokuyo_node adds messages to the "laser" frame by default
Comment by dkrivet on 2018-07-13:
However, I want to publish the the "front_laser" frame. How would I go about doing this? Thank you very much
Comment by PeteBlackerThe3rd on 2018-07-13:
You can use the parameter frame_id to set the frame laser scans are published in. The documentation isn't that clear on this unfortunately. | {
"domain": "robotics.stackexchange",
"id": 31256,
"tags": "slam, navigation, ros-kinetic, hokuyo, jackal"
} |
Reaction of orthocresol with butyrolactone in presence of AlCl3 and heat | Question: What will be the major product of the following reaction?
Please also help me with the mechanism. I'm not sure whether C-acylation or O-acylation will occur here. And if O-acylation does occur, I think the heat will cause a Fries rearrangement. What will be the major product of this rearrangement?
Also, if simple Friedel Crafts acylation occurs, will AlCl3 affect the reaction due to formation of -O-AlCl3 (Lewis acid-base interaction), something like what happens in the case of Friedel Crafts reactions with aniline? (I know aniline cannot undergo FC reactions due to deactivating effect of -NH-AlCl3 interactions)
I'm unable to upload a pic of the question due to some technical problem, but what I mean is reaction of ortho cresol and butyrolactone in AlCl3, along with heat.
Answer: I can take a guess since no one else has provided an accepted answer.
First, I don't think lactones will take part in FC-acylation on an arene, but rather prefer alkylation as mentioned in this paper. In the case of o-cresol and heat, this could be a possible explanation for your question why the O-acylation takes place instead of the C-acylation.
Second, when the Fries rearrangement occurs, and the acylium ion is released in the intermediate, the phenolic oxygen is complexed with the aluminum chloride (second to last step here). If you imagine an ortho-methyl group, as well as the complexed aluminum chloride, it's plausible substitution to the remaining ortho-site is not sterically favorable. This would force it to substitute elsewhere on the ring.
As for why "elsewhere" prefers para to the methyl group and not the oxygen—going by what you wrote in your comment—I'm not sure. The para alumino-oxy substitution should produce the more stabilizing resonance structure for the arenium intermediate. At least, in one other reaction of electrophilic aromatic substitution with ortho-cresol, namely its nitration, the 3/5-nitro products are not found. Maybe it's a solvent matter. Wikipedia states a non-polar solvent prefers ortho in the Fries rearrangement; perhaps if the ortho site is blocked, it will settle for para to a methyl substituent.
Edit
In a summary of the comments, the m-hydroxy-ketone was called into question as the correct product. I mentioned none of the papers I found of similar reactions furnished anything but the para or ortho product. Here are the papers I looked through:
paper 1 - first paragraph: o-tolyl acetates yield predominantly p-hydroxy-ketones.
paper 2 - tables 2 and 3
paper 3 - table 1 (page 378)
paper 4 - 5th paragraph
A few others: book p 223, paper, paper | {
"domain": "chemistry.stackexchange",
"id": 11242,
"tags": "organic-chemistry, reaction-mechanism, phenols"
} |
Is GridSearchCV in combination with ImageDataGenerator possible and recommendable? | Question: I want to optimize some hyperparameters for a CNN architecture by using GridSearchCV (Scikit-Learn) in combination with Data Augmentation (ImageDataGenerator from Keras).
However, GridSearchCV only offers the fit function and not the fit_generator function.
Is it even recommended to use data augmentation with GridSearchCV?
The parameters for the ImageDataGenerator are already fixed and should not be changed. Would it be better to first determine the hyperparameters via grid search without data augmentation and only to use data augmentation for the final model?
What do you think about this topic? What are your experiences?
Answer: As promised, here you can find an example of how you could apply kfold cross validation for a defined convolutional neural network model, applied to an augmented dataset. You can find the code as a simple gist here
It is done as follows:
for a subset of the CIFAR10 images dataset, generate 3 augmented images (by applying horizontal_flip) per original image, so the final augmented dataset should contain 3 times the number of images in the original dataset.
check that the built augmented dataset indeed has the expected number of images. At this point we have only created the augmented dataset; no fitting has happened yet.
apply kfold cross validation on the augmented dataset for several hyperparameter combinations; in this example, 3 pairs of 'learning rate-momentum' have been tried. It is done via the usual 'fit' method:
display the results in a dataframe
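The bookkeeping behind these steps can be sketched with the standard library alone. This is only a sketch, not the gist's code: the real version builds images with `ImageDataGenerator(horizontal_flip=True)` and scores each fold with `model.fit`/`evaluate`, whereas here `augment` and `score_fn` are stand-ins of my own.

```python
import random

def augment(images, n_per_image=3, seed=0):
    """Per original image, draw n_per_image copies, each randomly flipped
    horizontally or left as-is (stand-in for horizontal_flip augmentation).
    Final size is len(images) * n_per_image, as in the steps above."""
    rng = random.Random(seed)
    out = []
    for img in images:
        for _ in range(n_per_image):
            if rng.random() < 0.5:
                out.append([row[::-1] for row in img])  # horizontal flip
            else:
                out.append([list(row) for row in img])
    return out

def kfold_indices(n, k):
    """Split range(n) into k disjoint (train, val) index folds."""
    idx = list(range(n))
    folds = [idx[i::k] for i in range(k)]
    for i in range(k):
        val = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, val

def cross_validate(images, param_grid, score_fn, k=3):
    """Mean validation score per hyperparameter setting, computed on the
    pre-built augmented dataset via plain k-fold splits."""
    data = augment(images)
    results = {}
    for params in param_grid:
        scores = [score_fn(params, [data[j] for j in tr], [data[j] for j in va])
                  for tr, va in kfold_indices(len(data), k)]
        results[params] = sum(scores) / len(scores)
    return results
```

In the real setting `score_fn` would build and fit the CNN with the given learning rate and momentum on the training fold and return its validation score; picking the key with the best mean score is then exactly the grid-search step.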
This way, we have applied hyperparametrization via kfold cross validation; not a full grid search but only 3 pairs of hyperparameters, though the idea would be the same: instead of depending on the fit_generator method, you perform your own k-fold cross validation on the generated augmented dataset. We could also include other data augmentation strategies in this cross validation. | {
"domain": "datascience.stackexchange",
"id": 6727,
"tags": "keras, scikit-learn, grid-search, data-augmentation, gridsearchcv"
} |
catkin_make_isolated builds tf after a package that requires tf? | Question:
I am building ROS on a Raspberry Pi 3 using catkin_make_isolated. The full command is sudo ./src/catkin/bin/catkin_make_isolated --install -DCMAKE_BUILD_TYPE=Release --install-space /opt/ros/indigo -j2.
I have a package that requires tf, but when using the above command, tf is built after the package, and so the whole make fails.
When building without the new package, the make finishes fine.
Any ideas? Thanks
Originally posted by DM2 on ROS Answers with karma: 26 on 2016-11-23
Post score: 0
Original comments
Comment by gvdhoorn on 2016-11-24:
Please include the CMakeLists.txt and package.xml of the package that "requires tf". Make sure to remove all the boilerplate comments from them though (especially CMakeList.txt).
Answer:
Thanks for helping! I actually went over the CMakeLists.txt myself afterwards and, after removing all the junk, found the problem myself. I solved it by removing a non-existent add_executable that was left over from the download off GitHub. Maybe I should notify the guys who uploaded it.
Originally posted by DM2 with karma: 26 on 2016-11-27
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by gvdhoorn on 2016-11-28:
re: "notify the guys": yes, you should. | {
"domain": "robotics.stackexchange",
"id": 26317,
"tags": "ros, catkin-make-isolated"
} |
$m^2$ term in quantum field theory | Question: Consider the following Hamiltonian :
$$ \mathcal{H} = \frac{1}{2} \left ( \partial_{0} \phi \right )^2 + \frac{1}{2} m^2 \phi^2 +\frac{1}{2} \left ( \nabla \phi \right )^2
$$
Recently I faced some difficulty with this Hamiltonian. If we try to use the discretized version of this Lagrangian, we consider a lattice with $N$ sites, where on each site we put a particle; the Hamiltonian then contains the kinetic term + each site's potential term + the coupling term between nearest-neighbour (NN) sites. The point is, in that case, the Hamiltonian would be:
$$ \mathcal{H} = \frac{1}{2m} \left ( \partial_{0} \phi \right )^2 + \frac{1}{2} m \omega^2 \phi^2 +\frac{1}{2} m\Omega^2 \left ( \nabla \phi \right )^2
$$
where $\Omega$ is the strength of coupling. My question is twofold:
How are these two Hamiltonians related? I mean, how can I start from the second one and find the first one?
In the second Hamiltonian, how can I relate $\Omega$ to $ \omega$?
Answer: Ok, this is the free Klein-Gordon Hamiltonian. Strictly, you should be writing this in terms of the momentum canonically conjugate to $\phi$. By convention, we use the symbol $\pi$ for that. So, your original Hamiltonian density is
$$
\mathcal{H} = \frac{1}{2} \pi^2 + \frac{m^2}{2}\phi^2 + \frac{1}{2}(\nabla\phi)^2.
$$
There's a good reason for being a stickler about notation in this way. You'll see, presently.
First, make the substitution $m\rightarrow \omega$ to get
$$
\mathcal{H} = \frac{1}{2} \pi^2 + \frac{\omega^2}{2}\phi^2 + \frac{1}{2}(\nabla\phi)^2.
$$
Next, make the change of field coordinate $\phi\rightarrow \sqrt{m}\phi$. The reason for being such a stickler about notation is that, if you work it out carefully (for instance, by doing the change of variables using the action/Lagrangian density and then transitioning back to the Hamiltonian picture) you'll find that rescaling $\phi$ in this way also requires you to make the substitution $\pi\rightarrow \pi/\sqrt{m}$. That gives you
$$
\mathcal{H} = \frac{1}{2m} \pi^2 + \frac{m\omega^2}{2}\phi^2 + \frac{m}{2}(\nabla\phi)^2.
$$
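The claimed effect of the rescaling can be spot-checked numerically: substituting $\phi \to \sqrt{m}\,\phi$ and $\pi \to \pi/\sqrt{m}$ into the previous density must reproduce the one just written. A small sketch (the function names are mine):

```python
import math

def H_omega(pi, phi, grad_phi, omega):
    """Hamiltonian density after the substitution m -> omega (second display)."""
    return 0.5 * pi**2 + 0.5 * omega**2 * phi**2 + 0.5 * grad_phi**2

def H_rescaled(pi, phi, grad_phi, omega, m):
    """Claimed result of phi -> sqrt(m) phi together with pi -> pi/sqrt(m)."""
    return (0.5 * pi**2 / m
            + 0.5 * m * omega**2 * phi**2
            + 0.5 * m * grad_phi**2)
```

Evaluating `H_omega` at the rescaled arguments and comparing with `H_rescaled` for a few random phase-space points confirms the two expressions agree identically.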
The addition of $\Omega$ is just something that is done by hand, afterward. It is totally unrelated to $\omega$.
What you will find, though, is that the lattice supports modes that have frequencies that look like $$\omega_{\mathrm{mode}} = \sqrt{\omega^2 + \Omega^2 \mathbf{k}^2},$$
with $k$ some version of the wave number appropriate to your lattice. | {
"domain": "physics.stackexchange",
"id": 62453,
"tags": "quantum-field-theory, mass, field-theory"
} |
Potential energy of a mass-spring system | Question: I'm having trouble with determining the potential energy of the mass-spring system depicted below. We assume that the extensions of all 3 springs are zero when $x_1=x_2=0$. The potential energy that I obtained is apparently incorrect. Here is what I've done:
$$U = \frac{1}{2}kx_1^2 + \frac{1}{2}kx_2^2 + \frac{1}{2}k(x_1+x_2)^2 = k(x_1^2+x_1x_2+x_2^2),$$ where I've considered each spring's potential energy separately. The mark scheme's version for $U$ is this:
$k(x_1^2-x_1x_2+x_2^2)$, so I suppose for them, the PE for the middle spring is $U_{mid} = \frac{1}{2}k(x_1-x_2)^2 = \frac{1}{2}k(x_2-x_1)^2$. So my guess is that they've assumed that $x_1$ is negative? Is this the source of the discrepancy?
Answer: Consider a slightly different diagram:
The extensions (the displacement of the end of a spring from its unstretched position) of the three springs are $x_1 \hat x$, $x_2 \hat x$ and $(x_2-x_1) \hat x$, where $x_1$ and $x_2$ are the components of the displacements in the $\hat x$ direction.
Note that components $x_1$ and $x_2$ can be either negative or positive.
This gives the total potential energy stored in the three springs as $\frac 12 kx_1^2 + \frac 12 kx_2^2+\frac 12 k (x_2-x_1)^2$
If $x_1 =x_2$ then there is no elastic potential energy stored in the middle spring as one would expect.
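The algebraic step from the three spring energies to the mark scheme's closed form can be spot-checked numerically (a sketch; the function names are mine):

```python
def U_three_springs(k, x1, x2):
    """Energies of the three springs summed, as derived above."""
    return 0.5 * k * x1**2 + 0.5 * k * x2**2 + 0.5 * k * (x2 - x1)**2

def U_closed_form(k, x1, x2):
    """Mark scheme's expression k(x1^2 - x1*x2 + x2^2)."""
    return k * (x1**2 - x1 * x2 + x2**2)
```

Evaluating both at a few points, including $x_1 = x_2$ where the middle spring stores nothing, shows they coincide.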
So my guess is that they've assumed that $x_1$ is negative?
If the displacement of the end of the left hand spring is as shown in your diagram then the extension of the middle spring is still $(x_2-x_1) \hat x$ but in this case the component $x_1$ will have a negative value so you will end up with $(x_2-(-|x_1|))= (x_2+|x_1|)$ as the extension of the middle spring. | {
"domain": "physics.stackexchange",
"id": 48150,
"tags": "homework-and-exercises, newtonian-mechanics, potential-energy, spring"
} |
Aluminum nitride hydrolysis | Question: I wanted to ask some reference about aluminum nitride hydrolysis in water.
Specifically I would like to know if there is the formation of radicals or of some other reactive species.
Answer: I am quoting from this study[1]:
$\ce{AlN}$ powders hydrolyze in moist air at room temperature, resulting in degradation of the powders. The initial hydrolysis product is amorphous $\ce{AlOOH}$, which is further converted to a mixture of polymorphs of $\ce{Al(OH)3}$ (bayerite, nordstrandite, and gibbsite), forming agglomerates around the unreacted $\ce{AlN}$ core. In the hydrolysis each powder shows an induction period, which is attributed to slow hydrolysis of the surface oxide/oxyhydroxide layer. The powders produced by the carbothermal process show the longest induction periods.
$$
\begin{align}
\ce{AlN + 2H2O → AlOOH_{amorph} + NH3} & \tag{R1}\\
\ce{AlOOH_{amorph} + H2O → Al(OH)3} & \tag{R2}\\
\\\hline
\ce{AlN + 3H2O → Al(OH)3 + NH3}\\
\end{align}
$$
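The atom balance of R1, R2, and their sum can be verified mechanically (a sketch; the formula encodings as atom multisets are mine):

```python
from collections import Counter

# species written as explicit atom counts
ALN = Counter({"Al": 1, "N": 1})
H2O = Counter({"H": 2, "O": 1})
ALOOH = Counter({"Al": 1, "O": 2, "H": 1})
NH3 = Counter({"N": 1, "H": 3})
ALOH3 = Counter({"Al": 1, "O": 3, "H": 3})

def side(*terms):
    """Total atom count of one side of a reaction: terms are (coeff, species)."""
    total = Counter()
    for coeff, species in terms:
        for element, n in species.items():
            total[element] += coeff * n
    return total
```

Comparing the two sides of each reaction confirms R1, R2, and the overall equation are balanced as written.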
You can find a detailed explanation of the reaction in that study.
Reference:
Jinwang Li, Masaru Nakamura, Takashi Shirai, Koji Matsumaru, Chanel Ishizaki and Kozo Ishizaki, Hydrolysis of Aluminum Nitride Powders in Moist Air, 2005, DOI: 10.2240/azojomo0111 | {
"domain": "chemistry.stackexchange",
"id": 16902,
"tags": "reference-request, hydrolysis"
} |
opencv2 sources missing on code.ros.org? | Question:
Hello,
I was trying to get the source for opencv2 from:
https://code.ros.org/svn/ros-pkg/stacks/vision_opencv/trunk/opencv2
but it seems to missing most of the files. The only files that appear at the above location are:
Makefile
conf.py
cvbridge_python.rst
flann.patch
index.rst
mainpage.dox
manifest.xml
opencv2-python-link.patch
pythontest.patch
rosdoc.yaml
I poked around some of the tags as well but they all end up with this kind of short file list. Has the source been moved somewhere? Or am I just missing something obvious.
Thanks!
patrick
Originally posted by Pi Robot on ROS Answers with karma: 4046 on 2011-05-15
Post score: 0
Answer:
Patrick,
Source for OpenCV library can be found here. What your link points to is the ROS wrapper which checks out the code from above location at build time.
Originally posted by arebgun with karma: 2121 on 2011-05-15
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Eric Perko on 2011-05-16:
A good first step when you find a package that doesn't contain what you expect is to check its Makefile(s) and CMakeLists.txt. For example, you can see from the OpenCV Makefile that it is using the svn_checkout.mk script (which is documented on the wiki somewhere).
Comment by Pi Robot on 2011-05-15:
Ah ha! Thanks! | {
"domain": "robotics.stackexchange",
"id": 5574,
"tags": "ros, vision-opencv"
} |
Why is $\log_{2}n = O(n^{0.00001})$? | Question: Why is $\log_{2}n = O(n^{0.00001})$ true?
This is obvious to me when the exponent is $> 1$, but I'm having trouble understanding the cases where the exponent is very close to $0$. I would have to find some constants $c$ and $n_0$ where $\log_{2}n \le cn^{0.00001}$ for all $n \gt n_0$.
Where I'm stumped is that $n^{0.00001} \approx 1$ and $\log_{2}n$ approaches infinity as $n$ gets larger. It feels like regardless of whatever $c$ and $n_0$ I choose, if $n$ was large enough, I could show that $\log_{2}n \ge c$.
Answer: $n^{0.00001}$ is not approximately $1$. $n^{0.00001}$ goes to infinity as $n \to \infty$.
You can see that $\log_2 n = o( n^{0.00001} )$ by taking the limit of their ratio:
$$
\lim_{n \to \infty} \frac{\log_2 n}{n^{0.00001}} =
\lim_{n \to \infty} \frac{(n \ln 2)^{-1}}{0.00001 \cdot n^{0.00001} \cdot n^{-1}} =
\lim_{n \to \infty} \frac{100000/\ln 2}{n^{0.00001}} = 0.
$$
This tells you that you can pick any value of $c>0$, for example $c=1$.
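Because the crossover happens at astronomically large $n$, a direct numerical check should work with exponents: writing $n = 2^m$ gives $\log_2 n = m$ and $n^{0.00001} = 2^{0.00001\, m}$, both of which stay representable in floating point. A quick sketch:

```python
def ratio(m, eps=0.00001):
    """log2(n) / n**eps evaluated at n = 2**m, computed from the exponent m
    so that astronomically large n (e.g. n = 2**10**7) pose no problem:
    log2(n) = m and n**eps = 2**(eps * m)."""
    return m / 2.0 ** (eps * m)
```

The ratio is still above 1 for moderate $n$ but has long since dropped below 1 at $n_0 = 2^{10^7}$, and it keeps shrinking.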
Now you just need a value $n_0$ such that $n^{0.00001} - \log_2 n \ge 0 \; \forall n \ge n_0$.
The derivative of $n^{0.00001} - \log_2 n$ is
$n^{-1}\left( 0.00001 \cdot n^{0.00001} - \frac{1}{\ln 2}\right)$, which is non-negative as soon as $ n^{0.00001} \ge \frac{100000}{\ln 2}$, i.e., for $n \ge (10^5/\ln 2)^{10^5}$.
You can then pick any value of $n_0$ such that $(n_0)^{0.00001} - \log_2 n_0$ is non-negative and $n_0 \ge (10^5/\ln 2)^{10^5}$. For example $n_0 = 2^{10^{7}}$. Indeed:
$(n_0)^{0.00001} = 2^{100} = 1024^{10} > 1000^{10} = 10^{30} > 10^7 = \log_2 n_0$; and
$n_0 = 1024^{10^6} > 1000^{10^6} = 10^{3 \cdot 10^6} > 10^{6 \cdot 10^5} > (10^5/\ln 2)^{10^5}$. | {
"domain": "cs.stackexchange",
"id": 15653,
"tags": "asymptotics"
} |
How to align genomic sequence with corresponding amino acid sequence | Question: Does anyone know of a program that can align a genomic sequence with introns with the corresponding amino acid sequence?
I have both the genomic sequence and the correct amino acid sequence but no information on the genemodel in e.g. gff or genbank format.
This is not what one would call a 'normal' alignment, but I would like it to look like this:
Hypothetical example:
nucleotide sequence with intron in italic:
ATGCACGATACGACTGACGTACGTACGTACGTACGTACGTACGTACGGAGACGTAGACTC
corresponding amino acid sequence:
MHDTTDVRTYVRRRRL
The intron does not encode for amino acid which should thus result in a gap in the 'alignment'.
Desired 'alignment':
ATGCACGATACGACTGACGTACGTACGTACGTACGTACGTACGTACGGAGACGTAGACTC
M H D T T D V R T Y V R R R R L
Answer: Have a look at exonerate's protein2genome!
From the documentation:
Aligning a protein to genomic sequence:
Similarly, it is possible to align a protein sequence to the genome (similar to GeneWise, but with heuristics).
exonerate --model protein2genome query.fasta target.fasta
This model will allow introns in the alignment, but also allow frameshifts, and
exon phase changes when a codon is split by an intron. | {
"domain": "bioinformatics.stackexchange",
"id": 302,
"tags": "sequence-alignment"
} |
Do the Kramers-Kronig relations apply to this example $f(t) =\left(1-t^2\right)^4\cdot\theta(1-t^2)$? | Question: Do the Kramers-Kronig relations apply to this example $f(t) =\left(1-t^2\right)^4\cdot\theta(1-t^2)$?
with $\theta(t)$ is the Heaviside step function.
I made a detailed related question here with full explanations, where I got no answers, but the main doubt could be solved just by knowing if the KK-relation conditions are fulfilled or not by this example.
Answer: The function $f(t)$ is real-valued and even, and so is its Fourier transform $F(\omega)$. Clearly, the real and imaginary parts (the latter being zero) of $F(\omega)$ are not related via the Hilbert transform. But this is also not to be expected, because $f(t)$ is not causal. However, since $f(t)$ vanishes for $|t|>1$, we can shift it such that the resulting function is causal:
$$g(t)=f(t-1)$$
The Fourier transform of $g(t)$, expressed in terms of $F(\omega)$, is
$$G(\omega)=F(\omega)e^{-j\omega}=F(\omega)\cos(\omega)-jF(\omega)\sin(\omega)$$
The real and imaginary parts of $G(\omega)$ satisfy the well-known Hilbert transform relations due to the causality of $g(t)$. This relationship can be shown using Bedrosian's theorem.
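This causality property can be sanity-checked numerically: sample the causal $g(t)=f(t-1)$ with $f(t)=(1-t^2)^4\theta(1-t^2)$, discard the imaginary part of its DFT, and recover the samples from $\operatorname{Re} G$ alone. A pure-Python sketch (the grid, $N=64$ samples on $[0,4)$, is my own choice):

```python
import cmath

def f(t):  # f(t) = (1 - t^2)^4 on |t| <= 1, else 0
    return (1.0 - t * t) ** 4 if abs(t) <= 1.0 else 0.0

N, T = 64, 4.0                                 # one period [0, 4); g lives on [0, 2]
x = [f(n * T / N - 1.0) for n in range(N)]     # g(t) = f(t - 1), causal

def dft(seq):
    n = len(seq)
    return [sum(seq[j] * cmath.exp(-2j * cmath.pi * k * j / n) for j in range(n))
            for k in range(n)]

G = dft(x)

# The inverse DFT of Re(G) alone returns the even part (x[n] + x[-n]) / 2;
# since x[n] = 0 for n >= N/2 here, causality forces x[n] = 2 * even[n]
# for 0 < n < N/2, i.e. the real part of the spectrum determines the signal.
even = [sum(G[k].real * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)).real / N
        for n in range(N)]
```

Recovering the causal samples from the real part of the spectrum alone is exactly the content of the Hilbert-transform pairing: the imaginary part carries no independent information for a causal signal.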
For general non-causal right-sided signals we can also derive equations relating the real and imaginary parts of their Fourier transform. Let $f(t)=0$ for $t<-T$, $T>0$. Consequently,
$$f(t)=f(t)u(t+T)\tag{1}$$
where $u(t)$ denotes the unit step function.
Taking the Fourier transform of $(1)$ gives
$$F(\omega)=\frac{1}{2\pi}F(\omega)\star U(\omega)e^{j\omega T}\tag{2}$$
where $\star$ denotes convolution, and $F(\omega)$ and $U(\omega)$ are the Fourier transforms of $f(t)$ and $u(t)$, respectively. With
$$U(\omega)=\pi\delta(\omega)+\frac{1}{j\omega}\tag{3}$$
Equation $(2)$ becomes
$$F(\omega)=\frac12 F(\omega)+\frac{1}{2\pi}F(\omega)\star \frac{e^{j\omega T}}{j\omega}\tag{4}$$
which is equivalent to
$$F(\omega)=F(\omega)\star \frac{\sin\omega T-j\cos\omega T}{\pi\omega}\tag{5}$$
Splitting $(5)$ into real and imaginary parts, and with $F(\omega)=F_R(\omega)+jF_I(\omega)$ we obtain
$$\begin{align}F_R(\omega)&=F_R(\omega)\star\frac{\sin\omega T}{\pi\omega}+F_I(\omega)\star\frac{\cos\omega T}{\pi\omega}\\F_I(\omega)&=F_I(\omega)\star\frac{\sin\omega T}{\pi\omega}-F_R(\omega)\star\frac{\cos\omega T}{\pi\omega}\end{align}\tag{6}$$
For $T=0$, i.e., for causal $f(t)$, Equation $(6)$ simplifies to the well-known Hilbert transform relationships between real and imaginary parts of $F(\omega)$:
$$\begin{align}F_R(\omega)&=F_I(\omega)\star\frac{1}{\pi\omega}=\mathcal{H}\big\{F_I(\omega)\big\}\\F_I(\omega)&=-F_R(\omega)\star\frac{1}{\pi\omega}=-\mathcal{H}\big\{F_R(\omega)\big\}\end{align}\tag{7}$$ | {
"domain": "dsp.stackexchange",
"id": 11889,
"tags": "fourier-transform, frequency-spectrum, continuous-signals, hilbert-transform, causality"
} |
Refactoring binary search algorithm in Ruby | Question: I wrote the binary search algorithm with Ruby, the code is below. The question is if there's any way to make it look cleaner?
def binary_search(sample_array, x, l, r)
mid = (l + r)/2
return -1 if r < l
return binary_search(sample_array, x, l, mid-1) if (sample_array[mid] > x)
return binary_search(sample_array, x, mid+1, r) if (sample_array[mid] < x)
return mid if (sample_array[mid] == x)
end
result = binary_search(sample_array, x, l, r)
puts "#{result}"
It seems to me that the returns can be minimised or even some more Ruby idioms added. I tried to use a lambda, but something went wrong.
Answer:
Rather than defining it at the top level, why don't you make it a new method on Array?
Taking l and r makes recursive calls possible, but would be annoying for the end user who always has to pass 0 and array.length-1. You should either make those arguments optional, or don't use recursion, or use a private helper method for the recursion.
Someone suggested you could make it tail-recursive, but I don't think any Ruby implementation I know of can optimize tail recursion, so I'm not sure what the point of that would be. Method calls are expensive in Ruby, so if you are after performance, I think a while loop would be more appropriate.
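The while-loop version of the same three-branch search looks like this (sketched in Python for brevity; the loop shape carries over to Ruby one-to-one):

```python
def binary_search_iter(arr, x):
    """Iterative three-branch binary search: same logic as the recursive
    method under review, but with no method-call overhead per step."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] < x:
            lo = mid + 1
        elif arr[mid] > x:
            hi = mid - 1
        else:
            return mid
    return -1
```

It also sidesteps the awkward `l`/`r` arguments the recursive version forces on callers.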
You could consider calling <=> once and switching on the return value using case. I'm not sure if you would like the way this reads better or not. If performance is important, definitely try it and benchmark both ways.
If you do want to use if statements, the 3rd if is redundant.
You are using the type of binary search which has 3 different branches (depending on whether the value at the probed index is >, <, or == to the value you are searching for). There is another way to write a binary search which only uses a single if-else, which is more concise, generally faster, and which is guaranteed to return the first matching element in the array (if there are duplicates). | {
"domain": "codereview.stackexchange",
"id": 1363,
"tags": "ruby, algorithm, search"
} |
Do any two points in Minkowski spacetime determine a unique line? | Question: Any two points in a Euclidean space determine a unique line, but I wasn't sure if this result generalized to Minkowski spacetime given that the latter is not a Euclidean 4-space, but is, instead, a Euclidean 3-space plus a fourth temporal dimension.
Answer: Let us start from the notion of affine space next focussing on the Euclidean $3$-dimensional physical space and finally coming to Minkowski spacetime.
An affine (real) $n$-dimensional space is a triple $(\mathbb A,\vec{\cdot}, V)$, where $\mathbb A$ is a set whose elements are called points, $V$ is a real $n$-dimensional vector space and $\vec{\cdot} : \mathbb A \times \mathbb A \to V$ is a map associating a pair of points $P,Q \in \mathbb A$ with a vector $\vec{PQ}\in V$. The following requirements must hold.
$\vec{PQ}+ \vec{QR} = \vec{PR}\:$ if $P,Q,R \in \mathbb A$.
For every $P\in \mathbb A$ and $v\in V$, there exists exactly one $Q \in \mathbb A$ such that $\vec{PQ}=v$.
These requirements permits one to define the notion of Cartesian coordinate system on $\mathbb A$.
This is nothing but a particular bijective map $\psi : \mathbb A \to \mathbb R^n $.
To this end, fix an origin, i.e. a preferred point $O\in \mathbb A$ and a system of axes, i.e., a vector basis $e_1,\ldots, e_n \in V$.
The bijective map $\psi : \mathbb A \to \mathbb R^n $ is the one associating $P\in \mathbb A$ with the components $(x_1(P), \ldots, x_n(P)) \in \mathbb R^n$ of the vector $\vec{OP}$ with respect to the basis $e_1,\ldots, e_n \in V$.
An example of $3$-dimensional affine space is $\mathbb R^3$ itself. However this space has much more structure than a generic affine $3$-dimensional space. For instance a preferred point $(0,0,0)$ and a preferred basis, the canonical one, of the space of translations $V= \mathbb R^3$. For these reasons $\mathbb R^3$ is not a good mathematical representation of the physical Euclidean space, in a sense it includes too much mathematical structure with no corresponding physical objects. On the other hand, even an affine $3$-dimensional space is not adequate to describe the physical space, since we also need further metrical structures to properly mathematically describe the physical space.
The appropriate structure for describing the physical space is the notion of Euclidean space.
A Euclidean $n$-dimensional space $\mathbb E^n$ is a quadruple $(\mathbb E,\vec{\cdot}, V, < , >)$, where $(\mathbb E,\vec{\cdot}, V)$ is an affine space and $<,> : V \times V \to \mathbb R$ is a symmetric positive-definite scalar product.
This structure selects a preferred distance function $d: \mathbb E \times \mathbb E \to [0,+\infty)$ defined as
$$d(P,Q) = \sqrt{<\vec{PQ}, \vec{PQ}>}\:.$$
This distance is, by construction, translationally invariant: $\vec{P'Q'}= \vec{PQ}$ implies $d(P,Q)= d(P',Q')$.
The definition of orthonormal Cartesian coordinate system is now obtained by specializing the notion of Cartesian coordinate system to the case of an orthonormal basis $e_1,\ldots, e_n \in V$ with respect to $<,>$.
For $n=3$, $\mathbb E^n$ is just the model of the physical Euclidean space.
The notion of (straight) line is given in an affine space and, in fact, two distinct points determine a unique line. If $P,Q \in \mathbb A$ with $P\neq Q$, the associated line is $r = \{R \in \mathbb A \:|\: \vec{RQ} = \lambda \vec{PQ} \text{ for some } \lambda \in \mathbb R \}$.
As you see no metrical notion is necessary.
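That metric-free definition is easy to exercise: the parametrization $R(\lambda) = P + \lambda\,\vec{PQ}$ uses only the vector-space structure, so it works verbatim in Minkowski spacetime. A sketch (4-tuples stand for events; the sample events are mine):

```python
def line_point(P, Q, lam):
    """R(lam) = P + lam * vec(PQ): the unique line through events P and Q.
    Only the affine/vector-space structure is used; no scalar product enters,
    so the construction is identical in Euclidean space and in spacetime."""
    return tuple(p + lam * (q - p) for p, q in zip(P, Q))
```

Setting $\lambda=0$ and $\lambda=1$ recovers $P$ and $Q$, and every other $\lambda$ gives a further event on the same line.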
Let us eventually come to Minkowski spacetime $\mathbb M^4$, which differs from a Euclidean space only in the metrical structures.
First of all $\mathbb M^4$ is a $4$-dimensional affine space, whose points are called events.
(Already with this part of the definition, the notion of straight line does make sense.)
Secondly, $\mathbb M^4$ is equipped with a Lorentzian scalar product in the space of translations $V$. This is a bilinear map $<,> : V \times V \to \mathbb R$, which is symmetric ($<u,v>= <v,u>$), non-degenerate ($<u,v>=0$ for every $v\in V$ implies $u=0$) and with Lorentzian signature (there are bases $e_1,e_2,e_3,e_4$ where $<,>$ is represented by the matrix
$diag(-1,1,1,1)$). | {
"domain": "physics.stackexchange",
"id": 31173,
"tags": "special-relativity, spacetime, metric-tensor"
} |
Implement The Singleton Pattern for Persistency managers Swift | Question: I am creating an iOS app and I want to implement Singleton Pattern.
I have created a "singleton" LibraryAPI class to act as an entry point to a UserManager object that gets data from a web API. After that, I used a facade pattern method to call the UserManager implementation.
final class LibraryAPI {
static let shared = LibraryAPI()
private let userManager = UsersManager()
private let isOnline = false
private init(){
}
func getUsers() -> [User] {
return userManager.getUsers()
}
}
my questions:
If I have another manager class, like an "album" class, should I use the same LibraryAPI? It would then become a monolithic class; how do I avoid that?
should I create a "Singleton" LibraryAPI class for each manager object like UserLibraryAPI and albumLibraryAPI?
Note: any references or articles are welcome :)
Answer: Developing singletons isn't a good idea in most cases. Sure, you save some time taking this shortcut, but this code will bite you back later when the architecture of your app grows or when you add unit testing to your project. Here are a few reasons why you should NOT develop singletons:
Note: These are excerpts from this article about swift singletons.
Singletons provide a globally mutable shared state. Ironically, the definition of singleton is one of the reasons why singletons are bad. The global accessibility of singletons makes shared resources accessible from anywhere, especially from code that should not have any access. Even value types like Swift structures and enumerations can access a singleton, which is a bad practice. When any part of an app can access or change the global state, you get weird and hard to fix bugs.
Singletons carry state around for the lifetime of the application. There are cases in which you need to reset the shared state. When you can have multiple instances, you can discard the old one and then create a new one. In a singleton, instead, resetting state might not be so natural and might require specific and complex code.
Singleton classes often become monoliths. This exactly correlates to your concern. Since it’s easy to access a singleton from anywhere, the chances are high that code that needs to be shared ends inside an existing singleton. Massive view controllers are not the only monolithic objects you should avoid in iOS. The same happens to singletons.
If singletons are the wrong solution, what is then the correct one? The critical point here is the distinction between singletons and shared resources. In any real app, shared resources are necessary and unavoidable. There are always parts of an app’s architecture that need to be accessed from many places. Some examples are:
The current global state of the app.
The disk storage where data is saved, be it the file system, a database, the user defaults of the app, or a Core Data managed object context.
A URL session that groups related network requests.
A shared operation queue to prioritize, sequence, and schedule the asynchronous tasks of the app.
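Each of these shared resources can be created once at a composition root and handed to the objects that need it, instead of being reached through a global. A sketch in Python (the question is Swift, but the wiring is language-agnostic; class names echo the question):

```python
class UsersManager:
    def get_users(self):
        return ["alice", "bob"]

class LibraryAPI:
    """No static shared instance: whoever builds the app decides how many
    LibraryAPI objects exist and for how long, and tests can inject fakes."""
    def __init__(self, user_manager):
        self._user_manager = user_manager

    def get_users(self):
        return self._user_manager.get_users()

class FakeUsersManager:
    """A test double, injected through the exact same constructor."""
    def get_users(self):
        return []

# composition root: the single place that wires shared resources together
api = LibraryAPI(UsersManager())
```

The shared instance still exists exactly once, but only because the composition root created it once, not because the class enforces it; that is the difference between a shared resource and a singleton.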
Conclusions
There are many articles online which try to answer the question: “when is it ok to use a singleton?”
My answer is: never.
That might sound a bit strict, but the drawbacks of singletons outweigh the little benefits of taking the shortcut. You can, and should, always solve the problem using dependency injection. | {
"domain": "codereview.stackexchange",
"id": 35604,
"tags": "object-oriented, design-patterns, swift"
} |
A divergent Feynman loop in momentum space - how to describe it in position space? | Question: Consider the following loop diagram:
If $k$ is the incoming/outgoing momentum and we're integrating over momentum $p$, the above diagram corresponds to:
$$
- \lambda \frac{1}{k^{2} + m^{2}} \int \frac{d^{4}p}{(2\pi)^{4}} \frac{1}{p^{2}+m^{2}}
$$
This is of course divergent. If we introduce a momentum-cutoff $\Lambda$, we find that the integral in the above gives us (given in these lecture notes):
$$
\int_{\Lambda} \frac{d^{4}p}{(2\pi)^{4}} \frac{1}{p^{2}+m^{2}} \ = \ \frac{1}{16\pi^{2}} \left[ \Lambda^{2} + m^{2} \log \left( \frac{\Lambda^{2}}{m^{2}} \right) \right] + \mathcal{O}\left(\frac{1}{\Lambda}\right)
$$
How do I take the above and describe things in position space? I see often in literature something along the lines of "a momentum cutoff $\Lambda$ corresponds to a cutoff $\frac{\pi}{a}$, where $a$ is a cutoff seperation in position space".
Most discussion about renormalization I see focuses on momentum space, and leaves position space out of it. How can I talk about the above diagram in position space?
Answer: In position space, with $x_1\to y\to x_2$, this is simply:
$$\dfrac{i\lambda}{2}\int \mathrm{d}^4y\,G_F(x_1-y)G_F(x_2-y)G_F(0)$$
with $G_F$ the usual Feynman propagator:
$$G_F(x-y)=i\int\dfrac{\mathrm{d}^4p}{(2\pi)^4}\dfrac{e^{-ip\cdot(x-y)}}{p^2-m^2+i\epsilon}$$
in momentum space, from which your expression is easily recovered. | {
"domain": "physics.stackexchange",
"id": 43116,
"tags": "quantum-field-theory, renormalization, fourier-transform, feynman-diagrams, integration"
} |
What is impulse response in simple AWGN channel? | Question: Assume a sender and receiver communicate through an AWGN channel. Let $x(t)$ be a transmitted signal and $y(t)$ be a received signal and $z(t)$ be the Gaussian noise. It is known that $y(t) = kx(t-\Delta t) + z(t)$ where $k$ is the channel gain (say due to attenuation) and $\Delta t$ is the delay. I was wondering what is the channel impulse response? I assume it should be $h(t) = k\delta(t - \Delta t) + z(t)$? Is it correct?
Answer: A linear system cannot add anything new to its input signal, so additive noise is modeled separately from any linear distortion of the signal. In your case with just scaling and delay, the channel impulse response is $h(t)=k\delta(t-\Delta t)$, and the complete channel is modeled by first filtering the input signal with an LTI system with impulse response $h(t)$ and then adding the noise. | {
"domain": "dsp.stackexchange",
"id": 9758,
"tags": "discrete-signals, digital-communications, continuous-signals, impulse-response, channel"
} |
A directed graph colored path problem? | Question: Given: A rooted, directed, acyclic graph $G$. Let $r_0$ be the root node and $t_0$ be another target node. Each node in $G$ is assigned a non-unique id/color $ID_i$, $1<i<N$, for some integer $N$.
Problem: Is there a directed path from $r_0$ to $t_0$ such that, for some $ID_x$, there is no node in that path with $ID_x$ as its $ID$? In other words, the nodes in the path do not cover all the $ID$s.
Comments: Clearly the problem is in $NP$ since if we are given such a path we can easily verify it. I suspect the problem is also in $P$ and can be solved in polynomial time. But, I am struggling to find an algorithm for the same.
Can someone please help with this?
Answer: Simple approach: for each $1 \le i \le N$ (I assume that the $<$ in the question are typos) run depth-first or breadth-first search ignoring nodes coloured $i$.
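The simple approach can be written down directly: one BFS per colour, skipping nodes of that colour, for an overall $O(N\cdot(V+E))$ bound. A sketch (the graph/colour encodings are my own):

```python
from collections import deque

def path_avoiding_some_color(graph, color, r0, t0, num_colors):
    """graph: node -> list of successors; color: node -> id in 1..num_colors.
    True iff some directed r0 -> t0 path misses at least one colour entirely."""
    def reachable_without(c):
        # endpoints themselves must not carry the forbidden colour
        if color[r0] == c or color[t0] == c:
            return False
        seen, queue = {r0}, deque([r0])
        while queue:
            u = queue.popleft()
            if u == t0:
                return True
            for v in graph.get(u, []):
                if v not in seen and color[v] != c:
                    seen.add(v)
                    queue.append(v)
        return False
    return any(reachable_without(c) for c in range(1, num_colors + 1))
```

Any path found while colour $c$ is excluded is, by construction, a path whose nodes do not cover all the IDs.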
Slightly more advanced approach: use a matrix-based all-pairs reachability algorithm, but rather than record a Boolean isReachable or a numeric shortestDistance record a bitmap of which colours can be avoided. | {
"domain": "cs.stackexchange",
"id": 12036,
"tags": "algorithms, complexity-theory, graphs, np"
} |
Group Theory of Superconducting Order Parameters? | Question: In crystalline superconductors, the order parameter $\Delta(\mathbf{k})$ (aka gap, or Cooper pair wavefunction) can be classified by its symmetry according to the representations of the symmetry group of the crystal. This can get complicated because pairing is between fermions which also have spin, and spin-orbit coupling also plays a role.
I am used to categorizing orbitals and vibrational modes of a point group by their representation from the chemistry point of view, but it seems the superconductivity literature has a very different understanding which is confusing for me. I have the following confusions/questions
The representative function for the odd-parity representations is said to be a "vector" quantity (see slides 6-9 here). What does this mean? All the textbook character tables give scalar polynomials instead (see here). To be even more explicit and show my confusion, the entry on slide 8 under $A_{1u}$ should be antisymmetric under a mirror operation along the $z$-axis (aka $\sigma_h$), but it is clearly not true for the vector function $k_x \hat{\mathbf{x}}-k_y \hat{\mathbf{y}}$ which doesn't even depend on $z$. What am I missing?
Superconducting order parameters are said to have no nodes (fully gapped), point nodes, or line nodes. As an example of point nodes, table 1 of this paper says an order parameter with $B_{1u}$ symmetry has point nodes. But in the character table for that group, $B_{1u}$ transforms as $z$, which means it has a whole plane of "nodes" when $z=0$, not just a single point. How do you get the nodal structure from the representation if not from the characteristic polynomial?
Can anyone clarify what's going on here? The understanding and notation of the superconducting order parameter in group theory seems to be very different than that of orbitals or vibrations.
Answer: First, to be clear about a few things:
The spontaneously broken symmetry leading to superconductivity (no resistance, Meissner effect etc.) is $U(1)$-phase rotation symmetry, associated with conservation of (Cooper pair) particle number. Other symmetries, the crystal space group and spin-rotations, can be additionally broken. Your question pertains these other symmetries.
The gap function $\Delta(\bf{k})$ by itself is a complex scalar function, i.e. an element of $\mathbb{C}$ for each $\mathbf{k}$. However, the transformation properties under space group and spin-rotation transformations allows the identification of additional structure. Most experiments measure the amplitude of this complex quantity.
The total wavefunction must be antisymmetric. Therefore, if the spin-part is antisymmetric (singlet pairing) the orbital (space-group dependent part) must be symmetric, so even under parity. If the spin-part is symmetric (triplet pairing), the orbital part must be odd under parity.
A parity/space inversion transformation simply replaces $\mathbf{k} \to -\mathbf{k}$.
Then, to answer your questions.
The scalar/vector denomination is about spin-rotation transformations.
A basis of states of the fermions is given by $|k,\sigma\rangle$, where $k$ is the momentum and $\sigma = \uparrow,\downarrow$ the spin. The complex scalar function above can be decomposed as
$\Delta(\mathbf{k}) = \sum_{\sigma,\sigma'} \langle k,\sigma | \Delta_{\mathbf{k},\sigma\sigma'} | k,\sigma' \rangle$ (Eq.1)
Here $\Delta_{\mathbf{k},\sigma\sigma'}$ (using the notation of the slides in your question) is a 2$\times$2-matrix for each point $\mathbf{k}$.
We can write out the 2$\times$2-matrix in the singlet-triplet basis. The singlet-state is one-dimensional, as such there is one degree of freedom to be specified by the $\mathbf{k}$-dependence. One can then call this the scalar wavefunction $\psi(\mathbf{k})$ (as in the slides), since the spin-state is already defined. The spin-singlet is antisymmetric, so the momentum-dependent part must be symmetric (even under parity).
Conversely, the triplet-basis is three-dimensional, meaning there are three degrees of freedom to be specified. Collecting the Pauli matrices in a vector $\vec{\sigma} = (\sigma^x, \sigma^y, \sigma^z)$, we can write the momentum dependence as $\vec{d}(\mathbf{k}) \cdot \vec{\sigma}$, which is a superposition of $2\times 2$-matrices acting on the spin-subspace for each momentum $\mathbf{k}$. One can therefore call this a vector wavefunction $\vec{d}(\mathbf{k})$. It transforms as a vector under spin-rotations. Nevertheless, the gap function in (Eq.1) is a complex scalar.
In fact, these are pseudoscalar and pseudovector functions, meaning they are even under parity transformations. The wavevector $\mathbf{k}$ is of course odd under parity.
If $g$ is an element of the transformation group, then the action on the $d$-vector is specified as
$g \rightharpoonup \vec{d}(\mathbf{k}) = R_\mathrm{s}(g) \vec{d}(R_\mathrm{o}\mathbf{k})$
Here $R_\mathrm{o}$ is the orbital representation and $R_\mathrm{s}$ the spin representation, which are odd and even under parity respectively. This notation is from slide 27 of this presentation by the same author as in your link.
To understand how the representative functions, like $k_x \hat{x} - k_y \hat{y}$, transform, recognize that the unit vectors are basis vectors of the $d$-vector, and therefore transform as pseudovectors, while the $k$-components transform as vectors.
In Kaba and Sénéchal Table I, you can find explicit matrices for $R_\mathrm{o}$ (called $U$) and $R_\mathrm{s}$ (called $R)$. In particular, the transformation under $\sigma_h$ (called $\sigma_z$ in that reference) is specified as
$R_\mathrm{s}(\sigma_h) = \begin{pmatrix} 1 & & \\ & 1 & \\ & & -1 \end{pmatrix}, \quad R_\mathrm{o}(\sigma_h) = \begin{pmatrix} -1 & & \\ & -1 & \\ & & 1 \end{pmatrix}$
So $\hat{x} \to \hat{x}$ and $k_x \to -k_x$ gives a total minus sign, and the same for the other term, so $k_x \hat{x} - k_y \hat{y}$ transforms with a minus sign under $\sigma_\mathrm{h}$. You can find the action under the other generators like this as well.
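As a quick numeric sanity check (my own sketch, not from the original text), one can apply $g \rightharpoonup \vec{d}(\mathbf{k}) = R_\mathrm{s}(g)\,\vec{d}(R_\mathrm{o}\mathbf{k})$ for $g = \sigma_h$ directly, using the diagonal matrices quoted above:

```python
def R_o(k):
    # orbital rep of sigma_h: (kx, ky, kz) -> (-kx, -ky, kz)
    return (-k[0], -k[1], k[2])

def R_s(v):
    # spin (pseudovector) rep of sigma_h: (vx, vy, vz) -> (vx, vy, -vz)
    return (v[0], v[1], -v[2])

def d(k):
    # representative function d(k) = k_x x_hat - k_y y_hat
    return (k[0], -k[1], 0.0)

k = (0.3, 0.7, 0.2)            # an arbitrary test momentum
transformed = R_s(d(R_o(k)))   # g ⇀ d(k) for g = sigma_h
minus_d = tuple(-c for c in d(k))
print(transformed == minus_d)  # True: an overall minus sign
```

So the representative function is indeed odd under $\sigma_h$, consistent with an odd-parity ($u$) representation.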
The order parameter is the pair operator $\hat{\Delta}_{\mathbf{k},\sigma\sigma'} = \hat{c}_{\mathbf{k},\sigma} \hat{c}_{-\mathbf{k},\sigma'}$, where $\hat{c}$ is the electron annihilation operator. The symmetry group representation acting on $\Delta$ derives from the one acting on $\hat{c}$ as follows. Electrons are fermions, and $\hat{c}$ transforms under a spin or projective representation of the group (say $D_{4\mathrm{h}})$. This representation is not given in most character tables, but for instance the fourth power of the rotation generator is $-1$, not $+1$ in this representation. The representation acting on the order parameter follows from Clebsch-Gordan decomposition of the tensor product of this spin representation with itself, and includes only ordinary (unitary) representations. See the references by Sénéchal below.
Nodes are points in $k$-space where $\Delta(\mathbf{k}) =0$. ($\mathbf{k} =0$ is excluded since pairing is always at some finite momentum, moreover near the Fermi surface.) Therefore, you simply need to inspect where the gap function vanishes.
Looking at the representative functions in Table I of the paper you cite, you can see that $B_{1g}$, $B_{2g}$, $B_{3g}$ vanish at $k_z =0$ for all $k_x$, $k_y$. Therefore, there is a line of nodes at $k_z =0$ as a function of $k_x,k_y$.
Conversely, the $B_{1u}$, $B_{2u}$, $B_{3u}$ representative functions only vanish at some very specific values of $k_x, k_y, k_z$, determined by the coefficients $c_1$, $c_2$. Therefore, these have point nodes. (By the way, there are errors/typos in the representative functions of $B_{1u}$ and $B_{3u}$ in that table. It should be $c_1 k_y \hat{x} + c_2 k_x \hat{y}$ for $B_{1u}$, and similar in $y,z$ for $B_{3u}$.)
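The distinction can be made concrete with a small numeric sketch (my own illustration; the coefficients and the unit spherical Fermi surface are assumed for concreteness): a gap proportional to $k_z$ vanishes on the whole equator (a line node), while a $c_1 k_y \hat{x} + c_2 k_x \hat{y}$ gap vanishes only where $k_x = k_y = 0$, i.e. at the two poles (point nodes):

```python
import math

def gap_B1u(kx, ky, kz, c1=1.0, c2=0.7):
    # |d(k)| for d(k) = c1 k_y x_hat + c2 k_x y_hat; note it ignores k_z
    return math.hypot(c1 * ky, c2 * kx)

def gap_kz(kx, ky, kz):
    # a gap function proportional to k_z, vanishing on the k_z = 0 plane
    return abs(kz)

def fermi_point(theta, phi):
    # point on a unit spherical Fermi surface (assumed for illustration)
    return (math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            math.cos(theta))

north = fermi_point(0.0, 0.0)            # a pole: point node of gap_B1u
equator = fermi_point(math.pi / 2, 0.3)  # equator: line node of gap_kz
print(gap_B1u(*north))                   # 0.0 -- gap closes at the pole
print(gap_B1u(*equator) > 0)             # True -- but stays open on the equator
```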
The function $z$ transforms under $B_{1u}$ as you mention. But that is not the form of the gap function, which for instance is specified in momentum space. The gap function in question looks like $c_1 k_y \hat{x} + c_2 k_x \hat{y}$, and is independent of $k_z$. If you look in Table V of Kaba and Sénéchal, you can see that there exists a gap function with the form $\hat{b}_z\hat{d}_0 k_z$, also transforming under $B_{1u}$. However $\hat{d}_0$ is the singlet spin-state, and $\hat{b}_z$ is a particular form of the orbital (not spin) part of the order parameter.
References
J. Annett, Unconventional Superconductivity. Contemporary Physics, 36(6), 423 (1995). See especially section 3 for a discussion about the symmetry breaking starting from the full group $G \times SU(2) \times U(1)$ (crystal space group, spin-rotations, and phase-rotations) and a visual representation of the order parameter symmetries, and section 5 for symmetries of the gap function.
S. Sumita. Modern classification theory of superconducting gap nodes. PhD Thesis (2020). Kyoto University. Very thorough treatment. See section 1.1.1 for a short summary of the meaning of the symmetry of the order parameter.
Geilhufe & Balatsky. Phys. Rev. B 97, 024507 (2018). Another nice, concise treatment of transformation properties of the order parameter function, starting from Equation (4). Also includes odd-frequency pairing.
Ishizuka et al. Phys. Rev. Lett. 123, 217001 (2019) See Table 4(a) for the correct basis functions of the irreps of $D_{2\mathrm{h}}$.
S.-O. Kaba and D. Sénéchal. Phys. Rev. B 100, 214507 (2019) See section II.B, III.A for an elaborate derivation of the symmetry transformation properties of the $D_{4\mathrm{h}}$ order parameter, of not only the $d$-vector, but also the orbital part.
D. Sénéchal. Lecture notes. A pedagogical treatment of the previous paper.
"domain": "physics.stackexchange",
"id": 76409,
"tags": "symmetry, group-theory, superconductivity"
} |
How to run controller_manager | Question:
Hi All,
I use a launch file to load a controller:
<node name="controller_spawner" pkg="controller_manager" type="spawner" respawn="false" output="screen" args="controller/twist"/>
When I run the launch file, the system said waiting for service controller_manager/load_controller.
But when I type rosservice call controller_manager/load_controller, it tells me that the service is not available.
Do I need to run controller_manager first before calling that service? If yes, how I run controller_manager?
Thanks in advance.
Originally posted by Sendoh on ROS Answers with karma: 85 on 2014-12-07
Post score: 2
Answer:
The spawner script is one of the command-line tools offered by the controller_manager ROS package (doc), and it does not start a controller_manager instance.
The controller_manager is typically instantiated inside your robot hardware abstraction (hardware_interface::RobotHW specialization). If using gazebo_ros_control, this is already done by the existing plugins (Tutorial), but if using a custom abstraction, you have to do the setup yourself.
Originally posted by Adolfo Rodriguez T with karma: 3907 on 2014-12-09
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by lucasw on 2014-12-09:
Can you take a look at http://answers.ros.org/question/198929/running-controller_manager-spawner-with-mybotrobot_description/?
Comment by Adolfo Rodriguez T on 2014-12-10:
Can you reproduce the results of this tutorial? http://gazebosim.org/tutorials?tut=ros_control
If in indigo, you'll have to specify the <hardware_interface> element as a child of <joint>, not <actuator>, in your URDF transmissions. | {
"domain": "robotics.stackexchange",
"id": 20270,
"tags": "ros, rosservice, controller-manager"
} |
Project Euler "Largest product in a grid" (#11) in Java 8 | Question: I have come up with a Java 8 solution for the following problem:
In the 20×20 grid below, four numbers along a diagonal line have been marked in red (bold here).
08 02 22 97 38 15 00 40 00 75 04 05 07 78 52 12 50 77 91 08
49 49 99 40 17 81 18 57 60 87 17 40 98 43 69 48 04 56 62 00
81 49 31 73 55 79 14 29 93 71 40 67 53 88 30 03 49 13 36 65
52 70 95 23 04 60 11 42 69 24 68 56 01 32 56 71 37 02 36 91
22 31 16 71 51 67 63 89 41 92 36 54 22 40 40 28 66 33 13 80
24 47 32 60 99 03 45 02 44 75 33 53 78 36 84 20 35 17 12 50
32 98 81 28 64 23 67 10 26 38 40 67 59 54 70 66 18 38 64 70
67 26 20 68 02 62 12 20 95 63 94 39 63 08 40 91 66 49 94 21
24 55 58 05 66 73 99 26 97 17 78 78 96 83 14 88 34 89 63 72
21 36 23 09 75 00 76 44 20 45 35 14 00 61 33 97 34 31 33 95
78 17 53 28 22 75 31 67 15 94 03 80 04 62 16 14 09 53 56 92
16 39 05 42 96 35 31 47 55 58 88 24 00 17 54 24 36 29 85 57
86 56 00 48 35 71 89 07 05 44 44 37 44 60 21 58 51 54 17 58
19 80 81 68 05 94 47 69 28 73 92 13 86 52 17 77 04 89 55 40
04 52 08 83 97 35 99 16 07 97 57 32 16 26 26 79 33 27 98 66
88 36 68 87 57 62 20 72 03 46 33 67 46 55 12 32 63 93 53 69
04 42 16 73 38 25 39 11 24 94 72 18 08 46 29 32 40 62 76 36
20 69 36 41 72 30 23 88 34 62 99 69 82 67 59 85 74 04 36 16
20 73 35 29 78 31 90 01 74 31 49 71 48 86 81 16 23 57 05 54
01 70 54 71 83 51 54 69 16 92 33 48 61 43 52 01 89 19 67 48
The product of these numbers is 26 × 63 × 78 × 14 = 1788696.
What is the greatest product of four adjacent numbers in the same direction (up, down, left, right, or diagonally) in the 20×20 grid?
I would like comments on everything:
public class Problem11 extends Problem<Integer> {
/** The n of the n x n grid. */
private final int n;
/** The grid. */
private final Grid grid;
/**
* Constructs this class.
*
* @param n The n of the n x n grid
* @param gridString The String representation of the grid
*/
public Problem11(final int n, final String gridString) {
this.n = n;
this.grid = new Grid(n, gridString);
}
@Override
public void run() {
List<Integer> list = new ArrayList<>(n * n * 8);
grid.forEach(cell -> processCell(list, cell));
result = list.stream().mapToInt(x -> x).max().getAsInt();
}
/**
* Processes a cell and adds the result to a list.
*
* @param list The list of results
* @param cell The cell to consider
*/
private void processCell(final List<Integer> list, final Cell cell) {
IntBinaryOperator sumOperator = (x, y) -> x * y;
addIfNotEmpty(list, calculationOnCell(cell, 3, x -> x + 1, y -> y, sumOperator)); //right
addIfNotEmpty(list, calculationOnCell(cell, 3, x -> x - 1, y -> y, sumOperator)); //left
addIfNotEmpty(list, calculationOnCell(cell, 3, x -> x, y -> y + 1, sumOperator)); //top
addIfNotEmpty(list, calculationOnCell(cell, 3, x -> x, y -> y - 1, sumOperator)); //down
addIfNotEmpty(list, calculationOnCell(cell, 3, x -> x + 1, y -> y + 1, sumOperator)); //topright
addIfNotEmpty(list, calculationOnCell(cell, 3, x -> x - 1, y -> y - 1, sumOperator)); //downleft
addIfNotEmpty(list, calculationOnCell(cell, 3, x -> x + 1, y -> y - 1, sumOperator)); //downright
addIfNotEmpty(list, calculationOnCell(cell, 3, x -> x - 1, y -> y + 1, sumOperator)); //topleft
}
/**
* Adds an integer to the list if the OptionalInt is not empty.
*
* @param list The list to be added to
* @param optionalInt The OptionalInt
* @return The input list, with possibly the element appended
*/
private List<Integer> addIfNotEmpty(final List<Integer> list, final OptionalInt optionalInt) {
if (!optionalInt.isPresent()) {
return list;
}
list.add(optionalInt.getAsInt());
return list;
}
/**
* Returns a calculation on the cell.
*
* @param cell The starting cell
* @param steps The number of steps to take to other cells
* @param steppingXOperator The operator to apply to go to the next cell on x
* @param steppingYOperator The operator to apply to go to the next cell on y
* @param calculationOperator The operator to apply to get the result
* @return An OptionalInt instance that is empty if and only if the calculation did not work
*/
private OptionalInt calculationOnCell(final Cell cell, final int steps, final IntUnaryOperator steppingXOperator, final IntUnaryOperator steppingYOperator, final IntBinaryOperator calculationOperator) {
int x = cell.x;
int y = cell.y;
int calculationResult = cell.value;
for (int i = 0; i < steps; i++) {
x = steppingXOperator.applyAsInt(x);
y = steppingYOperator.applyAsInt(y);
if (!grid.inBounds(x, y)) {
return OptionalInt.empty();
}
calculationResult = calculationOperator.applyAsInt(calculationResult, grid.getCell(x, y).value);
}
return OptionalInt.of(calculationResult);
}
@Override
public String getName() {
return "Problem 11";
}
/**
* Structure holding the cells.
*/
private static class Grid implements Iterable<Cell> {
/** The n of the n x n grid. **/
private final int n;
/** A double array holding the cells. **/
private final Cell[][] cells;
/**
* Constructs the Grid.
*
* @param n The n of the n x n grid
* @param input The String input for the grid
*/
public Grid(final int n, final String input) {
this.n = n;
this.cells = createCellsFromString(input);
}
/**
* Creates the Cell double array from the String input.
*
* @param input The string input
* @return The cell double array
*/
private Cell[][] createCellsFromString(final String input) {
Cell[][] returnCells = new Cell[n][n];
String[] lines = input.split("\\n");
for (int i = 0; i < lines.length; i++) {
String[] words = lines[i].split(" ");
for (int j = 0; j < words.length; j++) {
String word = words[j];
returnCells[i][j] = new Cell(i, j, Integer.parseInt(word));
}
}
return returnCells;
}
/**
* Checks if the x and y are in bounds.
*
* @param x The x to be tested
* @param y The y to be tested
* @return Whether x and y are in bounds
*/
public boolean inBounds(final int x, final int y) {
return (0 <= x && x < n && 0 <= y && y < n);
}
/**
* Returns a cell based on the coordinates.
*
* Throws an IllegalArgumentException if the x and y coordinates are not in bounds
*
* @param x The x coordinate
* @param y The y coordinate
* @return The cell corresponding to the coordinate
*/
public Cell getCell(final int x, final int y) {
if (!inBounds(x, y)) {
throw new IllegalArgumentException("problems.Problem11.Grid.getCell: !inBounds(x, y): x = " + x + " / y = " + y);
}
return cells[x][y];
}
@Override
public Iterator<Cell> iterator() {
return new Iterator<Cell>() {
/** The current x of the iterator. **/
private int x = 0;
/** The current y of the iterator. **/
private int y = 0;
@Override
public boolean hasNext() {
return !(x == n && y == 0);
}
@Override
public Cell next() {
Cell cell = cells[x][y];
advance();
return cell;
}
/**
* Advanced to the next element in the cell double array.
*/
private void advance() {
y++;
if (y == n) {
y = 0;
x++;
}
}
};
}
}
/**
* Structure holding the cell data.
*/
private static class Cell {
/** The x coordinate of the cell. **/
public final int x;
/** The y coordinate of the cell. **/
public final int y;
/** The value of the cell. **/
public final int value;
/**
* Constructs the Cell.
*
* @param x The x coordinate of the cell
* @param y The y coordinate of the cell
* @param value The value of the cell
*/
public Cell(final int x, final int y, final int value) {
this.x = x;
this.y = y;
this.value = value;
}
}
}
public abstract class Problem<T> implements Runnable {
protected T result;
public String getResult() {
return String.valueOf(result);
}
abstract public String getName();
}
The code gets called by something along the lines of:
Problem<?> problem11 = new Problem11(20,
"08 02 22 97 38 15 00 40 00 75 04 05 07 78 52 12 50 77 91 08\n" +
"49 49 99 40 17 81 18 57 60 87 17 40 98 43 69 48 04 56 62 00\n" +
"81 49 31 73 55 79 14 29 93 71 40 67 53 88 30 03 49 13 36 65\n" +
"52 70 95 23 04 60 11 42 69 24 68 56 01 32 56 71 37 02 36 91\n" +
"22 31 16 71 51 67 63 89 41 92 36 54 22 40 40 28 66 33 13 80\n" +
"24 47 32 60 99 03 45 02 44 75 33 53 78 36 84 20 35 17 12 50\n" +
"32 98 81 28 64 23 67 10 26 38 40 67 59 54 70 66 18 38 64 70\n" +
"67 26 20 68 02 62 12 20 95 63 94 39 63 08 40 91 66 49 94 21\n" +
"24 55 58 05 66 73 99 26 97 17 78 78 96 83 14 88 34 89 63 72\n" +
"21 36 23 09 75 00 76 44 20 45 35 14 00 61 33 97 34 31 33 95\n" +
"78 17 53 28 22 75 31 67 15 94 03 80 04 62 16 14 09 53 56 92\n" +
"16 39 05 42 96 35 31 47 55 58 88 24 00 17 54 24 36 29 85 57\n" +
"86 56 00 48 35 71 89 07 05 44 44 37 44 60 21 58 51 54 17 58\n" +
"19 80 81 68 05 94 47 69 28 73 92 13 86 52 17 77 04 89 55 40\n" +
"04 52 08 83 97 35 99 16 07 97 57 32 16 26 26 79 33 27 98 66\n" +
"88 36 68 87 57 62 20 72 03 46 33 67 46 55 12 32 63 93 53 69\n" +
"04 42 16 73 38 25 39 11 24 94 72 18 08 46 29 32 40 62 76 36\n" +
"20 69 36 41 72 30 23 88 34 62 99 69 82 67 59 85 74 04 36 16\n" +
"20 73 35 29 78 31 90 01 74 31 49 71 48 86 81 16 23 57 05 54\n" +
"01 70 54 71 83 51 54 69 16 92 33 48 61 43 52 01 89 19 67 48");
problem11.run();
System.out.println(problem11.getResult());
One additional question I have about the code:
Is it possible to write the code for the maximum-evaluation such that it does not use storage (i.e. the list), nor handwritten code for calculating the sum? I don't know whether such a thing would be possible with an IntStream, for example.
Answer: Your code looks great and I can see that you have put some thought and time into it. Also, you've used pretty fancy concepts that I didn't know (which is not so hard as my Java is pretty rusty). However, it looks slightly over-engineered to me so I'll try to make things more simple.
Don't mess with anyone's brain
IntBinaryOperator sumOperator = (x, y) -> x * y; : calling a product a sum is one of the most confusing things you could possibly do :-).
Do not repeat yourself - Keep it simple, stupid
In many places, things could have been done in a more straightforward way.
First example : addIfNotEmpty
You don't need to have two return doing the same thing in :
private List<Integer> addIfNotEmpty(final List<Integer> list, final OptionalInt optionalInt) {
if (!optionalInt.isPresent()) {
return list;
}
list.add(optionalInt.getAsInt());
return list;
}
It could easily be written :
private List<Integer> addIfNotEmpty(final List<Integer> list, final OptionalInt optionalInt) {
if (optionalInt.isPresent()) {
list.add(optionalInt.getAsInt());
}
return list;
}
Second example : n
Your Problem11 class has a private final int n and a Grid.
Your Grid has a private final int n; and an Array.
An Array has a length attribute.
Do you see the pattern here? Good news is that you are considering squares, because if we had two different dimensions here, it would probably be a mess.
This shows some kind of problem with your design. Maybe you should rely on the length attribute whenever you need to, maybe you shouldn't even need to (cf Leaky Abstraction).
Third example : Cell
Your Cell is a structure containing a value and its coordinates in an array of Cells. This reminds me of one of the examples in the "Stop Writing Classes" presentation and I have the feeling that we don't really need it.
I have no time to go on right now. I'll try to edit my answers and provide a working piece of code. In the meantime, as you have solved the Project Euler already, I suggest you have a look at the solutions posted on the boards. They are a great way to learn about algorithms, math and programming style as well. | {
"domain": "codereview.stackexchange",
"id": 6287,
"tags": "java, project-euler, lambda"
} |
Is the "Fewest Discriminating Bits" problem NP-complete? | Question: That is a name I have made up for this problem. I have not seen it described anywhere before. I have not been able to find a proof of NP-completeness nor a polynomial time algorithm for this problem yet. It is not a homework problem -- it is related to a problem I have come across in my work.
FEWEST DISCRIMINATING BITS
INSTANCE: A set T containing bit vectors, where each bit vector is exactly N bits long. Every element of T is unique, as one would expect from a set in math. An integer K < N.
QUESTION: Is there a set B of at most K bit positions (i.e. integers in the range [0,N-1]) such that when we remove all bits except those in B from every vector in T, the remaining shorter vectors are all still unique?
Example 1: For the instance N=5, T={00010, 11010, 01101, 00011}, K=2, the answer is yes, because we can select the bit positions B={0,3}. Using the convention that bit position 0 is the rightmost, and the bit position numbers increase right-to-left, removing all bit positions except those in B from the vectors in T leaves T'={00, 10, 11, 01}, and those are all unique.
Example 2: N=5, T={00000, 00001, 00010, 00100}, K=2. The answer is no, because no matter which two bit positions we select, none of the 2-bit vectors will be equal to 11, so at least two of the 2-bit vectors will be equal to each other.
We can of course solve this problem by enumerating all (N choose K) subsets with size K of the N bit positions, and determining which satisfy the condition of the question. However, that is exponential in the input size.
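For concreteness, that brute-force enumeration can be sketched as follows (my own illustration, representing the bit vectors as Python integers with bit position 0 rightmost, matching the convention above):

```python
from itertools import combinations

def discriminating_bits(T, N, K):
    """Return a set B of at most K bit positions whose projection keeps all
    vectors of T distinct, or None if no such set exists (brute force)."""
    for k in range(1, K + 1):
        for B in combinations(range(N), k):
            projected = {tuple((v >> b) & 1 for b in B) for v in T}
            if len(projected) == len(T):  # all short vectors still unique
                return B
    return None

# Example 1: a yes-instance, B = {0, 3} works
print(discriminating_bits({0b00010, 0b11010, 0b01101, 0b00011}, N=5, K=2))
# Example 2: a no-instance
print(discriminating_bits({0b00000, 0b00001, 0b00010, 0b00100}, N=5, K=2))
```

This checks up to $\binom{N}{K}$ subsets, each in time proportional to $|T| \cdot K$, which is exactly the blow-up that makes the naive approach impractical.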
Answer: This problem is NP-complete. A proof based on reduction from 3-SAT is as follows:
Consider an instance of 3-SAT with $n$ variables and $m$ clauses. We will construct $2n + 2m$ bit vectors ("rows") of length $2n + \lceil \log_2(n + m) \rceil$, such that the smallest number of discriminating bits is $n + \lceil \log_2(n + m) \rceil$ iff the original 3-SAT instance is satisfiable.
The first $2n$ bits will correspond to the literals $\left\{ x_1, \neg x_1, x_2, \neg x_2, ..., x_n, \neg x_n \right\}$. With respect to these bits, the first $2m$ rows will come in pairs, the first of which will have a $1$ for each literal included in the corresponding clause, and the second of which will consist entirely of $0$'s. The remaining $2n$ rows will also come in pairs, the first of which will have $1$'s for the corresponding literal and its negation, and the second of which will consist entirely of $0$'s. Finally, the last $\lceil \log_2(n + m) \rceil$ bits will be used to "sign" each pair of rows with its index, from $0$ to $n+m-1$, written in binary.
In order to distinguish each "literal" row from its successor, either the bit corresponding to that literal or the bit corresponding to its negation must be retained. Also, in order to discriminate among the $n+m$ "zero + index" rows, all $\lceil \log_2(n + m) \rceil$ index bits must be retained. The minimum possible number of discriminating bits is therefore $n + \lceil \log_2(n + m) \rceil$. Finally, in order to distinguish each "clause" row from its successor, at least one of the three bits corresponding to literals included in that clause must be retained. If the 3-SAT instance is satisfiable, this last condition will not require any extra bits (in particular, we do not need to retain the bits corresponding to both $x_i$ and $\neg x_i$ for any $i$); and conversely, if there are $n + \lceil \log_2(n + m) \rceil$ bits that discriminate among all $2n+2m$ bit vectors, they must contain exactly one of $x_i$ and $\neg x_i$ for each $i$, and hence correspond to a satisfying assignment of truth values to the $n$ variables. | {
"domain": "cstheory.stackexchange",
"id": 585,
"tags": "cc.complexity-theory, np-hardness"
} |
ANTLR4 grammar for Conventional Commits spec | Question: I would like to create a grammar for the Conventional Commits spec and I would love to hear any feedback for what I wrote.
The spec has some ambiguities, I think, hence my usage of "island grammars" to avoid writing predicates and such, which would limit the parser to a certain language target.
An example of a conventional commit message the parser should be able to handle
fix(some_module): this is a commit description
Some more in-depth description of what was fixed. This
can be a multi-line text, not only a one-liner.
Signed-off: john.doe@some.domain.com
Another-Key: another value
Some-Other-Key: some other value
The lexer:
lexer grammar ConventionalCommitLexer;
options { caseInsensitive = true; }
tokens { TEXT }
WS: [ \t]+ -> skip;
NEWLINE: ('\r')? '\n';
LPAREN: '(' -> mode(Scope);
RPAREN: ')';
SEMICOLON: ':' WS*-> mode(Description);
OTHER: 'other';
FEAT: 'feat';
FIX: 'fix';
DOCS: 'docs';
STYLE: 'style';
REFACTOR: 'refactor';
PERF: 'perf';
TEST: 'test';
CHORE: 'chore';
BUILD: 'build';
CI: 'ci';
BREAKING: 'breaking';
SECURITY: 'security';
REVERT: 'revert';
CONFIG: 'config';
UPGRADE: 'upgrade';
DOWNGRADE: 'downgrade';
PIN: 'pin';
IDENTIFIER: [a-z][a-z0-9_-]*;
mode Scope;
WS_SCOPE: [ \t] -> skip;
SCOPE: IDENTIFIER -> type(IDENTIFIER);
END_OF_SCOPE: RPAREN -> type(RPAREN), mode(DEFAULT_MODE);
mode Description;
DESCRIPTION: ~[\n]+ -> type(TEXT);
END_OF_TEXT: (WS | NEWLINE)+ -> mode(Body), type(NEWLINE);
mode Body;
END_OF_BODY: (WS | NEWLINE)+ -> mode(Footer), type(NEWLINE);
BODY: (SINGLE_LINE | MULTI_LINE) -> type(TEXT);
SINGLE_LINE
: ~[\n]+
;
MULTI_LINE
: (SINGLE_LINE | (~[\n]+ NEWLINE))+ -> mode(Footer)
;
mode Footer;
WS_FOOTER: ' ' -> skip;
KEY: IDENTIFIER -> type(IDENTIFIER);
SEPARATOR: WS_FOOTER* SEMICOLON WS_FOOTER* -> type(SEMICOLON), mode(FooterValue);
mode FooterValue;
NEWLINE_FOOTER: NEWLINE -> type(NEWLINE), mode(Footer);
VALUE: ~[\n]+ -> type(TEXT);
The parser:
parser grammar ConventionalCommitParser;
options { tokenVocab = ConventionalCommitLexer; }
type: (OTHER
| FEAT
| FIX
| DOCS
| STYLE
| REFACTOR
| PERF
| TEST
| CHORE
| BUILD
| CI
| BREAKING
| SECURITY
| REVERT
| CONFIG
| UPGRADE
| DOWNGRADE
| PIN) #RecognizedType
| IDENTIFIER #OtherType
;
footerKeyValue: key = IDENTIFIER SEMICOLON value = TEXT NEWLINE?;
commitMessage: type LPAREN scope = IDENTIFIER RPAREN SEMICOLON description = TEXT (NEWLINE body = TEXT)? (NEWLINE values += footerKeyValue+)? EOF;
Answer: Nice idea.
Obtaining conformant commit messages will be easier
if much of a project's developer community installs
a validating .git/hooks/pre-commit script.
I think a giant regex could handle the task,
but this is far more readable!
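For comparison, here is what a (partial) regex sketch of just the header line could look like — my own illustration, reusing the `[a-z][a-z0-9_-]*` identifier shape and case-insensitivity from the lexer above. It cannot handle the body and footer sections nearly as cleanly as the grammar does:

```python
import re

# Sketch only: matches "type(scope)!: description" header lines, where the
# scope and the '!' breaking-change marker are optional. Not a full validator.
HEADER = re.compile(
    r"^(?P<type>[a-z][a-z0-9_-]*)"           # commit type, e.g. fix, feat
    r"(?:\((?P<scope>[a-z][a-z0-9_-]*)\))?"  # optional (scope)
    r"(?P<breaking>!)?"                      # optional breaking-change marker
    r": (?P<description>.+)$",
    re.IGNORECASE,
)

m = HEADER.match("fix(some_module): this is a commit description")
print(m.group("type"), m.group("scope"))     # fix some_module
```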
SEMICOLON: ':' WS*-> mode(Description);
nit: Whitespace around the -> arrow, please, for readability.
Bigger item: Surely this is a COLON token, no?
{FEAT, FIX} are canonical, and then there are multiple
vocabularies that a project could choose to adopt,
such as Angular's {BUILD, CHORE, ...}.
For such words, I feel you should
Separate them out with a blank line.
Cite the URL of your reference (as you nicely did in this question).
Maybe alphabetize, even if (as here) the cited reference does not.
What we're shooting for is traceability,
and ease of editing when the upstream reference inevitably
changes the list of words.
For the same reason, consider putting BREAKING
right after {FEAT, FIX}, to match the cited reference.
WS_SCOPE: [ \t] -> skip;
I am slightly sad that we didn't manage to
recycle WS from above. Whatever.
I get the sense that you feel CRLF could potentially be a Problem.
DESCRIPTION: ~[\n]+ -> type(TEXT);
Soooo, that's going to lump \r CR in with the TEXT, right?
Is that OK?
Both SINGLE_LINE & MULTI_LINE wrestle with the same detail.
Oh, wait. Could DESCRIPTION take advantage of SINGLE_LINE ?
Maybe we'd be better off insisting that a "strip all CRs!"
preprocessing step shall happen before lexing?
END_OF_TEXT: (WS | NEWLINE)+ -> mode(Body), type(NEWLINE);
Pair of tiny nits:
WS or NEWLINE is kind of funny, it is whitespace. But yeah, I get it. Consider using the name BLANK for WS? To free up the name for this expression?
I would find these slightly easier to read and compare if we consistently mentioned type / mode in the same order.
WS_FOOTER: ' ' -> skip;
Pretty sure you wanted to handle TABs, as well.
Maybe we could reuse WS?
| PIN) #RecognizedType
| IDENTIFIER #OtherType
I found the first comment helpful, thank you.
The second produced some cognitive dissonance with OTHER, sigh!
footerKeyValue: key = IDENTIFIER SEMICOLON value = TEXT NEWLINE?;
The NEWLINE? feels like it's possibly trouble.
Imagine that an annoying commit message
mentioned Another-Key: aaa Looks-Like-A-Header: bbb ccc
on a single line.
I'm concerned that TEXT would pick up just aaa.
It doesn't seem hard to accidentally stumble upon,
perhaps with Priority: time to implement: soon.
Or even with a URL.
Again, insisting on a preprocessing step,
in this case one which appends \n
so we're sure the document ends with NEWLINE,
seems fairly prudent.
commitMessage: type LPAREN scope = IDENTIFIER RPAREN SEMICOLON description = TEXT (NEWLINE body = TEXT)? (NEWLINE values += footerKeyValue+)? EOF;
Wrap this at 80 chars, please, to improve readability.
Overall?
All issues are minor.
Looks fine to ship and get some testing feedback
from real-world inputs. | {
"domain": "codereview.stackexchange",
"id": 44418,
"tags": "parsing, lexical-analysis, antlr"
} |
Relationship between the parameters tree width and the maximum degree | Question: I am working on parameterized complexity and started exploring on various structural parameters. The problem I am working on is known to be W[1]-hard parameterized by treewidth of the input graph and I am wondering if there is any known relationship between treewidth and maximum degree of the input graph. Could anyone provide the information containing the relationship between all the structural parameters.
TIA.
Answer: There is no direct relation.
The $n$-star (vertices $\{0,\dots,n\}$ and edges $(0,i)$ for $i>0$) has maximum degree $n$ and treewidth $1$.
The $n \times n$-grid has maximal degree $4$ and treewidth $n$.
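The star claim is easy to verify mechanically. Here is a sketch (my own code) that checks the three tree-decomposition conditions for the width-$1$ decomposition of the $n$-star — one bag $B_i = \{0, i\}$ per edge, with the bags chained in a path:

```python
def is_tree_decomposition(vertices, edges, bags, tree_edges):
    """Check the three tree-decomposition conditions; assumes the bag graph
    given by tree_edges is itself a tree."""
    # 1. every vertex of the graph appears in some bag
    if not all(any(v in b for b in bags) for v in vertices):
        return False
    # 2. both endpoints of every graph edge share some bag
    if not all(any(u in b and w in b for b in bags) for (u, w) in edges):
        return False
    # 3. the bags containing any fixed vertex form a connected subtree
    for v in vertices:
        holding = {i for i, b in enumerate(bags) if v in b}
        start = min(holding)
        reached, frontier = {start}, [start]
        while frontier:                      # BFS restricted to 'holding'
            i = frontier.pop()
            for (a, b) in tree_edges:
                for (x, y) in ((a, b), (b, a)):
                    if x == i and y in holding and y not in reached:
                        reached.add(y)
                        frontier.append(y)
        if reached != holding:
            return False
    return True

n = 6  # the n-star with n = 6: vertices {0,...,6}, edges (0, i)
vertices = range(n + 1)
edges = [(0, i) for i in range(1, n + 1)]
bags = [{0, i} for i in range(1, n + 1)]         # one bag per edge
tree_edges = [(i, i + 1) for i in range(n - 1)]  # bags chained in a path
print(is_tree_decomposition(vertices, edges, bags, tree_edges))  # True
width = max(len(b) for b in bags) - 1
print(width)  # 1, no matter how large the maximum degree n gets
```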
Granted, if your maximal degree is $2$ then the graph is a union of trees and cycles and hence has treewidth at most $2$. But starting from maximal degree $3$, you may have unbounded treewidth (for example, an expander of degree $3$ or the triangle grid).
What you have however is that for every $G = (V,E)$, $tw(G) \geq \min\{deg(v)\mid v \in V\}$. Indeed, let $T$ be a tree decomposition of $G$ of width $k$. We claim that there is a vertex of degree at most $k$ in $G$. For this, consider a leaf $l$ of $T$ and let $t$ be its father. If $B(l) \subseteq B(t)$, then you can remove $l$ from $T$ and still have a tree decomposition of $G$ of width $k$. Proceed until no more leaves of the tree decomposition can be removed.
Now you necessarily have a leaf $l$ such that $B(l) \not \subseteq B(t)$ where $t$ is the father of $l$. Hence, there exists $x \in B(l)$ that is not in $B(t)$. By connectivity, $x$ does not appear in any other bag of $T$. Hence, every edge of $G$ having $x$ as an endpoint is covered in $B(l)$. In other words, $x$ has at most $k$ neighbors in $G$.
"domain": "cs.stackexchange",
"id": 20548,
"tags": "graphs, np-hard, parameterized-complexity"
} |
Looking for a wheel | Question: If this is not the right community please point me in the right direction.
I am looking for a speed-measuring wheel similar to the one in the picture.
What is it called?
I have googled for a description and used Google image search but can't find a wheel that's the same.
It is for a Square Meter measuring table in our factory. Everywhere we ask they want to supply us with an encoder that is directly connected to the wheel. This means the encoder is spinning the whole time. We don't want this setup.
What we're looking for is a wheel that can be used by the pickup sensor that is connected to the bracket, and the wheel must have its own bearing.
I have added descriptions on the image.
Answer: The device you're looking for is called a Hall-effect sensor. There's a decent video on how they work here, and they're commonly used as crankshaft and camshaft position sensors in vehicles.
A typical Hall-effect sensor will only send a pulse when the magnetic/ferrous object passes by the sensor, so you've got no way to determine direction. If you want to determine direction, too, then you should look for a Hall-effect quadrature encoder. | {
"domain": "robotics.stackexchange",
"id": 2310,
"tags": "sensors, wheel"
} |
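As a footnote to the answer above, the reason a quadrature encoder resolves direction is the 90° phase offset between its two channels. A small decoder sketch (pure Python; the sample sequences are made up for illustration):

```python
# Valid quadrature (Gray-code) state order for one direction of rotation:
# (A,B): 00 -> 01 -> 11 -> 10 -> 00 ...
# Stepping forward through this cycle counts +1, backwards counts -1.
_ORDER = [(0, 0), (0, 1), (1, 1), (1, 0)]

def decode(samples):
    """Return the net step count from a sequence of (A, B) channel samples."""
    count = 0
    prev = samples[0]
    for cur in samples[1:]:
        if cur == prev:
            continue
        diff = (_ORDER.index(cur) - _ORDER.index(prev)) % 4
        if diff == 1:
            count += 1          # one step in the 'forward' direction
        elif diff == 3:
            count -= 1          # one step in the 'reverse' direction
        # diff == 2 would mean a missed transition (invalid jump)
        prev = cur
    return count

forward = [(0, 0), (0, 1), (1, 1), (1, 0), (0, 0)]
print(decode(forward), decode(forward[::-1]))  # prints: 4 -4
```

A single Hall-effect sensor gives only one pulse train, so `diff` could never be distinguished from its reverse — that is exactly why direction sensing needs the second channel.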
Calculate Initial Velocity For Orbital (Gravity) Slingshot | Question: I am trying to find the initial velocity to slingshot a planet around the sun and through a gap.
The green ball is the planet, and the yellow ball is the sun. In this trial I need to get the planet to go around the sun and through the gap at 278Gm. I have tried different approaches, but nothing seems to be even remotely correct. Anything under 20k m/s will land you in the sun and anything over 50k will slingshot you out of the system.
I want to know what formula to use so that I can solve this type of problem.
Answer: If you are in a circular orbit what you need is a Hohmann transfer, from Wikipedia:
In orbital mechanics, the Hohmann transfer orbit /ˈhoʊ.mʌn/ is an elliptical orbit used to transfer between two circular orbits of different radii in the same plane.
It works like this assuming the planet is in a circular orbit.
Then the amount of delta v needed to go from the green orbit to the yellow orbit follows from the vis-viva equation,
$$ v^2 = \mu\left(\frac{2}{r} - \frac{1}{a}\right) $$
where
$ v \,\!$ is the speed of an orbiting body
$\mu = GM\,\!$ is the standard gravitational parameter of the primary body, assuming $ M+m$ is not significantly bigger than $M$
(which makes $v_M \ll v$)
$r \,\!$ is the distance of the orbiting body from the primary focus
$a \,\!$ is the semi-major axis of the body's orbit.
Using an online calculator I deduce that the delta v you need is 25.07 km/s
This is independent of the mass of the planet.
Ok, let's start over with a different approach: what is the velocity exactly?
Let's just use our trusted elliptical orbits.
Then using equations from this link you can calculate the speed at any point of an ellipse with,
$$ v^2 = \mu\left(\frac{2}{r} - \frac{1}{a}\right) $$
which leads to 44.31 km/s at perihelion. | {
"domain": "physics.stackexchange",
"id": 18248,
"tags": "gravity, orbital-motion, planets"
} |
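The vis-viva and Hohmann formulas from the answer above translate directly into code. Since the question's exact orbital radii aren't restated here, the sketch below uses Earth→Mars radii around the Sun purely as illustrative stand-ins:

```python
import math

MU_SUN = 1.32712440018e20  # standard gravitational parameter of the Sun, m^3/s^2

def vis_viva(mu, r, a):
    """Orbital speed at distance r on an orbit with semi-major axis a."""
    return math.sqrt(mu * (2.0 / r - 1.0 / a))

def hohmann_dv1(mu, r1, r2):
    """Delta-v of the first burn of a Hohmann transfer from circular r1 to r2."""
    v_circular = math.sqrt(mu / r1)
    a_transfer = (r1 + r2) / 2.0   # semi-major axis of the transfer ellipse
    return vis_viva(mu, r1, a_transfer) - v_circular

r_earth, r_mars = 1.496e11, 2.279e11  # mean orbital radii, m
dv = hohmann_dv1(MU_SUN, r_earth, r_mars)
print(f"{dv / 1000:.2f} km/s")  # roughly 2.9 km/s for Earth -> Mars
```

Note the mass of the orbiting planet never appears, matching the answer's remark that the result is independent of it.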
Filtering random noise from a signal using excel | Question: I was given an excel file, and I was asked to filter the noise so that signal would be as close as possible to the original one, using also excel.
I kinda understand that I need to do $(x(n)+z)-z$ ($z$ being the noise), but I don't understand how to find the exact formula for the noise when the noise itself is generated using rand(). We were given the option to use DFT, FFT, FIR or IIR, so I guess the solution would be that, but my lecturer didn't really teach us about FIR and IIR (due to corona I guess, not enough time) and the lecture never really covered using the DFT/FFT on a signal with noise. So, how do I solve this problem using Excel?
Signal given (without noise) is as below:
$$x(n) = \sin\left(2\pi \frac{8}{128}n\right)$$
Information Frequency: 8Hz
Sampling Frequency: 128Hz
Answer:
I kinda understand that I need to do $(x(n)+z)-z$ ($z$ being the noise)
There's an engineering joke that goes: "If you know you've got noise in your signal, why don't you just subtract it!"
It's a joke, because the main property of noise is that you do not know its value, so you cannot just subtract it!
I don't understand how to find the exact formula to find the noise when the noise itself is generated using rand().
Exactly. That's because (if rand is actually random), there is no formula.
So, you cannot subtract the noise like that completely.
What you can do is filter (and all the options you've been given are filters, or filter banks, with DFT being identical to the FFT, not quite sure you're fully grasping your assignment there)!
So, you want to build a filter that lets through nothing but your signal of interest – your 8 Hz sine wave.
(In the following: italics denote terms that your lecture most probably introduced. If you don't know them – look them up! Relevant to future understanding, and quite possibly to your grade.)
As you know (or should know, sounds like that was the whole point of the lecture), filtering is applying a linear system to a signal. And I'd recommend you consider a FIR first, so that filtering is just convolution of the signal with the impulse response of that filter (i.e. the filter coefficient vector gets convolved with the signal).
Because of linearity, you can look at how the convolution with that filter affects a) the pure signal and b) the pure noise, because if you add the results, you'll get the same as if you filtered signal added to noise.
So, you're looking for an impulse response that maximizes the amplitude of the sine passing through it. Convolution has a formula, and hint: if you use the same (just ever so slightly modified) as the thing you convolve with that you want to maximize, then you get a high amplitude.
This is as far as I want to do your homework. | {
"domain": "dsp.stackexchange",
"id": 8826,
"tags": "filters, discrete-signals, dft"
} |
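The answer's hint — convolve the signal with a (slightly modified) copy of the thing you want to maximize — can be sketched as follows. This is an illustrative matched-filter-style example, not the exact assignment solution; `tone_fraction` is an ad-hoc helper measuring how much of the power sits in the 8 Hz DFT bin:

```python
import math
import random

fs, f0, N = 128, 8, 256          # sampling rate, tone frequency, signal length
random.seed(1)
clean = [math.sin(2 * math.pi * f0 * n / fs) for n in range(N)]
noisy = [c + random.gauss(0.0, 1.0) for c in clean]

# FIR impulse response: one period of the target sine (16 samples at 128 Hz).
h = [math.sin(2 * math.pi * f0 * k / fs) for k in range(fs // f0)]

def fir_filter(x, h):
    """Direct-form convolution of signal x with impulse response h."""
    return [sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)
            for n in range(len(x))]

def tone_fraction(x, f, fs):
    """Fraction of total power that lies in the single DFT bin at frequency f."""
    re = sum(v * math.cos(2 * math.pi * f * n / fs) for n, v in enumerate(x))
    im = sum(v * math.sin(2 * math.pi * f * n / fs) for n, v in enumerate(x))
    return (re * re + im * im) / (len(x) / 2) / sum(v * v for v in x)

out = fir_filter(noisy, h)
print(tone_fraction(noisy, f0, fs), tone_fraction(out, f0, fs))
```

The 8 Hz fraction comes out much higher after filtering: the FIR's gain peaks at the sine's frequency while broadband noise is attenuated, which is exactly the "maximize the amplitude of the sine passing through" idea.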
Is the universe problem for one-counter automata with restricted alphabet size undecidable? | Question: Consider the following universe problem.
The universe problem. Given a finite set $\Sigma$ for a class of languages, and an automaton accepting the language $L$, decide if $L=\Sigma^*$.
In [1], it is stated and proved that the universe problem is undecidable for a particular class of one-counter automata. This result then follows for the class of all non-deterministic one-counter automata. I'm wondering if it is known whether this problem is still undecidable when we restrict the size of the input alphabet of the automaton.
I think that with alphabet size 1 the problem becomes decidable, but what about size 2? And if that turns out to be decidable what is the smallest value of $n \in \mathbb{N}$ such that the problem is undecidable.
I think it's probable that the answer to this question is known but I'm having trouble finding an answer. If it is already known then I would appreciate a reference.
[1] Ibarra, O. H. (1979). Restricted one-counter machines with undecidable universe problems. Mathematical systems theory, 13(1), 181-186
Answer: It must be undecidable for an alphabet with two symbols. It is possible to code any alphabet into two letters, e.g., map 16 symbols to the length 4 binary strings $aaaa, aaab, \dots, bbbb$. Then equality to $\Sigma^*$ is equivalent to equality to all possible codes for strings. In the 16 letter example this means equality to all strings of a multiple of four letters. Clearly that is not universality. That is obtained by adding those binary strings that are not coding. That is a regular set and can be generated by a one counter automaton.
The same explanation, with $\LaTeX$ for those who appreciate it.
Assume universality is undecidable for $\Sigma$. Let $h: \Sigma^* \to \{0,1\}^*$ be an injective morphism. Now $L = \Sigma^*$ iff $h(L) = h(\Sigma^*)$. This in turn is equivalent to $h(L) \cup R = \{0,1\}^*$ where $R$ is the (fixed) regular language $\{0,1\}^* - h(\Sigma^*)$. Hence we cannot decide whether the binary one counter language $h(L) \cup R$ is universal. Note that language is one counter as the family is closed under morphisms and union (with regular languages).
As you state "I think that" I can also confirm the question is decidable for a one letter alphabet. It is decidable for push-down automata (hence context-free languages) as one letter CFL are (effectively) equivalent to regular languages. | {
"domain": "cs.stackexchange",
"id": 1442,
"tags": "formal-languages, reference-request, automata, undecidability, decision-problem"
} |
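The fixed-length block coding used in the answer above is easy to make concrete. A sketch (hypothetical 4-symbol alphabet; `a`/`b` play the roles of the two binary letters):

```python
from itertools import product

def make_morphism(alphabet):
    """Injective morphism h: Sigma* -> {a,b}* via fixed-length code words."""
    width = max(1, (len(alphabet) - 1).bit_length())
    codes = ["".join(bits) for bits in product("ab", repeat=width)]
    return dict(zip(alphabet, codes))

def h(word, code):
    """Apply the morphism letter by letter."""
    return "".join(code[s] for s in word)

def h_inverse(coded, code):
    """Decode by cutting into fixed-width blocks; well-defined by injectivity."""
    width = len(next(iter(code.values())))
    inv = {v: k for k, v in code.items()}
    return [inv[coded[i:i + width]] for i in range(0, len(coded), width)]

code = make_morphism(["x", "y", "z", "w"])
encoded = h(["x", "z", "y"], code)
print(encoded, h_inverse(encoded, code))  # prints: aabaab ['x', 'z', 'y']
```

Because every code word has the same length, $h$ is injective and $h(L) = h(\Sigma^*)$ iff $L = \Sigma^*$ — the non-code binary strings are the regular set $R$ the answer unions in.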
Relation between Magnitude of Greenhouse Effect and the Concentration of GHGs like $CO_2$ | Question: The Average Surface Temperature of the Earth is calculated by the following equation:
$$\sigma T_s^4=\frac{(1-A)\Omega}{4}+\Delta E$$
where,
$\sigma$= Stefan-Boltzmann Constant
$T_s$= Average Surface Temperature of Earth
$A$= Global Albedo
$\Omega$= Total Solar Irradiance
And, $\Delta E$= Magnitude of Greenhouse Effect
The observed average surface temperature of Earth is about $288K$, Global Albedo as seen by satellites is about $0.3$ and the total solar irradiance is about $1370W/m²$. Putting these values in the above equation, $\Delta E$ comes out to be about $150W/m²$.
Now, according to this
PDF I found online, the concentration of $CO_2$ is related to $\Delta E$ by the following equation:
$$\Delta E=133.26+0.044[CO_2]$$
When we put $\Delta E=150$, we get the $[CO_2]=380$ which is the actual concentration of $CO_2$ in the atmosphere according to some online sources in ppm. So the above relation seems correct.
How was this relation calculated? And, how to calculate such relations for other Greenhouse gases, such as $H_2O$?
In other words, given the Equation:
$$\Delta E=x+y[H_2O]$$
Find x and y.
What I noticed: The coefficient of $[CO_2]$ in the above mentioned equation is $0.044$. And, the molecular mass of $CO_2$ is $44.01 amu $. So, Molecular mass might be involved in the calculation of $y$.
My approach: I focused on finding $y$ as $x$ can be calculated later from the same equation since $\Delta E$ is known and concentration of $H_2O$ should be available online. According to my understanding of the Greenhouse effect, $y$ represents how much energy per unit area does $1$ $ppm$ of a $GHG$ can trap and transmit down to Earth. I tried searching online for some data on the same, but could find none.
Please throw some light on the topic.
Thank You.
Answer:
$$\Delta E=133.26+0.044[CO_2]$$
It’s just a numerical coincidence that the 0.044 looks like the molar mass of CO2. That equation gives 0.044 * 380 = 16 W/m2, which is the reduction in surface downwelling IR if you remove all the CO2 from the present day atmosphere (e.g., see Table 1 of Zhong and Haigh, 2013, Weather, https://doi.org/10.1002/wea.2072, pdf).
Looking at water vapour in the same way exposes some of the limits of the model they’ve set up for this tutorial, which was presumably kept simple just to teach some concepts to their students. To do the same thing for water vapour as was done for CO2 would require removing 208 W/m2 of greenhouse effect (again, see Table 1 of that article), more than the total greenhouse effect of 150 W/m2 in their model. The discrepancy is because they have too much solar radiation in the surface energy budget, so the diagnosed greenhouse effect doesn’t have to do as much work to balance a 288 K surface temperature.
A more realistic model would exclude the 80 W/m2 of solar radiation absorbed by the atmosphere, leaving an absorbed solar flux at the surface of 160 W/m2. The balance is then $390 = 160 + \Delta E$, so $\Delta E = 230$ W/m2 and the revised perturbation equation for CO2 is,
$$\Delta E = 214 + 0.044[CO2]$$
The difficulty with H20 is that, unlike CO2, the concentration of water vapour is highly variable in space and time, so the equivalent equation will depend a lot on what you chose as present-day H2O concentration. A ball-park figure for mean H2O concentration is 4000 ppm, which leads to the equation,
$$\Delta E = 22 + 0.052[H2O]$$
I should emphasise that this model isn’t intended for any serious calculation. It’s just a tool to get students used to quantifying parts of the Earth system and how they can trade off against each other. | {
"domain": "earthscience.stackexchange",
"id": 2219,
"tags": "meteorology, atmosphere, temperature, homework, greenhouse-gases"
} |
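The arithmetic in the question and answer above is straightforward to reproduce; the snippet below solves the balance equation for $\Delta E$ and inverts the tutorial's linear CO₂ relation (constants as given in the question — this is the tutorial's toy model, not a serious radiative calculation):

```python
SIGMA = 5.67e-8                    # Stefan-Boltzmann constant, W m^-2 K^-4
T_s, A, OMEGA = 288.0, 0.3, 1370.0  # surface temp (K), albedo, irradiance (W/m^2)

# sigma T^4 = (1 - A) Omega / 4 + dE  ->  solve for the greenhouse term dE
dE = SIGMA * T_s**4 - (1.0 - A) * OMEGA / 4.0

# Invert the tutorial's linear relation dE = 133.26 + 0.044 [CO2]
co2_ppm = (dE - 133.26) / 0.044

print(f"dE = {dE:.1f} W/m^2, [CO2] = {co2_ppm:.0f} ppm")
```

This lands near $\Delta E \approx 150\,W/m^2$ and a CO₂ concentration in the high 300s of ppm, consistent with the "about 380 ppm" figure quoted in the question.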
Stock quote checking script | Question: I know that this code would look a lot better if I made it so that checking the current prices against the open prices were in a function. This would avoid rewriting it for every stock I want to check. But I'm not sure how to get started on doing that properly. Do any of you have some tips to get me started?
from yahoo_finance import Share
apple = Share('AAPL')
appleopen = float(apple.get_open())
applecurrent = float(apple.get_price())
if appleopen > applecurrent:
    print(("Apple is down for the day. Current price is"), applecurrent)
else:
    print(("Apple is up for the day! Current price is "), applecurrent)
applechange = (applecurrent - appleopen)
if applechange > 0:
    print(('The price moved'),abs(applechange),("to the upside today."))
else:
    print(('The priced moved'),abs(applechange),("to the downside today."))
print('-----------------------')
nflx = Share('NFLX')
nflxopen = float(nflx.get_open())
nflxcurrent = float(nflx.get_price())
if nflxopen > nflxcurrent:
    print(("Netflix is down for the day. Current price is"), nflxcurrent)
else:
    print(("Netflix is up for the day! Current price is "), nflxcurrent)
nflxchange = (nflxcurrent - nflxopen)
if nflxchange > 0:
    print(('The price moved'),abs(nflxchange),("to the upside today."))
else:
    print(('The priced moved'),abs(nflxchange),("to the downside today."))
Answer: As you yourself noted, you need a function:
def financial_info(acronym):
    company = Share(acronym)
    company_open = float(company.get_open())
    company_current = float(company.get_price())
    if company_open > company_current:
        print(acronym, "is down for the day. Current price is", company_current)
    else:
        print(acronym, "is up for the day! Current price is", company_current)
    # ...
I just wrote a part of it, you sure are able to finish it :) | {
"domain": "codereview.stackexchange",
"id": 15835,
"tags": "python, finance"
} |
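One way to finish the refactor suggested in the answer is to separate the pure formatting logic from the network fetch, which also makes it unit-testable. The `Share`/`get_open`/`get_price` calls mirror the question's `yahoo_finance` usage; everything else is an illustrative sketch:

```python
def summarize(name, open_price, current_price):
    """Build the day-summary lines for one ticker from its open/current prices."""
    direction = "up" if current_price >= open_price else "down"
    side = "upside" if current_price >= open_price else "downside"
    change = abs(current_price - open_price)
    return [
        f"{name} is {direction} for the day. Current price is {current_price}",
        f"The price moved {change:.2f} to the {side} today.",
    ]

def report(symbol, share_cls):
    """Fetch prices for one ticker and print its summary.

    share_cls is injected (e.g. yahoo_finance.Share) so summarize stays pure.
    """
    share = share_cls(symbol)
    for line in summarize(symbol, float(share.get_open()), float(share.get_price())):
        print(line)

print(summarize("AAPL", 100.0, 103.5))
```

With this split, adding a ticker is one `report("NFLX", Share)` call instead of a copied block.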
Why doesn't a ball thrown upwards fall behind us if we're moving forward? | Question: If we consider ourselves traveling in a bus and we throw
a ball upwards, the ball should fall behind us according to the law of inertia, right? If so, why doesn't the ball fall behind us each time we throw it upwards? The Earth is also moving, like the bus, right?
Answer: Assuming no air resistance, the Earth not rotating, the bus moving at constant velocity etc.
If you are looking at the motion of the ball relative to the bus, ie sitting inside the bus, what would you observe?
You would observe the ball going vertically upwards and then coming down vertically downwards.
How would you explain this?
The only force on the ball is its weight which acts vertically downwards and there are no horizontal forces, so if you threw the ball with no horizontal velocity relative to the bus, the ball would continue to have no horizontal velocity as there are no horizontal forces.
What would an observer on the ground see?
That observer would observe you throwing the ball upwards whilst at the same time the ball would have a horizontal velocity equal to that of you and the bus.
Thus the observer on the ground would see you moving at a constant horizontal velocity and the ball moving with the same constant horizontal velocity ie the ball would always be seen by the observer on the ground to be vertically above you and the trajectory of the ball to the observer on the ground would be a parabola - projectile motion.
With air resistance in an open topped bus there would be a horizontal force on the ball in a direction opposite to that of the bus and so the magnitude of the horizontal velocity of the ball would decrease and the ball would progressively lag behind you and return behind you. | {
"domain": "physics.stackexchange",
"id": 56757,
"tags": "newtonian-mechanics, inertial-frames, relative-motion, inertia"
} |
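The ground-frame argument in the answer reduces to one line of kinematics: ball and passenger share the same constant horizontal velocity, so their horizontal positions stay equal at all times. A tiny check with made-up numbers (no air resistance):

```python
# In the ground frame: ball and passenger share the same constant
# horizontal velocity; only the ball also has vertical motion under gravity.
g, v_bus, v_up = 9.81, 10.0, 5.0   # m/s^2, m/s, m/s (illustrative values)

def positions(t):
    x_passenger = v_bus * t
    x_ball = v_bus * t                    # no horizontal force on the ball
    y_ball = v_up * t - 0.5 * g * t * t   # vertical projectile motion
    return x_passenger, x_ball, y_ball

# At every instant the ball is directly above the passenger,
# including the landing time t = 2 * v_up / g.
for t in [0.0, 0.2, 0.5, 2 * v_up / g]:
    xp, xb, yb = positions(t)
    assert xp == xb
```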
Need help showing this Hydrogen wave function is normalized | Question: I need to verify that the following Hydrogen atom wave function
$$\Psi(x,y,z;t=0)\equiv\frac{4}{(2a)^{3/2}}\left[\frac{1}{\sqrt{4\pi}}e^{-r/a}+A\frac{r}{a}e^{-r/(2a)}\left(-iY_1^{+1}+Y_1^{-1}+\sqrt{7}Y_1^0\right)\right]$$
is normalized for $A=1/(12\sqrt{6})$.
I know that this implies showing that the inner product of $\Psi$ with itself equals 1, though getting there has proven to be a challenge.
I have tried both plugging in the definitions of the spherical harmonics and solving the integral directly (this leads to a very large number of terms) and by substituting in $R_{nl}$ radial functions (but this leaves the first term as a radial function on its own).
It feels like the second method would be the intended way to approach the problem but I could really use a tip in the right direction.
Edit: I forgot that $Y_0^0=\sqrt{1/4\pi}$. Plugging this into the first term and massaging the coefficients to obtain $R_{nl}$'s for both terms allows for a representation of $\Psi$ in terms of Hydrogen $\psi_{nlm}$ wave functions.
Answer: Orthogonality of the spherical harmonics will eliminate the cross-terms: $\int d\Omega\, Y_{\ell m}Y_{\ell' m'}^* =\delta_{\ell\ell'}\delta_{mm'}$, so your term with spherical harmonics, upon $\int d\Omega$, will give you
$9\vert A\vert^2 r^2 e^{-r/a}/a^2$.
You are rapidly left with
$$
\frac{8}{a^3}
\int_0^\infty dr\, r^2 \left(e^{-2r/a} + 9\vert A\vert^2 \frac{r^2}{a^2}e^{-r/a} \right)
$$
where the $1/4\pi$ factor has been eliminated using $\int d\Omega=4\pi$. The rest is integration by parts.
Edit This question already has your $\psi$ in the form
$$
\psi(r,\theta,\phi)=\sum_{n\ell m}c_{n\ell m} \psi_{n\ell m}(r,\theta,\phi)
$$
so normalization amounts to verifying that $\sum_{n\ell m}\vert c_{n\ell m}\vert^2=1$ since the solutions $\psi_{n\ell m}(r,\theta,\phi)$ are orthonormal under integration | {
"domain": "physics.stackexchange",
"id": 39739,
"tags": "quantum-mechanics, homework-and-exercises, wavefunction, normalization"
} |
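The value $A = 1/(12\sqrt{6})$ can be double-checked numerically. After the angular integrals are done exactly via orthonormality of the spherical harmonics (as in the answer), only two radial integrals remain; the sketch below evaluates them with a simple trapezoid rule, setting $a=1$:

```python
import math

A = 1.0 / (12.0 * math.sqrt(6.0))

def trapezoid(f, lo, hi, n=20000):
    """Composite trapezoid rule for f on [lo, hi] with n panels."""
    step = (hi - lo) / n
    total = 0.5 * (f(lo) + f(hi))
    total += sum(f(lo + i * step) for i in range(1, n))
    return total * step

# Radial integrals (a = 1); the cross term vanishes by Y-orthogonality.
I1 = trapezoid(lambda r: r**2 * math.exp(-2.0 * r), 0.0, 50.0)  # exactly 1/4
I2 = trapezoid(lambda r: r**4 * math.exp(-r), 0.0, 50.0)        # exactly 24

# |4/(2a)^{3/2}|^2 = 16/8 = 2 for a = 1; the three Y-terms contribute 1+1+7 = 9.
norm = 2.0 * (I1 + 9.0 * A**2 * I2)
print(norm)  # approximately 1.0
```

Indeed $2(1/4 + 9 \cdot 24/864) = 1$, confirming that $A=1/(12\sqrt 6)$ normalizes $\Psi$.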