anchor stringlengths 0 150 | positive stringlengths 0 96k | source dict |
|---|---|---|
What is the cause for the fine structure? | Question: What is the cause for the fine structure? I know that it describes the splitting of the spectral lines of atoms, but I don't understand its cause.
Any help is appreciated.
Answer: The splitting of spectral lines into close pairs is due to the coupling between the electron's orbital angular momentum and its spin, the electron having either of two possible spin states (spin-orbit coupling). | {
"domain": "physics.stackexchange",
"id": 59633,
"tags": "quantum-mechanics, atomic-physics, spectroscopy"
} |
How do you sub-sample an FFT? | Question: I'm currently trying to understand this paper on a Sparse Fourier Transform.
On page four, there is a section on a sub-sampled FFT. The purpose of this section is to show that you can compute a sub-sampled FFT using fewer computations than the original FFT. So if I have a signal of size N, I could compute a subsampled FFT of size B, which would require O(N + B log B) operations.
However, I'm confused as to how you would obtain this subsampled FFT. The idea is that we want the original FFT spectrum to be sampled at B locations. Obviously we could miss coefficients by only sampling the spectrum at B locations, however, it seems like the bigger problem is that subsampling the signal would result in aliasing frequencies.
Essentially I'm not sold on the fact that the described algorithm would result in a sub sampled FFT of the original spectrum. Instead, I think the spectrum could be aliased resulting in incorrect coefficients.
Answer:
it seems like the bigger problem is that subsampling the signal would result in aliasing frequencies
No, subsampling in the frequency domain corresponds to aliasing in the time domain. So the idea here is to purposefully alias in the time domain so that you get a sub-sampled FFT. That is exactly why 'Claim 3.7' of the paper mentions $y_i=\sum_{j=0}^{n/B-1}x_{i+Bj}$. These are $n/B$ copies of $x[n]$, shifted by $B$ and overlapped. Each copy is positioned at $0,B,2B,\ldots,(\frac{n}{B}-1)B$. This aliased signal can then be fed to an FFT ($B\log_2 B$ operations) to arrive at the subsampled FFT. The total number of operations will be the operations needed to create the aliased signal (summing the non-zero values of $x$, which is $O(\text{supp}(x))$) plus the operations needed to compute the FFT of the aliased $x$ ($O(B\log_2 B)$).
For example, if a time-domain signal is $1024$ samples long, ideally you need a $1024$-point FFT. But if you sub-sample the spectrum by a factor of $8$, so that $B=128$ and $n/B=1024/128=8$, the aliased time-domain sample $\tilde{x}[0]$ is the sum of $x[0]$, $x[128]$, $x[256]$, $\ldots$, $x[896]$; in general, $\tilde{x}[n]=\sum_{k=0}^{7}x[128k+n]$. You can now use this $\tilde{x}$ to compute a $B$-point FFT to arrive at the subsampled FFT. | {
"domain": "dsp.stackexchange",
"id": 8776,
"tags": "fft, frequency-spectrum, dft, aliasing"
} |
no rviz interface in Ubuntu 11.04 (natty) | Question:
I have exactly the same problem as described in this question, i.e. the rviz interface does not show up if I start roscore and "rosrun rviz rviz".
I have installed the latest ros-diamondback-* packages from the official repository (version 1.4.1-s1308330112~natty). I tried several different window managers including GNOME, Unity, and KDE Plasma. I tried four different computers with Natty, and the most disturbing part is that it works on exactly one machine (with KDE), but for no apparent reason; it is the same hardware (HP Z800 workstation).
I have successfully run Gazebo with Ogre and glxgears, so there is no obvious problem with my 3D acceleration (NVIDIA). There are no helpful error messages in the logs. There are a few warnings and errors, but they show up on the working machine as well.
Originally posted by roehling on ROS Answers with karma: 1951 on 2011-07-06
Post score: 1
Original comments
Comment by tfoote on 2011-07-09:
The error may not be useful to you but please post it anyway. Also what's the output of 'dpkg -l ros-diamondback-visualization'
Answer:
The problem turned out to be the environment variable GDK_NATIVE_WINDOWS. I have set this variable in my .bashrc to circumvent problems with Eclipse (see http://blogs.gurulabs.com/dax/2009/10/what-gdk-native.html for a more in-depth explanation). Without this variable, the interface shows up just fine.
However, a colleague of mine has set the same variable and it causes no problems for him. He uses Ubuntu Lucid, though.
Originally posted by roehling with karma: 1951 on 2011-07-10
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 6059,
"tags": "rviz, ubuntu-natty, ubuntu"
} |
KenKen puzzle solver | Question: I made a program that solves KenKen Puzzles using graphics.py, and I was just wondering if my code was reasonably Pythonic.
neknek.py
import options
import copy
from graphics import *
from random import randint
from itertools import product
class Cell:
def __init__(self, x, y):
self.options = [1, 2, 3, 4, 5, 6]
self.x = x
self.y = y
self.answer = 0
self.text = Text(Point(x * 100 + 50, y * 100 + 50), str(self.options))
self.text.setSize(10)
self.marker = Circle(Point(self.x * 100 + 50, self.y * 100 + 80), 10)
self.marker.setFill(color_rgb(255, 0, 0))
def update(self):
if len(self.options) == 1 and self.answer == 0:
self.answer = self.options[0]
self.text.setText(str(self.answer))
self.marker.setFill(color_rgb(0, 255, 0))
elif self.answer > 0:
self.text.setText(str(self.answer))
self.marker.setFill(color_rgb(0, 255, 0))
elif self.answer == 0:
self.text.setText(str(self.options))
def remove_options(self, options):
if len(list(set(self.options) - set(options))) > 0:
self.options = list(set(self.options) - set(options))
class Group:
def __init__(self, cells, goal):
self.cells = cells
self.goal = goal
self.options = []
def happy(self):
raise NotImplementedError
def check_bend(self):
x = []
y = []
for cell in self.cells:
x.append(cell.x)
y.append(cell.y)
return not (self._bend_helper(x) or self._bend_helper(y))
def _all_options(self):
temp = []
for cell in self.cells:
if cell.answer > 0:
temp.append([cell.answer])
else:
temp.append(cell.options)
temp = list(product(*temp))
li = []
for x in temp:
li.append(list(x))
return li
def _bend_helper(self, li):
for i in li:
if li.count(i) == len(li):
return True
return False
def _check_group_options(self, cell, i):
for option in self.options:
if i == option[self.cells.index(cell)]:
return True
return False
def _remove_group_options(self):
possible = self._all_options()
remove = []
for option in self.options:
if option not in possible:
remove.append(option)
for option in remove:
self.options.remove(option)
def _remove_cell_options(self):
for cell in self.cells:
remove = []
for option in cell.options:
if not self._check_group_options(cell, option):
remove.append(option)
cell.remove_options(remove)
def solve(self):
self._remove_group_options()
self._remove_cell_options()
for cell in self.cells:
cell.update()
class AddGroup(Group):
def __init__(self, cells, goal):
super().__init__(cells, goal)
if self.check_bend():
try:
self.options = copy.deepcopy(options.add_options[len(self.cells)][goal + '*'])
except KeyError:
self.options = copy.deepcopy(options.add_options[len(self.cells)][goal])
else:
self.options = copy.deepcopy(options.add_options[len(self.cells)][goal])
def happy(self):
total = 0
for cell in self.cells:
total += cell.answer
return total == int(self.goal)
class SubGroup(Group):
def __init__(self, cells, goal):
super().__init__(cells, goal)
if self.check_bend():
try:
self.options = copy.deepcopy(options.sub_options[len(self.cells)][goal + '*'])
except KeyError:
self.options = copy.deepcopy(options.sub_options[len(self.cells)][goal])
else:
self.options = copy.deepcopy(options.sub_options[len(self.cells)][goal])
def happy(self):
return max(self.cells[0].answer, self.cells[1].answer) - min(self.cells[0].answer, self.cells[1].answer) == int(self.goal)
class MulGroup(Group):
def __init__(self, cells, goal):
super().__init__(cells, goal)
if self.check_bend():
try:
self.options = copy.deepcopy(options.mul_options[len(self.cells)][goal + '*'])
except KeyError:
self.options = copy.deepcopy(options.mul_options[len(self.cells)][goal])
else:
self.options = copy.deepcopy(options.mul_options[len(self.cells)][goal])
def happy(self):
total = 1
for cell in self.cells:
total *= cell.answer
return total == int(self.goal)
class DivGroup(Group):
def __init__(self, cells, goal):
super().__init__(cells, goal)
if self.check_bend():
try:
self.options = copy.deepcopy(options.div_options[len(self.cells)][goal + '*'])
except KeyError:
self.options = copy.deepcopy(options.div_options[len(self.cells)][goal])
else:
self.options = copy.deepcopy(options.div_options[len(self.cells)][goal])
def happy(self):
return max(self.cells[0].answer, self.cells[1].answer) / min(self.cells[0].answer, self.cells[1].answer) == int(self.goal)
class RowColGroup(Group):
def __init__(self, cells):
super().__init__(cells, '21')
self.options = options.row_col_options
def happy(self):
total = 0
for cell in self.cells:
total += cell.answer
return total == int(self.goal)
def _remove_used_answers(self):
remove = []
for cell in self.cells:
if cell.answer != 0:
remove.append(cell.answer)
for cell in self.cells:
cell.remove_options(remove)
def _remove_used_options(self):
remove = []
used_options = []
for cell in self.cells:
if len(cell.options) == used_options.count(cell.options) + 1:
for i in cell.options:
remove.append(i)
else:
used_options.append(cell.options)
for cell in self.cells:
cell.remove_options(remove)
def _count_options(self):
options = [0, 0, 0, 0, 0, 0, 0]
for cell in self.cells:
for option in cell.options:
options[option] += 1
for i in range(1, 7):
if options[i] == 1:
for cell in self.cells:
if i in cell.options and cell.answer != i:
cell.options = [i]
def solve(self):
self._remove_used_answers()
self._remove_used_options()
self._count_options()
for cell in self.cells:
cell.update()
class Puzzle:
def __init__(self, cells, groups, rows, cols, win):
self.width = 100 * 6
self.height = 100 * 6
self.cells = cells
self.win = win
self.prompt = Text(Point((self.width + 150) / 3, self.height + 25), '')
self.prompt.setSize(10)
self.prompt.draw(self.win)
self.input = Entry(Point(2 * (self.width + 150) / 3, self.height + 25), 5)
self.input.setSize(10)
self.input.draw(self.win)
self.groups = groups
self.rows = rows
self.cols = cols
@classmethod
def from_gui(cls):
width = 100 * 6
height = 100 * 6
cells = {}
win = GraphWin('KenKen', width + 200, height + 50)
prompt = Text(Point((width + 150) / 3, height + 25), 'How many groups are there?')
prompt.setSize(10)
prompt.draw(win)
input = Entry(Point(2 * (width + 150) / 3, height + 25), 5)
input.setSize(10)
input.draw(win)
groups = []
rows = []
cols = []
r = Rectangle(Point(width + 10, 20), Point(width + 190, 50))
r.setFill(color_rgb(255, 0, 0))
r.draw(win)
t = Text(Point(width + 100, 35), 'Remove')
t.setSize(10)
t.draw(win)
r = Rectangle(Point(width + 10, 60), Point(width + 190, 90))
r.setFill(color_rgb(0, 255, 0))
r.draw(win)
t = Text(Point(width + 100, 75), 'Enter')
t.setSize(10)
t.draw(win)
for i in range(6):
l = Line(Point((i + 1) * 100, 0), Point((i + 1) * 100, height))
l.setWidth(4)
l.draw(win)
l = Line(Point(0, (i + 1) * 100), Point(width, (i + 1) * 100))
l.setWidth(4)
l.draw(win)
for y in range(6):
for x in range(6):
c = Cell(x, y)
cells[x, y] = c
c.text.draw(win)
win.getMouse()
group_num = int(input.getText())
input.setText('')
for i in range(group_num):
g = randint(0, 255)
r = randint(g, 255)
b = randint(g, 255)
color = color_rgb(r, g, b)
prompt.setText("Enter the goal and operator for group {}, then select the cells.".format(i))
group_cells = []
while True:
p = win.getMouse()
inp = input.getText().split()
op = inp[1]
goal = inp[0]
x, y = p.getX(), p.getY()
if y > height or x > width:
if op == '+':
g = AddGroup(group_cells, goal)
elif op == '-':
g = SubGroup(group_cells, goal)
elif op == '*':
g = MulGroup(group_cells, goal)
else:
g = DivGroup(group_cells, goal)
groups.append(g)
g.solve()
input.setText('')
break
else:
if cells[x // 100, y // 100] in group_cells:
group_cells.remove(cells[x // 100, y // 100])
cells[x // 100, y // 100].marker.undraw()
else:
group_cells.append(cells[x // 100, y // 100])
cells[x // 100, y // 100].marker.setFill(color)
cells[x // 100, y // 100].marker.draw(win)
for y in range(6):
group_cells = []
for x in range(6):
group_cells.append(cells[x, y])
rows.append(RowColGroup(group_cells))
for x in range(6):
group_cells = []
for y in range(6):
group_cells.append(cells[x, y])
cols.append(RowColGroup(group_cells))
prompt.setText('')
input.setText('')
puzzle = cls(cells, groups, rows, cols, win)
return puzzle
def copy(self):
win = GraphWin('KenKen', 100 * 6, 100 * 6 + 50)
for i in range(6):
l = Line(Point((i + 1) * 100, 0), Point((i + 1) * 100, 600))
l.setWidth(4)
l.draw(win)
l = Line(Point(0, (i + 1) * 100), Point(600, (i + 1) * 100))
l.setWidth(4)
l.draw(win)
cells = {}
for y in range(6):
for x in range(6):
c = Cell(x, y)
c.options = copy.deepcopy(self.cells[x, y].options)
c.answer = self.cells[x, y].answer
c.marker.draw(win)
c.text.draw(win)
if c.answer > 0:
c.update()
cells[x, y] = c
groups = []
for group in self.groups:
c = []
for cell in group.cells:
c.append(cells[cell.x, cell.y])
if type(group) is AddGroup:
g = AddGroup(c, group.goal)
elif type(group) is SubGroup:
g = SubGroup(c, group.goal)
elif type(group) is MulGroup:
g = MulGroup(c, group.goal)
else:
g = DivGroup(c, group.goal)
g.options = copy.deepcopy(group.options)
groups.append(g)
rows = []
for row in self.rows:
c = []
for cell in row.cells:
c.append(cells[cell.x, cell.y])
r = RowColGroup(c)
r.options = copy.deepcopy(row.options)
rows.append(r)
cols = []
for col in self.cols:
c = []
for cell in col.cells:
c.append(cells[cell.x, cell.y])
c = RowColGroup(c)
c.options = copy.deepcopy(col.options)
cols.append(c)
return Puzzle(cells, groups, rows, cols, win)
def solve(self):
tries = 0
while self.total() < 36 and tries < 60:
tries += 1
for row in self.rows:
row.solve()
self.prompt.setText('Solving Rows... Continue.')
for col in self.cols:
col.solve()
self.prompt.setText('Solving Columns... Continue.')
for group in self.groups:
group.solve()
self.prompt.setText('Solving Groups... Continue.')
if self.total() < 36:
for row in self.rows:
for cell in row.cells:
if cell.answer == 0:
for option in cell.options:
result = self.try_solve(cell, option)
if result is not None:
self.win.close()
return result
elif self.happy():
return self
self.win.close()
return None
def try_solve(self, cell, option):
c = self.copy()
c.cells[cell.x, cell.y].answer = option
c.cells[cell.x, cell.y].options = [option]
result = c.solve()
return result
# cell.answer = option
# return self.solve()
def happy(self):
all_groups = self.rows + self.cols + self.groups
for group in all_groups:
if not group.happy():
return False
return True
def total(self):
total = 0
for y in range(6):
for x in range(6):
if self.cells[x, y].answer > 0:
total += 1
return total
def get_input(self):
p = self.win.getMouse()
x, y = p.getX(), p.getY()
if x >= self.width:
if 20 < y < 50:
self.remove_options(self.win.getMouse(), int(self.prompt.getText()))
else:
self.solve()
def remove_options(self, p, i):
pass
if __name__ == '__main__':
p = Puzzle.from_gui()
p = p.solve()
p.win.getMouse()
options.py is just every possible combination for different groups.
Answer: There was no real explanation here of what this code does, and frankly there was a lot of it. Also, I had no way to run this code, so I will stick with some more generally pythonic things, which is what you asked for anyways.
In general your code looks pretty good. For code tagged beginner, it is in fact excellent. I will however recommend getting familiar with Python comprehensions. You have lots of loops that can be made considerably more pythonic with comprehensions. I will show a few examples from your code, but first...
PEP 8:
Python has a strong idea of how code should be styled, and it is expressed in PEP 8.
I suggest you get a style/lint checker. I use the PyCharm IDE, which will show you style and compile issues right in the editor.
The primary violation was due to line-length issues. Breaking things to 80 columns can seem a bit uncomfortable at first, but it makes the code easier to read.
Comprehensions
So, this will go over a few styles of loops in your code and show a comprehension equivalent. I won't spend any time discussing the details, but hopefully the syntax is fairly straightforward, and the compactness and clarity will motivate you to go study them.
This:
temp = list(product(*temp))
li = []
for x in temp:
li.append(list(x))
Can be:
li = [list(x) for x in product(*temp)]
This:
remove = []
for option in self.options:
if option not in possible:
remove.append(option)
for option in remove:
self.options.remove(option)
Can be:
self.options = [o for o in self.options if o in possible]
This:
total = 0
for cell in self.cells:
total += cell.answer
Can be:
total = sum(cell.answer for cell in self.cells)
This:
x = []
y = []
for cell in self.cells:
x.append(cell.x)
y.append(cell.y)
Can be:
x, y = zip(*((cell.x, cell.y) for cell in self.cells))
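If you want to sanity-check comprehension rewrites like these before trusting them, a few assertions on toy data will do (my snippet, with made-up sample values and a stand-in cell class, not run against the actual classes above):

```python
from itertools import product

# Loop version vs. comprehension version of the product example.
temp = [[1, 2], [3, 4]]
li_loop = []
for t in product(*temp):
    li_loop.append(list(t))
assert li_loop == [list(t) for t in product(*temp)]

# Filtering comprehension replacing the two-pass remove loop.
options, possible = [1, 2, 3], [2, 3, 4]
assert [o for o in options if o in possible] == [2, 3]

# The zip(*...) unpacking trick on stand-in cell objects.
class FakeCell:
    def __init__(self, x, y):
        self.x, self.y = x, y

cells = [FakeCell(0, 1), FakeCell(2, 3)]
x, y = zip(*((cell.x, cell.y) for cell in cells))
assert x == (0, 2) and y == (1, 3)
```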
Note:
I didn't actually test any of these, so they may contain silly typos. Have any questions? Hit me up in comments. | {
"domain": "codereview.stackexchange",
"id": 25807,
"tags": "python, beginner, python-3.x, recursion"
} |
Can a tidally locked planet have a horizontal rotational axis? | Question: Is it hypothetically possible for a planet to be both tidally locked, and still have a rotation "horizontally"? Where the substellar point would in effect be like the point of a top, spinning and yet also always pointing inwards toward the star?
Answer: Not really. The angular momentum of such a system is not conserved, as the rotation axis changes over time.
Simple-mindedly: if a planet had, at some point in its orbit, its rotation axis pointing at its star, then that axis would remain pointing in the same direction (so not at the star, in general) throughout the orbit, and for almost all of the orbit there would therefore be tidal effects, which would slow the rotation.
In real life, I imagine there would be seriously hairy interactions between the orbital motion and the planet's spin, which would result in the axis wobbling all over the place and probably ending up in some more conventional direction in due course (the rotation axes of planets do change over time due to interactions like this). | {
"domain": "physics.stackexchange",
"id": 28831,
"tags": "gravity, astrophysics, orbital-motion, planets, rotation"
} |
If the Sun were to suddenly become a black hole of the same mass, what would the orbital periods of the planets be? | Question: I am interested in theoretical and practical considerations.
Answer: It would be exactly the same (at least in the Newtonian picture): the gravitational field outside the Sun's radius would not change. The easiest way to see this, I think, is to use the gravitational analogue of Gauss's law.
Since we have spherical symmetry in both cases, $\oint \vec{g}\cdot \mathrm{d}\vec{A} = g\,4\pi r^2 \propto M$.
So $g$ is unchanged at any given radius.
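A quick numerical illustration (my addition, Newtonian, with standard constants): Kepler's third law $T = 2\pi\sqrt{a^3/GM}$ involves only the central mass and the orbit size, so swapping the Sun for an equal-mass black hole leaves every orbital period unchanged.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M = 1.989e30         # kg: the Sun -- or an equal-mass black hole
a_earth = 1.496e11   # m: Earth's semi-major axis (1 AU)

# Kepler's third law: the period depends only on M and a.
T = 2 * math.pi * math.sqrt(a_earth**3 / (G * M))
print(T / 86400)  # ~365 days either way
```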
See http://en.wikipedia.org/wiki/Gauss's_law | {
"domain": "physics.stackexchange",
"id": 758,
"tags": "general-relativity, black-holes, solar-system"
} |
Dual gyro system: will it resist a turning force? | Question: Let's say there are 2 gyroscopes. Both are connected to the same frame (orange). Both are spinning at the same speed, but in opposite directions (red).
If I spin the frame (green arrow), the gyros will precess. If the orange frame was weak, it would fold in half.
Assuming the orange frame is strong enough, the gyros' precession would cancel out.
Would the frame-spinning force (green) encounter resistance? Or would it spin just as freely as if the gyros were stopped?
Answer: You can see the result of this experiment on YouTube:
If the two gyroscopes spin in the same direction, they behave like a single gyroscope with twice the angular momentum. (not part of your question)
If the two gyroscopes spin in opposite directions, their angular momenta cancel. They behave like a non-spinning gyroscope. However, be aware that the mount experiences "a lot" of stress -- in contrast to the non-rotating gyroscope case. | {
"domain": "physics.stackexchange",
"id": 66498,
"tags": "angular-momentum, gyroscopes, precession"
} |
number of subsegments that contain A[i] as the minimum | Question: I've recently been thinking about this problem.
Given an array $A$ containing $n$ integers and an index $i$, find the number of subsegments of $A$ containing $A[i]$ as their minimum.
To better illustrate the problem, consider the array $A = [5, 1, 7]$ and we want to find the number of subsegments containing $A[1] = 1$ as their minimum. Clearly the answer is 4 and we can easily enumerate them: $[1], [5, 1],[5, 1, 7], [1, 7]$.
What is the fastest way to compute the number of such subsegments?
Answer: You haven't stated what a subsegment is, but I assume that you mean a "contiguous subsequence", that is, $A[s],A[s+1],\ldots,A[t]$. Also, I'm assuming that $A[i]$ is still a minimum if there are other equal values (you can easily adapt the algorithm for the other option).
Let $\ell \leq i$ be the smallest index such that $A[s] \geq A[i]$ for all $\ell \leq s \leq i$, and let $r \geq i$ be the largest index such that $A[s] \geq A[i]$ for all $i \leq s \leq r$. Then $A[i]$ is a minimum of $A[s],\ldots,A[t]$ iff $\ell \leq s \leq i$ and $i \leq t \leq r$. Thus there are $i-\ell+1$ options for $s$, namely $\ell,\ell+1,\ldots,i$, and similarly $r-i+1$ options for $t$, for a total of $(i-\ell+1)(r-i+1)$.
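The counting argument above translates directly into code (my sketch, not the answerer's; 0-indexed, treating ties as still a minimum):

```python
def count_subsegments_with_min(A, i):
    # Expand left while values stay >= A[i].
    l = i
    while l > 0 and A[l - 1] >= A[i]:
        l -= 1
    # Expand right while values stay >= A[i].
    r = i
    while r < len(A) - 1 and A[r + 1] >= A[i]:
        r += 1
    # (choices for s) * (choices for t)
    return (i - l + 1) * (r - i + 1)

print(count_subsegments_with_min([5, 1, 7], 1))  # -> 4, as in the example
```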
You can easily find $\ell,r$ in linear time by scanning left of and right of $i$ (respectively), so this algorithm runs in linear time. | {
"domain": "cs.stackexchange",
"id": 9489,
"tags": "optimization, time-complexity, asymptotics, combinatorics, arrays"
} |
Uploading huge dataset | Question: I have few questions:
Is there a website to upload huge research dataset (over 100GB) for free?
Which type of compression (rar, zip ... etc) is good for jpeg images?
In case of dataset of 120GB. what is the best split for this big files (eg: 20 GB each)?
Answer: Don't compress files that are already compressed (as JPEG/PNG images and video files are). Their inherent compression is usually good enough, and you would only trade some 5% in compressed size for a much lengthier and often non-seekable decompression that results in using twice the disk space.
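In practice that means bundling without recompressing and then cutting the archive into fixed-size pieces. A sketch using Python's standard library (my addition; the shell equivalents would be `tar -cf` and `split -b`, and the file names here are made up):

```python
import tarfile

def make_archive(archive_path, file_paths):
    # Mode "w" writes a plain tar: no compression on top of the JPEGs' own.
    with tarfile.open(archive_path, "w") as tar:
        for path in file_paths:
            tar.add(path)

def split_file(path, chunk_size):
    # Cut the archive into fixed-size pieces, e.g. chunk_size = 20 * 1024**3.
    parts = []
    with open(path, "rb") as src:
        index = 0
        while True:
            data = src.read(chunk_size)
            if not data:
                break
            part_path = f"{path}.part{index:03d}"
            with open(part_path, "wb") as dst:
                dst.write(data)
            parts.append(part_path)
            index += 1
    return parts
```

Concatenating the pieces back together (`cat *.part* > data.tar`) restores the original archive byte for byte.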
If you need to batch the files together, just use tar. | {
"domain": "datascience.stackexchange",
"id": 1481,
"tags": "dataset, bigdata, data, image-classification"
} |
Cook reduction for search problems, by universal property? | Question: A search problem is a relation $R\subseteq \Sigma^*\times\Sigma^*$. A function $f\colon \Sigma^*\to\Sigma^*$ solves $R$ if $(x,f(x))\in R$ for all $x\in\Sigma^*$. Define a search problem to be reasonable if for all $(x,y)\in R$ the word $x$ is at least as long as $y$.
Let $R$ and $S$ be reasonable search problems. Consider the following two properties.
There is a Cook-reduction from $R$ to $S$ (That is, there is a polynomial-time oracle Turing machine $M$ such that for all $f$ solving $S$, the function $M^f$ solves $R$. This is Definition 3.1 of Goldreich's "P, NP, and NP-Completeness: The Basics of Computational Complexity".)
For all functions $f$ such that $S\in\mathsf{FP}^f$, we have $R\in\mathsf{FP}^f$. (Here $\mathsf{FP}^f$ is the set of search problems solved by $M^f$ for some polynomial-time oracle Turing machine $M$.)
Clearly (1.) implies (2.). Also, if $S$ solves itself (i.e. $S$ is a function) then (2.) implies (1.) because we can take the oracle in (2.) to be $S$ itself to obtain an appropriate $M$. Does (2.) imply (1.) in general? I'd guess the answer is no, but I can't think of a counterexample.
Answer: (2.) implies (1.) by a standard compactness argument.
First note that it suffices to exhibit a Cook-reduction $M$ from $R$ to $S'$ for any relation $S'$ such that $S(x,y)\iff S'(x,y)$ for all but finitely many values of $x$. This is because we can then give a Cook-reduction from $R$ to $S$ by simulating $S'$ using $S$.
Suppose for contradiction that there is no such Cook-reduction. Let $M_1,M_2,\dots$ be an enumeration of polynomial-time oracle Turing machines. Pick a sequence $S_1\supseteq S_2\supseteq \dots$ as follows. Let $S_1=S$. Suppose we have constructed $S_k$ such that $S_k(x,y)\iff S(x,y)$ for all but finitely many $x$. We know that $M_k$ is not a Cook-reduction from $R$ to $S_k$: there is some $f$ solving $S_k$ and some $x$ such that $R(x,M_k^f(x))$ does not hold. Set $S_{k+1}$ to be the relation that agrees with $f$ on all the oracle queries used by evaluating $M_k^f$ on $x$, and otherwise agrees with $S_k$.
Pick a function $f$ solving $S_k$ for all $k$. Clearly $S\in\mathsf{FP}^f$, but by construction $R\notin\mathsf{FP}^f$. | {
"domain": "cstheory.stackexchange",
"id": 1803,
"tags": "cc.complexity-theory, reductions, function"
} |
Method to return date ranges of 1 year | Question: I made a method that takes 2 DateTimes as a parameter, a startDate and a stopDate.
The routine should return a list of DateTime ranges by checking if the range in the parameters is above one year TimeSpan. If it is, it should separate the range into smaller ranges of one year, starting form startDate and adding a year until reaching the limit, and then adjusting the last range accordingly (as it might only contain a few months).
Can this method be made better or even recursive?
// startDate = 2016-01-01
// endDate = 2020-06-01
List<Tuple<DateTime,DateTime>> list = GetYearsBetweenDates(startDate, endDate);
//list = 2016-01-01 2016-12-31
// 2017-01-01 2017-12-31
// 2018-01-01 2018-12-31
// 2019-01-01 2019-12-31
// 2020-01-01 2020-06-01
private List<Tuple<DateTime, DateTime>> GetYearsBetweenDates(DateTime startDate, DateTime stopDate)
{
var list = new List<Tuple<DateTime, DateTime>>();
var oneYear = TimeSpan.FromDays(365);
var tempStopDate = startDate + oneYear;
var tempStartDate = startDate;
bool first = true;
if (stopDate - startDate < oneYear)
{
list.Add(new Tuple<DateTime, DateTime>(startDate, stopDate));
return list;
}
while (tempStopDate != stopDate)
{
list.Add(new Tuple<DateTime, DateTime>(tempStartDate, tempStopDate));
if (first)
{
tempStartDate += oneYear.Add(TimeSpan.FromDays(1)); // We should not have the same date twice
first = false;
}
else
tempStartDate += oneYear;
tempStopDate += oneYear;
if (tempStopDate > stopDate)
{
tempStopDate = stopDate;
list.Add(new Tuple<DateTime, DateTime>(tempStartDate, stopDate));
}
}
return list;
}
Answer: If you perform all calcuclations dynamically your method becomes a simple few lines long loop:
private IEnumerable<DateTimeRange> GetYearsBetweenDates(DateTime startDate, DateTime stopDate)
{
for (int i = startDate.Year; i <= stopDate.Year; i++)
{
yield return new DateTimeRange
(
start: new DateTime(i, startDate.Month, startDate.Day),
end:
i == stopDate.Year
? new DateTime(i, stopDate.Month, stopDate.Day)
: new DateTime(i, startDate.Month, startDate.Day).AddYears(1)
);
}
}
So what I have I changed?
The method returns an IEnumerable<T> and uses the yield return which makes it deferred (this means it is executed only if enumerated).
The T is a new struct that is easier to understand than a Tuple:
struct DateTimeRange
{
public DateTimeRange(DateTime start, DateTime end)
{
Start = start;
End = end;
}
public DateTime Start { get; }
public DateTime End { get; }
}
To get all dates you call it with ToList or ToArray
var dates = GetYearsBetweenDates(startDate, endDate).ToList();
or you put it in a foreach loop
foreach(var range in GetYearsBetweenDates(startDate, endDate))
{
// do something
}
You can also exaggerate and make it a pure LINQ method without any manual loop, but personally I think the for loop looks better:
private IEnumerable<DateTimeRange> GetYearsBetweenDates3(DateTime startDate, DateTime stopDate)
{
return Enumerable.Range(startDate.Year, stopDate.Year - startDate.Year + 1)
.Select(x => new DateTimeRange
(
start: new DateTime(x, startDate.Month, startDate.Day),
end:
x == stopDate.Year
? new DateTime(x, stopDate.Month, stopDate.Day)
: new DateTime(x, startDate.Month, startDate.Day).AddYears(1)
));
}
What else?
var list = new List<Tuple<DateTime, DateTime>>();
You should always choose meaningful names for all variables. You did it well for the others. This could have been results or dates.
var oneYear = TimeSpan.FromDays(365);
It's safer to use the AddYears(1) method rather than hardcoding the number, especially if you don't want to care about leap years. | {
"domain": "codereview.stackexchange",
"id": 22824,
"tags": "c#, datetime"
} |
Is the Joule Thomson coefficient constant | Question: The Joule Thomson coefficient for various gases can be found in textbooks; e.g. I have found that hydrogen has $\mu_{jt}=-0.024735$ K/bar and an inversion temperature of around 200 K.
Not having the background to understand its derivation: is $\mu_{jt}$ constant for a given gas, or is it a function of some parameter (apart from the fact that it changes sign around the inversion temperature)?
Answer: It is not a constant for a given gas. A value given in a table must be for some specific pressure and temperature. To see this, look at a diagram showing isenthalps (curves of constant enthalpy). They are not straight lines! However they are pretty straight in the low pressure region when the temperature is comfortably below the inversion temperature. I expect the value given in a table is probably the value at low pressure at some chosen temperature below the inversion temperature. | {
"domain": "physics.stackexchange",
"id": 71836,
"tags": "thermodynamics"
} |
Strong Acid/Strong Base Titration | Question: Bromothymol Blue is an indicator that turns yellow in acid, blue in base.
If I were to titrate NaOH with HCl, what color should I look for at the equivalence point? Should I titrate until the solution turns from blue to yellow or will there be an intermediate colour of green?
I know the salt produced (NaCl) has a pH of 7.
Answer: http://antoine.frostburg.edu/chem/senese/101/acidbase/indicators.shtml
This is a link to a webpage that talks all about pH indicators and the different pH ranges if you're looking for more info.
When it comes to your question, the color should go from yellow to green, because the beaker is where you add the indicator fluid and the HCl acid, and you titrate that with the NaOH base. If you titrate further than your green color, you have now made your solution more basic.
"domain": "chemistry.stackexchange",
"id": 5660,
"tags": "titration"
} |
X and Y profile of a round object using a 45-degree mirror (why is the Y profile smaller) | Question: I'm trying to measure the X and Y profiles of a round object using a single camera. The camera has a direct view of the object in the X profile but sees the Y profile of the object as a reflection in a mirror mounted at 45 degrees. The problem is that the Y profile seen by the camera is smaller than the X profile of the object, although the object is round. See the attached diagram, photo of the setup and a photo from the camera.
I would think that the reflection of the Y profile will be the same distance from the camera as the X profile and therefore have the same size. But it does not.
Why is the Y profile smaller than the X profile?
Or why is the Y profile (reflection) further away from the camera?
I did get the mirror from my wife's makeup so it could be that the reflective coating is at a radius. But the glass itself seems flat.
Answer: The way you have drawn your diagram suggests the mirror reflecting the y-profile is at the same distance from the camera as the object is. But the 'effective' distance to the y-profile image also needs to include the distance from the object to the mirror. See my adaptation of your original sketch, | {
"domain": "physics.stackexchange",
"id": 73657,
"tags": "optics, reflection"
} |
Ergodicity clarification | Question: On page 201 of https://stanford.edu/~dntse/Chapters_PDF/Fundamentals_Wireless_Communication_chapter5.pdf,
it is mentioned that
This observation suggests that the capacity result (5.89) holds for a much
broader class of fading processes. Only the convergence in (5.91) is needed.
This says that the time average should converge to the same limit for almost all
realizations of the fading process, a concept called ergodicity, and it holds in
many models.
I just wanted to confirm if my interpretation of this is correct: in equation (5.91), the $h[m]$ are random variables taken from a discrete random process $h$. This leads to another discrete random process $\log(1+ |h|^2 SNR)$ and assuming it is ergodic in the mean, equation 5.91 holds.
Answer: I'll change the notation a little. Let's say you have a discrete-time process, $x[n]$, random or deterministic, that exists for all discrete times $n$, and let's say it goes into some "decent" (I think continuous) function $f(x)$.
The time-average of the result of that function is:
$$ \overline{f(x)} \triangleq \lim_{N \to \infty} \frac{1}{2N+1} \sum\limits_{n=-N}^{+N} f\big( x[n] \big) $$
Now, let's say that $x[n]$ is a stationary random process, so all statistics of $x[n]$ are constant with respect to (discrete) time $n$. Then the probabilistic-average of the result of that function is the expectation value:
$$ \mathbb{E}\Big\{ f\big( x[n] \big)\Big\} \triangleq \int\limits_{-\infty}^{+\infty} \mathrm{p}_x(\alpha) f\big( \alpha \big) \ \mathrm{d}\alpha $$
where $\mathrm{p}_x(\alpha)$ is the probability density function (p.d.f.) of the random variable $x[n]$ and is independent of $n$:
$$ \int\limits_{\alpha}^{\alpha + \Delta \alpha} \mathrm{p}_x(u) \ \mathrm{d}u = \mathbb{P}\Big\{\alpha \le x[n] < \alpha + \Delta \alpha \Big\} $$
or, for tiny $\Delta \alpha$,
$$ \mathrm{p}_x(\alpha) = \lim_{\Delta \alpha \to 0} \frac{1}{\Delta \alpha} \mathbb{P}\Big\{\alpha \le x[n] < \alpha + \Delta \alpha \Big\} $$
and $\mathbb{P}\big\{\cdot\big\}$ means the probability of the event defined therein.
Now, my understanding of the root meaning to the term ergodic as applied to a random process $x[n]$, is that every time average is the same as the probabilistic average. That is, for any function $f(\cdot)$,
$$ \overline{f(x)} = \mathbb{E}\Big\{ f\big( x[n] \big)\Big\} $$
That's what "ergodic" means. These are two different ways of getting to the average of something, and "ergodic" means that those two different ways of getting to the average, get to the same average. | {
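As a numerical illustration (my own, not from the textbook): for an i.i.d. fading process, which is stationary and ergodic, the time average of $\log_2(1+|h|^2\,\mathrm{SNR})$ over one long realization agrees with an independent Monte-Carlo estimate of the expectation.

```python
import math
import random

random.seed(0)
SNR = 10.0
f = lambda h2: math.log2(1.0 + h2 * SNR)   # h2 plays the role of |h|^2

# For unit-power Rayleigh fading, |h|^2 is exponentially distributed.
N = 200_000

# Time average over a single long realization of the process.
time_avg = sum(f(random.expovariate(1.0)) for _ in range(N)) / N

# Monte-Carlo stand-in for the probabilistic average E{f(|h|^2)},
# computed from a fresh, independent set of draws.
ensemble_avg = sum(f(random.expovariate(1.0)) for _ in range(N)) / N

print(time_avg, ensemble_avg)   # the two closely agree
```

Both estimates converge to the same number as $N$ grows, which is exactly the convergence equation (5.91) asks for.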
"domain": "dsp.stackexchange",
"id": 12452,
"tags": "discrete-signals, digital-communications, stochastic, ergodic"
} |
Existence of Tripoles? | Question: With multipole expansions, we speak only of monopoles, dipoles, and $2^n$-poles. Why is there nothing like a tripole? So how would something like $r\sin(3\theta)$ be expressed with a multipole expansion? Is the reason for not having terms like this in a multipole expansion that it is redundant, or is there some more fundamental reason?
Answer: The function $\sin 3\theta$ on the unit sphere is not an eigenfunction of the Laplacian on the sphere, i.e. the angular part of the Laplacian, i.e. of $L^2$, so it is not a convenient basis vector in problems whose Hamiltonian involves the Laplacian.
The function $\sin 3\theta$ may be written as a combination of spherical harmonics $Y_{lm}$ with many different values of $(l,m)$, so it is a "mixture" of multipoles of different "rank". For the more natural basis of functions on the sphere that one may use as basis vectors, see
The table of spherical harmonics
https://en.wikipedia.org/wiki/Table_of_spherical_harmonics
For example, the spherical harmonics $Y_{3,\pm 3}$ are proportional to
$$ Y_{3,\pm 3} \sim \exp(\pm 3i\phi) \sin^3 \theta $$
which is very similar to $\sin 3\theta$ but has the extra $\phi$-dependence. Similarly, one may look at the function $Y_{30}$ which is similar to $\sin 3\theta$ but prefers cosines and so on. Either $\sin^3 \theta$ or $\cos^3\theta$ (check it!) without any $\phi$-dependence is a combination of $Y_{30}$ and $Y_{10}$.
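That "check it!" is easy to do numerically. Writing $x=\cos\theta$ and using that the $m=0$ harmonics are Legendre polynomials up to normalization, the claim amounts to $x^3 = \tfrac{3}{5}P_1(x) + \tfrac{2}{5}P_3(x)$, which this sketch confirms:

```python
# Numerical check that cos^3(theta), which has no phi-dependence, is a
# mix of just the l = 1 and l = 3 Legendre polynomials (i.e. of Y_10
# and Y_30 up to normalization) -- not a single "tripole".
import numpy as np
from numpy.polynomial.legendre import Legendre

x = np.linspace(-1.0, 1.0, 1001)     # x = cos(theta)
P1 = Legendre.basis(1)(x)
P3 = Legendre.basis(3)(x)

# Since P3 = (5x^3 - 3x)/2, solving for x^3 gives the mixture below.
lhs = x**3
rhs = 0.6 * P1 + 0.4 * P3
print(np.max(np.abs(lhs - rhs)))     # numerically zero
```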
Once one realizes why the spherical harmonics are the preferred, more natural basis, we may carefully discuss the spherical harmonics' association with the multipole expansion. For example, we learn that $Y_{3,m}$ for any $m$, including the functions similar to yours above, are associated with octupoles, not "tripoles"!
More generally, $Y_{\ell m}$ is the angular part of the $2^\ell$-pole.
The powers of two are a natural way to describe the terms in the multipole expansions for reasons explained elsewhere, e.g. here:
https://physics.stackexchange.com/a/127496/1236
In the multipole terminology, a "tripole" would correspond to a triplet (e.g. vertices of a triangle) of charges. If their total charge would be nonzero, there would be a leading "monopole" term. If the total charge cancelled, the system of 3 charges would still have a dipole moment. Unless the three (nonzero) charges would lie on the same line, the dipole moment couldn't be canceled. | {
"domain": "physics.stackexchange",
"id": 15873,
"tags": "multipole-expansion"
} |
First layer weights for transfer learning with new input tensor in keras.applications models? | Question: In the pre-implemented models in keras (VGG16 etc.) it is specified that we can change the shape of the inputs of the models and still load the pre-trained ImageNet weights.
What I am confused about is what happens to the first layer weights, then. If the input tensor has a different shape, won't the number of weights be different from the pre-trained models'?
Here is the implementation of the Keras VGG16 model for reference.
Answer: The first layers are convolution and pooling ones:
For the convolutional layers, the only weights are the kernels and the biases, and they have fixed size (e.g. 3x3x3, 5x5x3) and do not depend on the input tensor shape.
The pooling layers do not have weights at all.
That's why you can reuse the weights independently from the input tensor shape.
With dense layers (i.e. the final layers), you need shapes to match, so you cannot reuse them if they do not. | {
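A back-of-the-envelope parameter count makes the point concrete. The 1792 figure below matches VGG16's first convolutional layer; the dense-layer shapes are assumed input-dependent values for illustration:

```python
# A conv layer's weight count depends only on kernel size and channel
# counts, while a dense layer's depends on the (input-shape-dependent)
# flattened feature size.

def conv_params(k, c_in, c_out):
    return k * k * c_in * c_out + c_out   # kernels + biases

def dense_params(n_in, n_out):
    return n_in * n_out + n_out           # weights + biases

# First VGG16 conv layer: 3x3 kernels, 3 input channels, 64 filters.
# The count is identical whether the input is 224x224 or 512x512:
print(conv_params(3, 3, 64))              # 1792

# A dense layer fed from a flattened 7x7x512 vs 16x16x512 feature map
# (roughly what 224 vs 512 inputs give after five poolings) differs:
print(dense_params(7 * 7 * 512, 4096))
print(dense_params(16 * 16 * 512, 4096))
```

That mismatch in the dense layers is exactly why only the convolutional weights can be reused across input shapes.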
"domain": "datascience.stackexchange",
"id": 2297,
"tags": "machine-learning, keras, cnn"
} |
How to find Young's elastic modulus of a sphere using a camera? | Question: So I have a deformable ball and a high-speed camera. How do I measure Young's elastic modulus?
I was thinking of looking at the coefficient of restitution of the ball, but it doesn't seem to be directly related to the elastic modulus, to my surprise (I might be wrong, though). Alternatively, I could use Hertzian contact laws, but this gives me the corrected elastic modulus, which also contains Poisson's ratio, which I do not have. I have only one ball size.
Do you have any suggestions?
Answer: I think that you could use the Hertzian contact problem for this.
Let's say that you have a plane surface with a Young modulus and a mass that are much higher than the ones from your sphere. You can use the equation
$$h_0 = \left(\frac{m}{k}\right)^{2/5} v^{4/5}\, ,$$
with $m$ the mass of your sphere, and $v$ its speed. Here, $h_0$ represents the deformation when the sphere starts to bounce back. If your sphere is much more compliant than your surface you can assume that all the deformation is happening in that object. The speed is the value you achieve just before impact, so you can vary it with different initial heights. Actually, the equation above is obtained from a balance of energy, so you could rederive it for different heights instead of speeds (see reference).
The other parameters are:
$$k = \frac{4}{5D}\sqrt{R}\, ,$$
with $R$ the radius of the sphere and
$$D = \frac{3}{4}\frac{(1 - \nu^2)}{E}\, .$$
You could vary $v$ and then perform a nonlinear regression to find $\nu$ and $E$.
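As a sketch of how the fit would go, here is the forward model in Python. The numbers are purely illustrative (a hypothetical soft rubber ball; the values of $E$, $\nu$, mass, radius and speed are assumptions, not measurements), and in practice you would measure $h_0$ with the camera and fit $(E, \nu)$ the other way around:

```python
# Forward model from the answer's equations: predict the maximum
# indentation h0 for a given impact speed and assumed material.
import math

def max_indentation(m, v, R, E, nu):
    D = 0.75 * (1.0 - nu**2) / E        # D = (3/4)(1 - nu^2)/E
    k = 0.8 / D * math.sqrt(R)          # k = (4 / 5D) sqrt(R)
    return (m / k) ** 0.4 * v ** 0.8    # h0 = (m/k)^(2/5) v^(4/5)

# Illustrative numbers: 50 g ball, R = 3 cm, E = 5 MPa, nu = 0.48,
# hitting the surface at 3 m/s.
h0 = max_indentation(m=0.05, v=3.0, R=0.03, E=5e6, nu=0.48)
print(h0)  # on the order of millimetres

# Sanity checks: a stiffer ball indents less, a faster one indents more.
print(max_indentation(0.05, 3.0, 0.03, 5e7, 0.48) < h0)
print(max_indentation(0.05, 4.0, 0.03, 5e6, 0.48) > h0)
```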
Reference
Landau, L. D., & Lifshitz, E. M. (1986). Theory of elasticity (Vol. 7, No. 3). New York: Pergamon Press, Oxford. | {
"domain": "physics.stackexchange",
"id": 78845,
"tags": "collision, elasticity"
} |
How do I calculate the ensemble-average net charge of an amino acid at given pH? | Question: I am given an amino acid with an ionizable side chain at a certain pH. How do I determine the net charge of that amino acid when there are mixed protonation states of one or more of the groups at that pH (pKa of side chain, for example, is really close to the pH)?
Amino acids have terminal carboxyl and amino groups; some amino acids have ionizable side chains. When determining the charge of an amino acid, you have to take into consideration the pH and the pKa's of each of these groups. When the pKa of one group (or more) is close enough to the pH, a fraction of the amino acids will be deprotonated at that group and the other fraction of amino acids will be protonated at that group in solution. Thus, when determining the average net charge across the ensemble (or a time-averaged charge of a single particle), you have to take this into account.
I am asking for the expected value of the net charge (which would not be an integer); this number is relevant, for example, for the migration speed of the amino acid (or a protein) in gel electrophoresis or the strength of interaction with ion exchange chromatography media.
For example, a carboxylic acid/carboxylate group at a pH equal to its pKa would have an average charge of minus one half because half of the functional groups would be protonated (charge of zero) and half would be deprotonated (charge of minus one).
Answer: The Henderson-Hasselbalch relationship describing each ionizable group is:
$$\mathrm{pH} = \mathrm{p}K_\mathrm{a} + \log \frac{\ce{[A-]}}{\ce{[AH]}}$$
We can solve for the ratio:
$$10^{(\mathrm{pH} - \mathrm{p}K_\mathrm{a})}= \frac{\ce{[A-]}}{\ce{[AH]}}$$
However, we really want the fraction of protonated among the total (not the ratio of deprotonated to protonated).
$$10^{(\mathrm{pH} - \mathrm{p}K_\mathrm{a})} = \frac{[\mathrm{total}] - \ce{[AH]}}{\ce{[AH]}} = \frac{[\mathrm{total}]}{\ce{[AH]}} - 1$$
Add one to both sides:
$$10^{(\mathrm{pH} - \mathrm{p}K_\mathrm{a})} + 1 = \frac{[\mathrm{total}]}{\ce{[AH]}}$$
Take the reciprocal:
$$\frac{\ce{[AH]}}{[\mathrm{total}]} = \frac{1}{10^{(\mathrm{pH} - \mathrm{p}K_\mathrm{a})} + 1}\tag{1}$$
This is still general for any acid/base group. For example, we could use it to calculate the charge of ammonia/ammonium ($\ce{NH3(aq) + H+(aq) <=> NH4+(aq)}$). At very basic pH, the charge would be zero, at very acidic pH, +1. To get the average charge at any pH, we take the charge at very basic pH and add the result of equation [1] using the $\mathrm{p}K_\mathrm{a}$ value of ammonium.
For any amino acid (or any other molecule with ionizable groups with $i$ different $\mathrm{p}K_\mathrm{a}$ values), you take the charge of the species at very basic pH (all groups deprotonated), plus the following:
$$\sum_i \frac{1}{10^{(\mathrm{pH} - \mathrm{p}K_\mathrm{a,i})} + 1}\tag{2}$$
This is only an approximation because there might be some cross talk between ionizable groups (i.e. if one group becomes negatively charged, it becomes more "difficult" for the neighboring group to become negatively charged). It also becomes more complicated for polyprotic groups, but all the groups in amino acids are monoprotic with water as a solvent. | {
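As a worked example of equation (2), take glycine (no ionizable side chain) with textbook-style $\mathrm{p}K_\mathrm{a}$ values of roughly 2.3 for the carboxyl group and 9.6 for the amino group (illustrative numbers). At very basic pH the molecule carries a charge of $-1$:

```python
# Equation (2): net charge = charge when fully deprotonated, plus the
# protonated fraction of each ionizable group from equation (1).

def net_charge(pH, pKas, charge_fully_deprotonated):
    protonated = sum(1.0 / (10.0 ** (pH - pKa) + 1.0) for pKa in pKas)
    return charge_fully_deprotonated + protonated

glycine = dict(pKas=[2.3, 9.6], charge_fully_deprotonated=-1.0)

print(net_charge(1.0, **glycine))   # strongly acidic: close to +1
print(net_charge(7.0, **glycine))   # near-neutral pH: close to 0
print(net_charge(13.0, **glycine))  # strongly basic: close to -1
```

The near-zero value around pH 7 is why glycine barely migrates in electrophoresis near its isoelectric point.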
"domain": "chemistry.stackexchange",
"id": 12533,
"tags": "acid-base, biochemistry, ph, amino-acids"
} |
Decomposition of spherical harmonics via Clebsch-Gordan coefficients | Question: The tensor product of two states with spin can be decomposed into irreducible representations via Clebsch-Gordan coefficients
$$|j_1, m_1, j_2, m_2 \rangle = \sum C^{JM}_{j_1, m_1, j_2, m_2} |JM\rangle\,.$$
Since spherical harmonics $Y_{\ell m}$ are representations of $SO(3)$, I would have expected a similar decomposition, i.e.
$$Y_{\ell_1 m_1} (\Omega) Y_{\ell_2 m_2}(\Omega) = \sum C^{L M}_{\ell_1 m_1 \ell_2 m_2} Y_{L M}(\Omega)\,.$$
However, the Wikipedia page on Clebsch-Gordan coefficients instead gives the expansion
$$Y_{\ell_1 m_1} (\Omega) Y_{\ell_2 m_2}(\Omega) = \sum_{L,M} \sqrt{\frac{(2\ell_1 + 1)(2\ell_2 + 1)}{4\pi (2 L+1)}} C^{L M}_{\ell_1 m_1 \ell_2 m_2}C^{L 0}_{\ell_1 0 \ell_2 0} Y_{L M}(\Omega)\,.$$
How can I understand where these additional terms come from? I've found some derivations of the expression in Sakurai's Modern Quantum Mechanics, and I can follow the derivation, but I don't understand where the discrepancy arises on the level of representation theory.
Answer: The “missing” Clebsch is hidden by the nature of the spherical harmonics as coset functions, i.e. functions over $SU(2)/U(1)$.
The best way to understand the occurrence of this CG is by expressing the spherical harmonics in terms of full $SU(2)$ group functions:
\begin{align}
Y_{LM}(\beta,\alpha)=\sqrt{\frac{2L+1}{4\pi}}D^{L*}_{M0}(\alpha,\beta,\gamma).
\tag{1}
\end{align}
The $\gamma$ dependence (i.e. the $U(1)$ factors) drops out because the second projection $M_2=0$.
As a special case of combining full group functions, we thus have
\begin{align}
D^{L*}_{M_10}(\Omega)D^{\ell*}_{m_10}(\Omega)=
\left[\langle L M_1\vert\langle \ell m_1\vert\right]
R(\Omega)\left[ \vert L 0\rangle \vert \ell 0\rangle\right]^*
\end{align}
and so one CG is needed to combine the kets:
\begin{align}
\vert L 0\rangle \vert \ell 0\rangle = \sum_{J}
C_{L0;\ell 0}^{J0}\vert J 0\rangle \tag{2}
\end{align}
and one is needed to combine the bras.
Note the proportionality factor in (1) is what produces the various $\sqrt{2L+1}$ factors in your expression.
FYI there’s quite a sneaky way of evaluating the CG of (2) in Claude Cohen-Tannoudji’s QM book (with Diu and Laloe) | {
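One can also verify the full expansion numerically in the simplest nontrivial case, $Y_{10}Y_{10}$, where the only surviving terms are $L=0$ and $L=2$ with the standard tabulated values $C^{00}_{10,10} = -1/\sqrt{3}$ and $C^{20}_{10,10} = \sqrt{2/3}$:

```python
# Check Y_10 * Y_10 against the Wikipedia expansion, writing the m = 0
# harmonics out explicitly in terms of cos(theta).
import numpy as np

theta = np.linspace(0.01, np.pi - 0.01, 500)
c = np.cos(theta)

Y00 = np.full_like(theta, 1.0 / np.sqrt(4 * np.pi))
Y10 = np.sqrt(3.0 / (4 * np.pi)) * c
Y20 = np.sqrt(5.0 / (16 * np.pi)) * (3 * c**2 - 1)

cg00 = -1.0 / np.sqrt(3.0)   # C^{00}_{10,10}
cg20 = np.sqrt(2.0 / 3.0)    # C^{20}_{10,10}

# Each term carries sqrt((2l1+1)(2l2+1) / (4 pi (2L+1))) and, for
# m1 = m2 = 0, the *square* of the Clebsch-Gordan coefficient.
lhs = Y10 * Y10
rhs = (np.sqrt(3 * 3 / (4 * np.pi * 1)) * cg00**2 * Y00
       + np.sqrt(3 * 3 / (4 * np.pi * 5)) * cg20**2 * Y20)
print(np.max(np.abs(lhs - rhs)))   # numerically zero
```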
"domain": "physics.stackexchange",
"id": 62417,
"tags": "quantum-mechanics, quantum-spin, representation-theory"
} |
Tension in a massless string being pulled at its ends with unequal forces | Question: There is a question in my textbook. If a massless inextensible string is pulled on with a force of $10 N$, at both ends, what is the tension in the string?
It’s a very common question. The answer is $10 N$, cf. e.g. this & this Phys.SE posts. It can be proved using Newton’s 2nd and 3rd laws. If we think of the string as a series of links in a chain, for example, or if we think about the adjacent molecules in the string, then we can prove using Newton’s 2nd and 3rd laws that the tension in the string is $10N$ at each point along its length.
But what if we pulled on the ends of the string with forces of unequal magnitudes? This question occurred to me and I kind of got confused. My intuition says that the string has a net force acting on it, and hence it would accelerate. But because the string is massless, Newton’s 2nd law did not help me understand this situation. My question is,
If we pull on the ends of a string that is massless and inextensible, with forces of $60N$ and $70N$ respectively, what would be the tension in the string?
Will it be $60N$? Will it be $70N$?
I gave it some thought, and I think this situation is similar to an Atwood machine: two masses, $6kg$ and $7kg$ respectively, hanging from a pulley. The pulley is massless and frictionless, and the string is massless and inextensible. Because of gravity, one end of the string is being pulled on with $60N$, and the other end is being pulled on with $70N$, so isn't this situation similar? If I work out the tension in the string using $T = \frac{2m_1m_2g}{m_1 + m_2}$, it gives $T = 64.6N$.
So can I say that If we pull on the ends of a string that is massless and inextensible, with forces of $60N$ and $70N$ respectively, tension in the string would be neither $60N$, nor $70N$, but somewhere in between ($64.6N$)?
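The Atwood arithmetic above can be checked in a few lines (taking $g = 10\,m/s^2$, chosen so the weights come out to exactly $60N$ and $70N$):

```python
# Atwood machine: tension and acceleration for m1 = 6 kg, m2 = 7 kg.
m1, m2, g = 6.0, 7.0, 10.0

T = 2 * m1 * m2 * g / (m1 + m2)   # tension in the string
a = (m2 - m1) * g / (m1 + m2)     # common acceleration

print(T)  # ~64.6 N, between the two weights
print(a)  # ~0.77 m/s^2

# Consistency with Newton's 2nd law on each mass:
#   T - m1*g = m1*a   and   m2*g - T = m2*a
print(abs(T - m1 * g - m1 * a) < 1e-9)
print(abs(m2 * g - T - m2 * a) < 1e-9)
```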
Answer: The arrangement you describe is impossible. The tension of the string will be 70N. Whatever was trying to restrain the end of the string with a force of 60N will be subject to a force of 70N by the string. As a result it will accelerate subject to a net force of 10N. The reaction on the string will be 70N. | {
"domain": "physics.stackexchange",
"id": 63134,
"tags": "newtonian-mechanics, forces, acceleration, free-body-diagram, string"
} |
Decay width average in the isospin invariant limit | Question: Suppose we have the following experimental values for $\eta' \rightarrow \eta \pi \pi$ decay width:
$\Gamma_{\eta' \rightarrow \eta \pi^+ \pi^-} = 0.086 \pm 0.004$
$\Gamma_{\eta' \rightarrow \eta \pi^0 \pi^0} = 0.0430 \pm 0.0022$
We are investigating this decay in the isospin invariant limit and want to compare our results with experimental rates. So we should average over these 2 values to find the experimental decay width in this limit.
It is written in one paper that we should average
$\Gamma_{\eta' \rightarrow \eta \pi^+ \pi^-} = 0.086 \pm 0.004$
$2\Gamma_{\eta' \rightarrow \eta \pi^0 \pi^0} = 0.0860 \pm 0.0044$
in a specific way (which is not my question).
I don't know why we should double the second value and then average.
Answer: The average is taken this way because their theoretical prediction does not distinguish between $\Gamma_{\eta' \rightarrow \eta \pi^+ \pi^-}$ and $2\Gamma_{\eta' \rightarrow \eta \pi^0 \pi^0}.$ See for example equation 2.1. The experimental value, however, is different for both decay channels. Therefore, in order to make a meaningful comparison, they choose to compare to the average of both values. Since the theoretical equivalence include the factor of $2$, the average should also include it. | {
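For concreteness, one standard way to do such an average is sketched below. The paper's exact prescription is not quoted here, so the inverse-variance weighting is an assumption; the doubling step, however, is exactly the point of the answer:

```python
# Double the pi0 pi0 value (its error bar scales by the same factor),
# then take the usual inverse-variance weighted average of the two
# now-equivalent quantities.
gamma_pm = (0.086, 0.004)     # Gamma(eta' -> eta pi+ pi-)
gamma_00 = (0.0430, 0.0022)   # Gamma(eta' -> eta pi0 pi0)

doubled = (2 * gamma_00[0], 2 * gamma_00[1])   # (0.0860, 0.0044)

w1 = 1.0 / gamma_pm[1] ** 2
w2 = 1.0 / doubled[1] ** 2
mean = (w1 * gamma_pm[0] + w2 * doubled[0]) / (w1 + w2)
sigma = (w1 + w2) ** -0.5

print(mean, sigma)   # 0.0860, with a combined error below either input
```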
"domain": "physics.stackexchange",
"id": 13463,
"tags": "quantum-field-theory, particle-physics"
} |
Simulating the martingale betting system with roulette | Question: I wrote a program that simulates many instances of trying the martingale betting system with roulette in Haskell.
At the answerer's suggestion I opted to leave the IO monad in main and ensure that the functions martingale and martingale', which handled the actual testing, were entirely pure, but the answerer also suggested that I could push my call to newStdGen all the way down into martingale'. I believe this would remove the requirement that my functions even require a StdGen as an argument.
My question then: Is it more natural for functions that compute random variable x to require StdGens as an argument, or for them to create StdGens and perhaps return IO x? Also, is my source code posted below reasonable and could it be improved or made more natural?
-- file: Martingale.hs
-- a program to simulate the martingale doubling system
import System.Random (randomR, newStdGen, StdGen)
import System.Environment (getArgs)
import Control.Monad (replicateM)
red = [1,3,5,7,9,12,14,16,18,19,21,23,25,27,30,32,34,36]
martingale :: StdGen -> Bool
martingale = martingale' 1 0
martingale' :: Real a => a -> a -> StdGen -> Bool
martingale' bet acc gen
| acc >= 5 = True
| acc <= -100 = False
| otherwise =
let (randNumber, newGen) = randomR (0,37) gen :: (Int, StdGen)
in if randNumber `elem` red
then martingale' 1 (acc + bet) newGen
else martingale' (bet * 2) (acc - bet) newGen
main :: IO ()
main = do
args <- getArgs
let iters = read $ head args
gens <- replicateM iters newStdGen
let results = map martingale gens
countWins = length $ filter (== True) results
prob = fromIntegral countWins / fromIntegral iters
print prob
Answer:
Is it more natural for functions that compute random variable x to require StdGens as an argument, or for them to create StdGens and perhaps return IO x?
I think in most cases you're going to be better off passing around a pure source of randomness or a seed value so that you can guarantee deterministic output if need be by using the same seed in multiple runs. That is, you made the right choice.
Also, is my source code posted below reasonable and could it be improved or made more natural?
I don't think there's anything wrong per se, but it's very narrowly focused without much opportunity for reusability. You'll not often go wrong if you attempt to mirror the reality of what you're modeling in your functions and type signatures. For instance, consider that a roulette wheel is an unlimited source of randomly chosen values. One way to express this in your Haskell program would be...
wheel :: StdGen -> [Int]
wheel = randomRs (0, 37)
It's useful to encapsulate this because there's nothing about martingale betting systems that involves a random element, the system itself is entirely deterministic.
martingale :: Int -- ^ Current bet
-> Bool -- ^ Result of current spin
-> (Int, Int) -- ^ (Next bet, winnings/losings)
martingale bet True = (1 , bet)
martingale bet False = (bet * 2, negate bet)
Here I've separated out just the betting system aspect of your program, because tracking winnings or choosing ranges are separate functions from determining bets. This way also you can substitute different betting strategies fairly easy as long as it has the same type signature.
You'd tie this all together with a simulation function something like this.
simulate :: (Int -> Bool) -- ^ A picking function
-> (Int -> Bool -> (Int, Int)) -- ^ A betting function
-> Int -- ^ An initial bet
-> (Int -> Maybe Bool) -- ^ Decides when to leave the table
-> [Int] -- ^ A wheel
-> Bool -- ^ Took winnings or unacceptable loss?
This is probably way overkill to start with, but with it you could test biased wheels, or different levels of acceptable losses, or different strategies against particular wheels. At this point also you'd want to start using type aliases, newtypes, or your own data types to disambiguate what's going on. E.g., you'd end up with...
simulate :: (Slot -> Bool)
-> (Stake -> Bool -> (Stake, Winnings))
-> Stake
-> (Winnings -> PlayState)
-> [Slot]
-> PlayState
And used like,
simulate betOnRed martingale 1 quitWhileAhead unbiasedWheel
simulate betOnLow antimartingale 50 bailBelowFifty houseBiasedWheel | {
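As a sanity check on the rules themselves (not part of the review), the same game is easy to cross-check in another language. This Python sketch implements the identical policy: bet on red, double after a loss, walk away at +5 or at -100, on a 38-pocket wheel:

```python
import random

RED_COUNT = 18   # 18 red pockets out of 38 (0 and 00 included)

def martingale(rng, target=5, floor=-100):
    bet, acc = 1, 0
    while floor < acc < target:
        if rng.randrange(38) < RED_COUNT:   # red comes up: win the bet
            acc += bet
            bet = 1
        else:                               # not red: double the bet
            acc -= bet
            bet *= 2
    return acc >= target

rng = random.Random(42)
trials = 20_000
wins = sum(martingale(rng) for _ in range(trials))
print(wins / trials)   # ~0.94: frequent small wins, rare large losses
```

The high win frequency is the classic martingale trap: the rare losing runs are large enough to make the expected value negative.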
"domain": "codereview.stackexchange",
"id": 9442,
"tags": "haskell, random, simulation"
} |
Using mega (=10^6) when writing by hand? | Question: I just solved a problem which had answer 1.7*10^6 m (m=meters)
If I wanted to write this using M=10^6 it would be 1.7 Mm, which if I write it by hand would look like "1.7 mm" which is confusing. Is there a convention for this?
Answer: Two conventions.
First - use a capital M - make sure you make it big and pointy, so it cannot be confused with lower case:
When it is right next to the lower case 'm', the difference should stand out clearly.
Second - some people use the "computer shorthand" E6:
1.7E6 m
This is generally understood to mean (but quicker to write than) $1.7\cdot 10^{6}\mathrm{\;m}$, but more often used with a keyboard than when written by hand - and it can lead to confusion (see for example Rob's comment - who has seen that some students appear to think that 1.23e6 = $1.23 \cdot e^{6} \approx 496$). Note that using a capital $E$ as opposed to a lower-case $e$ ought to reduce the confusion... but when things are open to misinterpretation, somebody will misinterpret. And the consequences can be significant.
If all else fails, take the extra second and write the exponential in full. Time taken to communicate your intention clearly is invariably time well spent. This is true in exams as in life. | {
"domain": "physics.stackexchange",
"id": 21850,
"tags": "soft-question, notation"
} |
Web ToDo application in JavaScript, HTML, CSS | Question: I would like to share with you the code from the application I just finished. It is a web application written in HTML, CSS, JavaScript and jQuery. It uses LocalStorage to store your tasks. The application also displays the date of adding and completing the task. You can add, edit, delete and mark tasks as done.
I would like to ask for a code review. I do not know good practices in JavaScript yet, so I need help from specialists. The application works fully and can be tested here.
The whole code (if it is illegible here) can be found on my GitHub here.
$(function() {
if (typeof(Storage) !== "undefined") {
var AddTaskButton = document.querySelector("#add-task-text");
var input = document.querySelector("#add-task-input");
var tasks = (JSON.parse(localStorage.getItem('ToDoApp')) != null) ? JSON.parse(localStorage.getItem('ToDoApp')) : [];
AddTaskButton.addEventListener('click', () => {
addTask();
});
input.addEventListener('keydown', (key) => {
if(key.keyCode === 13) {
addTask();
}
});
var addTask = () => {
var taskContent = document.querySelector('#add-task-input').value;
taskContent = taskContent.replace(/^\s+|\s+$/g, '');
taskContent = taskContent.trim();
if(taskContent != 0 ){
var task = prepareTaskForLocalstorage(null, getDateAndTime(), taskContent, false);
tasks.push(task);
saveTaskToLocalStorage(tasks);
listTasksFromLocalStorage(tasks, true);
document.querySelector('#add-task-input').value = '';
}
};
var getDateAndTime = () => {
var d = new Date();
var year = d.getFullYear();
var month = d.getMonth() + 1; // getMonth() is zero-based, so add 1
if (month < 10) {
month = '0' + month;
}
var day = d.getDate();
if(day < 10) {
day = '0' + day;
}
var hour = d.getHours();
var minutes = d.getMinutes();
if (minutes < 10) {
minutes = '0' + minutes;
}
var fullDateAndTime = year + '-' + month + '-' + day + ' ' + hour + ':' + minutes;
return fullDateAndTime;
};
var prepareTaskForLocalstorage = (completion_date, create_date, content, ifChecked) => {
return {
"completion_date": completion_date,
"create_date": create_date,
"content": content,
"checked": ifChecked
};
};
var saveTaskToLocalStorage = (task) => {
localStorage.setItem("ToDoApp", JSON.stringify(task));
};
var listTasksFromLocalStorage = (ta, animate) => {
document.getElementById('list-of-todos').innerHTML = "";
document.getElementById('list-of-done-tasks').innerHTML = "";
for (var i = 0; i < ta.length; i++) {
if(!ta[i].checked) {
var ul = document.querySelector('#list-of-todos');
var li = document.createElement('li');
if(animate == true && i == ta.length-1) {
li.classList.add('anim');
}
var div_structure = `
<div class="check-box">
<input type="checkbox">
<label for="checkBox"></label>
</div>
<div class="task-text">` + ta[i].content + `</div>
<div class="edit-delete-date-hour">
<span class="edit">Edit</span>
<span class="delete">Delete</span>
<span class="date-hour" title="The date and time the task was added">` + ta[i].create_date + `</span>
</div>
`;
li.innerHTML = div_structure;
ul.prepend(li);
} else {
var ul = document.querySelector('#list-of-done-tasks');
var li = document.createElement('li');
var div_structure = `
<div class="task-text">` + ta[i].content + `</div>
<div class="edit-delete-date-hour">
<span class="moveToToDo">To Do</span>
<span class="delete">Delete</span>
<span class="date-hour" title="The date and time the task was added">` + ta[i].create_date + `</span>
<span class="date-hour date-hour-completion" title="The date and time when the task was completed">` + ta[i].completion_date + `</span>
</div>
`;
li.innerHTML = div_structure;
ul.prepend(li);
}
}
};
listTasksFromLocalStorage(tasks, false);
$(document).on('click', '.check-box label', function() {
makeTaskDone(this);
});
var makeTaskDone = (t) => {
var doneTaskContent = t.parentNode.parentNode.getElementsByClassName('task-text')[0].innerHTML;
var obj = tasks.find(o => o.content === doneTaskContent);
obj.checked = true;
obj.completion_date = getDateAndTime();
saveTaskToLocalStorage(tasks);
listTasksFromLocalStorage(tasks, false);
};
$(document).on('click', '.moveToToDo', function() {
moveTaskToToDoList(this);
});
var moveTaskToToDoList = (t) => {
var clickedElement = t.parentNode.parentNode;
var taskContent = clickedElement.getElementsByClassName('task-text')[0].innerHTML;
var obj = tasks.find(o => o.content === taskContent);
obj.checked = false;
saveTaskToLocalStorage(tasks);
listTasksFromLocalStorage(tasks, false);
};
$(document).on('click', '.delete', function() {
deleteTask(this);
});
var deleteTask = (t) => {
var clickedElement = t.parentNode.parentNode;
clickedElement.classList.add('anim-hide');
setTimeout(function() {
var taskContent = clickedElement.getElementsByClassName('task-text')[0].innerHTML;
var obj = tasks.find(o => o.content === taskContent);
tasks.splice(tasks.indexOf(obj), 1);
saveTaskToLocalStorage(tasks);
listTasksFromLocalStorage(tasks, false);
}, 400);
};
$(document).on('click', '.edit', function() {
editTask(this);
});
var editTask = (t) => {
var clickedElement = t.parentNode.parentNode;
var taskElement = clickedElement.getElementsByClassName('task-text')[0];
var taskContent = clickedElement.getElementsByClassName('task-text')[0].innerHTML;
taskElement.setAttribute("contenteditable", "true");
taskElement.focus();
var obj = tasks.find(o => o.content === taskContent);
taskElement.addEventListener('focusout', () => {
taskElement.setAttribute("contenteditable", "false");
var taskNewContent = clickedElement.getElementsByClassName('task-text')[0].innerHTML;
obj.content = taskNewContent;
saveTaskToLocalStorage(tasks);
listTasksFromLocalStorage(tasks, false);
});
taskElement.addEventListener('keydown', (key) => {
if(key.keyCode === 13) {
taskElement.setAttribute("contenteditable", "false");
var taskNewContent = clickedElement.getElementsByClassName('task-text')[0].innerHTML;
obj.content = taskNewContent;
saveTaskToLocalStorage(tasks);
listTasksFromLocalStorage(tasks, false);
}
});
};
} else {
console.log('Unfortunately, LocalStorage does not work on your computer');
}
});
html,
body {
margin: 0;
padding: 0;
width: 100%;
min-height: 100%;
height: auto;
}
html {
position: relative;
}
body {
font-family: 'Roboto', sans-serif;
background: #005AA7;
background: -webkit-linear-gradient(to bottom, #FFFDE4, #005AA7);
background: linear-gradient(to bottom, #FFFDE4, #005AA7);
}
#container {
width: 100%;
max-width: 2000px;
margin-left: auto;
margin-right: auto;
height: 100%;
padding-bottom: 50px;
}
h1 {
margin: 30px 0;
font-size: 75px;
color: #fff;
text-shadow: 5px 0px 1px rgba(198, 167, 39, 1);
}
header, footer {
text-align: center;
}
#lists {
display: flex;
flex-wrap: wrap;
justify-content: space-evenly;
}
#todo > header,
#done > header {
margin-top: 20px;
font-size: 30px;
font-weight: 400;
color: #fff;
}
.tasks-list {
border:20px ridge #e6c335;
min-height: 600px;
width: 500px;
background: #20002c;
background: -webkit-linear-gradient(to top, #cbb4d4, #20002c);
background: linear-gradient(to top, #cbb4d4, #20002c);
}
.tasks {
width: 100%;
text-align: left;
}
ul {
list-style-type: none;
padding: 0;
}
li {
font-size: 25px;
width: 96%;
margin-left: auto;
margin-right: auto;
border-radius: 4px;
margin-bottom: 10px;
height: 100%;
box-sizing: border-box;
padding: 6px 10px;
background: #3f4c6b;
color: #fff;
}
li:last-child {
border-bottom: 0px;
}
.check-box {
width: 20px;
position: relative;
float: left;
}
.check-box label {
cursor: pointer;
position: absolute;
width: 20px;
height: 20px;
top: 50%;
left: 50%;
transform: translate(-50%, -50%);
background: #e7c41c;
}
.check-box label:after {
opacity: 0.3;
content: '';
position: absolute;
width: 9px;
height: 5px;
background: transparent;
top: 5px;
left: 5px;
border: 3px solid #514405;
border-top: none;
border-right: none;
transform: rotate(-45deg);
}
.check-box label:hover::after {
opacity: 0.5;
}
.check-box input[type=checkbox]:checked + label:after {
opacity: 1;
}
.task-text {
float: left;
margin-left: 10px;
max-width: 90%;
padding-top: 4px;
min-height: 20px;
line-height: 20px;
font-weight: 400;
word-wrap: break-word;
}
#list-of-done-tasks {
margin-top: 50px;
}
#list-of-done-tasks > li > .task-text {
margin-left: 0px;
}
.edit-delete-date-hour {
clear: both;
font-size: 13px;
padding-top: 7px;
-webkit-touch-callout: none;
-webkit-user-select: none;
-khtml-user-select: none;
-moz-user-select: none;
-ms-user-select: none;
user-select: none;
}
.moveToToDo:hover,
.edit:hover,
.delete:hover {
cursor: pointer;
}
.moveToToDo::after,
.edit::after {
content: '';
margin-right: 3px;
}
.delete::after {
content: "\00a0\00a0\007C";
color: #74f90e;
}
.date-hour {
font-size: 11px;
}
#list-of-done-tasks > li > .edit-delete-date-hour > .date-hour-completion::before {
content: "\00a0\2192\00a0\00a0";
color: #74f90e;
}
.date-hour::before {
content: "\00a0";
}
#add-task {
width: 96%;
margin-left: auto;
margin-right: auto;
border-radius: 4px;
height: 35px;
box-sizing: border-box;
margin-top: 50px;
margin-bottom: 40px;
display: flex;
}
input {
border: none;
}
input[type="text"],
textarea {
width: 90%;
background: #e7c41c;
color: #171c2b;
box-sizing: border-box;
padding: 1px 10px;
font-size: 20px;
border: 2px dashed #e7c41c;
border-right: 3px solid #bc9f14;
}
input[type="text"]:focus,
textarea:focus {
color: #0b0e16;
border: 2px dashed #a38a11;
outline: none!important;
}
#add-task-input,
#add-task-text {
float: left;
}
#add-task-text{
text-align: center;
font-size: 22px;
font-weight: 400;
width: 14.5%;
box-sizing: border-box;
height: 35px;
line-height: 33px;
-webkit-touch-callout: none;
-webkit-user-select: none;
-khtml-user-select: none;
-moz-user-select: none;
-ms-user-select: none;
user-select: none;
background: #e7c41c;
color: #0b0e16;
}
#add-task-text:hover {
cursor: pointer;
background: #d8b21a;
}
footer {
position: absolute;
bottom: 10px;
left: 0;
right: 0;
font-size: 15px;
color: #b1ea20;
}
.anim {
animation-name: animation-show;
animation-duration: 1s;
}
@keyframes animation-show {
from {opacity: 0;}
to {opacity: 1;}
}
.anim-hide {
animation-name: animation-hide;
animation-duration: 0.5s;
}
@keyframes animation-hide {
from {opacity: 1;}
to {opacity: 0;}
}
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta http-equiv="X-UA-Compatible" content="ie=edge">
<meta name="Description" content="ToDo application for planning your precious time">
<meta name="author" content="Fabian Zwoliński">
<title>ToDo App</title>
<link rel="stylesheet" type="text/css" href="styles/normalize.css">
<link href="https://fonts.googleapis.com/css?family=Roboto:300,400,500,700" rel="stylesheet">
<link rel="stylesheet" type="text/css" href="styles/style.css">
<link type="text/css" media='(max-width: 1120px)' rel='stylesheet' href='styles/responsive.css' />
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
</head>
<body>
<div id="container">
<header>
<h1>Simple ToDo App</h1>
</header>
<div id="lists">
<div id="todo" class="tasks-list">
<header>TASKS TO DO</header>
<div class="tasks">
<div id="add-task">
<input type="text" id="add-task-input" maxlength="60">
<div id="add-task-text">ADD</div>
<div style="clear: both;"></div>
</div>
<ul id="list-of-todos">
</ul>
</div>
</div>
<div id="done" class="tasks-list">
<header>DONE TASKS</header>
<div class="tasks">
<ul id="list-of-done-tasks">
</ul>
</div>
</div>
</div>
</div>
<footer>
© Copyright 2018, Fabian Zwoliński
</footer>
<script src="scripts/script.js"></script>
</body>
</html>
Answer: ECMAScript 6 Features
You are using ES6 Template Literals, but you aren't fully utilizing the syntactic sugar they provide.
Template literals don't just give you multi line strings, they give you string interpolation, which does away with the old string-concatenation style of building a string dynamically.
Note that:
var div_structure = `
<div class="task-text">` + ta[i].content + `</div>
<div class="edit-delete-date-hour">
<span class="moveToToDo">To Do</span>
<span class="delete">Delete</span>
<span class="date-hour" title="The date and time the task was added">` + ta[i].create_date + `</span>
<span class="date-hour date-hour-completion" title="The date and time when the task was completed">` + ta[i].completion_date + `</span>
</div>
`;
Can become:
var div_structure = `
<div class="task-text">
${ta[i].content}
</div>
<div class="edit-delete-date-hour">
<span class="moveToToDo">To Do</span>
<span class="delete">Delete</span>
<span class="date-hour" title="The date and time the task was added">
${ta[i].create_date}
</span>
<span class="date-hour date-hour-completion" title="The date and time when the task was completed">
${ta[i].completion_date}
</span>
</div>
`;
And a few more line breaks and indentation around the tokens can assist with readability:
Instead of this:
<div class="task-text">${ta[i].content}</div>
Use this:
<div class="task-text">
${ta[i].content}
</div>
Using jQuery
Without seeing your whole project, it appears the only coupling this code has to jQuery is by utilizing its Event Delegation feature. Since you are only filtering elements by class name, it's easy to roll your own event delegation framework.
However, if you are using jQuery heavily elsewhere, I see no need to decouple that from your code.
But a simple event delegation framework could be:
function delegateEvent(eventType, isMyElement, handleEvent) {
document.documentElement.addEventListener(eventType, (event) => {
var currentTarget = event.target;
while (currentTarget && !isMyElement(currentTarget)) {
currentTarget = currentTarget.parentNode;
}
if (currentTarget) {
handleEvent(event, currentTarget);
}
});
}
And to use it:
delegateEvent("click", (el) => el.classList.contains("edit"), (event, target) => {
event.preventDefault();
editTask(target);
});
No need for jQuery. And no need to wait for the document to be "ready" or "loaded" because the document.documentElement property references the <html> tag the very moment that JavaScript begins executing. Bubbling events do bubble up to the <html> element. | {
"domain": "codereview.stackexchange",
"id": 30836,
"tags": "javascript, jquery, css, html5, to-do-list"
} |
Is semiconductor theory really based on quantum mechanics? | Question: Often semiconductors are cited as the big application of quantum mechanics (QM), but when looking back at my device physics book basically no quantum mechanics is used. The concept of a quantum well is presented and some derivations are done, but then the next chapter mostly ignores this and goes back to statistical physics along with referencing experimentally verified constants to explain things like carrier diffusion, etc.
Do we really need quantum mechanics to get to semiconductor physics? Outside of providing some qualitative motivation to inspire I don't really see a clear connection between the fields. Can you actually derive transistor behaviour from QM directly?
Answer:
Do we really need quantum mechanics to get to semiconductor physics?
It depends what level of understanding you're interested in. For example, are you simply willing to take as gospel that somehow electrons in solids have different masses than electrons in a vacuum? And that they can have different effective masses along different direction of travel? That they follow a Fermi-Dirac distribution? That band gaps exist? Etc.
If you're willing to accept all these things (and more) as true and not worry about why they're true, then quantum mechanics isn't really needed. You can get very far in life modeling devices with semi-classical techniques.
However, if you want to understand why all that weird stuff happens in solids, then yes, you need to know quantum mechanics.
Can you actually derive transistor behavior from QM directly?
It depends on the type of transistor. If you're talking about a TFET (or other tunneling devices, like RTDs and Zener diodes), then I challenge you to derive its behavior without quantum mechanics! However, if you're talking about most common transistors (BJTs, JFETs, MOSFETs, etc.), then deriving their behavior from quantum mechanics is a lot of work because the systems are messy and electrons don't "act" very quantum because of their short coherence time in a messy environment. However, the semi-classical physics used for most semiconductor devices does absolutely have a quantum underpinning. But there's a good reason it's typically not taught from first principles.
Anecdote: One time, I was sitting next to my advisor at a conference, and there was a presentation that basically boiled down to modeling a MOSFET using non-equilibrium greens functions (which is a fairly advanced method from quantum mechanics). During the presentation, my advisor whispered to me something along the lines of: "Why the heck are they using NEGF to model a fricking MOSFET?!?" In other words, just because you can use quantum mechanics to model transistors, doesn't mean you should. There are much simpler methods that are just as accurate (if not more accurate). | {
"domain": "physics.stackexchange",
"id": 80577,
"tags": "quantum-mechanics, semiconductor-physics"
} |
Is "applying a voltage" the same as "applying a potential" to an electrode? | Question: From what I understand, voltage is the potential difference, but it seems like the terms are used interchangeably. This is confusing me because I am only just learning what these terms mean. I'm also not sure how either is "applied" - I think applying a voltage to an electrode would mean providing a difference in charge somewhere to allow a current to flow from one to the other, and applying a potential would mean providing it potential energy, which would be done the same way as when applying a voltage?
Answer: You are right. Voltage is an electric potential difference. The concept of potentials is more general (e.g. gravitational potential) in physics. | {
"domain": "physics.stackexchange",
"id": 26362,
"tags": "electricity, terminology, potential, voltage"
} |
Find all k-sum paths in a binary tree | Question: Description:
Given a binary tree and a sum, find all root-to-leaf paths where each path's sum equals the given sum.
Note: A leaf is a node with no children.
Leetcode
Code:
class Solution {
private List<List<Integer>> paths = new ArrayList<>();
public List<List<Integer>> pathSum(TreeNode root, int sum) {
traverse(root, sum, new ArrayList<Integer>());
return paths;
}
private void traverse(TreeNode root, int sum, ArrayList<Integer> path) {
if (root != null) {
path.add(root.val);
if (root.left == null && root.right == null && sum == root.val) {
paths.add((ArrayList) path.clone());
}
traverse(root.left, sum - root.val, path);
traverse(root.right, sum - root.val, path);
path.remove(path.size() - 1);
}
}
}
Answer: It's a fine solution. I have a few minor comments.
It's easy to overlook that pathSum doesn't clear the content of paths, which will affect the returned value from subsequent calls, which is likely to be unexpected by callers. In this online puzzle it doesn't seem to matter, but I think it's better to avoid any possible confusion.
The clone with the cast is ugly. The .clone() method except on arrays is a bit controversial, with questionable benefits if any. I suggest to avoid it. You can replace it with a nice clean paths.add(new ArrayList<>(path))
The signature of traverse would be better to use List instead of ArrayList.
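Putting these suggestions together, a revised version might look like the following sketch (one possible shape — here the result list is made local and passed down, so repeated calls start clean):

```java
import java.util.ArrayList;
import java.util.List;

class TreeNode {
    int val;
    TreeNode left, right;

    TreeNode(int val) { this.val = val; }
}

class Solution {
    public List<List<Integer>> pathSum(TreeNode root, int sum) {
        // Local result list: a second call no longer sees the first call's paths.
        List<List<Integer>> paths = new ArrayList<>();
        traverse(root, sum, new ArrayList<>(), paths);
        return paths;
    }

    private void traverse(TreeNode root, int sum, List<Integer> path, List<List<Integer>> paths) {
        if (root == null) {
            return; // early return keeps the rest of the method flat
        }
        path.add(root.val);
        if (root.left == null && root.right == null && sum == root.val) {
            paths.add(new ArrayList<>(path)); // defensive copy instead of clone() + cast
        }
        traverse(root.left, sum - root.val, path, paths);
        traverse(root.right, sum - root.val, path, paths);
        path.remove(path.size() - 1); // backtrack
    }
}
```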
I would use an early return in traverse, to have less indented code. | {
"domain": "codereview.stackexchange",
"id": 31106,
"tags": "java, algorithm, programming-challenge, tree, interview-questions"
} |
Get character occurrence count along with character in a string | Question: I want to find character sequence count in a given string.
Sample Input: aaaabbbbaaacccbbb
Output: a4b4a3c3b3
My below function is working great and giving me the same result. but can this be optimized?
function getCharCount(str) {
var result = str.charAt(0);
var count = 1;
if (str.length == 1) {
result += count;
return result;
} else {
for(var i=1;i<str.length;i++) {
if(str.charAt(i) != str.charAt(i-1)) {
result += count + str.charAt(i);
count = 1;
} else {
count++;
}
if (i == str.length - 1) {
result += count;
}
}
return result;
}
}
Answer: I don't see any problem in your code except the use of the loose equality and inequality operators. Use the strict equality (===) and inequality (!==) operators.
I'd suggest using a RegEx.
.replace(/(.)\1*/g, function(m, $1) {
return $1 + m.length;
})
The RegEx (.)\1* will match a single non-line-break character and check if that is followed by the same character any number of times. m here is the complete match and $1 is the first captured group value, i.e., the character.
var res = 'aaaabbbbaaacccbbb'
.replace(/(.)\1*/g, function(m, $1) {
return $1 + m.length;
});
console.log(res); | {
"domain": "codereview.stackexchange",
"id": 23114,
"tags": "javascript, performance, strings"
} |
How to send data from a 3D camera connected to one ROS computer to a second ROS computer for processing? | Question:
My robot runs ROS on a Raspberry Pi, so its computational power is low. I need to send the data from the 3D camera connected to the robot's Raspberry Pi to my laptop over WiFi, so I can process the data there and build a 3D map. I'm using a Kinect 360 as the 3D camera and the Kinetic distribution of ROS.
Originally posted by Manish12344321 on ROS Answers with karma: 1 on 2020-03-14
Post score: 0
Answer:
You should be able to do this by the ROS networking instructions.
You just need them on the same network and to set up the ROS master URI so that they can create the required connections.
Originally posted by stevemacenski with karma: 8272 on 2020-03-14
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 34588,
"tags": "ros-kinetic"
} |
What could cause training CNN accuracy to drop after 7th epoch? | Question: I am training a CNN on some new dataset.
Usually, the accuracy steadily improves over 10-20 epochs.
I have created a new but similar dataset (using the same methods), but now I see a sharp drop after the 7th epoch, from which it never recovers.
What might I be doing wrong?
3520/3662 [===========================>..] - ETA: 7s - loss: 0.1260 - acc: 0.9753
3552/3662 [============================>.] - ETA: 5s - loss: 0.1294 - acc: 0.9752
3584/3662 [============================>.] - ETA: 4s - loss: 0.1283 - acc: 0.9754
3616/3662 [============================>.] - ETA: 2s - loss: 0.1360 - acc: 0.9751
3648/3662 [============================>.] - ETA: 0s - loss: 0.1348 - acc: 0.9753
3662/3662 [==============================] - 199s 54ms/step - loss: 0.1387 - acc: 0.9752
Epoch 7/50
32/3662 [..............................] - ETA: 3:29 - loss: 1.1921e-07 - acc: 1.0000
64/3662 [..............................] - ETA: 3:44 - loss: 1.1921e-07 - acc: 1.0000
96/3662 [..............................] - ETA: 3:43 - loss: 0.5037 - acc: 0.9688
128/3662 [>.............................] - ETA: 3:36 - loss: 0.6296 - acc: 0.9609
160/3662 [>.............................] - ETA: 3:29 - loss: 0.8059 - acc: 0.9500
192/3662 [>.............................] - ETA: 3:23 - loss: 0.7555 - acc: 0.9531
224/3662 [>.............................] - ETA: 3:20 - loss: 0.7915 - acc: 0.9509
256/3662 [=>............................] - ETA: 3:17 - loss: 0.6926 - acc: 0.9570
288/3662 [=>............................] - ETA: 3:14 - loss: 0.7276 - acc: 0.9549
320/3662 [=>............................] - ETA: 3:11 - loss: 0.9570 - acc: 0.9406
352/3662 [=>............................] - ETA: 3:08 - loss: 1.0990 - acc: 0.9318
384/3662 [==>...........................] - ETA: 3:05 - loss: 1.1531 - acc: 0.9271
416/3662 [==>...........................] - ETA: 3:02 - loss: 1.1032 - acc: 0.9303
448/3662 [==>...........................] - ETA: 3:00 - loss: 1.2832 - acc: 0.9174
480/3662 [==>...........................] - ETA: 2:58 - loss: 1.3991 - acc: 0.9104
512/3662 [===>..........................] - ETA: 2:55 - loss: 1.4691 - acc: 0.9062
544/3662 [===>..........................] - ETA: 2:53 - loss: 1.4420 - acc: 0.9081
576/3662 [===>..........................] - ETA: 2:50 - loss: 1.5018 - acc: 0.9045
608/3662 [===>..........................] - ETA: 2:48 - loss: 1.4492 - acc: 0.9079
640/3662 [====>.........................] - ETA: 2:46 - loss: 1.3768 - acc: 0.9125
672/3662 [====>.........................] - ETA: 2:44 - loss: 1.3112 - acc: 0.9167
704/3662 [====>.........................] - ETA: 2:42 - loss: 1.3431 - acc: 0.9148
736/3662 [=====>........................] - ETA: 2:40 - loss: 1.3066 - acc: 0.9171
768/3662 [=====>........................] - ETA: 2:38 - loss: 1.2941 - acc: 0.9180
800/3662 [=====>........................] - ETA: 2:36 - loss: 1.3431 - acc: 0.9150
832/3662 [=====>........................] - ETA: 2:34 - loss: 1.4464 - acc: 0.9087
864/3662 [======>.......................] - ETA: 2:32 - loss: 1.5980 - acc: 0.8993
896/3662 [======>.......................] - ETA: 2:30 - loss: 1.7568 - acc: 0.8895
928/3662 [======>.......................] - ETA: 2:28 - loss: 2.0263 - acc: 0.8728
960/3662 [======>.......................] - ETA: 2:27 - loss: 2.1434 - acc: 0.8656
992/3662 [=======>......................] - ETA: 2:25 - loss: 2.3017 - acc: 0.8558
1024/3662 [=======>......................] - ETA: 2:23 - loss: 2.3557 - acc: 0.8525
1056/3662 [=======>......................] - ETA: 2:21 - loss: 2.5133 - acc: 0.8428
1088/3662 [=======>......................] - ETA: 2:20 - loss: 2.6468 - acc: 0.8346
1120/3662 [========>.....................] - ETA: 2:18 - loss: 2.8590 - acc: 0.8214
1152/3662 [========>.....................] - ETA: 2:16 - loss: 2.9475 - acc: 0.8160
1184/3662 [========>.....................] - ETA: 2:14 - loss: 3.1265 - acc: 0.8049
1216/3662 [========>.....................] - ETA: 2:13 - loss: 3.2430 - acc: 0.7977
1248/3662 [=========>....................] - ETA: 2:11 - loss: 3.3794 - acc: 0.7893
1280/3662 [=========>....................] - ETA: 2:09 - loss: 3.4964 - acc: 0.7820
1312/3662 [=========>....................] - ETA: 2:07 - loss: 3.5831 - acc: 0.7767
1344/3662 [==========>...................] - ETA: 2:05 - loss: 3.7257 - acc: 0.7679
1376/3662 [==========>...................] - ETA: 2:04 - loss: 3.8030 - acc: 0.7631
1408/3662 [==========>...................] - ETA: 2:02 - loss: 3.8883 - acc: 0.7578
1440/3662 [==========>...................] - ETA: 2:00 - loss: 4.0369 - acc: 0.7486
1472/3662 [===========>..................] - ETA: 1:58 - loss: 4.0587 - acc: 0.7473
1504/3662 [===========>..................] - ETA: 1:56 - loss: 4.2081 - acc: 0.7380
1536/3662 [===========>..................] - ETA: 1:55 - loss: 4.2673 - acc: 0.7344
1568/3662 [===========>..................] - ETA: 1:53 - loss: 4.4167 - acc: 0.7251
1600/3662 [============>.................] - ETA: 1:51 - loss: 4.4895 - acc: 0.7206
1632/3662 [============>.................] - ETA: 1:49 - loss: 4.5694 - acc: 0.7157
1664/3662 [============>.................] - ETA: 1:48 - loss: 4.6074 - acc: 0.7133
1696/3662 [============>.................] - ETA: 1:46 - loss: 4.6916 - acc: 0.7081
1728/3662 [=============>................] - ETA: 1:44 - loss: 4.7726 - acc: 0.7031
1760/3662 [=============>................] - ETA: 1:42 - loss: 4.8415 - acc: 0.6989
1792/3662 [=============>................] - ETA: 1:40 - loss: 4.8810 - acc: 0.6964
1824/3662 [=============>................] - ETA: 1:39 - loss: 4.9279 - acc: 0.6935
1856/3662 [==============>...............] - ETA: 1:37 - loss: 4.9558 - acc: 0.6918
1888/3662 [==============>...............] - ETA: 1:35 - loss: 4.9828 - acc: 0.6901
1920/3662 [==============>...............] - ETA: 1:33 - loss: 4.9921 - acc: 0.6896
1952/3662 [==============>...............] - ETA: 1:32 - loss: 4.9846 - acc: 0.6901
1984/3662 [===============>..............] - ETA: 1:30 - loss: 5.0423 - acc: 0.6865
2016/3662 [===============>..............] - ETA: 1:28 - loss: 5.1062 - acc: 0.6825
2048/3662 [===============>..............] - ETA: 1:26 - loss: 5.1602 - acc: 0.6792
2080/3662 [================>.............] - ETA: 1:25 - loss: 5.1970 - acc: 0.6769
2112/3662 [================>.............] - ETA: 1:23 - loss: 5.2022 - acc: 0.6766
2144/3662 [================>.............] - ETA: 1:21 - loss: 5.2449 - acc: 0.6740
2176/3662 [================>.............] - ETA: 1:20 - loss: 5.2937 - acc: 0.6710
2208/3662 [=================>............] - ETA: 1:18 - loss: 5.3337 - acc: 0.6685
2240/3662 [=================>............] - ETA: 1:16 - loss: 5.3295 - acc: 0.6687
2272/3662 [=================>............] - ETA: 1:14 - loss: 5.3538 - acc: 0.6673
2304/3662 [=================>............] - ETA: 1:13 - loss: 5.3843 - acc: 0.6654
2336/3662 [==================>...........] - ETA: 1:11 - loss: 5.4072 - acc: 0.6640
2368/3662 [==================>...........] - ETA: 1:09 - loss: 5.4022 - acc: 0.6643
2400/3662 [==================>...........] - ETA: 1:08 - loss: 5.4510 - acc: 0.6613
2432/3662 [==================>...........] - ETA: 1:06 - loss: 5.4853 - acc: 0.6591
2464/3662 [===================>..........] - ETA: 1:04 - loss: 5.5188 - acc: 0.6571
2496/3662 [===================>..........] - ETA: 1:02 - loss: 5.5707 - acc: 0.6538
2528/3662 [===================>..........] - ETA: 1:01 - loss: 5.5958 - acc: 0.6523
2560/3662 [===================>..........] - ETA: 59s - loss: 5.6455 - acc: 0.6492
2592/3662 [====================>.........] - ETA: 57s - loss: 5.6691 - acc: 0.6478
2624/3662 [====================>.........] - ETA: 55s - loss: 5.7044 - acc: 0.6456
2656/3662 [====================>.........] - ETA: 54s - loss: 5.7631 - acc: 0.6419
2688/3662 [=====================>........] - ETA: 52s - loss: 5.7964 - acc: 0.6399
2720/3662 [=====================>........] - ETA: 50s - loss: 5.8467 - acc: 0.6368
2752/3662 [=====================>........] - ETA: 48s - loss: 5.8783 - acc: 0.6348
2784/3662 [=====================>........] - ETA: 47s - loss: 5.8918 - acc: 0.6340
2816/3662 [======================>.......] - ETA: 45s - loss: 5.9279 - acc: 0.6317
2848/3662 [======================>.......] - ETA: 43s - loss: 5.9405 - acc: 0.6310
2880/3662 [======================>.......] - ETA: 42s - loss: 5.9640 - acc: 0.6295
2912/3662 [======================>.......] - ETA: 40s - loss: 6.0037 - acc: 0.6271
2944/3662 [=======================>......] - ETA: 38s - loss: 6.0260 - acc: 0.6257
2976/3662 [=======================>......] - ETA: 36s - loss: 6.0370 - acc: 0.6250
3008/3662 [=======================>......] - ETA: 35s - loss: 6.0639 - acc: 0.6233
3040/3662 [=======================>......] - ETA: 33s - loss: 6.0743 - acc: 0.6227
3072/3662 [========================>.....] - ETA: 31s - loss: 6.0897 - acc: 0.6217
3104/3662 [========================>.....] - ETA: 29s - loss: 6.1048 - acc: 0.6208
3136/3662 [========================>.....] - ETA: 28s - loss: 6.1351 - acc: 0.6189
3168/3662 [========================>.....] - ETA: 26s - loss: 6.1443 - acc: 0.6184
3200/3662 [=========================>....] - ETA: 24s - loss: 6.1685 - acc: 0.6169
3232/3662 [=========================>....] - ETA: 23s - loss: 6.1822 - acc: 0.6160
3264/3662 [=========================>....] - ETA: 21s - loss: 6.1858 - acc: 0.6158
3296/3662 [==========================>...] - ETA: 19s - loss: 6.1991 - acc: 0.6150
3328/3662 [==========================>...] - ETA: 17s - loss: 6.2073 - acc: 0.6145
3360/3662 [==========================>...] - ETA: 16s - loss: 6.2729 - acc: 0.6104
3392/3662 [==========================>...] - ETA: 14s - loss: 6.2945 - acc: 0.6091
3424/3662 [===========================>..] - ETA: 12s - loss: 6.2875 - acc: 0.6095
3456/3662 [===========================>..] - ETA: 11s - loss: 6.3225 - acc: 0.6073
3488/3662 [===========================>..] - ETA: 9s - loss: 6.3431 - acc: 0.6061
3520/3662 [===========================>..] - ETA: 7s - loss: 6.3633 - acc: 0.6048
3552/3662 [============================>.] - ETA: 5s - loss: 6.4058 - acc: 0.6022
3584/3662 [============================>.] - ETA: 4s - loss: 6.3980 - acc: 0.6027
3616/3662 [============================>.] - ETA: 2s - loss: 6.4217 - acc: 0.6012
3648/3662 [============================>.] - ETA: 0s - loss: 6.4537 - acc: 0.5992
3662/3662 [==============================] - 200s 55ms/step - loss: 6.4730 - acc: 0.5980
Epoch 8/50
Answer: I don't know anything about the data you're using so I'll offer some suggestions.
Your learning rate might be too high. Maybe you're overshooting during gradient descent, such that after you approach a good vector of coefficients, you then jump too far in one step and lose progress towards the "global optimum".
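A toy numerical sketch of that overshooting failure (plain gradient descent on f(x) = x²; nothing to do with your specific network, just the qualitative mechanism):

```python
# Minimize f(x) = x^2 by gradient descent. With a modest step the iterate
# converges toward the minimum at 0; with too large a step, each update
# overshoots the minimum by more than it gained, so |x| grows every step.

def descend(lr, steps=50, x0=1.0):
    x = x0
    for _ in range(steps):
        x -= lr * 2 * x  # gradient of x^2 is 2x
    return x

print(abs(descend(lr=0.1)))  # tiny: converging (|x| shrinks by factor 0.8 per step)
print(abs(descend(lr=1.1)))  # huge: diverging (|x| grows by factor |1 - 2*lr| = 1.2 per step)
```

The same mechanism, scaled up to a high-dimensional loss surface, can make a run that looked converged suddenly fall apart.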
Your activation function might be saturating your units. | {
"domain": "datascience.stackexchange",
"id": 5073,
"tags": "cnn, accuracy"
} |
Quantum capacity for ensemble of Pauli channels | Question: In Preskill's quantum computing notes, Chapter 7, around page 82, he shows that a Pauli channel has capacity $Q \geq 1-H(p_I,p_X,p_Y,p_Z)$ where $H$ is Shannon entropy and $p_I, p_X, p_Y, p_Z$ are the probabilities of the channel acting like the appropriate Pauli matrix. In particular this gives us the 'hashing bound' or 'random coding bound' for the quantum capacity of the depolarizing channel $Q(p) \geq 1-H(p,1-p)-p\log_23$.
He then describes work of Shor and Smolin [1]: if you take an $m$-repetition code and concatenate it with a suitable random code you can do better than the hashing bound. The argument for this is that, after taking $m-1$ measurements, the inner repetition code, thought of as a superchannel, is a Pauli channel with entropy $H_i$. Then, averaging over the $2^{m-1}$ possible classical measurement outcomes, you can find the average entropy of the superchannel, $\langle H \rangle$.
[1] P.W. Shor and J.A. Smolin, “Quantum Error-Correcting Codes Need Not Completely Reveal the Error Syndrome,” quant-ph/9604006; D.P. DiVincenzo, P.W. Shor, and J.A. Smolin, “Quantum Channel Capacity of Very Noisy Channels,” quant-ph/9706061.
Then by random coding on this new channel you can achieve a rate $R=\frac{1-\langle H \rangle}{m}$ (dividing by $m$ to get this rate in bits/original channel use).
I don't see how random coding works. You have a random code which is optimal for each particular channel but how do you decide which one to use? By the time you know the classical measurements for your channel you have already sent the codeword.
So two questions:
1) If you have an ensemble of Pauli channels with average entropy $\langle H\rangle$, can you by using random coding achieve a rate $1-\langle H \rangle$?
2) If you can't do this, am I misinterpreting the results of Shor and Smolin or Preskill's exposition?
Answer: For your question (1), the answer is yes. The 'hashing construction' for quantum encoding is independent of the quantum channel you use, so if you use this construction for encoding, you can send information over an ensemble of Pauli channels at the right rate. (Actually, this is slightly incorrect ... you do need that the input state you use in the formula for coherent information is the same for all the channels, but in the construction discussed by Preskill it is.)
The original paper of Shor and Smolin does not mention an ensemble of channels. If you take a depolarizing channel near the quantum-capacity threshold you can achieve a larger quantum capacity by considering a new superchannel where one signal of the superchannel is five consecutive uses of the original channel. When you apply the 'hashing bound' to this superchannel (acting on a 32-dimensional quantum space rather than a 2-dimensional one), you find that the formula for quantum capacity is larger than five times the formula for the original channel. There is only one channel being considered here: the superchannel composed of five uses of the original channel, and not an ensemble of channels. The ensemble of channels comes in when Preskill gives the intuition for what is happening in Shor-Smolin.
(And of course you should replace five in the above paragraph with the appropriate value of $m$, which is only five for some of these constructions.) | {
"domain": "cstheory.stackexchange",
"id": 1512,
"tags": "quantum-computing, quantum-information"
} |
Measuring vs gauging | Question: How would one describe the difference between measuring and gauging? As I understand it at my company, we use a fixed tool to gauge something, but we will use a set of calipers to measure something. Is this a correct understanding?
Answer: I think you have it correct.
We used "go / no go" gauges to check some things and made measurements for others. | {
"domain": "engineering.stackexchange",
"id": 2355,
"tags": "measurements"
} |
How does a capacitor pass the flow? | Question: A capacitor has just two conducting surfaces that do not touch. How can it pass a current when there is no contact between the surfaces?
Answer: Crudely: When negative charges accumulate on one capacitor plate, they repel electrons on the other plate and charge in motion is current. | {
"domain": "physics.stackexchange",
"id": 29555,
"tags": "electric-circuits, electric-current, capacitance"
} |
Total Collapse in Understanding a Pulley | Question:
In the following system, the tension T is equal in magnitude to the weight W.
But how can I set this up in a system of equations?
This is my problem:
$\Sigma F=0=\left( \begin{array}{c}
0\\
-W\\
\end{array} \right) + \left( \begin{array}{c}
-T\cos(a)\\
-T\sin(a)\\
\end{array} \right) \implies T=\frac{-W}{\sin(a)}$, which is obviously wrong.
Where is the flaw in my reasoning here? I have been trying for a while now to understand it, because I am embarrassed to post it here, but now I just want to know how my logic fails me...
Answer: Your error is that you applied the force-balance equations to a system that is not isolated (the pulley exerts a force, as it is attached to the wall, and the same goes for the rope at the left). You can ignore the pulley if you analyze the individual parts separately.
For the mass: $T-W=0$ so $T=W$ | {
"domain": "physics.stackexchange",
"id": 22648,
"tags": "homework-and-exercises, newtonian-mechanics, equilibrium"
} |
Degrees of freedom in the infinite momentum frame | Question: Lenny Susskind explains in this video at about 40min, as an extended object (for example a relativistic string) is boosted to the infinite momentum frame (sometimes called light cone frame), it has no non-relativistic degrees of freedom in the boost direction. Instead, these degrees of freedom are completely determined by the (non-relativistic) motions in the plane perpendicular to the boost direction.
I don't see why this is, so can somebody explain to me how the degrees of freedom are described in this infinite momentum frame?
Answer: Without seeing the quote/context I can only imagine that it means something like: if you take, say, a cube moving at close to c in the z direction, then (in the frame in which it's moving) its z extent gets Lorentz contracted to virtually zero, so it is effectively now a square in the xy plane and has only the degrees of freedom that a square in the xy plane has. | {
"domain": "physics.stackexchange",
"id": 5809,
"tags": "relativity, reference-frames, coordinate-systems, degrees-of-freedom"
} |
Publishing multiple variables using a single publisher | Question:
I have written a publisher to publish a float32 value to a topic depth_m. Now, I need to publish two more variables and they are of type int, not float. I would like all three of these variables to be published to the same topic. I am referring to the official tutorial. Do I just replace Float32 in the following lines with Int32 and add it to the existing publisher?
ros::Publisher depth_pub = pub_.advertise<std_msgs::Float32>("depth_m", 1000);
std_msgs::Float32 msg;
I believe I will have to edit msg to a different variable name msg1, msg2. According to me the whole thing will have the following lines:
ros::Publisher depth_pub = pub_.advertise<std_msgs::Float32>("depth_m", 1000);
ros::Publisher depth_pub2 = pub_.advertise<std_msgs::Int32>("depth_m", 1000);
std_msgs::Float32 msg;
std_msgs::Int32 msg1;
std_msgs::Int32 msg2;
msg.data=depth;
msg1.data=var1;
msg2.data=var2;
depth_pub.publish(msg);
depth_pub2.publish(msg1);
depth_pub2.publish(msg2);
Is this the right strategy? If I do this, will the whole thing act as a single message or as multiple messages? Can I avoid two advertise lines by using just std_msgs instead of std_msgs::Float32?
Originally posted by skr_robo on ROS Answers with karma: 178 on 2016-09-28
Post score: 0
Answer:
You cannot publish multiple different types of messages to the same topic. The ROS middleware does not support that. Each type will need its own topic.
You have two options:
if there is no correlation between your Float32 and your Int32 values (ie: they just happen to be published by the same program, but do not share any relationship), you could do what you are doing now: just create two separate publishers and use those.
if it makes sense for all three values to be published as a single message, you could create a new custom message containing three fields (two of type int32, one of type float32). Then create a publisher that publishes your custom message type instead of the Int32 or Float32 you have now. See the Creating a ROS msg and srv tutorial for how to do that.
Note that I'd only recommend using option 2 if this makes sense for your data flows. Don't put things together just because it allows you to "avoid two advertise lines".
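For the second option, a custom message bundling the three values might look like the sketch below (the package and field names here are made up for illustration; the actual build-system steps are covered in the msg/srv tutorial linked above):

```
# my_package/msg/DepthReading.msg  -- hypothetical package and message names
float32 depth_m
int32 var1
int32 var2
```

After registering the message in CMakeLists.txt and package.xml, a single publisher of type my_package::DepthReading carries all three values in one message per publish call.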
Originally posted by gvdhoorn with karma: 86574 on 2016-09-28
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by skr_robo on 2016-09-28:
Actually the three values are related and it would be helpful for them to stay together. I will try creating custom message type and will update soon. Thank You.
Comment by skr_robo on 2016-09-28:
It worked. Thank You. | {
"domain": "robotics.stackexchange",
"id": 25857,
"tags": "ros, publisher"
} |
Confusion about twin paradox | Question: Suppose that a man departs from Earth to reach a planet that is ninety-nine light-years away with 0.9802 light speed, c. According to special relativity, it takes him 40 years (measured by his own clock) to reach the planet and then return to Earth. But for the Earth's clock, 202 years have passed. When he reaches the planet, he would see his own clock reading 20 years and the Earth's clock reading 3.96 years, but the people on Earth would see their own clock reading 101 years.
To make things simple, think of two rockets, each moving without change in velocity. The man rides the first going out and the second coming back. Both of the rockets would meet at the planet, and that is when he changes his rocket (change in frame). In his frame, the Earth clock likewise advances 3.96 years during the return leg. So this just indicates that during the change of frame, at that instant, 194.08 years pass on the Earth clock. In other words, when he is in the first rocket at the moment he reaches the planet, he reads the Earth clock as 3.96 years. But the instant he is in the second rocket, when he looks at the Earth clock, it reads 198.04 years!
Why does the change in the frame (suppose that no time is needed to change his frame) alter the reading of the Earth clock so suddenly? And how?
Answer: When I face west, Los Angeles is 3000 miles in front of me (in my frame, of course). When I turn my body 90 degrees and face north, Los Angeles is 3000 miles to my left (in my new frame). Do you want to say that Los Angeles moved several thousand miles in the instant that I changed frames?
When I am at the end of my outbound journey, the earth clock reads 3.96 right now (in my frame, of course). When I change my velocity by 180 degrees, the earth clock reads 3.96 about 980 years ago (in my new frame). Do you want to say that earth clocks jumped forward almost 1000 years in the instant that I changed frames?
Los Angeles is what it is, and does not care what frame I use to describe it. If I change frames (and turning around is changing frames in space), I will have a new and different way to describe the same old location in space of the same old Los Angeles. An earth clock reading 3.96 is what it is and does not care what frame I use to describe it. If I change frames (and switching from an outbound rocket to an inbound rocket is changing frames), I will have a new and different way to describe the position in spacetime of the same old reading on the same old earth clock.
Again: "The clock reads 3.96 right now" and "The clock was reading 3.96 980 years ago" are perfectly analogous to "Los Angeles is 3000 miles ahead" and "Los Angeles is 3000 miles to my left". Each of these four statements is true in some frames and not in others. If you combine a statement that is true only in Frame A with a statement that is true only in Frame B, you are of course going to appear to get nonsense (like "Los Angeles moved several thousand miles in an instant" or "those clocks jumped 980 years in an instant"). But really, there's nothing to worry about.
Original post ends here.
This addendum is in response to a comment from which I will quote, because comments have a way of disappearing:
I still have a doubt. If I am correct, the event E mentioned above refers to "Earth clock reads 3.96 in return ship frame". If so...
Los Angeles is a location in space. Frames on earth --- like lines of latitude/longitude, or the grid I impose when I face north and define "forward, back, left, right", or the grid I impose when I face northeast and do the same thing --- give different descriptions of that location (like "3000 miles to the left" or "2121 miles to the left and 2121 miles backward"). But the location is what it is. "Los Angeles in the latitude/longitude frame" is not a location; it is a description of a location. There is no such location as "Los Angeles in the latitude/longitude frame".
.
"Earth clock reads 3.96" is an event in spacetime. Frames on spacetime --- like the grids imposed by travelers on various spaceships --- give different descriptions of that event (by assigning it time and location coordinates). But the event is what it is. "Earth clock reads 3.96 in return ship frame" is not an event; it is a pair of coordinates that describe an event. There is no such event as "Earth clock reads 3.96 in return ship frame". | {
"domain": "physics.stackexchange",
"id": 82429,
"tags": "special-relativity, reference-frames, time-dilation"
} |
Why do wheels roll if the force of friction does not depend on surface area? | Question: Introductory textbooks explain that friction does not depend on surface area. They usually illustrate this with an image like figure 10.41 on the left shown below:
They then also explain that a block of mass $m$ will be able to attain a maximum angle $\theta_{max}$ without slipping down hill. At this maximum angle, the force of static friction will be as follows $F^f_{s,max} =\mu_s F^n$ where $\mu_s$ is the coefficient of static friction. But now imagine the following scenario depicted in the rightmost image above where we have a block of mass $m$ at rest placed on an incline at its maximal angle before slipping occurs
If we now take the exact same situation but replace the block of mass $m$ with a wheel or sphere of mass $m$ as shown in the image above, intuitively, we know that the wheel will not be at rest, unlike the block. It will roll down the incline. But the force of gravity on the block and the wheel is the same. So I must infer that the frictional force on a wheel is less than the frictional force on an otherwise equal block ($\theta_{block}=\theta_{wheel}$, $m_{block}=m_{wheel}$, etc.).
I have two explanations for this but I'm not sure of their validity. The first is to realize that for the wheel/sphere, the center of mass is not directly above its point of contact with the incline and hence is unstable and will "topple over" due to gravity. This "toppling" occurs immediately, and hence the static frictional force possibly does not have enough time (?) to grow and become equal and opposite to the x component of the force of gravity. The second explanation is to suppose that the coefficient of friction does in fact depend on surface area. So do either of these explanations validly explain why the static frictional force is lower for the sphere than it is for the block, or is there some other reason?
Answer: Sure, the wheel will not remain at rest. But it also will not slide. And that was the condition you set up for the block. So there is no difference in terms of friction between the two.
There is only a difference in terms of toppling tendency, which you also touched at in your next paragraph. This is due to the mismatched centre-of-mass.
Imagine placing the brick on one end instead. If its centre of mass is located beyond the edge of the contact area, then it will topple over. This does not mean that static friction can't hold on to it but rather that the brick is "lifting itself off" of the surface by hinging over the contact point so that static friction doesn't apply anymore. After toppling over, the momentum it gains might keep it toppling (depending on the surface it lands on). If the brick then happens to continue toppling all the way down, then at no point during this tumbling did it slide. It just "let go" of the previous spot to which it was stuck (it was stuck along the parallel direction only) due to static friction.
A wheel does the same thing just with an infinitely small contact area. It basically "topples" constantly but never slides - this is what we call rolling. (This is why the invention of the wheel was quite a paradigm shift, theoretically at least, in engineering: motion without kinetic friction means motion without energy loss ideally. That's quite something.) | {
"domain": "physics.stackexchange",
"id": 81863,
"tags": "newtonian-mechanics, rotational-dynamics, friction"
} |
Which melts faster - ice cream or lollipop/popsicle? | Question: My five-year-old wants to know whether ice cream or an ice lollipop (aka a popsicle) melts faster. Does the dairy content in ice cream make it melt more slowly or more quickly?
Update:
Assume constant mass and surface area to volume ratio
Answer: The ice cream melts faster. That is because there is milk in ice cream, while there is water in a popsicle. There is an ice coating on the popsicle, while there is none on the ice cream, which makes the ice cream melt faster than the popsicle. | {
"domain": "physics.stackexchange",
"id": 81258,
"tags": "thermodynamics, everyday-life, phase-transition"
} |
Generalizing the "Extended System Method" | Question: After looking into molecular dynamics simulations for NVT and NPH ensembles, I noticed a peculiar kind of Lagrangian transform they do.
Starting with a Lagrangian like,
\begin{align}
\mathcal{L}(q, \dot{q}) = \frac{1}{2} \sum_i \| \dot{q}_i \|^2 - U(q)
\end{align}
In Andersen, Hans C. "Molecular dynamics simulations at constant pressure and/or temperature." The Journal of chemical physics 72.4 (1980): 2384-2393. (also in this), the author makes a transform to a new Lagrangian,
\begin{align}
\mathcal{L}_P(\phi, \dot{\phi}, V, \dot{V}) = \mathcal{L}(V^{1/3} \phi, V^{1/3} \dot{\phi}) + \frac{\eta}{2} \dot{V}^2 + P V
\end{align}
where $ P $ is the conserved pressure and $ \eta $ is the "mass" of the piston. With this new scaled space (ie. $ q \rightarrow q/V^{1/3} $) Lagrangian, the author proves that pressure is conserved.
Similarly, in Nosé, Shuichi. "A unified formulation of the constant temperature molecular dynamics methods." The Journal of chemical physics 81.1 (1984): 511-519., the author transforms to,
\begin{align}
\mathcal{L}_T(\phi, \dot{\phi}, s, \dot{s}) = \mathcal{L}(\phi, s \dot{\phi}) + \frac{\nu}{2}\dot{s}^2 + g k T \log s
\end{align}
with $ T $ being the conserved temperature and $ \nu $ the "mass" for the time scaling term. The author then proves that this time-scaled (ie. $ dt \rightarrow dt/s $) Lagrangian conserves temperature.
There is a noticeable symmetry here. Space scaling gives pressure conservation while time scaling gives temperature conservation. I would imagine this could be done for a number of ensemble control parameters, like external field magnetization.
What is the generalization to this method?
EDIT:
I guess it would be useful to know what I am looking for. Imagine I discovered that energy and momentum generate space-time translation in Quantum. I would like to be pointed to Lie-groups and their applications in Quantum and QFT. Same this here, just for this subject.
Answer: Consider a Hamiltonian transform like,
\begin{align}
\mathcal{H}_\Lambda\big(q', p', \phi, \pi\big) &= \mathcal{H}\big(q(q', \phi), p(p', \phi)\big) + \mathcal{H}_\phi(\phi, \pi; \Lambda) \\
\end{align}
where $ \phi $ serves the purpose of $ s, V $ above and $ \pi $ is its momentum variable. $ \Lambda $ is our temperature or pressure kind of variable.
Simulating Hamiltons equations generates a microcanonical distribution with a partition function (ie. density of states),
\begin{align}
\mathcal{Z}_\Lambda &= \int d\phi~d\pi \int d^Nq'~d^Np'~\delta\big[ \mathcal{H}_\Lambda\big(q', p', \phi, \pi\big) - E\big] \\
&= \int d\phi~d\pi \int d^Nq~d^Np~\mathcal{J}(\phi)~\delta \big[\mathcal{H}\big(q, p\big) + \mathcal{H}_\phi(\phi, \pi; \Lambda) -
E\big]
\end{align}
where the Jacobian is $ \mathcal{J}(\phi) = \left[ \frac{\partial^N q'}{\partial^N q} \frac{\partial^Np'}{\partial^Np}\right] $
You can use something like a Dirac delta function property or a Laplace transform.
From the composition property,
\begin{align}
\mathcal{Z}_\Lambda &= \sum_i \int d\pi~d^Nq~d^Np~\frac{\mathcal{J}(\phi^{(0)}_i)}{\frac{\partial}{\partial \phi} \mathcal{H}_\phi \big|_{\phi^{(0)}_i}} \\
&= \frac{1}{i2\pi} \int_{i\mathbb{R}} d\beta~e^{\beta E} \int d\phi~d\pi \int d^Nq~d^Np~e^{ -\beta \big[ \mathcal{H}(q, p) + T(\pi) + U(\phi;\Lambda) - \beta^{-1} \log \mathcal{J}(\phi) \big] } \\
&= \frac{1}{i2\pi} \int_{i\mathbb{R}} d\beta~e^{\beta E} \int d\phi~d\pi~e^{ -\beta \big[T(\pi) + U(\phi;\Lambda) - \beta^{-1} \log \mathcal{J}(\phi) \big] } \mathcal{Z}(\beta, N)
\end{align}
Where $ \phi^{(0)}_i $ is a zero of $ \mathcal{H}\big(q, p\big) + \mathcal{H}_\phi(\phi, \pi; \Lambda) -
E $ and $ \mathcal{H}_\phi(\phi, \pi; \Lambda) = T(\pi) + U(\phi;\Lambda) $
NVT Ensemble
\begin{align}
\mathcal{H}_\phi(\phi, \pi; T) &= \frac{1}{2} \pi^2 + (3N+1)T \log \phi \\
\phi^{(0)} &= e^{- \frac{\mathcal{H}(q, p) + \frac{1}{2} \pi^2 - E}{(3N+1)T}} ~~;~~
\mathcal{J}(\phi) = \phi^{3N} ~~;~~
\frac{\partial}{\partial \phi} \mathcal{H}_\phi = \frac{(3N+1)T}{\phi} \\
\mathcal{Z}_\Lambda &= (3N+1) T \int d\pi~d^Nq~d^Np~e^{- \frac{\mathcal{H}(q, p) + \frac{1}{2} \pi^2 - E}{(3N+1)T} (3N + 1)} \propto \int d^Nq~d^Np~e^{- \frac{\mathcal{H}(q, p)}{T}}
\end{align} | {
"domain": "physics.stackexchange",
"id": 43152,
"tags": "statistical-mechanics, lagrangian-formalism, molecular-dynamics"
} |
What is a protective epitope? | Question: What is a protective epitope? An epitope is basically a part of antigen. So does it mean that when the epitope combines with an antibody, it helps in the functioning of the antibody instead of going against it?
Answer: Let's define the nomenclature first: an antigen is a large structure (a protein, virus, bacterium and so on) which is recognized by the immune system as foreign. The word antigen derives from the abbreviation ANTIbody GENerator. Exposure of our immune system to an antigen results in an immune response and the generation of many antibodies.
An epitope is a small part of an antigen - typically these are small structural elements or small peptides (8-11 amino acids in length) which are recognized by the binding site of an antibody. See the picture below (from here):
Here the antigen is the albumin protein. On the surface you can see 8 different epitopes, which lead to the generation of 8 highly specific antibodies against these epitopes (Antikörper is the German word for antibody). Every antibody is highly specific for its epitope, will recognize no other epitope, and helps raise an immune response against it. What happens after an antibody binds to its epitope can be read here.
Antibodies directed against a protective epitope are directed against (highly) conserved structures of the antigen. For example, they recognize a highly conserved structure of a virus, which means that even when this virus mutates (as influenza does) they will still recognize the conserved epitope and give protection against this virus (or better: antigen). | {
"domain": "biology.stackexchange",
"id": 2669,
"tags": "molecular-biology, immunology, antibody"
} |
How do I derive the expression for velocity in $S_n$ frame that has a velocity $v$ with respect to $S_{n-1}$ frame? | Question: If there are $n$ frames and the $i$th frame has velocity $v$ with respect to $i-1$th frame. How do I derive the relation between velocity in $S_0$ and $S_n$ frame?
I found the velocity in the $n$th frame to be
$u_n=\gamma^nu_0-v(\sum_{i=1}^n\gamma)$
What happens when n tends to infinity?
Here $\gamma=\frac{1}{\sqrt{1-\frac{v^2}{c^2}}}$
Answer: Suppose you have your base frame $S_0$ and a frame $S_1$ moving at a relative speed $v_1$. Then you have a second frame $S_2$ moving at $v_2$ relative to $S_1$. To calculate the speed of $S_2$ relative to your base frame you use the equation for the relativistic addition of velocities:
$$ v_{02} = \frac{v_1 + v_2}{1 + \frac{v_1v_2}{c^2}} \tag{1} $$
You could then use this to calculate $v_{02}$, then use it again to calculate $v_{03}$, and so on though that is rapidly going to get tedious. This is where the concept of rapidity mentioned by Ken G comes in.
Firstly let's write all our velocities as a fraction of the speed of light, $v/c$, in which case equation (1) simplifies to:
$$ v_{02} = \frac{v_1 + v_2}{1 + v_1v_2} \tag{2} $$
Now suppose we take the inverse hyperbolic tangent of this. This seems a strange thing to do, but you'll see why this simplifies things. The atanh function is:
$$ \text{atanh}(x) = \tfrac{1}{2}\ln\left(\frac{1+x}{1-x}\right) $$
If we take the atanh of equation (2) we get:
$$ \text{atanh}(v_{02}) = \tfrac{1}{2}\ln\left(\frac{1+\frac{v_1 + v_2}{1 + v_1v_2}}{1-\frac{v_1 + v_2}{1 + v_1v_2}}\right) $$
This apparently horrendous equation simplifies very easily. We just multiply everything inside the $\ln$ by $1 + v_1v_2$ and gather terms and we get:
$$\begin{align}
\text{atanh}(v_{02}) &= \tfrac{1}{2}\ln\left(\frac{(1+v_1)(1 + v_2)}{(1 - v_1)(1 - v_2)}\right) \\
&= \tfrac{1}{2}\ln\left(\frac{1+v_1}{1 - v_1}\right) + \tfrac{1}{2}\ln\left(\frac{1 + v_2}{1 - v_2}\right) \\
&= \text{atanh}(v_1) + \text{atanh}(v_2)
\end{align}$$
So the $\text{atanh}$ of $v_{02}$ is calculated just by adding the $\text{atanh}$s of $v_1$ and $v_2$. Calculating the relative velocity for the third frame just means adding $\text{atanh}(v_3)$:
$$ \text{atanh}(v_{03}) = \text{atanh}(v_1) + \text{atanh}(v_2) + \text{atanh}(v_3) $$
And it should be obvious the general case is:
$$ \text{atanh}(v_{0n}) = \sum_{i = 1}^n \text{atanh}(v_i) $$
The quantity $\text{atanh}(v)$ is called the rapidity, and this is what Ken means when he says the rapidities just add together.
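A quick numerical check of this (an editorial addition, not from the original answer): composing $n$ identical boosts with equation (2) matches $\tanh\big(n\,\text{atanh}(v)\big)$, and the result stays below $c$.

```python
import math

def add_velocity(u, v):
    # Relativistic addition of collinear velocities, in units of c (eq. 2)
    return (u + v) / (1 + u * v)

v, n = 0.5, 10

# Compose n identical boosts the slow way...
u = 0.0
for _ in range(n):
    u = add_velocity(u, v)

# ...and via rapidities: atanh(u_0n) = n * atanh(v), so u_0n = tanh(n*atanh(v))
u_rap = math.tanh(n * math.atanh(v))

print(abs(u - u_rap) < 1e-9, u < 1)  # True True
```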
The reason why we get this surprising behaviour is that a frame $S_1$ moving at $v_1$ is related to our rest frame by a hyperbolic rotation of a hyperbolic angle $\theta_1 = \text{atanh}(v_1)$. A second frame $S_2$ moving at $v_2$ relative to $S_1$ is rotated relative to $S_1$ by $\theta_2 = \text{atanh}(v_2)$, and the angles of rotation just add. So relative to us it is rotated by:
$$ \theta_{02} = \theta_1 + \theta_2 = \text{atanh}(v_1) + \text{atanh}(v_2) $$
That's why the rapidities add. | {
"domain": "physics.stackexchange",
"id": 35705,
"tags": "special-relativity, velocity, inertial-frames"
} |
Purpose of validation data NN | Question: Aside from using validation data to tune the hyperparameters is there any other benefit to including validation data to the model?
All I ever read about is it being used to tune hyperparameters and check for overfitting. Is the checking for overfitting separate from tuning the hyperparameters?
Training: Tune the parameters (weights and biases)
Validation: Tune hyperparameters
Test: Evaluate the model
So, if we are NOT tuning the hyperparameters, the validation set is pointless?
Answer: The whole idea of a validation set is that the model does not know about this data, so you can get an unbiased estimate of the model's performance. Then, based on this unbiased estimate, you find the best hyperparameters for the model. The problem is that finding the hyperparameters is itself a way of training your model, so with optimized hyperparameters the model starts to overfit the validation set. That is why, to check the real accuracy of the model, you need a further portion of data that your model never saw: this is the test set.
Therefore in your case, when you do not have any hyperparameters, you can just split the data into train and test sets.
If you want a better estimate of your model's accuracy, use cross-validation instead of a single train/test split. Neural networks usually do not use full cross-validation because it multiplies the computation time (by 5 for 5-fold cross-validation).
With hyperparameters, the ideal approach is nested (double) cross-validation: one loop for the validation set and one for the test set. This is computationally too expensive for most models, so it is only used with very simple ones, like ridge regression.
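To make the three roles concrete, here is a minimal numpy sketch (illustrative only; the toy data, split sizes, and ridge model are my own choices): training fits the weights, validation picks the regularization strength, and the test set is touched exactly once at the end.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data: y = 3*x + noise
X = rng.uniform(-1, 1, size=(300, 1))
y = 3 * X[:, 0] + 0.1 * rng.normal(size=300)

# 60/20/20 split: train tunes the weights, validation tunes the
# hyperparameter, test gives the final unbiased estimate.
idx = rng.permutation(300)
tr, va, te = idx[:180], idx[180:240], idx[240:]

def fit_ridge(X, y, lam):
    # Closed-form ridge regression: w = (X^T X + lam*I)^{-1} X^T y
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def mse(X, y, w):
    return float(np.mean((X @ w - y) ** 2))

# "Training" fits weights for each candidate hyperparameter;
# "validation" picks the lambda with the lowest validation error.
lams = [0.0, 0.1, 1.0, 10.0]
best = min(lams, key=lambda lam: mse(X[va], y[va], fit_ridge(X[tr], y[tr], lam)))

# "Test": evaluate the chosen model exactly once on data it never influenced.
w = fit_ridge(X[tr], y[tr], best)
print(best, round(mse(X[te], y[te], w), 3))
```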
Also, very few models that I know of lack hyperparameters, and those usually perform poorly compared to the ones that have them: ridge regression is often better than linear regression, and neural networks with a variable number of layers perform better than fully automated neural networks. | {
"domain": "datascience.stackexchange",
"id": 7046,
"tags": "neural-network"
} |
Is a string with no parentheses considered as having balanced parentheses? | Question: Say I have a language over the alphabet {x,y,(,)}. The language's rule is: any string over that alphabet with balanced parentheses.
Clearly x(), (xy), ()()xxx, ((xx)) are accepted. BUT
My question is, is a string with no parentheses considered as having balanced parentheses? For example are: y, xxyy, xyxyxxx accepted?
And then also what about an empty String?
Answer: This is what formal definitions are for...
That said, it makes the most sense to allow no parentheses. Roughly speaking, a string has balanced parentheses if every left parenthesis is "matched" by a right parenthesis (of course this is far from being a formal definition). If there is no left parenthesis, then everything is OK.
Once you accept that, there is absolutely nothing wrong about the empty string – it is a string like any other. In particular, it is a string over your alphabet with balanced parentheses.
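Operationally — an editorial addition, not part of the original answer — membership can be tested with a single depth counter, since the only nesting construct is the parenthesis pair; note that paren-free strings and the empty string come out accepted:

```python
def balanced(s):
    """Membership test for the language of strings over {x, y, (, )}
    with balanced parentheses, using a single depth counter."""
    depth = 0
    for ch in s:
        if ch not in "xy()":
            return False          # not a string over the alphabet
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:         # a ')' with no '(' to match it
                return False
    return depth == 0             # every '(' was eventually closed

# Strings with no parentheses -- including the empty string -- are accepted:
print(balanced(""), balanced("xxyy"), balanced("((xx))"), balanced(")x("))
# True True True False
```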
One way to formally define your language is using a context-free grammar. While there are many possible grammars, here is one that tried to capture the "matching" aspect when scanning the string from left to right:
$$
\begin{align*}
&S \to \epsilon \\
&S \to xS \\
&S \to yS \\
&S \to (S)S
\end{align*}
$$ | {
"domain": "cs.stackexchange",
"id": 8184,
"tags": "context-free, pushdown-automata"
} |
Does spin really have no classical analogue? | Question: It is often stated that the property of spin is purely quantum mechanical and that there is no classical analog. To my mind, I would assume that this means that the classical $\hbar\rightarrow 0$ limit vanishes for any spin-observable.
However, I have been learning about spin coherent states recently (quantum states with minimum uncertainty), which do have a classical limit for the spin. Schematically, you can write down an $SU(2)$ coherent state, use it to take the expectation value of some spin-operator $\mathcal{O}$ to find
$$
\langle \hat{\mathcal{O}}\rangle = s\hbar\,\mathcal{O},
$$
which has a well-defined classical limit provided you take $s\rightarrow \infty$ as you take $\hbar\rightarrow 0$, keeping $s\hbar$ fixed. This has many physical applications, the result usually being some classical angular momentum value. For example, one can consider a black hole as a particle with quantum spin $s$ whose classical limit is a Kerr black hole with angular momentum $s\hbar\,\mathcal{O}$.
Why then do people say that spin has no classical analog?
Answer: You're probably overthinking this. "Spin has no classical analogue" is usually a statement uttered in introductory QM, where we discuss how a quantum state differs from the classical idea of a point particle. In this context, the statement simply means that a classical point particle as usually imagined in Newtonian mechanics has no intrinsic angular momentum - the only component to its total angular momentum is that of its motion, i.e. $r\times p$ for $r$ its position and $p$ its linear momentum. Angular momentum of a "body" in classical physics implies that the body has an extent and a quantifiable motion rotating around its c.o.m., but it does not in quantum mechanics.
Of course there are many situations where you can construct an observable effect of "spin" on the angular momentum of something usually thought of as "classical". These are just demonstrations that spin really is a kind of angular momentum, not that spin can be classical or that the angular momentum you produced should also be called "spin".
Likewise there are classical "objects" that have intrinsic angular momentum not directly connected to the motion of objects, like the electromagnetic field, i.e. it is also not the case that classical physics does not possess the notion of intrinsic angular momentum at all.
"Spin is not classical" really is just supposed to mean "A classical Newtonian point particle possesses no comparable notion of intrinsic angular momentum". (Note that quantization is also not a particular property of spin, since ordinary angular momentum is also quantized, as seen in e.g. the azimuthal quantum number of atomic orbitals) | {
"domain": "physics.stackexchange",
"id": 93329,
"tags": "quantum-mechanics, classical-mechanics, angular-momentum, quantum-spin, semiclassical"
} |
Use 2SAT to show that an implication graphs must have a cycle if it's not satisfiable | Question: Using 2SAT and implication graphs, how could I prove the following properties of implication graphs:
Suppose there is a directed path between literals l1 and l2 in G_φ. Then there is also a directed path between their complements. Moreover, if τ is a truth assignment satisfying φ in which τ(l1) is true, then τ(l2) must also be true.
Using this, show that φ is unsatisfiable <=> there is some directed cycle in G_φ containing a variable x and its complement.
where G_φ is the directed implication graph of the 2SAT formula φ with n variables: 2n vertices, one for every possible literal in φ, and edges (not l1, l2) and (not l2, l1) for every clause (l1 ∨ l2) in φ.
My first intuition was a proof by contradiction however I failed to construct a general enough assumption. I then tried to show that if the truth assignment means that l1 and l2 are true, then by building a cycle connecting all variables, the assignment is only valid when those edges exist. However this doesn't seem rigorous enough since I'm not properly understanding why the cycle requires the complement of x to exist.
Currently I build G by adding a vertex for every variable x and its complement as well. Then for each clause (a v b) I add an edge from not a to b and from not b to a.
However I fail to see how this would specifically form a cycle.
Working off the Sipser textbook.
Answer: Here is a proof sketch. We will show that the given formula is unsatisfiable iff there exists a cycle containing both $x$ and $\lnot x$, for some variable $x$.
Suppose first that there exists a cycle containing both $x$ and $\lnot x$. The existence of a path $x \to^* y$ in the implication graph means that in a satisfying assignment, if $x$ holds then so does $y$ (you can prove it by induction on the length of the path). Since there is a cycle containing both $x$ and $\lnot x$, there are paths $x \to^* \lnot x$ and $\lnot x \to^* x$. In any satisfying assignment, either $x$ or $\lnot x$ holds, and both cases lead to contradiction.
Suppose next that there are no cycles containing both $x$ and $\lnot x$. We will construct a satisfying assignment as follows. Choose some variable $x$. If there is a path $x \to^* \lnot x$, assign $x = F$, and for each literal $\ell$ such that $\lnot x \to^* \ell$, assign the value to the underlying variable which makes $\ell$ true. You cannot be assigning the same variable two different truth values, since if $\lnot x \to^* y$ and $\lnot x \to^* \lnot y$ then also $y \to^* x$ and so also $\lnot x \to^* x$, completing a cycle involving $x$ and $\lnot x$.
If there is a path $\lnot x \to^* x$, assign $x = T$, and continue analogously.
If neither of these paths exists, assign $x$ arbitrarily, and continue as before. This cannot lead to a contradiction, for the following reason. Suppose that we assign $x = F$, and that there are paths $x \to^* y$ and $x \to^* \lnot y$. Then there is also a path $y \to^* \lnot x$ and so a path $x \to^* \lnot x$, contradicting the assumption that neither of the two paths exists.
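(Editorial addition: this cycle criterion is exactly what linear-time 2SAT solvers implement — build the implication graph, compute strongly connected components, and report unsatisfiable iff some $x$ and $\lnot x$ share a component. A sketch using Kosaraju's algorithm:)

```python
def two_sat(n, clauses):
    """Decide satisfiability of a 2-CNF over variables 0..n-1.

    A literal is a pair (var, is_positive); node 2v is x_v, node 2v+1 is
    not-x_v. The formula is unsatisfiable iff some x and not-x share a
    strongly connected component of the implication graph.
    """
    N = 2 * n
    node = lambda v, pos: 2 * v + (0 if pos else 1)
    adj = [[] for _ in range(N)]
    for (a, pa), (b, pb) in clauses:
        # clause (a or b) contributes edges  not-a -> b  and  not-b -> a
        adj[node(a, not pa)].append(node(b, pb))
        adj[node(b, not pb)].append(node(a, pa))

    # Kosaraju's algorithm: record finish order on G ...
    order, seen = [], [False] * N
    for s in range(N):
        if seen[s]:
            continue
        seen[s] = True
        stack = [(s, 0)]
        while stack:
            v, i = stack.pop()
            if i < len(adj[v]):
                stack.append((v, i + 1))
                w = adj[v][i]
                if not seen[w]:
                    seen[w] = True
                    stack.append((w, 0))
            else:
                order.append(v)   # v is finished

    # ... then flood-fill the reversed graph in reverse finish order.
    radj = [[] for _ in range(N)]
    for u in range(N):
        for w in adj[u]:
            radj[w].append(u)
    comp, c = [-1] * N, 0
    for s in reversed(order):
        if comp[s] != -1:
            continue
        comp[s], stack = c, [s]
        while stack:
            for w in radj[stack.pop()]:
                if comp[w] == -1:
                    comp[w] = c
                    stack.append(w)
        c += 1

    return all(comp[2 * v] != comp[2 * v + 1] for v in range(n))

# (x) and (not x): a cycle through x and not-x, hence unsatisfiable:
print(two_sat(1, [((0, True), (0, True)), ((0, False), (0, False))]))  # False
# (x or y) and (not-x or not-y): satisfiable, e.g. x=T, y=F:
print(two_sat(2, [((0, True), (1, True)), ((0, False), (1, False))]))  # True
```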
If after this process there is some unassigned variable left, choose one of them, and repeat until the assignment covers all variables. | {
"domain": "cs.stackexchange",
"id": 15710,
"tags": "graphs, computability, satisfiability, 2-sat"
} |
Quark Radius Upper Bound | Question: If quarks had internal structure (contradicting current beliefs), what is the lowest
upper bound on their "radius" based on current experimental results?
If possible, I'd prefer to only consider experiments which probe protons and
neutrons (not other shorter-lived particles since their interpretations get biased
more by the standard model).
My only understanding is that this radius must be less than roughly 0.2 fm since
spacings are found to be 2 fm in high-energy proton scattering experiments. I
imagine "higher-energy scattering experiments" and "excited angular momentum
experiments" have probed this further, but am not familiar with any other results.
Or, is there some other reason why this radius must be zero? Honestly, with the
surprise of quarks 10,000x smaller than the electron cloud, it wouldn't be surprising
if we found some internal structure after another 10,000x zoom.
Answer: As I mentioned in another answer, what people actually report these days is not really an upper bound on the radius of a particle. Instead, what you'll find is a lower bound on what is sometimes called the "contact interaction scale" - the energy at which you start to see effects of interactions among the constituents of a quark, if they exist.
For example, the most recent information I can find is this paper from the CMS experiment. It presents lower bounds on the contact interaction scale ranging from $7.5\text{ TeV}$ to $14.5\text{ TeV}$, depending on which model of substructure you're looking at. (In order to extract a lower bound from the data you get out of the detector, you need to make some assumptions about what kind of substructure you might be looking for.) So roughly speaking, we're reasonably sure that the types of substructure considered in the paper do not have any effect at processes involving less than $7.5\text{ TeV}$ of energy.
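(Editorial aside: such an energy bound converts to a rough length scale via $\lambda = hc/E$, and the arithmetic for the $7.5\text{ TeV}$ figure is easy to reproduce:)

```python
# Converting a lower bound on the contact-interaction scale into a rough
# length via lambda = h c / E (order-of-magnitude only).
h = 6.62607015e-34      # Planck constant, J s
c = 2.99792458e8        # speed of light, m/s
eV = 1.602176634e-19    # joules per electronvolt
E = 7.5e12 * eV         # 7.5 TeV in joules
lam = h * c / E
print(f"{lam:.2e} m")   # ~1.65e-19 m
```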
You can convert these limits into distances using the formula $\lambda = h c/E$, which tells you the wavelength corresponding to a particle with that limiting energy. This is just a rough order-of-magnitude bound, but it's as close as you can get to declaring an upper bound on the quark's radius with the knowledge we have today. Based on the values in the paper, it's $1.6\times 10^{-19}\text{ m}$. | {
"domain": "physics.stackexchange",
"id": 4104,
"tags": "standard-model, scattering, protons, quarks"
} |
Perplexing local cost map problem | Question:
Update: 3 images: one after the set position was issued, one with the global map on, one with the global map off, and the Gazebo image of the simulation.
Update: setting the static map parameter to false in the global_costmap yaml file and the blue map line disappears. Not sure why since the static map is blank. Turning off the Global Map panel (topic: /move_base/global_costmap/costmap) in rivz also makes the blue line go away.
Update: the light blue crescent map shape is coming from the nav_msgs::OccupancyGrid topic /move_base/global_costmap/costmap. When I toggle the global panel off this line disappears. Not sure what this line represents. The robot still runs backwards in both rviz and Gazebo if DWA is on.
When I run with the local costmap params below, my robot in rviz and Gazebo runs backwards and plots no local (green) path. The aqua blue dots seem to be map related while the red dots are laser. Note they seem to have different orientations. But why?
global_costmap:
global_frame: /map
robot_base_frame: /base_link
update_frequency: 30.0
publish_frequency: 30.0
static_map: true # note setting to false removes blue line
width: 20.0
height: 20.0
origin_x: -10.0
origin_y: -10.0
local_costmap:
global_frame: /odom
robot_base_frame: /base_link
update_frequency: 10.0
publish_frequency: 10.0
static_map: false
rolling_window: true
width: 16.0
height: 16.0
origin_x: 0.0
origin_y: 0.0
when I run with
static_map: true
rolling_window: false
my robot runs forward and does display a current local path. These are the only changes I make to get these results. Static map can be set to false here, and as long as DWA is also false, it will move forward.
Originally posted by rnunziata on ROS Answers with karma: 713 on 2013-10-23
Post score: 0
Original comments
Comment by David Lu on 2013-10-25:
What release are you running?
Comment by David Lu on 2013-10-25:
Also, the blue dots are your laser scans, and they are where you expect them to be?
Comment by rnunziata on 2013-10-25:
I am running the latest hydro and gazebo 1.9. The blue dots do not make sense to me, since the laser is pointing in the opposite direction. The active laser scans are the red dots and crescent line above the yellow goal arrow. The blue dots are maybe the map data, but why are they on the other side?
Comment by dornhege on 2013-10-29:
You have displayed global costmap and shown the local costmap parameters. Can you check global costmap parameters.
Comment by David Lu on 2013-11-01:
Is your configuration using the ObstacleLayer or the VoxelLayer?
Comment by rnunziata on 2013-11-01:
I am strictly 2d ObstacleLayer
Comment by David Lu on 2013-11-01:
Can you clarify what topic you're seeing the blue dots in your visualization from?
Comment by rnunziata on 2013-11-01:
updated question with topic data. /move_base/global_costmap/costmap
Comment by David Lu on 2013-11-01:
Odd. Can you give a zoomed out full resolution version of just the data from /move_base/global_costmap/costmap and whatever your laser scan is?
Comment by rnunziata on 2013-11-01:
Have Video ...how to up load Ogg Video (video/ogg)? I try attach file but it does not return. File is 6meg. Is that too big?
Comment by David Lu on 2013-11-01:
Does it need to be video? Can you just grab two screenshots?
Comment by rnunziata on 2013-11-01:
done...see updated question
Comment by rnunziata on 2013-11-01:
Hold on... I believe my static map was not as clean as I thought on second inspection; the dots are so small you can hardly see them in the GIMP image editor. The map probably had a bad orientation to begin with. Sorry to waste your time.
Answer:
Error on my part: the static global map was not blank. It had bad data on it that I did not see in the GIMP image editor.
Originally posted by rnunziata with karma: 713 on 2013-10-30
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 15944,
"tags": "ros"
} |
Python text game | Question: I've written a simple text game in Python to show you. I would be glad if you would spare a quick look and point out the style errors so I wouldn't pick up bad habits.
""" A text game "Bob, the Cookie Cooker". """
import random
COINS_FOR_WIN = 1000000
FINE_COOKIE_PRICE = 30
POOR_COOKIE_PRICE = 10
PRICE_DEVIATION = 5
HUGE_DOUGH_AMOUNT = 1000
DOUGH_PRICE = 10
MAX_LOAN = 10000
LOAN_PRICE = 100
TICKET_PRICE = 1
LOTTERY_PRIZE = 2
BEG_MONEY = 8
TIME_COOKIE_COOK = 1
TIME_BIG_DEAL = 60
TIME_SMALL_DEAL = 30
TIME_GRANDMA = 60
TIME_BANK_SERVICE = 5
TIME_LOTTERY = 1
TIME_BEGGING = 1440
CHANCE_LOTTERY = 1000
CHANCE_SKILL_COOK = 1000
CHANCE_SKILL_GRANDMA = 100
cookies = 0
minutes = 0
skill = 0
coins = 0
dough = 0
bribe = 1
debt = 0
term = 0
def main():
""" The main function. """
intro()
while menu():
if check_for_end():
break
def intro():
""" Prints out a welcoming string. """
print ( """
.--,--.
`. ,.' Bob,
|___| the Cookie Cooker
:o o: O
_`~^~'_ |
/' ^ `\=)
.' _______ '~|
`(<=| |= /'
| |
|_____|
~~~~~~~ ===== ~~~~~~~~
""" )
print ("\tWelcome, my dear player! I am Bob, the Cookie Cooker! I dream to")
print ("\tcook my way to the top. But I am not a smart guy, so that's why")
print ("\tI'm talking to You. I need your guidance. Please, turn me into a")
print ("\tCookie Master! Help me to become rich and famous! So let's go!")
print ()
def menu():
""" Player selects action he/she wants to do. """
methods = { 1: work, 2: buy, 3: grandma, 4: bank, 5: lottery, 6: beg}
print ("Cookies made: {}. Game-time spent: {}.".format(cookies, minutes))
print ("Skill: {}%. Coins: {}. Dough: {} kg.".format(skill, coins, dough))
print ( """
Action:
1. Go to work.
2. Go to supermarket.
3. Visit grandma.
4. Visit bank.
5. Buy a lottery ticket.
6. Beg for money.
7. Exit.
""" )
selection = get_valid_int(1, 7)
if selection != 7:
methods[selection]()
return True
else:
return False
def get_valid_int(min_bound, max_bound):
""" Prompts for input until an integer in the provided range is entered. """
while True:
user_input = input(">> ")
if is_int(user_input):
user_input = int(user_input)
if min_bound != None and user_input < min_bound:
print ("Number is too small.")
continue
if max_bound != None and user_input > max_bound:
print ("Number is too big.")
continue
break
return user_input
def is_int(string):
""" Returns whether the string can be converted into an integer. """
try:
value = int(string)
return True
except ValueError:
print ("Input is not an integer.")
return False
def work():
""" Player cooks & sells specified amount of cookies. """
print ("How many cookies do you want to cook?")
amount = get_valid_int(0, None)
cookie_loop(amount)
def cookie_loop(amount):
""" Tries to cook and sell the specified amount of cookies. """
global minutes
for i in range(amount):
minutes += TIME_COOKIE_COOK
if dough > 0:
cook_cookie(i)
else:
be_confused(i)
def cook_cookie(i):
""" Cooks and sells a cookie. """
global cookies, skill, coins, dough
cookies += 1
dough -= 1
price = random.randrange(PRICE_DEVIATION)
if random.randrange(101) > skill:
# A poor cookie is cooked
price += POOR_COOKIE_PRICE
print ("You sell a poor cookie #{} for {} coins.".format(i+1, price))
else:
# A fine cookie is cooked
price += FINE_COOKIE_PRICE
print ("You sell a FINE cookie #{} for {} coins.".format(i+1, price))
coins += price
# There's a 1 in a CHANCE_SKILL_COOK to improve your skill by 1%:
if random.randrange(CHANCE_SKILL_COOK) == 42 and skill < 100:
skill += 1
def be_confused(i):
""" Stands confused for no dough is left. """
print ("Minute #{} passes. You're confused - no dough is left.".format(i+1))
def buy():
""" Player visits the supermarket to buy dough. """
global minutes, coins, dough
print ("Hello! We are the biggest supermarket in town. We only sell dough.")
print ("How much would you like?")
offer = get_valid_int(0, None)
if offer == 0:
print ("Well, okay then.")
elif offer > HUGE_DOUGH_AMOUNT and offer * (DOUGH_PRICE / 2) <= coins:
# Big deals come 50% off
print ("Wow. That's a big deal! It's 50% for big clients. Here you go!")
coins -= int (offer * (DOUGH_PRICE / 2) )
minutes += TIME_BIG_DEAL
elif offer * DOUGH_PRICE <= coins:
# Standard deals for standard clients
print ("Thank you for using our services. Come again!")
coins -= int (offer * DOUGH_PRICE)
minutes += TIME_SMALL_DEAL
else:
print ("I appreciate your intentions, but you don't have enough money.")
return
dough += offer
def grandma():
""" Player visits grandma to get some cookie cooking experience. """
global minutes, skill, coins, bribe
print ("Grandma: howdy son! Me sees ya wanna get some tips, so tell me ")
print ("how much ya hearts says you shoulda give me.")
offer = get_valid_int(0, coins)
if offer < bribe:
print ("Ya greedy bastard! You think wisdom comes for free?")
print ("Me teach ya a lesson and take your money for nothin'.")
else:
bribe += offer
print ("Grandma tells you a secret cookie cooking secret.")
if random.randrange(CHANCE_SKILL_GRANDMA) == 7 and skill <= 95:
# There's 1 in CHANCE_SKILL_GRANDMA to increase your skill by 5%
skill += 5
minutes += TIME_GRANDMA
coins -= offer
def bank():
""" Player goes to bank to loan some coins or to repay his debts. """
print ("Bank: we loan up to {} for {} coins.".format(MAX_LOAN, LOAN_PRICE))
print ("Don't forget that we find everyone who doesn't repay in time.")
print ("Enter '1' to request a loan.")
print ("Enter '2' to repay your debt.")
print ("Enter '3' for a term reminder.")
print ("Enter '4' to exit the bank.")
selection = get_valid_int(1, 4)
if selection == 1:
take_a_loan()
elif selection == 2:
repay_debt()
elif selection == 3:
if debt == 0:
print ("You've got no debts!")
else:
print ("Repay your debts before minute #{}.".format(term))
else:
print ("Bank says you a cold official good bye.")
def take_a_loan():
""" Player tries to take a loan at bank. """
global minutes, coins, term, debt
if debt != 0:
print ("We can't give you another loan. You already have debts.")
elif coins < LOAN_PRICE:
print ("You don't have enough yellows to purchase a loan.")
else:
debt = get_valid_int(0, MAX_LOAN)
if debt > 0:
minutes += TIME_BANK_SERVICE
term = int (minutes + debt / 2)
coins += debt
print ("Thank you for using our services. Repay in time.")
print ("You have to repay the debt before minute #{}.".format(term))
def repay_debt():
""" Player tries to repay debt at bank. """
global minutes, coins, term, debt
if debt == 0:
print ("You've got no debt to repay.")
elif coins >= debt:
minutes += TIME_BANK_SERVICE
coins -= debt
debt = 0
term = 0
print ("Thank you. Come again.")
else:
print ("You don't have enough money to repay your debt.")
def lottery():
""" Player plays lottery. """
global minutes, coins
print ("Enter nothing to purchase a lottery ticket, anything else to exit.")
while input(">> ") == "":
if coins >= TICKET_PRICE:
minutes += TIME_LOTTERY
coins -= TICKET_PRICE
print ("You buy a ticket for {}.".format(TICKET_PRICE))
if random.randrange(CHANCE_LOTTERY) == 666:
# There's 1 in CHANCE_LOTTERY to win a lottery
coins += LOTTERY_PRIZE
print ("You won {}.".format(LOTTERY_PRIZE))
else:
print ("You didn't win.")
else:
print ("You don't have enough money to purchase a lottery ticket.")
break
def beg():
""" Player goes on the streets begging people for coins. """
global minutes, coins
beg_money = random.randrange(BEG_MONEY)
print ("You beg people for money on the streets for whole day.")
print ("People give you this many coins: {}.".format(beg_money))
minutes += TIME_BEGGING
coins += beg_money
def check_for_end():
""" Checks if player won or lost. """
if term != 0 and minutes > term:
print ("GAME OVER. You forgot to pay your debt and bank got you.")
return True
elif coins > COINS_FOR_WIN:
print ("GAME OVER. You won in {} minutes.".format(minutes))
return True
else:
return False
if __name__ == "__main__":
main()
Writing a program in a single source file raises some worrying issues:
Use of global variables (how would you eradicate them here?)
Everything is very diverse, there's little unification over type of content.
The navigation gets more and more difficult as file expands.
To fight these problems I found the following solution:
import sharedclass # Global stuff in a class shared by multiple game states
import gamestate1 # Holds a game state class with its vars, defs and everything
import gamestate2 # Game state holds a class with run method
import gamestate3 # Run method returns an int that tells which stage to switch to
shared = sharedclass.Shared()
associations = [ gamestate1.Game_State1(),
gamestate2.Game_State2(),
gamestate3.Game_State3()]
index = 0
while shared.is_running:
index = associations[index].run(shared)
if index == -1:
break
What do you think of it? Is it an OK program design?
Answer: This is a very good start - the code is broken up into logical functions, generally follows the style guide and includes explanatory docstrings. Well done!
One of the pet peeves in Python's style guide, which was being discussed on SO a short while ago, is using whitespace to line up operators, e.g.:
COINS_FOR_WIN     = 1000000
FINE_COOKIE_PRICE = 30
would generally be written:
COINS_FOR_WIN = 1000000
FINE_COOKIE_PRICE = 30
This reduces the maintenance overhead if you add or rename a constant, or change one of the values.
You can use multiline strings and textwrap.dedent (see Avoiding Python multiline string indentation), rather than line after line of print, to neaten up the blocks of text:
print(textwrap.dedent('''
.--,--.
`. ,.' Bob,
|___| the Cookie Cooker
:o o: O
_`~^~'_ |
/' ^ `\=)
.' _______ '~|
`(<=| |= /'
| |
|_____|
~~~~~~~ ===== ~~~~~~~~
Welcome, my dear player! I am Bob, the Cookie Cooker! I dream to
...
'''))
There are generally two ways to avoid global state:
Pass all necessary state into and out of a function explicitly; or
Encapsulate the state in another object (e.g. a dictionary or class).
For the first, for example:
def cookie_loop(amount, minutes):
""" Tries to cook and sell the specified amount of cookies. """
for i in range(amount):
minutes += TIME_COOKIE_COOK
if dough > 0:
cook_cookie(i)
else:
be_confused(i)
return minutes
As the required state is passed in and returned explicitly, there's no need for global, but you now need to call e.g. minutes = cookie_loop(amount, minutes) rather than cookie_loop(amount) in work.
However, note that some functions take and modify many different parts of the state, so passing these all explicitly back and forth would quickly get out of hand:
minutes, coins, bribe = grandma(minutes, skill, coins, bribe)
This suggests encapsulation might be helpful, e.g. you could have a dictionary:
game_state = dict(
cookies=0,
minutes=0,
skill=0,
coins=0,
dough=0,
bribe=1,
debt=0,
term=0,
)
Now you only pass a single state parameter around, then e.g. state['minutes'] += TIME_COOKIE_COOK. Alternatively, you could look into OOP, and develop a CookieGame class that holds all of the state and provides methods for manipulating it, rather than passing it around to different functions.
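A minimal sketch of that class-based alternative (the class and method names here are illustrative, not part of the original code):

```python
class CookieGame:
    """Holds all mutable game state; methods replace the global statements."""

    TIME_BEGGING = 1440

    def __init__(self):
        self.cookies = 0
        self.minutes = 0
        self.skill = 0
        self.coins = 0
        self.dough = 0

    def beg(self, beg_money):
        # Mirrors the original beg(), but mutates self instead of globals.
        self.minutes += self.TIME_BEGGING
        self.coins += beg_money


game = CookieGame()
game.beg(5)
print(game.coins, game.minutes)  # 5 1440
```

Each task then becomes a method that reads and writes `self`, so no `global` statements are needed anywhere.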
In terms of splitting it up into smaller parts, note that the various tasks could be standalone functions, that take and mutate a single object representing a player (which holds e.g. the player's skill and money) and return the elapsed time, and could live in separate modules. For example:
from grandma import visit_grandma
from kitchen import bake_cookies
from supermarket import buy_dough
# import other standalone tasks
TASKS = {
'Visit grandma': visit_grandma,
'Bake some cookies': bake_cookies,
'Buy some dough': buy_dough,
# build dictionary of tasks
}
# user makes a choice from the keys
minutes += TASKS[key_choice](user) # call task function with user object
This should make adding a new task as easy as importing the appropriate function and adding it to TASKS. Here e.g. grandma.py would hold all of the constants specific to that task, along with the task function (and any supporting sub-functions). Utility functions like get_valid_int would be in a separate file, e.g. utils.py, so that any task module can access them. | {
"domain": "codereview.stackexchange",
"id": 13073,
"tags": "python, beginner, python-3.x, adventure-game"
} |
Is CNN permutation equivariant? | Question: If I use stacked CNN layers with 3x3 kernels, zero padding, and with no pooling layers, the output feature map will consist of feature vectors, each vector of which is related directly to the original 3x3 block of the input image, right?
Therefore, for example, I could send the output feature map vectors as timesteps to LSTM and it would supposedly learn dependencies between clusters of the image given the image clusters have a temporal dependency.
Answer: If you use a single layer CNN, then each vector in the resulting activation maps would be related to the original 3x3 block. However, if you stack multiple CNN layers, you increase the receptive field of each resulting vector, as shown in the image below (taken from here):
After the CNNs, you can certainly compute an LSTM. There are, however, some design decisions you would need to take: disregarding the batch dimension, the LSTM takes as input a sequence of vectors (2D tensor) but you have a 3D tensor (height $\times$ width $\times$ num.channels) so, how are you going to make the 3D tensor fit as input to the LSTM? You could compute the LSTM of columns/rows of the image independently, in forward or reverse direction. You could also "collapse" one dimension (e.g. by averaging or taking the maximum value). | {
"domain": "datascience.stackexchange",
"id": 8020,
"tags": "machine-learning, neural-network, cnn, lstm"
} |
Can I have LAMP on the bench / in the field? | Question: I am interested in LAMP for detecting small amounts of DNA (loop-mediated isothermal amplification and yes, I know the initials don't match).
I am trying to figure out exactly how clean/(sterile?) everything needs to be.
On one hand:
-papers repeatedly stress that the assay is very sensitive and to avoid contamination
-these very nice guidelines (https://www.rtlamp.org/get-started/rt-lamp-open-access/) include the requirement for a PCR hood. I think this website is otherwise reasonable and thorough as it mentioned that HNB dye should be in its trisodium form (which most papers do not mention)
On the other hand:
-I'll be working with DNA, not RNA; less fragile
-the point of LAMP is supposed to be that it requires more basic equipment than PCR. If it still requires a clean bench, that sort of flies out of the window.
-a quick search for 'field LAMP amplification' found a publication (https://pubs.rsc.org/en/content/articlelanding/2022/ew/d2ew00433j - 'In-field LAMP assay for rapid detection of human faecal contamination in environmental water'). But I don't have enough experience in the field to tell whether this is a reasonable approach.
Answer: I don't do chemistry or bench work, so I can't speak to conditions required for LAMP in the lab. However, I know people who have developed a portable LAMP set-up that works in the field. It's still something of a prototype, with testing going on, but this paper provides you some idea. The process doesn't require very stringent cleaning. So to answer your question, yes, you can use LAMP in the field. I'd assume in the lab too, since the field set-up is portable and I've seen them use the unit in the lab.
https://www.sciencedirect.com/science/article/pii/S0956566320305820 | {
"domain": "biology.stackexchange",
"id": 12270,
"tags": "dna, bacteriology, assay-development"
} |
Elastic collision of rotating bodies | Question: How would you explain in detail elastic collision of two rotating bodies to someone with basic understanding of classical mechanics?
I'm writing simple physics engine, but now only simulating non-rotating spheres and would like to step it up a bit.
So what reading do you recommend so I could understand what exactly is happening when two spheres or boxes collide (perfectly in 2 dimensions)?
Answer: I worked on a physics engine written in C# that does just this.
Here are my notes on this topic.
Objects have both translational and rotational momentum.
When two objects collide, the overall algorithm goes like this:
1> Find the total momentum of both objects: calculate the translational and rotational momentum; the vector sum of these is the total momentum of the object.
2> Split the momentum using the usual momentum splitting equation you would ordinarily use. (As in here)
Each object now has their new momentum. The next step is to decide how much of that momentum is translational and rotational.
3> Imagine a vector A which goes from the point of collision to the center of mass of the object that was hit. The component of the incoming momentum vector which is parallel with A forms the new translational momentum vector, the rest of the vector represents rotational momentum.
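A minimal 2-D sketch of that decomposition (function and variable names are my own, not taken from the engine linked below):

```python
import math

# Split an incoming momentum vector p into the component parallel to A
# (collision point -> centre of mass), which becomes translational momentum,
# and the remainder, which becomes rotational momentum.
def split_momentum(p, a):
    a_len = math.hypot(a[0], a[1])        # a is assumed non-zero
    ahat = (a[0] / a_len, a[1] / a_len)
    dot = p[0] * ahat[0] + p[1] * ahat[1]
    translational = (dot * ahat[0], dot * ahat[1])
    rotational = (p[0] - translational[0], p[1] - translational[1])
    return translational, rotational

t, r = split_momentum((3.0, 4.0), (1.0, 0.0))
print(t)  # (3.0, 0.0) -> the component along A goes to translation
print(r)  # (0.0, 4.0) -> the perpendicular part drives rotation
```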
The extra notes I have linked to show more details on my mathematical working, and also a description of how to handle inelastic collisions.
You can find the physics engine here, and an implementation of the collision handling here | {
"domain": "physics.stackexchange",
"id": 8906,
"tags": "newtonian-mechanics, angular-momentum, conservation-laws, collision, rigid-body-dynamics"
} |
`ValueError: There must be exactly two hue levels to use `split`.', while two levels? | Question: The following plotting issue occurs in preparation for a model build:
feature = 'Forks'
hue = 'dataset'
target = 'SalePrice'
data[hue].value_counts()
data[[feature, target, hue]].head()
sns.violinplot(
data=data[[feature, target, hue]].dropna(),
x=feature,
y=target,
hue=hue,
split=True,
)
Output:
train 401125
test 12457
Name: dataset, dtype: int64
Forks SalePrice dataset
205615 NaN 9,500.00 train
92803 None or Unspecified 24,000.00 train
98346 NaN 35,000.00 train
169297 None or Unspecified 19,000.00 train
274835 None or Unspecified 14,000.00 train
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Input In [69], in <cell line: 7>()
4 data[hue].value_counts()
5 data[[feature, target, hue]].head()
----> 7 sns.violinplot(
8 data=data[[feature, target, hue]].dropna(),
9 x=feature,
10 y=target,
11 hue=hue,
12 split=True,
13 )
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\seaborn\_decorators.py:46, in _deprecate_positional_args.<locals>.inner_f(*args, **kwargs)
36 warnings.warn(
37 "Pass the following variable{} as {}keyword arg{}: {}. "
38 "From version 0.12, the only valid positional argument "
(...)
43 FutureWarning
44 )
45 kwargs.update({k: arg for k, arg in zip(sig.parameters, args)})
---> 46 return f(**kwargs)
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\seaborn\categorical.py:2400, in violinplot(x, y, hue, data, order, hue_order, bw, cut, scale, scale_hue, gridsize, width, inner, split, dodge, orient, linewidth, color, palette, saturation, ax, **kwargs)
2388 @_deprecate_positional_args
2389 def violinplot(
2390 *,
(...)
2397 ax=None, **kwargs,
2398 ):
-> 2400 plotter = _ViolinPlotter(x, y, hue, data, order, hue_order,
2401 bw, cut, scale, scale_hue, gridsize,
2402 width, inner, split, dodge, orient, linewidth,
2403 color, palette, saturation)
2405 if ax is None:
2406 ax = plt.gca()
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\seaborn\categorical.py:541, in _ViolinPlotter.__init__(self, x, y, hue, data, order, hue_order, bw, cut, scale, scale_hue, gridsize, width, inner, split, dodge, orient, linewidth, color, palette, saturation)
539 if split and self.hue_names is not None and len(self.hue_names) != 2:
540 msg = "There must be exactly two hue levels to use `split`.'"
--> 541 raise ValueError(msg)
542 self.split = split
544 if linewidth is None:
ValueError: There must be exactly two hue levels to use `split`.'
And now I'm stuck, because ??sns.categorical._ViolinPlotter does not clarify for me where self.hue_names comes from and why len(self.hue_names) would not just be 2. Does anybody have any insights?
Answer: A violin plot normally draws symmetric violins for each unique value of hue.
If hue takes on exactly two values, the two violins for can be combined into one asymmetric "split" violin by passing split=True.
If the hue column has fewer or more unique values, it makes no sense to plot a split plot.
In your case, although the full dataset column contains two levels, the .dropna() call removes every 'test' row (their SalePrice is missing), so the hue column that actually reaches the plot has just the one unique value 'train', and a split violin does not make sense.
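A small pandas illustration of how this can happen even when the full column has two levels (the data here is made up to mirror the question's setup, where 'test' rows have no SalePrice):

```python
import pandas as pd

data = pd.DataFrame({
    "SalePrice": [9500.0, 24000.0, None, None],
    "dataset": ["train", "train", "test", "test"],
})
print(data["dataset"].nunique())           # 2 -> two hue levels before dropna
print(data.dropna()["dataset"].nunique())  # 1 -> only 'train' rows survive
```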
Refer to the documentation and look at the two plots below. The first one draws two violins for each value of x, one for alive passengers and the other for dead passengers, while the second one draws one violin with the left and right sides being asymmetric and corresponding to dead and alive passengers. | {
"domain": "datascience.stackexchange",
"id": 11871,
"tags": "python"
} |
What is the advantage of using Dryad instead of Spark? | Question: I found Apache Spark very powerful for big-data processing, but I want to know about the benefits of Dryad (Microsoft). Does this framework have any advantages over Spark?
Why should we use Dryad instead of Spark?
Answer: Dryad is an academic project, whereas Spark is widely deployed in production, and now has a company behind it for support. Just focus on Spark. | {
"domain": "datascience.stackexchange",
"id": 767,
"tags": "bigdata"
} |
Assign a unique number to duplicate items in a list | Question: I made a program, but I'm not happy with the quantity of code. The end result is good, but I believe it can be made much simpler, only I don't know how.
The functionality: if an item appears more than once in the list, then assign all equal items a unique set number. Below I made a unit test. I'm not happy with the class CreatSet. Can somebody advise me on how this can be implemented better?
import unittest
class Curtain(object):
def __init__(self, type, fabric, number):
self.type = type
self.fabric = fabric
self.number = number
self.set_number = None
def __str__(self):
return '%s %s %s %s' % (self.number, self.type, self.fabric, self.set_number)
def __eq__(self, other):
return self.type == other.type and self.fabric == other.fabric
class CreatSet(object):
def make_unique(self, original_list):
checked = []
for e in original_list:
# If curtain: type and fabric is equal
if e not in checked:
checked.append(e)
return checked
def create_set(self, curtains):
# Unique items in list
unique_list = self.make_unique(curtains)
result = []
for x in unique_list:
# Create set list
set_range = []
for y in curtains:
if y == x:
set_range.append(y)
# Add set range into list
result.append(set_range)
# Create set number
set_result = []
set_number = 0
for x in result:
if len(x) == 1:
set_result.append(x[0])
else:
set_number += 1
for y in x:
y.set_number = set_number
set_result.append(y)
# Return list ordered by number
return sorted(set_result, key=lambda curtain: curtain.number)
class TestCreateSet(unittest.TestCase):
def setUp(self):
self.curtains = []
self.curtains.append(Curtain('pleatcurtain', 'pattern', 0))
self.curtains.append(Curtain('pleatcurtain', 'plain', 1))
self.curtains.append(Curtain('pleatcurtain', 'pattern', 2))
self.curtains.append(Curtain('foldcurtain', 'pattern', 3))
self.curtains.append(Curtain('pleatcurtain', 'plain', 4))
self.curtains.append(Curtain('foldcurtain', 'plain', 5))
self.curtains.append(Curtain('pleatcurtain', 'pattern', 6))
self.curtains.append(Curtain('foldcurtain', 'pattern', 7))
def test_auto_set(self):
creat_set = CreatSet()
result = creat_set.create_set(self.curtains)
# Creating set
self.assertEqual(result[0].set_number, 1) # pleatcurtain, pattern
self.assertEqual(result[1].set_number, 2) # pleatcurtain, plain
self.assertEqual(result[2].set_number, 1) # pleatcurtain, pattern
self.assertEqual(result[3].set_number, 3) # foldcurtain, pattern
self.assertEqual(result[4].set_number, 2) # pleatcurtain, plain
self.assertEqual(result[5].set_number, None) # foldcurtain, plain
self.assertEqual(result[6].set_number, 1) # pleatcurtain, pattern
self.assertEqual(result[7].set_number, 3) # foldcurtain, pattern
if __name__ == '__main__':
unittest.main()
Answer: As you've already implemented Curtain.__eq__, if you also implement Curtain.__hash__ you can use it as a dictionary (or collections.Counter...) key, or a set member:
def __hash__(self):
return hash(self.type) ^ hash(self.fabric)
Now make_unique is trivial:
def make_unique(self, original_list):
return set(original_list)
(Note: if you require order to be retained, this will need additional work.)
This also allows you to easily determine how many of each distinct Curtain you have:
>>> from collections import Counter
>>> Counter(curtains)
Counter({<__main__.Curtain object at 0x02A15190>: 3,
<__main__.Curtain object at 0x031A0CF0>: 2,
<__main__.Curtain object at 0x031A0ED0>: 2,
<__main__.Curtain object at 0x0329EA30>: 1})
(Note: implementing Curtain.__repr__ would make this more readable!)
As pointed out in the comments, the __hash__ implementation I suggest has an issue - if either attribute type or fabric is changed, the hash will be different. You could protect these attributes by making them "read-only" using properties:
class Curtain(object):
def __init__(self, type, ...):
self._type = type # note leading underscore on attribute
@property # defining a getter but no setter
def type(self):
return self._type
Alternatively, you can implement Curtain.__lt__ etc., then sort the list and use itertools.groupby to get your groups of "equal" Curtains.
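A sketch of that sort-and-group approach, using a key function on plain tuples instead of implementing Curtain.__lt__ (the data here is illustrative; for Curtain objects the key would be `lambda c: (c.type, c.fabric)`):

```python
from itertools import groupby

curtains = [("pleat", "pattern"), ("pleat", "plain"), ("pleat", "pattern")]
key = lambda c: (c[0], c[1])

# Sorting brings equal curtains together; groupby then yields one group each.
groups = {k: list(g) for k, g in groupby(sorted(curtains, key=key), key=key)}
print(len(groups))                        # 2 -> two distinct kinds of curtain
print(len(groups[("pleat", "pattern")]))  # 2 -> that kind occurs twice
```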
Either way, I would not implement CreatSet as a class, there's no need (as should be clear from the fact that there is no __init__ and no class or instance attributes). Just have one class, and two functions:
class Curtain(object):
...
def create_set(curtains):
...
def make_unique(curtains):
...
Your code is generally compliant with the style guide (well done!) but you could do with some explanatory docstrings. | {
"domain": "codereview.stackexchange",
"id": 10555,
"tags": "python"
} |
Why does the colour of gold sol change with particle size? | Question: Regarding the colour of gold sol, my book says
Finest gold sol is red in colour; as the size of particles increases, it appears purple, then blue and finally golden.
This is the image I found on Wikipedia which portrays the same phenomenon.
The same page also discusses the effect of size etc. on the colour, but I don't understand exactly what's happening.
Why shouldn't the colour be the same for all sizes? Isn't it characteristic of the molecules which make the sol as the light absorbed and reflected depends on them?
Does this mean that when we break down any other substance into very small particles, its sol will change in colour?
Answer: The color of conductive nanoparticles depends on plasmonics. In brief, quantum behavior predominates in the few electrons of a nanoparticle, rather than as a continuous conduction band. Surface plasmons strongly influence the color and polarization of light in sols and colloids. For example, for hundreds of years, stained glass has been made with gold or selenium to get pinks and reds. | {
"domain": "chemistry.stackexchange",
"id": 11562,
"tags": "physical-chemistry, experimental-chemistry, color, colloids"
} |
3 mass spring collision | Question:
A small ball with mass $M$ moves along a horizontal smooth table with velocity $v_0$. It hits a system of two balls with masses $m$ and $2m$ with a spring attached between them, as shown in the picture. What is the maximum ratio $\alpha = m/M$ such that the balls will collide one more time? Assume that collisions between the balls are perfectly elastic.
I'm not sure where to get started, but I don't want a full solution just yet. What I'm looking for is a pointer on where to start this problem, e.g. momentum conservation, energy, or something else.
Answer: I haven't tried it but this would be my first attempt.
For starters, forget that $m$ is attached to $2m$ and focus on the collision between $M$ and $m$. You should be able to compute the velocities of $M$ and $m$ instantaneously after the collision using conservation of momentum and energy. Once you have these, you can now focus on the motion of the spring system right after the collision. The key is to separate the motion of $m$ and $2m$ into the motion of their center of mass and their relative motion. The motion of the center of mass is very simple: it moves at constant velocity. You should be able to compute this velocity from the velocity of $m$ right after the impact. The relative motion will be that of a single body attached to the spring with mass equal to the reduced mass $(m\times 2m)/(m+2m)=2m/3$. After you have solved for its motion you can return to the original problem and reinterpret your results in terms of the motion of $m$. You can then check whether $M$ and $m$ touch each other again. | {
"domain": "physics.stackexchange",
"id": 75168,
"tags": "homework-and-exercises, newtonian-mechanics, collision, spring"
} |
What happens when we connect a metal wire between the 2 poles of a battery? | Question: As I remembered, at the 2 poles of a battery, positive or negative electric charges are gathered. So there'll be electric field existing inside the battery. This filed is neutralized by the chemical power of the battery so the electric charges will stay at the poles.
Since there are electric charges at both poles, there must also be electric fields outside the battery. What happens when we connect a metal wire between the 2 poles of a battery? I vaguely remembered that the wire has the ability to constrain and reshape the electric field and keep it within the wire, maybe like an electric field tube. But is that true?
Answer: Yes Sam, there definitely is electric field reshaping in the wire. Strangely, it is hardly talked about in any physics texts, but there are surface charge accumulations along the wire which maintain the electric field in the direction of the wire. (Note: it is a surface charge distribution since any extra charge on a conductor will reside on the surface.) It is the change in, or gradient of, the surface charge distribution on the wire that creates, and determines the direction of, the electric field within a wire or resistor.
For instance, the surface charge density on the wire near the negative terminal of the battery will be more negative than the surface charge density on the wire near the positive terminal. The surface charge density, as you go around the circuit, will change only slightly along a good conducting wire (Hence the gradient is small, and there is only a small electric field). Corners or bends in the wire will also cause surface charge accumulations that make the electrons flow around in the direction of the wire instead of flowing into a dead end. Resistors inserted into the circuit will have a more negative surface charge density on one side of the resistor as compared to the other side of the resistor. This larger gradient in surface charge distribution near the resistor causes the relatively larger electric field in the resistor (as compared to the wire). The direction of the gradients for all the aforementioned surface charge densities determine the direction of the electric fields.
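To put rough numbers on this, here is a small Python sketch (all component values are invented for illustration) of a series circuit: the same current flows through the wires and the resistor, and since the average field along a segment is E = (voltage drop)/(length), the field inside the resistor dwarfs the field inside the good wire:

```python
# Toy series circuit: battery -> wire -> resistor -> wire.
# All numbers below are assumed, illustrative values.
segments = [                     # (name, length in m, resistance in ohm)
    ("wire_in",  1.0,    0.01),
    ("resistor", 0.01, 100.0),
    ("wire_out", 1.0,    0.01),
]
emf = 9.0                                          # battery voltage (assumed)
current = emf / sum(r for _, _, r in segments)     # same current in every series segment

# Average electric field in each segment: E = (I * R) / length
fields = {name: current * r / length for name, length, r in segments}

for name, e in fields.items():
    print(f"{name:9s} E ~ {e:.3g} V/m")
```

With these made-up values the field in the resistor comes out roughly six orders of magnitude larger than in the wire, matching the statement that the surface-charge gradient, and hence the field, is concentrated across the resistor.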
This question is very fundamental, and is often misinterpreted or disregarded by people. We are all indoctrinated to just assume that a battery creates an electric field in the wire. However, when someone asks "how does the field get into the wire and how does the field know which way to go?" they are rarely given a straight answer.
A follow up question might be, "If nonzero surface charge accumulations are responsible for the size and direction of the electric field in a wire, why doesn't a normal circuit with a resistor exert an electric force on a nearby pith ball from all the built up charge in the circuit?" The answer is that it does exert a force, but the surface charge and force are so small for normal voltages and operating conditions that you don't notice it. If you hook up a 100,000V source to a resistor you would be able to measure the surface charge accumulation and the force it could exert.
Here's one more way to think about all this (excuse the length of this post, but there is so much confusion on this question it deserves appropriate detail). We all know there is an electric field in a wire connected to a battery. But the wire could be as long as desired, and so as far away from the battery terminals as desired. The charge on the battery terminals can't be directly and solely responsible for the size and direction of the electric field in the part of the wire miles away since the field would have died off and become too small there. (Yes, an infinite plane of charge, or other suitably exotic configurations, can create a field that does not decrease with distance, but we are not talking about anything like that.) If the charge near the terminals does not directly and solely determine the size and direction of the electric field in the part of the wire miles away, some other charge must be creating the field there (Yes, you can create an electric field with a changing magnetic field instead of a charge, but we can assume we have a steady current and non-varying magnetic field). The physical mechanism that creates the electric field in the part of the wire miles away is a small gradient of the nonzero surface charge distribution on the wire. And the direction of the gradient of that charge distribution is what determines the direction of the electric field there.
For a rare and absolutely beautiful description of how and why surface charge creates and shapes the electric field in a wire refer to the textbook:
"Matter and Interactions: Volume 2 Electric and Magnetic Interactions"
by Chabay and Sherwood,
Chapter 18 "A Microscopic View of Electric Circuits"
pg 631-640. | {
"domain": "physics.stackexchange",
"id": 644,
"tags": "electricity, electric-circuits, electric-fields, batteries, short-circuits"
} |
Is my LoginView class valid? | Question: I know the Model, Controller and View's purpose, but I have never really found a good concrete class example of a View class.
Usually I see people having some small render() method being called from the Controller, or their Controller holds all the View logic. So I tried my best to create a View class that seemed proper and somewhat clean to me in regards to the MVC pattern.
I've found a useful image that shows how the data flow is supposed to be and how the View sits in it.
Is this a valid View class?
Anything I could improve on / suggestions?
Readability, usability and / or efficiency.
LoginView.php
namespace View;

use View\View;

class LoginView extends View
{
    private $isUserLoggedIn;

    private $isFormTokenValid;

    public function index()
    {
        // Logged in Users have no reason to view this page.
        if ($this->isUserLoggedIn) {
            $this->httpResponse->redirect('/');
        }

        // Assume csrf attack, refresh self for a fresh View.
        if ($this->isFormTokenValid === false) {
            $this->httpResponse->redirect();
        }

        // Header title.
        $this->templateData['title'] = 'Log In - Site Name';

        // Header stylesheets.
        $this->templateData['styleSheets'] = [
            '/stylesheets/login.css'
        ];

        // Header javascripts.
        $this->templateData['javaScripts'] = [
            '//ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js',
            '/javascripts/login-form-handler.min.js'
        ];

        // Form input error messages.
        $formErrorMessages = [
            'email' => [
                'NotEmpty' => 'Enter your email address.',
                'Email' => 'You did not enter a valid email address.'
            ],
            'password' => [
                'NotEmpty' => 'Enter your password.'
            ]
        ];

        // Replace form error codes with corresponding error messages.
        if (isset($this->templateData['form']['errors'])) {
            $this->templateData['form']['errors'] = $this->replaceArrayValuesByKey($this->templateData['form']['errors'], $formErrorMessages);
        }

        // If not set by User input, check 'remember' checkbox by default.
        if (!isset($this->templateData['form']['values']['remember'])) {
            $this->templateData['form']['values']['remember'] = true;
        }

        $this->renderTemplate();
    }

    public function setIsUserLoggedIn($boolean)
    {
        $this->isUserLoggedIn = $boolean;
    }

    public function setIsFormTokenValid($boolean)
    {
        $this->isFormTokenValid = $boolean;
    }

    public function setFormValueOf($name, $value)
    {
        $this->templateData['form']['values'][$name] = $value;
    }

    public function setFormErrorCodeOf($name, $code)
    {
        $this->templateData['form']['errors'][$name] = $code;
    }

    public function setHasLoginFailed($boolean)
    {
        $this->templateData['hasLoginFailed'] = $boolean;
    }
}
View.php
namespace View;

use Http\HttpResponse;

abstract class View
{
    protected $httpResponse;

    protected $templatePaths;

    protected $templateData;

    public function __construct(
        HttpResponse $httpResponse,
        $headerTemplatePath = null,
        $bodyTemplatePath = null,
        $footerTemplatePath = null
    ) {
        $this->httpResponse = $httpResponse;

        if ($headerTemplatePath) {
            $this->templatePaths['header'] = $headerTemplatePath;
        }
        if ($bodyTemplatePath) {
            $this->templatePaths['body'] = $bodyTemplatePath;
        }
        if ($footerTemplatePath) {
            $this->templatePaths['footer'] = $footerTemplatePath;
        }
    }

    public function replaceArrayValuesByKey(array $target, array $source)
    {
        foreach ($target as $k => $v) {
            for ($i = 0, $c = count($v); $i < $c; ++$i) {
                if (isset($source[$k][$v[$i]])) {
                    $target[$k][$i] = $source[$k][$v[$i]];
                }
            }
        }

        return $target;
    }

    public function renderTemplate()
    {
        extract($this->templateData);

        if (isset($this->templatePaths['header'])) {
            require_once $this->templatePaths['header'];
        }
        if (isset($this->templatePaths['body'])) {
            ob_start([$this, 'autoIndent']);
            require_once $this->templatePaths['body'];
            ob_end_flush();
        }
        if (isset($this->templatePaths['footer'])) {
            require_once $this->templatePaths['footer'];
        }
    }

    private function autoIndent($buffer)
    {
        $content = '';
        $lines = explode(PHP_EOL, $buffer);

        // TODO: calculate indentation levels of header template
        foreach ($lines as $line) {
            $content .= str_repeat(' ', 12) . $line . PHP_EOL;
        }

        return $content;
    }
}
You can view the LoginController class that controls this View here, if interested.
Answer: As @Pinoniq mentioned in the comments, a view shouldn't be responsible for redirect users via business rules. The responsibility of a view should be solely to convert a PHP template into HTML and to capture that output into a variable to be used as you see fit.
Based on that criteria alone, the redirect violates that rule. That sort of stuff should all be happening at the controller level. As soon as you require data to be passed into a view, your controller will need to check whether the user is logged in or not before it can fetch their corresponding user data so it makes sense to have it there from that perspective alone.
Not to mention from a testing perspective, it's much easier to test that re-direct at the controller level than it is to test it in the view. This also removes the dependency on HttpResponse which means that you don't need to mock a response just to test your view creation logic now which is also a big win. Assuming you're writing tests for all of this logic ;)
Finally, I'd highly recommend taking a look at Twig or some other PHP-compatible template engine. It seems to me like you're getting a bit overzealous in defining a lot of your page header and body blocks programmatically, whereas you'd be much better off taking this same approach, but with template inheritance instead, in a template engine.
P.S. I don't think initializing an array with = [ ]; is valid PHP for your $formErrorMsg. | {
"domain": "codereview.stackexchange",
"id": 11141,
"tags": "php, optimization, object-oriented, classes, mvc"
} |
What is this strange mantis-like coral reef dwelling creature? | Question: On the first episode of BBC's Blue Planet II they visited a coral reef and showed some odd reef-dwelling creatures as part of their establishing shots, but did not name the creatures.
In particular, a creature that resembles a mantis:
I tried a Google reverse image search and it very 'helpfully' identified it as "Documentary film". So close, yet so far.
Answer: The specimen is indeed a mantis shrimp (to be specific, a peacock mantis shrimp).
These predators have highly sophisticated vision and, depending on the species, either spearlike limbs or clubs that move so fast the water can't keep up, causing dangerous cavitational effects that can even break glass. | {
"domain": "biology.stackexchange",
"id": 7959,
"tags": "species-identification, marine-biology"
} |
the rain,umbrella and the sun rays | Question: when rain falls vertically, a man running on the road keeps his umbrella tilted but a man standing on the road keeps his umbrella vertical to protect himself from rain.
When I studied the topic further, I found that both men (whether running or standing) keep their umbrella vertical to avoid vertical sun rays. Why is it so?
I'm stuck here and can't relate these two situations in any way.
Thanks in advance for any help.
Answer: I like this question! Here will be my attempt to explain it.
-First, let's imagine rain fell much more slowly. Say it took a few seconds for rain to get from the top of the persons head down to the ground. Now, if someone was standing still in this rain they would hold the umbrella straight up so that no rain hit them. But imagine they start running very fast through the rain. Say they're running faster than the raindrops are falling. If they had no umbrella then most of the raindrops would hit the front of their body. You can imagine them running through rain drops which are basically suspended in air so many raindrops hit the front of their body. Since the rain coming from above is falling slowly hardly any of it will hit the top of their head (certainly compared to the amount that would hit the front of their body.) So this person running very fast through slow moving rain would want to hold their umbrella almost entirely horizontal to stop them from "running into" any rain drops.
-Now let's speed the rain up so that it is moving VERY fast, much faster than the person can run. Now if the person is running they won't actually get hit by that many raindrops from the front. Imagine a raindrop a step or two in front of the person. In the other scenario with slow rain it would be easy for the person to "run into" that raindrop and get wet. However, in this scenario that raindrop is going to fall and hit the ground before the person can run into it because this rain moves fast. So in this case, with rain falling faster than the person can run, whether the person is standing still or running they will want to hold the umbrella vertically.
Now, sunlight is light and not rain. This means that sunlight comes down from the sky at the speed of light. This is much much faster than a person can run so there is no worry that by running too fast the person is going to "run into the light" and get sun on their face for example. This is like the second scenario. This means it makes sense for the person in sunlight to hold their umbrella vertically whether they are standing or running.
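A quick numeric illustration of these regimes (the speeds are invented, and everything is strictly non-relativistic, so treat the light-speed line as a limit rather than real optics): in the runner's frame the rain's apparent direction is tilted from vertical by an angle arctan(v_run / v_fall), which is how far the umbrella should tilt.

```python
import math

def tilt_deg(v_run, v_fall):
    """Angle from vertical (degrees) of the apparent rain velocity in the runner's frame."""
    return math.degrees(math.atan2(v_run, v_fall))

print(tilt_deg(5.0, 2.0))    # runner faster than slow rain: umbrella held far from vertical
print(tilt_deg(5.0, 50.0))   # rain much faster than runner: umbrella nearly vertical
print(tilt_deg(5.0, 3.0e8))  # "rain" at light speed: tilt is utterly negligible
```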
I'll just point out that there's a more mathematical answer where you can consider the problem in the reference frame of the running person and see that the velocity vector of the rain changes directions as the person runs faster. This could also be analyzed using special relativity to see if the angle of the sun rays changes as the person runs. In my answer I'm neglecting all relativistic effects ($v\ll c$). I'll leave the relativistic/vector answer to someone else. | {
"domain": "physics.stackexchange",
"id": 34337,
"tags": "kinematics"
} |
Can I pour a solution into another vessel with no air bubbles? | Question: I have been trying to create a method to create a homogeneous agar solution.
What are appropriate techniques to ensure homogeneous solidification of an agar-based aqueous solution?
The new solution consists of distilled boiling water, $\pu{20 g/L}$ agar, $\pu{10 g/L}$ $\ce{NaCl}$ and $\pu{1 g/L}$ $\ce{CuSO4}$.
An issue with the agar is that it only dissolves between 90–100 °C, which makes it difficult to dissolve using a hot plate or other methods, as it also needs to be homogeneous.
I've made progress speaking to both the microbiology and chemistry departments of my university. The current idea to make the solution homogeneous is to heat it in an autoclave set at 121 °C. This will also sterilize the solution, making the phantom last longer. Unfortunately we have no 3 L beakers to do it all in one go. They have plenty of 2 L and some 5 L beakers, but nothing that is 3 L.
A new idea is to heat a round bottomed flask in the autoclave and then transfer the semi-cooled solution to a perspex container that will have a base on it. (flask and container shown in the photo below)
The problem is how do we ensure the solution is still relatively homogeneous and won't be full of air bubbles? Are there any simple ways to solve this?
Answer: There are two aspects to consider for the intended "air bubble free" (or at least, "air bubble less") transfer from your container where the liquid is prepared into the container where the liquid is supposed to solidify:
Your first container shown is a round-bottom flask. Initially, an increase in the inclination of the flask will increase the flow of liquid out of it. If the inclination is too high, however, the air that refills the inner volume of the container, initially occupied by the liquid, will no longer enter the flask continuously. Instead, there will be an alternation of exiting liquid and entering air, which may in turn introduce air bubbles into the outflowing liquid. Depending on the viscosity of the liquid, the air bubbles may remain in it for quite some time. From the perspective of lowering the potential air intake by the liquid during the transfer, a wide-mouth bottle, or even a beaker, would be better suited than a round-bottom flask.
The distance between the first and the receiving container. For your purpose, it is not good to pour the liquid directly into the receiver. To lower splash and splatter in the transfer by decantation, an easy aid is a glass rod, as shown here:
(picture source)
In your instance, it may be more convenient to use a slab of glass / a pane of glass that is just beneath the mouth of the first container, and bridges all the distance to the receiving container; as schematically drawn below:
Here, the slab of glass is intentionally partially immersed in the liquid already present in the receiving container. If done by a coordinated team of two, one holding and gently moving the slab of glass, the second gently decanting the liquid onto this "chute", you might cast larger volumes into trough-shaped containers / fish tanks, too. Once solidified, you would cut out the block / shape you need. Overall, it reminds me a little of ballistic gelatin...
"domain": "chemistry.stackexchange",
"id": 8306,
"tags": "experimental-chemistry, solutions, phase"
} |
Software to Draw Laboratory Apparatus | Question: So I am preparing a blog post about chemistry for lay people and I have decided to start talking about the basics. It turns out that it would be super useful to create chemical apparatus on demand to illustrate the post. Such as this one:
I particularly like the clean vector image look. This is what I am looking for. A software that allows me to assemble these vector-graphics style 2D chemical apparatus as well as let me control the color (and texture, if possible) of the contents inside the flasks.
It is meaningful that the software contains correct laboratory glassware, preferably the classical stuff that is instantly recognizable like the bunsen burners and tripods rather than heat mantles, but that is me being picky. So far, anything goes.
Any ideas?
Answer: The short answer is that the programs typically used for this task vary in their coverage of lab ware for preparing such illustrations; their focus is on drawing molecules, exporting molecules in machine-readable (chemical) formats understood e.g. by databases, and performing some computations (e.g., averaged molecular and isotopic weight). Besides the popularity of the programs, if you want to share intermediate files and collaborate with colleagues, an additional point to consider is the graphic formats these programs offer for file I/O. Thus, a native export to the .svg format understood e.g. by Inkscape may be an advantage.
Without aiming to provide an exhaustive description, the following examples may illustrate this with templates and by building a distillation.
ACD ChemSketch contains quite a number of lab utensils in the template library. The free (as in free beer) version disables some functions, which however are not relevant to drawing the beakers, flasks, etc. The representation of them did not change for decades; the Windows program offers a native export as .png and .pdf. Recent releases improved interaction with wine to work well enough in Linux, too (this includes the current version 2021.1.3). It may take some tinkering to adjust the individual pieces' orientation to build a setup.
Overview of the chemical lab utensils:
A distillation:
ChemDraw
offers templates which may be bitonal, or in color (see, e.g., here). Primarily written for Windows and Mac, with only varying success when deployed in Linux, the program is widely used in academia and industry (definitely not free as in free beer; it is often accessed within a campus license). The templates include parts aligned to fit better into the round-bottom flasks. Among the export formats are .png and .svg. The latter allows you e.g. to adjust the fill and stroke of the paths, or to remove the ace label (which actually is a trademark of Ace Glass, NJ).
With many chemistry-relevant functions removed, the ChemDraw JS page allows to get familiar with them (stamp button opens a pull-down menu), to save the drawings in the native format (Structure -> Get .cdxml), as .png (-> Get image) or vector file (-> get .svg).
Some of the templates:
(image credit to a Russian blog post)
A distillation (color adjustments with Inkscape):
ChemDoodle is the youngest of these three sketchers, with the largest number of lab utensils in its template library. Capable of interacting with many chemistry-relevant file formats (including the public .cdxml of ChemDraw), its graphics export includes many options for round-trip edits, and exports e.g. as either .png or (optionally layered) vector formats (.svg, .ps, .pdf), anticipating their use in web pages and services like Twitter. The purchase of one of their licenses lets the user choose between a program for Windows, Mac, or Linux; this includes the option to shuttle the license key among the operating systems.
An overview of the chemistry templates:
With light retouches in inkscape, an illustration of a short-path distillation:
Contrasting to the programs above, chemix's focus is about drawing a lab setup exported either as bitmap or vector file. (Maybe drawing organic structures will be added.) By number, the inventory of lab utensils (still) is smaller than e.g., the one offered in ChemDoodle, though it contains material absent in the other collections (e.g., a waterless condenser, or the GHS symbols).
In addition to standard options to move and scale the objects, there are interesting details in handling the objects like (incomplete list):
joining the elements is guided by snap-points like magnets
both color and height of liquids in the containers may be adjusted within the interface, including boiling-like bubbles
a tilt of the container automatically affects the meniscus of the liquid
changing the height of the lab boy affects the scissoring
The green arc sign in the illustration below mark utensils you access when entering a paid subscription. Based on their twitter feed, there is continuing development and addition of utensils for this application running remotely in your web browser.
An illustration:
A comparison with the utensils in a real lab may reveal differences between the sets offered (e.g., Chemix is missing the pressure release for a distillation that is present in ChemSketch's and ChemDraw's sets, which matters for how you safely mount a distillation); thus, design with care for detail.
"domain": "chemistry.stackexchange",
"id": 16256,
"tags": "software"
} |
sensor data windowing and normalization | Question: I am getting up to speed with working with sensor accelerometer data. I am looking to conduct a FFT on this signal. I found that the signal being non periodic is causing issues so I have tried to look at using windowing. From what I have read, Hanning window is the way forward. The issue lies here for me.
Attached is an image of the original signal and the signal post-windowing. I applied the Hanning window but it seems to reduce the first part of the signal a lot. The initial spike in the negative axis is the impact, with the rest being post-impact acceleration. After applying the windowing technique, my ends have smoothed but the amplitude of the impact part of the signal has been reduced a lot (which makes sense given the shape of the Hanning window).
My question is
Am I using the correct type of window for my signal?
Should I be using the window differently than I have in the image?
would normalizing my signal help in any way and if so what type of normalization technique I should look at?
Any help appreciated
Answer: Window functions are typically used to reduce spectral leakage and scalloping loss in the frequency domain. Usually that means giving up a bit of amplitude and/or frequency resolution in the process. The different window functions exist so you can tune it depending on which of these properties is most important to your application. You are applying the window correctly and your observation about the reduced signal amplitude near the ends of the waveform is exactly what the window function is designed to do. Specifically, it is intended to reduce the discontinuity between the beginning and the end of the signal. Conceptually, the FFT assumes that the time domain waveform repeats forever, so if you can imagine pasting a bunch of copies of you signal together, then without the window function you'd have a big jump where they are spliced together, which would show up as harmonics in your FFT output.
Selecting a window function is more about what you're trying to measure in the frequency domain and less about how the time domain waveform looks after the window has been applied. For example, if you're looking to make an accurate amplitude measurement of a single sinusoidal tone, then a flat top window would be a good choice. However, it as very poor frequency resolution, so it would not be good at separating multiple tones. The Hanning window is a good middle of the road window function, so it is probably a good place to start. To find a more suitable window function, you need to decide what types of measurements you're trying to make and what types of input signals you expect.
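To make the amplitude bookkeeping concrete, here is a minimal pure-Python sketch (no NumPy; the 50 Hz unit-amplitude tone and 1000 Hz sample rate are assumed test values). It applies a Hann window and recovers the tone amplitude by dividing out the window's average value:

```python
import math

N, fs, f0 = 1000, 1000.0, 50.0                   # assumed sample count, rate, tone
x = [math.sin(2 * math.pi * f0 * n / fs) for n in range(N)]
w = [0.5 - 0.5 * math.cos(2 * math.pi * n / (N - 1)) for n in range(N)]  # Hann window
xw = [xi * wi for xi, wi in zip(x, w)]

# Direct DFT at the tone's bin (k = f0 * N / fs = 50)
k = round(f0 * N / fs)
re = sum(xw[n] * math.cos(2 * math.pi * k * n / N) for n in range(N))
im = -sum(xw[n] * math.sin(2 * math.pi * k * n / N) for n in range(N))
mag = math.hypot(re, im)

raw_amp = 2 * mag / N            # ~0.5: the window has attenuated the tone
coherent_gain = sum(w) / N       # average of the Hann window, ~0.5
amp = raw_amp / coherent_gain    # ~1.0: correct amplitude after normalization
```

The same division by the window's mean applies whichever bin you read off an FFT, which is the normalization point discussed next.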
Because you're essentially multiplying every point of your signal by something less than 1, the window function will apply some amount of gain (attenuation) equal to the average of the window function itself. Normalization just means compensating for this so that your amplitude measurement is correct. Some DSP libraries apply this compensation for you. If you're not sure, I think you can check by just averaging the window function itself. If it equals 1, then it's normalized. If not, then you need to divide your measured amplitude by this value to find the correct amplitude. | {
"domain": "dsp.stackexchange",
"id": 10627,
"tags": "fft, window-functions, sensor, normalization"
} |
Principle of Least Action Question | Question: Let's say we have a particle with no forces on it. The path that this classical particle takes is the one that minimizes the integral
$$\frac{1}{2}m\int_{t_i}^{t_f}v^2dt.$$
So if we graph this for the actual path a particle takes it is a straight, horizontal line on the $(t,v^2)$ plane. But couldn't we lessen the integral if we first slow down and then speed up near the end to create a sort of parabolic line that has less area under the $(t,v^2)$ plane? So why doesn't the particle take this path? What am I missing in my thinking?
Answer: You have to minimize the integral subject the the constraint that the initial and final positions $x(t_i)$ and $x(t_f)$ are held fixed. In particular, $\Delta x = \int_{t_i}^{t_f} v(t)\, dt$ is held fixed. If the particle slowed down than sped up as you suggested, the action would be less, but it wouldn't have a high enough average speed to cover the full $\Delta x$ in time. You can play around with a few specific trajectories and check for yourself. | {
"domain": "physics.stackexchange",
"id": 32451,
"tags": "lagrangian-formalism, variational-principle, boundary-conditions, action"
} |
Is Gravity Energy? | Question: This might be stupid, but is gravity a form of energy? And, if so, couldn't we use it for power?
Answer: Gravitation is something that gives rise to a force. Like most forces, it can be put to work, just like in dams.
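For the dam example, the "work" comes from gravitational potential energy being released, E = mgh. A back-of-envelope sketch (the head and flow numbers are invented, not taken from any real plant):

```python
rho, g = 1000.0, 9.81        # water density (kg/m^3), gravitational acceleration (m/s^2)
head = 100.0                 # assumed drop height in metres
flow = 1.0                   # assumed flow in m^3 per second

power_watts = (rho * flow) * g * head   # P = (dm/dt) * g * h
print(power_watts / 1e6, "MW")          # roughly 1 MW before turbine losses
```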
But gravitation is not energy: it's an interaction between physical bodies with mass.
NB: presence of mass warps space which affects the way massless things like light propagate. That does not mean that light is affected by gravitation directly. Only indirectly. | {
"domain": "physics.stackexchange",
"id": 5548,
"tags": "gravity, energy"
} |
Liouville Theorem analogue in generalized velocities? | Question: The Liouville Theorem concerns dynamics in phase space: does an analogue exist in configuration space, and, if not, could you give a motivation / proof why?
Answer: Here is a direct counterexample to complete Qmechanic's answer.
Take $L= q \dot{q}^2/2$ for $q>0$. As a consequence
$$p = q\dot{q}$$
and so
$$dp = q d\dot{q} + \dot{q} dq\:.$$
Therefore, the canonical volume in terms of Lagrangian variables is
$$dp\wedge dq = q d\dot{q} \wedge dq\:.$$
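As a numeric sanity check of this relation (a sketch: the Euler-Lagrange equation here reduces to $\ddot{q} = -\dot{q}^2/(2q)$, whose solutions satisfy that $q^{3/2}$ is linear in $t$), one can transport a tiny square of initial conditions along the exact flow and compare its area in the $(q,p)$ plane with its area in the $(q,\dot{q})$ plane:

```python
import math

def flow(q0, v0, t):
    # Exact solution of qdd = -qd^2/(2q) for q > 0: q(t)^{3/2} is linear in t.
    q = (q0 ** 1.5 + 1.5 * math.sqrt(q0) * v0 * t) ** (2.0 / 3.0)
    v = v0 * math.sqrt(q0 / q)          # qdot(t)
    return q, v

def area(pts):
    # Shoelace formula for the polygon area.
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2.0

eps, T = 1e-4, 1.0
square = [(1 - eps, 1 - eps), (1 + eps, 1 - eps), (1 + eps, 1 + eps), (1 - eps, 1 + eps)]
moved = [flow(q, v, T) for q, v in square]

canonical = lambda pts: area([(q, q * v) for q, v in pts])   # (q, p) with p = q * qdot
lagrangian_ratio = area(moved) / area(square)                # (q, qdot) area: NOT preserved
canonical_ratio = canonical(moved) / canonical(square)       # (q, p) area: stays ~1
```

Here canonical_ratio stays at 1 to within O(eps), while lagrangian_ratio comes out near q(0)/q(T), about 0.54 for these initial conditions, so the naive Lagrangian volume is not invariant.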
Since the left-hand side is preserved by solutions of the equation of motion and $q=q(t)$ is not constant in time along these solutions, it is not possible that
$$ d\dot{q} \wedge dq$$
is also constant along the motion of the system. So the apparently natural volume constructed out of Lagrangian variables is not constant in time along the motion of the system, differently from the canonical volume. | {
"domain": "physics.stackexchange",
"id": 45594,
"tags": "hamiltonian-formalism, coordinate-systems, phase-space, volume, complex-systems"
} |
Read integers from text file and write to CSV file | Question: Background:
I've recently started learning bash due to my new job in a VFX company. We backup all of our media to LTO tapes (one master and one clone). I was tasked with writing a script that split the tape list of master and clone to a CSV file. I feel I've done so in a crude manner as this was my first ever script and would love some feedback as to how I could improve the efficiency / syntax / code in general so I can learn from this experience.
Here is the text file: https://1drv.ms/t/s!AkWewdosAYGuhiaBbehlLBF64Qsg
I've been executing the script by calling it like this:
$ sh /scriptname.sh filename.txt
Code:
#!/bin/bash
# This script will split the presstore list of tapes into a .CSV file with two separate columns.
file="$1"
echo "Splitting tape list....."
touch tempsplit.csv #creates temporary file for use later in script
while IFS= read line
do
    lastchar=$(echo $line | tail -c 2)
    if [ "$lastchar" == : ] # Omits any lines that end with : else error
    then
        echo -ne
    elif [ "$lastchar" -ge 0 -a "$lastchar" -le 9 ] # Selects lines that end in number
    then
        breakdown=$(echo "$line" | cut -d':' -f2,6) # selects fields 2 & 6 containing tape numbers
        master=$(echo "$breakdown" | cut -d'a' -f1) # cuts first number
        clone=$(echo "$breakdown" | cut -d':' -f2) # cuts second number
        final=$(echo -e "$master,$clone" | tr -d ' ' >> tempsplit.txt) # outputs to a temp file
    fi
done < $file
touch tapelist_split.csv
awk 'NR % 2 == 0' tempsplit.txt | sort -n >> tapelist_split.csv #removes every 2nd line, sorts numerically, converts to a .CSV file
rm -rf tempsplit.txt #removes tempfile
echo "Complete"
Summary:
I'm not sure why the while read statement doesn't work if I don't assign file="$1" as in line 3 and then read from that variable in line 20 (done < $file). An explanation of this would be amazing.
I realise I shouldn't need my first if statement, as I only care about the numbers, but if I don't include it I get an error ("integer expected") when I run the script. Does anyone know why this might be?
The reason I create the file "tempsplit.txt" is because my code to remove every 2nd line and sort the file wasn't working within the while read statement, so I figured this was a clean way of doing it.
My code may not be very efficient or good which is why I'm asking for tips on how I can refine and correct it so in future I can write much cleaner scripts.
Answer: Brave attempt :-) This can be so much better ;-)
Running scripts
The script has the shebang #!/bin/bash, but you invoke it as sh script.sh.
The purpose of a shebang is to make scripts runnable as ./script.sh.
When invoked this way, the shell looks at the shebang, and runs the script with the specified executable.
sh is often symlinked to Bash, but not always.
In short, if a script doesn't require Bash, then use the shebang line #!/bin/sh and run it as ./script.sh or sh ./script.sh.
If it requires Bash, then use the shebang line #!/usr/bin/env bash and run it with ./script.sh or bash ./script.sh.
Filtering lines
Bash is not particularly well-suited to filter lines by patterns.
grep is a great tool for that.
So instead of reading line by line with Bash to filter,
look for ways to use grep.
In this example, replace the loop with:
grep '[0-9]$' "$file" | while IFS= read line; do
...
done
Empty conditional branches
I think your intention here was to do nothing when the condition is true:
if [ "$lastchar" == : ]
then
echo -ne
elif ...
If you ever need to do nothing, you can use true or : like this:
if ...; then
:
elif ...
But this is not a good example to use this trick,
because a better solution exists.
A better solution would have been to drop the if, and change the elif to if.
An even better solution would have been to call continue if the line doesn't match the required pattern, so the loop body would be flatter:
if ! [ "$lastchar" -ge 0 -a "$lastchar" -le 9 ]; then
continue
fi
breakdown=...
With my tip with grep in the previous point, you don't need a condition at all.
Extracting two numbers
This is very inefficient:
breakdown=$(echo "$line" | cut -d':' -f2,6)
master=$(echo "$breakdown" | cut -d'a' -f1)
clone=$(echo "$breakdown" | cut -d':' -f2)
final=$(echo -e "$master,$clone" | tr -d ' ' >> tempsplit.txt)
The problem is that for each line this runs many processes: echo, cut, tr, and multiple $(...) sub-shells.
I see a simpler way to achieve what you want. Looking at these sample lines:
tape with barcode: 000053 and is: offline at listed location: MCR Shelves and is: Full and is copy of tape with barcode: 000047
tape with barcode: 000044 and is: offline at listed location: MCR Shelves and is: Full and is copy of tape with barcode: 000042
We could extract the two numbers and put a comma in between like this:
Replace all non-digits at the beginning with empty string (= remove)
Replace all non-digits with a comma
Try this simple pipeline:
grep '[0-9]$' file | head | sed -e 's/^[^0-9]*//' -e 's/[^0-9][^0-9]*/,/'
This works for the head of the file, but not the entire file. On closer look, there are lines that have a third number between the target numbers, which breaks the above pattern:
tape with barcode: 000484 and is: online at listed location: i40 QUANTUM and is: Appendable and is copy of tape with barcode: 000483
After replacement this line becomes:
000484 and is: online at listed location: i40 QUANTUM and is: Appendable and is copy of tape with barcode: 000483
Observe that there is a space after the first number and before the last. So to handle such case, we could change the logic of the second step to: "replace everything between two spaces with a comma".
Let's give that a try, on lines containing "i40":
grep 'i40.*[0-9]$' file | head | sed -e 's/^[^0-9]*//' -e 's/ .* /,/'
It seems the script could be replaced with:
grep '[0-9]$' "$file" | \
sed -e 's/^[^0-9]*//' -e 's/ .* /,/' | \
awk 'NR % 2 == 0' | sort -n > tapelist_split.csv
No temporary files needed.
If you have the GNU version of sed (typically in Linux), then you can use it to delete every 2nd line instead of awk, which is slightly simpler:
grep '[0-9]$' "$file" | \
sed -e 's/^[^0-9]*//' -e 's/ .* /,/' -e '1~2d' | \
sort -n > tapelist_split.csv
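On the two sample lines quoted from the question, the grep + sed extraction stage produces exactly the desired CSV rows. A minimal sketch (the file path is hypothetical, chosen just for this demo):

```shell
# Two sample lines taken from the question's data
cat > /tmp/tapelist_sample.txt <<'EOF'
tape with barcode: 000053 and is: offline at listed location: MCR Shelves and is: Full and is copy of tape with barcode: 000047
tape with barcode: 000484 and is: online at listed location: i40 QUANTUM and is: Appendable and is copy of tape with barcode: 000483
EOF

# Keep lines ending in a digit, strip the leading non-digits, then squash
# everything from the first space to the last space into a single comma
result=$(grep '[0-9]$' /tmp/tapelist_sample.txt \
  | sed -e 's/^[^0-9]*//' -e 's/ .* /,/')
echo "$result"
# 000053,000047
# 000484,000483
```

Note that the greedy `.*` in the second sed expression is what makes the "i40" line work: it spans everything between the first and last space, including any stray digits in the middle.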
Quoting
When you use variables as command parameters,
it's important to double-quote them to protect from word splitting and globbing. So a loop should read from $file like this:
while ...; do ...; done < "$file"
Understanding command parameters
Why did you use the -rf flags here?
rm -rf tempsplit.txt
There's no good reason to use those flags in the above script.
The -r flag is to delete directories recursively. But the parameter above is a single file. The -r flag serves no purpose.
The -f flag is to force deleting a file that might be protected, or to suppress error in case the file doesn't exist. Neither is the case here. The flag serves no purpose.
Flags in a script that serve no purpose are noise, and confusing.
Understand all the flags you are using,
and make sure they have a reason to be there.
The field separator
The variable name is incorrect here:
while IFs= read line
Should have been:
while IFS= read line
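A quick way to see what clearing IFS changes (the input string here is a hypothetical demo value): with IFS= the read builtin keeps leading and trailing whitespace, while with the default IFS it trims it.

```shell
# With IFS cleared, read preserves the surrounding whitespace in the line
with_ifs=$(printf '  spaced  \n' | { IFS= read -r line; printf '[%s]' "$line"; })
# With the default IFS, read strips leading/trailing whitespace
without_ifs=$(printf '  spaced  \n' | { read -r line; printf '[%s]' "$line"; })
echo "$with_ifs"     # [  spaced  ]
echo "$without_ifs"  # [spaced]
```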
Variable names in Bash are case sensitive. | {
"domain": "codereview.stackexchange",
"id": 29914,
"tags": "beginner, bash, csv"
} |
Heisenberg's uncertainty principle: $ \Delta p $ | Question: So I was reading this paper, "Limits to Binary Logic Switch Scaling—A Gedanken Model". The following is the paper's abstract:
In this paper we consider device scaling and speed limitations on irreversible von Neumann computing that are derived from the requirement of "least energy computation." We consider computational systems whose material realizations utilize electrons and energy barriers to represent and manipulate their binary representations of state.
So, logically, a bit of physics is used in the paper. What the author does on the second page is rewrite $ x = \dfrac{\hbar}{\Delta p }$ to $ x = \dfrac{\hbar}{\sqrt{2mE_{\text{bit}}}}$. I am aware that $ p = \sqrt{2mE}$, but why can you use $\Delta p = \sqrt{2mE}$? Is this allowed or does the author make a mistake?
Answer: This is an estimation tool not uncommon in theoretical physics. Namely, one knows the value of some quantity for a given problem and therefore assumes that the scale of the problem with regards to that quantity is of the same order of magnitude as the known value. In other words, we assume that the error in our known value must not be too much greater than the value itself, otherwise we wouldn't actually know the value.
For instance, the converse of this argument is sometimes used when discussing the absolute mass of the neutrino flavors. The neutrino relative masses have been measured, so when one needs an estimation of the absolute mass of a neutrino, the best guess is that it is roughly of the same order as the mass difference. It would be strange, the argument goes, that neutrino masses should be so tightly packed compared to their actual values. Why should we have so many significant figures on such a (comparatively) large value?
It is likely that this is what the author means: for an estimation of the minimum scale for a switch, it is reasonable to assume that the scale of the momentum of the charge carriers is of the same order as the momentum itself. If the error were much larger, we wouldn't actually know the momentum. If the error were much smaller, this would cease to be an estimation.
Edit: Here's another way to put it that is more tightly focused to this question. Within the classical model of an electron gas (the Drude model), electrons behave like particles within an ideal gas. Therefore, their velocity (and by extension their momentum) distribution function is a Boltzmann Distribution. If you follow that link, you'll notice that the mean, mode, and standard deviation (the square root of the variance) of such a distribution all scale as $a$ (the scale parameter of the distribution). That means that the mean is actually proportional to the standard deviation. That is the mathematical way of saying, "The bigger your guessed value is, the bigger your error in that guess will be." | {
"domain": "physics.stackexchange",
"id": 10936,
"tags": "quantum-mechanics, heisenberg-uncertainty-principle"
} |
What is dust made of? | Question: I was cleaning my blinds today, and wanted to know what the primary components of dust are. I know that it is made of microorganisms and other particles, but I do not want to guess that it will be $\ce{N}$ or $\ce{Si}$.
I can guess that other materials outside our house might have more than one component for dust. But what about inside the house?
Answer: The dust in the house mostly contains sand, dirt and dead skin cells.
First, let's start with the easiest, sand. The chemical formulae of sand is $\ce{SiO2}$, so it contains silicon and oxygen.
Next, dirt contains rock, sand, and clay. Sand is made of silicon and oxygen. Rock is mostly made up of silicate minerals (again silicon and oxygen) together with elements such as aluminium, iron and calcium. Clay is made up of silicon and oxygen and some minor minerals like aluminium.
Lastly, dead skin, like other organic compounds, contains carbon, hydrogen and oxygen, maybe some nitrogen as it may have protein in it.
So basically, dust is made up of carbon, hydrogen, oxygen, nitrogen, silicon, phosphorus and some minor minerals. | {
"domain": "chemistry.stackexchange",
"id": 10615,
"tags": "everyday-chemistry, elements"
} |
Husky, turtlebot like simulators for ROS-lunar? | Question:
I have been using ROS Lunar. I tried Husky and TurtleBot, but neither of them is supported by Lunar.
Are there any similar simulators that I can get to use for my gazebo?
If not, is there any way by which I can use husky for lunar, or any other indigo supported package for lunar?
Originally posted by MukulKhanna on ROS Answers with karma: 25 on 2017-08-26
Post score: 0
Answer:
Have you tried downloading and compiling the source? You may also have to install the dependencies from the source too https://github.com/turtlebot/turtlebot_simulator/issues/67.
Originally posted by jayess with karma: 6155 on 2017-08-26
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by MukulKhanna on 2017-08-27:
Yes, it becomes really messy because then I have to set all the environment variables, the launch files, the worlds etc and in the end I get stuck. I read somewhere that its not advisable to try to use packages meant for indigo into lunar or any other distro for that matter.
Comment by jayess on 2017-08-27:
That may be the only way for you to do this with that particular package. | {
"domain": "robotics.stackexchange",
"id": 28708,
"tags": "gazebo, ros-lunar, ros-indigo, gazebo-simulator"
} |
Relativized world in which P ≠ NP = coNP | Question: Do we know of an oracle relative to which P ≠ NP but NP = coNP?
Answer: Some oracles of this sort were given in other answers on this site:
https://cstheory.stackexchange.com/a/1545 gives references to an oracle $A$ such that $\mathrm{EXP}^A=\mathrm{NP}^A=\mathrm{ZPP}^A$.
https://cstheory.stackexchange.com/a/38765 gives a reference to an oracle $A$ such that $\mathrm{EXP}^A=\oplus\mathrm P^A=\mathrm{NP}^A=\mathrm{ZPP}^A$ and $\oplus_3\mathrm P^A=\mathrm P^A$.
Note that $\mathrm{EXP}^A=\mathrm{NP}^A$ implies $\mathrm{NP}^A=\mathrm{coNP}^A$ and $\mathrm P^A\ne\mathrm{NP}^A$ by the relativized time-hierarchy theorem.
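To spell out that implication (a standard closure argument, added here rather than taken from the linked answers): $\mathrm{EXP}$ is closed under complement, and this closure relativizes, so

$$\mathrm{coNP}^A = \mathrm{co}\bigl(\mathrm{NP}^A\bigr) = \mathrm{co}\bigl(\mathrm{EXP}^A\bigr) = \mathrm{EXP}^A = \mathrm{NP}^A.$$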
Furthermore, https://cstheory.stackexchange.com/a/12366 gives a reference to a paper that lists many oracle separations between the most common classes.
"domain": "cstheory.stackexchange",
"id": 5630,
"tags": "complexity-classes, oracles, relativization"
} |
Why isn't there such a thing as "internal momentum"? | Question: The three most well-known conserved quantities in classical physics are energy, momentum, and angular momentum.
Suppose we have a system with no external forces acting on it. We can talk about the system's internal energy as the sum of all kinetic and potential energies of its particles and interactions between particles. If you think about subparts of the system, the internal energy is just what remains of the kinetic and potential energies when you consider the total system's center of mass reference frame (which is inertial because there are no external forces).
We can also talk about an analogue of this when it comes to angular momentum. The so-called spin angular momentum is what remains of the angular momentum of the when you consider the total system's center of mass reference frame.
Why isn't there an analogue of this for linear momentum?
Answer: It boils down to our choice of the reference frame which we use to define "internal energy" or "internal momentum".
For example, consider a gas in a mass-less container that does not deform, and let the container experience sudden change of motion, e.g. its motion is quickly stopped by impacting on some heavy body with a locking mechanism or a velcro patch. The gas will not stop immediately, it will continue to move with respect to the stopped container, will compress somewhere and expand elsewhere, complicated flow can develop, and only later the gas will settle to equilibrium state in which it is at rest with respect to the container.
Now, right after the container stops, what is internal energy and internal momentum of the gas?
If we define internal energy/momentum with respect to the container, which is quite a sensible thing to do, we get a higher value of energy than if we define it with respect to the center of mass of the gas, and we get non-zero internal momentum.
If we define internal energy/momentum with respect to center of mass of the gas, then internal momentum stays zero. But this frame can be cumbersome for describing state of the gas, as in this frame, not only the gas elements, but also the container itself moves with acceleration, and does work on the gas. | {
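One short calculation (added here for completeness, using only the definitions from the question) shows why the center-of-mass choice makes "internal momentum" trivial: it vanishes identically,

$$\mathbf{P}_{\text{int}} \equiv \sum_i m_i\left(\mathbf{v}_i - \mathbf{v}_{\text{cm}}\right) = \sum_i m_i \mathbf{v}_i - M\mathbf{v}_{\text{cm}} = \mathbf{0}, \qquad \mathbf{v}_{\text{cm}} \equiv \frac{1}{M}\sum_i m_i \mathbf{v}_i,$$

so, unlike internal energy or spin angular momentum, nothing nontrivial is left over.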
"domain": "physics.stackexchange",
"id": 99842,
"tags": "classical-mechanics, reference-frames, momentum, conservation-laws"
} |
ROS Answers SE migration: ButtonEvent | Question:
Hi
I am using ROS Hydro
I wanted to subscribe to kobuki_msgs::ButtonEvent using roscpp client.
However, the compiler does not recognize this event and throws an error message
error: ‘ButtonEvent’ in namespace ‘kobuki_msgs’ does not name a type
I already successfully subscribed to all others events like kobuki_msgs::BumperEvent, kobuki_msgs::CliffEvent, but I am not able to use kobuki_msgs::ButtonEvent
I would appreciate any help on this matter.
Thanks
Anis
Originally posted by Anis on ROS Answers with karma: 253 on 2015-05-29
Post score: 0
Answer:
Have you included the header file "kobuki_msgs/ButtonEvent.h"? It's a different header file than for BumperEvent and CliffEvent.
Originally posted by tfoote with karma: 58457 on 2015-05-29
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Anis on 2015-05-29:
Thank you for the answer. It was actually missing. Now, everything works after I included the header file. In fact, I was including the other events header files in a superclass, and I did not figure out in the sub-class that the header file of Button is missing.
Thank you | {
"domain": "robotics.stackexchange",
"id": 21799,
"tags": "ros-hydro"
} |
Is s-grammar powerful enough to generate all possible DCFL? | Question: In an s-grammar, all productions are of the form A → aⱰ, where A∈V, a∈T, Ɒ∈V*
"... and any pair (A, a) occurs at most once in P." [P. Linz, 6th ed. , p. 144]
An s-grammar is unambiguous, and I think (not sure) that we can describe every unambiguous CFL by an s-grammar. I want to know whether s-grammars can describe all possible DCFLs or not. According to this sentence, I think we can't, but I'm not sure about that:
Unfortunately, not all features of a typical programming language can be expressed by an s-grammar. [P. Linz, 6th ed. , p. 152]
But every language described by an s-grammar is deterministic.
I say this because we can build a 2-state DPDA for any simple grammar with this construction:
R ≝ Production Rules of CFG
(x,y,"LBL") is a labeled-edge between x and y with “LBL” as a label
∀r∊R: if r = (A, aⱰ) (A∊V ⋀ a∊T ∧ Ɒ∊V*), add (q,q,"a,A/Ɒ") to E
Add (q,q,"ε,z/Sz′") to E
Add (q,f,"ε,z′/z′") to E
If there is any DCFL for which we cannot provide an s-grammar, please show it to me, and correct me if I'm wrong.
Thanks.
Answer: Actually the example of a language not accepted can be quite simple, due to a technicality. The language $a^*$ is not generated by a s-grammar.
In fact, an s-grammar cannot generate $\varepsilon$. In order to remove $S$ from the stack we have to apply at least one production, and any production will produce a terminal symbol.
But even if we see this as a technicality, we cannot generate two strings, one of which is a prefix of the other. If we can generate a string $\alpha$ which is then accepted because all variables have been rewritten (the stack only contains the new $z'$), then how would we generate a longer string $\alpha\beta$? It must follow the same computation initially.
This is the case because the PDA you produce is actually a PDA with empty stack acceptance: when the stack is empty (or actually only has $z'$) we must accept. It is well known that deterministic PDAs with empty stack acceptance can only accept prefix-free languages. Adding an end-of-string marker is usually the remedy.
The real-time property (reading a symbol every step) is a larger problem.
Consider the language $\{ a^i b^j c^i \mid i,j \ge 1\} \cup \{ a^i b^j d^j \mid i,j \ge 1\}$. It can be accepted by a DPDA. push $a$'s, push $b$'s. Then when reading a $c$ we pop the $b$'s and compare the $a$'s and $c$'s. Otherwise when reading a $d$ we compare the $d$'s with the $b$'s using the stack. Thus you need popping of stack symbols without reading input. A real-time PDA cannot do that (and neither the s-grammar). The source I know for this refers to Autebert, Berstel, Boasson: Context-Free Languages and Pushdown Automata in the Handbook of Formal Languages.
Of course the PDA has only a single state. I do have to check: it seems that also the single state restriction reduces the languages accepted. | {
"domain": "cs.stackexchange",
"id": 16401,
"tags": "context-free, formal-grammars, nondeterminism"
} |
What's the difference between global and local costmap's static_map? | Question:
In navigation tutorial : http://www.ros.org/wiki/navigation/Tutorials/RobotSetup#Costmap_Configuration_.28local_costmap.29_.26_.28global_costmap.29
I found that in global_costmap_params.yaml exist static_map: true.
Also in local_costmap_params.yaml exist static_map: false.
What's the difference between them?
Why global set to true,and local set to false?
If I want to do online slam with gmapping(means dynamically build a map when navigation),
what setting should I use?
Thank you~
Originally posted by sam on ROS Answers with karma: 2570 on 2011-07-17
Post score: 7
Answer:
The difference between the global and local costmap is that the global planner uses the global costmap to generate a long-term plan while the local planner uses the local costmap to generate a short-term plan. There are many different ways to configure the two costmaps, but in the tutorials only the global costmap is configured to use a map while the local planner operates in the odometric frame.
Setting the "static_map" parameter to true just means that you'll be taking an outside map source for navigation. That map could come from SLAM or it could come from a source like the map_server. You shouldn't have to change the way you've configured the navigation stack to run SLAM, you'll just replace the map_server with your SLAM algorithm of choice.
To see an example of the navigation stack running with SLAM, you might want to check out the navigation_stage package. Specifically, you'll be interested in the move_base_gmapping_5cm.launch file.
The "static_map" parameter is, admittedly, a bit poorly named. It should probably be something like "external_map" instead, but there are legacy reasons for why it has its current name. There was a time where the navigation stack couldn't support dynamically changing maps, but now it handles them just fine.
Originally posted by eitan with karma: 2743 on 2011-07-18
This answer was ACCEPTED on the original site
Post score: 17
Original comments
Comment by eitan on 2011-07-25:
Yes, gmapping can just replace the map_server with global static_map=true. An example of setting static_map=false, rolling_window=true would be a situation where you want to run navigation for the robot in the odometric frame without any map at all.
Comment by sam on 2011-07-19:
So can gmapping just replace map_server with global static_map=true? And when is the case/example that I should set global static_map=false,rolling window=true?
Comment by eitan on 2011-07-19:
For the tutorial, only the global costmap uses an a priori map, so the static_map parameter is set to true. The local costmap, however, only uses local sensor information to build an obstacle map, so static_map is set to false.
Comment by sam on 2011-07-18:
So in navigation tutorial, why global static_map set to true,and local static_map set to false?
Comment by ctguell on 2013-11-03:
@eitan is there a way to add a fake obstacle for the local planner to take into acount in the planning? | {
"domain": "robotics.stackexchange",
"id": 6164,
"tags": "ros, navigation, static-map, costmap-2d"
} |
Atomic Force Microscopy - Lateral vs. Depth Resolution | Question: I'm trying to gain a better understanding of Atomic Force Microscopy, specifically the relationship between its lateral and depth resolution. I've seen a variety of metrics on the two, but in general, the depth resolution appears to be about two orders of magnitude more accurate than the lateral resolution. See the page Fundamental Theory of Atomic Force Microscopy for an example.
My question is, does the lower lateral resolution limit the usefulness of the higher depth resolution? In my current mental model, it seems like the height of a sample could vary significantly between points that aren't laterally resolvable. If that's the case, would the generated "height map" contain large discontinuities?
In the papers I've seen, the "height maps" are rather smooth, so I wonder if I'm missing something.
Answer:
My question is, does the lower lateral resolution limit the usefulness of the higher depth resolution?
No. The only thing really (size-wise) that AFM can't measure are individual atoms. You use STM for that. Otherwise, the lateral scale of things you want to measure is bigger anyway. I.e., I don't think your statement is accurate:
it seems like the height of a sample could vary significantly between points that are(n't?) laterally resolvable.
It is possible to have artifacts where a hole in your sample is so deep and narrow that the tip can't properly image it. | {
"domain": "physics.stackexchange",
"id": 57993,
"tags": "microscopy, imaging"
} |
robot_localization ekf faster than realtime offline post-processing | Question:
Hi,
I'm using a Robot Localization EKF configured to receive twist - linear and angular velocity derived from a wheel encoder and odometry derived from SLAM position estimates.
The EKF is working reasonably well in real time, however I'd like to be able to replay ROS bagged data through this EKF in faster than realtime.
I've tried speeding up the bag file replay and am getting some errors in position from the EKF.
Is faster than realtime offline / post-processing possible with the Robot Localization EKF?
Thanks
Originally posted by runerer on ROS Answers with karma: 26 on 2019-06-01
Post score: 0
Answer:
Thanks for the response. I believe the issues I was having were related to the size of the bag data I was trying to replay. Once I stripped out the topics that are not used by the EKF from the bag data the high rate replay worked fine.
Originally posted by runerer with karma: 26 on 2019-06-09
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 33099,
"tags": "navigation, ekf, robot-localization, ros-kinetic"
} |
Why stabilizer codes are named additive quantum codes? | Question: I noticed that stabilizer codes and additive quantum codes are equivalent, but why?
I am confused by the word "additive", since the operation on stabilizer generators is multiplication.
Answer: A stabilizer code is also called an additive code, because it is closed under the sum of its elements.
The namesake is described on page 33 of "Stabilizer Codes and Quantum Error Correction" (link).
Additionally, additive quantum codes are the quantum version of additive codes found in coding theory. | {
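As a sketch of the correspondence (this is the standard GF(4) map from coding theory, not something stated in the answer above): each single-qubit Pauli is mapped to an element of $\mathrm{GF}(4)=\{0,1,\omega,\omega^2\}$,

$$I \mapsto 0, \qquad X \mapsto 1, \qquad Z \mapsto \omega, \qquad Y \mapsto \omega^2,$$

and multiplication of Pauli operators (ignoring phases) becomes componentwise addition of the corresponding GF(4) vectors; for example $XZ \propto Y$ matches $1+\omega=\omega^2$. A stabilizer group, closed under multiplication, therefore maps to a classical code over GF(4) that is closed under addition, i.e. an additive code.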
"domain": "quantumcomputing.stackexchange",
"id": 1182,
"tags": "stabilizer-code"
} |
Reaction of Cu in hydrochloric acid bath with exposure to air over several months? | Question: I assume that copper will be slowly oxidized by oxygen, dissolved in water, forming copper(I) oxide (as stated here):
$$\ce{4 \overset{\pm0}{Cu} + O2 -> 2 \overset{+I}{Cu}_2O}$$
Reaction rate might be affected by the hydrochloric acid (cf. here)
And some of the copper could actually be converted to $\ce{CuCl2}$ (cf. here):
$$\ce{2HCl (aq) + \overset{\pm0}{Cu}(s) -> \overset{+II}{Cu}Cl2(aq) + H2(g)}$$
But what will happen to $\ce{Cu2O}$?
Will it degrade to $\ce{CuO}$, which then forms $\ce{CuCl2}$? That would mean that one should find higher concentrations of $\ce{CuCl2}$ in the solution, as time goes on.
Will it not react any further but instead get solved in the hydrochloric acid solution, because it is soluable in acids?
Or will something else happen?
Note: The concentration of hydrochloric acid solution is around 10 %.
Related Questions:
Copper Doesn't React with Hydrochloric acid
Copper and hydrochloric acid
Answer: In the presence of excess hydrochloric acid, copper oxides are not the immediate reaction product of copper metal and oxygen (e.g. from the air). The air does oxidize the copper...
$$\ce{Cu + 1/2 O2 + 2 H+ -> Cu^{2+} + H2O}$$
... but the final oxidation product is copper(II) ion, not copper(I). (Copper(I) is formed initially but is more prone to air oxidation than copper metal is, so it is relatively quickly converted to copper(II).) Additionally, in the presence of chloride anions, the copper(II) ions are preferentially complexed by chloride. The low concentration of hydroxide means that hydroxide complexes are not favored, and chloride is a better ligand than water for copper.
The actual complexes formed vary in their stoichiometry and their color. $\ce{CuCl2(aq)}$, $\ce{CuCl3^{-}(aq)}$ and $\ce{CuCl4^{2-}(aq)}$ are all possible:
$$\ce{Cu^{2+} + n Cl- -> CuCl_n^{(n-2)-}(aq)}$$
$$n \in \{2, 3, 4\}$$
Nurdrage made a nice video showing how the reaction of $\ce{HCl}$, air, and copper metal leads to copper chloride. It's the first of the three methods he presents for making copper chloride. As he notes, the reaction is very slow. | {
"domain": "chemistry.stackexchange",
"id": 4224,
"tags": "inorganic-chemistry, redox, transition-metals"
} |
How to Roslaunch node in GDB? | Question:
this tutorial says it's possible to roslaunch nodes in gdb, but this doesn't apply to my case.
I launch the node in the following way:
<node name="Multi_obj_qp_node" pkg="dumbo_Multi_obj_control" type="qp_multi_obj_control_node" cwd="node" respawn="false" output="screen" launch-prefix="xterm -e gdb -d=$(find dumbo_Multi_obj_control)/ros/bin/ -e=qp_multi_obj_control_node" >
<rosparam command="load" file="$(find dumbo_Multi_obj_control)/config/Multi_obj_lp.yaml"/>
</node>
But I got the following error:
Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored
Roslaunch got a 'No such file or directory' error while attempting to run:
`xterm -e gdb -d=/home/yuquan/myrepo/dam/ros/dumbo_apps/dumbo_Multi_obj_control/ros/bin/ -e=qp_multi_obj_control_node /home/yuquan/myrepo/dam/ros/dumbo_apps/dumbo_Multi_obj_control/ros/bin/qp_multi_obj_control_node __name:=Multi_obj_qp_node __log:=/home/yuquan/.ros/log/fa2d2a5c-2381-11e2-b6dc-0022150df8c3/Multi_obj_qp_node-5.log
Please make sure that all the executables in this command exist and have
executable permission. This is often caused by a bad launch-prefix.
One of my friends says gdb works with a single node just like a usual executable. But in my case I need to launch yaml together. Could any one give me some suggestions?
Originally posted by yuquan on ROS Answers with karma: 68 on 2012-10-31
Post score: 4
Answer:
Try:
<node name="Multi_obj_qp_node"
pkg="dumbo_Multi_obj_control"
type="qp_multi_obj_control_node"
respawn="false" output="screen"
launch-prefix="xterm -e gdb --args" >
<rosparam command="load" file="$(find dumbo_Multi_obj_control)/config/Multi_obj_lp.yaml"/>
</node>
Also, you can ignore the "_DummyThread" error, it's an inconsequential error in Ubuntu's python distribution.
Originally posted by jbohren with karma: 5809 on 2012-10-31
This answer was ACCEPTED on the original site
Post score: 9
Original comments
Comment by yuquan on 2012-11-03:
Yes, It works! Great! | {
"domain": "robotics.stackexchange",
"id": 11579,
"tags": "ros"
} |
Static access pattern in Distributed Databases | Question:
The access patterns of user requests may be static, so that they do
not change over time, or dynamic. It is obviously considerably easier to plan for
and manage the static environments than would be the case for dynamic distributed
systems. Unfortunately, it is difficult to find many real-life distributed applications
that would be classified as static. The significant question, then, is not whether a
system is static or dynamic, but how dynamic it is. Incidentally, it is along this
dimension that the relationship between the distributed database design and query
processing is established.
What does it mean for an access pattern to be “static”? Could you show a practical example of a static access pattern?
Answer: I found what you copied in the following text
The idea is that user access to the database (querying, updating, etc.) changes with time. It may peak in sale seasons, for example, if the database belongs to a commercial application. If the access pattern is always fixed and predictable, then it is static.
From your copied text: "it is difficult to find many real-life distributed applications that would be classified as static. The significant question, then, is not whether a system is static or dynamic, but how dynamic it is."
Perhaps a static access pattern would be a collection of sensor nodes that update the database with their readings periodically, always with equal-size updates, or a set of servers that receive a synchronization clock periodically (quite theoretical, I guess).
"domain": "cs.stackexchange",
"id": 756,
"tags": "distributed-systems, databases"
} |
Spin Orbit ($LS$) interaction energy | Question: Well, I am currently using a pretty old book by H.E White "Atomic Spectra", and he defined spin orbit interaction energy as the product of the resultant frequency and the projection of spin angular momentum on the orbital angular momentum. My question is why? On what basis did he defined the spin orbit interaction energy as such.
Answer: There's a semiclassical way of deriving the Hamiltonian term corresponding to the spin orbit interaction in an atom, but I don't know if this is what you're looking for (the correct way would be using the relativistic correction included in Dirac's equation), anyhow:
Consider the classical picture of an atom orbiting the nucleus, now in the electron's frame the nucleus of course appears to be rotating the electron, this orbiting leads to a magnetic field equal to
\begin{align}
\mathbf{B} = \frac{E\times \mathbf{v}}{c^2} \tag{1}
\end{align}
which you can obtain by doing the Lorentz transformations of the fields in SR (I showed you this derivation recently here). Now the $\mathbf{E}$ field felt by the electron can be written as the gradient of its potential energy, $\mathbf{E} = -\nabla V(\mathbf{r})$, or in polar coordinates $\mathbf{E} = -\frac{\mathbf{r}}{r}\frac{dV(r)}{dr}$ $(*)$. The spin-orbit term results from the interaction of this $\mathbf{B}$ field with the electron's spin:
\begin{align}
H = -(1/2) \mathbf{m}\cdot \mathbf{B} \tag{2}
\end{align}
Now by substituting $(*)$ in $(1)$ you end up with a $\mathbf{r}\times \mathbf{v}$ term which you can express as the orbital anglular momentum $\mathbf{L},$ and the magnetic moment $\mathbf{m}$ in $(2)$ is equal to:
$$
\mathbf{m} = \frac{ge\hbar}{2m_e}\mathbf{S}
$$
with $g$ the Landé factor and the factor $1/2$ the Thomas factor. Inserting everything back into $(2)$ we obtain:
\begin{equation}
H = \frac{\hbar^2}{2m_e^2 c^2}\frac{1}{r}\frac{dV(r)}{dr}\mathbf{S}\cdot \mathbf{L} \tag{3}
\end{equation}
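To spell out the intermediate step: substituting $(*)$ into $(1)$ and using $\mathbf{L} = m_e\,\mathbf{r}\times\mathbf{v}$ gives

```latex
\mathbf{B} = \frac{\mathbf{E}\times\mathbf{v}}{c^2}
           = -\frac{1}{c^2 r}\frac{dV(r)}{dr}\,\mathbf{r}\times\mathbf{v}
           = -\frac{1}{m_e c^2 r}\frac{dV(r)}{dr}\,\mathbf{L},
```

which is where the $\mathbf{S}\cdot\mathbf{L}$ structure of $(3)$ comes from once $\mathbf{m}\cdot\mathbf{B}$ is evaluated.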
If your model is hydrogen-like, you can substitute the $\frac{1}{r}\frac{dV}{dr}$ term in $(3)$ by its corresponding Coulomb potential. If you plan to look at a more recent book covering such topics, Stephen Blundell's book comes recommended. | {
"domain": "physics.stackexchange",
"id": 24454,
"tags": "angular-momentum, atomic-physics, quantum-spin, spectroscopy, spin-orbit"
} |
Multiple Camera recording with openCV | Question:
Hi, at the moment my goal is to connect several standard USB web cameras (2-4) to a single computer and try to run/record them at the same time.
I'm using MSVS with C++ and the openCV libraries.
Getting data from one single camera just works fine.
cv::VideoCapture vidCap(0);
if(vidCap.isOpened())
cout << ...
This just works fine and I can record pictures.
But if I try to set up another VideoCapture object:
cv::VideoCapture vidCap2(1);
if(vidCap2.isOpened())...
it always fails!
I tried to run it in different threads, but this didn't work either - I even got a bluescreen after closing the program.
And even with some software tools like Dorgem I got a bluescreen after trying.
I'm using Windows 7 64bit as operating system.
Any clues about what could be wrong here? I'd be happy about any help.
Originally posted by wtom on ROS Answers with karma: 1 on 2012-05-24
Post score: 0
Answer:
Hi.
First of all, your problem is an OpenCV problem so I would think you might have better luck asking in the OpenCV group: http://tech.groups.yahoo.com/group/OpenCV/
Secondly, as pointed out here (http://stackoverflow.com/questions/10194033/opencv-with-2-cameras-vc) OpenCV does not seem to handle >1 USB cameras on Windows all that well in general.
One suggestion would be to try something different. For example, you could try to actually use ROS to communicate with the cameras and set them up as ROS topics. Then create a node that subscribes to the images and does whatever OpenCV processing you need to do. There's no guarantee this would work, but it's worth a shot, I'd think.
Kind regards, Stefan Freyr.
Originally posted by StFS with karma: 182 on 2012-05-25
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by mjcarroll on 2013-01-17:
You may also find the OpenCV equivalent of ROS Answers helpful: http://answers.opencv.org/questions/ | {
"domain": "robotics.stackexchange",
"id": 9543,
"tags": "ros, c++, opencv, usb, webcam"
} |
Spatial Correlation Function and Ensemble average | Question: Well, I was reading the Statistical Mechanics book by Pathria, to understand the concepts of the correlation function. I want to quote some lines.
Spatial correlation functions are based on n-particle densities. The one-body number density is defined by the average quantity
\begin{equation}
n_1(\vec{r})=\langle \sum_{i}\delta(\vec{r}-\vec{r}_i)\rangle
\end{equation}
This defines the local number density, in which $n_1(\vec{r})\,d\vec{r}$ is a measure of the probability of finding a particle inside an elemental volume $d\vec{r}$ located at position $\vec{r}$.
Now my question is about the averaging. Is it not an ensemble average? Because the particle number density at a given point inside a material is truly a random variable, we need some distribution function to average over. So my question is: what kind of average is that?
Answer: Yes, that is an ensemble average. You create many realisations of your system and count how many particles are "around" each point $\vec{r}$. Or, if the system is ergodic, you just take the time average of the same quantity. | {
"domain": "physics.stackexchange",
"id": 61664,
"tags": "statistical-mechanics, correlation-functions, soft-matter"
} |
Propagation of light in transparent media: absorption and reemission or scattering? | Question: In the two Phys.SE questions What is the mechanism behind the slowdown of light/photons in a transparent medium? and Why glass is transparent? transparent media were discussed. But I'd like to clarify one detail: is a photon absorbed (and delayed) by the medium and then reemitted, or scattered instantly?
Is e.g. a laser beam still coherent after passing glass? As medium molecules are disordered, this should distort the phase of photons taking different paths.
Answer: There is no unambiguous correct answer to this question because it isn't well posed in terms of logical positivism: what is the difference between the two processes? There is no way to tell which happens if you don't muck up the intermediate state with a measurement.
If you mean this in terms of some quantum field theory with given fields and interactions and asymptotic states, then you can ask how the processes appear in a Feynman description. The scattering process in QED is always two-step: the absorption and emission are separate space-time points. But the emission can precede the absorption both in coordinate time and in proper time along the electron's world line, so you should add "emitted and then absorbed" to the list of possibilities.
Light does not have to be resonant in order to scatter off an atom. The amount of scattering/emission-reabsorption is smaller away from resonance. A light wave is also a long coherent field, and this field can acquire a phase push from the emission-reabsorption, leading the phase-velocity to be bigger than the speed of light.
The issue of "how come the phases add up coherently" is addressed by two things: there is a large scale difference between the atoms and the light wavelength, and each atom scatters the light independently and randomly into spherical waves, which add up coherently in the original direction only to alter the phase velocity by a constant amount.
There is no scattering from the bulk of a perfect crystal, for long wavelengths, because there is still a discrete translation invariance which means momentum is conserved up to big jumps, and the big jumps give waves with the wrong frequency for long enough wavelengths. But there are discrete momentum additions which are allowed for a short wavelength x-ray in an atomic crystal, and if the photon momentum comes out different but at the same frequency due to the coherence in a different direction, that's called diffraction.
If you want scattering in a crystal, you need to scatter off defects which have a good amount of random variation in a box the size of one wavelength. Similarly, if you scatter off a fluid, you need the fluctuations in density to be meaningful within one wavelength. This is easier for blue light than for red light, so transparent fluids scatter blue. | {
"domain": "physics.stackexchange",
"id": 1476,
"tags": "optics, visible-light, photons, solid-state-physics, laser"
} |
How can the universe be dodecahedron-shaped? | Question: Physicsworld references "Dodecahedral space topology as an explanation for weak wide-angle temperature correlations in the cosmic microwave background" by J.-P. Luminet et al., published in Nature (Nature 425 (2003) 593).
They claim that a universe with the same shape as this twelve-faced polyhedron can explain measurements of the cosmic microwave background.
How could this be true? I thought the universe's shape is spherically symmetric. Does this mean the universe has sharp edges (like a dodecahedron)?! Doesn't this mean that the universe prefers some directions to the others?
Answer: The universe is not believed to have a boundary. But the global topology of space does not have to be a 3-sphere $S^3$ (for constant positive curvature), infinite flat space $R^3$ (for zero curvature) or a 3D hyperbolic pseudosphere $H^3$ (for negative curvature). General relativity only describes the local structure and curvature, not the global topology.
The Killing–Hopf theorem (all complete connected Riemannian manifolds of constant curvature are isometric to a quotient of a sphere, Euclidean space, or hyperbolic space by a group acting freely and properly discontinuously) indicates that for constant curvature spaces various topologies are possible. These correspond to taking a chunk of one of the "default" spaces mentioned above and identifying the faces so that going through one of them means one enters the volume from one of the other faces. The most obvious is taking a box of flat space and identifying opposite faces so it acts as a 3-torus.
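That 3-torus identification can be sketched numerically (illustrative code, not from the paper): wrapping coordinates modulo the box size implements "leaving through one face re-enters through the opposite face", and separations must then be measured with the minimum-image convention.

```python
def wrap_torus(point, box=1.0):
    """Map a point into the fundamental cube: opposite faces are identified,
    so leaving through one face re-enters through the opposite one."""
    return tuple(coord % box for coord in point)

def torus_distance(p, q, box=1.0):
    """Shortest separation on the 3-torus (minimum-image convention)."""
    sq = 0.0
    for a, b in zip(wrap_torus(p, box), wrap_torus(q, box)):
        d = abs(a - b)
        sq += min(d, box - d) ** 2  # going "around" may be shorter
    return sq ** 0.5
```

Two points near opposite faces are close neighbours on the torus, which is exactly what makes such spaces act like a "hall of mirrors" for light paths.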
It is possible to construct several nontrivial finite-volume zero or positively curved spaces (and even more for negative curvature, where there might be an infinite number of possibilities). The Poincaré Dodecahedral Space, formed by identifying opposite faces of a regular dodecahedron after a 36 degree rotation, has been discussed as a cosmological model. This is the one the question dealt with. The reason for the shape is that it corresponds to a particular symmetry group of the sphere, the binary icosahedral group. While the space can be defined in many ways, they are all equivalent (and different from the spaces when one uses a different group).
This kind of space is in one sense spherically symmetric: the curvature is constant. But all directions are not the same, so it is not quite isotropic. There are no sharp edges, since the joining up is smooth (even in the "corners"). Most importantly, it would act as a "hall of mirrors" where distant objects would be repeated in some pattern if their light has reached us. This allows for searching for the topology by looking at correlations in the cosmic microwave background along circles on the sky (their sizes are set by the size of the space and its topology).
There is currently no evidence for a nontrivial topology. WMAP placed a limit of 25.6 gigaparsecs (Gpc) on the cell size, even for very general topologies. The Planck collaboration placed limits on the radius $R_i$ of the largest inscribed sphere in the topological domain compared to the co-moving distance to the surface of last scattering, $\chi\approx 14.0$ Gpc. They found that for a flat universe, $R_i>0.92\chi$ for the 3-torus, $R_i>0.71\chi$ for the prism, and $R_i>0.5\chi$ for the slab, while in a positively curved universe $R_i>1.03\chi$ for the dodecahedral space, $R_i>1.0\chi$ for the truncated cube, and $R_i>0.89\chi$ for the octahedral space. For other considered topologies, $R_i>0.94\chi$ at the 99% confidence level. | {
"domain": "physics.stackexchange",
"id": 54924,
"tags": "general-relativity, special-relativity, cosmology, spacetime, observable-universe"
} |
How can convolution be a linear and invariant operation? | Question: I'm having a slight breakdown right now with a seemingly simple question. Say I have a system that convolves an input function with itself to produce an output function:
$g(x) = f(x) ∗ f(x)$
I've heard countless times that convolution is a linear operation and just assumed it was fact. However, I'm trying to understand how this is possible.
In my system above, it doesn't seem like the system could be linear, since convolution contains multiplication. So multiplying an input signal with itself can't possibly be linear. However, the system would appear to be shift invariant.
Am I correct in my logic? I seem to have a misunderstanding of convolution being a linear operation.
Answer: Convolution of an input signal with a fixed impulse response is a linear operation. However, if the input-output relation of a system is
$$y(t)=(x*x)(t)\tag{1}$$
then the system is non-linear, which is straightforward to show. Similarly, any convolution with a kernel that depends on the input signal is a non-linear operation.
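Both statements are easy to check numerically. Below is a small sketch (the kernel and input sequences are arbitrary examples): additivity holds when convolving with a fixed kernel, but fails for self-convolution.

```python
def conv(x, h):
    """Full discrete convolution of two finite sequences."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

x1, x2, h = [1.0, 0.0, 3.0], [2.0, 1.0, 0.0], [1.0, 2.0]
x_sum = [a + b for a, b in zip(x1, x2)]

# Fixed kernel: conv(x1 + x2, h) == conv(x1, h) + conv(x2, h)  -> additive
lhs = conv(x_sum, h)
rhs = [a + b for a, b in zip(conv(x1, h), conv(x2, h))]

# Self-convolution: (x1 + x2)*(x1 + x2) != x1*x1 + x2*x2  -> not additive
self_lhs = conv(x_sum, x_sum)
self_rhs = [a + b for a, b in zip(conv(x1, x1), conv(x2, x2))]
```

The missing cross terms $2\,(x_1 * x_2)$ are exactly what breaks additivity for the self-convolving system, analogous to the cross term in $(a+b)^2$.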
On the other hand, a system with input-output relation
$$y(t)=(x*h)(t)\tag{2}$$
is linear (and time-invariant) because it convolves any input signal $x(t)$ with a fixed impulse response $h(t)$, which is independent of the input signal. | {
"domain": "dsp.stackexchange",
"id": 9819,
"tags": "convolution, linear-systems"
} |
Operators and Enhancers/Silencers | Question: Wikipedia has two images, of a eukaryotic gene and of a prokaryotic gene.
They show the difference that the prokaryotic gene also has an operator while the eukaryotic gene does not. Both also have enhancers and silencers separately. I thought enhancers and silencers were types of operators. Is this wrong?
Answer: Enhancers and silencers are binding sequences for transcriptional activators or repressors, in which case the sequence is often located some distance upstream or downstream of the gene it regulates. See regulation of transcription for information about how these interact with their target genes (through DNA bending, mediator, etc.). A note about enhancers and silencers, though, they're not necessarily required for transcription: they help the gene attain robust up- or down-regulation of transcription.
The operator, on the other hand, influences whether the promoter will do something, or nothing. If we're talking about inducible systems, the repressor is bound to the operator, blocking action to/from the promoter (an inducer will bind the repressor and keep the operator from blocking the promoter). If we're talking about a repressible system, the repressor will need a corepressor to bind the operator to abrogate transcription by blocking action to/from the promoter. | {
"domain": "biology.stackexchange",
"id": 4294,
"tags": "genetics, molecular-biology, molecular-genetics"
} |
LPA* implementation keeps looping | Question: Short story
I am currently trying to implement LPA* in an existing navigation system and find the algorithm seems to loop forever, expanding the same vertices over and over again. I am wondering what is causing this to happen, and what I can do to rectify this.
Long story
The navigation system uses Dijkstra so far. As I am extending it to react to changes in the traffic situation (and thus edge costs) by changing the already-calculated route, I decided to go for LPA*, as it is essentially an evolution of Dijkstra.
The existing Dijkstra implementation deviates from the canonical one in a few ways:
The cost of a vertex is the cost to reach the destination from that vertex. Cost thus decreases as we travel along the route; the destination (theoretically) has a cost of zero. This way, the route graph remains valid as the vehicle position changes, as long as the destination remains the same.
If the destination is off-road, it is not directly represented by a vertex. Instead, a handful of nearby vertices are given a cost slightly above zero and inserted in the priority queue. (That initial cost can still be lowered: it is as if each of these nodes were directly connected to the destination by an edge whose cost is equal to the initial cost of the node.)
Current position works in a similar way: it may be off-road or in the middle of an edge, so we simply add a penalty to the nearest vertex.
The algorithm terminates after the last vertex in the graph has been visited, even if all vertices around the current position have already been expanded. That allows us to keep the route graph even if the vehicle takes a wrong turn and ends up in a place from which the destination is much more expensive to reach.
Each vertex maintains a pointer to the next edge along the cheapest path from that vertex to the destination. Where ambiguities exist, the pointer may point to any acceptable edge (the decision is probably stable across multiple runs, but not guaranteed to be).
Instead of infinity, we use $2^{31} - 1$, or 0x7FFFFFFF (largest positive signed 32-bit integer) as pseudo-infinity. Since cost represents travel time in tenths of seconds, the maximum is in the order of magnitude of several years, and any numbers likely to be encountered in practice are several orders of magnitude below it.
Most edges are traversable in both directions (and, technically, if they aren’t, their cost is simply pseudo-infinity). Therefore, most successors of any given vertex are also predecessors of it, and vice versa.
I have maintained these diversions and kept the existing data structures, adding just a rhs member to the vertex data structure. Further deviations from canonical LPA* I introduced:
For the moment, Dijkstra is still used for the original route (I am planning to move everything to LPA* in a later step). When a vertex is visited, I set its rhs member to its value so it will appear locally consistent to a subsequent LPA* run.
I use a fixed heuristic, $h = 0$. This is valid for A* (effectively turning it into Dijkstra) and thus also for LPA*, and makes it easier for Dijkstra and LPA* to coexist.
Because $h = 0$, there is no need for the two-dimensional keys in the priority queue (as both elements of each key would always be identical). That permits me to keep using the Fibonacci heap for the priority queue in the same manner as with Dijkstra.
Rather than a simple addition of edge and route costs, I use a function with some overflow protection. If one of the two operands is 0x7FFFFFFF (pseudo-infinity), this is also returned as a result.
Similar to the Dijkstra implementation, I do not terminate until the entire route graph is flooded.
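The overflow-protected cost addition mentioned above can be sketched as follows (illustrative Python, not the project's actual code):

```python
INF = 0x7FFFFFFF  # pseudo-infinity: largest positive signed 32-bit integer

def add_cost(a, b):
    """Add two costs, saturating at pseudo-infinity.

    Anything plus pseudo-infinity stays pseudo-infinity, and ordinary sums
    are clamped so they can never overflow past the 32-bit maximum.
    """
    if a == INF or b == INF:
        return INF
    return min(a + b, INF)
```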
However, if I change some edge costs in an already-flooded route graph and then run my LPA* implementation on it, it keeps looping. Debug output shows me that it is expanding the same vertices over and over again, with their costs increasing. What appears to happen here:
A vertex $v$ is updated.
All vertices in $Succ(v)$ (essentially all neighbors of $v$) are updated.
For each $u \in Succ(v)$, all vertices in $Succ(u)$ are updated. Since (in most cases) $v \in Succ(u)$, $v$ is updated again and the cycle starts over.
What I have tried:
After updating $v$, specifically exclude its shortest-path predecessor (i.e. the next vertex towards the destination) from being updated. That causes the LPA* implementation to terminate in a time similar to Dijkstra, but leaves me with loops in the route graph (i.e. two adjacent vertices both pointing to the edge connecting them).
If, in addition to the above, while updating a vertex, I skip other vertices which would create a loop condition, that leaves me with the part of the route graph “upstream” from the changed segments having maximum cost.
Where is the error, and how can I fix it?
Full story
https://github.com/mvglasow/navit/tree/traffic. The commit which added the actual LPA* implementation is 61d9a1a; some preparation work was also done in the previous ones.
Answer: I have solved the issue, and my LPA* implementation now gives me a valid route that is identical to the one I get when I run Dijkstra from scratch on the updated route graph for the test cases that I have tried.
First off: The two things that I tried above would break LPA*, so I reverted them.
The first fault was simply a sign error when calling the function to determine edge costs, leading the program to use costs for the wrong direction (though only in one place, not in another).
The other fault was in the call to the function which determines edge costs. Arguments passed to it are a vehicle profile (which determines which edges are usable and what their costs are), the edge, a direction (i.e. first to second vertex or vice versa) and, optionally, a vertex.
The vertex argument was misleadingly named from. In reality, when specified, it is the vertex at which we leave the edge and move on to the next when following the route to the destination. If that vertex has the edge pointer (see above) set to the edge being examined, the function will report the cost of the edge from to be pseudo-infinity (as traversing the edge in that direction will move us further away from the destination). Documentation was scarce and I had to dig through the source code to infer what that argument really was supposed to do.
I found I passed the wrong vertex to the function; fixing this now-obvious error made some of the simpler test cases work.
I also now clear the edge pointer at the beginning of updateVertex() (it is set again in the function as we examine each edge); though this may not be relevant to the issue here.
About predecessors and successors, we can simply use all immediate neighbors (any vertex directly connected to the current one by an edge). If an edge cannot be traversed in a particular direction, we assume its cost in that direction to be infinity.
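Under that convention, the one-step lookahead value used by LPA* (the rhs value) can be sketched as follows (hypothetical names; `g` maps each vertex to its current cost-to-destination):

```python
INF = 0x7FFFFFFF  # pseudo-infinity

def compute_rhs(vertex, g, neighbours, edge_cost):
    """Cheapest cost-to-destination via any immediate neighbour.

    edge_cost(u, v) returns the cost of leaving u towards v, or INF when
    the edge cannot be traversed in that direction.
    """
    best = INF
    for nb in neighbours[vertex]:
        cost = edge_cost(vertex, nb)
        if cost != INF and g[nb] != INF:
            best = min(best, g[nb] + cost)
    return best
```

Treating a non-traversable direction as INF makes every neighbour usable as both predecessor and successor without special-casing one-way edges.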
With these fixes, I get a correct route for simpler test cases. For the more complex ones, LPA* would terminate in a timely manner, but when iterating through the route graph, I would encounter a loop.
Further investigation revealed that this was due to “loop” edges in the graph, which connect a vertex with itself. If the edge pointer of the vertex pointed to that faulty edge, the iteration would loop forever upon hitting one of these points. Dijkstra apparently never picked up such edges as part of the cheapest path (I didn’t investigate this further), but LPA* did.
Due to the way the implementation works (position and destination need not correspond to points on the route graph), such loops can be legit (think of a circular road with only one single access road connecting to it), and parts of it can even become part of the cheapest path when the current position or destination is located on such an edge. However, there is no legit reason to have such a segment appear in the middle of the cheapest path.
I resolved this by checking for this in the cost function: if the segment is a loop and it is not connected to the position or destination, its cost is assumed to be pseudo-infinity to prevent it from becoming part of the cheapest path. | {
"domain": "cs.stackexchange",
"id": 11550,
"tags": "algorithms, graphs, shortest-path"
} |
CUDA 8.0 is compatible with my GeForce GTX 670M, Wikipedia says, but TensorFlow raises an error: GTX 670M's Compute Capability is < 3.0 | Question: According to Wikipedia, the GeForce GTX 670M has a Compute Capability of 2.1 (and a Fermi micro-architecture), which is confirmed by TensorFlow (I can read "2.1" in the error it raises).
Wikipedia says that CUDA 8.0 supports compute capabilities from 2.0 to 5.x (Fermi micro-architecture included). It even says that it's the "last version with support for compute capability 2.x (Fermi)". However, the error raised by TensorFlow says that the CUDA setup being used requires a compute capability of at least... 3.0... And thus my GeForce GTX 670M is ignored and my CPU is used to compute :-( . It's a big problem.
Since several CUDA versions are installed on my computer, I wanted to be sure it was CUDA 8.0 that was being used. So I typed, in PyCharm's terminal: nvcc --version, which outputs:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2016 NVIDIA Corporation
Built on Mon_Jan__9_17:32:33_CST_2017
Cuda compilation tools, release 8.0, V8.0.60
Thus, CUDA 8.0 is actually being used.
The error is the following:
2019-08-01 22:04:28.366003: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\platform\cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX
2019-08-01 22:04:28.561338: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\gpu\gpu_device.cc:1030] Found device 0 with properties:
name: GeForce GTX 670M major: 2 minor: 1 memoryClockRate(GHz): 1.24
pciBusID: 0000:01:00.0
totalMemory: 3.00GiB freeMemory: 2.48GiB
2019-08-01 22:04:28.561814: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\gpu\gpu_device.cc:1093] Ignoring visible gpu device (device: 0, name: GeForce GTX 670M, pci bus id: 0000:01:00.0, compute capability: 2.1) with Cuda compute capability 2.1. The minimum required Cuda capability is 3.0.
My setup
I've described the problem. Now I'm going to give you more informations about my setup.
I'm working on Windows 10 with PyCharm and Keras (so TensorFlow)
Since I thought I had to use CUDA 8.0, I checked in the TensorFlow Website which version I should use. I've found that: for CUDA 8.0, I should use TensorFlow GPU version 1.4.0 with CUDNN version 6.
In reality, I'm using: CUDA V8.0.60, CUDNN V6.0 (found on the CUDNN Website for CUDA 8.0) and TensorFlow-GPU V1.4.0
Here is a screenshot of my Windows 10's %PATH%'s value. Note that the being-used CUDA version ("CUDA_PATH") is actually 8.0, as you can see it on the screenshot.
Final question
Could you please tell me what version of CUDA I should use for being able to use my GeForce GTX 670M as a compute unit for my networks training? Wikipedia seems to be wrong...
Answer: CUDNN and TensorFlow require a GPU which has a compute capability of at least 3.0: not only must the CUDA version take this CC into account, but so must both of these programs.
Indeed: https://stackoverflow.com/questions/38542763/how-can-i-make-tensorflow-run-on-a-gpu-with-capability-2-0/38543201#38543201 | {
"domain": "datascience.stackexchange",
"id": 5767,
"tags": "tensorflow, gpu, distribution, parallel"
} |
What type of Interspecific relationship does this graph depict? | Question: The other day in class, our AP Biology teacher presented us with the following graph and asked us to determine which of the following interspecific relationships it represents:
(A) commensalism
(B) predation
(C) mutualism
(D) competition
(E) parasitism.
She explained to us that since there is not enough information in the graph to explain why species A suddenly drops off after time "x" and species B suddenly rises after time "x", the best answer out of the choices is (A) commensalism.
But to me, that seems to be a faulty line of thinking. After all, my teacher arrived at that answer only under the assumption that the graph/question itself is good to begin with. I tried finding graphs of commensalism using Google Images, and I actually found the original source of the graph that our teacher presented to us.
Now, the question on Regentsprep actually gives us a point of reference and states that we are considering two herbivores in a grassland environment, so in my opinion it's reasonable to conclude that the relationship between species A and B is competition.
The problem is that my teacher never gave us a frame of reference: she just told us to identify the relationship in this graph without any "backstory". So what are your thoughts: is it possible to conclude that "A" is the best answer choice if we are not given any information aside from the graph?
Answer: Competition makes the most sense, IMO.
For every other type of relationship, A and B should be dependent and roughly proportional: if parasites kill their host, they die in proportion. If predators eat all their prey, they drop in numbers due to food scarcity until the prey repopulates; the graph would almost look like a double helix. Mutualists/commensals also share a dependent and proportional graph. The inverse proportionality of the sample chart, to me, gives away competition: one organism is flat out winning against another through some method. | {
"domain": "biology.stackexchange",
"id": 5024,
"tags": "population-biology"
} |
Convert HTML input string to JavaScript Array literal | Question: I am trying to accept JavaScript array literals in an HTML text input.
The problem is that HTML text inputs are captured as strings, such that an input of ['name', 'who', 1] becomes "['name', 'who', 1]".
My intention is for the following samples to yield the corresponding outputs.
"['river','spring']" // ["river","spring"]
"[{key:'value'},20,'who']" // [{"key":"value"},20,"who"]
The way I worked around the problem is by using eval in the code snippet below:
const form = document.querySelector('.form');
const inputField = document.querySelector('.input');
const btnParse = document.querySelector('.btn');
const out = document.querySelector('.out');
form.addEventListener('submit', (e)=> {
e.preventDefault();
try {
const parsed = eval(inputField.value);
if(Array.isArray(parsed)) {
out.textContent = JSON.stringify(parsed);
} else throw new Error('input is not a valid array' );
} catch(err) {
out.textContent = `Invalid input: ${err.message}`;
}
});
<form class="form">
<fieldset>
<legend>Enter array to parse</legend>
<input class="input" type="text">
<input class="btn" type="submit" value="parse">
</fieldset>
</form>
<div>
<p class="out">
</p>
</div>
My solution, however, allows for the execution of any Javascript code inputted into the text field, which is a huge vulnerability.
What alternative way is there to converting JavaScript array literal HTML text inputs into JS array objects without using eval?
Answer: I agree with @Mast that you are asking the wrong question. IMHO your question should be: "Is it a good idea to get user input in the form of a JavaScript array?" and the short answer would be "no". The long answer would be:
Why did you choose this form of input? Aren't there better input methods? Where is the string coming from?
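If you can require users to type strict JSON (double quotes, quoted object keys), then JSON.parse is the simplest safe replacement for eval, since it parses data without ever executing code. A minimal sketch:

```javascript
function parseArrayInput(text) {
  // JSON.parse only parses data; unlike eval it can never run code.
  // Caveat: the input must be strict JSON, e.g. ["river", "spring"]
  // or [{"key": "value"}, 20, "who"] — single quotes are rejected.
  let parsed;
  try {
    parsed = JSON.parse(text);
  } catch (err) {
    throw new Error(`invalid input: ${err.message}`);
  }
  if (!Array.isArray(parsed)) {
    throw new Error("input is not a valid array");
  }
  return parsed;
}
```

The trade-off is that inputs using JavaScript-only syntax (single quotes, unquoted keys) are rejected, so a more lenient parser is needed to accept full literal syntax.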
However, if you don't have any other choice, then the proper solution would be to use a more lenient "JSON" parser. One possibility could be to use a YAML parser. YAML is a superset of JSON that allows (among other things) JSON-like markup. | {
"domain": "codereview.stackexchange",
"id": 32507,
"tags": "javascript, array, html, parsing"
} |
Compare the quality of IQ data from different Software Defined Radios (SDRs) | Question: I want to know what methods are best to scientifically evaluate the quality of different Software Defined Radios (SDRs) that spit out IQ samples.
For this question, I would like to set the boundary conditions as follows:
Their ADCs are from different vendors but have the exact same bit resolution.
They may have very different RF front ends or even be based on different architectures such as direct conversion, direct sampling, or even super-heterodyne. We do not care about this as long as they tune to the band we specify and provide IQ.
During the test, they have the exact same type of antenna plugged in, and all other environmental parameters are kept the same.
The SDRs are tuned to receive on the exact same band, one that is officially supported in their specifications.
After gathering, lets say, 2-3 IQ recordings (one from each SDR) through GNU Radio, how can we compare the quality of these IQ files?
Doubts: Can we tune the receivers to the same FM channel and ask some block in GNU Radio to report the SNR, so we can choose the one with the highest SNR as the SDR that offered the highest-quality IQ?
Appreciate your time and knowledge.
Answer: First and foremost, I would recommend against over the air testing for this given the significant challenge in really being able to provide the same signal to each radio (since you have both temporal and spatial constraints that you cannot simultaneously meet).
I would instead use one GNU radio as a transmitter (or any other repeatable high quality source) to provide a consistent and repeatable waveform that can be used for each of the radios under test with a cabled test where the attenuation can be carefully and repeatably controlled over a wide range.
With this, there are several tests that immediately come to mind:
An ideal test waveform would be band-limited complex Gaussian noise (pseudo-random noise as a repeatable pattern), since it will best occupy all possible positions on the complex IQ plane as well as all frequencies in the band of interest. This would then test the radio for a wide range of modulations, but if you have specific modulations in mind then such waveforms would also be ideal for test purposes. (OFDM waveforms are well represented as complex Gaussian distributed band-limited waveforms, by the way, but it would suffice to simply create a complex random waveform and band-limit it for this generic case.)
An ideal test metric would be the correlation coefficient, $\rho$, as it will show errors from all sources including noise figure, quadrature error, LO phase noise, ADC quantization noise, IP3, IP2, etc. For more information on that see Noise detection and How can I find SNR, PEAQ, and ODG values by comparing two audios?
The test should be done over a wide range of input power levels to show the related sensitivity and maximum power handling for comparative receivers. Thus the test metric for all receivers would be $\rho$ vs $P_{in}$.
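A sketch of that metric for complex IQ captures (assuming the reference and received vectors are already time-aligned; the function name is illustrative):

```python
import math

def corr_coeff(ref, rx):
    """Magnitude of the complex correlation coefficient between two IQ captures.

    rho = |sum(ref * conj(rx))| / sqrt(sum|ref|^2 * sum|rx|^2); a value of 1.0
    means the received capture is a perfect (scaled, phase-rotated) copy of
    the reference, and every impairment pulls it below 1.0.
    """
    num = sum(a * b.conjugate() for a, b in zip(ref, rx))
    den = math.sqrt(sum(abs(a) ** 2 for a in ref) * sum(abs(b) ** 2 for b in rx))
    return abs(num) / den
```

Sweeping the input power and plotting this value against $P_{in}$ gives the comparison curve for each receiver.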
The $\rho$ may be limited by the transmitter if that used is not of sufficient quality, so care must be taken to confirm that the $\rho$ for the transmit waveform is better than that which would result for all the tests performed (or represent the ceiling of the measurement). For the same reason start with the highest test power with known confirmed $\rho$ and adjust the power level in the cabled test receiver using an inline passive variable attenuator (with a rated much higher input P1dB) since any active device or variable amplifier is more likely to introduce additional distortion.
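The $\rho$ vs $P_{in}$ measurement above amounts to computing a complex correlation coefficient between the known reference waveform and each receiver's captured IQ record. A minimal stdlib-only sketch (the tone used as `ref` is illustrative, and in practice the two records must first be time-, frequency- and phase-aligned before $\rho$ is meaningful):

```python
import cmath

def corr_coeff(ref, rx):
    """Complex correlation coefficient between two equal-length IQ records."""
    mr = sum(ref) / len(ref)
    mx = sum(rx) / len(rx)
    num = sum((a - mr) * (b - mx).conjugate() for a, b in zip(ref, rx))
    den = (sum(abs(a - mr) ** 2 for a in ref)
           * sum(abs(b - mx) ** 2 for b in rx)) ** 0.5
    return num / den

# a complex tone standing in for the repeatable test waveform
ref = [cmath.exp(2j * cmath.pi * 0.01 * n) for n in range(1000)]
print(abs(corr_coeff(ref, ref)))  # a distortion-free capture gives |rho| ~ 1
```

Sweeping the inline attenuator and recording $|\rho|$ at each power level then yields the $\rho$ vs $P_{in}$ comparison curve described above.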
Additional tests involve how the receiver rejects interference such as intentional jammers or co-site transmitters. This is usually done with one-tone testing in an adjacent band to help distinguish those receivers with better LO phase noise, and two-tone interference testing in such a way to test for IP3 (and IP2 for zero-IF receivers in particular). The same test as above with the correlation coefficient vs input power is repeated, but in the presence of the jammers, which can be done with unmodulated tones. Different receivers can have different sensitivities based on their architecture (or mis-managed spurs from a poor design), so a robust test would sweep the test jammers over a wide frequency range. If the radios are intended for specific applications, then the interference tests can be limited to cases appropriate for their known environment.
Instantaneous dynamic range in band would also be of interest. As a simple test for this, an in-band single tone could be used at maximum input, and the noise floor and spurious-free dynamic range could be assessed in an FFT of the captured IQ signal. (You could also do specific linearity and single-tone and two-tone in-band testing if you wanted to isolate these characteristics from what was otherwise in the catch-all $\rho$ test described above). | {
"domain": "dsp.stackexchange",
"id": 7099,
"tags": "software-defined-radio"
} |
Convert .data file to .csv | Question: I'm using a data called 'adults.data', I need to work with that data as a '.csv' file. Can anyone guide me on how can I change the format?
I tried opening the file in excel and then save it as csv, but the new file contains only one column containing all the '.data' columns.
Answer:
One way is to convert the .data file to excel/csv using Microsoft Excel, which provides an option to get external data (use the Data tab --> From Text). Check this video for a demo
Other way you can utilize python to read .data files and directly work with dataframes
import pandas as pd

# the UCI adult data is typically comma-separated with no header row;
# header=None keeps the first record from being consumed as column names
df = pd.read_csv("adults.data", header=None)
(OR)
df = pd.read_table("adults.data") | {
"domain": "datascience.stackexchange",
"id": 9807,
"tags": "dataset, data, csv"
} |
Do Maxwell's Equations overdetermine the electric and magnetic fields? | Question: Maxwell's equations specify two vector and two scalar (differential) equations. That implies 8 components in the equations. But between vector fields $\vec{E}=(E_x,E_y,E_z)$ and $\vec{B}=(B_x,B_y,B_z)$, there are only 6 unknowns. So we have 8 equations for 6 unknowns. Why isn't this a problem?
As far as I know, the answer is basically because the equations aren't actually independent but I've never found a clear explanation. Perhaps the right direction is in this article on arXiv.
Apologies if this is a repost. I found some discussions on PhysicsForums but no similar question here.
Answer: It isn't a problem because two of the eight equations are constraints and they're not quite independent from the remaining six.
The constraint equations are the scalar ones,
$$ {\rm div}\,\,\vec D = \rho, \qquad {\rm div}\,\,\vec B = 0$$
Imagine $\vec D=\epsilon_0\vec E$ and $\vec B=\mu_0\vec H$ everywhere for the sake of simplicity.
If these equations are satisfied in the initial state, they will automatically be satisfied at all times. That's because the time derivatives of these non-dynamical equations ("non-dynamical" means that they're not designed to determine time derivatives of fields themselves; they don't really contain any time derivatives) may be calculated from the remaining 6 equations. Just apply ${\rm div}$ on the remaining 6 component equations,
$$ {\rm curl}\,\, \vec E+ \frac{\partial\vec B}{\partial t} = 0, \qquad {\rm curl}\,\, \vec H- \frac{\partial\vec D}{\partial t} = \vec j. $$
When you apply ${\rm div}$, the curl terms disappear because ${\rm div}\,\,{\rm curl} \,\,\vec V\equiv 0$ is an identity and you get
$$\frac{\partial({\rm div}\,\,\vec B)}{\partial t} =0,\qquad
\frac{\partial({\rm div}\,\,\vec D)}{\partial t} =-{\rm div}\,\,\vec j. $$
The first equation implies that ${\rm div}\,\,\vec B$ remains zero if it were zero in the initial state. The second equation may be rewritten using the continuity equation for $\vec j$,
$$ \frac{\partial \rho}{\partial t}+{\rm div}\,\,\vec j = 0$$
(i.e. we are assuming this holds for the sources) to get
$$ \frac{\partial ({\rm div}\,\,\vec D-\rho)}{\partial t} = 0 $$
so ${\rm div}\,\,\vec D-\rho$ also remains zero at all times if it is zero in the initial state.
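The identity ${\rm div}\,\,{\rm curl}\,\,\vec V\equiv 0$ that drives this argument is easy to check numerically as well; a small stdlib-only sketch using central differences (the test field $V$ is an arbitrary smooth choice, used purely for illustration):

```python
import math

h = 1e-4  # central-difference step

def V(x, y, z):
    """An arbitrary smooth vector field used purely as a test case."""
    return (math.sin(y * z), math.cos(x + z), x * y * math.exp(0.1 * x))

def partial(f, i, p):
    """Central-difference partial derivative of scalar f along axis i at p."""
    plus, minus = list(p), list(p)
    plus[i] += h
    minus[i] -= h
    return (f(plus) - f(minus)) / (2 * h)

def curl(p):
    comp = lambda i: (lambda q: V(q[0], q[1], q[2])[i])
    return (partial(comp(2), 1, p) - partial(comp(1), 2, p),
            partial(comp(0), 2, p) - partial(comp(2), 0, p),
            partial(comp(1), 0, p) - partial(comp(0), 1, p))

def div_curl(p):
    c = lambda i: (lambda q: curl(q)[i])
    return sum(partial(c(i), i, p) for i in range(3))

print(abs(div_curl([0.3, -0.7, 1.2])))  # numerically ~0
```

So the two Gauss-law constraints cost nothing extra: once imposed on the initial data, the six evolution equations preserve them.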
Let me mention that among the 6+2 component Maxwell's equations, 4 of them, those involving $\vec E,\vec B$, may be solved by writing $\vec E,\vec B$ in terms of four components $\Phi,\vec A$. In this language, we are left with the remaining 4 Maxwell's equations only. However, only 3 of them are really independent at each time, as shown above. That's also OK because the four components of $\Phi,\vec A$ are not quite determined: one of these components (or one function) may be changed by the 1-parameter $U(1)$ gauge invariance. | {
"domain": "physics.stackexchange",
"id": 12912,
"tags": "electromagnetism, classical-electrodynamics, maxwell-equations, differential-equations, degrees-of-freedom"
} |
How to thoroughly distinguish a coordinate singularity and a physical singularity | Question: In a course on general relativity I am following at the moment, it was shown that the singularity $r=2M$ in the Schwarzschild solution is a consequence of the choice of coordinates. Introducing Kruskal-Szekeres coordinates $(u,v)$ resolves this problem: the singularity at $r=2M$ disappears, but if one draws a $(u,v)$ graphs with light cones and such, one still recognizes the event horizon at $r=2M$. The singularity at $r=0$ remains and is said to really be an essential singularity.
So in general: if you can find a coordinate transformation to get rid of a divergence in your metric, it is not a true singularity. However, it struck me that the Kruskal-Szekeres coordinates were only discovered in 1960 (44 years after the Schwarzschild solution).
This leaves me to wonder: is there a more systematic way of distinguishing physical vs. 'fake' singularities? In Carroll's book, I've read something about contractions of curvature quantities diverging at real singularities: E.g. $R^{\alpha \beta \gamma \delta}R_{\alpha\beta\gamma\delta}\propto r^{-6}$ such that $r=0$ is a real singularity (and $r=2M$ not). Could anyone make this ad-hoc rule more quantitative?
Answer: I'm expanding on what @twistor59 says. Coordinatization of spacetime does not have physical consequences; it is just some way you choose to parametrize the manifold (and some parametrizations might be singular). So whether there is actually something physically weird can be decided only by calculating the physical quantities ("observables"). Presumably, this qualitative behaviour must be independent of which coordinates you choose to describe the physics.
What are the physical observables which are independent of coordinatizations? They would have to be scalars, since otherwise they'd transform under a change of coordinates.
At the level of (torsion-free) GR with the Einstein-Hilbert action (no higher derivative terms), the only quantities/variables you can play with are: the metric, the connection (not really a tensor) and the curvature. So, nontrivial scalars must be built from the curvature (and the metric, when you contract indices). Wikipedia seems to have a handy list. :-)
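As a concrete illustration of the curvature-scalar criterion from the question: for Schwarzschild the Kretschmann invariant is $K = R^{\alpha\beta\gamma\delta}R_{\alpha\beta\gamma\delta} = 48M^2/r^6$ (geometric units $G=c=1$), while the coordinate-dependent metric component $g_{rr} = (1-2M/r)^{-1}$ blows up at $r=2M$. A tiny numerical comparison (the choice $M=1$ is illustrative):

```python
M = 1.0  # mass in geometric units (illustrative choice)

def g_rr(r):
    """Schwarzschild g_rr -- a coordinate-dependent quantity."""
    return 1.0 / (1.0 - 2.0 * M / r)

def kretschmann(r):
    """Curvature invariant K = 48 M^2 / r^6 -- coordinate-independent."""
    return 48.0 * M ** 2 / r ** 6

for r in (3.0, 2.0 + 1e-6, 2.0 + 1e-12):
    print(f"r = {r}: g_rr = {g_rr(r):.3e}, K = {kretschmann(r):.3f}")
```

$g_{rr}$ diverges as $r \to 2M$ while $K$ stays at $48/(2M)^6 = 0.75/M^4$: the horizon is a coordinate artifact, whereas $K \propto r^{-6}$ diverges only at the genuine singularity $r = 0$.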
If we include the Levi-Civita symbol $\varepsilon^{\mu \nu \rho \sigma}$ then we can also get $\det(g)$ ($\varepsilon^{\mu \nu \rho \sigma}$ is a pseudotensor but since the determinant has two of those, the quantity has a definite sign, independent of the handedness of your coordinates). I can't think of a good example illustrating how this quantity (~volume scale) tells us about singularities, or a reason for why it doesn't. | {
"domain": "physics.stackexchange",
"id": 7411,
"tags": "general-relativity, singularities"
} |
Can triethylamine/pyridine be used in the synthesis of Labetalol? | Question: As part of my homework, I had to plan the synthesis of labetalol, a blood pressure drug, from its component parts. One of the steps involved was the following SN2 reaction:
The solutions manual just shows the two being added together, but my mind harkened back to previous reactions I had learned in which triethylamine and pyridine were used when $\ce{HCl}$ was being eliminated. Would that be a reasonable thing to do in this instance, or would these amines compete with 4-phenylbutan-2-amine?
Answer: I disagree with IT Tsoi's answer. Triethylamine absolutely will compete, and form ammonium salts. Those salts may react with 4-phenylbutan-2-amine to lead to product, but more likely they will precipitate. And be general gunk in the reaction flask. Pyridine will probably be less of an issue.
In general direct alkylation of amines is a pretty ugly reaction with overalkylation products likely. A better reaction in this vein would be to make the acetamide and alkylate that with the chloroacetophenone, and then deacetylate, or similar strategy.
If you must alkylate the amine, use Hunig's base (N,N-diisopropylethylamine), which is too sterically hindered to be alkylated itself. Also, add the electrophile to excess nucleophile; if the reaction is fast, this will decrease polyalkylation probabilistically, and the carbonyl alpha to the amine may have enough electron-withdrawing character to slow down the increase in nucleophilicity that you get from a more substituted amine.
Further mucking up this route is the phenol with two carbonyls on the ring, that is probably quite acidic as well, and may be deprotonated with the base scavengers. Use a phosgene protecting group to cover up both the phenol and the aramide, and this problem will be side-stepped. | {
"domain": "chemistry.stackexchange",
"id": 5509,
"tags": "organic-chemistry, acid-base"
} |
Enigma Machine Simulation in Python | Question: Background and Example
This code simulates the Enigma machine, minus the plugboard. Here's some test code that illustrates the machine's construction and use:
>>> r1 = Rotor("VEADTQRWUFZNLHYPXOGKJIMCSB", 1)
>>> r2 = Rotor("WNYPVJXTOAMQIZKSRFUHGCEDBL", 2)
>>> r3 = Rotor("DJYPKQNOZLMGIHFETRVCBXSWAU", 3)
>>> reflector = Reflector("EJMZALYXVBWFCRQUONTSPIKHGD")
>>> machine = Machine([r1, r2, r3], reflector)
>>> x = machine.encipher("ATTACK AT DAWN")
>>> machine.decipher(x)
'ATTACK AT DAWN'
Rotors
The Rotor class is pretty simple. A Rotor knows how to rotate itself, and provides methods for navigating connections with the adjacent circuits through the encipher and decipher methods.
class Rotor:
    """
    Models a 'rotor' in an Enigma machine

    Rotor("BCDA", 1) means that A->B, B->C, C->D, D->A and the rotor has been
    rotated once from ABCD (the clear text character 'B' is facing the user)

    Args:
        mappings (string) encipherings for the machine's alphabet.
        offset   (int)    the starting position of the rotor
    """

    def __init__(self, mappings, offset=0):
        self.initial_offset = offset
        self.reset()
        self.forward_mappings = dict(zip(self.alphabet, mappings))
        self.reverse_mappings = dict(zip(mappings, self.alphabet))

    def reset(self):
        """
        Helper to re-initialize the rotor to its initial configuration

        Returns: void
        """
        self.alphabet = Machine.ALPHABET
        self.rotate(self.initial_offset)
        self.rotations = 1

    def rotate(self, offset=1):
        """
        Rotates the rotor the given number of characters

        Args: offset (int) how many turns to make
        Returns: void
        """
        for _ in range(offset):
            self.alphabet = self.alphabet[1:] + self.alphabet[0]
        self.rotations = offset

    def encipher(self, character):
        """
        Gets the cipher text mapping of a plain text character

        Args: character (char)
        Returns: char
        """
        return self.forward_mappings[character]

    def decipher(self, character):
        """
        Gets the plain text mapping of a cipher text character

        Args: character (char)
        Returns: char
        """
        return self.reverse_mappings[character]
Reflector
Pretty straightforward. A Reflector can reflect a character and is used to put the input back through machine's rotors.
class Reflector:
    """
    Models a 'reflector' in the Enigma machine. Reflector("CDAB")
    means that A->C, C->A, D->B, B->D

    Args: mappings (string) bijective map representing the reflection
          of a character
    """

    def __init__(self, mappings):
        self.mappings = dict(zip(Machine.ALPHABET, mappings))
        for x in self.mappings:
            y = self.mappings[x]
            if x != self.mappings[y]:
                raise ValueError("Mapping for {0} and {1} is invalid".format(x, y))

    def reflect(self, character):
        """
        Returns the reflection of the input character

        Args: character (char)
        Returns: char
        """
        return self.mappings[character]
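The constructor's involution check can be exercised on its own; a small sketch that re-implements the same validation standalone (so it runs without the Machine class, which is defined later — the helper name is illustrative):

```python
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def check_involution(mappings):
    """Same validation the Reflector constructor performs."""
    table = dict(zip(ALPHABET, mappings))
    for x, y in table.items():
        if table[y] != x:
            raise ValueError("Mapping for {0} and {1} is invalid".format(x, y))

check_involution("EJMZALYXVBWFCRQUONTSPIKHGD")  # the reflector used above: OK
try:
    check_involution("BCDEFGHIJKLMNOPQRSTUVWXYZA")  # a shift, not an involution
except ValueError as e:
    print("rejected:", e)
```

The involution property is what makes the machine self-reciprocal: running cipher text back through with the same rotor state recovers the plain text.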
Machine
This class exposes the encipher and decipher methods. Most of the enciphering is done through the helper function encipher_character.
class Machine:
    """
    Models an Enigma machine (https://en.wikipedia.org/wiki/Enigma_machine)

    Args:
        rotors    (list[Rotor]) the configured rotors
        reflector (Reflector)   to use
    """

    ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

    def __init__(self, rotors, reflector):
        self.rotors = rotors
        self.reflector = reflector

    def encipher(self, text):
        """
        Encipher the given input

        Args: text (string) plain text to encode
        Returns: string
        """
        return "".join((self.encipher_character(x) for x in text.upper()))

    def decipher(self, text):
        """
        Decipher the given input

        Args: text (string) cipher text to decode
        Returns: string
        """
        for rotor in self.rotors:
            rotor.reset()
        return self.encipher(text)

    def encipher_character(self, x):
        """
        Runs a character through the machine's cipher algorithm

        1. If x is not in the known character set, don't encipher it
        2. For each of the rotors, determine the character in contact with x.
           Determine the enciphering for that character, and use it as the next
           letter to pass through to the next rotor in the machine's sequence
        3. Once we get to the reflector, get the reflection and repeat the above
           in reverse
        4. Rotate the first rotor, and check if any other rotor should be rotated
        5. Return the character at the terminating contact position as the input
           character's enciphering

        Args: x (char) the character to encode
        Returns: char
        """
        if x not in Machine.ALPHABET:
            return x

        # compute the contact position of the first rotor and machine's input
        contact_index = Machine.ALPHABET.index(x)

        # propagate contact right
        for rotor in self.rotors:
            contact_letter = rotor.alphabet[contact_index]
            x = rotor.encipher(contact_letter)
            contact_index = rotor.alphabet.index(x)

        # reflect and compute the starting contact position with the right rotor
        contact_letter = Machine.ALPHABET[contact_index]
        x = self.reflector.reflect(contact_letter)
        contact_index = Machine.ALPHABET.index(x)

        # propagate contact left
        for rotor in reversed(self.rotors):
            contact_letter = rotor.alphabet[contact_index]
            x = rotor.decipher(contact_letter)
            contact_index = rotor.alphabet.index(x)

        # rotate the first rotor and anything else that needs it
        self.rotors[0].rotate()
        for index in range(1, len(self.rotors)):
            rotor = self.rotors[index]
            turn_frequency = len(Machine.ALPHABET) * index
            if self.rotors[index - 1].rotations % turn_frequency == 0:
                rotor.rotate()

        # finally 'light' the output bulb
        return Machine.ALPHABET[contact_index]
Improvements
I'm wondering how the encipher algorithm might be implemented more cleanly (in particular, how the code might be better distributed across the Rotor and Reflector classes). Overall comments on the style and documentation would be much appreciated, too.
FYI: development for this project has been moved to: https://github.com/gjdanis/enigma
Answer: Your Rotor.rotate can be simplified to
def rotate(self, offset=1):
    self.rotations = offset
    self.alphabet = self.alphabet[offset:] + self.alphabet[:offset]
This saves having to do a costly string concatenation offset times.
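A quick equivalence check between the loop-based rotation and the sliced version (standalone helper names here are illustrative, not from the reviewed code):

```python
def rotate_loop(alphabet, offset):
    # original implementation: shift one character at a time
    for _ in range(offset):
        alphabet = alphabet[1:] + alphabet[0]
    return alphabet

def rotate_slice(alphabet, offset):
    # suggested implementation: a single slice-and-concatenate
    return alphabet[offset:] + alphabet[:offset]

alpha = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
assert all(rotate_loop(alpha, k) == rotate_slice(alpha, k)
           for k in range(len(alpha)))
```

One caveat: for offsets larger than the alphabet length the slice version would need offset % len(alphabet), which the loop handles implicitly.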
Different commonly used ASCII character classes are included in the string module. You can use string.ascii_uppercase for the uppercase alphabet.
join can take a generator expression directly, so you can get rid of one set of parentheses in Machine.encipher:
return "".join(self.encipher_character(x) for x in text.upper())
It is better to ask forgiveness than permission. One place where you can use this is the check whether the character to encode is in the character set. Just use try..except in Machine.encipher_character:
# compute the contact position of the first rotor and machine's input
try:
    contact_index = Machine.ALPHABET.index(x)
except ValueError:
    return x
This way you avoid having to go through the list more often than necessary (the edge case Z will iterate through the alphabet twice, once to see if it is there and once to actually get the index).
In the same function you have the two blocks # propagate contact right and # propagate contact left. The comments already suggest that this would be a perfect place to extract a function. The blocks differ only in whether the rotors are traversed in reverse and whether rotor.encipher or rotor.decipher is used. Make a method Machine.rotate_rotors:
def rotate_rotors(self, left=False):
    """propagate contact right or left"""
    iter_direction = reversed if left else iter
    for rotor in iter_direction(self.rotors):
        contact_letter = rotor.alphabet[self.contact_index]
        func = rotor.decipher if left else rotor.encipher
        self.contact_index = rotor.alphabet.index(func(contact_letter))
I would also add the reflector rotating into a method:
def rotate_reflector(self):
    """reflect and compute the starting contact position with the right rotor"""
    contact_letter = Machine.ALPHABET[self.contact_index]
    x = self.reflector.reflect(contact_letter)
    self.contact_index = Machine.ALPHABET.index(x)
You can then use these like this:
self.contact_index = Machine.ALPHABET.index(x)
self.rotate_rotors()
self.rotate_reflector()
self.rotate_rotors(left=True)
I made contact_index an attribute of the machine, so we don't have to pass the contact_index in and return it from every call. I also turned your comments into docstrings. | {
"domain": "codereview.stackexchange",
"id": 21660,
"tags": "python, object-oriented, enigma-machine"
} |