anchor | positive | source |
|---|---|---|
Does feature scaling have any benefits if all features are on the same scale? | Question: By scaling features, we can prevent one feature from dominating the decisions of a model. For example, say height (cm) and age (years) are two features in my data. Since the range of heights is larger than that of ages, a trained model could weight the importance of height much more than age. This could result in a poor model.
However, say that all of my features are binary, they take a value of either 0 or 1. In such a case, does feature scaling still have any benefits?
Answer: If all your features are binary, then you don't need to apply normalization to them, since their values are already on the same scale. | {
"domain": "ai.stackexchange",
"id": 2663,
"tags": "data-preprocessing, features, feature-engineering"
} |
Tokenizer / lexer | Question: I wanted to make a calculator in Python, so I first wrote a tokenizer. I have written one before, but this time I tried to refine it a bit. Any thoughts on improvements, or things I could have done better?
import re
class KeyWord:
def __init__(self, name, regex):
self.name = name
self.regex = regex
class NewToken:
def __init__(self, name, value, start, end):
self.name = name
self.value = value
self.start = start
self.end = end
class Lexer:
def __init__(self):
self.text = ""
self.keyWords = []
self.delimiters = ["+", "-", "/", "*", "%", "(", ")", "\n", " "]
self.ignore = [" "]
self.newTokens = []
self.setTokens()
def setTokens(self):
self.keyWords.append(KeyWord("NUMBER", re.compile("([0-9]*\.[0-9]+)|([0-9]+\.[0-9]*)|([0-9])")))
self.keyWords.append(KeyWord("PLUS", re.compile("\+")))
self.keyWords.append(KeyWord("MINUS", re.compile("-")))
self.keyWords.append(KeyWord("TIMES", re.compile("\*")))
self.keyWords.append(KeyWord("DIVIDE", re.compile("\/")))
self.keyWords.append(KeyWord("MODULO", re.compile("%")))
self.keyWords.append(KeyWord("OPENBRACKET", re.compile("\(")))
self.keyWords.append(KeyWord("CLOSEBRACKET", re.compile("\)")))
def setText(self, text):
self.text = text.strip() + "\n"
def getTokens(self):
self.newTokens = []
word = ""
#Loop through input
for i in range(0, len(self.text)):
ignoreFound = False
for ig in self.ignore:
if self.text[i] == ig:
ignoreFound = True
tokenFound = False
#Look for a delimiter
for d in self.delimiters:
if tokenFound:
break
#If a delimiter is found
if self.text[i] == d:
#Look for keyword
for t in self.keyWords:
match = t.regex.match(word)
if match:
self.newTokens.append(NewToken(t.name, word, (i - len(word)), i))
word = ""
tokenFound = True
break
#Check if delimiter has a token
if not ignoreFound:
for t in self.keyWords:
match = t.regex.match(d)
if match:
self.newTokens.append(NewToken(t.name, d, i, i))
tokenFound = True
break
if not tokenFound and not ignoreFound:
word += self.text[i]
self.newTokens.append(NewToken("EOF", "", i, i))
return self.newTokens
Answer:
I have written one before
Yeah, I can tell. Nice. This looks well organized.
def setTokens(self):
PEP-8 asks that you spell it set_tokens.
Similarly for some other setters and getters,
and for assignments to e.g. self.key_words & self.new_tokens.
self.keyWords.append(KeyWord("NUMBER", re.compile("([0-9]*\.[0-9]+)|([0-9]+\.[0-9]*)|([0-9])")))
Hmmm, several remarks.
DRY, you have an opportunity here to loop
over a list of pairs (list of tuples),
so there's just a single .append that we repeatedly call.
Perhaps you have your reasons, but I personally disagree with your definition of NUMBER.
Choose a different name if it is a specialized restricted number from some problem domain.
With alternation you mention frac|real|digit.
The digit seems superfluous, it is subsumed by at least one of the other two.
I'd prefer to see the order real|frac so we can mandate
"starts with at least one digit".
After that, you passed up the opportunity to say \.? for optional decimal.
The frac case would then be "starts with decimal point".
Also your current expression rejects 12 while accepting 1 and 123..
Rather than e.g. "[0-9]", consider saying r"\d".
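Putting those regex remarks together, one possible revision of NUMBER might look like this (a sketch of the suggestions above, not the reviewer's exact pattern):

```python
import re

# "Starts with at least one digit, with an optional decimal part",
# plus the separate "starts with a decimal point" case -- written
# with a raw string and \d instead of [0-9].
NUMBER = re.compile(r"\d+\.?\d*|\.\d+")

# fullmatch, so a trailing mismatch is not silently accepted
assert NUMBER.fullmatch("12")      # multi-digit integers now match
assert NUMBER.fullmatch("1.5")
assert NUMBER.fullmatch(".5")
assert not NUMBER.fullmatch("abc")
```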
self.keyWords.append(KeyWord("PLUS", re.compile("\+"))) ...
self.keyWords.append(KeyWord("DIVIDE", re.compile("\/")))
Please run flake8 against your code, and heed its warnings.
Here, I have a strong preference for phrasing it re.compile(r"\+"),
with a raw string, to avoid confusion with e.g. "\t\n" escapes.
Also, the regex / works fine, similar to the regex Z,
it is just a single character, no need for a \ backwhack.
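Combining the DRY remark with the raw-string advice, set_tokens could be driven by a single table of (name, pattern) pairs. A sketch, reusing the names from the posted code (renamed per PEP-8):

```python
import re

# one place to maintain all the token definitions
TOKEN_SPECS = [
    ("NUMBER",       r"(\d*\.\d+)|(\d+\.\d*)|(\d)"),
    ("PLUS",         r"\+"),
    ("MINUS",        r"-"),
    ("TIMES",        r"\*"),
    ("DIVIDE",       r"/"),   # a plain character; no backslash needed
    ("MODULO",       r"%"),
    ("OPENBRACKET",  r"\("),
    ("CLOSEBRACKET", r"\)"),
]

def set_tokens(self):
    # a single append call, driven by the table above
    for name, pattern in TOKEN_SPECS:
        self.key_words.append(KeyWord(name, re.compile(pattern)))
```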
for i in range(0, len(self.text)):
Typical idiom would be for i, ch in enumerate(self.text).
The whole ig loop is much too verbose.
Just test if ch in self.ignore (if self.text[i] in self.ignore)
and be done with it.
Two algorithmic remarks:
It's not yet obvious to me why we need flag + loop to ignore optional
whitespace. Wouldn't a simple continue at top of loop suffice?
Maybe that range is not convenient,
and you'd be happier with a while loop where you increment i yourself.
DRY, I'm not keen on self.delimiters,
it is redundant with those beautiful regexes you went to the trouble of defining.
I'd like to see one or the other of them go,
so you don't have to remember to maintain two things in parallel
when you (or someone else!) is maintaining this a few months from now.
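As a sketch of how those points combine — enumerate, a membership test for ignored characters, and the keyword regexes doing double duty as the delimiter list — the scanning loop might be reshaped roughly like this (simplified, and not a drop-in replacement for the posted getTokens):

```python
def tokenize(text, keywords, delimiters, ignore=" "):
    """Split text into (name, value) pairs; a minimal sketch."""
    tokens = []
    word = ""

    def flush():
        # emit the accumulated word as a token, if any keyword matches
        nonlocal word
        for name, regex in keywords:
            if regex.match(word):
                tokens.append((name, word))
                break
        word = ""

    for i, ch in enumerate(text):
        if ch not in delimiters:
            word += ch
            continue
        if word:
            flush()
        if ch in ignore:
            continue
        # the delimiter itself may be a token
        for name, regex in keywords:
            if regex.match(ch):
                tokens.append((name, ch))
                break
    return tokens
```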
Overall, looks pretty good. | {
"domain": "codereview.stackexchange",
"id": 38584,
"tags": "python, lexical-analysis"
} |
How hard is counting the number of vertex covers after a small perturbation? | Question: Suppose you are given both a graph $G(V,E)$ and the exact number $C$ of vertex covers of $G$. Now suppose that $G$ is subject to a very small perturbation $P$, leading to $G'=P(G)$. More precisely, the perturbation $P$ is restricted to be one of the following:
Addition of $1$ new edge.
Addition of $2$ new distinct edges.
Removal of $1$ existing edge.
Removal of $2$ distinct existing edges.
Question
Given $G$, $C$, and $P$, how hard is it to determine the number $C'$ of vertex covers of $G'=P(G)$? Is it possible to exploit the knowledge of $C$ and the fact that the perturbation is so tiny in order to efficiently determine $C'$?
Answer: Since counting vertex covers is #P-complete, your problem is unlikely to be in P; otherwise you could count the number of vertex covers starting from the empty graph on $|V|$ vertices, adding edges one by one. | {
"domain": "cstheory.stackexchange",
"id": 836,
"tags": "cc.complexity-theory, graph-theory, graph-algorithms, counting-complexity"
} |
How to create waypoints for nav2 based on saved .pgm map? | Question:
I need to create a set of waypoints for my robot to go through and perform a specific task. My plan was to manually map my environment, then save it using nav2_map_server, generate an array of waypoints based on the .pgm file, and pass them to the NavigateThroughPoses action server. However, waypoints based just on the .pgm file do not work, for obvious reasons (different origins and scale). Does anyone here have any idea how to transform those waypoints into my robot's frame of reference?
I am using Ubuntu 20.04 and ROS2 Galactic
Originally posted by sdudiak01 on ROS Answers with karma: 27 on 2022-03-13
Post score: 1
Answer:
Did you try [Python Simple Commander](https://navigation.ros.org/commander_api/index.html) and its goThroughPoses() or followWaypoints()? It's a nice and easy Python API to the Nav2 stack, and it seems suitable to your needs.
You can try to calculate your waypoints based on your init_pose (you should know both the real-world coordinates and the corresponding coordinates on the PGM map), the chosen points on the map (pixels (x, y)), and the resolution of your map (in meters/pixel, defined in the *.yaml) with the correct set of translations.
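As a sketch of that pixel-to-world translation (the numbers below are hypothetical placeholders for what your own map's .yaml would contain):

```python
# Hypothetical values taken from the map's .yaml file: `origin` is the
# world coordinate of the map's lower-left corner, and `resolution` is
# in meters/pixel.
origin_x, origin_y = -10.0, -10.0
resolution = 0.05
map_height_px = 400  # height of the .pgm image, in pixels

def pixel_to_world(px, py):
    """Convert a pixel picked on the .pgm (x right, y down from the
    top-left corner) into map-frame world coordinates."""
    wx = origin_x + px * resolution
    # image y grows downward, map y grows upward, hence the flip
    wy = origin_y + (map_height_px - py) * resolution
    return wx, wy
```

The resulting world coordinates can then be packed into the poses you pass to the NavigateThroughPoses action server.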
There are some sources I have found, both ROS1 and ROS2, maybe they will help you:
https://husarion.com/tutorials/ros-tutorials/9-map-navigation/ (ROS1, navigation based on map)
https://github.com/ros-planning/navigation2/tree/main/nav2_simple_commander (ROS2, a few demonstrations to highlight a couple of simple autonomy applications)
https://automaticaddison.com/the-ultimate-guide-to-the-ros-2-navigation-stack/ (overall step by step guide to Nav2)
https://osrf.github.io/ros2multirobotbook/simulation.html (traffic-editor from OpenRMF could be an example of how to export map points as a robot path)
Originally posted by ljaniec with karma: 3064 on 2022-03-15
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by sdudiak01 on 2022-03-15:
Thanks! I totally forgot about the .yaml file generated alongside the picture. Based on this file it is possible to get real-world coordinates of selected waypoints
Comment by ljaniec on 2022-03-15:
If it was enough, please accept the answer so others could see it :) | {
"domain": "robotics.stackexchange",
"id": 37502,
"tags": "navigation, ros2, mapping"
} |
Does the Parasympathetic Tract of Colon Sigmoideum Travel with Nervus Vagus and its Nucleus Dorsalis Nervi Vagi? | Question: I have the following tractus now:
nucleus parasymphaticus sacrales -> nervus splanchnic -> ganglion
terminalis -> colon sigmoideum
The tract is parasympathetic.
It suggests me that it should travel along CN 9 or CN 10, most likely with CN 10.
Does tractus colon sigmoideum travel with CN 10 and its nucleus dorsalis nervi vagi?
Answer: I found the following from Wikipedia:
Pelvic splanchnic nerves are the primary source for parasympathetic innervation [for Sigmoid colon].
The non-cranial nerve extends the scope of the cranial nerve through a synapse between the two neurons; the synapse transmits the electrical impulse from one neuron to the other.
So this way the splanchnic nerve does not need to be with CN 10 in the region of impact.
The parasympathetic tract of colon sigmoideum can travel with nervus vagus by having a synapse between pelvic splanchnic nerve and CN 10. I verified this from my professor. | {
"domain": "biology.stackexchange",
"id": 274,
"tags": "neuroscience, cranial-nerves"
} |
Dirac's paper on classically radiating electrons: scalar product of $ev_{\nu}f^{\nu}_{\mu}$ with $v$ is zero? | Question: In Dirac's paper, Classical Theory of Radiating Electrons, he analyzes electromagnetic radiation entering and leaving a world tube surrounding the world line of a charge moving under the influence of this electromagnetic radiation, and shows that
$$\frac{1}{2}e^2\epsilon^{-1}\dot{v}_{\mu}-ev_{\nu}f^{\nu}_{\mu}=\dot{B}_\mu\tag{18}$$
$e$ charge
$\epsilon$ small real number representing the width of a tube surrounding the world line of the charge
$v$ four velocity of the charge
$f$ a function of the actual, retarded and advanced electromagnetic field tensors
$B$ a four vector
He then takes the scalar product of both sides with $v$ to get, using his notation () for the scalar product of two four-vectors:
$$(v\dot{B})= \frac{1}{2}e^2\epsilon^{-1}(\dot{v}v) = 0$$
For this to be true, the scalar product of $v$ with the other term $ev_{\nu}f^{\nu}_{\mu}$ must equal zero: how does one show this?
note: It's really bizarre that Dirac feels the need to state the far more obvious identities $(v\dot v)=0$, $v^2 = 1$ just before this bit!
Answer: Recall that $F^{\mu\nu}$ is skew-symmetric. Thus
$$
v^\mu v^\nu F_{\mu\nu}\equiv 0
$$
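To spell out the step: renaming the dummy indices and then using the antisymmetry $F_{\nu\mu}=-F_{\mu\nu}$ gives
$$
v^\mu v^\nu F_{\mu\nu}
= v^\nu v^\mu F_{\nu\mu}
= -\,v^\nu v^\mu F_{\mu\nu}
= -\,v^\mu v^\nu F_{\mu\nu},
$$
so the contraction equals its own negative and therefore vanishes.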
One should point out that, in Dirac's article,
$$
f_{\mu\nu}=F_{\mu\nu}+\frac43e\ddot v_{[\mu}v_{\nu]}
$$
which is still skew in $\mu,\nu$, and therefore $v^\mu v^\nu f_{\mu\nu}$ also vanishes. | {
"domain": "physics.stackexchange",
"id": 38728,
"tags": "homework-and-exercises, electromagnetic-radiation, electrons, classical-electrodynamics"
} |
What is the direction of the electricity flow in a DC circuit? | Question: I know that in AC, the direction of the flow of electrons is constantly changing, but this question is for a DC circuit like an LED with a battery.
Does current in such a circuit flow from the - side of a battery to the + side of a battery? This makes sense to me as the negative side wants to get rid of the negative electrons as it has too many, and the positive side wants to gain electrons so they move in that direction.
But recently I've heard that it's actually the other way round? + > -
I've also heard this is to do with how electricity was first discovered and they got it wrong? Now I'm really confused on which way it actually flows, or if it's just the naming of the positive and negative sides of a battery are inverted..
Could somebody shed some light on this? I would appreciate it very much.
Answer: The direction of current actually is a convention. Before we knew that electrons were the moving charges, people thought that the positive charges were the ones responsible for the current. But it doesn't matter: there are two sign flips when you change from negative to positive or vice versa, one for the direction of the current and another for the sign of the charge. You end up with the same answer as before. | {
"domain": "physics.stackexchange",
"id": 89592,
"tags": "electricity, electric-circuits"
} |
How to re-create the following circuit image? | Question: What would be the best way to re-create the following image of the HHL quantum circuit without compromising on image quality (the image is from this review paper)?
Using qasm2circ I can create the basic circuit. But I don't know any good program/software which will help to produce the labels of the various components and also the nice transparent boxes emphasizing the 3 main steps.
FWIW, I had contacted the corresponding author last week asking about the software they used to generate the image (and also whether the image has any associated copyright license), but haven't received any response so far, which is why I am asking here.
Answer: Edit —
I've revised this answer to make some small improvements in the commands, to tidy up the commands for drawing the wires for instance, because it seemed worthwhile.
Flattering as it is to have this answer be the accepted one for the time being, I think I should point out that the quantikz package (see Daftwullie's answer below) and the qpic package (as pointed out in cnada's answer below) are both libraries with reasonably complete interfaces, and so better for people looking for a quick and simple solution. The code below is probably more suitable for people who are comfortable with TiKZ, and might like to tweak their circuit diagrams with TiKZ commands, but who wouldn't mind having some macros to streamline drawing their diagrams. — Nice as it might be, for the moment I have no ambition to write a LaTeX package to make these macros available with a nice interface for all purposes (but anyone else who would like to is welcome if they give me some of the credit).
Snippet
See below for all of the code used to generate this example: the following commands are just the ones used to draw the circuit itself. (This snippet involves macros which I have defined for the purpose of this post, which I also define below.)
% define initial positions of the quantum wires
\xdef\dy{1.25}
\defwire (A) at (0);
\defwire (B) at ({-\dy});
\defwire (C) at ({-2*\dy});
% draw wires
\xdef\dt{0.8}
\drawwires [\dt] (15);
\node at ($(B-0)!0.5!(B-1)$) {$/$};
\node at ($(C-0)!0.5!(C-1)$) {$/$};
% draw gates
\gate (B-2) [H^{\otimes n}];
\ctrlgate (B-3) (C-3) [U];
\virtgate (A-3);
\gate (B-4) [\mathit{FT}^\dagger];
\ctrlgate (B-7) (A-7) [R];
\virtgate (C-7);
\gate (B-10) [\mathit{FT}];
\ctrlgate (B-11) (C-11) [U^\dagger];
\gate (B-12) [H^{\otimes n}];
\virtgate (A-12);
\meas (A-14) [Z];
% draw input and output labels
\inputlabel (A-0) [\lvert 0 \rangle];
\inputlabel (B-0) [\lvert 0 \rangle^{\otimes n}];
\inputlabel (C-0) [\lvert b \rangle];
\outputlabel (A-15) [\lvert 1 \rangle];
\outputlabel (B-15) [\lvert 0 \rangle^{\otimes n}];
\outputlabel (C-15) [\lvert x \rangle];
Result
Preamble
You will need a pre-amble which contains at least the amsmath package, as well as the tikz package. You may not need all of the tikz libraries below, but they don't hurt. Be sure to include the commands involving layers.
\documentclass[a4paper,10pt]{article}
\usepackage{amsmath}
\usepackage{tikz}
\usetikzlibrary{shapes,arrows,calc,positioning,fit}
\pgfdeclarelayer{background}
\pgfsetlayers{background,main}
For the purposes of this post, I've defined some ad-hoc macros to make reading the coded circuit easier for public consumption. (The macro format is not exactly good LaTeX practise, but I define them this way in order for the syntax to be more easily read and for it to stand out.) The parameters for dimensions in these gates were chosen to look good in your sample-circuit, and were found by trial-and-error: you can change them to change the appearance of your circuit.
The first is a simple macro to draw a gate.
\def\gate (#1) [#2]{%
\node [
draw=black,fill=white, inner sep=0pt,
minimum width=2.5em, minimum height=2em, outer sep=1ex
] (#1) at (#1) {$#2$}%
}
The second is a macro to draw an 'invisible' gate. This is not really a command which is important for the circuit itself, but helps for the placement of background frames.
\def\virtgate (#1){%
\node [
draw=none, fill=none,
minimum width=2.5em, minimum height=2em, outer sep=1ex
] (#1) at (#1) {};
}
The third is a macro to draw a controlled gate. This command works well enough for your example circuit, but doesn't allow you to draw a CNOT. (Exercise for the reader proficient in TiKZ: make a \CNOT command.)
\def\ctrlgate (#1) (#2) [#3]{%
\filldraw [black] (#1) circle (2pt) -- (#2);
\gate (#2) [#3]
}
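As a sketch of that exercise (untested, with dimensions guessed to match the other macros), a \cnot macro might look like:

\def\cnot (#1) (#2){%
    \filldraw [black] (#1) circle (2pt) -- (#2);
    \draw (#2) circle (0.5em);
    \draw ($(#2)-(0.5em,0)$) -- ($(#2)+(0.5em,0)$)
          ($(#2)-(0,0.5em)$) -- ($(#2)+(0,0.5em)$);
}

This draws the control dot and wire as in \ctrlgate, then the usual circled-plus target symbol.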
The fourth is a macro to draw a "measurement" box. I think it is perfectly reasonable to want to specify an explicit basis or observable for the measurement, so I allow an argument to specify that.
\def\meas (#1) [#2]{%
\node [
draw=black, fill=white, inner sep=2pt,
label distance=-5mm, minimum height=2em, minimum width=2em
] (meas) at (#1) {};
\draw ($(meas.south) + (-.75em,1.5mm)$) arc (150:30:.85em);
\draw ($(meas.south) + (0,1mm)$) -- ++(.8em,1em);
\node [
anchor=north west, inner sep=1.5pt, font=\small
] at (meas.north west) {#2};
}
I define two short macros to produce the labels for the inputs and outputs of wires.
\def\inputlabel (#1) [#2]{%
\node at (#1) [anchor=east] {$#2$}
}
\def\outputlabel (#1) [#2]{%
\node at (#1) [anchor=west] {$#2$}
}
The macros above are all looking for co-ordinates at which to place the gates. I also define macros to define "wires", which have regularly spaced co-ordinates where gates can be located.
The first is a macro which allows you to define a named wire (such as A, B, x3, etc.) and its vertical position in the circuit diagram (these diagrams are left-to-right by default, which you can change most easily using the rotate option of the tikzpicture environment.)
\def\defwire (#1) at (#2){%
\ifx\qmwires\empty
\edef\qmwires{#1}%
\else
\edef\qmwires{\qmwires,#1}%
\fi
\coordinate (#1-0) at ($(0,#2)$)%
}
Having defined a collection of wires, the following command then draws all of them, starting from the same left-most starting point and ending at the same right-most ending point, with increments by a fixed amount (given in the square brackets) and for a given number of time slices. This defines a sequence of 'time-slice' co-ordinates for each wire: for a wire A, it defines the co-ordinates A-0, A-1, and so forth up until A-t (where t is the value of the second argument).
\def\drawwires [#1] (#2);{%
\xdef\u{0}
\foreach \t in {0,...,#2} {%
\foreach \l in \qmwires {%
\coordinate (\l-\t) at ($(\l-\u) + (#1,0)$);
\draw (\l-\u) -- (\l-\t);
}
\xdef\u{\t}
}
}
The final macro is one to draw a background frame for different stages in your circuit. It takes an argument specifying which gates (including the invisible virtual 'gates') are meant to belong to the frame.
\def\bgframe [#1]{%
\node [%
draw=black, fill=yellow!40!gray!30!white, fit=#1
] {}%
}
The circuit diagram itself
Now to begin drawing your circuit.
\begin{document}
\begin{tikzpicture}
We start by defining the relative positions of the wires. (For convenience, I do this using a macro to define the spacing between them, that I can quickly change to adjust the spacing.) Below, I define three wires: A, B, and C.
\let\qmwires\empty
% define initial positions of the quantum wires
\xdef\dy{1.25}
\defwire (A) at (0);
\defwire (B) at ({-\dy});
\defwire (C) at ({-2*\dy});
We now draw the circuit, using the command to draw the wires and define the co-ordinates on the wire, and placing gates independently of one another according to those co-ordinates.
% draw circuit
\xdef\dt{0.8}
\drawwires [\dt] (15);
\node at ($(B-0)!0.5!(B-1)$) {$/$};
\node at ($(C-0)!0.5!(C-1)$) {$/$};
\gate (B-2) [H^{\otimes n}];
\ctrlgate (B-3) (C-3) [U];
\virtgate (A-3);
\gate (B-4) [\mathit{FT}^\dagger];
\ctrlgate (B-7) (A-7) [R];
\virtgate (C-7);
\gate (B-10) [\mathit{FT}];
\ctrlgate (B-11) (C-11) [U^\dagger];
\gate (B-12) [H^{\otimes n}];
\virtgate (A-12);
\meas (A-14) [Z];
% draw input and output labels
\inputlabel (A-0) [\lvert 0 \rangle];
\inputlabel (B-0) [\lvert 0 \rangle^{\otimes n}];
\inputlabel (C-0) [\lvert b \rangle];
\outputlabel (A-15) [\lvert 1 \rangle];
\outputlabel (B-15) [\lvert 0 \rangle^{\otimes n}];
\outputlabel (C-15) [\lvert x \rangle];
Annotations for the circuit
The rest of the circuit diagram is literally commentary. We can do this using a combination of plain-old TiKZ nodes, and the \bgframe macro which I defined above. (Annotations are a little less predictable, so I don't have a good way of making them as systematic as the earlier parts of the circuit, so general TiKZ commands are a reasonable approach unless you know how to make your annotations uniform.)
First the annotations for the stages of the circuit:
% draw annotations
\node [minimum height=4ex] (annotate-1) at ($(A-3) + (0,1)$)
{\textit{Phase estimation}};
\node [minimum height=4ex] (annotate-2) at ($(A-7) + (0,1)$)
{\textit{$\smash{R(\tilde\lambda^{-1}})$ rotation}};
\node [minimum height=4ex] (annotate-3) at ($(A-11) + (0,1)$)
{\textit{Uncompute}};
\node (annotate-a) at ($(C-3) + (0,-1.25)$) {\textit{(a)}};
\node (annotate-b) at ($(C-7) + (0,-1.25)$) {\textit{(b)}};
\node (annotate-c) at ($(C-11) + (0,-1.25)$) {\textit{(c)}};
Next, the annotations for the registers, at the input:
\node (A-in-annotate) at ($(A-0) + (-3em,0)$) [anchor=east]
{\parbox{4.5em}{\centering Ancilla register $S$ }};
\node (B-in-annotate) at ($(B-0) + (-3em,0)$) [anchor=east]
{\parbox{4.5em}{\centering Clock \\ register $C$ }};
\node (C-in-annotate) at ($(C-0) + (-3em,0)$) [anchor=east]
{\parbox{4.5em}{\centering Input \\ register $I$ }};
Finally, the frames for the stages of the circuit.
% draw frames for stages of the circuit
\begin{pgfonlayer}{background}
\bgframe [(annotate-1)(B-2)(B-4)(C-3)];
\bgframe [(annotate-2)(B-7)(C-7)];
\bgframe [(annotate-3)(B-10)(B-12)(C-11)];
\end{pgfonlayer}
And that's the end of the circuit.
\end{tikzpicture}
\end{document} | {
"domain": "quantumcomputing.stackexchange",
"id": 342,
"tags": "resource-request, circuit-construction, qasm2circ"
} |
On tensor manipulation and algebra | Question: I am reading Quantum Field Theory in a Nutshell by Anthony Zee. On page 33, I can't figure out how he got equation 3.
The initial equation is $$[-(k^2 - m^2)g^{\mu\nu} + k^{\mu}k^{\nu}]D_{\nu\lambda}(k)=\delta^{\mu}_{\lambda}$$
which he manipulated to get equation (I.5.3) which is
$$D_{\nu\lambda}(k)=\frac{-g_{\nu\lambda}+k_{\nu}k_{\lambda}/m^2}{k^2-m^2}.\tag{I.5.3}$$
Can anyone suggest how to do it?
Answer: Introducing a complete set of orthogonal projectors
$$\begin{align}P^{\mu\nu}_\perp=&g^{\mu\nu}-\frac{k^\mu k^\nu}{k^2}\\P^{\mu\nu}_\parallel=&\frac{k^\mu k^\nu}{k^2},\end{align}\tag{1}$$
which satisfy
$$P_\perp^{\mu\nu}+P_\parallel^{\mu\nu}=g^{\mu\nu}\tag{2.1}$$
$$(P_{(i)})^{\mu\nu}(P_{(j)})_{\nu\lambda}=(P_{(i)})^\mu_\lambda~\delta^{(j)}_{(i)}\tag{2.2}$$
where $i,j=\perp,\parallel$, you can write the tensor
$$A^{\mu\nu}=-(k^2-m^2)g^{\mu\nu}+k^\mu k^\nu\tag{3}$$
as a linear combination of them. You can see it's
$$A^{\mu\nu}=-(k^2-m^2)P_\perp^{\mu\nu}+m^2P_\parallel^{\mu\nu}.\tag{4}$$
We can also expand its inverse $D_{\nu\lambda}$ in terms of $(1)$
$$D_{\nu\lambda}=aP^{\perp}_{\nu\lambda}+bP^{\parallel}_{\nu\lambda}.\tag{5}$$
Imposing $A^{\mu\nu}D_{\nu\lambda}=\delta^{\mu}_\lambda$ and using $(2)$ you get $a=-1/(k^2-m^2)$ and $b=1/m^2$, i.e. the coefficients of each projector in $(5)$ are the inverses of the coefficients in $(4)$, so
$$D_{\nu\lambda}=\frac{-1}{k^2-m^2}P^{\perp}_{\nu\lambda}+\frac{1}{m^2}P^{\parallel}_{\nu\lambda}.\tag{6}$$ | {
"domain": "physics.stackexchange",
"id": 87677,
"tags": "homework-and-exercises, quantum-field-theory, greens-functions, propagator"
} |
Maze solver and generator in Python | Question: After watching Computerphile's video I decided to create my own maze solver and generator in Python. I've never written anything in Python so I'd like to get some feedback about my code, specifically about:
code style
project structure
algorithms implementation
This is my first mini-project and as such I have no idea what the correct way of doing things is (e.g. argument parsing baffles me a lot; I've no idea how to do it "properly").
mazr.py:
#!/usr/bin/env python3
import argparse
import generator
import solver
def parse_arguments():
parser = argparse.ArgumentParser()
# generator arguments
parser.add_argument("-g", "--generate", default=""
, help="filename used for generated maze")
parser.add_argument("-s", "--size", type=int, default=50
, help="size NxN of generated maze")
# solver arguments
parser.add_argument("-si", "--solve", default=""
, help="maze file to solve")
parser.add_argument("-sg", "--solvegenerated", action="store_true"
, default=False, help="generate and then solve a maze")
parser.add_argument("-dfs", "--dfs", action="store_true"
, default=False, help="solve using dfs algorithm")
parser.add_argument("-dji", "--djikstra", action="store_true"
, default=False
, help="solve using djikstra algorithm")
return parser.parse_args()
def main():
arguments = parse_arguments()
if not arguments.generate == "":
generator.create(arguments.generate, arguments.size)
if arguments.solvegenerated:
solver.solve(arguments.generate+".png", arguments.dfs,
arguments.djikstra)
if not arguments.solve == "":
solver.solve(arguments.solve, arguments.dfs, arguments.djikstra)
if __name__ == "__main__":
main()
generator.py:
#!/usr/bin/env python3
import random
import time
from PIL import Image
def generate_graph(size):
graph = [[] for i in range(size*size)]
graph_time = time.time()
verticies = 0
edges = 0
print("[*] generating graph")
posx = 1
for x in range(size):
posy = 1
for y in range(size):
verticies += 1
# graph[vertex's number] = [(it's real position), [connected
# vertex's number, (wall between verticies position)]]
# vertex's number
v = x*size+y
# append vertex's real position
graph[v].append((posx, posy))
if not x < 1:
graph[v].append([v-size, (posx-1, posy)])
edges += 1
if x+1 < size:
graph[v].append([v+size, (posx+1, posy)])
edges += 1
if not y < 1:
graph[v].append([v-1, (posx, posy-1)])
edges += 1
if y+1 < size:
graph[v].append([v+1, (posx, posy+1)])
edges += 1
# skip one pixel for wall
posy += 2
posx += 2
print(verticies, "verticies")
print(edges, "edges")
print("[#] finished in", time.time()-graph_time, "seconds")
return graph
def generate_maze(graph):
stack = []
path = []
visited = [False for i in range(len(graph))]
maze_time = time.time()
print("[*] generating maze")
# current vertex
v = random.randint(1, len(graph)) - 1
stack.append(v)
while stack:
visited[v] = True
path.append(graph[v][0])
valid = []
for i in range(1, len(graph[v])):
if not visited[graph[v][i][0]]:
valid.append(graph[v][i])
if valid:
choice = random.choice(valid)
path.append(choice[1])
stack.append(v)
v = choice[0]
else:
v = stack.pop()
print("[#] finished in", time.time()-maze_time, "seconds")
return path
def generate_image(filename, size, path):
print("[*] generating image")
image_time = time.time()
maze = Image.new('RGB', (size, size))
maze_matrix= maze.load()
for p in range(len(path)):
maze_matrix[path[p]] = (255, 255, 255)
# entrance and exit points
print("creating entry point at (1, 0)")
maze_matrix[(1, 0)] = (255, 255, 255)
print("creating exit point at", (size-1, size-2))
maze_matrix[(size-1, size-2)] = (255, 255, 255)
maze.save(filename)
print("[#] finished in", time.time()-image_time, "seconds")
def create(filename, size):
# correcting and setting up variables
filename += ".png"
size_real = (2 * size) + 1
creation_time = time.time()
print("[*] creating", filename)
print("size =", size_real, "x", size_real)
graph = generate_graph(size)
path = generate_maze(graph)
generate_image(filename, size_real, path)
print("[#] finished in", time.time()-creation_time, "seconds")
solver.py:
#!/usr/bin/env python3
import time
import os
from PIL import Image
import solve_dfs
import solve_dji
def save(filename, path, entrance, exit, algorithm):
solve = (204, 52, 53)
point = (57, 129, 237)
print("[#] generating image")
saving_time = time.time()
solved = Image.open(filename)
solved.mode = 'RGB'
solved_matrix = solved.load()
for i in range(len(path)):
solved_matrix[path[i]] = solve
solved_matrix[entrance] = point
solved_matrix[exit] = point
filename = os.path.splitext(os.path.basename(filename))[0]+algorithm+".png"
solved.save(filename)
print("saved", filename)
print("[#] finished in", time.time()-saving_time, "seconds")
def generate_graph(maze, width, height):
wall = (0, 0, 0)
verticies = 0
edges = 0
graph = [[] for i in range(width*height)]
print("[*] generating graph")
graph_time = time.time()
for x in range(width):
for y in range(height):
if not maze[x, y] == wall:
verticies += 1
# vertex's number
v = x*width+y
# append position
graph[v].append((x, y))
if not x < 1:
if not maze[x-1, y] == wall:
graph[v].append(v-width)
edges += 1
if x+1 < width:
if not maze[x+1, y] == wall:
graph[v].append(v+width)
edges += 1
if not y < 1:
if not maze[x, y-1] == wall:
graph[v].append(v-1)
edges += 1
if y+1 < height:
if not maze[x, y+1] == wall:
graph[v].append(v+1)
edges += 1
print(verticies, "verticies")
print(edges, "edges")
print("[#] finished in", time.time()-graph_time, "seconds")
return graph
def get_entrance_and_exit(maze, width, height):
wall = (0, 0, 0)
entrance = (0, 0)
exit = (0, 0)
print("[*] searching for entrance and exit")
entry_time = time.time()
for x in range(width):
if not maze[x, 0] == wall:
entrance = (x, 0)
break
for x in range(width):
if not maze[x, height-1] == wall:
exit = (x, height-1)
break
for y in range(height):
if not maze[0, y] == wall:
entrance = (0, y)
break
for y in range(height):
if not maze[width-1, y] == wall:
exit = (width-1, y)
break
print("found entrance at", entrance)
print("found exit at", exit)
print("[#] finished in", time.time()-entry_time, "seconds")
return entrance, exit
def solve(filename, dfs, dji):
print("[*] solving", filename)
solve_time = time.time()
print("opening image file")
try:
maze = Image.open(filename)
except:
print("unable to open file, quiting")
return
width, height = maze.size
print("size =", width, "x", height)
maze.mode = 'RGB'
maze_matrix = maze.load()
graph = generate_graph(maze_matrix, width, height)
entrance, exit = get_entrance_and_exit(maze_matrix, width, height)
if dfs:
path = solve_dfs.alg(graph, entrance[0]*width+entrance[1], exit)
save(filename, path, entrance, exit, "DFS")
if dji:
path = solve_dji.alg(graph, entrance[0]*width+entrance[1]
, exit[0]*width+ exit[1])
save(filename, path, entrance, exit, "DJIKSTRA")
print("[#] finished in", time.time()-solve_time, "seconds")
solve_dfs.py:
#!/usr/bin/env python3
import time
def alg(graph, entrance, exit):
visited = [False for i in range(len(graph))]
path = []
print("[*] solving using dfs")
dfs_time = time.time()
def dfs(v):
if graph[v][0] == exit:
return True
visited[v] = True
for i in range(1, len(graph[v])):
if not visited[graph[v][i]]:
if dfs(graph[v][i]):
path.append(graph[v][0])
return True
return False
try:
dfs(entrance)
except:
print("maze is simply to big for recursion, use other algorithm")
return []
print("solved in", len(path)+1, "steps") # +1 because it lacks exit
print("[#] finished in", time.time()-dfs_time, "seconds")
return path
solve_dji.py:
#!/usr/bin/env python3
import heapq
import time
def alg(graph, entrance, exit):
path = []
visited = [False for i in range(len(graph))]
distance = [9999999999 for i in range(len(graph))]
distance[entrance] = 0
previous = [0 for i in range(len(graph))]
print("[*] solving using djikstra")
djikstra_time = time.time()
# priority queue
pqueue = []
heapq.heappush(pqueue, (0, entrance))
while pqueue:
# distance and vertex
d, v = heapq.heappop(pqueue)
if not visited[v]:
for i in range(1, len(graph[v])):
if distance[graph[v][i]] > d + 1:
distance[graph[v][i]] = d + 1
heapq.heappush(pqueue, (d+1, graph[v][i]))
previous[graph[v][i]] = v
visited[v] = True
v = previous[exit]
while not v == entrance:
path.append(graph[v][0])
v = previous[v]
print("solved in", len(path)+2, "steps") # +2 because it lacks entrance and
# exit
print("[#] finished in", time.time() - djikstra_time, "seconds")
return path
Answer: Just reviewing generator.py.
1. Review
There are no docstrings. What do these functions do? What arguments do they take? What do they return?
"verticies" is a typo for "vertices".
There's repetitive code for printing progress messages and measuring the time taken. I would avoid this repetition using a context manager, like this:
from contextlib import contextmanager
@contextmanager
def timer(message):
"Context manager that reports the time taken by a block of code."
print("[*]", message)
start = time.time()
yield
print("[#] finished in {:.3f} seconds".format(time.time() - start))
and then in create you can write:
with timer("creating {}".format(filename)):
print("size =", size_real, "x", size_real)
with timer("generating graph"):
graph = generate_graph(size)
with timer("generating maze"):
path = generate_maze(graph)
with timer("generating image"):
generate_image(filename, size_real, path)
This keeps the timing code out of the individual functions, making them shorter, simpler, and easier to test.
The graph data structure is complicated, and the only clue as to how it works is this comment:
# graph[vertex's number] = [(it's real position), [connected
# vertex's number, (wall between verticies position)]]
Lists and tuples are very convenient for building simple data structures, but code that accesses them is hard to understand. For example, what does this line test?
if not visited[graph[v][i][0]]:
By reading the comment, we can figure out that graph[v][i][0] is the vertex number of the i-1-th neighbour of vertex number v. But it takes considerable effort to figure this out — first you have to find the comment explaining the data structure, and then you have to figure out that the comment isn't telling the whole truth (because there can be multiple connected vertices, not just one as in the comment).
So let's see if we can simplify this data structure. The first thing to observe is that vertices are represented by integers, not by their coordinates. Why is that? There are two places where this fact is used. First, in the visited list:
visited = [False for i in range(len(graph))]
But we could make this a set instead:
visited = set()
and instead of testing:
if not visited[graph[v][i][0]]:
we can test for membership in the set:
if graph[v][i][0] not in visited:
Second, in this line picking a random vertex:
v = random.randint(1, len(graph)) - 1
But there are other ways of doing this, for example like this:
v = random.choice(list(graph))
After making these two changes, the vertices can be represented by any hashable objects, in particular by tuples (posx, posy).
The other redundant piece of information in the graph data structure is the position of the wall between a vertex and its neighbour. This is redundant because if you have a vertex at $v_x, v_y$ and a neighbouring vertex at $w_x, w_y$, then the wall must be exactly halfway between them, that is, at $${v_x + w_x \over 2}, {v_y + w_y \over 2}.$$ So we can omit the wall from the data structure, simplifying it to:
# Mapping from vertex coordinates to set of coordinates of
# neighbouring vertices: graph[x, y] = {(x1, y1), (x2, y2), ...}
(We'll see later why it's convenient to have a set of neighbours instead of a list.)
A convenient way to construct such a data structure is to use a collections.defaultdict.
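For example, a `defaultdict(set)` creates an empty neighbour set on first access, so edges can be added without first checking whether a vertex has been seen before:

```python
from collections import defaultdict

graph = defaultdict(set)
graph[1, 1].add((3, 1))   # no need to initialize graph[1, 1] first
graph[1, 1].add((1, 3))
print(sorted(graph[1, 1]))  # [(1, 3), (3, 1)]
print(graph[5, 5])          # set() -- created empty on first access
```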
This code iterates over the logical coordinates x, y while maintaining separate real coordinates posx, posy:
posx = 1
for x in range(size):
posy = 1
for y in range(size):
# ...
# skip one pixel for wall
posy += 2
posx += 2
Instead, iterate directly over the real coordinates:
# Vertices are at odd coordinates (leaving room for walls).
coords = range(1, size * 2, 2)
for x in coords:
for y in coords:
Now you can use itertools.product to loop over both coordinates simultaneously:
for x, y in product(coords, repeat=2):
There's repetitive code for adding the neighbours. The repetition can be avoided by making a list of cardinal directions:
# List of cardinal directions.
_DIRECTIONS = [(1, 0), (0, 1), (-1, 0), (0, -1)]
and then iterating over it:
for x, y in product(coords, repeat=2):
for dx, dy in _DIRECTIONS:
nx, ny = x + dx * 2, y + dy * 2
if nx in coords and ny in coords:
graph[x, y].add((nx, ny))
In generate_maze, there are two data structures visited (containing all the visited vertices) and path (containing all the visited vertices plus the walls between them). But the only thing we use visited for is to get a list of unvisited neighbours, so we could use path for this, and avoid the need for visited at all.
Since the image is black-and-white, it's wasteful to use format 'RGB' which has 24 bits per pixel. Format '1' uses 1 bit per pixel. (The saving on disk is only about 50% since PNG is compressed, but still worth it.)
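The raw saving is easy to quantify (the size here is illustrative; actual PNG files are smaller still because of compression):

```python
size = 2001  # a maze of logical size 1000 -> real size 2 * 1000 + 1

rgb_bits = size * size * 24  # mode 'RGB': 24 bits per pixel
bw_bits = size * size * 1    # mode '1': 1 bit per pixel

print(rgb_bits // 8, "bytes uncompressed as RGB")
print(bw_bits // 8, "bytes uncompressed as black-and-white")
print("raw ratio: {}x".format(rgb_bits // bw_bits))  # 24x before compression
```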
I recommend refactoring generate_image so that it doesn't have to know anything about mazes — if its only job is to make the image then it will be simpler and easier to understand. This can easily be done by adding the entrance and exit to path before calling generate_image.
2. Revised code
#!/usr/bin/env python3
from collections import defaultdict
from contextlib import contextmanager
from itertools import product
import random
import time
from PIL import Image
# List of cardinal directions.
_DIRECTIONS = [(1, 0), (0, 1), (-1, 0), (0, -1)]
def generate_graph(size):
"""Return size-by-size maze graph in the form of a mapping from vertex
coordinates to sets of coordinates of neighbouring vertices, that is:
graph[x, y] = {(x1, y1), (x2, y2), ...}
Vertices are placed at odd coordinates, leaving room for walls.
"""
graph = defaultdict(set)
coords = range(1, size * 2, 2)
for x, y in product(coords, repeat=2):
for dx, dy in _DIRECTIONS:
nx, ny = x + dx * 2, y + dy * 2
if nx in coords and ny in coords:
graph[x, y].add((nx, ny))
return graph
def generate_maze(graph):
"""Given a graph as returned by generate_graph, return the set of
coordinates on the path in a random maze on that graph.
"""
v = random.choice(list(graph)) # Current vertex.
stack = [v] # Depth-first search stack.
path = set() # Visited vertices and the walls between them.
while stack:
path.add(v)
neighbours = graph[v] - path
if neighbours:
x, y = v
nx, ny = neighbour = random.choice(list(neighbours))
wall = (x + nx) // 2, (y + ny) // 2
path.add(wall)
stack.append(neighbour)
v = neighbour
else:
v = stack.pop()
return path
def generate_image(filename, size, path):
"""Create a size-by-size black-and-white image, with white on path and
black elsewhere, and save it to filename.
"""
image = Image.new('1', (size, size))
pixels = image.load()
for p in path:
pixels[p] = 1
image.save(filename)
@contextmanager
def timer(message):
"Context manager that reports the time taken by a block of code."
print("[*]", message)
start = time.time()
yield
print("[#] finished in {:.3f} seconds".format(time.time() - start))
def create(filename, size):
"Create a size-by-size maze and save it to filename."
ext = '.png'
if not filename.endswith(ext):
filename += ext
size_real = (2 * size) + 1
with timer("creating {}".format(filename)):
print("size =", size_real, "x", size_real)
with timer("generating graph"):
graph = generate_graph(size)
with timer("generating maze"):
path = generate_maze(graph)
# Add entrance and exit.
path.update([(1, 0), (size_real - 1, size_real - 2)])
with timer("generating image"):
generate_image(filename, size_real, path) | {
"domain": "codereview.stackexchange",
"id": 35656,
"tags": "python, beginner, algorithm, programming-challenge, pathfinding"
} |
Install RGBDSLAM on ARM | Question:
I've installed RGBDSLAM on a number of computers but ARM is proving to be a bit more challenging. I'm trying to install RGBDSLAM on a raspberry pi 2 model B where ROS Indigo is already installed. I've followed the following walkthroughs with no success:
http://kaze.io/phd/p-hd-diary-pcl-installation-and-configuration-on-osx/
https://richardstechnotes.wordpress.com/2015/11/22/using-the-kinect-and-ros-openni_camera-on-the-raspberry-pi-2/comment-page-1/#comment-1452
Rosdeps all installed fine and here's my latest build error:
$ catkin_make
Base path: /home/ubuntu/catkin_ws
Source space: /home/ubuntu/catkin_ws/src
Build space: /home/ubuntu/catkin_ws/build
Devel space: /home/ubuntu/catkin_ws/devel
Install space: /home/ubuntu/catkin_ws/install
####
#### Running command: "make cmake_check_build_system" in "/home/ubuntu/catkin_ws/build"
####
####
#### Running command: "make -j4 -l4" in "/home/ubuntu/catkin_ws/build"
####
[ 0%] [ 0%] [ 0%] Built target _rgbdslam_generate_messages_check_deps_rgbdslam_ros_ui_b
Built target _rgbdslam_generate_messages_check_deps_rgbdslam_ros_ui_f
Built target _rgbdslam_generate_messages_check_deps_rgbdslam_ros_ui
[ 0%] Built target _rgbdslam_generate_messages_check_deps_rgbdslam_ros_ui_s
[ 1%] [ 8%] Built target rgbdslam_generate_messages_cpp
[ 16%] Built target rgbdslam_generate_messages_py
Performing build step for 'siftgpu_proj'
[ 22%] Built target rgbdslam_generate_messages_lisp
make[3]: warning: jobserver unavailable: using -j1. Add `+' to parent make rule.
[ 22%] Built target rgbdslam_gencpp
g++: error: unrecognized command line option '-mfpmath=sse'
make[3]: *** [build/FrameBufferObject.o] Error 1
make[2]: *** [rgbdslam_v2/external/siftgpu_prefix/src/siftgpu_proj-stamp/siftgpu_proj-build] Error 2
make[1]: *** [rgbdslam_v2/external/CMakeFiles/siftgpu_proj.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
[ 22%] Built target rgbdslam_generate_messages
[ 24%] [ 26%] [ 27%] [ 29%] Building CXX object rgbdslam_v2/CMakeFiles/rgbdslam.dir/src/moc_graph_manager.cxx.o
Building CXX object rgbdslam_v2/CMakeFiles/rgbdslam.dir/src/moc_glviewer.cxx.o
Building CXX object rgbdslam_v2/CMakeFiles/rgbdslam.dir/src/qtros.cpp.o
Building CXX object rgbdslam_v2/CMakeFiles/rgbdslam.dir/src/openni_listener.cpp.o
c++: internal compiler error: Killed (program cc1plus)
Please submit a full bug report,
with preprocessed source if appropriate.
See <file:///usr/share/doc/gcc-4.8/README.Bugs> for instructions.
make[2]: *** [rgbdslam_v2/CMakeFiles/rgbdslam.dir/src/moc_graph_manager.cxx.o] Error 4
make[2]: *** Waiting for unfinished jobs....
make[1]: *** [rgbdslam_v2/CMakeFiles/rgbdslam.dir/all] Error 2
make: *** [all] Error 2
make: INTERNAL: Exiting with 5 jobserver tokens available; should be 4!
Invoking "make -j4 -l4" failed
Second attempt using -j1
$ catkin_make -j1
Base path: /home/ubuntu/catkin_ws
Source space: /home/ubuntu/catkin_ws/src
Build space: /home/ubuntu/catkin_ws/build
Devel space: /home/ubuntu/catkin_ws/devel
Install space: /home/ubuntu/catkin_ws/install
####
#### Running command: "make cmake_check_build_system" in "/home/ubuntu/catkin_ws/build"
####
####
#### Running command: "make -j1" in "/home/ubuntu/catkin_ws/build"
####
[ 0%] Built target _rgbdslam_generate_messages_check_deps_rgbdslam_ros_ui_f
[ 0%] Built target _rgbdslam_generate_messages_check_deps_rgbdslam_ros_ui
[ 0%] Built target _rgbdslam_generate_messages_check_deps_rgbdslam_ros_ui_s
[ 0%] Built target _rgbdslam_generate_messages_check_deps_rgbdslam_ros_ui_b
[ 6%] Built target rgbdslam_generate_messages_cpp
[ 6%] Built target rgbdslam_gencpp
[ 8%] Building CXX object rgbdslam_v2/CMakeFiles/rgbdslam.dir/src/moc_graph_manager.cxx.o
[ 9%] Building CXX object rgbdslam_v2/CMakeFiles/rgbdslam.dir/src/qt_gui.cpp.o
[ 11%] Building CXX object rgbdslam_v2/CMakeFiles/rgbdslam.dir/src/node.cpp.o
[ 13%] Building CXX object rgbdslam_v2/CMakeFiles/rgbdslam.dir/src/glviewer.cpp.o
In file included from /usr/include/GL/freeglut_std.h:128:0,
from /usr/include/GL/glut.h:17,
from /home/ubuntu/catkin_ws/src/rgbdslam_v2/src/glviewer.cpp:24:
/usr/include/GL/gl.h:138:17: error: conflicting declaration 'typedef double GLdouble'
typedef double GLdouble; /* double precision float */
^
In file included from /usr/include/qt4/QtOpenGL/QtOpenGL:5:0,
from /home/ubuntu/catkin_ws/src/rgbdslam_v2/src/glviewer.cpp:22:
/usr/include/qt4/QtOpenGL/qgl.h:85:17: error: 'GLdouble' has a previous declaration as 'typedef GLfloat GLdouble'
typedef GLfloat GLdouble;
^
In file included from /usr/include/GL/gl.h:2059:0,
from /usr/include/GL/freeglut_std.h:128,
from /usr/include/GL/glut.h:17,
from /home/ubuntu/catkin_ws/src/rgbdslam_v2/src/glviewer.cpp:24:
/usr/include/GL/glext.h:468:19: error: conflicting declaration 'typedef ptrdiff_t GLsizeiptr'
typedef ptrdiff_t GLsizeiptr;
^
In file included from /usr/include/qt4/QtOpenGL/qgl.h:79:0,
from /usr/include/qt4/QtOpenGL/QtOpenGL:5,
from /home/ubuntu/catkin_ws/src/rgbdslam_v2/src/glviewer.cpp:22:
/usr/include/GLES2/gl2.h:69:25: error: 'GLsizeiptr' has a previous declaration as 'typedef khronos_ssize_t GLsizeiptr'
typedef khronos_ssize_t GLsizeiptr;
^
In file included from /usr/include/GL/gl.h:2059:0,
from /usr/include/GL/freeglut_std.h:128,
from /usr/include/GL/glut.h:17,
from /home/ubuntu/catkin_ws/src/rgbdslam_v2/src/glviewer.cpp:24:
/usr/include/GL/glext.h:469:19: error: conflicting declaration 'typedef ptrdiff_t GLintptr'
typedef ptrdiff_t GLintptr;
^
In file included from /usr/include/qt4/QtOpenGL/qgl.h:79:0,
from /usr/include/qt4/QtOpenGL/QtOpenGL:5,
from /home/ubuntu/catkin_ws/src/rgbdslam_v2/src/glviewer.cpp:22:
/usr/include/GLES2/gl2.h:70:26: error: 'GLintptr' has a previous declaration as 'typedef khronos_intptr_t GLintptr'
typedef khronos_intptr_t GLintptr;
^
make[2]: *** [rgbdslam_v2/CMakeFiles/rgbdslam.dir/src/glviewer.cpp.o] Error 1
make[1]: *** [rgbdslam_v2/CMakeFiles/rgbdslam.dir/all] Error 2
make: *** [all] Error 2
Invoking "make -j1" failed
What do I need to do to get this installed?
** EDIT **
Here is the error I get on Jetson TK1 (2048MB swap size)
$ catkin_make
Base path: /home/ubuntu/rgbdslam_catkin_ws
Source space: /home/ubuntu/rgbdslam_catkin_ws/src
Build space: /home/ubuntu/rgbdslam_catkin_ws/build
Devel space: /home/ubuntu/rgbdslam_catkin_ws/devel
Install space: /home/ubuntu/rgbdslam_catkin_ws/install
####
#### Running command: "make cmake_check_build_system" in "/home/ubuntu/rgbdslam_catkin_ws/build"
####
####
#### Running command: "make -j1 -l1" in "/home/ubuntu/rgbdslam_catkin_ws/build"
####
[ 0%] Built target _rgbdslam_generate_messages_check_deps_rgbdslam_ros_ui_f
[ 0%] Built target _rgbdslam_generate_messages_check_deps_rgbdslam_ros_ui
[ 0%] Built target _rgbdslam_generate_messages_check_deps_rgbdslam_ros_ui_s
[ 0%] Built target _rgbdslam_generate_messages_check_deps_rgbdslam_ros_ui_b
[ 6%] Built target rgbdslam_generate_messages_cpp
[ 6%] Built target rgbdslam_gencpp
[ 8%] Building CXX object rgbdslam_v2-indigo/CMakeFiles/rgbdslam.dir/src/glviewer.cpp.o
In file included from /usr/include/GL/freeglut_std.h:128:0,
from /usr/include/GL/glut.h:17,
from /home/ubuntu/rgbdslam_catkin_ws/src/rgbdslam_v2-indigo/src/glviewer.cpp:24:
/usr/include/GL/gl.h:138:17: error: conflicting declaration 'typedef double GLdouble'
typedef double GLdouble; /* double precision float */
^
In file included from /usr/include/qt4/QtOpenGL/QtOpenGL:5:0,
from /home/ubuntu/rgbdslam_catkin_ws/src/rgbdslam_v2-indigo/src/glviewer.cpp:22:
/usr/include/qt4/QtOpenGL/qgl.h:85:17: error: 'GLdouble' has a previous declaration as 'typedef GLfloat GLdouble'
typedef GLfloat GLdouble;
^
In file included from /usr/include/GL/gl.h:2059:0,
from /usr/include/GL/freeglut_std.h:128,
from /usr/include/GL/glut.h:17,
from /home/ubuntu/rgbdslam_catkin_ws/src/rgbdslam_v2-indigo/src/glviewer.cpp:24:
/usr/include/GL/glext.h:468:19: error: conflicting declaration 'typedef ptrdiff_t GLsizeiptr'
typedef ptrdiff_t GLsizeiptr;
^
In file included from /usr/include/qt4/QtOpenGL/qgl.h:79:0,
from /usr/include/qt4/QtOpenGL/QtOpenGL:5,
from /home/ubuntu/rgbdslam_catkin_ws/src/rgbdslam_v2-indigo/src/glviewer.cpp:22:
/usr/include/GLES2/gl2.h:69:25: error: 'GLsizeiptr' has a previous declaration as 'typedef khronos_ssize_t GLsizeiptr'
typedef khronos_ssize_t GLsizeiptr;
^
In file included from /usr/include/GL/gl.h:2059:0,
from /usr/include/GL/freeglut_std.h:128,
from /usr/include/GL/glut.h:17,
from /home/ubuntu/rgbdslam_catkin_ws/src/rgbdslam_v2-indigo/src/glviewer.cpp:24:
/usr/include/GL/glext.h:469:19: error: conflicting declaration 'typedef ptrdiff_t GLintptr'
typedef ptrdiff_t GLintptr;
^
In file included from /usr/include/qt4/QtOpenGL/qgl.h:79:0,
from /usr/include/qt4/QtOpenGL/QtOpenGL:5,
from /home/ubuntu/rgbdslam_catkin_ws/src/rgbdslam_v2-indigo/src/glviewer.cpp:22:
/usr/include/GLES2/gl2.h:70:26: error: 'GLintptr' has a previous declaration as 'typedef khronos_intptr_t GLintptr'
typedef khronos_intptr_t GLintptr;
^
make[2]: *** [rgbdslam_v2-indigo/CMakeFiles/rgbdslam.dir/src/glviewer.cpp.o] Error 1
make[1]: *** [rgbdslam_v2-indigo/CMakeFiles/rgbdslam.dir/all] Error 2
make: *** [all] Error 2
Invoking "make -j1 -l1" failed
Originally posted by jacksonkr_ on ROS Answers with karma: 396 on 2016-04-02
Post score: 0
Answer:
Bit late perhaps, but this could very well be due to your platform not having enough memory to compile all of this. Do you have swap enabled?
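On Raspbian-based images the swap size is commonly controlled by the `dphys-swapfile` service; a sketch of the setting (the size value is illustrative, and note that heavy swapping wears SD cards):

```
# /etc/dphys-swapfile -- size in MB; apply with:
#   sudo dphys-swapfile setup && sudo dphys-swapfile swapon
CONF_SWAPSIZE=1024
```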
(very late) Edit:
g++: error: unrecognized command line option '-mfpmath=sse'
just noticed this line in your "first attempt": SSE is an Intel/AMD-only instruction set, and is not supported by ARM CPUs.
The other attempts do indeed point to incompatibilities between OpenGL ES and other versions, as @Humpelstilzchen remarked.
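If you want to attempt an ARM build anyway, the x86-only flags would have to be stripped from SiftGPU's build files first. This is a sketch of the edit, demonstrated on a throwaway copy because the real makefile path varies between rgbdslam checkouts:

```shell
# Demo on a temporary file -- point sed at the real SiftGPU makefile instead.
mkdir -p /tmp/siftgpu_demo
printf 'CFLAGS = -O2 -mfpmath=sse -msse2\n' > /tmp/siftgpu_demo/makefile
sed -i 's/ -mfpmath=sse//g; s/ -msse2//g' /tmp/siftgpu_demo/makefile
cat /tmp/siftgpu_demo/makefile   # CFLAGS = -O2
```

Whether the resulting build then works on ARM is another matter: the OpenGL ES typedef conflicts shown in the later attempts remain.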
Originally posted by gvdhoorn with karma: 86574 on 2016-04-05
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Humpelstilzchen on 2016-04-05:
The first maybe, the second run looks like an incompatibility with opengl es. There probably is no easy solution for this.
Comment by jacksonkr_ on 2016-04-05:
I noticed that I receive a different error on the Jetson TK1, which is quad-core with more RAM. I posted the findings in an edit to my original question. I'm going to try an rpi3 tomorrow as well.
Comment by Humpelstilzchen on 2016-04-06:
Check http://answers.ros.org/question/179989/rgbdslam_v2-error-with-make/
Comment by jacksonkr_ on 2016-04-08:
I tried Humpelstilzchen's suggestion with no solution. I'm still stuck, even with the Jetson TK1.
Comment by bogdan on 2016-08-31:
Hello! I have the same problems with Jetson TK1. Did you resolve them?
Comment by jacksonkr_ on 2016-08-31:
I can't remember what I did but it works now on the Jetson TK1. I'm using Ubuntu 14.04 and ROS Indigo. I can install using the rgbdslam package for ROS as well as by compiling rgbdslam from the GitHub source.
Comment by Juno Kr on 2017-01-03:
@jacksonkr_ How was RGBDSLAM performance on TK1? Did you try to install on TX1(Ubuntu16.04/JetPack2.3.1/ROS:Kinetic)? What version of Kinect did you use?
Comment by jacksonkr_ on 2017-01-05:
RGBDSLAM performance is GREAT and would understandably be even better on the TX1. Kinect v1 model 1414 is my favorite to work with. I've also used Kinect v2 and Intel r200. I encourage Kinect v1 because it uses laser while the other two use infrared bulbs.
Comment by griz11@twc.com on 2017-02-12:
I am trying to build rgbdslam on a TX1 with 16.04. Running into this error. I've seen it before; it's an Intel-specific floating point extension. I'm new to catkin_make and ROS so I have no idea how to change this compiler directive. I thought they were all in CMakeCache.txt.
Comment by gvdhoorn on 2017-02-12:
Catkin == basically CMake. So I would expect rgbdslam to set its compiler option in its CMakeLists.txt.
Comment by Dox on 2018-04-04:
Hi everyone! Did anyone succeed at installing it on RPi3? | {
"domain": "robotics.stackexchange",
"id": 24296,
"tags": "slam, navigation, catkin-make, raspberrypi"
} |
Why are my faeces black in color after eating Oreos | Question: Why are my faeces black in colour the morning after I eat some Oreos?
Day 1 : Eat a handful of Oreos & the next morning your stool is black.
Day 3 : Eat a handful of cocoa flavored biscuits & the next morning your stool is normal.
Day 5 : Eat a handful of Chocolates & the next morning your stool is normal.
I'm aware of the beetroot digestion story, and how it doesn't happen to everyone. What is surprising in the case of Oreos is that it just takes a
handful of Oreos to turn the stool black. But with any other biscuits or chocolates, my faeces are normal.
I don't have anything other than anecdotal evidence but a lot (NOT ALL) of my friends have had similar experience. Just Googling "Black stool after eating Oreos" & I see a lot of people (not all) having similar experience. Do note that to my knowledge it doesn't affect my body in any way.
Why is that the case? Why doesn't it happen to everybody?
What ingredient in Oreos makes it happen? What is the physiological underpinning?
Asking this question with curiosity than caution. Looking for biological/biochemical perspective.
Answer: This is a perfectly reasonable question based on a self-administered (human) trial. Moreover, the proposer bases their observations not on one subject but on several reports. The impromptu survey of friends etc. is also acceptable. By the way, self-reported intake/output records are an accepted method in nutrition research.
My 2-year-old child exhibited the symptoms described above, after consuming two Oreo biscuits in the afternoon. That evening's soiled nappy (diaper) revealed a dark but otherwise well-textured stool. When my concerned partner brought this to my attention, it immediately reminded me of the use of carmine red and related food dyes for the measurement of gastrointestinal transit times. The principle here is that the dyes are not absorbed in the gut, so the time from consumption to excretion gives a measure of bowel function. A short transit time means no constipation and is considered a sign of good bowel function.
Based on the observations reported for Oreos, my educated guess would be that Oreo products contain minuscule amounts of non-absorbed colorants - something that is generally regarded as safe. Self-reported symptoms are generally considered valid by health professionals. I am still intrigued by the Oreo colouring puzzle, as this colourant must be natural. | {
"domain": "biology.stackexchange",
"id": 7373,
"tags": "metabolism, food, digestive-system"
} |
Wave Optics: Geometrical path length vs Optical path length | Question: I have been solving a physics question requiring me to find the position of the first maximum in the following situation. The focal length of the lenses (each cut in half) is given as 20 cm, and the light has a wavelength of 476 nm.
I deduced that the light rays will not intersect each other but will have some gap. The book says that S1 and S2 act as secondary sources. My doubts are:
Why would S1 and S2 act as secondary point sources? How and why will a diffraction pattern be formed?
Which path is the optical path and which is the geometrical path, and why?
I am not able to proceed with solving the question without having the above concepts cleared up.
I have attached the solution given in the book below
Answer:
S1 and S2 are images of S from the lower and upper lenses, respectively. As such, they act as point sources. Note that because each lens is half of a lens, the resulting image won't be a strict point source.
The optical path is defined as the index of refraction (n) multiplied by the length (L) the ray travels within the medium with that n. You then add up all the contributions from the start point to the end point to compute the total optical path. So, if the lenses are in free space, you'll have: (length of ray S-C) + n(of the lens) × (thickness of the lens) + (length of ray C-S1). | {
"domain": "physics.stackexchange",
"id": 98163,
"tags": "optics, interference"
} |
Determining the size of a light source | Question: I have an incoherent light source that is of unknown size, and I was wondering about the possible methods to measure its size. The issue is that I am expecting it to be very small (few micrometers), and if I try to use a pinhole camera much smaller than the source size I will not gain enough light to actually resolve the image.
If I use a reasonably sized pinhole camera (50um), I will only image the pinhole, gaining no new information about the source itself.
Right now, I am thinking of estimating it by looking at how quickly the light intensity falls off when it passes a sharp absorbing feature such as a knife edge or a slit. If the source were infinitely small, I assume the fall-off would be approximately as long ('smudged') as the effective mean wavelength of the light. If the source is larger than the wavelength, then the distance over which the light intensity falls off at the edge should be approximately equal to the size of the light source (corrected for the magnification of the imaging system). I guess what I am essentially saying is that the image will be something like the source's spatial intensity distribution convolved with the transmission map - am I correct here?
Are there any methods of estimating the source size by looking at the image of the source through a pinhole that is larger than the source itself, or by looking at the image of the source passing through a slit (two knife edges)?
Are there any other simple ways one can measure or estimate the source size?
It's also worth mentioning that I am dealing with light that is approximately 0.1-0.01nm in wavelength.
Answer: You are working in a spectral range for which finding lenses will be a problem, so we'll reject the idea of using a lens.
Making a small pinhole comparable to the wavelength will be difficult or expensive, but you're right that the spot of light that makes it through the pinhole would be the convolution of the source with the pinhole.
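That convolution picture is easy to check numerically; a minimal 1-D sketch (pixel units and intensity values are illustrative):

```python
def convolve(source, aperture):
    """Discrete 1-D convolution: the profile seen on the detector."""
    n = len(source) + len(aperture) - 1
    out = [0.0] * n
    for i, s in enumerate(source):
        for j, a in enumerate(aperture):
            out[i + j] += s * a
    return out

# A 3-pixel-wide source imaged through a 5-pixel pinhole:
source = [1.0, 1.0, 1.0]
pinhole = [1.0] * 5
image = convolve(source, pinhole)
# Full width is 3 + 5 - 1 = 7 pixels: even with a pinhole wider than the
# source, the source width survives in the sloped edges of the profile.
print(len(image), image)
```

The same sloped-edge information is what the knife-edge measurement extracts.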
A knife edge should work as well as a pinhole, and should be a lot easier to arrange. Place a razor blade a distance X from the source, then a detector a distance X' beyond the razor blade. Measure the width W of the transition between the fully-shadowed area on the detector and the fully-illuminated area on the detector. The width $S$ of the source is $S = W (X/X')$. | {
"domain": "physics.stackexchange",
"id": 68996,
"tags": "optics, x-rays, imaging"
} |
Gfile error Tensorflow in ROS | Question:
Hello.
I am trying to create a node for image classification using ROS and Tensorflow.
If I run this script with python, everything works perfectly with some differents networks:
import pyrealsense2 as rs
import numpy as np
import cv2
import tensorflow as tf
# Configure depth and color streams
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
print("[INFO] Starting streaming...")
pipeline.start(config)
print("[INFO] Camera ready.")
# download model from: https://github.com/opencv/opencv/wiki/TensorFlow-Object-Detection-API#run-network-in-opencv
print("[INFO] Loading model...")
PATH_TO_CKPT = "frozen_inference_graph.pb"
# Load the Tensorflow model into memory.
detection_graph = tf.Graph()
with detection_graph.as_default():
od_graph_def = tf.compat.v1.GraphDef()
with tf.compat.v1.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:
serialized_graph = fid.read()
od_graph_def.ParseFromString(serialized_graph)
tf.compat.v1.import_graph_def(od_graph_def, name='')
sess = tf.compat.v1.Session(graph=detection_graph)
# Input tensor is the image
image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
# Output tensors are the detection boxes, scores, and classes
# Each box represents a part of the image where a particular object was detected
detection_boxes = detection_graph.get_tensor_by_name('detection_boxes:0')
# Each score represents level of confidence for each of the objects.
# The score is shown on the result image, together with the class label.
detection_scores = detection_graph.get_tensor_by_name('detection_scores:0')
detection_classes = detection_graph.get_tensor_by_name('detection_classes:0')
# Number of objects detected
num_detections = detection_graph.get_tensor_by_name('num_detections:0')
# code source of tensorflow model loading: https://www.geeksforgeeks.org/ml-training-image-classifier-using-tensorflow-object-detection-api/
print("[INFO] Model loaded.")
colors_hash = {}
while True:
frames = pipeline.wait_for_frames()
color_frame = frames.get_color_frame()
# Convert images to numpy arrays
color_image = np.asanyarray(color_frame.get_data())
scaled_size = (color_frame.width, color_frame.height)
# expand image dimensions to have shape: [1, None, None, 3]
# i.e. a single-column array, where each item in the column has the pixel RGB value
image_expanded = np.expand_dims(color_image, axis=0)
# Perform the actual detection by running the model with the image as input
(boxes, scores, classes, num) = sess.run([detection_boxes, detection_scores, detection_classes, num_detections],
feed_dict={image_tensor: image_expanded})
boxes = np.squeeze(boxes)
classes = np.squeeze(classes).astype(np.int32)
scores = np.squeeze(scores)
for idx in range(int(num)):
class_ = classes[idx]
score = scores[idx]
box = boxes[idx]
if class_ not in colors_hash:
colors_hash[class_] = tuple(np.random.choice(range(256), size=3))
if score > 0.6:
left = int(box[1] * color_frame.width)
top = int(box[0] * color_frame.height)
right = int(box[3] * color_frame.width)
bottom = int(box[2] * color_frame.height)
p1 = (left, top)
p2 = (right, bottom)
# draw box
r, g, b = colors_hash[class_]
cv2.rectangle(color_image, p1, p2, (int(r), int(g), int(b)), 2, 1)
cv2.putText(color_image, str(class_)+" "+str(score), p1, cv2.FONT_HERSHEY_SIMPLEX, 0.75, (int(r), int(g), int(b)))
cv2.namedWindow('RealSense', cv2.WINDOW_AUTOSIZE)
cv2.imshow('RealSense', color_image)
cv2.waitKey(1)
print("[INFO] stop streaming ...")
pipeline.stop()
In the other hand, I have created a ROS package where I launch the following script:
#!/usr/bin/env python3
#Creator: AMC 2021
#Function: Image classification node with TensorFlow
#Receives: RealSense [Image(/camera/color/image_raw)]
#Sends: RealSense [Image(/image_classified)]
import rospy
from sensor_msgs.msg import Image
from std_msgs.msg import String
import pyrealsense2 as rs
import numpy as np
import cv2
import tensorflow as tf
import struct
class TENSORFLOW_ROS_CLASS:
    # Function: __init__
    #
    # Comments
    # ----------
    # Define neural network
    #
    # Parameters
    # ----------
    #
    # Returns
    # -------
    def __init__(self):
        # download model from: https://github.com/opencv/opencv/wiki/TensorFlow-Object-Detection-API#run-network-in-opencv
        print("[INFO] Loading model...")
        PATH_TO_CKPT = "frozen_inference_graph.pb"
        # Load the Tensorflow model into memory.
        detection_graph = tf.Graph()
        with detection_graph.as_default():
            od_graph_def = tf.compat.v1.GraphDef()
            with tf.compat.v1.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:
                serialized_graph = fid.read()
                od_graph_def.ParseFromString(serialized_graph)
                tf.compat.v1.import_graph_def(od_graph_def, name='')
            sess = tf.compat.v1.Session(graph=detection_graph)
        # Input tensor is the image
        image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
        # Output tensors are the detection boxes, scores, and classes
        # Each box represents a part of the image where a particular object was detected
        detection_boxes = detection_graph.get_tensor_by_name('detection_boxes:0')
        # Each score represents level of confidence for each of the objects.
        # The score is shown on the result image, together with the class label.
        detection_scores = detection_graph.get_tensor_by_name('detection_scores:0')
        detection_classes = detection_graph.get_tensor_by_name('detection_classes:0')
        # Number of objects detected
        num_detections = detection_graph.get_tensor_by_name('num_detections:0')
        # code source of tensorflow model loading: https://www.geeksforgeeks.org/ml-training-image-classifier-using-tensorflow-object-detection-api/
        print("[INFO] Model loaded.")
        colors_hash = {}

    # Function: main
    #
    # Comments
    # ----------
    # Set up the node, the publications and subscriptions.
    #
    # Parameters
    # ----------
    #
    # Returns
    # -------
    def main(self):
        rospy.init_node('tensorflow_ros')
        rospy.loginfo("Iniciado nodo tensorflow_ros")
        # Publications
        self.classification_pub = rospy.Publisher("image_classified", Image, queue_size=100)
        # Subscriber
        rospy.Subscriber("/camera/color/image_raw", Image, self.callback_image)
        rospy.spin()

    # Function: callback_image
    #
    # Comments
    # ----------
    #
    # Parameters
    # ----------
    # msg: Data of the topic
    #
    # Returns
    # -------
    def callback_image(self, msg):
        color_image = Image()
        color_image = msg
        # Publish topic
        self.classification_pub.publish(color_image)


if __name__ == '__main__':
    detector = TENSORFLOW_ROS_CLASS()
    detector.main()
And the error is this one:
[INFO] Loading model...
Traceback (most recent call last):
File "/home/sara/catkin_ws/src/tensorflow_ros/script/tensorflow_ros.py", line 106, in <module>
detector=TENSORFLOW_ROS_CLASS()
File "/home/sara/catkin_ws/src/tensorflow_ros/script/tensorflow_ros.py", line 40, in __init__
serialized_graph = fid.read()
File "/home/sara/.local/lib/python3.8/site-packages/tensorflow/python/lib/io/file_io.py", line 117, in read
self._preread_check()
File "/home/sara/.local/lib/python3.8/site-packages/tensorflow/python/lib/io/file_io.py", line 79, in _preread_check
self._read_buf = _pywrap_file_io.BufferedInputStream(
tensorflow.python.framework.errors_impl.NotFoundError: frozen_inference_graph.pb; No such file or directory
Actually the frozen_inference_graph.pb file is in the same directory as the script, so I don't know what is different with respect to the plain Python execution.
Any ideas or other example to run a DNN in ROS using tensorflow?
Best regards.
Alessandro
Originally posted by Alessandro Melino on ROS Answers with karma: 113 on 2021-03-22
Post score: 0
Answer:
Actually the frozen_inference_graph.pb file is in the same directory as the script, so I don't know what is different with respect to the plain Python execution.
I believe this is similar/a duplicate of #q235337.
Summarising: the CWD (current working directory) when running a node with rosrun or roslaunch is not the same as when you start a script yourself.
The CWD for all nodes is $HOME/.ros. So unless there is a file called frozen_inference_graph.pb in $HOME/.ros, the error you mention is expected.
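A common way to make the node robust to this (a sketch in the answer's spirit, not code from the original posts) is to build the model path from the script's own location rather than the CWD:

```python
import os

def model_path(filename):
    """Absolute path to a file that lives next to this script, independent of
    the current working directory (which is $HOME/.ros for nodes started
    with rosrun/roslaunch)."""
    script_dir = os.path.dirname(os.path.abspath(__file__))
    return os.path.join(script_dir, filename)
```

With this, `PATH_TO_CKPT = model_path("frozen_inference_graph.pb")` resolves correctly no matter where the node is launched from.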
Originally posted by gvdhoorn with karma: 86574 on 2021-03-22
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Alessandro Melino on 2021-03-23:
Yes, that was the problem, I must use the whole path of the file to reference it. Thank you. | {
"domain": "robotics.stackexchange",
"id": 36223,
"tags": "python"
} |
Why do we take $V$ of the surface of a uniformly charged conducting sphere when we find its capacitance? | Question: Capacitance $=Q/|∆V|$, and we take $|∆V|$ of the surface of a uniformly charged conducting sphere when we are dealing with a capacitor consisting of a system in which one sphere is positively charged and the other is negatively charged, and the negatively charged sphere is placed at infinity, so that the potential due to the negatively charged sphere on the surface of the positively charged sphere will be zero, and likewise at the points surrounding the positively charged sphere. While finding the capacitance of the positively charged sphere we take $|V|$ (i.e. $|V|=|∆V|=$ potential due to the negative sphere $-$ potential on the surface of the positive sphere), but why don't we take $∆V$ between other points outside the sphere in the electric field of the positively charged sphere, as we do in the parallel-plate capacitor, to find the capacitance?
Answer: The original text of the question remains hard for me to understand, but you clarified what was confusing you in the comments and now I can attempt to answer.
Capacitance is always defined as a relationship between two objects. It's not one object that "has a capacitance" it's two objects that "have a capacitance between them." If I have two conductors called $A$ and $B$, the capacitance between them $C_{AB}$ tells us how much voltage we produce when we put a charge $Q$ on conductor $A$. It might help for you to go a few pages forward in your textbook and learn about capacitance matrices - I think that helps with conceptual understanding of capacitance.
However, sometimes we talk about "self capacitance," with only one object in question... so what is that? It's the capacitance between the object and the ground. So first you assume that the ground is the definition of zero voltage. Then you say that the ground is very far from your object. To a good approximation, instead of adding a flat plane of zero voltage to your system far below your conductor, you can instead calculate the voltage difference between your conductor and points that are very far away.
So ultimately in your problem you're imagining putting a charge Q on a spherical conductor. You're asking what is the voltage between that sphere and a point that's very far away. The experimental situation is that you have a spherical conductor floating (or sitting on an insulating stand) pretty far from the ground (relative to its radius). Then to a good approximation, you calculate the voltage difference between the spherical conductor and the ground. | {
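As a numeric footnote (the closed-form $C = 4\pi\varepsilon_0 R$ for an isolated sphere is the standard result for this setup, though it is not derived in the answer above):

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def sphere_self_capacitance(radius_m):
    """Self-capacitance of an isolated conducting sphere: C = Q/V = 4*pi*eps0*R."""
    return 4 * math.pi * EPS0 * radius_m

print(sphere_self_capacitance(6.371e6))  # an Earth-sized sphere: only ~0.7 mF
```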
"domain": "physics.stackexchange",
"id": 96296,
"tags": "electrostatics, capacitance"
} |
Can/should a reward function depend on something other than state in a DQN | Question: Is it OK to have a reward function in a DQN, or any RL algorithm, that depends on variables other than the environment state? I'm asking because, so far, I'm learning from tutorials, but I've never seen a reward function working with anything other than the environment.
My Case: Imagine a trading bot. In order to get the reward, I'd need to know the position (how much of a stock/cryptocurrency I have) to know if the bot won or lost money in the taken step. But I don't want to have the current position in the environment's state, as it is not important for calibration. Important variables in the environment are other key market indicators.
I thought of having a separate variable with auxiliary data, such as the current position. So the reward function would rely on new_state, previous_state and aux_data.
Is this OK? Bad practice?
Answer: Reward functions often depend on variables not present in the current state. Trading is the perfect example. Try as we might, we are unlikely to capture every variable necessary to perfectly predict the movement of a price and in effect the agent's reward. This is not typically a desirable property for an environment to have.
But if as you say the present positions are not important for calibration then feel free to test the hypothesis. I can drive a car without a speedometer but do not ask me to drive with a blindfold on. Not every missing variable has disastrous consequences despite their effect on the reward. | {
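A minimal sketch of what the question describes (all names and numbers here are hypothetical, not from the posts): the environment keeps the position as auxiliary data used only by the reward, never exposed in the observation handed to the DQN.

```python
class TradingEnv:
    def __init__(self, prices):
        self.prices = prices   # stand-in for the market indicators
        self.t = 0
        self.position = 0.0    # aux_data: units held, NOT part of the observation

    def step(self, action):
        """action: -1 = sell one unit, 0 = hold, +1 = buy one unit."""
        self.position += action
        prev_price = self.prices[self.t]
        self.t += 1
        new_price = self.prices[self.t]
        # reward depends on aux data (the position), not just on the observation
        reward = self.position * (new_price - prev_price)
        observation = new_price  # what the agent sees excludes the position
        done = self.t == len(self.prices) - 1
        return observation, reward, done
```

Whether hiding the position hurts learning is exactly the hypothesis the answer suggests testing empirically.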
"domain": "ai.stackexchange",
"id": 4200,
"tags": "reinforcement-learning, machine-learning, deep-rl, dqn, reward-functions"
} |
Can you predict the resistance of conjugated polymers as a function of degree of polymerization (DP)? | Question: Electrical properties of very simple systems can be calculated using NEGF. Is there a way to calculate the resistance (and other electrical properties) of a conjugated polymer, as a function of degree of polymerization (DP)?
Answer: Short answer: No.
First off, the degree of polymerization does not directly relate to the charge transport properties of conducting polymers (more below). You can find highly conductive oligomers.
Secondly, in my opinion, NEGF is not a suitable method to understand the charge transport in polymers. It works for nanoscale devices and molecular wires, but most polymer devices are on the ~100 nm to micron scale.
That's not to say that different oligomer and polymer lengths don't have different transport properties. Absolutely, different molecular weight polymers have different charge mobility, etc. The effects are usually due to a number of hard to characterize factors:
Concentration of defects and traps
Morphology of the film (e.g., higher MW might yield smoother or more ordered films)
Polydispersity (e.g., broader MW distribution is often problematic)
Effective conjugation length
Packing
etc.
The electronic structure of oligomers and conjugated polymers can be simulated fairly well using first-principles methods, including DFT. This can often predict components like the energy levels, band gaps, band dispersion, etc. for crystals and parameters for amorphous films such as the Marcus reorganization energy which is often related to the charge mobility in many of these materials.
What you usually find is that as a function of chain length, charges become more delocalized and the reorganization energy drops (increasing mobility). In reality, this trend should saturate, although most DFT methods incorrectly delocalize charges completely.
Also, most devices with conjugated polymers are amorphous or polycrystalline, making device performance predictions difficult. Transport is not usually band-like, but some form of variable-range hopping between chains or sites within a chain. | {
"domain": "chemistry.stackexchange",
"id": 3943,
"tags": "computational-chemistry, polymers, electricity"
} |
Why does pouring sanitizer on the hand feel much cooler than water, although both evaporate? | Question: Which absorbs more heat from the hand?
Sanitizer or water?
Both absorb heat from the hand, so why does pouring sanitizer on the hand make it feel much cooler than pouring water?
Answer: Most sanitizers contain alcohol. The alcohol evaporates faster than water at room temperature. So it absorbs heat at a faster rate than water. You can compare the sensation with those sticky sanitizers used by some stores that contain no alcohol and leave your hand greasy/sticky. These do not feel cool to me. And by "cool" I mean both temperature-wise and regarding the sensation you feel when you, unknowingly, use them.
Edit - to add more detail
When your hand is in contact with a liquid which is colder than the body temperature there are two things that happen:
Heat is transferred from the warmer body to the colder one. Water has a larger specific heat, so it can absorb more heat than other liquids for the same change in temperature.
The liquid evaporates and this decreases the temperature of the liquid. The relevant quantity here is the latent heat of evaporation but also the vapor pressure which tells you how fast the fluid will evaporate.
So, when you compare water with alcohol both specific heat and latent heat are larger for water than for alcohol. But the alcohol evaporates faster.
And what you have is a thin layer of alcohol in contact with your hand. The main effect is the heat lost by evaporation of the alcohol. The fact that the alcohol has a lower specific heat capacity is another "plus", as it heats up faster from the hand and this contributes to an increased evaporation rate.
But if you dip your hand in a specific amount of alcohol or water closed in an insulated container, so that neither one evaporates in the process and they don't lose heat to the environment, you will have to transfer more heat to the water than to the alcohol to reach equilibrium. But this is not what gives the sensation of heat or cold. The sensation does not depend on how much heat I am going to lose in the next hour but on the rate of heat transfer right now. At least it looks like this, approximately. The sensation is not linear in the rate of heat transfer, I suppose.
"domain": "physics.stackexchange",
"id": 83155,
"tags": "thermodynamics, temperature, evaporation"
} |
is there any way to fuse 2d lidar readings and camera readings in ros | Question:
Hi, I am using a 2D lidar tilted at an angle such that it hits the ground, and we are also using a mono camera for detection of (static) objects. Is there any way that I can fuse these two readings, i.e. project the lidar readings onto the camera readings, so that I can measure the distance of an object through this fused data (width and x, y positions of the boundaries)?
Originally posted by rajnunes on ROS Answers with karma: 73 on 2016-06-17
Post score: 1
Answer:
your answer is here.
Originally posted by dinesh with karma: 932 on 2016-06-19
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 24977,
"tags": "lidar, camera"
} |
Energy stored in system or field? | Question: I am having difficulty in understanding whether fields store momentum and energy or particles store them or both fields and particles store them?
System of potential energy
In the above source it mentions that energy is stored in system but in this source
Energy stored in fields
they mention energy is stored in field. So which one is correct?
Please also give an explanation why?
I am trying to ask here if any of the choices I mentioned is correct, why is it correct? Why are other choices wrong? But I am not trying to ask if momentum and energy can exist together. So I feel it's not a duplicate of the question mentioned.
Answer: They are both correct, because they talk about different concepts of energy. There are dozens of different concepts of energy in physics. In this case, one is related to the other in a subordinate manner; the energy of the EM field is a generalization, a broader concept than electric potential energy.
Electric potential energy of system of particles is a pre-relativistic concept. Let there be some charged particles in some definite region of space, let us call these particles "a system". Potential energy of this system is sum of contributions that depend solely on positions of the particles in space, or mathematically, on coordinates of these particles in some reference frame; it does not depend on their velocities or other variable quantities. It is a property of the system, but it is not necessarily "distributed" across all points of the region. It is just a number assigned to the system, there is no spatial localization of it implied. In non-relativistic theory, the law of conservation of energy states that sum of kinetic energies of all particles and total potential energy is constant:
$$
E_{kinetic} + E_{potential} = const.,
$$
but where in space exactly this energy "is" is immaterial, all that matters is numerical value of this energy and how it depends on positions and velocities of particles.
This concept of potential energy as function of particle positions works (in the sense of allowing formulation of conservation of energy) in many scenarios that are studied in electrostatics.
It however becomes insufficient whenever induced electric fields or other electromotive forces need to be considered. We know that outside the phenomena of electrostatics, the law of conservation of energy needs to be modified to include other contributions to energy, so it will still hold.
In a relativistic theory such as classical electromagnetic theory, whenever the charged particles move, the potential energy assigned in the usual manner, based on particle positions, doesn't give all electromagnetic energy. In other words, in relativistic EM theory one cannot, in general case where particles move, formulate law of conservation of energy using just kinetic energy of particles and potential energy based on particle positions.
However, Maxwell's equations allow us to formulate the law of local conservation of energy, where all energies, including electromagnetic energy, are assigned to a region of space via a distribution, which gives "density" of that energy type at every point of the region. Total energy of this region can be calculated based on values of these densities there. Thus EM energy is thought of as localized to space, with some definite density. Sometimes it is called EM field energy because in macroscopic EM theory, the distribution at any point of space is a function of total electromagnetic field at that point.
However, it should be noted that the formula for this electromagnetic energy is still not unique - there is an infinite number of formulae that give valid distribution of EM energy density and all are equivalent, at least in special relativistic setting, where gravity can be ignored. | {
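The most common member of that family is the standard macroscopic density $u = \varepsilon_0 E^2/2 + B^2/2\mu_0$; as a numerical sketch (not part of the original answer):

```python
import math

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
MU0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m (pre-2019 defined value)

def em_energy_density(E, B):
    """Standard EM energy density u = eps0*E^2/2 + B^2/(2*mu0), in J/m^3."""
    return 0.5 * EPS0 * E**2 + B**2 / (2 * MU0)

print(em_energy_density(1e3, 0.0))  # a 1 kV/m field stores ~4.4e-6 J/m^3
```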
"domain": "physics.stackexchange",
"id": 59233,
"tags": "electromagnetism, energy, momentum, energy-conservation, field-theory"
} |
What is the spectrum of $\hat x \hat p + \hat p \hat x$? | Question: In quantum mechanics we know that the canonical position $\hat x$ and momentum operator $\hat p$ satisfying
\begin{align}
[\hat x,\hat p] = i \quad (\hbar = 1)
\end{align}
have continuous spectrum.
We also know that the spectrum of the operator
\begin{align}
H = \frac{1}{2}\left(\hat p^2 + \hat x^2 \right),
\end{align}
is discrete, nondegenerate, and it has equal spacing between the eigenvalues.
Is something known about the spectrum and eigenkets of the operator
\begin{align}
\hat O = \hat x \hat p + \hat p \hat x~?
\end{align}
Answer: This is a nice exercise! It can be completely solved with relatively elementary mathematical techniques.
Let us start by assuming $\hbar:=1$, defining the formally selfadjoint differential operator over smooth functions $$D := \frac{1}{2}(XP +PX) = -i \left(x \frac{d}{dx}+\frac{1}{2}I\right)\:,$$ and proving that it admits a unique selfadjoint extension over natural spaces of functions used in elementary formulation of QM in $L^2(\mathbb{R},dx)$: the space $\cal S(\mathbb{R})$ of Schwartz' functions and ${\cal D}(\mathbb{R}):= C^\infty_c(\mathbb{R})$.
Later we will pass to determine the spectrum of the (unique selfadjoint extension of) $D$ we will indicate by $A$.
I stress that without fixing the domain and proving that the operator is selfadjoint thereon or, more weakly, that it admits only one selfadjoint extension on that domain, every physical interpretation as an observable is meaningless and the properties of the spectrum have no clear interpretation.
$\cal S(\mathbb{R})$ and ${\cal D}(\mathbb{R})$ are the most natural and used domains of (essentially) selfadjointness of operators discussed in QM on $L^2(\mathbb{R},dx)$. For instance, the position, momentum, and harmonic Hamiltonian operators are defined thereon giving rise to the known selfadjoint operators.
When $D$ is interpreted as a generator of some one-parameter group of symmetries of a larger group, the domain is fixed in accordance with Gårding theory and it may be different from the two cases considered above. Generally speaking, the algebraic properties are not able to fix a selfadjoint extension of the formal observable. For this reason, the analysis of the domain and of selfadjoint extensions is a crucial step of the physical interpretation.
Part 1. To prove that $D$ is essentially selfadjoint, we show that $D$ is the restriction of the selfadjoint generator of a strongly-continuous one-parameter group of unitary operators $U_t$ and we exploit Stone's theorem and a corollary.
If $\psi \in L^2(\mathbb{R},dx)$, we define the natural unitary action of dilation group on wavefunctions
$$(U_t\psi)(x):= e^{t/2}\psi(e^tx)\:.\tag{1}$$
(The apparently extra factor $e^{t/2}$ is actually necessary to preserve the norm of wavefunctions.)
It is not difficult to prove that, if $t \in \mathbb{R}$,
$$\langle U_t \psi|U_t \phi \rangle = \langle \psi|\phi\rangle\:.$$
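This norm preservation can be checked numerically on a test function (a quick sanity check, not part of the original answer; the Gaussian and grid are my own choices):

```python
import numpy as np

def dilation(psi, t, x):
    """(U_t psi)(x) = e^{t/2} psi(e^t x) -- the dilation action from (1)."""
    return np.exp(t / 2) * psi(np.exp(t) * x)

psi = lambda x: np.exp(-x**2)         # Gaussian test function
x = np.linspace(-50.0, 50.0, 200001)
dx = x[1] - x[0]
norm0 = np.sum(np.abs(psi(x))**2) * dx            # L^2 norm^2 of psi: sqrt(pi/2)
norm_t = np.sum(np.abs(dilation(psi, 0.7, x))**2) * dx
print(norm0, norm_t)  # both ~1.2533
```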
Furthermore $U_0=I$, $U_tU_s = U_{t+s}$. Finally, it is possible to prove that
$$\int_{\mathbb{R}}|e^{t/2}\psi(e^t x) - \psi(x)|^2 dx \to 0 \quad \mbox{for $t\to 0$,}$$
if $\psi \in L^2(\mathbb{R}, dx)$. Stone's theorem implies that there is a selfadjoint operator $A : D(A) \to L^2(\mathbb{R},dx)$ such that $U_t = e^{itA}$ and its dense domain it is just defined by the set of $\psi \in L^2(\mathbb{R}, dx)$
such that, as $t\to 0$,
$$\int_{\mathbb R}\left|\frac{(U_t\psi)(x)-\psi(x)}{t}-i \psi'(x) \right|^2 dx \to 0$$
for some $\psi' \in L^2(\mathbb{R},dx)$. Evidently
$$A\psi := \psi'$$
Now observe that, if $\psi$ is smooth
$$\frac{\partial}{\partial t}|_{t=0} e^{t/2}\psi(e^t x)= i (D\psi)(x)\:.$$
Actually, a suitable use of Lagrange's theorem and Lebesgue's dominated convergence theorem strengthens the found result to
$$\int_{\mathbb R}\left|\frac{e^{t/2}\psi(e^t x)-\psi(x)}{t}-i (D\psi)(x) \right|^2 dx \to 0$$
for either $\psi \in {\cal S}(\mathbb{R})$ or $\psi \in {\cal D}(\mathbb{R})$.
Stone's theorem implies that $D$ is a restriction to these dense subspaces of the selfadjoint generator of $U_t$. Actually, since both $U_t {\cal S}(\mathbb{R}) \subset {\cal S}(\mathbb{R})$ and $U_t {\cal D}(\mathbb{R}) \subset {\cal D}(\mathbb{R})$, a known corollary of Stone's theorem implies that $A$ restricted to these spaces admits a unique selfadjoint extension given by $A$ itself.
In other words, $D$ is essentially selfadjoint over ${\cal S}(\mathbb{R})$ and ${\cal D}(\mathbb{R})$ and the unique selfadjoint extensions are exactly the generator $A$ of the unitary group $U_t$ defined in (1).
Part 2. Let us pass to determine the spectrum of $A$. The idea is to reduce to the spectrum of the momentum operator (in two copies) through a (pair of) unitary map(s).
If $\psi \in L^2(\mathbb{R}, dx)$, let us decompose $\psi = \psi_- + \psi_+$, where $\psi_\pm(x) := \psi(x)$ if $x<0$ or $x>0$ respectively, and $\psi_\pm(x) :=0$ in the remaining cases. Evidently $\psi_\pm \in L^2(\mathbb{R}_\pm, dx)$ and the said decomposition realizes the direct orthogonal decomposition
$$L^2(\mathbb{R}, dx) = L^2(\mathbb{R}_-, dx) \oplus L^2(\mathbb{R}_+, dx)\:.$$
It is evident from (1) that $U_t L^2(\mathbb{R}_\pm, dx) \subset L^2(\mathbb{R}_\pm, dx)$, so that the generator $A$ of $U_t$ also admits these orthogonal subspaces as invariant spaces, and the spectrum of $A$ is the union of the spectra of the respective restrictions $A_\pm$.
Let us focus on $L^2(\mathbb{R}_\pm, dx)$ defining a unitary map
$$V_\pm : L^2(\mathbb{R}_\pm, dx) \ni \psi \mapsto \phi_\pm \in L^2(\mathbb{R}, dy)$$
with
$$\phi_\pm(y) = e^{\pm y/2}\psi_\pm(\pm e^{\pm y})\tag{2}\:.$$
With this definition, it is clear that
$$\phi_\pm (y+t) = e^{\pm t/2}e^{\pm y/2}\psi_\pm(\pm e^{\pm t}e^{\pm y})\:,$$
which means
$$e^{itP}V_\pm = V_\pm e^{\pm itA_\pm}\:,$$
where $P$ is the standard momentum operator.
Since $V_\pm$ is unitary,
$$\sigma_c(A_\pm) = \sigma_c(\pm P) = \mathbb{R}\:,\quad \sigma_p(A_\pm) = \sigma_p(\pm P) = \emptyset\:.$$
We conclude that $$\sigma(A)= \sigma_c(A) = \mathbb{R}\:.$$
The introduced construction also permits us to construct a family of improper eigenvectors of $A$, exploiting the fact that $P$ has a well-known generalized basis of $\delta$-normalized eigenfunctions
$$\phi_k(y) = \frac{e^{iky}}{\sqrt{2\pi}}\:, \quad k \in \mathbb{R} \equiv \sigma_c(P)\:.$$
Taking advantage of the unitaries $V_\pm$, we invert (2) and conclude that $A$ admits a
generalized basis of $\delta$-normalized eigenfunctions (check computations please, I think this is the first time I do them!)
$$\psi^{(k)}_\pm(x) = \frac{(\pm x)^{1\mp ik}}{\sqrt{2\pi}}\quad \mbox{if $x\in \mathbb{R}_\pm$,}\quad \psi^{(k)}_\pm(x) =0 \quad \mbox{otherwise}\:.$$
Notice that, for every $k\in \mathbb{R}$, there is a couple of independent eigenfunctions, so that the spectrum is twice degenerate.
ADDENDUM. The found operator $A$ (unique selfadjoint extension of $D$) is one of the three generators of a unitary representation of the conformal group $PSL(2,\mathbb{R})$, acting on the compactified real line, the one associated to pure dilations. I remember that many years ago I published a paper on the subject, but I do not remember if I analysed the spectrum of $A$ there... | {
"domain": "physics.stackexchange",
"id": 60299,
"tags": "quantum-mechanics, hilbert-space, operators, eigenvalue"
} |
What is meant by "Noisy Intermediate-Scale Quantum" (NISQ) technology? | Question: Preskill introduced recently this term, see for example Quantum Computing in the NISQ era and beyond (arXiv). I think the term (and the concept behind it) is of sufficient importance that it deserves to be explained here in a pedagogical manner. Probably it actually merits more than one question, but the first one needs to be:
What are Noisy Intermediate-Scale Quantum (NISQ) technologies?
Answer: When we talk about quantum computers, we usually mean fault-tolerant devices. These will be able to run Shor's algorithm for factoring, as well as all the other algorithms that have been developed over the years. But the power comes at a cost: to solve a factoring problem that is not feasible for a classical computer, we will require millions of qubits. This overhead is required for error correction, since most algorithms we know are extremely sensitive to noise.
Even so, programs run on devices beyond 50 qubits in size quickly become extremely difficult to simulate on classical computers. This opens the possibility that devices of this sort of size might be used to perform the first demonstration of a quantum computer doing something that is infeasible for a classical one. It will likely be a highly abstract task, and not useful for any practical purpose, but it will nevertheless be a proof-of-principle.
Once this is done, we'll be in a strange era. We'll know that devices can do things that classical computers can't, but they won't be big enough to provide fault-tolerant implementations of the algorithms we know about. Preskill coined the term 'Noisy Intermediate-Scale Quantum' to describe this era. Noisy because we don't have enough qubits to spare for error correction, and so we'll need to directly use the imperfect qubits at the physical layer. And 'Intermediate-Scale' because of their small (but not too small) qubit number.
So what applications might devices in NISQ era have? And how will we design the quantum software to implement them? These are questions that are far from being fully answered, and will likely require quite different techniques than those for fault-tolerant quantum computing. | {
"domain": "quantumcomputing.stackexchange",
"id": 5427,
"tags": "terminology-and-notation, nisq"
} |
Which are the equations needed to calculate how much moving Earth's water with dams would change Earth's rotation speed? | Question: According to this article Drops of Jupiter
Raising 39 trillion kilograms of water 175 meters above sea level will
increase the Earth’s moment of inertia, and thus slow its rotation.
However, the impact will be extremely small. NASA scientists
calculated the shift of such a mass will increase the length of day by
only 0.06 microseconds
Is it possible to know which equations NASA used, or is it too complicated for regular people to calculate (i.e., would it require NASA to do the math)?
Answer: You need to calculate the change in the moment of inertia of the Earth and use conservation of angular momentum. In the case of eg. a large dam, (most of) the water will ultimately come from the sea, effectively removing a thin layer of water. The contribution of the removed water to the moment of inertia depends on the distance from the rotation axis and hence on the latitude. This is a fairly simple calculation if we assume the world is all ocean. A more precise calculation would have to take into account the shapes of the oceans. You also need the moment of inertia of the earth, which depends on the density as a function of depth.
The moment of inertia of the lake is $m(R\cos L)^2$ and the moment of inertia of a spherical shell is $\frac{2}{3}mR^2$, where m is the mass of water, $R$ is the Earth's radius and $L$ is the latitude of the lake (30.82305 degrees for Three Gorges). The relative change in the moment of inertia of the earth is then
$$\frac{mR^2(\cos^2 L - \frac{2}{3})}{I}$$
(where $I$ is the Earth's moment of inertia, $8.04×10^{37}$ kg·m$^2$) or
$$1.97×10^{-11}(\cos^2 L - \frac{2}{3}) .$$
Multiplying by the number of microseconds in a day ($8.64 \times 10^{10}$) gives:
$$1.7(\cos^2 L - \frac{2}{3}) = 1.7 × 0.071 = 0.12\,\mu\text{s}$$
Why the difference from NASA's $0.06$ ? Note that the expression changes sign at $\cos^2 L = \frac{2}{3}$ or $L ≈ 35$ degrees (fairly close to the latitude of Three Gorges). The Earth will actually speed up if the lake is at high latitudes and slow down if it is at the equator. The $\frac{2}{3}$ term comes from the "all ocean" assumption. I presume NASA took into account the shapes of the oceans and got a more accurate value for this term. | {
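The arithmetic above can be checked with a short script (using the values quoted in the answer):

```python
import math

m = 39e12                     # kg of water raised
R = 6.371e6                   # Earth's radius, m
I = 8.04e37                   # Earth's moment of inertia, kg m^2
lat = math.radians(30.82305)  # latitude of Three Gorges
us_per_day = 8.64e10          # microseconds in a day

# relative change in the Earth's moment of inertia ("all ocean" assumption)
rel_change = m * R**2 * (math.cos(lat)**2 - 2.0 / 3.0) / I
print(rel_change * us_per_day)  # ~0.12 microseconds
```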
"domain": "earthscience.stackexchange",
"id": 1630,
"tags": "earth-rotation"
} |
Why are there no green stars? | Question: There are red stars, and orange stars, and yellow stars, and blue stars, and they are all understandable save the fact that there is a 'gap': There are no green stars.
Is this because of hydrogen's chemical properties (e.g. the emission spectrum) or some other reason? Or are there just green stars that I have no idea of? If so, I need some pictures; they must look awesome.
Answer: Human color vision is based on three types of "cones" in the eye that respond differently to different wavelengths of light. Thus, not counting overall brightness, the human color space has two degrees of freedom. In contrast, the spectra of stars are very close to a black body, which depends only on effective temperature. As one varies the temperature, the color of a star should make a one-dimensional curve in this color space. Thus, unless some perverse shenanigans are going on, it is intuitive that we necessarily miss most of colors, i.e. there will be no stars of those colors.
Our Sun actually has a peak at about $500\,\mathrm{nm}$, which is a green. However, that's just the peak: since the Sun also radiates lots of light with shorter wavelength (bluer) and also longer wavelengths (redder), the resulting mixture doesn't look green to human eyes.
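The quoted ~$500\,\mathrm{nm}$ peak follows from Wien's displacement law, $\lambda_{peak} = b/T$; a quick check (the Sun's effective temperature of ~5778 K is an assumed input, not stated in the answer):

```python
WIEN_B = 2.897771955e-3  # Wien's displacement constant, m*K

def peak_wavelength_nm(temperature_k):
    """Peak wavelength of a blackbody spectrum via Wien's law, in nm."""
    return WIEN_B / temperature_k * 1e9

print(peak_wavelength_nm(5778))  # ~501 nm, squarely in the green
```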
An image from wikipedia on the color of a blackbody of a given temperature:
Note that the colors near the edges of this color space aren't accurate, because the sRGB standard used in computers only covers a fairly small triangular portion of it. Still, that's a complication that's not very important here.
"domain": "astronomy.stackexchange",
"id": 865,
"tags": "star, stellar-astrophysics, hydrogen"
} |
The relation between $T^{\alpha\beta}$ and its trace | Question: I have a simple question. Is it true?
$$T^{\alpha\beta}T_{\alpha\beta}=T^2$$
Where $T$ is the trace of $T^{\alpha\beta}.$
I think they are different.
Answer: Since $T_{\alpha\beta}T^{\beta\gamma} = T^2{_\alpha^{~~\gamma}}$, $T_{\alpha\beta}T^{\alpha\beta}$ is the trace of the squared tensor $T^2_{\alpha\beta}$, rather than the square of the trace of $T_{\alpha\beta}$. | {
"domain": "physics.stackexchange",
"id": 49707,
"tags": "homework-and-exercises, tensor-calculus, trace"
} |
Package/function to convert the word to one hot vector in Python NLP | Question: Is there a package or function in NLP which can be used to convert a word into a one-hot vector?
Thank you.
Answer: There are several ways to convert words to one-hot encoded vectors. Since I do not know the data structure in which you store your data, I assume it is going to be a list:
from keras.preprocessing.text import Tokenizer
samples = ['The', 'dog','mouse','elephant']
tokenizer = Tokenizer(num_words=len(samples) + 1)  # index 0 is reserved, so allow one extra slot
This builds the word index
tokenizer.fit_on_texts(samples)
one hot representation
one_hot_results = tokenizer.texts_to_matrix(samples, mode='binary')
By changing the mode from binary to 'tfidf' or 'count', you can make a matrix of any type, apart from one hot.
You can achieve the same result using other packages like sklearn. But it does involve a bit more lines of code. | {
"domain": "datascience.stackexchange",
"id": 2981,
"tags": "machine-learning, python, nlp"
} |
light intensity through a double slit | Question: When a light source with a given intensity I shines through a double slit (or a triple slit), how does the intensity of the first maximum compare with the initial one?
Answer: Following the classical experiment of the double slit, the light intensity is given by the Fraunhofer diffraction equation.
First argument
If we write the Fraunhofer formula explicitly as a function of the angle $\theta$ measured from the horizontal axis (i.e., the axis perpendicular to the detection wall), we obtain
$$ I(\theta) \propto \cos ^{2}\left[{\frac {\pi d\sin \theta }{\lambda }}\right]~\mathrm {sinc} ^{2}\left[{\frac {\pi b\sin \theta }{\lambda }}\right]$$
where the sinc function is defined as $\mathrm{sinc}(x) = \sin(x)/x$ for $x \neq 0$, and $\mathrm{sinc}(0) = 1$.
We can easily check that the maximum is obtained at $\theta=0$.
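This can also be verified numerically (a sketch; the slit separation $d$, slit width $b$ and wavelength $\lambda$ below are arbitrary illustrative values, not quantities from the question):

```python
import numpy as np

# Arbitrary illustrative parameters (not from the question).
wavelength = 500e-9  # m
d = 10e-6            # slit separation, m
b = 2e-6             # slit width, m

theta = np.linspace(-0.2, 0.2, 100001)  # angle in radians, includes 0
x = np.pi * d * np.sin(theta) / wavelength
y = np.pi * b * np.sin(theta) / wavelength

# np.sinc(t) = sin(pi*t)/(pi*t), so divide the argument by pi to get
# the plain sinc(y) = sin(y)/y appearing in the formula above.
intensity = np.cos(x) ** 2 * np.sinc(y / np.pi) ** 2

print(theta[np.argmax(intensity)])  # ~0: the central maximum
```

The secondary maxima of the $\cos^2$ factor are suppressed by the $\mathrm{sinc}^2$ envelope, so the global maximum sits at $\theta = 0$.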
Second argument
Since the question concerned the exact intensity of the light, I would like to post another possible derivation. Consider a double-slit experiment and suppose an electric field passes through the slits.
The total instantaneous electric field $\vec{E}$ at the point $P$ on the screen is equal to the vector sum of the two sources:
$$\vec{E}=\vec{E_1}+\vec{E_2}. $$
On the other hand, the Poynting flux $S$ is proportional to the square of the total field:
$$S\propto \vec{E}^2=(\vec{E_1}+\vec{E_2})^2=\vec{E_1}^2+\vec{E_2}^2+2\vec{E_1}\cdot\vec{E_2}.$$
Taking the time average of $S$, the intensity $I$ of the light at $P$ may be obtained as:
$$I = \langle S\rangle \propto \langle\vec{E_1}^2\rangle+\langle\vec{E_2}^2\rangle+2\langle\vec{E_1}\cdot\vec{E_2}\rangle. $$
The cross term $2\langle\vec{E_1}\cdot\vec{E_2}\rangle$ represents the correlation between the two light waves. For incoherent light sources, since there is no definite phase relation between $E_1$ and $E_2$ the cross term vanishes, and the intensity due to the incoherent source is simply the sum of the two individual intensities:
$$I_{inc}=I_1+I_2$$
For coherent sources, the cross term is nonzero. In fact, for constructive interference, $E_1=E_2$, and the resulting intensity is
$$I_{coh}=4I_1$$
which is four times greater than the intensity due to a single source. | {
"domain": "physics.stackexchange",
"id": 44155,
"tags": "optics, double-slit-experiment, interference, poynting-vector"
} |
Swift function to interpret Roman numerals (ported from JavaScript) | Question: I have written some Swift for the first time, being competent enough in Javascript and having some experience in Ruby and Python. For my education, I've written a function that parses a roman numeral string and returns its integer representation, first in Javascript (ES2015+):
const dict = [
['CM', 900], ['M', 1000], ['CD', 400], ['D', 500],
['XC', 90], ['C', 100], ['XL', 40], ['L', 50],
['IX', 9], ['X', 10], ['IV', 4], ['V', 5],
['I', 1],
]
function romanToInt (original) {
let temp = original
let int = 0
while (temp.length > 0) {
let found = false
for (const [glyph, quantity] of dict) {
if (temp.startsWith(glyph)) {
int += quantity
temp = temp.slice(glyph.length)
found = true
break
}
}
if (!found) {
// e.g. Error parsing roman numeral "MDCCLXXVI," at ","
throw new Error(`Error parsing roman numeral "${original}" at "${temp}"`)
}
}
return int
}
try {
romanToInt('MMXIV') // => 2014
} catch (err) {
console.error(err)
}
and then ported it to Swift 4:
let dict: [( glyph: String, quantity: Int )] = [
("CM", 900), ("M", 1000), ("CD", 400), ("D", 500),
("XC", 90), ("C", 100), ("XL", 40), ("L", 50),
("IX", 9), ("X", 10), ("IV", 4), ("V", 5),
("I", 1)
]
enum RomanNumericError: Error {
case badInput(original: String, temp: String)
}
func romanToInt(original: String) throws -> Int {
var temp = original
var int = 0
while temp.count > 0 {
var found = false
for (glyph, quantity) in dict {
if temp.starts(with: glyph) {
int += quantity
temp.removeFirst(glyph.count)
found = true
break
}
}
guard found == true else {
throw RomanNumericError.badInput(original: original, temp: temp)
}
}
return int
}
do {
try romanToInt(original: "MMXIV") // => 2014
} catch RomanNumericError.badInput(let original, let temp) {
print("Error parsing roman numeral '\(original)' at '\(temp)'")
}
I'm wondering about how swift-y my code is in terms of design patterns, especially in terms of error handling. In Javascript, throwing and catching errors is a very common control flow design, and I'm wondering if I'm approaching it from the right angle in Swift.
Answer: Let's do this from inside out. temp.starts(with: glyph) is correct, but
there is also a dedicated method temp.hasPrefix(glyph) for strings.
The loop to find the first dictionary entry with a matching prefix
can be shortened to
guard let (glyph, quantity) = dict.first(where: { temp.hasPrefix($0.glyph) }) else {
throw RomanNumericError.badInput(original: original, temp: temp)
}
(also making the var found obsolete.)
Mutating the temporary string can be avoided by working with a SubString (which is a kind of view into the original string) and
only updating the current search position:
var pos = original.startIndex
while pos != original.endIndex {
let subString = original[pos...]
// ...
pos = original.index(pos, offsetBy: glyph.count)
}
Naming: This is very opinion-based, here are my opinions:
Declare the function as func romanToInt(_ roman: String),
so that it is called without (external) argument name:
romanToInt("MMXIV").
Rename var int to var value.
dict is also a non-descriptive name (and it is not even a dictionary), something like glyphsAndValues might be a better choice.
Summarizing the suggestions so far, we have
func romanToInt(_ roman: String) throws -> Int {
var value = 0
var pos = roman.startIndex
while pos != roman.endIndex {
let subString = roman[pos...]
guard let (glyph, quantity) = glyphsAndValues.first(where: { subString.hasPrefix($0.glyph) }) else {
throw RomanNumericError.badInput(roman: roman, at: subString)
}
value += quantity
pos = roman.index(pos, offsetBy: glyph.count)
}
return value
}
Now the error handling. Yes, throwing an error is a good and Swifty
way to report a failure to the caller. (An alternative is to return an
optional value which is nil in the error case, but that does not
allow to provide additional error information.)
The creation of the error message however should be done in the
error class, by adopting the LocalizedError protocol:
enum RomanNumericError: Error {
case badInput(roman: String, at: Substring)
}
extension RomanNumericError: LocalizedError {
public var errorDescription: String? {
switch self {
case .badInput(let roman, let at):
return "Error parsing roman numeral '\(roman)' at '\(at)'"
}
}
}
The big advantage is that the caller does not need to know which
error the function throws, and can catch a generic Error:
do {
try print(romanToInt("MMXIV"))
try print(romanToInt("MMYXIV"))
} catch {
print(error.localizedDescription)
}
// Output:
// 2014
// Error parsing roman numeral 'MMYXIV' at 'YXIV' | {
"domain": "codereview.stackexchange",
"id": 29223,
"tags": "error-handling, swift, number-systems"
} |
Relativistic Correction to the Hydrogen Atom and Spherically Symmetric Operators | Question: We know, the lowest order relativistic correction to the Hamiltonian for the Hydrogen atom is
$$H^{'}=-\frac{p^{4}}{8m^{3}c^{2}}$$
Where,
$$p=\frac{\hbar}{i}\nabla$$
So, is $p^{2}=-\hbar^{2}\nabla^{2}$ and $p^{4}=\hbar^{4}\nabla^{2}(\nabla^{2})$?
In various sources, for calculating the first order correction to the energy, a fact has been utilized that the perturbation is spherically symmetric.
a) First of all, what should a spherically symmetric operator look like?
b) Secondly, does this mean that the operator $\nabla^{2}(\nabla^{2})$ is spherically symmetric? If so, how to prove that? Or is there any easy way to intuitively understand that? Any help is appreciated.
Answer: An operator is called spherically symmetric when it is invariant under rotations. The Laplace operator is invariant under rotations and translations (the Euclidean transformations).
In fact, the algebra of all scalar linear differential operators, with
constant coefficients, that commute with all Euclidean
transformations, is the polynomial algebra generated by the Laplace
operator.
For a hint to how to show that indeed the Laplace operator is invariant under rotations see e.g. this math.SE post.
For some intuitive hints on why the Laplace operator is spherically symmetric see this & this posts on math.SE and physics.SE. | {
"domain": "physics.stackexchange",
"id": 93825,
"tags": "quantum-mechanics, special-relativity, atomic-physics, perturbation-theory"
} |
How to read an image in Scilab | Question: In Matlab we use below function to read an image :
var = imread('IMAGE PATH');
But, how to do same thing in Scilab ?
Answer: Scilab Image and Video Processing toolbox provides imread() function.
The related How To entry gives example and it's similar to Matlab:
im=imread(filename);
See ATOMS article to learn how to install additional modules in Scilab. | {
"domain": "dsp.stackexchange",
"id": 1722,
"tags": "image-processing, matlab"
} |
Extracting positive frequencies of discrete-time signal | Question: Convolution in the time domain is the same as multiplication in the frequency domain.
My data is sampled at 200 Hz, which means that the Nyquist frequency is 100 Hz, and all frequency content is <= 100 Hz. If I take a rectangle function in the time domain, with 1s everywhere below 100 Hz, FFT it, and convolve my data with that, it shouldn't make a difference. Indeed, the FFT of the rectangle function is sin(pi*x)/(pi*x), which is 0 at all integers except the origin.
When FFTing something like cos(2*pi*x/40), a sine wave at 5 Hz, you get two peaks. One at 5 Hz as expected, and one at -5 Hz. This can be thought of as two spirals e^(2*pi*i*x/40)/2 and e^(-2*pi*i*x/40)/2, one rotating counterclockwise, the other clockwise. When added, their complex parts cancel out, leaving only a real sine wave.
I want to extract only the part of my data with positive frequencies. For example, cos(2*pi*x/40) should become e^(2*pi*i*x/40)/2. This should reasonably make the data contain complex samples.
I thought to do this by creating a rectangle function with 1s where 0 <= x <= 100 Hz, FFTing that, and then convolving my data with the result. The resulting function (according to WolframAlpha) is 0.5 * e^(-pi*i*x) * sin(pi*x/2)/(pi*x/2) (graph, real part blue, imaginary part red). The problem is that the imaginary part is always zero at integers, which is weird. Since my data is real, and this function is always real at integers, the result of convolution must be entirely real, which contradicts the previous paragraph.
What is going on here? And how do I actually get what I want?
Answer: The mistake lies in the modulation of the low pass impulse response. The low pass response equals $1$ for $\omega\in(-\pi/2,\pi/2)$ and is zero otherwise, and you want a frequency response that is zero for $\omega\in(-\pi,0)$ and $1$ for $\omega\in(0,\pi)$. Consequently, you need to shift the spectrum by $\pi/2$, and not by $\pi$ as you did. The corresponding impulse response of the system becomes
$$h[n]=\begin{cases}\displaystyle\frac12,&n=0\\\displaystyle\frac12 e^{jn\pi/2}\frac{\sin(n\pi/2)}{n\pi/2},&n\neq 0\end{cases}\tag{1}$$
which can be rewritten as
$$h[n]=\begin{cases}\displaystyle\frac12,&n=0\\\displaystyle0,&n\textrm{ even},n\neq 0\\\displaystyle\frac{j}{n\pi},&n\textrm{ odd}\end{cases}\tag{2}$$
So only the sample at $n=0$ is real-valued, all samples at even indices (other than zero) vanish, and all samples at odd indices are purely imaginary.
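As a numerical sanity check (a sketch), one can truncate the impulse response $(2)$, take its DFT, and confirm that the frequency response is close to $1$ on positive frequencies and close to $0$ on negative ones, up to ripple from the truncation:

```python
import numpy as np

# Truncated h[n] from Eq. (2): 1/2 at n = 0, j/(n*pi) at odd n, 0 otherwise.
N = 2001
n = np.arange(-(N // 2), N // 2 + 1)
h = np.zeros(N, dtype=complex)
h[n == 0] = 0.5
odd = n % 2 != 0
h[odd] = 1j / (np.pi * n[odd])

# DFT with the n = 0 sample moved to index 0, then shift so that the
# frequency axis runs from -1/2 to 1/2 cycles per sample.
H = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(h)))
f = np.fft.fftshift(np.fft.fftfreq(N))

pos = (f > 0.05) & (f < 0.45)   # stay away from the band edges
neg = (f < -0.05) & (f > -0.45)
print(np.max(np.abs(H[pos] - 1)), np.max(np.abs(H[neg])))  # both small
```

The ripple near the band edges is the usual Gibbs effect from truncating the slowly decaying $1/n$ tail.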
Consequently, the real part of the impulse response is just a scaled unit impulse, and its imaginary part is a scaled discrete-time Hilbert transformer:
$$h[n]=\frac12\big(\delta[n]+jg[n]\big)\tag{3}$$
with $g[n]$ the impulse response of an ideal discrete-time Hilbert transformer. | {
"domain": "dsp.stackexchange",
"id": 6599,
"tags": "fft, discrete-signals, convolution, hilbert-transform, complex"
} |
Describing the set of Running Time of all Turing Machines | Question: Consider the set of all valid Turing machine descriptions $T_{All}$, and the set of functions that denote the real (not asymptotic) running time of the Turing machines in $T_{All}$; let's call it $R_{All}$.
Since there are infinitely many functions, not all of which represent the running time of a valid Turing machine, how do we describe this subset of $R_{All}$ mathematically, say to a non-CS individual who doesn't know anything about Turing machines?
Is there an accurate, self contained, complete mathematical definition that describe this subset of mathematical functions without referring to anything else like TM?
Answer: It is not possible to describe the subset of functions that represent the running time of valid Turing machines in terms of pure mathematics, without reference to the concept of a Turing machine. This is because the concept of a Turing machine is fundamental to the definition of this subset of functions.
A Turing machine is a mathematical model of a hypothetical computer that can perform any computation that is theoretically possible, given enough time and memory. The running time of a Turing machine is the number of steps it takes to complete a computation, as a function of the size of the input.
To define the subset of functions that represent the running time of valid Turing machines, we must first define the concept of a Turing machine and specify the conditions under which a function can be considered to represent the running time of a Turing machine.
For example, we could say that a function f(n) belongs to the subset of functions that represent the running time of valid Turing machines if and only if there exists a Turing machine M such that, for every input x, the number of steps M takes on input x is equal to f(|x|). Here, |x| represents the size of the input x.
This definition captures the idea that the running time of a Turing machine is a function of the size of the input, and it links the concept of a Turing machine to the mathematical concept of a function. However, it does not provide a self-contained, complete mathematical definition of the subset of functions that represent the running time of valid Turing machines, as it relies on the concept of a Turing machine, which is not a purely mathematical concept. | {
"domain": "cs.stackexchange",
"id": 20778,
"tags": "complexity-theory, time-complexity, algorithm-analysis"
} |
Is SAT a single language or a union of languages? | Question: I know that a language is in NP if a Turing machine can decide the language of its checking relation $\{\text{boolean formula }\#\text{ truth assignment | truth assignment is correct}\}$ in polynomial time.
Here is my confusion. All Turing machine have a finite input alphabet $\Sigma$. Thus, none of them can solve any SAT problem with, say, $|\Sigma|+1$ variables. Thus, do we need another Turing machine with a larger input alphabet and its associated language to cover this problem? By induction, does this make SAT a union of an infinite number of languages?
If SAT is indeed a union of languages, and any Turing machine only responsible for solving boolean satisfiability problems with variables within its input alphabet, a Turing machine can simply "memorize" all possible 3-SAT problems without redundant clauses over this finite set of variables, and regurgitate the answer in linear time?
In other words, is there a loophole in the definintion of NP that allows for solving SAT by looking up pre-prepared answers? If such cheating is allowed, P=NP?
Answer: The way each variable is represented is a matter of how the problem is encoded, and the machine does not have to assign a separate symbol to each variable. Your question is similar to claiming that we cannot represent all graphs on a Turing machine because, for a fixed alphabet $\Sigma$, we can set the number of vertices to $|\Sigma| + 1$. We can represent each variable of SAT, or each vertex in a graph, using a binary number. Hence, a machine whose input and working alphabet consists of zero and one can represent any graph or Boolean formula. Note that any Turing machine can be translated to a Turing machine over the binary alphabet at the expense of a constant factor in time and space complexity, using the following trick.
Let $1, 2, 3, \dots, n$ be an enumeration of the elements of the alphabet, and let $m = \left\lfloor \log_2 n \right\rfloor + 1$. Replace each symbol with a word of length $m$ over $\{0, 1\}$, where the $i$th symbol becomes $\mathrm{bin}(i)$ padded with leading zeroes to length $m$. The machine reads the tape in blocks of $m$ bits (the block read so far can be stored in the machine's state).
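For illustration, the block encoding can be sketched in a few lines of Python (illustrative code, not part of the formal construction):

```python
from math import floor, log2

def encode(word, alphabet):
    """Encode each symbol as a fixed-width binary block of m bits."""
    n = len(alphabet)
    m = floor(log2(n)) + 1                                  # block width
    index = {sym: i + 1 for i, sym in enumerate(alphabet)}  # symbols 1..n
    return ''.join(format(index[s], 'b').zfill(m) for s in word)

# A 3-symbol alphabet needs m = 2 bits per symbol: a -> 01, b -> 10, c -> 11.
print(encode('aba', ['a', 'b', 'c']))  # 011001
```

Decoding is the reverse: read the tape in chunks of $m$ bits and map each chunk back to its symbol.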
Note: the way a machine encodes the input can have a big effect on its time and space complexity; compare the adjacency-list and adjacency-matrix representations of graphs.
"domain": "cs.stackexchange",
"id": 15253,
"tags": "complexity-theory, formal-languages, turing-machines, time-complexity, np-complete"
} |
Validating a model that requires database access | Question: Let's say I have this User model:
class User
{
public $id;
public $email;
public $password;
public $errors;
public function isValid()
{
if (strpos($this->email, '@') === false) {
$this->errors['email'] = 'Please enter an email address';
}
if (!$this->password) {
$this->errors['password'] = 'Please enter a password';
} elseif (strlen($this->password) < 4) {
$this->errors['password'] = 'Please enter a longer password';
}
return !$this->errors;
}
}
And let's say I have this UserDAO for retrieving, adding, updating, and deleting users:
class UserDAO
{
protected $conn;
protected $logger;
public function __construct(PDO $dbh, Logger $logger)
{
$this->dbh = $dbh;
$this->logger = $logger;
}
public function getUsers()
{
$rows = null;
try {
$rows = $this->dbh->query("SELECT * FROM users")->fetchAll();
} catch (PDOException $e) {
$this->logger->log($e->getMessage(), __METHOD__);
}
return $rows;
}
public function getUserById($id)
{
$row = null;
try {
$sth = $this->dbh->prepare("SELECT * FROM users WHERE id = ?");
$sth->execute(array($id));
$row = $sth->fetchObject('User');
} catch (PDOException $e) {
$this->logger->log($e->getMessage(), __METHOD__);
}
return $row;
}
public function addUser(User &$user)
{
$success = false;
try {
$sth = $this->dbh->prepare("
INSERT INTO users (email, password) VALUES (?, ?)
");
$sth->execute(array($user->email, $user->password));
if ($success = (bool) $sth->rowCount()) {
$user->id = $this->dbh->lastInsertId();
}
} catch (PDOException $e) {
$this->logger->log($e->getMessage(), __METHOD__);
}
return $success;
}
public function updateUser(User $user)
{
// ...
}
public function deleteUser($id)
{
// ...
}
public function isEmailUnique($email)
{
$count = 0;
try {
$sth = $this->dbh->prepare("SELECT COUNT(id) FROM users WHERE email = LOWER(?)");
$sth->execute(array($email));
$count = $sth->fetchColumn();
} catch (PDOException $e) {
$this->logger->log($e->getMessage(), __METHOD__);
}
return !$count;
}
}
When I process a form, I typically do something like this:
// ...
$userDAO = new UserDAO($dbh, $logger);
$user = new User();
$user->email = filter_input(INPUT_POST, 'email', FILTER_VALIDATE_EMAIL);
$user->password = filter_input(INPUT_POST, 'password');
// validate user
if ($user->isValid()) {
// check if email address is unique (SO UGLY!)
if (!$userDAO->isEmailUnique($user->email)) {
$user->errors['email'] = 'Please use a different email address';
}
// save user
if ($userDAO->addUser($user)) {
// ...
} else {
// ...
}
} else {
// do something with $user->errors
}
This works great for simple cases, but how can I improve this? In particular, how can I do validations that require database access? For example, if I wanted to check that the email is unique and make it part of my validation, what would be a better way of doing this?
Answer: Best approach would be to have another Object to do the validation based on specific configuration.
Your User should NOT be responsible for validating itself, neither should the DAO.
Validation is a specific task and should be handled by a specific Class.
Example of this approach would be something like this (omitting interfaces for brevity):
validation class
Class Validator
{
protected $errors = [];
public function __construct(DaoInterface $daoDependency, $otherDependency)
{
//...
}
public function validate($object, array $rules)
{
foreach ($rules as $rule)
{
$propertyName = $rule['property'];
if (!$this->isValidProperty($object, $propertyName, $rule['config'])) {
$this->errors[$object->$propertyName] = 'is invalid ' . $rule['message'];
}
}
}
public function isValid()
{
return count($this->errors) === 0;
}
public function getErrors()
{
return $this->errors;
}
protected function isValidProperty($object, $propertyName, $ruleConfig)
{
// $ruleConfig should have information about what validation you want to use, and the relevant validation should be executed based on it (it can use separate objects etc.)
if (!property_exists($object, $propertyName))
throw new MissingPropertyException();
if (isset($ruleConfig['email-unique-in-db'])) {
if ($this->daoDependency->emailExists($object->$propertyName)) {
return false;
}
}
if (isset($ruleConfig['email-is-valid'])) {
return strpos($object->$propertyName, '@') !== false;
}
// ... etc but better with objects for each validation type.
}
}
usage
$userDAO = new UserDAO($dbh, $logger);
$user = new User();
$user->email = filter_input(INPUT_POST, 'email', FILTER_VALIDATE_EMAIL);
$user->password = filter_input(INPUT_POST, 'password');
// get your validation rules
// you can have them stored in other object, plain array, yml whatever you like. for instance $rules = new UserConstrains(); or
$rules['config'] = [
'email' => [
'email-unique-in-db',
'email-is-valid'
],
'otherfield' => [
'callback' => ['someCallbackFunction']
]
];
// create validator
$validator = new Validator($userDao, $otherDepenency);
$validator->validate($user, $rules);
// validate user
if ($validator->isValid()) {
// .. do stuff
} else {
// do something with $validator->getErrors()
}
The separation of concerns is the most important thing here.
You can extend/decorate the validator class for specific use cases
Also if would recommend using some specialized validation library for this task for instance symfony2 validation component can be used stand alone: http://symfony.com/doc/current/book/validation.html | {
"domain": "codereview.stackexchange",
"id": 18541,
"tags": "php, object-oriented, validation"
} |
Plotting with interpolation using Matplotlib | Question: Program description:
You are given a set of two functions: $$f=x^3-6x^2+x+5; g=(x-2)^2-6$$
Plot them using Matplotlib on a user-input segment [a; b].
My solution:
import matplotlib.pyplot as plt
import numpy as np
from scipy.interpolate import interpolate # for smoothing
def func(x):
return [pow(x, 3) - 6 * pow(x, 2) + x + 5, pow((x-2), 2) - 6]
# plt.plot([1,5, -3, 0.5], [1, 25, 9, 0.5])
# plt.plot(1, 7, "r+")
# plt.plot(-1, 7, "bo")
f = []
f_1 = []
x = []
interval = [int(x) for x in input("Define segment (i.e. a b): ").split(' ')]
for val in range(interval[0], interval[1] + 1):
x.append(val)
f.append(func(val)[0])
f_1.append(func(val)[1])
linear_space = np.linspace(interval[0], interval[1], 300)
a_BSpline = interpolate.make_interp_spline(x, f)
b_BSpline = interpolate.make_interp_spline(x, f_1)
f_new = a_BSpline(linear_space)
f_1_new = b_BSpline(linear_space)
plt.plot(linear_space, f_new, color="#47c984",
linestyle="solid", linewidth=1)
plt.plot(linear_space, f_1_new, color="#fc6703",
linestyle="solid", linewidth=1)
plt.gca().spines["left"].set_position("zero")
plt.gca().spines["bottom"].set_position("zero")
plt.show()
Input: -10 10
Output:
Question:
Is there any way to make this code more concise?
Thank you in advance.
Answer:
As I pointed out in the comments, I do not think smoothing is needed when plotting functions that are already smooth, such as polynomials. In those cases, the graphs would look rough only if the points plotted are not dense enough (on the x axis).
Since you have already imported and used numpy, it would be better to use numpy.polyval for vectorized evaluation of polynomials.
Drawing spines at zero would not work well if the input range does not include zero. In that case, the "center" position might be used instead of "zero". (I will not implement this logic in my code below)
The top and right spines should not be shown.
Consider using plt.style.use for setting common style information. See here for alternative options for style setting.
Avoid constants like "#47c984". Name them with appropriate constant names for better readability.
It is a good practice to comply to the PEP 8 style when writing Python code. Use an IDE (such as PyCharm) or an online checker (such as this) for automatic PEP 8 violation checking.
When running code outside a method / class, it is a good practice to put the code inside a main guard. See here for more explanation.
Here is an improved version:
import numpy as np
import matplotlib.pyplot as plt
if __name__ == "__main__":
# Define constants (color names come from https://colornamer.robertcooper.me/)
INPUT_MESSAGE = "Lower and upper bounds of x, separated by a space: "
NUM_POINTS = 300
PARIS_GREEN = "#47c984"
BLAZE_ORANGE = "#fc6703"
# Read input
lb, ub = map(float, input(INPUT_MESSAGE).split())
# Compute coordinates of points to be plotted
x = np.linspace(lb, ub, NUM_POINTS)
f_x = np.polyval([1, -6, 1, 5], x)
g_x = np.square(x - 2) - 6
# Plotting
plt.style.use({"lines.linestyle": "solid", "lines.linewidth": 1})
plt.plot(x, f_x, color=PARIS_GREEN)
plt.plot(x, g_x, color=BLAZE_ORANGE)
# Adjust spines
spines = plt.gca().spines
spines["left"].set_position("zero")
spines["bottom"].set_position("zero")
spines["right"].set_visible(False)
spines["top"].set_visible(False)
# Display plot
plt.show() | {
"domain": "codereview.stackexchange",
"id": 40157,
"tags": "python, matplotlib"
} |
Launching gazebo server | Question:
BH
I'm trying to launch a gazebo server with the help of gzserver but I get this error:
Error [RTShaderSystem.cc:409] Unable to find shader lib. Shader generating will fail.Error [Server.cc:285] Could not open file[worlds/empty.world]
When I go to the directory where empty.world xml file is placed I see it exists.
This is its content:
<sdf version="1.4">
  <world name="default">
    <include>
      <uri>model://sun</uri>
    </include>
    <include>
      <uri>model://ground_plane</uri>
    </include>
    <physics type="ode">
      <max_step_size>0.01</max_step_size>
      <real_time_factor>1</real_time_factor>
      <real_time_update_rate>100</real_time_update_rate>
      <gravity>0 0 -9.8</gravity>
    </physics>
  </world>
</sdf>
Does someone have a clue what is wrong?
Originally posted by Haz88 on ROS Answers with karma: 41 on 2015-04-17
Post score: 1
Answer:
I encountered 'Error [Server.cc:285] Could not open file' when I tried launching gazebo with roslaunch gazebo_ros. I specified world_name:=<worldname.world> (from within the directory where the .world was located). When I added the full path to the file, the error went away.
Originally posted by mattc_vec with karma: 50 on 2015-06-12
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by DimitriosMalonas on 2016-10-05:
Where do I add the full path? | {
"domain": "robotics.stackexchange",
"id": 21461,
"tags": "ros, gazebo, world"
} |
Text tokenizer example | Question: Here is a partial example of a text tokenizer. I'm looking for ways to improve one particular line in this code:
implicit class textFile(val fileName: String) {
def toDict() = {
io.Source.fromFile(fileName).getLines.flatMap(_.split("\\r?\\n")).toList
}
}
def filterComments(m: List[String]) : List[String] = m match {
case Nil => List()
case x :: xs => x.takeWhile(c => c != '#').trim :: filterComments(xs)
}
def filterEmptyLines(m: List[String]) : List[String] = m match {
case Nil => List()
case x :: xs => if(x.isEmpty) filterEmptyLines(xs) else x :: filterEmptyLines(xs)
}
def splitParameters(m: List[String]) : List[String] = m match {
case Nil => List()
case x :: xs => x.split("\\s*(=>|,|\\s)\\s*").map(_.trim).toList ++ splitParameters(xs)
}
val dict = "test.txt".toDict()
println(splitParameters(filterEmptyLines(filterComments(dict))))
The line I would like to break up is the last one:
println(splitParameters(filterEmptyLines(filterComments(dict))))
Is there any idiomatic (Scala) way to make this more readable but keeping it compact, on the same line preferable?
Answer: I know you've found a solution you like for the final line of your code, but I'm a bit concerned with the overall performance and characteristics of your code.
Oh, I do have an alternative to your compactness solution but I will keep that for a different, shorter answer.
Confusion of metaphors
filterEmptyLines filters the lines from dict. filterComments does not. It transforms them from possibly-commented lines into lines with no comments. This may seem like a minor or pedantic point, but calling two different things by the same name can cause errors. stripComments would be a better name, I think.
Overengineered and unsafe functions
None of your three functions are stack safe. Each one of them is recursive but none of them is tail-recursive. This means that processing a large file could blow the stack.
Each one of them can be rewritten safely using combinator functions, which are usually more efficient than pattern matching and explicit recursion (and gives an extra bonus that I'll explain later).
filterComments
This is not tail recursive because it prepends the transformed x to the beginning of the list returned by filterComments(xs). It could be rewritten with a tail-recursive inner helper function, but this function can be done much more simply as a map...
def filterComments(m: List[String]): List[String] =
m map (_.takeWhile(c => c != '#').trim)
filterEmptyLines
Unsafe for the same reason as filterComments. Can be written safely as
def filterEmptyLines(m: List[String]): List[String] =
m filter (! _.isEmpty)
splitParameters
Same as the other two, but multiple recursive applications of ++ is even more expensive. This can be rewritten safely as
def splitParameters(m: List[String]): List[String] =
m flatMap (_.split("\\s*(=>|,|\\s)\\s*").map(_.trim))
Oh, what happened to .toList? Why does that work without it? The answer is that Scala's flatMap does an implicit conversion of the array to a list. Google canBuildFrom if you want to learn something about Scala collection internals.
Over-specific types
Once those three functions are rewritten to use combinators rather than pattern matching, they don't need to take or return List[String]. They could take and return Seq[String]. This gives you more freedom about what you pass in (could be List[String], could be some other Seq-based collection). No performance penalty (the appropriate filter/map/flatMap of the actual type will be called) and much more flexibility. OK, at the end you would have to convert the sequence back into a list (or whatever you want the final form to be), but this allows you to delay that decision till it is important. This is that extra bonus I mentioned before.
And I'm about to explain why using Seq - or possibly Iterator could give a big performance boost.
Multiple traversals and intermediate collections
getLines returns an iterator (a lazy, one-pass collection which only processes each element on demand). But you immediately convert it to a list, reading the whole (potentially large) file into memory. Then each transformation in turn creates an entirely new collection. So you actually create 4 collections in a row, traversing the entire contents of the file 3 times (possibly 4 if there are no empty or comment-only lines). But I'm pretty sure you only care about the final one.
Even if you always want to process the entire file, that's expensive (the bigger the file, the worse it gets). And what if you only want to process the first n lines or process the file in chunks, not wasting memory on processed and not-yet-processed chunks?
There's a pretty simple solution which will give you all those options (but which doesn't force you to overcomplicate things just because you might want those options later).
Be lazy (use views, iterators or streams)
Don't keep a reference to any of the intermediate transformations.
Being Lazy
Option 1: List View
If you're sure you always want to read the whole file into a list, turn that list into a view. When you apply `map`, `filter` or `flatMap` to a view, it doesn't process the whole collection yet. It returns the original collection wrapped in a delayed transformation and only applies the transformation when you ask for the elements and only to the elements you ask for. If you just apply another transformation to the new view, again, no processing is done, you just get a new view. So if you apply all three functions to the view and only then ask for the results as a list, the original collection will only be traversed once, applying all three transformations to one element before proceeding to the next.
If you only take the first 10 lines from the final view, it will only process lines until it has 10 non-empty, non-comment-only lines. The rest of the collection inside the view will remain untouched till you ask for it.
And none of the functions needs to know that laziness is happening, if they take and return Seq[String], because a view is a Seq. Bonus.
The drawback, compared to iterators, is that the entire original collection stays in memory until there is no reference to it and it can safely be garbage-collected.
Option 2: Iterators
getLines returns an iterator. Why not stick with it? `map`, `filter` and `flatMap` work the same with iterators as with views. So if your three functions take and return Iterator[String], it just works.
The bonus is that 1) the whole file isn't even read until you ask for it all to be processed and 2) the original, pre-processed lines are definitely thrown away and garbage collected.
The danger is that you have to be very careful to only traverse each intermediate iterator once. So don't keep any references around.
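The laziness (and the single-pass hazard) can be sketched by analogy in Python, whose generators behave much like Scala iterators — lazy and traversable only once. The line format and splitting rules here are illustrative, not taken from the original code:

```python
# Analogous sketch: chained generators do no work until the result is consumed,
# and the whole pipeline traverses the input only once.

lines = iter(["# comment", "", "a => b, c", "d e"])

no_comments = (l for l in lines if not l.lstrip().startswith("#"))
no_blanks = (l for l in no_comments if l.strip())
params = (p.strip()
          for l in no_blanks
          for p in l.replace("=>", ",").replace(" ", ",").split(","))

result = [p for p in params if p]   # only now is anything actually read
print(result)  # ['a', 'b', 'c', 'd', 'e']
```

As with Scala iterators, trying to consume `params` a second time would yield nothing — hence the advice to keep no references to intermediate stages.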
Option 3: Stream
You could convert the getLines iterator into a stream. This has the lazy file-reading advantage of the iterator and doesn't have the touch-only-once limitation of iterators. However, it's actually a touch trickier than with iterators to make sure you don't keep the whole file in memory. Oh, and Stream is a Seq.
Whichever of the 3 options you choose, you can still have filterEmptyLines and friends take/return either Seq[String] or Iterator[String], because all Seq descendants have a toIterator method and vice versa. So you can write them not to care about laziness or how you actually implement it.
Don't keep references (till you really need one)
For different reasons, you lose some of the benefits of laziness if you keep references to any of the intermediate collections (including the output of toDict). With iterators, it's dangerous. So
"test.txt".toDict().filterComments.filterEmptyLines.splitParameters
is better than
val dict = "test.txt".toDict()
dict.filterComments.filterEmptyLines.splitParameters
(assuming the implicit class is added) | {
"domain": "codereview.stackexchange",
"id": 15614,
"tags": "parsing, scala"
} |
Calculate the commutator between the operator $S_z$ and the operator $S_x$ using the Dirac notation | Question: Calculate the commutator between the operator $S_z$ and the operator $S_x$ using the Dirac notation.
In standard matrix notation I proved the relation $[S_z,S_x]=ihS_y$
My attempt in Dirac Notation: $$
\hat S_z= \frac h 2|\uparrow\rangle \langle \uparrow| \, - \frac h 2 \,|\downarrow\rangle \langle \downarrow|
$$
$$
\hat S_x= \frac h 2|\uparrow\rangle \langle \uparrow| \, - \frac h 2 \,|\downarrow\rangle \langle \downarrow|
$$
Hence, [$\hat S_z$, $\hat S_x$] = $\hat S_z$$\hat S_x$-$\hat S_x$$\hat S_z$
However when I calculate this I obtain the following relation:
$\frac {h^2} 4[|\uparrow\rangle \langle \uparrow|+ |\downarrow\rangle \langle \downarrow|]$ - $\frac {h^2} 4[|\uparrow\rangle \langle \uparrow|+ |\downarrow\rangle \langle \downarrow|]$
Which does not seem to be correct. Could anyone please show me where I might be going wrong.
My guess would be that my spectral decomposition of the operators is incorrect but I'm not sure.
Answer: Your $S_x$ is the same as your $S_z$, which is wrong. So the commutator will be wrongfully zero. The correct form is
$$S_x=\frac{\hbar}{2}\bigg(|\uparrow\rangle\langle\downarrow|+|\downarrow\rangle\langle\uparrow|\bigg)$$ | {
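As a quick numerical sanity check of $[S_z, S_x] = i\hbar S_y$ (plain Python, $\hbar = 1$, matrices written in the $\{\uparrow,\downarrow\}$ basis):

```python
# Check [S_z, S_x] = i*hbar*S_y for spin-1/2 with 2x2 complex arithmetic.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def sub(A, B):
    return [[A[i][j] - B[i][j] for j in range(2)] for i in range(2)]

hbar = 1.0
Sz = [[hbar / 2, 0], [0, -hbar / 2]]
Sx = [[0, hbar / 2], [hbar / 2, 0]]
Sy = [[0, -1j * hbar / 2], [1j * hbar / 2, 0]]

commutator = sub(matmul(Sz, Sx), matmul(Sx, Sz))
expected = [[1j * hbar * Sy[i][j] for j in range(2)] for i in range(2)]
print(commutator == expected)  # True
```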
"domain": "physics.stackexchange",
"id": 65171,
"tags": "quantum-mechanics, homework-and-exercises, angular-momentum, operators"
} |
Friedmann equations question | Question: Friedmann equations for critical density is:
$$\rho_c = \frac{3H^2}{8\pi G}$$
Is there any other way to write this equation? For example:
$$\rho_c = \frac{3}{8\pi GH^2}$$
I saw the above form on another website, and was wondering if it was right?
I think he used Hubble time instead of Hubble's parameter.
Does this work? Even though it isn't like the original equation of Critical Density?
Answer: The web site you link is using the expression:
$$ \rho_c = \frac{3}{8\pi G \theta^2} $$
where $\theta$ is the Hubble time and is equal to $1/H$. So your second equation should be:
$$ \rho_c = \frac{3}{8\pi G \theta^2} = \frac{3}{8\pi G \left(1/H\right)^2} = \frac{3H^2}{8\pi G} $$ | {
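The algebra can be checked numerically; the value of $H$ below is illustrative (roughly 70 km/s/Mpc expressed in s⁻¹):

```python
import math

# Check that 3 H^2 / (8 pi G) equals 3 / (8 pi G theta^2) with theta = 1/H.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
H = 2.2e-18     # illustrative Hubble parameter, s^-1

rho_c = 3 * H**2 / (8 * math.pi * G)
theta = 1 / H   # Hubble time
rho_c_from_theta = 3 / (8 * math.pi * G * theta**2)

print(rho_c, rho_c_from_theta)  # identical up to floating-point rounding
```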
"domain": "physics.stackexchange",
"id": 16318,
"tags": "general-relativity, cosmology, terminology, space-expansion"
} |
Remove stain from synthetic leather | Question: I have a valuable soccer ball. It is not real leather, but a synthetic leather.
I used this cleaning agent to remove a few marks:
https://www.amazon.co.uk/Motsenbockers-Lift-Off-Graffiti-Remover-Trigger/dp/B00N3B25LU
However, I stupidly forgot to wait for the ball to dry and then placed it within an orange plastic carrier bag. Now the cleaning agent has reacted with the dye and there's a yellow-tint stain.
I have tried using acetone and these:
https://www.amazon.co.uk/Dr-Beckmann-Stain-Devils-Survival/dp/B07TKJPH7D/ref=sr_1_14?keywords=dr+beckmann&qid=1575488301&sr=8-14
but I cannot remove the stain.
Does anyone with chemistry knowledge know what else I could try?
UPDATE:
Got it out in the end. I used Dr Beckmann Stain Devils tea & wine, cut some cloth in to the shape of the area I wanted to clean, poured water, then the powder, then water on top and left for hours. Took multiple attempts.
Answer: The dye has probably migrated a bit into your ball's surface, so whatever (solvent) you try to get it out with again, it might take a while.
Remember the classic school (chromatography) experiment: You draw a line with a felt pen on a piece of paper, 2 cm from the edge, and stick the paper into a glass filled only 1cm high with acetone.
You can try to transport the dye so far into the ball you don't see it any more, or try to "fill the glass up to 3cm with acetone". ;)
And remember: It's hard to tell how resistant your ball and its original painting is to acetone or hair bleach or soda.
"domain": "chemistry.stackexchange",
"id": 12913,
"tags": "organic-chemistry, reaction-mechanism, everyday-chemistry"
} |
UDP server and JSON parser script | Question: I am a Python beginner and wrote a script which should run on a Raspberry PI as a UDP server. It reads and writes to a serial device and has to process and forward JSON strings.
Because there is relatively much functionality included and I don't like cross compiling, I was too lazy to write a C program but instead chose Python. To avoid serial read/write blocks I even used threads. However I noticed that sometimes I get timeouts.
Are there some methods to make this script maybe faster or even to guarantee timings? Additionally I am open for style tips.
# Make this Python 2.7 script compatible to Python 3 standard
from __future__ import print_function
# For remote control
import socket
import json
import serial
# For sensor readout
import logging
import threading
# For system specific functions
import sys
import os
import time
# Create a sensor log with date and time
layout = '%(asctime)s - %(levelname)s - %(message)s'
logging.basicConfig(filename='/tmp/RPiQuadrocopter.log', level=logging.INFO, format=layout)
# Socket for WiFi data transport
udp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_sock.bind(('0.0.0.0', 7000))
client_adr = ""
# Thread lock for multi threading
THR_LOCK = threading.Lock()
#pySerial
pySerial = 0
def init_serial(device_count = 10):
counter = 0
baud_rate = '115200'
while counter < device_count:
com_port = '/dev/ttyACM%d' % (counter)
try:
# rtscts=1 is necessary for raspbian due to a bug in the usb/serial driver
pySerial = serial.Serial(com_port, baud_rate, timeout=0.1, writeTimeout=0.1, rtscts=1)
except serial.SerialException as e:
logging.debug("Could not open serial port: {}".format(com_port, e))
print ("Could not open serial port: {}".format(com_port, e))
com_port = '/dev/ttyUSB%d' % (counter)
try:
# rtscts=1 is necessary for raspbian due to a bug in the usb/serial driver
pySerial = serial.Serial(com_port, baud_rate, timeout=0.1, writeTimeout=0.1, rtscts=1)
except serial.SerialException as e:
logging.debug("Could not open serial port: {}".format(com_port, e))
print ("Could not open serial port: {}".format(com_port, e))
if counter == device_count-1:
return False, 0
counter += 1
else:
return True, pySerial
else:
return True, pySerial
#chksum calculation
def chksum(line):
c = 0
for a in line:
c = ((c + ord(a)) << 1) % 256
return c
def send_data(type, line):
# calc checksum
chk = chksum(line)
# concatenate msg and chksum
output = "%s%s*%x\r\n" % (type, line, chk)
try:
bytes = pySerial.write(output)
except serial.SerialTimeoutException as e:
logging.error("Write timeout on serial port '{}': {}".format(com_port, e))
finally:
# Flush input buffer, if there is still some unprocessed data left
# Otherwise the APM 2.5 control boards stucks after some command
pySerial.flushInput() # Delete what is still inside the buffer
# These functions shall run in separate threads
# recv_thr() is used to catch sensor data
def recv_thr():
global client_adr
ser_line = ""
try:
while pySerial.readable():
# Lock while data in queue to get red
THR_LOCK.acquire()
while pySerial.inWaiting() > 0:
# Remove newline character '\n'
ser_line = pySerial.readline().strip()
try:
p = json.loads(ser_line)
except (ValueError, KeyError, TypeError):
# Print everything what is not a valid JSON string to console
print ("JSON format error: %s" % ser_line)
# Send the string to the client after is was flagged
nojson_line = '{"type":"NOJSON","data":"%s"}' % ser_line
if client_adr != "":
bytes = udp_sock.sendto(nojson_line, client_adr)
else:
logging.info(ser_line)
if client_adr != "":
bytes = udp_sock.sendto(ser_line, client_adr)
THR_LOCK.release()
# Terminate process (makes restarting in the init.d part possible)
except:
os.kill(os.getpid(), 15)
# trnm_thr() sends commands to APM2.5
def trnm_thr():
global client_adr
msg = ""
try:
while pySerial.writable():
try:
# Wait for UDP packet from ground station
msg, client_adr = udp_sock.recvfrom(512)
except socket.timeout:
# Log the problem
logging.error("Read timeout on socket '{}': {}".format(adr, e))
else:
try:
# parse JSON string from socket
p = json.loads(msg)
except (ValueError, KeyError, TypeError):
logging.debug("JSON format error: " + msg.strip() )
else:
# remote control is about controlling the model (thrust and attitude)
if p['type'] == 'rc':
com = "%d,%d,%d,%d" % (p['r'], p['p'], p['t'], p['y'])
THR_LOCK.acquire()
send_data("RC#", com)
THR_LOCK.release()
# Add a waypoint
if p['type'] == 'uav':
com = "%d,%d,%d,%d" % (p['lat_d'], p['lon_d'], p['alt_m'], p['flag_t'] )
THR_LOCK.acquire()
send_data("UAV#", com)
THR_LOCK.release()
# PID config is about to change the sensitivity of the model to changes in attitude
if p['type'] == 'pid':
com = "%.2f,%.2f,%.4f,%.2f;%.2f,%.2f,%.4f,%.2f;%.2f,%.2f,%.4f,%.2f;%.2f,%.2f,%.4f,%.2f;%.2f,%.2f,%.4f,%.2f;%.2f,%.2f,%.2f,%.2f,%.2f" % (
p['p_rkp'], p['p_rki'], p['p_rkd'], p['p_rimax'],
p['r_rkp'], p['r_rki'], p['r_rkd'], p['r_rimax'],
p['y_rkp'], p['y_rki'], p['y_rkd'], p['y_rimax'],
p['t_rkp'], p['t_rki'], p['t_rkd'], p['t_rimax'],
p['a_rkp'], p['a_rki'], p['a_rkd'], p['a_rimax'],
p['p_skp'], p['r_skp'], p['y_skp'], p['t_skp'], p['a_skp'] )
THR_LOCK.acquire()
send_data("PID#", com)
THR_LOCK.release()
# This section is about correcting drifts while model is flying (e.g. due to imbalances of the model)
if p['type'] == 'cmp':
com = "%.2f,%.2f" % (p['r'], p['p'])
THR_LOCK.acquire()
send_data("CMP#", com)
THR_LOCK.release()
# With this section you may start the calibration of the gyro again
if p['type'] == 'gyr':
com = "%d" % (p['cal'])
THR_LOCK.acquire()
send_data("GYR#", com)
THR_LOCK.release()
# Ping service for calculating the latency of the connection
if p['type'] == 'ping':
com = '{"type":"pong","v":%d}' % (p['v'])
if client_adr != "":
bytes = udp_sock.sendto(com, client_adr)
# User interactant for gyrometer calibration
if p['type'] == 'user_interactant':
THR_LOCK.acquire()
bytes = pySerial.write("x") # write a char into the serial device
pySerial.flushInput()
THR_LOCK.release()
# Terminate process (makes restarting in the init.d part possible)
except:
os.kill(os.getpid(), 15)
# Main program for sending and receiving
# Working with two separate threads
def main():
# Start threads for receiving and transmitting
recv=threading.Thread(target=recv_thr)
trnm=threading.Thread(target=trnm_thr)
recv.start()
trnm.start()
# Start Program
bInitialized, pySerial = init_serial()
if not bInitialized:
print ("Could not open any serial port. Exit script.")
sys.exit(0)
main()
Answer: Style
init_serial is quite hard to follow. It took me a while to realize that code in a nested try-except is not a retry. Factor out the identical code into a function. Also, there is no need to return a success status in a separate boolean: returning None is just fine, in a boolean context it is False:
For example:
def init_serial(device_count = 10):
baudrate = '115200'
for counter in range (device_count):
port = open_port('/dev/ttyACM%d' % (counter), baudrate)
if port:
return port
port = open_port('/dev/ttyUSB%d' % (counter), baudrate)
if port:
return port
return None
def open_port(portname, baudrate):
try:
return serial.Serial(portname, baudrate, timeout=0.1, writeTimeout=0.1, rtscts=1)
except serial.SerialException as e:
logging.debug("Could not open serial port: {}".format(portname, e))
print ("Could not open serial port: {}".format(portname, e))
return None
Remote control definitely must be factored out into a separate function. Better yet, create a function for each type and put them in a dictionary indexed by possible values of p['type'].
For example:
type, com = form_command[p['type']](p)
send_data(type, com)
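A sketch of that dispatch dictionary; the handler names below are illustrative, not part of the original script, and `kind` is used instead of `type` to avoid shadowing the builtin:

```python
# Dispatch-table sketch: one small formatter per message type, looked up by
# p['type'] instead of a chain of if-statements.

def form_rc(p):
    return "RC#", "%d,%d,%d,%d" % (p['r'], p['p'], p['t'], p['y'])

def form_cmp(p):
    return "CMP#", "%.2f,%.2f" % (p['r'], p['p'])

form_command = {
    'rc': form_rc,
    'cmp': form_cmp,
    # ... one entry per message type ('uav', 'pid', 'gyr', ...)
}

p = {'type': 'cmp', 'r': 1.5, 'p': -0.25}
kind, com = form_command[p['type']](p)
print(kind, com)  # CMP# 1.50,-0.25
```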
Synchronization
Your code serializes reads and writes to the serial port. Do they need to be serialized?
In any case, the serial receiver locks the port for an unnecessarily long time: the heavyweight operations such as JSON parsing, network sends, exception handling and logging have nothing to do with the serial port access, and shall not execute while the lock is acquired.
For a network receiver, the lock is available for a minuscule amount of time, which may very well lead to a starvation.
If you do not have a strong reason for locking (and I do not see any), just get rid of it. If I am wrong, and there are reasons, minimize the locking time to an absolute necessity.
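A minimal sketch of what minimizing the locking time could look like; `send_data` below is a stand-in for the serial write, not the original function:

```python
import threading

# Parse and format with the lock released; hold it only around the shared
# serial access itself (here simulated by appending to a list).

THR_LOCK = threading.Lock()
sent = []

def send_data(kind, payload):
    sent.append((kind, payload))  # placeholder for pySerial.write(...)

def handle(msg):
    # Heavy work (parsing, formatting) happens outside the lock...
    kind, payload = msg.split(':', 1)
    # ...and only the actual shared-resource access is serialized.
    with THR_LOCK:
        send_data(kind, payload)

handle("RC#:1,2,3,4")
print(sent)  # [('RC#', '1,2,3,4')]
```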
"domain": "codereview.stackexchange",
"id": 8235,
"tags": "python, multithreading, json, socket, serial-port"
} |
Newton's Second Law the real one. Is my theory correct? | Question: In Newton's Principia, $\sum \vec{F}_{ext} = \frac{d\vec{p}}{dt}$
If the momentum vector is in multiple dimensions, wouldn't a more general equation be
$\sum \vec{F}_{ext} = \vec{\nabla}{\vec{p}}$
I realize we are only taking a single variable derivative with respect to t, but can I still do this?
Answer: Nope, the total momentum $\vec p$ (and similarly the force $\vec F$, which is its time derivative) is just one vector for the whole system (or whole space) so it makes no sense to differentiate it with respect to spatial coordinates, and $\nabla$ is a symbol for the differentiation with respect to $x,y,z$.
That doesn't mean that equations similar to those you are thinking about don't exist. In the mechanics of liquids, solids, and gases, one may talk about the "density" of forces and density of energy and momentum: we want to know not only the total momentum or force but also "where it is located". The density of momentum is combined with the flux of momentum which is expressed by the ($3\times 3$) stress tensor (which is generalized to a $4\times 4$ stress-energy tensor in relativity). There exist equations involving a gradient when one talks about the stress tensor although this page
http://en.wikipedia.org/wiki/Stress_(physics)
doesn't really offer too many of them... You may see the conservation law for the stress-energy tensor
http://en.wikipedia.org/wiki/Energy-momentum_density#Conservation_law
which is a generalization of Newton's second law along the lines you proposed. It has gradients but the right hand side is zero: that's because the force is immediately rewritten as the time derivative of the momentum carried by other parts of the physical system. | {
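For reference, that conservation law (with $T^{\mu\nu}$ the stress-energy tensor and repeated indices summed, in the usual relativistic conventions) reads:

```latex
% Local conservation of energy and momentum: gradients appear, but the
% right-hand side is zero because the force is rewritten as momentum flux.
\partial_\mu T^{\mu\nu} = 0, \qquad \nu \in \{t, x, y, z\}
```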
"domain": "physics.stackexchange",
"id": 2127,
"tags": "newtonian-mechanics"
} |
Compile error using Eigen 3 in Diamondback (Inverse Matrix) | Question:
Hi,
I'm working on writing a FastSLAM 2.0 implementation to use with our system. I have the line of code:
Eigen::Matrix3d sigma_x = (G_v[current_feature].transpose() * (Z[current_feature].inverse() * G_v[current_feature])) + motion_noise_.inverse();
G_v[current_feature] is a 2x3 matrix, Eigen::Matrix<double,2,3>
Z[current_feature] is a 2x2 matrix, Eigen::Matrix2d
motion_noise_ is a 3x3 matrix, Eigen::Matrix3d
So it becomes sigma_x = (3x2) * ((2x2) * (2x3)) + (3x3)
When I try to compile this I get the following error:
/home/msands/ros/smart_wheelchair/chair_SLAM/src/chair_SLAM.cpp:209: error: invalid use of incomplete type ‘const struct Eigen::internal::inverse_impl >’
/home/msands/ros/geometry/eigen/include/Eigen/src/Core/util/ForwardDeclarations.h:233: error: declaration of ‘const struct Eigen::internal::inverse_impl >’
It has something to do with the calling of inverse() but I can't figure out what it is. Anyone have any thoughts?
-Mike
Originally posted by MikeSands on ROS Answers with karma: 86 on 2011-03-24
Post score: 0
Answer:
Well I figured it out but I figured I'd post it up here in case anyone else runs into this issue. Even though inverse() is part of the MatrixXd type you still need to include Eigen/LU for it to complete the type and allow it to compile. (I only had Eigen/Core included)
Originally posted by MikeSands with karma: 86 on 2011-03-24
This answer was ACCEPTED on the original site
Post score: 7 | {
"domain": "robotics.stackexchange",
"id": 5200,
"tags": "ros, eigen, inverse"
} |
Turtlebot odometry/imu calibration fails with Kinect mounted upside down | Question:
Hi all!
I recently added an adxrs613 gyro to my setup and am trying to calibrate the odometry.
Because my kinect is mounted upside down on my robot I expected that some small changes in the code of calibrate.py are needed for everything to function correctly. I expected that the change in calibrate from
scan_delta = 2*pi + normalize_angle(scan_end_angle - scan_start_angle)
to
scan_delta = 2*pi - normalize_angle(scan_end_angle - scan_start_angle)
and in align from
angle = self.scan_angle
to
angle = - self.scan_angle
would suffice to fix this?
However my problem is that, during the calibration run, I notice that especially at the lowest speed (first calibration rotation) the turtlebot rotates much too far (about 1.5x). I guess because the bot rotates way too far, it is not able to find the straight wall anymore and the next calibration run starts off at a wrong position. These higher speed calibration rotations do however seem to rotate approximately 360 degrees, but then of course also end up facing away from the wall and thus not finding it correctly. I checked if the data coming from odometry and imu is making sense by echoing when using keyboard_teleop and it does.
Any suggestions on what's going wrong and how to fix it?
Originally posted by Marco on ROS Answers with karma: 77 on 2011-09-21
Post score: 1
Answer:
You need to change your urdf to account for the fact that the optical frames are now all upside down, which will fix the sign problem.
The calibration tells you if it can't find the wall; if that happens, start by lowering the default calibration so that it will find the wall.
If the TurtleBot does not successfully return to facing the wall before executing
the next rotation the data will be incorrect. This can be caused by too big of an
error in the existing parameters. If it under-rotates, increase the odom parameter and
retry. If it over-rotates, lower the odom parameter and retry.
Originally posted by mmwise with karma: 8372 on 2011-09-23
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 6745,
"tags": "navigation, odometry, calibration, turtlebot, gyro"
} |
How old are Pecten fossils in general? | Question: Many years ago, I found a piece of rock on a lakeside in Switzerland (see picture below).
I think that it could be a "fossil" from some Pecten species (do you agree?).
My question is: how old such a fossil can be, approximately? Is it like 1000 years, or rather 1'000'000 years?
I have no idea, that's why I just would be interested to have your opinion on this topic.
I don't know if it is frequent or rather uncommon to find such "fossils".
Thank you for your help!
Answer: Pectens seem to have evolved from Chlamys in the late Eocene or early Oligocene. The coarseness and rib spacing of Vertipectens has been slowly evolving since the early and Mid-Miocene, so they have been around for at least 16 million years, maybe up to 20 million years. If you want to be precise you will need to read up on the fine detail. Pectens are a classic case of 'ontogeny recapitulating phylogeny', and their identification is complicated by parallel and convergent evolution of different species. As fossils, they are quite common, and probably more useful as indicators of the palaeo-environment than as age indicators. Where you find such fossils you will very likely find associated micro-fossils in the same rock, which are more diagnostic of age.
"domain": "earthscience.stackexchange",
"id": 876,
"tags": "fossils, paleontology"
} |
Evolution and Phenotypes. | Question: This may be better suited for the English language SE, but When discussing evolutionary changes in species is it proper to refer to their phenotypes?
In this context:
"Imagine if a cow did not have to cart around however many gallons of
culture nor expend personal time and effort into chewing its cud. It
could lose all that anatomy and energy expenditure from its phenotype."
it could just be me, but it sounds strange using it in that context.
Answer: Yes, absolutely. That's what a phenotype is (definition from the biology online dictionary):
noun, plural: phenotypes
(1) The physical appearance or biochemical characteristic of an
organism as a result of the interaction of its genotype and the
environment.
(2) The expression of a particular trait, for example, skin color,
height, behavior, etc., according to the individual’s genetic makeup
and environment.
I don't see anything strange in the sentence you quoted. It is phenotypes that are selected for. Genotype differences that don't affect phenotype will have no selective advantage (or disadvantage) and cannot be acted upon by evolutionary forces. What else would you use when discussing evolutionary changes? That's what the word phenotype is for. | {
"domain": "biology.stackexchange",
"id": 3985,
"tags": "evolution, terminology"
} |
Can people with AIDS get tattoos? | Question: When I do a Google search, most of the results are about whether or not people can get HIV / AIDS from getting a tattoo through dirt needles. I am, however, curious whether or not it is possible to get a tattoo if you have AIDS.
Quoting wikipedia's entry on tattoos:
Tattooing involves the placement of pigment into the skin's dermis,
the layer of dermal tissue underlying the epidermis. After initial
injection, pigment is dispersed throughout a homogenized damaged layer
down through the epidermis and upper dermis, in both of which the
presence of foreign material activates the immune system's phagocytes
to engulf the pigment particles.
If your immune system is not working properly, I can imagine that the phagocytes might never respond to the tattoo ink and that the pigment might not ever enter the fibroblasts?
Answer: Short answer
People with HIV can get tattoos.
Background
In Africa there are countries that tattoo people identified with HIV (Source: Kenya Today) and some people with HIV find comfort in tattooing biohazard symbols and related images on themselves to express their illness (Source: CNN).
However, as rightly mentioned by @AMR, macrophages, which are mainly responsible for ink fixation, express CD4 and are therefore targets that HIV can infect.
Hence, while I don't think HIV affects tattooing given the fact that HIV-infected people get tattoos, in theory HIV could reduce the ink fixation. | {
"domain": "biology.stackexchange",
"id": 4387,
"tags": "human-biology, immunology, pathology, hiv, aids"
} |
Find files with similar contents | Question: I'm not normally a C programmer but thought I'd take a shot at making something useful. This is a C utility I wrote to judge how similar several files are using the algorithm I think diff traditionally uses. I'm interested in general feedback. I don't write C usually but I do read it from time to time to try and pick up on the best practices.
This program takes a bunch of file paths as arguments and spits out a longest common subsequence (LCS) length for all pairs. It hashes the contents of the files first to speed up the LCS algorithm a bit.
Some things I can call attention to:
Code portability across Unix-like systems. Am I using something that is limiting? I have read that getline() isn't everywhere...
Use of realloc() in hash_file(). Never used it before and took a guess at how I could use it to grow a buffer dynamically.
Best practices in general. Is there a smarter way to do something?
Maybe encoding issues? I don't really assume any encoding until looking at whitespace characters.
This is the meat of the code, but the full source is available if you need it.
static hash_t
hash_string(const char* str)
{
hash_t hash = 0;
int c;
// sdbm hash function
while((c = *str++)) {
if(opt_ignore_whitespace && isspace(c))
continue;
hash = c + (hash << 6) + (hash << 16) - hash;
}
// We use a hash of zero to indicate EOF.
hash = (hash == 0) ? 1 : hash;
return hash;
}
static void
hash_file(FILE *fp, hash_t **buffer)
{
char *line = NULL, *linestart = NULL;
size_t linecap = 0, linecount = 0, lineguess = 100;
ssize_t linelen;
if((*buffer = malloc(lineguess * sizeof(hash_t))) == NULL) {
perror("malloc()");
exit(EXIT_FAILURE);
}
errno = 0;
while((linelen = getline(&line, &linecap, fp)) > 0) {
++linecount;
linestart = line;
// Trim whitespace
if(opt_trim) {
while(isspace(*linestart)) {
++linestart;
--linelen;
}
while(isspace(linestart[linelen - 1])) {
--linelen;
}
linestart[linelen] = '\0';
}
// Ignore the line if it is blank
if(opt_ignore_blanks && (
(linestart[0] == '\r' && linestart[1] == '\n') ||
linestart[0] == '\n' ||
linestart[0] == '\0'
)
) {
--linecount;
continue;
}
// Rellocate memory if we have underestimated the file line count.
if(linecount > lineguess) {
lineguess *= 2;
if((*buffer = realloc(*buffer, lineguess * sizeof(hash_t))) == NULL) {
perror("realloc()");
exit(EXIT_FAILURE);
}
}
(*buffer)[linecount-1] = hash_string(linestart);
errno = 0;
}
if(errno != 0) {
perror("getline()");
exit(EXIT_FAILURE);
}
// Shrink the buffer down to the actual file length.
if((*buffer = realloc(*buffer, (linecount + 1) * sizeof(hash_t))) == NULL) {
perror("realloc()");
exit(EXIT_FAILURE);
}
// "Terminate" the sequence of hashes with zero.
(*buffer)[linecount] = 0;
if(line != NULL) {
free(line);
}
}
static void
hash_files(const char *const* filenames, hash_t **hashes, size_t length)
{
size_t i;
for(i = 0; i < length; ++i) {
FILE *fp = fopen(filenames[i], "r");
if(fp == NULL) {
perror(filenames[i]);
exit(EXIT_FAILURE);
}
hash_file(fp, &hashes[i]);
fclose(fp);
}
}
unsigned long
lcs_length(hash_t *buf1, size_t buflen1, hash_t *buf2, size_t buflen2)
{
size_t i, j;
unsigned long *swap, *this_row, *last_row, lookup1, lookup2, result;
hash_t h1, h2;
if(buflen1 == 0 || buflen2 == 0) {
return 0;
}
// Make sure buf2 is the smaller buffer (to allocate less memory).
if(buflen2 > buflen1) {
size_t tmplen;
hash_t *tmpbuf;
tmpbuf = buf1;
tmplen = buflen1;
buf1 = buf2;
buflen1 = buflen2;
buf2 = tmpbuf;
buflen2 = tmplen;
}
if((this_row = malloc(buflen2 * sizeof(unsigned long))) == NULL) {
perror("malloc()");
exit(EXIT_FAILURE);
}
if((last_row = malloc(buflen2 * sizeof(unsigned long))) == NULL) {
perror("malloc()");
exit(EXIT_FAILURE);
}
for(i = 0; i < buflen1; i++) {
h1 = buf1[i];
for(j = 0; j < buflen2; j++) {
h2 = buf2[j];
if(h1 == h2) {
lookup1 = 0;
if(i > 0 && j > 0) {
lookup1 = last_row[j - 1];
}
this_row[j] = 1 + lookup1;
}
else {
lookup1 = 0;
lookup2 = 0;
if(i > 0) {
lookup1 = last_row[j];
}
if(j > 0) {
lookup2 = this_row[j - 1];
}
this_row[j] = (lookup1 > lookup2) ? lookup1 : lookup2;
}
}
swap = this_row;
this_row = last_row;
last_row = swap;
}
result = last_row[buflen2 - 1];
free(this_row);
free(last_row);
return result;
}
Answer:
I see no reason to mess with errno. It is quite a fragile facility by itself.
Reallocation by doubling the size is a sound strategy. Still, it is just a strategy, and shouldn't be hardcoded.
Is there a reason to treat leading whitespace (opt_trim) differently from the embedded whitespace (opt_ignore_whitespace)? In any case, I'd delegate handling these options to a line reading routine, rather than handling them in a line hashing one.
Using sizeof is a very good practice. On the other hand, using sizeof(hash_t) is not: changing the type of buffer would require much editing of an implementation. Such chores are easily avoided by using sizeof(**buffer) instead.
I didn't really understand the algorithm. A reference would be helpful.
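For other readers: the question's lcs_length() implements the classic two-row longest-common-subsequence dynamic program. A compact Python sketch of the same idea:

```python
# Two-row LCS length DP: O(len(a)*len(b)) time, O(min) memory, mirroring the
# row-swapping in the C code above.

def lcs_length(a, b):
    if not a or not b:
        return 0
    if len(b) > len(a):
        a, b = b, a                      # keep the smaller sequence as b
    last = [0] * (len(b) + 1)
    for x in a:
        this = [0] * (len(b) + 1)
        for j, y in enumerate(b, start=1):
            if x == y:
                this[j] = last[j - 1] + 1
            else:
                this[j] = max(last[j], this[j - 1])
        last = this
    return last[-1]

print(lcs_length("ABCBDAB", "BDCABA"))  # 4
```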
"domain": "codereview.stackexchange",
"id": 11726,
"tags": "algorithm, c, file, edit-distance, hashcode"
} |
Design a sampling process to select an element with probability proportional to its appear probability in a simulation | Question: We are given a black box $A$ that can do a simulation. Each time running box A gives a sample $S \in 2^X$ where $X$ is a finite ground set.
Let $\Pr[x]$ be the probability that $x \in X$ appears in the sample $S$.
We try to design a sampling process $B$ which can produce an element in $X$ randomly.
Let $\Pr_1[x]$ be the probability that $x$ is produced by $B$.
We need that $\Pr_1[y]=\frac{\Pr[y]}{\sum_{x \in X} \Pr[x]}$ for each $y \in X$. An approximately correct algorithm would be OK too, i.e., $\Pr_1[y] \approx\frac{\Pr[y]}{\sum_{x \in X} \Pr[x]}$.
How to design such a sampling process $B$ by using the black box $A$?
First try:
1 run A once and obtain a sample $S$.
2 select an element from $S$ uniformly at random.
3 return $x$.
Intuition: First, the probability that $x \in S$ is $\Pr[x]$. In the second step, each element in $S$ is selected with probability $1/|S|$. The expected value of $|S|$ is actually $\sum_{x \in X} \Pr[x]$. At first glance, the probability that $y$ is returned seems to be $\frac{\Pr[y]}{\sum_{x \in X} \Pr[x]}$. However, the event $x \in S$ and the size $|S|$ are not independent.
Counter Example:
Suppose $X=\{1,2\}$ and the distribution associated with $A$ is $\Pr[\{1\}]=0.5$ and $\Pr[\{1,2\}]=0.5$. Then $\Pr_1[1]=0.5+0.5*0.5=0.75$. However, $\Pr[1]/(\Pr[1]+\Pr[2])=1/(1+0.5)=2/3$.
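The bias of the first try can be computed exactly (exact fractions, with the distribution from the counterexample):

```python
from fractions import Fraction

# First-try scheme: draw S, then pick uniformly from S.
# For Pr[{1}] = Pr[{1,2}] = 1/2, this yields Pr_1[1] = 3/4, not 2/3.

dist = {frozenset({1}): Fraction(1, 2), frozenset({1, 2}): Fraction(1, 2)}

pr1 = {x: Fraction(0) for x in (1, 2)}
for S, p in dist.items():
    for x in S:
        pr1[x] += p / len(S)

print(pr1[1], pr1[2])  # 3/4 1/4
```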
Any help is appreciated.
Answer: This can be done efficiently if the size of the samples $S$ is not too large. Let $m$ denote the maximum possible size of $S$. Then the following procedure outputs exactly the correct distribution:
Draw a sample $S$ (using the black box $A$). With probability $|S|/m$, keep $S$ and go to step 2. Otherwise, go back to step 1 and draw a new sample.
Choose an $x$ uniformly at random from $S$. Output $x$.
You can verify that this yields the correct distribution. It requires at most $O(m)$ samples (possibly less, depending on the distribution of the size of $S$).
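As a sanity check, here is a minimal simulation of the rejection procedure on the counterexample above (a sketch only; `box_A` is a stand-in for the black box, and $m = 2$ is the maximum sample size):

```python
import random

def box_A(rng):
    # Toy black box matching the counterexample:
    # outputs {1} or {1, 2}, each with probability 1/2.
    return {1} if rng.random() < 0.5 else {1, 2}

def procedure_B(A, m, rng):
    """Rejection sampling: draw S, keep it with probability |S|/m,
    then return a uniform element of the kept S."""
    while True:
        S = A(rng)
        if rng.random() < len(S) / m:
            return rng.choice(sorted(S))

rng = random.Random(0)
n = 100_000
counts = {1: 0, 2: 0}
for _ in range(n):
    counts[procedure_B(box_A, m=2, rng=rng)] += 1
# Target distribution: Pr_1[1] = 2/3, Pr_1[2] = 1/3
print(counts[1] / n, counts[2] / n)
```

With 100,000 draws the empirical frequency of element 1 lands close to $2/3$, as the analysis predicts.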
On the other hand, if the samples can be very large, then approximating the distribution accurately may require many samples. Consider the following situation. Randomly choose a subset $X_0$ of $X$ of size $|X|/2$, and imagine a black box $A$ that works as follows: with probability $1/s$, it outputs the set $X_0$; otherwise, it outputs a single element chosen uniformly at random from $X$ (i.e., a random singleton set). Then if you draw $s$ samples, you have about a $1-1/e \approx 0.63$ probability of hitting the large set $X_0$. Moreover, if you don't hit the large set, the probability distribution of your procedure will be off by a total variation distance of $1/s$ (the probability for each element will be off by an additive error of $1/(s|X|)$). So, if you want a distribution that is close to correct in total variation distance, you'll need many samples. If you want an additive error of $\epsilon$ with probability at least $1-\delta$, you might need something like $O(1/\epsilon)$ samples (I'm not sure about the dependence on $\delta$).
"domain": "cstheory.stackexchange",
"id": 4459,
"tags": "st.statistics, monte-carlo, lts-simulations"
} |
Physical Significance of $U$ (Internal Energy), $H$ (Enthalpy), $F$ (Free Energy) and $G$ (Gibbs Free Energy)? | Question: I know their mathematical definitions and how these terms are interrelated (mathematically), but I fail to understand the physical meaning of all but one, which is INTERNAL ENERGY.
It seems implausible to me that these are just mathematical terms whose only purpose is the following:
If $T, V, N$ are known
we use $F=F(T, V, N) $
where $F$: Free Energy or Helmholtz Free Energy
If $T, P, N$ are known
we use $G=G(T, P, N)$
where $G$: Gibbs Free Energy
If $S, P, N$ are known
we use $H=H(S, P, N)$
where $H$: Enthalpy
and that's all. They have no physical significance?
What I know of $U$ (Internal Energy) is that it is a measure of the kinetic energy of the system's molecules and hence also of the system's temperature. The greater the molecular K.E., the more heat energy is produced by molecular collisions and hence the higher the temperature.
I am expecting similar physical explanations to other thermodynamic variables which I couldn't find even on other stack exchange threads!
Answer: Formally, those quantities -- Helmholtz free energy $F(T, V, N)$, Gibbs free energy $G(T, P, N)$, and enthalpy $H(S, P, N)$ -- are Legendre transforms of the internal energy $U(S, V, N)$. This serves the purpose of treating a thermodynamic system with different quantities as the independent variables, as you have mentioned. The physical significance of those thermodynamic potentials, then, has to do with the various sorts of constraints one can put on a system, and how these constraints influence the equilibrium state.
First, let's consider the one you take as the most physically meaningful, the internal energy. When a system is thermally insulated -- that is, cannot exchange heat with its surroundings, and so loosely speaking its entropy should stay constant -- we know the equilibrium state attained by the system is the state of minimum internal energy (that claim is equivalent to the most widespread one that says that in an isolated system, which can't exchange energy of any kind with its surroundings, the equilibrium state maximizes the entropy). So the internal energy really does seem to be what one expects it to be: it is the quantity that is extremized by a system which cannot exchange heat with its surroundings, just like in classical mechanics the potential energy is extremized in the equilibrium state. The claim that the internal energy is a measure of kinetic energy and thus temperature is oversimplified, and really applies only to the classical ideal gas, where you neglect all interactions between the constituents of the gas. In a more general description, essentially the internal energy is really the total energy (accounting for kinetic and potential energy) stored in the system.
Now, imagine that instead of leaving the system thermally insulated -- that is, keeping its entropy constant --, you put it in contact with a thermal bath that holds its temperature fixed. Now, neither internal energy nor entropy are held constant in the system; the system will then tend to reach a physical state in which it extremizes a quantity that brings together both the entropy and the internal energy, and this happens to be exactly the Helmholtz free energy.
The Helmholtz free energy may also be interpreted as a measure of the "useful" energy that can be extracted from a system at constant temperature -- the work you can extract from a system at constant temperature can only be less than or equal to (minus) the change in the Helmholtz free energy. You can then try and interpret the formula $F = U - TS$ as an expression of the total energy of the system ($U$), minus the "useless" energy stored in a disordered fashion contributing to the entropy ($TS$). The Helmholtz free energy plays a central role in the canonical ensemble, and also in Landau's theory of second order phase transitions; you may want to take a look at these topics too.
Imagine now that you put your system in contact with a pressure reservoir (the analogous of a thermal bath, if you want to keep the pressure in your system constant). Essentially what that means is that the system now can exchange mechanical work with its surroundings, and that has the effect of influencing how the systems reaches its equilibrium state. Grossly speaking, in that case what you want to extremize is the internal energy of the system, plus the amount of work that had to be done against the pressure reservoir in order to put the system into that configuration, and that is exactly what is accounted for in the enthalpy $H = U + PV$; if you want a quicker explanation, check the answer mentioned above What exactly is enthalpy? .
As for the Gibbs free energy, following what has been written above, it is especially useful when the system can exchange both heat and work with its surroundings, which is to be considered now as both a pressure and a temperature reservoir. Now, you are taking into account all of the effects described above: the total internal energy $U$, the "useless" energy lost to the disorder of the system $-TS$, and the work done against the pressure reservoir $PV$. I'd really recommend you check the answers in Gibbs free energy intuition for a deeper description of that.
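For reference, the four potentials discussed here are related by Legendre transforms, and their differentials make explicit which variables are "natural" for each one. This is a standard summary, stated for a single-component system:

```latex
\begin{aligned}
U:&\quad dU = T\,dS - P\,dV + \mu\,dN \\
H = U + PV:&\quad dH = T\,dS + V\,dP + \mu\,dN \\
F = U - TS:&\quad dF = -S\,dT - P\,dV + \mu\,dN \\
G = U - TS + PV:&\quad dG = -S\,dT + V\,dP + \mu\,dN
\end{aligned}
```

Reading off the differentials recovers exactly the pairings in the question: $H = H(S, P, N)$, $F = F(T, V, N)$, and $G = G(T, P, N)$.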
"domain": "physics.stackexchange",
"id": 53018,
"tags": "thermodynamics, energy, statistical-mechanics, potential"
} |
Unable to publish multiple static transformations using tf | Question:
I have 10 static transformations. Each one is saved in a separate yaml file. I want to publish these transformations using StaticTransformBroadcaster. Please see the code snippet below-
def main():
    # create tf static transform broadcaster
    tf_broadcaster = tf2_ros.StaticTransformBroadcaster()
    for i in range(10):
        calibFile = str(i) + '_calibration.yaml'
        with open(calibFile, 'r') as f:
            params = yaml.load(f)
        static_transformStamped = TransformStamped()
        static_transformStamped.header.stamp = rospy.Time.now()
        static_transformStamped.header.frame_id = params['parent']
        static_transformStamped.child_frame_id = params['child']
        static_transformStamped.transform.translation = params['trans']
        static_transformStamped.transform.rotation = params['rot']
        # publish static transformation
        tf_broadcaster.sendTransform(static_transformStamped)
    # infinite loop
    rospy.spin()
Iteratively I am reading each transform and then publishing it using tf. However, the above code doesn't work: it publishes only the last transform.
My objective is to publish all static transformations. I am using ROS Indigo on Ubuntu 14.04 LTS OS.
Originally posted by ravijoshi on ROS Answers with karma: 1744 on 2018-04-04
Post score: 2
Answer:
I believe you're running into a peculiarity / TF2 specific characteristic of static transform publishers. From #q261815:
Since it's a latched topic it's recommended to only have one Static Transform Publisher per process.
You can resolve this by sending multiple transforms at once via a single broadcaster.
br.sendTransform([t,t2])
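As a rough illustration of why publishing one transform per loop iteration loses the earlier ones, here is a toy model of a latched topic in plain Python (not ROS code): a late subscriber only ever receives the most recently latched message.

```python
# Toy model of a latched topic: only the last published message is
# retained, which is what a late subscriber will receive.
class LatchedStaticBroadcaster:
    def __init__(self):
        self.latched = None

    def send_transform(self, transforms):
        # Each publish overwrites whatever was latched before.
        self.latched = list(transforms)

broadcaster = LatchedStaticBroadcaster()

# One transform per loop iteration: only 'tf9' survives.
for i in range(10):
    broadcaster.send_transform(["tf%d" % i])
assert broadcaster.latched == ["tf9"]

# All transforms in a single call: all ten survive.
broadcaster.send_transform(["tf%d" % i for i in range(10)])
assert len(broadcaster.latched) == 10
```

The same reasoning applies to the question's loop: collect the ten `TransformStamped` messages in a list and make a single `sendTransform(list)` call.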
Originally posted by gvdhoorn with karma: 86574 on 2018-04-04
This answer was ACCEPTED on the original site
Post score: 6
Original comments
Comment by ravijoshi on 2018-04-04:
Thanks a lot. It worked like a charm. | {
"domain": "robotics.stackexchange",
"id": 30534,
"tags": "ros-indigo"
} |
Different electric fields | Question: What is the difference between an $electrostatic$ and a $non-electrostatic$ electric field?
Answer: An electrostatic field is an electric field that is produced from a non-changing charge distribution with no current (i.e. $\mathbf J=0$ at all points in space for all time). Essentially for electrostatics we just have stationary charges and constant electric fields.
If this is not the case, then the resulting electric fields are not considered to be part of electrostatics, e.g. fields from moving charges, EM waves, etc.
"domain": "physics.stackexchange",
"id": 62167,
"tags": "electricity, magnetic-fields, electric-fields, potential, electromagnetic-induction"
} |
Can you identify this seedpod (Malvaceae)? | Question: I collected this seedpod from a plant I was pretty sure belongs to the mallow family (Malvaceae). Location = Central Europe. Unfortunately I lost my picture of the whole plant. Can you help me to identify the genus/species?
Answer: Lavatera trimestris (the flower became a fruit, which dried out and split open, and the seeds were dispersed)
"domain": "biology.stackexchange",
"id": 12052,
"tags": "species-identification, botany, seeds"
} |
What is ConstPtr&? | Question:
Hey All, I'm still new to ROS and C++. I'm having trouble understanding what the ConstPtr& does when writing the callback function for a simple subscriber:
void chatterCallback(const std_msgs::String::ConstPtr& msg)
{ROS_INFO("I heard: [%s]", msg->data.c_str());}
Wouldn't the code work with just:
void chatterCallback(const std_msgs::String msg)
{ROS_INFO("I heard: [%s]", msg);}
Originally posted by OmoNaija on ROS Answers with karma: 213 on 2015-07-03
Post score: 21
Answer:
When messages are automatically generated into C++ code, there are several typedefs defined. One of them is ::Ptr, which is typedef-ed to be a boost::shared_ptr<MSG>, and another is ::ConstPtr which is boost::shared_ptr<MSG const>.
By passing a const pointer into the callback, we avoid doing a copy. While this might not make much difference for std_msgs::String, it can make a huge difference for sensor_msgs::PointCloud2.
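As a loose analogy in Python (an illustration only, not ROS code; Python always passes object references), passing a reference hands the callback the very same message object, while pass-by-value semantics would require copying the whole payload first:

```python
import copy

big_msg = {"data": list(range(100_000))}  # stand-in for a large message

def callback_by_ref(msg):
    # No copy is made: the callback sees the very same object.
    return id(msg)

def callback_by_value(msg):
    # Simulating pass-by-value: deep-copy the entire payload first.
    return id(copy.deepcopy(msg))

assert callback_by_ref(big_msg) == id(big_msg)    # same object, no copy
assert callback_by_value(big_msg) != id(big_msg)  # a fresh copy was made
```

For a small `std_msgs::String` the copy is cheap; for a multi-megabyte point cloud the difference is substantial.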
Originally posted by fergs with karma: 13902 on 2015-07-04
This answer was ACCEPTED on the original site
Post score: 37
Original comments
Comment by OmoNaija on 2015-07-06:
Thank You!
Comment by feixiao on 2016-01-21:
thanks a lot for your help
Comment by wy3 on 2017-07-29:
but what does & means? If msg is already a pointer, why do we take the address of msg? Or does that & means passing by reference?
Comment by ksirks on 2017-09-15:
@wy3: the ampersand (&) means pass by reference as you said.
Comment by eRCaGuy on 2020-06-10:
Note: today I think boost::shared_ptr<> is replaced in ROS by std::shared_ptr<>, since this is now part of the C++ Standard Library.
Comment by eRCaGuy on 2020-06-10:
@wy3, the & is to pass the shared_ptr object itself by reference. The shared_ptr object is a class which manages the object it wraps and "points to", and the & is to pass the shared_ptr object by reference. So, we are passing a message by a reference to a smart pointer which points to it--yeah, it's kind of a double-layered approach. In C you'd just pass the message by pointer and be done, but the added benefit of using a shared pointer, which is a type of "smart pointer", is that it automatically manages the storage duration of the memory (for the message) it points to, meaning you never have to manually free or delete this dynamically-allocated memory block for the message because it's done automatically when all shared pointers to that memory are gone or out of scope.
Comment by eRCaGuy on 2020-06-10:
I just added my above comments, and more, into my own answer here. I realized my comments were becoming more than just comments, so I made them into an answer.
Comment by fergs on 2020-08-18:
ROS1 still very much uses boost in roscpp - changing to std::shared_ptr would break the world. ROS2 does drop boost for std::shared_ptr in rclcpp.
Comment by ia on 2021-04-11:
@ahendrix but why do some message type look like sensor_msgs::ImageConstPtr and others need colons like: automotive_platform_msgs::AdaptiveCruiseControlCommand::ConstPtr? For instance, without those double colons :: right before ConstPtr I get a very hard to grok error message too long to post here.
Comment by eRCaGuy on 2021-04-12:
@ahendrix, I don't see the answer yet, but here's a question I asked to try to start to find out: https://stackoverflow.com/questions/67053471/what-is-a-void-stdallocator-ie-stdallocatorvoid. | {
"domain": "robotics.stackexchange",
"id": 22079,
"tags": "ros, callback, function"
} |
How to solve 'boost/graph/boykov_kolmogorov_max_flow.hpp: No such file or directory'? | Question:
I download pcl for ros:
svn co http://svn.pointclouds.org/ros/trunk/perception_pcl_electric_unstable
And I rosmake,it shows the error:
ira@ira-K42JP:~/code/ros/sam_pcl/perception_pcl_electric_unstable$ rosmake
[ rosmake ] No package specified. Building stack ['perception_pcl_electric_unstable']
[ rosmake ] Packages requested are: ['perception_pcl_electric_unstable']
[ rosmake ] Logging to directory/home/ira/.ros/rosmake/rosmake_output-20120708-170257
[ rosmake ] Expanded args ['perception_pcl_electric_unstable'] to:
['flann', 'cminpack', 'pcl_ros', 'pcl']
[ rosmake ] Checking rosdeps compliance for packages perception_pcl_electric_unstable. This may take a few seconds.
[ rosmake ] rosdep check failed to find system dependencies: libvtk-qt
[rosmake-0] Starting >>> cpp_common [ make ]
[rosmake-0] Finished <<< cpp_common ROS_NOBUILD in package cpp_common
[rosmake-1] Starting >>> roslib [ make ]
[rosmake-1] Finished <<< roslib ROS_NOBUILD in package roslib
[rosmake-2] Starting >>> rosbuild [ make ]
[rosmake-2] Finished <<< rosbuild ROS_NOBUILD in package rosbuild
No Makefile in package rosbuild
[rosmake-3] Starting >>> smclib [ make ]
[rosmake-0] Starting >>> rostime [ make ]
[rosmake-1] Starting >>> std_msgs [ make ]
[rosmake-2] Starting >>> roslang [ make ]
[rosmake-3] Finished <<< smclib ROS_NOBUILD in package smclib
[rosmake-0] Finished <<< rostime ROS_NOBUILD in package rostime
[rosmake-1] Finished <<< std_msgs ROS_NOBUILD in package std_msgs
[rosmake-1] Starting >>> rosgraph_msgs [ make ]
[rosmake-1] Finished <<< rosgraph_msgs ROS_NOBUILD in package rosgraph_msgs
[rosmake-3] Starting >>> rosclean [ make ]
[rosmake-3] Finished <<< rosclean ROS_NOBUILD in package rosclean
[rosmake-2] Finished <<< roslang ROS_NOBUILD in package roslang
No Makefile in package roslang
[rosmake-0] Starting >>> rosconsole [ make ]
[rosmake-1] Starting >>> rosgraph [ make ]
[rosmake-2] Starting >>> rospy [ make ]
[rosmake-3] Starting >>> rosparam [ make ]
[rosmake-0] Finished <<< rosconsole ROS_NOBUILD in package rosconsole
[rosmake-1] Finished <<< rosgraph ROS_NOBUILD in package rosgraph
[rosmake-1] Starting >>> rosmaster [ make ]
[rosmake-0] Starting >>> roscpp_traits [ make ]
[rosmake-2] Finished <<< rospy ROS_NOBUILD in package rospy
[rosmake-3] Finished <<< rosparam ROS_NOBUILD in package rosparam
[rosmake-1] Finished <<< rosmaster ROS_NOBUILD in package rosmaster
[rosmake-0] Finished <<< roscpp_traits ROS_NOBUILD in package roscpp_traits
[rosmake-2] Starting >>> xmlrpcpp [ make ]
[rosmake-3] Starting >>> rosunit [ make ]
[rosmake-0] Starting >>> roscpp_serialization [ make ]
[rosmake-2] Finished <<< xmlrpcpp ROS_NOBUILD in package xmlrpcpp
[rosmake-1] Starting >>> pluginlib [ make ]
[rosmake-3] Finished <<< rosunit ROS_NOBUILD in package rosunit
[rosmake-2] Starting >>> bond [ make ]
[rosmake-0] Finished <<< roscpp_serialization ROS_NOBUILD in package roscpp_serialization
[rosmake-2] Finished <<< bond ROS_NOBUILD in package bond
[rosmake-1] Finished <<< pluginlib ROS_NOBUILD in package pluginlib
[rosmake-3] Starting >>> cminpack [ make ]
[rosmake-0] Starting >>> roscpp [ make ]
[rosmake-0] Finished <<< roscpp ROS_NOBUILD in package roscpp
[rosmake-2] Starting >>> flann [ make ]
[rosmake-3] Finished <<< cminpack [PASS] [ 0.02 seconds ]
[rosmake-1] Starting >>> common_rosdeps [ make ]
[rosmake-0] Starting >>> rosout [ make ]
[rosmake-1] Finished <<< common_rosdeps ROS_NOBUILD in package common_rosdeps
[rosmake-3] Starting >>> bondcpp [ make ]
[rosmake-0] Finished <<< rosout ROS_NOBUILD in package rosout
[rosmake-0] Starting >>> roslaunch [ make ]
[rosmake-3] Finished <<< bondcpp ROS_NOBUILD in package bondcpp
[rosmake-3] Starting >>> nodelet [ make ]
[rosmake-3] Finished <<< nodelet ROS_NOBUILD in package nodelet
[rosmake-0] Finished <<< roslaunch ROS_NOBUILD in package roslaunch
No Makefile in package roslaunch
[rosmake-2] Finished <<< flann [PASS] [ 0.02 seconds ]
[rosmake-3] Starting >>> eigen [ make ]
[rosmake-0] Starting >>> rostest [ make ]
[rosmake-0] Finished <<< rostest ROS_NOBUILD in package rostest
[rosmake-3] Finished <<< eigen ROS_NOBUILD in package eigen
[rosmake-2] Starting >>> test_roslaunch [ make ]
[rosmake-2] Finished <<< test_roslaunch ROS_NOBUILD in package test_roslaunch
[rosmake-2] Starting >>> test_rosgraph [ make ]
[rosmake-3] Starting >>> message_filters [ make ]
[rosmake-1] Starting >>> bullet [ make ]
[rosmake-3] Finished <<< message_filters ROS_NOBUILD in package message_filters
[rosmake-0] Starting >>> topic_tools [ make ]
[rosmake-3] Starting >>> nodelet_topic_tools [ make ]
[rosmake-2] Finished <<< test_rosgraph ROS_NOBUILD in package test_rosgraph
[rosmake-3] Finished <<< nodelet_topic_tools ROS_NOBUILD in package nodelet_topic_tools
[rosmake-1] Finished <<< bullet ROS_NOBUILD in package bullet
[rosmake-0] Finished <<< topic_tools ROS_NOBUILD in package topic_tools
[rosmake-3] Starting >>> angles [ make ]
[rosmake-3] Finished <<< angles ROS_NOBUILD in package angles
[rosmake-3] Starting >>> rosnode [ make ]
[rosmake-3] Finished <<< rosnode ROS_NOBUILD in package rosnode
[rosmake-3] Starting >>> timestamp_tools [ make ]
[rosmake-3] Finished <<< timestamp_tools ROS_NOBUILD in package timestamp_tools
[rosmake-0] Starting >>> rosbag [ make ]
[rosmake-0] Finished <<< rosbag ROS_NOBUILD in package rosbag
[rosmake-2] Starting >>> test_nodelet [ make ]
[rosmake-2] Finished <<< test_nodelet ROS_NOBUILD in package test_nodelet
[rosmake-3] Starting >>> test_ros [ make ]
[rosmake-2] Starting >>> rosmsg [ make ]
[rosmake-2] Finished <<< rosmsg ROS_NOBUILD in package rosmsg
No Makefile in package rosmsg
[rosmake-1] Starting >>> rosbagmigration [ make ]
[rosmake-3] Finished <<< test_ros ROS_NOBUILD in package test_ros
[rosmake-1] Finished <<< rosbagmigration ROS_NOBUILD in package rosbagmigration
No Makefile in package rosbagmigration
[rosmake-0] Starting >>> test_crosspackage [ make ]
[rosmake-0] Finished <<< test_crosspackage ROS_NOBUILD in package test_crosspackage
[rosmake-3] Starting >>> geometry_msgs [ make ]
[rosmake-2] Starting >>> rostopic [ make ]
[rosmake-2] Finished <<< rostopic ROS_NOBUILD in package rostopic
[rosmake-3] Finished <<< geometry_msgs ROS_NOBUILD in package geometry_msgs
[rosmake-0] Starting >>> test_rosnode [ make ]
[rosmake-0] Finished <<< test_rosnode ROS_NOBUILD in package test_rosnode
[rosmake-1] Starting >>> test_rosbag [ make ]
[rosmake-0] Starting >>> test_topic_tools [ make ]
[rosmake-1] Finished <<< test_rosbag ROS_NOBUILD in package test_rosbag
[rosmake-3] Starting >>> sensor_msgs [ make ]
[rosmake-3] Finished <<< sensor_msgs ROS_NOBUILD in package sensor_msgs
[rosmake-2] Starting >>> rosservice [ make ]
[rosmake-0] Finished <<< test_topic_tools ROS_NOBUILD in package test_topic_tools
[rosmake-2] Finished <<< rosservice ROS_NOBUILD in package rosservice
[rosmake-1] Starting >>> pcl [ make ]
[ rosmake ] Last 40 linessservice: 19.1 sec ] [ pcl: 19.1 sec ] [ 2 Active 52/102 Complete ]
{-------------------------------------------------------------------------------
[ 14%] Built target pcl_kdtree
[rosmake-0] Starting >>> diagnostic_msgs [ make ]
make[3]: Entering directory `/home/ira/code/ros/sam_pcl/perception_pcl_electric_unstable/pcl/build/pcl_trunk/build'
[rosmake-0] Finished <<< diagnostic_msgs ROS_NOBUILD in package diagnostic_msgs
make[3]: Leaving directory `/home/ira/code/ros/sam_pcl/perception_pcl_electric_unstable/pcl/build/pcl_trunk/build'
[ 15%] Built target pcl_search
make[3]: Entering directory `/home/ira/code/ros/sam_pcl/perception_pcl_electric_unstable/pcl/build/pcl_trunk/build'
make[3]: Leaving directory `/home/ira/code/ros/sam_pcl/perception_pcl_electric_unstable/pcl/build/pcl_trunk/build'
[ 26%] Built target pcl_features
make[3]: Entering directory `/home/ira/code/ros/sam_pcl/perception_pcl_electric_unstable/pcl/build/pcl_trunk/build'
make[3]: Leaving directory `/home/ira/code/ros/sam_pcl/perception_pcl_electric_unstable/pcl/build/pcl_trunk/build'
[ 32%] Built target pcl_sample_consensus
make[3]: Entering directory `/home/ira/code/ros/sam_pcl/perception_pcl_electric_unstable/pcl/build/pcl_trunk/build'
make[3]: Leaving directory `/home/ira/code/ros/sam_pcl/perception_pcl_electric_unstable/pcl/build/pcl_trunk/build'
[ 37%] Built target pcl_filters
make[3]: Entering directory `/home/ira/code/ros/sam_pcl/perception_pcl_electric_unstable/pcl/build/pcl_trunk/build'
make[3]: Leaving directory `/home/ira/code/ros/sam_pcl/perception_pcl_electric_unstable/pcl/build/pcl_trunk/build'
[ 39%] Built target pcl_keypoints
make[3]: Entering directory `/home/ira/code/ros/sam_pcl/perception_pcl_electric_unstable/pcl/build/pcl_trunk/build'
make[3]: Leaving directory `/home/ira/code/ros/sam_pcl/perception_pcl_electric_unstable/pcl/build/pcl_trunk/build'
[ 39%] Built target pcl_geometry
make[3]: Entering directory `/home/ira/code/ros/sam_pcl/perception_pcl_electric_unstable/pcl/build/pcl_trunk/build'
make[3]: Leaving directory `/home/ira/code/ros/sam_pcl/perception_pcl_electric_unstable/pcl/build/pcl_trunk/build'
make[3]: Entering directory `/home/ira/code/ros/sam_pcl/perception_pcl_electric_unstable/pcl/build/pcl_trunk/build'
[ 40%] Building CXX object segmentation/CMakeFiles/pcl_segmentation.dir/src/min_cut_segmentation.cpp.o
In file included from /usr/include/c++/4.4/backward/hash_set:60,
from /usr/include/boost/pending/container_traits.hpp:23,
from /usr/include/boost/graph/detail/adjacency_list.hpp:31,
from /usr/include/boost/graph/adjacency_list.hpp:324,
from /home/ira/code/ros/sam_pcl/perception_pcl_electric_unstable/pcl/build/pcl_trunk/segmentation/include/pcl/segmentation/min_cut_segmentation.h:45,
from /home/ira/code/ros/sam_pcl/perception_pcl_electric_unstable/pcl/build/pcl_trunk/segmentation/src/min_cut_segmentation.cpp:44:
/usr/include/c++/4.4/backward/backward_warning.h:28: warning: #warning This file includes at least one deprecated or antiquated header which may be removed without further notice at a future date. Please use a non-deprecated interface with equivalent functionality instead. For a listing of replacement headers and interfaces, consult the file backward_warning.h. To disable this warning use -Wno-deprecated.
In file included from /home/ira/code/ros/sam_pcl/perception_pcl_electric_unstable/pcl/build/pcl_trunk/segmentation/src/min_cut_segmentation.cpp:45:
/home/ira/code/ros/sam_pcl/perception_pcl_electric_unstable/pcl/build/pcl_trunk/segmentation/include/pcl/segmentation/impl/min_cut_segmentation.hpp:45: fatal error: boost/graph/boykov_kolmogorov_max_flow.hpp: No such file or directory
compilation terminated.
make[3]: *** [segmentation/CMakeFiles/pcl_segmentation.dir/src/min_cut_segmentation.cpp.o] Error 1
make[3]: Leaving directory `/home/ira/code/ros/sam_pcl/perception_pcl_electric_unstable/pcl/build/pcl_trunk/build'
make[2]: *** [segmentation/CMakeFiles/pcl_segmentation.dir/all] Error 2
make[2]: Leaving directory `/home/ira/code/ros/sam_pcl/perception_pcl_electric_unstable/pcl/build/pcl_trunk/build'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/home/ira/code/ros/sam_pcl/perception_pcl_electric_unstable/pcl/build/pcl_trunk/build'
-------------------------------------------------------------------------------}
[ rosmake ] Output from build of package pcl written to:
[ rosmake ] /home/ira/.ros/rosmake/rosmake_output-20120708-170257/pcl/build_output.log
[rosmake-1] Finished <<< pcl [FAIL] [ 19.13 seconds ]
[rosmake-3] Starting >>> rosboost_cfg [ make ]
[ rosmake ] Halting due to failure in package pcl.
[ rosmake ] Waiting for other threads to complete.
[rosmake-2] Starting >>> roswtf [ make ]
[rosmake-3] Finished <<< rosboost_cfg ROS_NOBUILD in package rosboost_cfg
No Makefile in package rosboost_cfg
[rosmake-2] Finished <<< roswtf ROS_NOBUILD in package roswtf
[rosmake-0] Starting >>> test_rospack [ make ]
[rosmake-0] Finished <<< test_rospack ROS_NOBUILD in package test_rospack
[ rosmake ] Results:
[ rosmake ] Built 58 packages with 1 failures.
[ rosmake ] Summary output to directory
[ rosmake ] /home/ira/.ros/rosmake/rosmake_output-20120708-170257
[ rosmake ] WARNING: Rosdep did not detect the following system dependencies as installed: libvtk-qt Consider using --rosdep-install option or `rosdep install flann cminpack pcl_ros pcl`
ira@ira-K42JP:~/code/ros/sam_pcl/perception_pcl_electric_unstable$
How to solve it?
Thank you~
Originally posted by sam on ROS Answers with karma: 2570 on 2012-07-07
Post score: 0
Answer:
This is because PCL doesn't do a good job of keeping track of which version of boost you're using; it assumes you have the latest.
That particular thing was added to boost fairly recently, and your version of boost is too old.
You'll need a newer linux (say, Ubuntu Precise), or to install a newer boost.
Originally posted by Mac with karma: 4119 on 2012-08-09
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by sam on 2012-08-09:
How to update boost on an older version of ubuntu? Thank you~
Comment by Mac on 2012-08-09:
Build it from source; see boost.org.
Comment by sam on 2012-08-10:
Is there any way to build from apt? Thank you~
Comment by Mac on 2012-08-11:
You could look for a PPA; figuring that out is better-suited to an Ubuntu forum of some sort. (My machines run precise, so haven't had to figure this out for myself.) | {
"domain": "robotics.stackexchange",
"id": 10090,
"tags": "pcl, rosmake, pcl-ros"
} |
Is it possible to subscribe to an action feedback? | Question:
Hi, I was wondering whether it is possible to subscribe to an action's feedback from another node. I tried to subscribe to a MoveIt 2 action feedback topic (/scaled_joint_trajectory_controller/follow_joint_trajectory/_action/feedback) from a node.
Before run the node:
ros2 topic info /scaled_joint_trajectory_controller/follow_joint_trajectory/_action/feedback
Type: control_msgs/action/FollowJointTrajectory_FeedbackMessage
Publisher count: 1
Subscription count: 2
After run the node
ros2 topic info /scaled_joint_trajectory_controller/follow_joint_trajectory/_action/feedback
Type: [‘control_msgs/action/FollowJointTrajectory_Feedback’, ‘control_msgs/action/FollowJointTrajectory_FeedbackMessage’]
Publisher count: 1
Subscription count: 3
But when the action feedback starts to arrive, the callback function in the node is never triggered. Also, when I check the node info:
ros2 node info /anlys_node
/anlys_node
Subscribers:
Publishers:
/parameter_events: rcl_interfaces/msg/ParameterEvent
/rosout: rcl_interfaces/msg/Log
Service Servers:
/anlys_node/describe_parameters: rcl_interfaces/srv/DescribeParameters
/anlys_node/get_parameter_types: rcl_interfaces/srv/GetParameterTypes
/anlys_node/get_parameters: rcl_interfaces/srv/GetParameters
/anlys_node/list_parameters: rcl_interfaces/srv/ListParameters
/anlys_node/set_parameters: rcl_interfaces/srv/SetParameters
/anlys_node/set_parameters_atomically: rcl_interfaces/srv/SetParametersAtomically
Service Clients:
Action Servers:
Action Clients:
/scaled_joint_trajectory_controller/follow_joint_trajectory: control_msgs/action/FollowJointTrajectory_Feedback
As seen above, the entry was created under Action Clients, not Subscribers.
I don't understand what is going on here. Am I doing something wrong, or is there a bug?
Here is my node script.
#!/usr/bin/env python3
import rclpy
from rclpy.node import Node
import numpy as np
import matplotlib.pyplot as plt
import yaml
from control_msgs.action import FollowJointTrajectory

class PlanAnlys(Node):
    def __init__(self):
        super().__init__('anlys_node')
        self.get_logger().info("Node Init")
        self.create_subscription(FollowJointTrajectory.Feedback,"/scaled_joint_trajectory_controller/follow_joint_trajectory/_action/feedback",self.save_data,10)

    def save_data(self,msg):
        self.get_logger().info("Test")

rclpy.init()
node = PlanAnlys()
rclpy.spin(node)
node.destroy_node()
rclpy.shutdown()
Originally posted by muratcngncr on ROS Answers with karma: 56 on 2022-08-09
Post score: 3
Original comments
Comment by Fetullah Atas on 2022-08-09:
did you do a ros2 topic echo to make sure data is on the topic ?, also be careful with QOS compatibility of publisher and subscriber,
Comment by muratcngncr on 2022-08-10:
Yeah when i send a goal to action server. I can see the feedback topic via ros2 topic echo.
Answer:
The key is in the type. If you show the topic (ros2 topic list --include-hidden-topics -t) you'll probably see that the type of the feedback is not "FollowJointTrajectory.Feedback" but FollowJointTrajectory_FeedbackMessage.
That type is hidden as well, but you should be able to import it as follows:
from control_msgs.action._follow_joint_trajectory import FollowJointTrajectory_FeedbackMessage
This worked for me in Humble
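To illustrate why the original subscription never fires, here is a toy model of type-matched delivery in plain Python (not ROS 2 code): the middleware delivers a message only to subscribers whose declared type matches the type the publisher actually uses on the wire.

```python
# Toy model: delivery happens only on an exact type match between the
# publisher's actual type and the subscriber's declared type.
class Topic:
    def __init__(self):
        self.subscribers = []  # list of (declared_type, callback)

    def subscribe(self, declared_type, callback):
        self.subscribers.append((declared_type, callback))

    def publish(self, actual_type, msg):
        delivered = 0
        for declared_type, callback in self.subscribers:
            if declared_type == actual_type:
                callback(msg)
                delivered += 1
        return delivered

topic = Topic()
received = []

# Subscribing with the "obvious" feedback type never matches...
topic.subscribe("FollowJointTrajectory_Feedback", received.append)
assert topic.publish("FollowJointTrajectory_FeedbackMessage", "fb") == 0

# ...while the hidden wrapped type does, so the callback fires.
topic.subscribe("FollowJointTrajectory_FeedbackMessage", received.append)
assert topic.publish("FollowJointTrajectory_FeedbackMessage", "fb") == 1
assert received == ["fb"]
```

This mirrors the situation in the question: the node declared `FollowJointTrajectory.Feedback`, but the action server publishes `FollowJointTrajectory_FeedbackMessage`, so no messages were ever delivered.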
Originally posted by iwasinnam with karma: 46 on 2023-05-01
This answer was ACCEPTED on the original site
Post score: 3 | {
"domain": "robotics.stackexchange",
"id": 37911,
"tags": "ros, ros2, action, action-client"
} |
Why aren't mammals and reptiles considered amphibians? | Question: We've all heard it: birds descend from dinosaurs, so they're dinosaurs too. But this got me thinking: doesn't this mean that, for instance, all terrestrial vertebrates – including humans – are technically fish? A recent video by MinuteEarth and the Wikipedia article for "Fish" confirmed my shower thought hypothesis.
Interesting. But... all amniotes, i.e. reptiles (and, by extension, birds) and mammals, descend from amphibians, right? If so, then why aren't they considered amphibians too?
Answer: Mammals and reptiles aren't considered amphibians, because amniotes are not hypothesized to descend from Amphibia. That is to say that Amphibia did not evolve into Amniota. They are sister clades (actually Reptiliomorpha in the Tree of Life tree below). | {
"domain": "biology.stackexchange",
"id": 9627,
"tags": "taxonomy, mammals, cladistics"
} |
Decoupling Presenter from "child" Repository | Question: Still pursuing the white rabbit, I had an IPresenter interface implementation featuring this method:
Private Sub ExecuteAddCommand()
Dim orderNumber As String
If Not RequestUserInput(prompt:=GetResourceString("PromptExcludedOrderNumberMessageText"), _
title:=GetResourceString("AddPromptTitle"), _
outResult:=orderNumber, _
default:=0) _
Then
Exit Sub
End If
'tight coupling...
Dim orderRepo As IRepository
Set orderRepo = New OrderHeaderRepository
Dim values As New Dictionary
values.Add "number", orderNumber
Dim orderModel As New SqlResult
orderModel.AddFieldName "Number"
Dim order As SqlResultRow
Set order = orderRepo.NewItem(orderModel, values)
Dim orderId As Long
orderId = orderRepo.FindId(order)
If orderId = 0 Then
MsgBox StringFormat(GetResourceString("InvalidOrderNumberMessageText"), orderNumber), _
vbExclamation, _
GetResourceString("InvalidOrderNumberTitle")
Exit Sub
End If
Dim reason As String
If Not RequestUserInput(prompt:=GetResourceString("AddExcludedOrderMessageText"), _
title:=GetResourceString("AddPromptTitle"), _
outResult:=reason, _
default:=GetResourceString("DefaultExcludedOrderReason")) _
Then
Exit Sub
End If
Repository.Add NewExcludedOrder(orderHeaderId:=orderId, reason:=reason)
IPresenter_ExecuteCommand RefreshCommand
End Sub
The problem is that, out of 12 implementations so far, it was the only one with such tight coupling - for every single other case, it made sense to implement a "child" presenter with its own repository.
The tightly coupled code works, but can't be tested offline, which was the entire point/purpose of going down the rabbit hole with implementing MVP and Repository patterns in VBA. I had to do something about it.
So I added a FindCommand to the CommandType enum, and implemented an OrderHeaderPresenter:
Option Explicit
Private Type tPresenter
MasterId As Long
Repository As IRepository
DetailsPresenter As IPresenter
View As IView
End Type
Private this As tPresenter
Implements IPresenter
Private Function NewOrderHeader(Optional ByVal id As Long = 0, Optional ByVal number As String = vbNullString) As SqlResultRow
Dim result As SqlResultRow
Dim values As New Dictionary
values.Add "id", id
values.Add "number", number
Dim orderModel As New SqlResult
orderModel.AddFieldName "id"
orderModel.AddFieldName "number"
Set result = Repository.NewItem(orderModel, values)
Set NewOrderHeader = result
End Function
Public Property Get Repository() As IRepository
Set Repository = this.Repository
End Property
Public Property Set Repository(ByVal value As IRepository)
Set this.Repository = value
End Property
Private Function IPresenter_CanExecuteCommand(ByVal commandId As CommandType) As Boolean
If commandId = FindCommand Then IPresenter_CanExecuteCommand = True
End Function
Private Property Set IPresenter_DetailsPresenter(ByVal value As IPresenter)
'not implemented
End Property
Private Property Get IPresenter_DetailsPresenter() As IPresenter
'not implemented
End Property
Private Function IPresenter_ExecuteCommand(ByVal commandId As CommandType) As Variant
Select Case commandId
Case CommandType.FindCommand
IPresenter_ExecuteCommand = ExecuteFindCommand
Case Else
'not implemented
End Select
End Function
Private Function ExecuteFindCommand() As Long
Dim orderNumber As String
If Not RequestUserInput(prompt:=GetResourceString("PromptExcludedOrderNumberMessageText"), _
title:=GetResourceString("AddPromptTitle"), _
outResult:=orderNumber, _
default:=0) _
Then
Exit Function
End If
Dim order As SqlResultRow
Set order = NewOrderHeader(number:=orderNumber)
Dim orderId As Long
orderId = Repository.FindId(order)
If orderId = 0 Then
MsgBox StringFormat(GetResourceString("InvalidOrderNumberMessageText"), orderNumber), _
vbExclamation, _
GetResourceString("InvalidOrderNumberTitle")
Exit Function
End If
ExecuteFindCommand = orderId
End Function
Private Property Let IPresenter_MasterId(ByVal value As Long)
this.MasterId = value
End Property
Private Property Get IPresenter_MasterId() As Long
IPresenter_MasterId = this.MasterId
End Property
Private Property Set IPresenter_Repository(ByVal value As IRepository)
Set Repository = value
End Property
Private Property Get IPresenter_Repository() As IRepository
Set IPresenter_Repository = Repository
End Property
Private Sub IPresenter_Show()
'not implemented
End Sub
Private Property Set IPresenter_View(ByVal value As IView)
'not implemented
End Property
Private Property Get IPresenter_View() As IView
'not implemented
End Property
For this to work, I had to change IPresenter.ExecuteCommand from a Sub to a Function returning a Variant, so that commands like Find and Search can return an object or a value.
And that allowed me to refactor my coupled presenter method to this:
Private Sub ExecuteAddCommand()
Dim orderId As Long
orderId = this.DetailsPresenter.ExecuteCommand(FindCommand)
If orderId = 0 Then Exit Sub
Dim reason As String
If Not RequestUserInput(prompt:=GetResourceString("AddExcludedOrderMessageText"), _
title:=GetResourceString("AddPromptTitle"), _
outResult:=reason, _
default:=GetResourceString("DefaultExcludedOrderReason")) _
Then
Exit Sub
End If
Repository.Add NewExcludedOrder(orderHeaderId:=orderId, reason:=reason)
IPresenter_ExecuteCommand RefreshCommand
End Sub
Here's the OrderHeaderRepository implementation, which partly explains the holes in the presenter implementation:
Option Explicit
Private cmd As New SqlCommand
Private Const selectString As String = "SELECT Id, Number, OrderDate, OrderTypeId, SeasonId FROM Planning.OrderHeaders"
Implements IRepository
Private Sub IRepository_Add(ByVal value As SqlResultRow)
'not implemented
End Sub
Private Function IRepository_Count() As Long
IRepository_Count = RepositoryImpl.Count("Planning.OrderHeaders")
End Function
Private Function IRepository_FindId(ByVal naturalKey As SqlResultRow) As Long
Dim sql As String
sql = "SELECT Id FROM Planning.OrderHeaders WHERE Number = ?;"
Dim result As Long
'todo: find out why query won't return a value with a string parameter.
' underlying db table column is a varchar, doesn't make sense to CLng() here.
result = cmd.QuickSelectSingleValue(sql, CLng(naturalKey("number")))
IRepository_FindId = IIf(IsEmpty(result), 0, result)
End Function
Private Function IRepository_GetAll() As SqlResult
'not implemented. too many results to efficiently process in a SqlResult.
End Function
Private Function IRepository_GetById(ByVal id As Long) As SqlResultRow
Set IRepository_GetById = RepositoryImpl.GetById(selectString, id)
End Function
Private Function IRepository_NewItem(ByVal Model As SqlResult, ByVal values As Scripting.IDictionary) As SqlResultRow
Set IRepository_NewItem = RepositoryImpl.NewItem(Model, values)
End Function
Private Sub IRepository_Remove(ByVal id As Long)
'not implemented
End Sub
Private Function IRepository_Search(ByVal terms As SqlResultRow) As SqlResult
'not implemented. possibly too many results to efficiently process with a SqlResult.
End Function
Private Sub IRepository_Update(ByVal id As Long, ByVal value As SqlResultRow)
'not implemented
End Sub
A few things annoy me slightly:
Most of both IPresenter and IRepository interfaces are left unimplemented for the OrderHeader model - the presenter is only exposing a Repository property... which somewhat smells.
The OrderHeaderPresenter, while physically decoupled from the ExcludedOrdersPresenter, is still functionally coupled with it - the FindCommand is prompting the user for an order number to be excluded, which is what needs to happen, but the OrderHeaderPresenter shouldn't be using resources meant to be used in another implementation. Or should I just change that string to read something more generic, like "Please enter an order number:"?
Anything else sticks out? Any recommendations?
Answer: Let's talk a little about your code the way it is right now. This happens a lot.
Private Property Set IPresenter_DetailsPresenter(ByVal value As IPresenter)
'not implemented
End Property
Private Property Get IPresenter_DetailsPresenter() As IPresenter
'not implemented
End Property
I like that you took the time to comment that it's not implemented, but anyone calling OrderHeaderPresenter.DetailsPresenter will have to dive into the source code to find out why it's not doing anything. Worse, it might take the unwary dev a long time to realize that it isn't doing anything. All of these should raise FeatureNotImplementedErrors.
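For illustration only — a sketch of what raising such an error could look like in VBA (the error constant, source string and message here are hypothetical, not taken from the reviewed code):

```vba
'Hypothetical custom error number; vbObjectError is the conventional base.
Private Const ERR_NOT_IMPLEMENTED As Long = vbObjectError + 513

Private Property Get IPresenter_DetailsPresenter() As IPresenter
    'Fail loudly instead of silently returning Nothing.
    Err.Raise ERR_NOT_IMPLEMENTED, TypeName(Me), _
              "IPresenter.DetailsPresenter is not implemented by this class."
End Property
```

Any caller hitting the unimplemented member then gets an immediate, descriptive runtime error instead of a silent `Nothing`.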
Putting that aside, you're right. The simple fact that all of these are not implemented does smell, but we'll get back to that.
Private Function IPresenter_ExecuteCommand(ByVal commandId As CommandType) As Variant
Select Case commandId
Case CommandType.FindCommand
IPresenter_ExecuteCommand = ExecuteFindCommand
Case Else
'not implemented
End Select
End Function
The Else case here makes me think that OrderHeaderPresenter isn't really an IPresenter. It only implements one of the command types and you added that command type specifically for this implementation. At the very least, don't let it silently fail. Again, raise an error here.
Private Function ExecuteFindCommand() As Long
Dim orderNumber As String
If Not RequestUserInput(prompt:=GetResourceString("PromptExcludedOrderNumberMessageText"), _
Your execute find command excludes the order number it's given? How is that a "find" command? Find is for positive hits, not negatives. Granted, I don't have a better name for you, but think on it.
The order ID is added to some ExcludedOrders table.
Okay, that's unclear from solely reading the code. Add a comment here.
Private Sub IPresenter_Show()
'not implemented
End Sub
If that doesn't convince you that this isn't an IPresenter I don't know what will. It's a presenter that doesn't present anything.
A lot of the same with IRepository methods not being implemented.
So, that's an awful lot of "Raise errors" and "This isn't really an IPresenter", but not a lot of advice on how to fix it. In my mind, there aren't a lot of options.
You could simply raise the custom errors I've mentioned and be done with it. (Easiest, but not necessarily the right thing to do.)
Create two new interfaces. I think creating a IRepositoryBase that holds the common contracts makes sense. OrderDetailsHeader would implement only IRepositoryBase and all of your other existing classes would need to be changed to implement both IRepositoryBase and IRepository. The second new interface would be for those "presenter" methods that aren't really presenter methods. (By far the hardest, but perhaps the most "correct" option.)
Create a single new interface for this new class. It will duplicate some code from both IRepository and IPresenter.
I would opt for option number three myself. I hope I've done enough to convince you that OrderDetailsPresenter is not really the same thing as an IPresenter. Making it its own thing makes sense. This could also be considered the most "correct" thing to do because you really should never change an interface once it's been created and used.
To quote CPearson:
Once you have defined your interface class module, you must not change the interface. Do not add, change, or remove any procedures or properties. Doing so will break any class that implements the interface and all those classes will need to be modified. Once defined, the interface should be viewed as a contract with the outside world, because other objects depend on the form of the interface. If you modify an interface, any class or project that implements that interface will not compile and all the classes will need to be modified to reflect the changes in the interface. This is not good. In a large application, particularly one developed by more than one programmer, your interface may be used in places of which you are not aware. Changing the interface will break code in unexpected places. If you find the absolute need to change an interface, leave the original interface unchanged and create a new interface with the desired modifications. This way, new code can implement the new interface, but existing code will not be broken when using the original interface.
Emphasis mine. | {
"domain": "codereview.stackexchange",
"id": 9175,
"tags": "object-oriented, design-patterns, vba"
} |
Energy levels in non-hydrogen-like atoms | Question: The energy $E_n$ for a hydrogen like atom is given as
$$E_n = -hcR_\ce{H}\frac{Z^2}{n^2}$$
However, aside from on wikipedia where there is
$$E_n = -hcR\frac{Z_\text{eff}^2}{n^2},$$
I can't find anywhere that relates to the energy of an atom that has more than one electron. Is there an equation that gives the specific (or approximate) energy or is there an 'easy' way to calculate a value for $Z_\text{eff}$?
Answer: Consider an atom to consist of an electron orbiting a positively charged ion core (with one or more electrons associated to it). If you excite the electron to larger and larger values of the principal quantum number $n$, the classical electron orbit becomes larger and larger as well (in fact it scales as $n^2$). At sufficiently large distance, the positively charged ion core behaves as a point charge (like a single proton) and at long range you basically have the hydrogen problem with a different mass (assuming perfect shielding of the core electrons). In order to find the overall solution to this problem, one has to connect the long-range and short-range behavior of the system. One way of doing this is by considering the system as a collision problem at negative energy (that is, a bound system). The effect of the nonhydrogenic ion core then results in a scattering phase shift between an incoming and outgoing electron. It can be shown that the solutions to this problem have a very similar appearance to the Bohr formula, namely:
$$
E_{n,\ell}=E_\text{IE}-\frac{hcR_A}{(n-\delta_\ell)^2}
$$
where $R_A=R_\infty(1-m_e/m_A)$ is the mass-corrected Rydberg constant, $E_\text{IE}$ is the ionization energy and $\delta_\ell$ is the so-called quantum defect for electrons with orbital angular momentum $\ell$. The quantum defect is related to the phase shift induced by the core and is different for $s, p, d, f$ etc. electrons. In principle, the quantum defect is a function of the binding energy of the electron to the ionic core, but in many cases this dependence may be neglected (in particular for large $n$).
What does this mean in practice? Let us for example look at the $n$s levels of the sodium atom. We find that $R_\text{Na}=109734.697205$ cm$^{-1}$ and $E_\text{IP}(\text{Na})=41449.451$ cm$^{-1}$. The second column in the table below gives the energy (in cm$^{-1}$ from the NIST website) and the third column gives the binding energy of the electron (also in cm$^{-1}$). The last column gives the so-called effective quantum number, that is $n^*=n-\delta_\ell$, with $\ell=0$ for $s$ orbitals
3s 0.000 -41449.451 1.63
4s 25739.999 -15709.452 2.64
5s 33200.673 -8248.778 3.65
6s 36372.618 -5076.833 4.65
7s 38012.042 -3437.409 5.65
8s 38968.510 -2480.941 6.65
9s 39574.850 -1874.601 7.65
10s 39983.270 -1466.181 8.65
11s 40271.396 -1178.055 9.65
As you can see, the quantum numbers take the form of the hydrogen solution if we start counting the principal quantum number from the 3$s$ levels and take $\delta_0\approx -0.65$.
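As a quick numerical check (a sketch, not part of the original answer), the effective quantum numbers in the last column can be recomputed directly from the binding energies via $n^* = \sqrt{R_\text{Na}/E_\text{bind}}$:

```python
import math

R_NA = 109734.697205  # mass-corrected Rydberg constant for Na, in cm^-1

# Binding energies (cm^-1) of the sodium ns levels from the table above.
binding = {3: 41449.451, 4: 15709.452, 5: 8248.778,
           6: 5076.833, 7: 3437.409, 8: 2480.941}

for n, e_bind in binding.items():
    n_star = math.sqrt(R_NA / e_bind)  # effective quantum number n*
    print(n, round(n_star, 2), round(n - n_star, 2))
```

Relative to the true principal quantum number, the defect of the $s$ series comes out as a nearly constant $\delta_0 \approx 1.35$, which matches the quoted $\delta_0 \approx -0.65$ once the $3s$ level is relabeled as $n = 1$.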
To calculate the quantum defect is difficult as it requires ab initio calculations and then converting the energy levels to effective quantum numbers from which the defects can be extracted. However, experimentally these numbers are very convenient, and one may use this quantum-defect theory for instance to determine the ionization energy of atoms and even simple molecules very accurately by extrapolating Rydberg series. | {
"domain": "chemistry.stackexchange",
"id": 8641,
"tags": "energy, electrons, atoms"
} |
Plane Settings of the Matched $z$-transform Method | Question: I've come across that the matched $z$-transform maps poles of the $s$-plane design to locations in the $z$-plane. My question is, what is the $s$-plane and what does this mean? I'm aware that the variable $s$ can often be used to denote the complex plane but the fact that this has been used in conjunction with the $z$ variable has confused me slightly.
Answer: The $s$-plane is the complex plane associated with the Laplace transform, i.e., with transfer functions of continuous-time systems, whereas the $z$-plane is the complex plane associated with the $\mathcal{Z}$-transform, i.e., with transfer functions of discrete-time systems. The matched $Z$-transform is one way of transforming continuous-time systems to discrete-time systems.
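For concreteness, the matched $z$-transform maps each pole or zero $s_k$ of the continuous-time transfer function to $z_k = e^{s_k T}$, where $T$ is the sampling period. A tiny sketch:

```python
import cmath

def matched_z(s_roots, T):
    """Map s-plane poles/zeros to z-plane locations via z = exp(s*T)."""
    return [cmath.exp(s * T) for s in s_roots]

# A stable analog pole (Re s < 0) lands inside the unit circle:
z, = matched_z([-1 + 2j], T=0.1)
```

Because $|e^{sT}| = e^{\operatorname{Re}(s)\,T}$, left-half-plane poles always map inside the unit circle, so a stable analog design stays stable after the mapping (the overall gain, however, has to be matched separately).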
Take a look at this answer for more information about transformations from continuous-time to discrete-time. | {
"domain": "dsp.stackexchange",
"id": 7441,
"tags": "filters, discrete-signals, z-transform, poles-zeros, matched-filter"
} |
tf Python API transform points | Question:
Hi
The TF documentation mentions that there are methods to convert points from one frame to another. I am currently using the python tf api.
I have a list of [x,y,z] points in the sensor frame and I want to transform these points into the map frame.
calling this
(tran, rot) = self.tf_listener.lookupTransform('/body', '/map', rospy.Time(0.0))
gives a translation vector and a quaternion for the rotation. I am not really familiar with quaternions... but I heard they are hard...
Anyway Is there a method in the python api to convert my vector of points to another frame?
I noticed in the c++ tf api there was a function called transformPoint. Is this what I need?
My other idea is to use the transformation.py module to convert the quaternions and the translation vector to a homogeneous transformation matrix and transform the vector of points manually. (by using the transformations.py module) http://wiki.ros.org/geometry/RotationMethods#transformations.py
Originally posted by Sentinal_Bias on ROS Answers with karma: 418 on 2014-03-30
Post score: 2
Answer:
The python tf API has transformPoint, and transformPointCloud. I'm not sure why they don't show up in the API docs, but here's how they work:
transformPoint(self, target_frame, ps) method of tf.listener.TransformListener instance
:param target_frame: the tf target frame, a string
:param ps: the geometry_msgs.msg.PointStamped message
:return: new geometry_msgs.msg.PointStamped message, in frame target_frame
:raises: any of the exceptions that :meth:`~tf.Transformer.lookupTransform` can raise
Transforms a geometry_msgs PointStamped message to frame target_frame, returns a new PointStamped message.
transformPointCloud(self, target_frame, point_cloud) method of tf.listener.TransformListener instance
:param target_frame: the tf target frame, a string
:param ps: the sensor_msgs.msg.PointCloud message
:return: new sensor_msgs.msg.PointCloud message, in frame target_frame
:raises: any of the exceptions that :meth:`~tf.Transformer.lookupTransform` can raise
Transforms a sensor_msgs PointCloud message to frame target_frame, returns a new PointCloud message.
You'll have to either loop over your vector and create PointStamped objects, or pack all the points into a PointCloud.
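The manual route from the question also works and needs no extra message objects. Below is a ROS-free sketch (pure Python; it assumes tf's (x, y, z, w) quaternion ordering as returned by lookupTransform):

```python
def quat_to_matrix(tran, rot):
    """Homogeneous 4x4 transform from a translation and an (x, y, z, w) unit quaternion."""
    x, y, z, w = rot
    tx, ty, tz = tran
    return [
        [1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w),     tx],
        [2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w),     ty],
        [2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y), tz],
        [0.0,               0.0,               0.0,               1.0],
    ]

def transform_points(points, tran, rot):
    """Apply the transform to a list of (x, y, z) tuples."""
    m = quat_to_matrix(tran, rot)
    return [tuple(m[i][0]*px + m[i][1]*py + m[i][2]*pz + m[i][3]
                  for i in range(3))
            for (px, py, pz) in points]
```

Note that lookupTransform takes the target frame first, so make sure the (tran, rot) pair you pass in maps points in the direction you intend.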
Originally posted by Dan Lazewatsky with karma: 9115 on 2014-03-30
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by Sentinal_Bias on 2014-03-30:
Hi thanks I use the pointcloud method. I found some documentation here http://mirror.umd.edu/roswiki/doc/diamondback/api/tf/html/python/tf_python.html. But the official link is missing sections | {
"domain": "robotics.stackexchange",
"id": 17465,
"tags": "python, transform"
} |
Ping function in Python | Question: Below is a piece of code which takes a list of IPs, pings the hosts and returns (prints) the list for "Alive" and "Dead". Any feedback on what can be done better is welcomed, mainly, speed and reliability! Would you accept it as production code? What about try/except blocks and logging?
import subprocess
from multiprocessing import Pool, cpu_count
hosts = ('127.0.0.1', 'google.com', '127.0.0.2', '10.1.1.1')
def ping(host):
ret = subprocess.call(['ping', '-c', '5', '-W', '3', host], stdout=open('/dev/null', 'w'), stderr=open('/dev/null', 'w'))
return ret == 0
host_list = {"Alive":[], "Dead":[]}
host_stat = {True: host_list["Alive"], False: host_list["Dead"]}
p = Pool(cpu_count())
[host_stat[success].append(host) for host, success in zip(hosts, p.map(ping, hosts))]
print host_stat
Answer: I’ll start with a meta comment:
Python (and production code) emphasises readability. There are several parts of your code which are quite difficult to read, or to work out what they’re for (more details below). In production code, we don’t just want speed and reliability – we also want code that somebody else can debug in a year, who’s never seen it before. Try to write something that’s easy to follow.
Then a few style comments:
I prefer to list module imports (after grouping them per PEP 8’s recommendations) in alphabetical order. If you have a long list of module imports, this makes it much easier to quickly see whether a module has been imported, or find duplicate imports.
Quoting PEP 8 on line length:
Limit all lines to a maximum of 79 characters.
The line which defines ret is 126 characters long, which means I can't see it without scrolling off the screen. Ick.
More general comments:
Add comments and docstrings. I don’t know what any of this code does, and even if I work it out, I don’t know why you wrote it that way. For example, what does the ping function return, and why are you passing all those flags to the ping utility?
Use better variable names. A few examples:
I expect a variable named host_list to be a list, but it’s actually a dict.
In production code, I steer clear of single-character variable names like p. Fine in small scripts, but it’s a pain to grep for later.
I’d change the name of the ping function so it can be discussed unambiguously from the command-line ping utility.
I wouldn’t accept these variable names in “production code”.
Not everything is encapsulated in a function. The ping function provides a nice wrapper around the ping utility, but checking the list of hosts is handled at the main level. Wrap this in a function that takes a list of hosts, and returns a list of alive/dead. That makes it easier to reuse this code later.
Then wrap the testing of this particular list of hosts in a main() function, and use the if __name__ == '__main__': main() construction. More portable.
You assume the list of host names is correct. If I drop in something that clearly isn’t a hostname (for example, '127.0.0.1 notahostname' into the hosts tuple), I get a non-zero return code from the subprocess call, but it’s not because that host is dead – it’s because it’s nonsensical.
I would consider storing the return code from each host: this is a much more useful indicator of what’s happened. Alternatively, you could check that hostnames are valid before you pass them in – this Stack Overflow answer looks like it might be useful.
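One common pre-validation approach is a sketch based on the RFC 1123 label rules (this is my illustration, not the code from the linked Stack Overflow answer):

```python
import re

def is_valid_hostname(hostname):
    """Check RFC 1123 hostname shape: dot-separated labels of 1-63
    letters/digits/hyphens, no leading/trailing hyphen, <= 253 chars total."""
    if len(hostname) > 253:
        return False
    if hostname.endswith('.'):        # strip one trailing dot (FQDN form)
        hostname = hostname[:-1]
    label = re.compile(r'(?!-)[A-Za-z0-9-]{1,63}(?<!-)$')
    return all(label.match(part) for part in hostname.split('.'))
```

With a check like this in place, the host-verification code could skip (or report) entries like '127.0.0.1 notahostname' instead of pinging them.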
There seems to be duplication of data in host_list and host_stat. I’m really not sure what the point of having both of these dictionaries is. Why not just start with empty lists in host_stat?
That list comprehension is too complicated. Not only is it too long (86 chars), but it’s not immediately obvious what it does. Break it down across multiple lines (remember, Python emphasises readability).
Using /dev/null to throw away the output from subprocess is platform-specific. It’s better to use os.devnull, which should work on non-Unix systems. (from How to hide output of subprocess in Python 2.7)
Four woolier comments:
Are the keys and values of host_stat the right way round? Once you have a dict of host availability data, I’d imagine you’re going to go through the list of hosts one-at-a-time and check if they’re available. For that, surely having the keys be the hostnames makes more sense (but that’s just me).
Why is hosts a tuple? I'd make it a list instead. Also, break it across multiple lines (see below).
I don’t see the value of multiprocessing here. Perhaps if you’re doing it for lots of hostnames, but otherwise I wouldn’t bother.
What do you need logging for? The only thing that might be useful is the return code from ping, but since you’re throwing that away, I’m not sure what you think you’d like to log.
With those comments in mind, here's how I'd rewrite your script:
import os
import subprocess
def ping_return_code(hostname):
"""Use the ping utility to attempt to reach the host. We send 5 packets
('-c 5') and wait 3 milliseconds ('-W 3') for a response. The function
returns the return code from the ping utility.
"""
ret_code = subprocess.call(['ping', '-c', '5', '-W', '3', hostname],
stdout=open(os.devnull, 'w'),
stderr=open(os.devnull, 'w'))
return ret_code
def verify_hosts(host_list):
"""For each hostname in the list, attempt to reach it using ping. Returns a
dict in which the keys are the hostnames, and the values are the return
codes from ping. Assumes that the hostnames are valid.
"""
return_codes = dict()
for hostname in host_list:
return_codes[hostname] = ping_return_code(hostname)
return return_codes
def main():
hosts_to_test = [
'127.0.0.1',
'google.com',
'127.0.0.2',
'10.1.1.1'
]
print verify_hosts(hosts_to_test)
if __name__ == '__main__':
main() | {
"domain": "codereview.stackexchange",
"id": 11485,
"tags": "python, python-2.x, status-monitoring"
} |
How does a Hadamard discrete-time quantum walk result in a skewed distribution? | Question: I was reading this tutorial about discrete random walk and got confused by the following paragraph.
After the succession of Hadamard applications ($H$), I wonder how we get a skewed distribution. I can understand this could happen with noise or an erroneous $H$ operation. If the Hadamard operation is perfect, then it would produce an equal split and yield the classical behavior, no?
Thanks.
Answer: You get a skewed distribution because you start with a "skewed" coin state (I'm assuming the system you are considering starts with the walker state in a single fixed state).
In fact, you can verify that the asymmetry of the output distribution depends on the choice of initial coin state. If the initial coin state is for example $|0\rangle+i|1\rangle$, then you get a symmetric distribution.
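Both claims are easy to verify by direct simulation. Here is a minimal sketch (my convention, not from the answer: a Hadamard coin flip each step, then the walker moves left for coin $|0\rangle$ and right for $|1\rangle$):

```python
import math

def hadamard_walk(steps, coin):
    """Discrete-time Hadamard walk from position 0.

    coin: (amplitude of |0>, amplitude of |1>) for the initial coin state.
    Convention: coin |0> shifts the walker by -1, coin |1> by +1.
    Returns {position: probability} after `steps` steps.
    """
    s = 1 / math.sqrt(2)
    state = {(0, 0): coin[0], (0, 1): coin[1]}  # (position, coin) -> amplitude
    for _ in range(steps):
        new = {}
        for (x, c), a in state.items():
            # Hadamard coin flip: H|0> = (|0>+|1>)/sqrt2, H|1> = (|0>-|1>)/sqrt2
            for c2, u in ((0, s), (1, s if c == 0 else -s)):
                x2 = x - 1 if c2 == 0 else x + 1  # conditional shift
                new[(x2, c2)] = new.get((x2, c2), 0) + u * a
        state = new
    probs = {}
    for (x, _), a in state.items():
        probs[x] = probs.get(x, 0.0) + abs(a) ** 2
    return probs
```

Starting from the $|0\rangle$ coin, three steps give probabilities $1/8, 5/8, 1/8, 1/8$ at $x = -3, -1, +1, +3$ (skewed to one side under this convention), whereas the $(|0\rangle + i|1\rangle)/\sqrt2$ coin gives a symmetric distribution with zero mean.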
This might be a bit of an overkill here, but there is actually a nice way to write the transition amplitudes corresponding to an arbitrary $n$-steps discrete-time quantum walk. Suppose the initial state has the form $|i,s\rangle$, with $|i\rangle$ the initial walker state and $|s\rangle$ the initial coin state, and we want to know the probability amplitude of ending up with the state $|j,t\rangle$ after $n\ge1$ steps. Assume that $s,t\in\{0,1\}$ (we don't lose much with this assumption: if the initial or final coin states are superpositions of computational basis ones, we can go back to this case by changing the description of the coin operations).
Denote with $\mathcal W\equiv \mathcal S\mathcal C$ the walk operator, where $\mathcal S$ and $\mathcal C$ are the controlled-shift and coin operations, respectively. Denote with $u_{ss'}$ the matrix elements of the coin matrix (in your case, these would be the matrix elements of the Hadamard matrix). We then have
$$\langle j,t|\mathcal W^n|i,s\rangle =
\sum_I u_{I} \equiv
u_{I_1 I_2} u_{I_3 I_4} \cdots u_{I_{2n-1}I_{2n}},$$
where the short-hand notation $u_I$ denotes the product of matrix elements $u_{\alpha\beta}$ with the indices taken progressively from the $2n$-bit string $I$ (this is way easier to understand in practice than to explain in full generality, I'll give an example in a bit).
The sum is taken over all binary strings $I$ of length $2n$ such that
$I_1\equiv t, I_{2n}=s$;
denote with $\tilde I$ the elements of $I$ obtained removing the extremal ones (e.g. if $I=(011001)$ then $\tilde I=(1100)$). Then $\tilde I$ must be made up of pairs of equal elements. For example, you can have $\tilde I=(1111)$ and $\tilde I=(1100)$, but not $\tilde I=(1101)$;
the number of $1$s in $I$ is related to the initial and final positions $i$ and $j$. More precisely, $j-i=\sharp_1-\sharp_0+\delta_{I_1,1}-\delta_{I_1,0}$, where $\sharp_p\equiv\sharp_p(\tilde I)$ is the number of elements of $\tilde I$ which equal $p\in\{0,1\}$.
While these rules appear rather convoluted, they can be used to directly compute input/output probability amplitudes quite easily. For example, consider the case of three steps, and assume the initial state to be $|0,0\rangle\equiv |0,\uparrow\rangle$.
Then (some of) the output probability amplitude $3$ steps corresponds to the tuples (and therefore amplitudes)
$$|-3,0\rangle \to (000000) \simeq u_{00} u_{00} u_{00}, \\
|+3,1\rangle \to (111110) \simeq u_{11} u_{11} u_{10}, \\
|+1,0\rangle \to (011110) \simeq u_{01} u_{11} u_{10}, \\
|+1,1\rangle \to (100110),(111000) \simeq u_{10} u_{01} u_{10} + u_{11} u_{10} u_{00}, \\
|-1,0\rangle \to (011000), (000110) \simeq u_{01} u_{10} u_{00} + u_{00} u_{01} u_{10}, \\
|-1,1\rangle \to (100000) \simeq u_{10} u_{00} u_{00}.$$
Now, for the Hadamard walk you have $u_{ij}\equiv H_{ij}=\frac{1}{\sqrt2}(-1)^{ij}$, and therefore the probability amplitudes are clearly asymmetric: we have (up to $2^{-3/2}$ constant factors)
$$|-3,0\rangle \to 1, \qquad
|+3,1\rangle \to 1, \\
|+1,0\rangle \to -1, \qquad
|+1,1\rangle \to 0, \qquad
|-1,0\rangle \to 2,\qquad
|-1,1\rangle \to 1.$$ | {
"domain": "quantumcomputing.stackexchange",
"id": 2855,
"tags": "hadamard, quantum-walks"
} |
Why is the distribution of electrons of calcium in K,L,M,N shells 2,8,8,2 instead of 2,8,9,1? | Question: I'm a beginner to this topic, so this would likely sound dumb. As far as I know, when distributing electrons in energy shells, the last energy shell can't have more than 8 electrons. So for calcium, it can't be 2,8,10. However, why isn't it 2,8,9,1?
Answer: The answer of your problem lies in the existence of sub-shells, that you seem not to know. Let me explain.
If you give a number $n$ to the successive shells, $K$ shell gets $1$, $L$ gets $2$, $M$ gets $3$, etc. The maximum number of electrons per shell is $2n^2$, which is $2$ for $K$ shell, $8$ for $L$ shell, $18$ for $M$ shell, etc. $n$ is called the first quantum number, or principal quantum number.
But whatever the $n$ value, the $2$ first electrons of any shell are called with the letter $s$ after the first quantum number $n$. Their energy is always a little bit lower than the other electrons of the same shell. They form what we call the sub-shell $s$. Examples of sub-shells $s$ : $1s, 2s, 3s, 4s$, etc.
If a shell has more than $2$ electrons, the supplementary ones are called $n$ plus the letter $p$. They form a sub shell $p$. Example of sub-shells $p$ : $2p, 3p, 4p$, etc. There may be up to six such $p$ electrons, and their energy is a little bit higher than the two $s$ electrons of the same shell.
If a shell has more than $8$ electrons, the ten supplementary ones are called $n$ plus the letter $d$. They form a sub-shell $d$ and their energy is higher than the electrons $s$ and $p$ of the same shell. Examples of sub-shells $d$: $3d, 4d$, etc.
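The energy ordering of these sub-shells is commonly summarized by the Madelung ($n+\ell$) rule: fill sub-shells in order of increasing $n+\ell$, breaking ties with the smaller $n$. Here is a sketch of Aufbau filling under that rule (note it is a heuristic with known exceptions, e.g. chromium and copper):

```python
def shell_occupancy(Z):
    """Electrons per shell (K, L, M, ...) for atomic number Z, filling
    sub-shells by the Madelung rule: increasing n + l, ties broken by n."""
    subshells = sorted(((n, l) for n in range(1, 8) for l in range(n)),
                       key=lambda nl: (nl[0] + nl[1], nl[0]))
    shells = {}
    remaining = Z
    for n, l in subshells:
        if remaining == 0:
            break
        e = min(remaining, 2 * (2 * l + 1))   # sub-shell capacity is 2(2l+1)
        shells[n] = shells.get(n, 0) + e
        remaining -= e
    return [shells[n] for n in sorted(shells)]
```

For calcium ($Z = 20$) this yields the shell occupancies $2, 8, 8, 2$, and for sodium ($Z = 11$) it yields $2, 8, 1$.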
But surprisingly, electrons of the sub-shell $4s$ have a lower energy than $3d$. So when filling up the electronic configuration of Calcium ($Z = 20$), the sub-shell $4s$ will be filled before the $3d$. This is why Calcium contains $2$ electrons in the $K$ shell, $8$ in the $L$ shell, and not more than $8$ electrons in the $M$ shell. After finishing the $3p$ sub-shell, the two last electrons are not going to the $3d$ sub-shell of the $M$ shell. They will prefer going to the $4s$ sub-shell of the $N$ shell. This is why Calcium is not $2,8,9,1$ or $2,8,10$, as you wrote. It is $2, 8, 8, 2$. | {
"domain": "chemistry.stackexchange",
"id": 16290,
"tags": "electrons, electronic-configuration"
} |
What do normalization term and partial measurement represent when tracing out ancillary qubits? | Question: I am reading a paper and I am having trouble following some equations.
The system in this paper has $N$ qubits, with $N_A$ ancillary and the rest ($N - N_A$) as data qubits. For the purpose of this question, any subscript $t$ can be ignored. $|\Psi(z)\rangle$ represents the quantum state of the entire system.
We then take the partial measurement $\Pi_{\mathcal A}$ on the ancillary subsytem $\mathcal A$ of $|\Psi(\mathcal z)\rangle$, i.e., the post-measurement quantum state $\rho_t(\mathcal z)$ is
$$ \rho_t(\mathcal z) = \frac{\text{Tr}_{\mathcal A}(\Pi_{\mathcal A}|\Psi_t(\mathcal z)\rangle\langle\Psi_t(\mathcal z)|)}{\text{Tr}(\Pi_{\mathcal A}\otimes\mathbb{I}_{2^{N-N_{\mathcal A}}}|\Psi_t(\mathcal z)\rangle\langle\Psi_t(\mathcal z)|)} $$
An immediate observation is that state $\rho_t(\mathcal z)$ is a nonlinear map for $|\mathcal z\rangle$, since both the numerator and denominator of Eq. (13) [the one above] are functions of the variable $|\mathcal z\rangle$.
What do the top and bottom lines mean?
I understand the top line is trying to trace out the ancillary qubits. However, I don't understand the purpose of the partial measurement $\Pi_\mathcal{A}$. The point is to disregard the ancillary qubits, so is $Tr_\mathcal{A}(|\Psi(z)\rangle \langle \Psi(z)|)$ not sufficient? (How to get subspace of quantum circuit?)
Regarding the bottom line, does this act as some form of normalisation?
Many thanks for the help!
Answer: Tracing out without measuring is not equivalent to simply tracing out. Consider the state:
$$\frac{|00\rangle+|11\rangle}{\sqrt{2}}.$$
Tracing out the last qubit, you would get:
$$\rho=\frac12I,$$
that is the fully mixed state. Measuring the last qubit, noting down the result $y$ and tracing it out yields the state:
$$\rho=|y\rangle\langle y|.$$
Thus, tracing out may not be sufficient depending on your needs and depending on whether your registers are entangled.
Concerning the actual equation, you were right: the first line traces out the ancillary qubits after having measured them, while the second one is a normalization coming from the measurement of the ancillary qubits. | {
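The Bell-state example above can be checked concretely. The following NumPy sketch is my own illustration (not from the paper), taking the last qubit as the ancilla and $\Pi_{\mathcal A} = |0\rangle\langle 0|$:

```python
import numpy as np

# Bell state (|00> + |11>)/sqrt(2); the second tensor factor is the ancilla.
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)
rho_full = np.outer(psi, psi.conj())

def ptrace_last(rho):
    """Trace out the last qubit of a 2-qubit density matrix."""
    r = rho.reshape(2, 2, 2, 2)          # indices (i, k; j, l); ancilla = k, l
    return np.einsum('ikjk->ij', r)

# (a) Plain partial trace: gives the fully mixed state I/2.
rho_traced = ptrace_last(rho_full)

# (b) Measure the ancilla (projector |0><0|), then trace it out:
P0 = np.diag([1.0, 0.0])                 # |0><0| on the ancilla
Pi = np.kron(np.eye(2), P0)              # projector acting on the last qubit
numerator = ptrace_last(Pi @ rho_full @ Pi)
denominator = np.trace(Pi @ rho_full)    # the normalization term
rho_post = numerator / denominator       # collapses to the pure state |0><0|
```

Without the projector the remaining qubit is the fully mixed $I/2$; with the measurement and the normalization term it becomes the pure state $|0\rangle\langle 0|$, exactly as described above.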
"domain": "quantumcomputing.stackexchange",
"id": 3314,
"tags": "linear-algebra"
} |
How to measure the mass of alpha particle? | Question: I was reading about the Rutherford experiment with $\alpha$ particles, where we conclude that the positive charge and mass are concentrated in the center of atoms. While deriving the above result, we use the charge and mass of the $\alpha$ particle.
I am still wondering: when we didn't know about the nucleus size (before the experiment), how did we know:
size of an $\alpha$ particle.
mass of an $\alpha$ particle.
charge of an $\alpha$ particle.
I know that we can measure the speed and charge-to-mass ratio of an $\alpha$ particle by passing it through perpendicular electric and magnetic fields. Is there any experiment from which we can know the exact mass and charge of an $\alpha$ particle?
Did they know that $\alpha$ particles are helium nuclei?
Answer: Yes. In fact that was what Rutherford got the Nobel (chemistry!) prize for. He trapped alpha particles from radium decay and showed that they produced a gas which, when excited, gave off light with the same spectral lines as helium.
https://web.lemoyne.edu/~giunta/ea/ROYDSann.HTML | {
"domain": "physics.stackexchange",
"id": 72869,
"tags": "particle-physics, experimental-physics, mass, nuclear-physics, atomic-physics"
} |
What am I doing wrong? Gmapping on Turtlebot Clone. I've pictures! | Question:
Hello all,
I've been scratching my head over this for a while now, and I would appreciate an explanation. If some of the angels on high (like Melonee) could just give me a straightforward answer, it would be awesome.
Basically, I have been putting together my Turtlebot Clone over the past few weeks. I've gotten around to doing some mapping recently; I am just using the standard demo as suggested by the tutorials. However, I get errors, and mutated maps that even their own mothers couldn't love.
So, I have taken some screenshots to give you a step by step of what I am doing.
I start off with the above. I have my robot model, laser, map and camera all good to go. I have ran the gmapping launch file. You can see that the first parts of the occupancy grid is filling in. I have not moved the robot yet. There are no errors and we are all green on the dashboard.
Now, I have moved the turtlebot slightly to the left. The occupancy grid expands, however errors start to appear from the dashboard. What do those errors look like you might ask? I'll show you.
It is damn ugly I know. If I continue going around the environment, I will keep getting the error where the transform from base_link to odom fails. The only thing I touch in relation to launch files etc, is the minimal.launch. I don't use startup because teleop would not run when I did. I did the usual gyro config beforehand, and instead of putting the parameters into turtlebot.launch, I place them in minimal.launch. From what I can figure this should work fine. Correct me if I am wrong.
Now, if I continue around the map. I get another flavor of error. This time this is to do with a loop rate that isn't being met. It is usually just shy of it.
Now, if I continue on my merry way around the simple rectangular arena, you see that obviously something is wrong. The two differences I can see from anyone else are how I do the bringup with the calibration values in the minimal.launch instead of the turtlebot.launch, and that I have a weaker netbook than what is suggested. If I had to put my money on it, I would say that it is the netbook not being fast enough.
You can look at the specs of the netbook I bought here.
Samsung NC110
My hands were a little tied with the choice, so this is why I didn't get something more powerful.
Again, I would really and truly like some feedback on what I might be doing wrong here, so I can zero in on the problem faster.
Thank you in advance.
Originally posted by osuairt on ROS Answers with karma: 69 on 2012-02-06
Post score: 0
Original comments
Comment by osuairt on 2012-02-11:
I've found that, and now I get an average load around 2.0. The problem still persists and now thinking that it must be a transform problem.
Comment by mmwise on 2012-02-10:
the parameter is in the kinect.launch file.
Comment by Lorenz on 2012-02-09:
The output shows that there is no odom transform but an odom_combined transform. Can you please also provide the output of rosrun tf tf_monitor and rostopic info /odom?
Comment by osuairt on 2012-02-09:
I've played around a bit more, and I have throttled the point cloud at 10Hz and getting a load average of around 2.0, mostly below. So, I take it that that possibility can be crossed off. Now this leaves the transform as Lorenz is saying. rosrun tf tf_echo base_footprint....) http://imgur.com/TAa70
Comment by Lorenz on 2012-02-09:
I still think that something is wrong with your transform tree, maybe with robot_pose_ekf. Otherwise tf_echo would be able to find odom_combined. Did you verify that robot_pose_ekf is actually connected to all required topics as I mentioned in one comment below?
Comment by osuairt on 2012-02-09:
Thank you very much. I have altered the point cloud throttle max rate down to 1Hz even. Then I get a load of 2.4 while driving around, and I still get the Transform from base_link to odom failure. Your thoughts?
Comment by Lorenz on 2012-02-08:
turtlebot_bringup/kinect.launch
Comment by osuairt on 2012-02-08:
:) Thank you. Can you tell me which config file I can change this parameter? I'll let you know if that helps. Thanks again!!
Comment by mmwise on 2012-02-08:
a load average of 5 might be your problem. Try changing the point cloud throttle max rate to 10Hz. see if that helps.. You want to have a load average less than 2.
Comment by osuairt on 2012-02-08:
Hi Melonee, thank you for taking the time to get back to me. I've written up the upstart stuff at: http://answers.ros.org/question/3901/turtlebot-upstart-install-failure . And, I went ahead and ran top while do the mapping http://pastebin.com/zsUapqpH .
Comment by mmwise on 2012-02-07:
on your TurtleBot run the command top and tell me what the load average is...
Comment by mmwise on 2012-02-07:
p.s. you should use upstart..ask a new question about the teleop and I'll help you make it work...
Answer:
To me it looks like something is wrong with your odometry or with the laser data that is coming in. You could check the following:
Is odometry running and publishing sane values (check with rostopic echo /odom). Move around and verify that odometry is changing.
Check if the odom transform is coming in (rosrun tf tf_echo odom base_footprint). Try setting the fixed frame in rviz to odom and move the robot around. Verify that the robot is moving correctly in rviz, i.e. when you move one meter forward, it should also be approximately one meter in rviz (the grid display helps here).
Originally posted by Lorenz with karma: 22731 on 2012-02-07
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by osuairt on 2012-02-10:
Hi. Yes, I am using gmapping_demo.launch. This is the result of rosnode info robot_pose_ekf - http://imgur.com/BoJC6.
Comment by Lorenz on 2012-02-10:
Ok. One thing I see already is that gmapping publishes a wrong transform so I guess something is wrong with its config. Which gmapping launch file did you use? turtlebot_navigation/gmapping_demo.launch? Please also post rosnode info robot_pose_ekf.
Comment by osuairt on 2012-02-10:
Also, there is my minimal.launch - http://imgur.com/q034D
Comment by osuairt on 2012-02-10:
Well I pulled down and did the Turtlebot Ubuntu install as usual, there was no mention in the install instructions that additional modifications had to be made. I've removed the mapping at the end robot_pose_ekf. tf_monitor - http://imgur.com/befc4. rostopic - http://imgur.com/fExHx
Comment by Lorenz on 2012-02-09:
Is that really the launch file you are using. It is PR2 specific. Remove the remapping line at the end or change it to match your setup. Also, to solve your problem, it is really important to understand your setup, so please provide the output of rosrun tf tf_monitor and rostopic info odom.
Comment by osuairt on 2012-02-09:
I see what you are saying that it should be odom_combined. http://imgur.com/SlCrn
Comment by Lorenz on 2012-02-07:
robot_pose_ekf is started by minimal.launch. Check with rxgraph or rosnode info robot_pose_ekf if all input topics are connected. I don't know which launch file you were using for gmapping but the odom frame needs to be set to odom_combined there. You don't need to start move_base for just mapping
Comment by osuairt on 2012-02-07:
I performed the standard calibration for the gyro and odometry from the tutorials and included them in minimal.launch. Lets go with the second option. Can you give me a idea of how I go about configuring the robot_pose_ekf and for move_base to use odom_combined?
Comment by Lorenz on 2012-02-07:
The topic is actually odom :) Did you configure and run robot_pose_ekf for fusing odometry and the gyro? You have two choices now: 1) don't use robot_pose_ekf (requries a patch to turtlebot_node) 2) configure robot_pose_ekf and change ros parameters for gmapping and move_base to use odom_combined.
Comment by osuairt on 2012-02-07:
Hi. In both cases of the rosparam get above, it was odom instead of odom_combined as you mentioned. Also, when I run rostopic list, there is only odom no odom_combined. The tf_echo also says that odom_combined does not exist. What do you think?
Comment by osuairt on 2012-02-13:
Hi, do you have any thoughts on my last comment?
Comment by Lorenz on 2012-02-13:
Sorry. I was very busy and didn't have time yet to answer. In general, people are answering as soon as they have time. See http://www.ros.org/wiki/Support#Guidelines_for_asking_a_question_.28Please_read_before_posting.29
Comment by Lorenz on 2012-02-13:
After double-checking the turtlebot launch files, I see that the turtlebot is actually using odom, not odom_combined. I'll edit my answer accordingly. My guesses are now: either you don't get updated laser data or you get bad odometry data. Please verify that both data sources look good.
Comment by osuairt on 2012-02-27:
The laser data looks good. Odom data on the other hand not so much. If I am traveling around the environment taking the mapping, I will get the odom to base_link transform error, then odometry readings skip off to some other area of the map, this is probably why I cannot close the loop.
Comment by osuairt on 2012-02-27:
The following is an image of Rviz http://imgur.com/3NJZ0 . You can see what I mean by the skipping of odometry. I've swapped out the Create for another I had and the behavior still persists.
Comment by osuairt on 2012-02-29:
From what it seems it looks like a problem with the tf. http://dl.dropbox.com/u/36264864/second.bag | {
"domain": "robotics.stackexchange",
"id": 8134,
"tags": "navigation, turtlebot, gmapping"
} |
Reduction from the SAT problem to the NAE-SAT problem | Question: I study complexity and computation independently.
I have a problem that I can not solve.
That's the problem:
For the SAT problem, there is a version in which we receive as input a phrase $\varphi$ in CNF form and we have to decide whether there is a placement under which each of the clauses of $\varphi$ has at least one satisfied literal and at least one unsatisfied literal. (This version is called NAE-SAT, where NAE is the acronym for Not All Equal.)
for example:
$( x_1 \vee \overline{x_2} \vee x_3 )\wedge (\overline{x_1} \vee \overline{x_2} \vee x_3) \in NAE-SAT$ because for the placement $s(x_1)=T$, $s(x_2)=T$ and $s(x_3)=T$ it holds that in each clause there is a literal assigned T and a literal assigned F, as can be seen from the form: $( T \vee F \vee T )\wedge (F \vee F \vee T )$
$( x_1 \vee x_2 \vee x_3 )\wedge (\overline{x_1} \vee x_2 \vee x_3) \wedge (\overline{x_1} \vee \overline{x_2} \vee x_3) \wedge (\overline{x_1} \vee x_2 \vee \overline{x_3}) \notin NAE-SAT$, because in every placement for the phrase there will be at least one clause in which all the literals are satisfied or all are unsatisfied, contrary to the requirements of the language.
We would like to show a reduction $SAT \leq_p NAE\text{-}SAT$. Given a phrase $\varphi = c_1 \wedge c_2 \wedge \cdots \wedge c_m$ (over the Boolean variables $x_1 , x_2 , \cdots , x_n$) as an instance for the SAT problem, we will first create the new variables $y_1 , y_2 , \cdots , y_m , z$. Now, for each clause $C_i = (l_{i,1} \vee \cdots \vee l_{i,k_i})$ (which has $k_i \geq 3$ literals) we construct the clauses $D_{i,1} = (l_{i,1} \vee \cdots \vee l_{i,k_i-1} \vee y_i)$ and $D_{i,2} = ( \overline{y_i} \vee l_{i,k_i} \vee z)$. Finally, we will define the whole phrase to be $f(\varphi ) = D_{1,1} \wedge D_{1,2} \wedge D_{2,1} \wedge D_{2,2} \wedge \cdots \wedge D_{m,1} \wedge D_{m,2}$.
The question has 3 sections which are related to each other, so I cannot ask each one separately.
Section A
For the phrase $\varphi = ( \overline{x_1} \vee x_2 \vee \overline{x_3} \vee x_4 )\wedge (x_1 \vee x_2 \vee x_3)$, what is the phrase $f(\varphi)$ that will be constructed by the reduction?
$f(\varphi ) = ( \overline{x_1} \vee x_2 \vee \overline{x_3} \vee y_1 )\wedge (\overline{y_1} \vee x_4 \vee z) \wedge (x_1 \vee x_2 \vee y_2) \wedge (\overline{y_2} \vee x_3 \vee z)$
$f(\varphi ) = ( \overline{y_1} \vee x_4 \vee z) \wedge (x_1 \vee \overline{x_3} \vee y_2) \wedge (y_1 \vee y_2 \vee y_3) \wedge (\overline{y_3} \vee \overline{x_3} \vee z)$
none of the answers is correct.
I think the answer is 1; it's not complicated.
Section B
Is the reduction, as a function of the length of the phrase, polynomial?
Yes and also linear
Yes but it is not linear
This cannot be determined
No, since the number of new variables we add may increase exponentially depending on the number of original variables
none of the answers is correct.
I think the answer is yes, so 3 and 4 are certainly incorrect. The reduction takes a phrase in SAT and at most doubles its size. If in SAT there were $m$ clauses, there will now be $2m$ clauses. And if in SAT there were $n$ variables, in NAE-SAT there will be $n + m + 1$ variables (the $m$ helpers $y_i$ plus $z$). So in my opinion the correct answer is 1, but I'm not sure here.
Section C
Is the reduction correct and defined on each input?
Yes and yes
Although the reduction is correct, in the description we do not refer to clauses with fewer than 3 literals. Nevertheless, it can be easily adapted to the case of 2 literals, and in the case of a single literal the clause is simply rejected, as it will not be possible to simultaneously satisfy and falsify it.
The reduction is incorrect and is not defined on the entire input, which is why we do not handle clauses of even length
Although the reduction is defined on the entire input, it is incorrect.
none of the answers is correct.
I think the reduction is correct because it is polynomial. And I think it's also defined, because from every clause in SAT you can get 2 clauses in NAE-SAT, but I'm not sure about that. I am torn between answer 1 and answer 2.
The question was translated from Hebrew. So I have no source for the question.
Answer: As you correctly spotted, the reduction can be implemented in polynomial time, and the blowup in the formula size is indeed linear.
The reduction is also correct, with the caveat mentioned in item 2.
It's not trivial to show, but we'll go at it slowly.
So first, let's assume we allow this construction also for 2 literals. As for 1 literal clauses -- we can assume those don't exist, since given a formula $\phi$, any 1-literal clause immediately forces a value of one of the variables $x$. So we can start by plugging in this value, and removing the clause and all occurrences of $x$ (more precisely - if $x$ was assigned True, we can remove all clauses containing $x$, and remove $\neg x$ from all other clauses, and vice-versa if $x$ was assigned False).
So now for the main reduction.
We need to show two directions of correctness. First, let's assume $\phi\in SAT$. Then it has a satisfying assignment $\pi$.
We'll construct a NAE-satisfying assignment for $f(\phi)$ (note that it's called an "assignment", not "placement").
Consider some clause $C_i$ of $\phi$. Since $\pi\models \phi$, then at least one literal in $C_i$ is assigned True. If one of the literals in $D_{i,1}$ is assigned True, then we assign $y_i$ to False and $z$ to False, and now both clauses that result from $C_i$ are NAE-satisfied.
Otherwise, if all literals in $D_{i,1}$ are assigned to False, then the remaining literal, which appears in $D_{i,2}$ is assigned to True, so by assigning $y_i$ to be True (and $z$ still False), we get again a NAE-satisfying assignment.
Note that we set $z$ to False globally, as it is a single variable.
So the first direction is correct. For the converse, we start with the following observation: if $\tau$ is a NAE-assignment, then the "inverse" assignment is also NAE.
Thus, if $f(\phi)$ has a NAE-assignment, we can assume w.l.o.g. that $z$ is assigned to False (if not, we invert the assignment).
From here we make a similar deduction to the above, to obtain a satisfying assignment for $\phi$. Try it and write if you get stuck. | {
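As a further sanity check, here is a small Python sketch of the reduction together with a brute-force NAE checker (my own illustration; literals are encoded as signed integers, $x_i$ as $i$ and $\overline{x_i}$ as $-i$):

```python
from itertools import product

def sat_to_nae(clauses, num_vars):
    """Build f(phi): clause C_i = (l_1 ... l_k) becomes
    D_{i,1} = (l_1 ... l_{k-1} y_i) and D_{i,2} = (~y_i l_k z)."""
    out = []
    z = num_vars + len(clauses) + 1      # one fresh y_i per clause, plus z
    for i, c in enumerate(clauses):
        y = num_vars + 1 + i
        out.append(c[:-1] + [y])         # D_{i,1}
        out.append([-y, c[-1], z])       # D_{i,2}
    return out, z                        # z is also the new variable count

def nae_satisfiable(clauses, n):
    """Brute force: does some assignment NAE-satisfy every clause?"""
    for bits in product([False, True], repeat=n):
        val = lambda lit: bits[abs(lit) - 1] ^ (lit < 0)
        if all(any(val(l) for l in c) and not all(val(l) for l in c)
               for c in clauses):
            return True
    return False

# The phrase from Section A: the reduction reproduces answer option 1,
# with y_1 = 5, y_2 = 6 and z = 7.
phi = [[-1, 2, -3, 4], [1, 2, 3]]
nae, n = sat_to_nae(phi, 4)
```

Running it, `nae` equals option 1 of Section A, and an unsatisfiable CNF (e.g. all four 2-literal clauses over $x_1, x_2$) maps to a phrase that is not NAE-satisfiable, as the two correctness directions require.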
"domain": "cs.stackexchange",
"id": 18784,
"tags": "complexity-theory, computability, reductions, np, satisfiability"
} |
Physical meaning of the convection term in the momentum equation of acoustic wave | Question: In deriving the acoustic wave equation, the momentum equation is used.
$$\frac{\partial \mathbf{u}}{\partial t}+
(\mathbf{u}\nabla)\mathbf{u}=-\frac{1}{\rho} \nabla p$$
Intuitively, the convection term $(\mathbf{u}\nabla)\mathbf{u}$ represents a component of acceleration, but how does this acceleration originate?
P.S.
What is the difference between $(\mathbf{u}\nabla)\mathbf{u} $ and $(\mathbf{u}\cdot\nabla)\mathbf{u} $?
Answer: The momentum equation written above is written for an incompressible fluid, otherwise the density $\rho$ would have to be written inside the partial derivative:
$$
\frac{\partial\rho\mathbf{u}}{\partial t}
$$
Hence, the momentum in a Control Volume can change by either:
a change in velocity of the fluid
convection (transport) of momentum through the CV boundaries. This is the meaning of the term $(\mathbf{u}\nabla)\mathbf{u}$. | {
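To make the term concrete, here is a small NumPy sketch (my own illustration) for a steady 1-D field $u(x) = \sin x$: even with $\partial u/\partial t = 0$ at every fixed point, a fluid parcel accelerates because it is carried into regions of different velocity, and that acceleration is exactly $u\,du/dx$:

```python
import numpy as np

# Steady 1-D velocity field: the parcel acceleration is purely convective,
# (u . grad) u = u du/dx, even though du/dt = 0 at every fixed point.
x = np.linspace(0.0, 1.0, 201)
u = np.sin(x)
convective = u * np.gradient(u, x)     # numerical u du/dx
exact = np.sin(x) * np.cos(x)          # analytic u du/dx for comparison
```

The numerical convective acceleration matches the analytic $\sin x \cos x$ away from the grid endpoints.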
"domain": "physics.stackexchange",
"id": 60275,
"tags": "fluid-dynamics, waves, acoustics, vector-fields, flow"
} |
momentum in calculation of Bohr's radius | Question: Some usual calculations of the Bohr radius (see 2.5 at Feynman's here, or this text here) start by defining a radius $a$ (the most probable radius? the average radius?). Next, from Heisenberg's uncertainty principle:
$$\Delta p \Delta a \ge h/2$$
it is said:
$$p = h/a.$$
where $p$ is the electron momentum.
Remainder of the calculation is expression of the total electron energy and minimization.
This step from uncertainty to momentum is confusing to me: we pass from inequality to equality, from margins ($\Delta$) to concrete values of $p$ and $a$.
Any hint to understand this step? Thanks.
Addendum:
This is another similar example, from "Quantum Mechanics", Nouredine Zettili, example 1.6:
Estimate the uncertainty in the position of (a) a neutron moving at
$5 \cdot 10^6 m/s$ ...
Solution: Using (1.57), we can write the
position uncertainty as $$\Delta x = \frac{\hbar}{2 \Delta p} = \frac{\hbar}{2 m v } = ... $$
again, $\Delta p$, an interval, is converted to the absolute value $p=mv$.
If something is in an interval $p \in [p_{min},p_{max}]$ we can say $p=p_{avg}\pm \Delta p$ where $p_{avg}=\frac{p_{max}+p_{min}}{2}$ and $\Delta p=\frac{p_{max}-p_{min}}{2}$, but these texts seems to assume $p_{min}=0$ and $\frac{p_{max}}{2}=mv=\Delta p$ or something similar.
In other words, when in this example we give a known value for the speed, we are fixing the momentum without any indetermination, thus, uncertainly in position should be infinite.
Answer: The usual way this calculation is demonstrated, which you complain about, isn't a solid argument; it is more like a play with symbols that is repeated and taught for its ability to provide a shortcut procedure reminding us of the correct result for the Bohr radius. It isn't solid for two reasons:
if we are using the uncertainty relation, $x,p$ do not have single simultaneous values and so it makes no sense to define $p=h/a$;
the uncertainty relation and the virial principle themselves do not actually fix the size of the atom, the atom can be as large as we want - great size $a$ corresponds to a high excitation number $n$.
A more solid way to derive some sort of estimate of minimal atom size would work only with expected averages and using some formal equivalent of that minimum requirement.
For example, we can assume that the minimum size of the atom is achieved when the expected average of its energy is the minimal possible value, and then try to use the HUP and the virial principle to derive some estimate of atom size (defined as $\sqrt{\langle x^2\rangle}$). A similar procedure is known in the general theory of electron shells of an atom, where it is able to provide an estimation of the ground state energy and the corresponding psi function.
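The minimization shortcut itself can at least be illustrated numerically. In atomic units ($\hbar = m_e = e = 1$), taking $p \sim \hbar/a$ gives $E(a) = 1/(2a^2) - 1/a$, whose minimum lands at the Bohr radius $a = 1$ with $E = -1/2$ hartree (an illustration of mine, not part of the original answer):

```python
import numpy as np

# E(a) = p^2/(2m) - e^2/a with the estimate p ~ hbar/a, in atomic units.
a = np.linspace(0.2, 5.0, 100001)
E = 1.0 / (2.0 * a**2) - 1.0 / a
i = np.argmin(E)
print(a[i], E[i])    # minimum near a ~ 1.0 (Bohr radius), E ~ -0.5 hartree
```

Note this reproduces the textbook numbers without resolving the conceptual objections above: the symbol play happens to minimize the right functional.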
"domain": "physics.stackexchange",
"id": 55477,
"tags": "quantum-mechanics, atomic-physics, heisenberg-uncertainty-principle"
} |
Creating Second Workspace | Question:
I have a workspace called "catkin_ws" in my home directory. I built the workspace with the command catkin_make and I added two extra lines to .bashrc file:
source /opt/ros/indigo/setup.bash
source ~/catkin_ws/devel/setup.bash
Then, I created second workspace called "catkin_ws2" in my home directory and built it by using the command catkin_make. Finally, I decided to delete second workspace by using the command catkin clean. However, when I try to delete it with that command, the message current or desired workspace could not be determined is displayed.
Why cannot I delete it ? (Note: I execute catkin clean in the directory ~/catkin_ws2)
My opinion: Since I did not source the setup.bash file of the second workspace, the catkin build tool does not know such a workspace exists. Hence, the command does not work.
Originally posted by gktg1514 on ROS Answers with karma: 67 on 2019-07-05
Post score: 0
Answer:
Here I think that should be helpful.
http://answers.ros.org/question/66012/removing-catkin-workspace/
Originally posted by chilicheese with karma: 41 on 2019-07-05
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 33347,
"tags": "catkin, ros-indigo"
} |
Reduce false positives having imbalanced data | Question: I'm using a DNN-48 having the following scenario:
Features: 8 (48 at the end because I generate conditional sequences of 6 elements each)
Classes: Y=0 (90%), Y=1 (10%)
Precision and recall are good when Y=0. If I adjust weights properly the recall is good on Y=1 too. The real problem is precision when Y=1.
Using SMOTE results were even worse. Best alternative at the moment was applying a custom stratification based on picking 8 of Y=0 for each Y=1 found.
I've tried moving the threshold to 0.8-0.9 and it slightly works, but it sacrifices some recall. I've also tried changing the NN topology, adding more layers and units and using regularization like L1/L2 and dropouts. For the time being I've seen the best results using a simple topology like:
model = Sequential([
Dense(64, activation='relu', input_shape=(X.shape[1],)),
Dense(32, activation='relu'),
Dropout(0.2),
Dense(16, activation='relu'),
Dropout(0.2),
Dense(8, activation='relu'),
Dense(4, activation='relu'),
Dense(1, activation='sigmoid')
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy', f1_score])
I'd like to keep the good recall without having too much false positives. Any ideas? thank you!
Answer: First of all, there will always be a trade-off between precision_score and recall_score. So you must choose a satisfying recall_score that you can live with.
There are few techniques that you might try:
Use F-beta Score: The F-beta score allows you to give more importance to either precision_score or recall_score depending on the beta value.
Basic rule for that:
If beta is less than 1, this gives more importance to precision (fewer false positives) over recall.
If beta is greater than 1, this gives more importance to recall (fewer false negatives) over precision.
When beta equals 1, it's equivalent to the F1 score - giving equal weight to both precision and recall.
You can choose whichever value of beta that you want based on your results.
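To make the rule concrete, here is a minimal NumPy implementation of the F-beta definition on toy predictions (a sketch of mine; in practice you would use sklearn.metrics.fbeta_score):

```python
import numpy as np

def fbeta(y_true, y_pred, beta):
    """F-beta: beta < 1 weights precision more, beta > 1 weights recall more."""
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Toy classifier that is precise but misses half the positives: P = 1.0, R = 0.5
y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])
y_pred = np.array([1, 1, 0, 0, 0, 0, 0, 0, 0, 0])

print(fbeta(y_true, y_pred, 0.5))   # ~0.833: rewards the high precision
print(fbeta(y_true, y_pred, 2.0))   # ~0.556: punishes the missed positives
```

Since your goal is fewer false positives at an acceptable recall, optimizing your threshold against F0.5 rather than F1 points the tuning in the direction you want.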
Ensemble Methods: Another approach is using ensemble methods designed for imbalance problems such as Balanced Random Forests and Easy Ensemble classifier which balance classes internally.
To create a Balanced Random Forest, you need to install imbalanced-learn.
from imblearn.ensemble import BalancedRandomForestClassifier
brf = BalancedRandomForestClassifier(n_estimators=100)
brf.fit(X_train, y_train)
predictions = brf.predict(X_test) | {
"domain": "datascience.stackexchange",
"id": 11954,
"tags": "machine-learning, neural-network, class-imbalance, binary-classification, smote"
} |
Making the save and cancel buttons editable | Question: The following code makes a div tag editable and handles the save and cancel buttons. But there is a lot of duplication. How could I remove the duplication on this?
var originalTexts = {};
$("div.edit-panel").find("span").click(function() {
var panel = $(this).closest('div.simple-panel');
var text = panel.find(".info");
var id = $(this).parent().attr("id");
panel.find(".edit-panel").addClass('editable');
text.addClass("content-editable").prop('contenteditable', 'true');
originalTexts[id] = text.text();
})
$("div.edit-panel").find(".cancel-editable").click(function() {
var panel = $(this).closest('div.simple-panel');
var text = panel.find(".info");
var id = $(this).parent().attr("id");
text.text(originalTexts[id]);
panel.find(".edit-panel").removeClass('editable');
text.removeClass("content-editable").prop('contenteditable', 'false');
})
$("div.edit-panel").find(".save-editable").click(function() {
var panel = $(this).closest('div.simple-panel');
var text = panel.find(".info");
var currentText = text.text();
text.text(currentText);
panel.find(".edit-panel").removeClass('editable');
text.removeClass("content-editable").prop('contenteditable', 'false');
})
Answer: How about using just one handler and an if block? This removes the duplicated top three lines from each function (assuming the buttons are <button> elements):
var originalTexts = {};
$("div.edit-panel").find("button,span").click(function(){
var panel = $(this).closest('div.simple-panel');
var text = panel.find(".info");
var id = $(this).parent().attr("id");
var currentText = text.text();
if ($(this).is("span")){
panel.find(".edit-panel").addClass('editable');
text.addClass("content-editable").prop('contenteditable', 'true');
originalTexts[id] = text.text();
}else if ($(this).hasClass("cancel-editable")){
text.text(originalTexts[id]);
panel.find(".edit-panel").removeClass('editable');
text.removeClass("content-editable").prop('contenteditable', 'false');
}else if ($(this).hasClass("save-editable")){
text.text(currentText);
panel.find(".edit-panel").removeClass('editable');
text.removeClass("content-editable").prop('contenteditable', 'false');
}
});
You can reduce the code further if you store the id and other information inside the data attributes of each button. But I don't know the HTML, so I could not suggest anything on that.
"domain": "codereview.stackexchange",
"id": 8485,
"tags": "javascript, jquery"
} |
how do i get rviz to recognize my plugin? | Question:
Taken from my question here:
http://stackoverflow.com/questions/43595155/rviz-does-not-recognize-my-plugin
I am missing something as I struggle to get the tutorial rviz plugin to show up within rviz. I have the source for the visualization_tutorials. Within that git repo, there is the rviz_plugin_tutorials. I can successfully build this within a ROS workspace, with the output showing up in rviz_workspace/devel/lib as librviz_plugin_tutorials.so.
I have read that rviz uses pluginlib to load plugins that have the appropriate plugin_description.xml and use the PLUGINLIB_EXPORT_CLASS macro appropriately.
I don't understand how this mechanism is supposed to work. After building the plugin, all you have are the library (.so file) and the package and plugin .xml files. How is running 'rosrun rviz rviz' supposed to allow rviz to find this new library and plugin description file? That's my fundamental misunderstanding. I don't see the tutorial plugin when I run rviz and running rospack doesn't show the tutorial plugin:
honeywell@UGV-Laptop-1:~/rviz_workspace$ rospack plugins --attrib=plugin rviz
rviz /opt/ros/kinetic/share/rviz/plugin_description.xml
honeywell@UGV-Laptop-1:~/rviz_workspace$
Thanks for any help
EDIT: rviz is running from /opt/ros/kinetic/bin/rviz. Am I supposed to copy my plugin_description.xml and librviz_plugin_tutorials.so somewhere other than where the workspace has them?
Originally posted by Rich von Lehe on ROS Answers with karma: 26 on 2017-04-24
Post score: 0
Answer:
Answered my own question at the stackoverflow link. It was a rookie mistake.
From SO:
Rookie mistakes being made here by me. I did two things to solve my problem, then realized only one was needed.
Installed rviz source and built it. After doing this and performing 'rosrun rviz rviz' the problem still remained. No new plugin.
Realized I had not sourced devel/setup.bash for this workspace. Doing this and then running rviz produced the desired results.
I went back and removed rviz from src and removed the devel folder and it all still worked, so it seems it's not necessary to work with rviz built from source.
Originally posted by Rich von Lehe with karma: 26 on 2017-04-24
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by gvdhoorn on 2017-04-25:
I've edited your question to include the text from your answer over at SO. This way future readers don't need to site-hop to understand what the problem (and the solution) was.
Comment by gvdhoorn on 2017-04-25:
Thanks for reporting what your eventual solution was btw, much appreciated. | {
"domain": "robotics.stackexchange",
"id": 27713,
"tags": "ros, rviz, plugin"
} |
The Stark effect on the ground state of hydrogen | Question: When considering the Stark effect, we consider the effect of an external uniform weak electric field which is directed along the positive $z$-axis, $\vec{\varepsilon} = \varepsilon \vec{k}$, on the ground state of a hydrogen atom. Then using nondegenerate perturbation theory it follows that we can approximate the energy of the ground state by $$E_{100} = E_{100}^{(0)} + e \varepsilon \langle 100| \hat{Z}| 100 \rangle + e^2 \varepsilon^2 \sum_{nlm \neq 100}\frac{|\langle nlm| \hat{Z}| 100 \rangle|^2}{E_{100}^{(0)}-E_{nlm}^{(0)}}.$$ We can show that the second term is zero, i.e. $\langle 100| \hat{Z}| 100 \rangle = 0$.
How does it follow from this that the following conclusion can be made "The underlying physics behind this is that when the hydrogen atom is in the ground state, it has no permanent electric dipole moment"?
Thanks for any assistance.
Answer: First of all, it should be $z$, not $\hat{z}$. You're taking the mean value of the $z$-coordinate in the ground state.
Now, the ground state has spherical symmetry. This means that whatever $\langle z \rangle$ is, it better be equal to $\langle x \rangle$ and $\langle y \rangle$, since in the ground state $|100\rangle$ there is nothing that makes the $z$ direction special.
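This can be made explicit with a short parity argument (a sketch; $\psi_{100}$ denotes the ground-state wavefunction):

$$\langle 100 | z | 100 \rangle = \int {\rm d}^3r \, |\psi_{100}(\mathbf{r})|^2 \, z = 0,$$

since $|\psi_{100}(\mathbf{r})|^2$ is even under the inversion $\mathbf{r} \to -\mathbf{r}$ while $z$ is odd, so the integrand is odd and the integral over all space vanishes. The same holds for $x$ and $y$.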
The dipole moment operator for the electron is $q \mathbf{r}$, with $q$ its charge and $\mathbf{r}$ the position operator. So the mean value of the dipole moment in the ground state is $q \langle 100 | \mathbf{r} | 100 \rangle$. But $\mathbf{r} = x \mathbf{\hat{x}} + y \mathbf{\hat{y}} + z \mathbf{\hat{z}}$, and since we know that the mean value of the coordinates is zero, we get that the mean value of the dipole moment is zero too. | {
"domain": "physics.stackexchange",
"id": 39274,
"tags": "quantum-mechanics, electromagnetism, electromagnetic-radiation, perturbation-theory, hydrogen"
} |
Angular momentum conservation in semiconductors | Question: In elementary quantum mechanics, we know that when the system possesses continuous rotation symmetry, the angular momentum is conserved, and the Hamiltonian of the system commutes with the angular momentum operators ($[H, J^2] = 0$ and $[H, J_z] = 0$), and vice versa (I believe). In this situation we can use their eigenvalues ($j$ and $m_j$) to refer to the states since they are preserved. However, in some papers I read (for example, this paper), these notations are used in semiconductors to refer to different bands (LH and HH), while, to me, continuous rotation symmetry obviously does not exist in crystals. Therefore I wonder: what am I missing here?
Answer: Thinking within the framework of tight-binding approximation, one can tie different bands to the states of isolated atoms, which are labeled by their angular momentum (i.e., s-states, p-states, d-states, etc.) This terminology thus penetrates into labeling semiconductor bands, even though the angular momentum is not conserved. If I am not mistaken, this language is used even by Kittel.
This analogy is sometimes taken even further, e.g., when discussing the total angular momentum of holes (i.e., their spin plus their "angular momentum"), and when discussing the selection rules for light absorption and exciton formation. As I have already said, this makes perfect sense within the tight-binding picture, but need not be taken too literally.
Unfortunately, I am not in a position to comment about the relation of this notation and the crystal symmetries (some of which are rotational symmetries). | {
"domain": "physics.stackexchange",
"id": 74074,
"tags": "quantum-mechanics, angular-momentum, solid-state-physics, conservation-laws, semiconductor-physics"
} |
How much are the benefits of installing a telescope in orbit? | Question: I know that our atmosphere acts like a protective blanket letting only some light through while blocking others. We send telescopes to orbit to get a clearer view of space objects.
I want to know how large this benefit is. How clear, detailed, and helpful-to-study are the pictures compared to pictures from ground-based telescopes?
Answer:
How much are the benefits of installing a telescope in orbit?
In the past the benefits have been astronomical!
Sorry, I couldn't help it.
...our atmosphere acts like a protective blanket letting only some light through while blocking others...
For visible-light astronomy, much of the light from above the atmosphere reaches the ground, but astronomical seeing limits the resolution. Bigger telescopes could collect more light, but they couldn't resolve much better than a 15 cm diameter aperture.
So the 2.4 meter diameter Hubble Space Telescope delivered by far the highest resolution visible light images ever. Putting that telescope in space is one of the single biggest game changers in visible light Astronomy ever, matched only by the invention of the telescope, okay and photographic plates, and digital imaging...
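To put rough numbers on this, the diffraction limit $\theta \approx 1.22\,\lambda/D$ can be evaluated for the two apertures mentioned above (an illustrative back-of-the-envelope calculation; the 550 nm wavelength is an assumed mid-visible value):

```python
import math

# Diffraction-limited angular resolution: theta ~ 1.22 * lambda / D (radians).
wavelength = 550e-9                      # metres; mid-visible, assumed value
rad_to_arcsec = 180 / math.pi * 3600     # ~206265 arcsec per radian

def diffraction_limit_arcsec(aperture_m):
    return 1.22 * wavelength / aperture_m * rad_to_arcsec

seeing_limited = diffraction_limit_arcsec(0.15)  # ~0.9 arcsec: the 15 cm seeing limit
hubble = diffraction_limit_arcsec(2.4)           # ~0.06 arcsec for Hubble's 2.4 m mirror

# A bigger ground telescope without adaptive optics gains light, not resolution,
# because the atmosphere caps it near the 15 cm figure; Hubble is ~16x sharper.
assert hubble < seeing_limited
```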
How clear, detailed and helpful-to-study are the pictures compared to pictures from ground based telescopes?
Then speckle interferometry and especially adaptive optics were developed that could mostly overcome the atmospheric effects. But adaptive optics is complicated and expensive and needs computers, calibration and often artificial guide stars to work. Until recently it was only working in the near infrared, though it is pushing into the visible spectrum now.
Why aren't ground-based observatories using adaptive optics for visible wavelengths?
Will the E-ELT use Adaptive Optics at visible wavelengths?
What (if any) capabilities of Hubble are unique and irreplaceable? What can it do that can't be done by any other ground or space-based telescope?
Do point spread functions from large single telescopes using adaptive optics still look like Airy functions for narrow-band filters?
Do point spread functions from large single telescopes using adaptive optics still look like Airy functions for narrow-band filters?
How did VLT's adaptive optics obtain this resolution for Neptune? Is it really working in visible wavelengths?
Number of actuators in adaptive optics
As @RobJeffries points out the Hubble also covers some ultraviolet wavelengths that are not accessible from Earth's surface due to atmospheric absorption.
There are many other electromagnetic bands for which we must go to space to receive signals. These include UV through X-rays and most gamma rays. The atmosphere is too absorbing to receive this radiation on the surface.
For the highest energy gamma rays however, astronomers use the Earth's atmosphere to convert the gamma rays to a shower of lower energy photons and particles then use a big array of detectors to record the shower and time the individual particles to reconstruct the direction and energy of the gamma ray.
For charged particle or cosmic ray astronomy (e.g. protons and heavier nuclei) you also must go above the atmosphere to directly measure the particles (low energy) but you can also use the Earth's atmosphere to produce a shower.
Measuring cosmic ray and gamma ray showers
For low energy protons and nuclear cosmic rays, there is the Alpha Magnetic Spectrometer aboard the International Space Station.
At what wavelengths and for what particle types have astronomical objects been imaged or at least directionally resolved from the ISS?
At the other end of the electromagnetic spectrum, starting below 20 or 30 MHz the Earth's ionosphere becomes reflective to low frequencies. This is why short wave radio listeners can hear signals from the other side of the Earth. For these lower frequencies radio astronomers must also get above Earth's atmosphere.
What can be learned from low frequency radio astronomy available outside of Earth's ionosphere?
Requirements to resolve position of Jovian Whistlers up to magnitude of Red Spot with amateur radio equipment?
Another advantage of going to space is distance!
VLBI or very long baseline interferometry is the use of radio telescopes separated by very large distances to make a synthetic aperture for very high resolution. Currently the most famous case is the Event Horizon Telescope which is "as big as the Earth" and firmly on the ground, but VLBI from space offers even larger baselines.
Has VLBI been done using any space-based receivers besides Spektr-R?
What's the status and timeline for Millimetron? (Russia's 10m Deployable Antenna cooled to 6 K Earth-Space VLBI)
Following KRT-10 aboard Salyut-6, have there been any other radio astronomy observations from a space station or crewed capsule?
What are Spectr-R's major contributions to radio astronomy that could not have been done from Earth?
Why is space-based VLBI scattering sub-structure "Hopefully, a new promising tool to reconstruct the true image of observed background target(s)"?
Did the Spectr-R space-based radio telescope use on-board accelerometer to measure non-gravitational acceleration for baseline correction?
Current topics on Radio Astronomy and looking for advice
Examples of radio correlations over times much longer than interferometric baselines? | {
"domain": "astronomy.stackexchange",
"id": 5918,
"tags": "space-telescope"
} |
Does an Operator that neither commutes with $\hat{X}$ or $\hat{P}$, nor can be expressed as a "function" of $\hat{X}$ and $\hat{P}$ make sense? | Question: When you come from classical hamiltonian mechanics (which is based on the phase space), observables are introduced as functions $f$ on the phase space $(q, p)$. There can't be a classical observable that isn't a function of $q$ and $p$ by definition.
In quantum mechanics, however, $\hat{X}$ and $\hat{P}$ are operators, acting on an infinite dimensional hilbert-space, so it seems at least imaginable to me that there are operators that can't be expressed as a "function" of $\hat{X}$ and $\hat{P}$.
I put "function" into quotation marks because I don't know how to rigorously define such a function on the space of operators, aside from a Taylor expansion using $\hat{X}$ and $\hat{P}$ as factors.
Of course I can just make the hilbert-space "bigger", for example by introducing a new spatial dimension $\hat{Y}$ with associated momentum $\hat{P}_y$, but these new observables would automatically commute with the former ones ($\hat{X}$ and $\hat{P}_x$) because they act on another subspace of the hilbert-space.
I have a hard time imagining an operator which doesn't commute with ($\hat{X}$ and $\hat{P}_x$), while at the same time still not depending on each of them. Can somebody provide either an example for such an operator, or give a proof why such an operator can't exist?
EDIT: To clear 2 misconceptions that did arise:
When talking about operators, I only ask about cases where the operators are linear
By "function" I mean any operation that you can perform on operators (multiplying them, adding them, exponentiating them, taking infinite sums or, for the sake of the argument, integrals as well). The question is about an operator that can't be expressed as a "function" of $\hat{X}$ and $\hat{P}$ alone (that means I CAN'T FIND a function in the above sense to relate these operators), but still does not commute with at least one of them.
EDIT 2: Since there is more than one answer now with (at first sight) contradicting content, it seems to me that the answers to the question do depend on further assumptions:
Do we consider $X$ (or $P$) to be a complete set of operators, or equivalently, can I express any state as a linear combination of $|x\rangle$, where the coefficients depend solely on $x$?
Do I require them to be self-adjoint, symmetric, and bijective?
Am I restricting to "local" Operators?
Answer: In classical mechanics, any "observable" has to be a function of $x,p$ because we define $x,p$ to be the degrees of freedom of the system. If we experimentally find an observable that does not depend on $x,p$, it means there were extra degrees of freedom that we forgot to include. If so, we just enlarge our phase space, i.e., we include these extra degrees of freedom among the $x,p$. So in the end, any observable can always be written as a function of the phase-space variables.
In quantum mechanics, the philosophy is exactly the same. Here, "functional dependence" is replaced by the notion of completeness: a set of operators $\{\mathcal O_i\}$ is said to be complete if $[A,\mathcal O_i]=0$ implies $A\propto 1$, the identity operator.
When specifying a classical system, one must declare what the coordinates are. When specifying a quantum system, one must declare what a complete set of operators is. A typical example is $X,P$, which is often assumed to be complete. Some problems require extra degrees of freedom, such as the intrinsic spin $S$, in which case a complete set of operators would be $X,P,S$. One may imagine systems that require more degrees of freedom, but also ones that require less (say, finite-dimensional systems).
Being "complete" is the formalisation of the notion of expressing an operator as a function of other operators. In particular, if we declare that $\{\mathcal O_i\}$ is complete, then any other operator $T$ can be expressed as a function of the $\mathcal O_i$, by definition.
Thus, if we declare that $X,P$ is complete, then any other operator must be expressible as a function thereof. In order to have an operator that cannot be expressed as such, one must assume that $X,P$ is not complete. So complete the set: $X,P,Q$, for some $Q$. Now we are free to define the $Q$'s to behave as we want. Say, the $Q$'s could be the position and momenta of extra dimensions, in which case there is a somewhat canonical (pun intended) notion of commutator, namely $[Q,X]=[Q,P]=0$. (But note that this is not forced upon us; it is perfectly consistent to assume that the extra dimensions do not commute with the old variables. For example, we could assume a curved configuration space, and so $[X_\mu,X_\nu]=\omega_{\mu\nu}$ for some form $\omega$).
But we could also take some $Q$'s that do not, by hypothesis, commute with $X,P$, in which case we would have $[Q,X]\neq 0\neq [Q,P]$. And we assumed that $X,P,Q$ is complete, but $X,P$ alone is not, which means that $Q\neq Q(X,P)$. So the answer to the question in the OP is: yes, you can definitely have an operator that does not commute with the canonical variables, yet is not expressible as a function thereof.
(At least, as a matter of principle it can exist: there is nothing inconsistent about the existence of such an operator. But experience tells us that the canonical variables are always enough to encode all the relevant degrees of freedom, and so they are always in practice complete. If we find a system where it is not, it means we did not realise there were some extra degrees of freedom, and we have to include those together with the canonical variables, i.e., they become canonical variables themselves.) | {
"domain": "physics.stackexchange",
"id": 73856,
"tags": "quantum-mechanics, hilbert-space, operators, commutator"
} |
Conjugate symmetry of real-coefficient filters in Oppenheim's Discrete Time Signal Processing | Question: Is someone able to tell me where and how I could look up the yellow part in Oppenheim's Discrete Time Signal Processing, or explain it?
Answer: The Fourier transform of a real function $x[n]$ is conjugate symmetric. That is,
$$X^*(\omega)=X(-\omega)$$
This is easy to observe. From the definition of FT: $$X(\omega)=\sum_{n=-\infty}^{+\infty}x[n]e^{-j\omega n}$$
Conjugating both sides:
$$\begin{align}
X^*(\omega)=&\sum_{n=-\infty}^{+\infty}x^*[n]\left(e^{-j\omega n}\right)^*\\
&=\sum_{n=-\infty}^{+\infty}x[n]e^{j\omega n}\\
&=X(-\omega)
\end{align}$$
A direct consequence of this property (that you can easily verify) is that
1- The real part and the magnitude of the Fourier transform of a real valued function are even functions of $\omega$. That is, $\Re\{X(\omega)\}=\Re\{X(-\omega)\}$ and $|X(\omega)|=|X(-\omega)|$.
2- The imaginary part and the phase of the Fourier transform of a real valued function are odd functions of $\omega$. That is, $\Im\{X(\omega)\}=-\Im\{X(-\omega)\}$ and $\angle X(\omega)=-\angle X(-\omega)$.
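These two properties are easy to verify numerically: for a finite real sequence, the DFT samples the DTFT at $\omega_k = 2\pi k/N$, so $-\omega_k$ corresponds to index $(-k) \bmod N$ (a small NumPy sketch, not from the book; the signal is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(16)      # real-valued sequence
X = np.fft.fft(x)
N = len(x)
k = np.arange(N)

# X*(w) = X(-w): conjugating X[k] gives the value at the mirrored index (-k) mod N.
assert np.allclose(np.conj(X), X[(-k) % N])

# Consequences: even magnitude, odd imaginary part.
assert np.allclose(np.abs(X), np.abs(X[(-k) % N]))
assert np.allclose(X.imag, -X[(-k) % N].imag)
```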
Based on the above explanation, for a real-coefficient filter $h$, the magnitude of the frequency response is an even function.
$$\begin{align}
|H(e^{j(\omega-\pi)})|&=|H(e^{j(-\omega-\pi)})|\\
&=|H(e^{j(-\omega-\pi+2\pi)})|=|H(e^{j(\pi-\omega)})|
\end{align}$$
and the second line is because the DTFT is $2\pi$-periodic; hence we can add $2\pi$ and the result does not change. | {
"domain": "dsp.stackexchange",
"id": 5121,
"tags": "filters"
} |
Why does this thermodynamics problem seem to have two answers? | Question: The problem goes as:
For an ideal gas, molar heat capacity varies as: $C = C_v + aV$, where $a$ is a constant.
Now we are asked to find a relation between Temperature and Volume.
The way I approached the problem is by comparing the general expression for the heat capacity of an ideal gas to this one.
The general expression is: $C = C_v +\frac{R}{1-n} $ where $n$ is the coefficient in the general polytropic process.
By comparing the two equations, we can say that:
$\frac{R}{1-n} = aV$
Also, we know that $TV^{n-1} = k$, where $k$ is a constant.
So we can just replace $n-1$ in the second equation get a relation as: $TV^{-\frac{R}{aV}} = k$. But that is not the answer, and that is what I cannot understand. What went wrong here?
The method to get the 'correct' answer is:
$TV^{n-1}=k$$\Rightarrow$
$T(n-1)dV + VdT = 0$$\Rightarrow$
$(1-n)=\frac{VdT}{TdV} = \frac{R}{aV}$
$\Rightarrow \int \frac{dT}{T} = \int\frac{RdV}{aV^2}$
$\Rightarrow ln(T) = -\frac{R}{aV} + k'$
$\Rightarrow T=e^{-\frac{R}{aV}+k'}$
This is more of a mathematical question, But I couldn't boil down this thermodynamics problem into a math problem, and I guess there is a chance of physics being involved here and there. So why is the second answer correct? And what is wrong in the first approach?
Answer: If you go through this SE post,
$$C=C_V+P\frac{dV}{dT}$$
where we have taken the number of moles to be one. Comparing with the expression given
$$P\frac{dV}{dT}=aV\Rightarrow \frac{1}{aV^2}dV=\frac{1}{RT}dT$$
Integrating leads to
$$T=T_0e^{-R/aV}$$
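This result is easy to sanity-check numerically: with $T(V)=T_0e^{-R/aV}$ and $P=RT/V$, the quantity $P\,dV/dT$ should reduce to $aV$ (a quick sketch; the constants below are arbitrary placeholders, not from the problem):

```python
import math

R, a, T0 = 8.314, 2.0, 300.0     # arbitrary positive constants for the check
V = 1.5

def T(v):
    return T0 * math.exp(-R / (a * v))   # the claimed solution T(V)

h = 1e-6
dT_dV = (T(V + h) - T(V - h)) / (2 * h)  # central-difference derivative dT/dV
P = R * T(V) / V                         # ideal-gas law, one mole

extra_heat_capacity = P / dT_dV          # equals P * dV/dT
assert abs(extra_heat_capacity - a * V) < 1e-4   # recovers C - C_v = a*V
```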
As required. The approach you used is valid only for a polytropic process, not in general.
A polytropic process is one which follows:
$$pV^n=C$$
which is not general. A general process for an ideal gas is defined by
$$p=f(V)$$
For example, the process might be like
$$p=V^3+V^2 e^{-\lambda V}$$
which is not polytropic! | {
"domain": "physics.stackexchange",
"id": 77108,
"tags": "homework-and-exercises, thermodynamics, ideal-gas"
} |
Compilation speed dependent on CPU? | Question: Is the speed of compiling software very dependent on the CPU? I know this question is a bit broad, but I'm to find an answer to the case described below.
If I download a software project that makes use of Node.js, TypeScript, and JavaScript and compile it by running yarn, it takes a certain duration on machine A and another duration on machine B. In my test case machine A is equipped with an Intel i5-4570 and machine B is equipped with an AMD Ryzen 5 1600x. Both have 16 GB RAM and both also have similar SSD drives in use.
Now, if I run the software compilation on both machines, machine A takes about 1:30min and machine B takes about 1:20min. Hence, machine B is not significantly faster, even though the CPUs are quite different in capability and get different scores according to CpuBenchmark:
Intel i5-4570: 7109 CPU Mark
AMD Ryzen 5 1600x 13232 CPU Mark
The single core rating of both CPUs is quite similar at around 2000 Points, but when I run compilation all cores move up to 100% according to system power statistics.
I also checked the network usage, but it doesn't move over 10 kB/s and is therefore negligible. The RAM usage fluctuates within about 1 GB on both systems, between 9 GB and 10 GB.
Now, as the expected results differ quite markedly from reality, I am obviously missing something. So experienced input from some of you would help very much.
Is there anything else I could measure? Is there anything I missed or maybe don't know and therefore should further test? I basically want to find out why the Ryzen is not significantly faster than the Intel mentioned above.
Answer: The things you are missing are:
RAM performance: Intel CPUs have lower memory latency, and compilation speed depends significantly on memory latency
Number of cores/threads: each Ryzen core is slightly slower than an Intel core, but it has 1.5x more cores, and each core can run 2 threads. This makes it 1.5x faster for ideal multi-threaded tasks (like benchmarks), but real programs may not scale ideally from 4 to 12 threads
SSD speed, in particular 4K IOPS (i.e. speed of reading many small files)
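The second point — that real builds rarely scale linearly with thread count — can be illustrated with Amdahl's law (a simplified model; the 80% parallel fraction below is an assumed figure, not a measurement of this build):

```python
def amdahl_speedup(parallel_fraction, threads):
    """Ideal speedup when only a fraction of the work parallelizes (Amdahl's law)."""
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / threads)

# Even if 80% of the build parallelizes, 3x the threads is nowhere near 3x faster.
s4 = amdahl_speedup(0.8, 4)      # ~2.5x
s12 = amdahl_speedup(0.8, 12)    # ~3.75x
assert abs(s12 / s4 - 1.5) < 1e-9   # only ~1.5x gain from tripling the thread count
```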
E.g., the first test on this page compares C++ compile times on various Intel/AMD CPUs:
8700K: Intel 6-core
7820X: Intel 8-core with slower memory controller
2700X: AMD 8-core
2600X: AMD 6-core
As you can see, the Intel 6-core outperformed the AMD 8-core and even Intel's own 8-core coupled with a slower memory controller.
And this page discusses memory/cache latencies at length. | {
"domain": "cs.stackexchange",
"id": 19125,
"tags": "compilers, cpu"
} |
CMakeLists for plugin compilation | Question:
Hello,
I don't really know how to use CMake. I would like to add some more .cc files to compile a model or world plugin for Gazebo. Until now, I took the CMakeLists.txt file given in the tutorial to compile (with the commands "cmake ../" and "make"). I added code to my plugin by including it as .hh files. But the code is getting pretty big and it takes 30 seconds to compile every time I make a modification. So does anyone know how I can modify the CMakeLists.txt file to add some other .cc files? Here is my current one:
cmake_minimum_required(VERSION 2.8 FATAL_ERROR)
find_package(Boost REQUIRED COMPONENTS system)
include_directories(${Boost_INCLUDE_DIRS})
link_directories(${Boost_LIBRARY_DIRS})
include (FindPkgConfig)
if (PKG_CONFIG_FOUND)
pkg_check_modules(GAZEBO gazebo)
endif()
include_directories(${GAZEBO_INCLUDE_DIRS})
link_directories(${GAZEBO_LIBRARY_DIRS})
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wall -std=gnu++0x")
add_library(arm_control SHARED arm_control.cc)
target_link_libraries(arm_control ${GAZEBO_LIBRARIES} ${Boost_LIBRARIES})
Cheers
Originally posted by debz on Gazebo Answers with karma: 198 on 2015-12-09
Post score: 1
Answer:
Change this line:
add_library(arm_control SHARED arm_control.cc)
to
add_library(arm_control SHARED arm_control.cc MY_NEW_SOURCE_FILE.cc MY_THIRD_SOURCE_FILE.cc)
Originally posted by nkoenig with karma: 7676 on 2015-12-09
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by debz on 2015-12-11:
Sweet, thx. | {
"domain": "robotics.stackexchange",
"id": 3840,
"tags": "gazebo"
} |
Linked list node implementation with maximum code reuse | Question: As an exercise in practicing SOLID principles and basic data structures, I am trying to implement linked-list-type structures with as much code reuse as possible. Currently, I have:
package structures.linked;
public class SingleNode<T> {
private T data;
private SingleNode<T> next;
public SingleNode(T data, SingleNode<T> next) {
this.data = data;
this.next = next;
}
public T getData() {
return data;
}
public void setData(T data) {
this.data = data;
}
public SingleNode<T> getNext() {
return next;
}
public void setNext(SingleNode<T> next) {
this.next = next;
}
}
and...
package structures.linked;
public class DoubleNode<T> extends SingleNode<T> {
private DoubleNode<T> prev;
public DoubleNode(T data, DoubleNode<T> next, DoubleNode<T> prev) {
super(data, next);
this.prev = prev;
}
public DoubleNode<T> getPrev() {
return prev;
}
public void setPrev(DoubleNode<T> prev) {
this.prev = prev;
}
public DoubleNode<T> getNext() {
return (DoubleNode<T>) super.getNext();
}
public void setNext(DoubleNode<T> next) {
super.setNext(next);
}
}
It seems to me that getNext() inside of DoubleNode<T> is a violation of Liskov's substitution principle. Is this the case? Is there a better way to implement this while still reusing the code in SingleNode<T> and without breaking SOLID principles?
Answer: Don't use inheritance if it's just to write less code. Use inheritance if it makes sense to add behaviour to a valid base class.
As you have noticed yourself, you're violating Liskov's substitution principle. The easiest way for me to think about this is by imagining a method taking the base class as input:
public void doSomething(BaseClass mything)
If we then pass an instance of a subclass to that method:
BaseClass myActualThing = new SomeSubClass();
doSomething(myActualThing);
Would that method notice anything different than if we had passed an actual instance of BaseClass?
The answer has to be no. It should have exactly the same methods as the BaseClass (I mean, the name, parameter types and return type. The implementation can of course differ).
In your case I'm wondering how you intend to use these nodes in an actual program.
If we take a quick look at the standard library we find the LinkedList class. Creating a list, adding/removing elements, getting an iterator, etc. are all called on this class. The internal representation of the list, however, is node-based.
Notice that this is the entire implementation of that internal Node class:
private static class Node<E> {
E item;
Node<E> next;
Node<E> prev;
Node(Node<E> prev, E element, Node<E> next) {
this.item = element;
this.next = next;
this.prev = prev;
}
}
There's no difference between nodes with 2 links and nodes with 1 link. If it's the first node in a list, it just means prev == null, and if it's the last, next == null.
All the case handling happens inside the LinkedList class.
It could be a fun exercise to write your own implementation of a linked list where you do handle special cases with specialised classes (be sure to implement the List and/or Deque interfaces). In that case I would start with a Node interface like J H suggested. This interface should contain ALL the methods you expect from a node (for example: hasNext(), hasPrevious(), add(), ...).
Then think about which cases you want to handle with different classes. (Like FirstNode, LastNode, DoublyLinkedNode, EmptyNode).
If there are methods that are the same for all of these classes you can choose to change the interface into an abstract class that implements those common methods. That way you reduce the amount of code you write in a way that actually makes sense.
Afterwards it could also be interesting to compare your implementation with the
standard Java LinkedList implementation. Notice which of their complex methods became easier in your implementation, and which things are super simple in their implementation but where you are struggling. | {
"domain": "codereview.stackexchange",
"id": 26912,
"tags": "java, beginner, object-oriented, linked-list"
} |
QM Continuity Equation: Many-Body Version for Density Operator? | Question: I am trying to brush up my rusty intuition on second quantization and many-particle systems, and I came across the following problem:
In 1-particle QM we have the continuity equation
$$
\frac{\partial}{\partial t}\left(\psi\psi^*\right)=\frac{i\hbar}{2m}\left(\psi^*\triangle\psi-\psi\triangle\psi^*\right)
$$
Now, in many-particle physics (free particles!) I also expect the spatial density operator (or: number operator in the spatial basis) to somehow evolve or "diffuse" if I start with spatially non-homogeneous initial conditions. Therefore I started to wonder what the evolution equation for this operator actually looks like.
My naive expectation is a direct analogy to the wavefunction:
$$
\frac{\partial}{\partial t}\left(\hat{\psi}\hat{\psi}^{\dagger}\right)=\frac{i\hbar}{2m}\left(\hat{\psi}^{\dagger}\triangle\hat{\psi}-\hat{\psi}\triangle\hat{\psi}^{\dagger}\right)
$$
If I try to actually calculate it, I get:
$$
i\hbar\frac{\partial}{\partial t}\left(\hat{\psi}\hat{\psi}^{\dagger}\right)=\left[\hat{\psi}\hat{\psi}^{\dagger},\hat{H}\right]=\left[\hat{\psi}\hat{\psi}^{\dagger},\frac{\hbar^{2}}{2m}\triangle\hat{\psi}\hat{\psi}^{\dagger}\right]
$$
or
$$
\frac{\partial}{\partial t}\left(\hat{\psi}\hat{\psi}^{\dagger}\right)=\frac{-i\hbar}{2m}\left[\hat{\psi}\hat{\psi}^{\dagger},\triangle\hat{\psi}\hat{\psi}^{\dagger}\right]=\frac{-i\hbar}{2m}\left[\hat{\psi}\hat{\psi}^{\dagger},\triangle\right]\hat{\psi}\hat{\psi}^{\dagger}-\frac{i\hbar}{2m}\triangle\left[\hat{\psi}\hat{\psi}^{\dagger},\hat{\psi}\hat{\psi}^{\dagger}\right]
$$
and finally
$$
\frac{\partial}{\partial t}\left(\hat{\psi}\hat{\psi}^{\dagger}\right)=\frac{-i\hbar}{2m}\left[\hat{\psi}\hat{\psi}^{\dagger},\triangle\right]\hat{\psi}\hat{\psi}^{\dagger}.
$$
Now the question actually is: What is the commutator of the number operator and the Laplacian? (Why I cannot answer this myself: I have no intuition about how the Laplacian acts on a many-particle state.)
Answer: $\def\rr{{\bf r}}
\def\ii{{\rm i}}$
There is indeed a continuity equation for the particle density $\rho(\rr)=\Psi^\dagger(\rr)\Psi(\rr),$ where the field operator $\Psi^\dagger(\rr)$ creates a particle at position $\rr$. To derive it, you need only the canonical commutation relations for the field
\begin{align}
[\Psi(\rr),\Psi^\dagger(\rr')]& = \delta(\rr-\rr'),\\
[\Psi(\rr),\Psi(\rr')]&=0
\end{align}
together with the correct form of the Hamiltonian, which for free particles reads as
$$ H = -\frac{\hbar^2}{2m}\int{\rm d} \rr\; \Psi^\dagger(\rr)\nabla^2\Psi(\rr). $$
Note that here the derivative $\nabla$ is not an operator on the space of quantum states. It acts only on operator-valued (generalised) functions like $\Psi(\rr)$, which themselves act on the space of states. Therefore, the commutator between the field operator and the derivative makes little sense and has no relevance for the problem.
The derivation of the continuity equation proceeds as follows, using the Heisenberg equation of motion for $\rho(\rr)$,
\begin{align}
\partial_t \rho(\rr') & =\frac{\ii}{ \hbar} [H, \Psi^\dagger(\rr')\Psi(\rr') ]\\
& = \frac{\hbar}{2m\ii} \int{\rm d}\rr\;[\Psi^\dagger(\rr)\nabla^2\Psi(\rr),\Psi^\dagger(\rr')\Psi(\rr')] \\
& =\frac{\hbar}{2m\ii} \int{\rm d}\rr\;\left \lbrace \Psi^\dagger(\rr') [\Psi^\dagger(\rr)\nabla^2\Psi(\rr),\Psi(\rr')] + [\Psi^\dagger(\rr)\nabla^2\Psi(\rr),\Psi^\dagger(\rr')] \Psi(\rr')\right\rbrace\end{align}
Now, for simplicity, let's examine one of the terms above. The key is to treat each of $\Psi^\dagger(\rr)$ and $\nabla^2 \Psi(\rr)$ as separate operators, neither of which commute with $\Psi(\rr)$ in general. Nevertheless, you can use integration by parts* to shift the $\nabla^2$ around for convenience, leading to
\begin{align}
& \int{\rm d}\rr\; \Psi^\dagger(\rr') [\Psi^\dagger(\rr)\nabla^2\Psi(\rr),\Psi(\rr')] \\
& = \int{\rm d}\rr\; \left\lbrace\Psi^\dagger(\rr') \Psi^\dagger(\rr) [\nabla^2\Psi(\rr),\Psi(\rr')] + \Psi^\dagger(\rr') [\Psi^\dagger(\rr),\Psi(\rr')]\nabla^2\Psi(\rr) \right\rbrace \\
& = \int{\rm d}\rr\; \left\lbrace \Psi^\dagger(\rr')\nabla^2\Psi^\dagger(\rr) [\Psi(\rr),\Psi(\rr')] + \Psi^\dagger(\rr') [\Psi^\dagger(\rr),\Psi(\rr')]\nabla^2\Psi(\rr) \right\rbrace\\
& = \int{\rm d}\rr\; \left\lbrace 0 - \Psi^\dagger(\rr')\nabla^2\Psi(\rr) \delta(\rr'-\rr) \right\rbrace \\
& = -\Psi^\dagger(\rr')\nabla^2 \Psi(\rr').
\end{align}
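The second commutator term follows by the same steps (a sketch of the part left implicit above):

\begin{align}
\int{\rm d}\rr\; [\Psi^\dagger(\rr)\nabla^2\Psi(\rr),\Psi^\dagger(\rr')]\,\Psi(\rr')
& = \int{\rm d}\rr\; \Psi^\dagger(\rr)\,[\nabla^2\Psi(\rr),\Psi^\dagger(\rr')]\,\Psi(\rr') \\
& = \int{\rm d}\rr\; \Psi^\dagger(\rr)\,\nabla^2\delta(\rr-\rr')\,\Psi(\rr') \\
& = \nabla^2\Psi^\dagger(\rr')\,\Psi(\rr'),
\end{align}

where the first line uses $[\Psi^\dagger(\rr),\Psi^\dagger(\rr')]=0$ and the last line moves the derivatives onto $\Psi^\dagger$ by two integrations by parts.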
Putting it together, you should obtain
$$ \partial_t\rho = -\frac{\hbar}{2m\ii}\left(\Psi^\dagger\nabla^2 \Psi - \nabla^2\Psi^\dagger\Psi\right),$$
from which one identifies the particle current operator
$$ \mathbf{J} = \frac{\hbar}{2m\ii}\Psi^\dagger\nabla \Psi + {{\rm h.c.}},$$
defined such that $\partial_t\rho + \nabla\cdot{{\bf J}} = 0$.
$\,$
*One also assumes as usual that the fields vanish at infinity, or more strictly, that the Hilbert space only contains states which are annihilated by the field operators at infinity. | {
"domain": "physics.stackexchange",
"id": 35655,
"tags": "quantum-mechanics, second-quantization, time-evolution"
} |
Cards shuffling and dealing program | Question: The program handles a deck of cards and four players among whom the cards are to be distributed.
The Program do the following function
Creates a deck of cards.
Shuffle the deck.
Shows the deck.
Deal cards equally among four players.
Show the cards of each Player.
Please suggest some better ways of doing this program.
Also suggest new functions and modifications in this program.
package cardgame;
public class Card {
String suit;
String rank;
public Card(String cardSuit, String cardRank){
this.suit = cardSuit;
this.rank = cardRank;
}
}
package cardgame;
import java.util.*;
public class DeckOfCards {
final int size = 52;
Card[] deckOfCards = new Card[size];
public DeckOfCards(){
int count=0;
String[] suits = {"Diamonds","Clubs","Hearts","Spades"};
String[] ranks ={"King","Queen","Jack","Ten","Nine","Eight","Seven","Six","Five","Four","Three","Deuce","Ace",};
for (String s:suits){
for (String r:ranks){
Card card = new Card(s, r);
this.deckOfCards[count] = card;
count++;
}
}
}
public void ShuffleCards(){
Random rand = new Random();
int j;
for(int i=0; i<size; i++){
j = rand.nextInt(52);
Card temp = deckOfCards[i];
deckOfCards[i]=deckOfCards[j];
deckOfCards[j]= temp;
}
}
public void showCards(){
System.out.println("---------------------------------------------");
int count =0;
for (Card card : deckOfCards){
System.out.print(card.rank + " of " + card.suit + " ");
count++;
if(count%4==0)
System.out.println("");
}
System.out.println("---------------------------------------------");
}
public void dealCards(Players player1,Players player2,Players player3,Players player4){
int count = 0;
for (Card card : deckOfCards){
if (count>38){
player1.playCards[count%13] = card;
//System.out.println(player1.playCards[count/12].rank+" "+player1.playCards[count/12].suit);
}
else if (count>25){
player2.playCards[count%13] = card;
}
else if (count>12){
player3.playCards[count%13] = card;
}
else{
player4.playCards[count%13] = card;
}
count++;
}
}
}
package cardgame;
public class Players {
String name;
Card[] playCards = new Card[13];
public Players(String name){
this.name = name;
}
public void ShowPlayerCards(){
System.out.println("---------------------------------------------");
for (Card card : playCards){
if(card!=null)
System.out.println(card.rank + " of " + card.suit);
}
System.out.println("---------------------------------------------");
}
public String getName(){
return name;
}
}
package cardgame;
import java.util.*;
public class CardGame {
public static void main(String[] args) {
DeckOfCards deck = new DeckOfCards();
System.out.println("UnShuffeled Cards.");
deck.showCards();
deck.ShuffleCards();
System.out.println("Shuffeled Cards.");
deck.showCards();
Scanner input = new Scanner(System.in);
System.out.println("Player One...\nEnter Name:");
Players player1 = new Players(input.nextLine());
System.out.println("Player Two...\nEnter Name:");
Players player2 = new Players(input.nextLine());
System.out.println("Player Three...\nEnter Name:");
Players player3 = new Players(input.nextLine());
System.out.println("Player Four...\nEnter Name:");
Players player4 = new Players(input.nextLine());
deck.dealCards(player1, player2, player3, player4);
System.out.println("---------------------------------------------");
System.out.println(player1.getName());
player1.ShowPlayerCards();
System.out.println(player2.getName());
player2.ShowPlayerCards();
System.out.println(player3.getName());
player3.ShowPlayerCards();
System.out.println(player4.getName());
player4.ShowPlayerCards();
}
}
Answer: Ok... I am not sure how to show you all of the refactorings I did in a way that will make sense, so I'm just going to post the refactored classes and go from there.
Main:
public static void main(String[] args) {
Scanner input = new Scanner(System.in);
Players[] players = new Players[4];
Card[] deck = Dealer.getDeckOfCards();
System.out.println("Un-shuffled Cards.");
Dealer.showCards(deck);
Card[] shuffledCards = Dealer.shuffleCards(deck);
System.out.println("Shuffled Cards.");
Dealer.showCards(shuffledCards);
for(int i = 0; i < players.length; i++) {
System.out.println("Enter Player Name: ");
players[i] = new Players(input.nextLine());
}
Players[] playersWithCards = Dealer.dealCards(players, shuffledCards);
System.out.println("---------------------------------------------");
for(Players player : playersWithCards) {
System.out.println(player.getName());
player.showPlayerCards();
}
}
Players:
class Players {
private String name;
private Card[] cards = new Card[13];
Players(String name){
this.name = name;
}
void showPlayerCards(){
System.out.println("---------------------------------------------");
for (Card card : cards){
//you had been checking here if this was null, but there was no need for that check
System.out.printf("%s of %s\n", card.rank, card.suit);
}
System.out.println("---------------------------------------------");
}
void receiveCard(Card card, int position){
cards[position] = card;
}
String getName(){
return name;
}
}
Dealer (formerly DeckOfCards)
class Dealer {
private static final int SIZE = 52;
private static Card[] deckOfCards = new Card[SIZE];
static Card[] getDeckOfCards() {
int count = 0;
String[] suits = {"Diamonds", "Clubs", "Hearts", "Spades"};
String[] ranks = {"King", "Queen", "Jack", "Ten", "Nine", "Eight", "Seven", "Six", "Five", "Four", "Three", "Deuce", "Ace"};
for (String s : suits) {
for (String r : ranks) {
Card card = new Card(s, r);
deckOfCards[count] = card;
count++;
}
}
return deckOfCards;
}
static Card[] shuffleCards(Card[] deckOfCards) {
Random rand = new Random();
int j;
for (int i = 0; i < SIZE; i++) {
j = rand.nextInt(SIZE);
Card temp = deckOfCards[i];
deckOfCards[i] = deckOfCards[j];
deckOfCards[j] = temp;
}
return deckOfCards;
}
static void showCards(Card[] deckOfCards) {
System.out.println("---------------------------------------------");
int count = 0;
for (Card card : deckOfCards) {
System.out.printf("%s of %s\t", card.rank, card.suit); //use print f with \t (tab character)
count++;
if (count % 4 == 0)
System.out.println();
}
System.out.println("---------------------------------------------");
}
static Players[] dealCards(Players[] players, Card[] deck) {
int numOfCardsPerPlayer = deck.length / players.length;
for (int i = 0; i < deck.length; i++) {
int positionInHand = i % numOfCardsPerPlayer;
players[i % players.length].receiveCard(deck[i], positionInHand);
}
return players;
}
}
and Card:
class Card {
String suit;
String rank;
Card(String cardSuit, String cardRank){
this.suit = cardSuit;
this.rank = cardRank;
}
}
The first thing I did after refactoring your Main to use loops whenever possible was to ensure that you weren't unnecessarily making code public. All of your classes are in the same package, so you can make them package-private by removing the public modifiers. This is just generally considered good practice so that when you start working on projects with many classes, (some of which may have the same name) you are limiting conflicts.
Probably the single biggest difference between your code and the way I refactored it was that I changed DeckOfCards to a Dealer, and made it static. In programming, an abstraction of a DeckOfCards is really just an array of cards, like Card[] deck = Dealer.getDeckOfCards();. It seemed to me that most of the tasks you were calling from DeckOfCards were really the job of a Dealer, so I changed the code to reflect that, passing in the values created in the driver class as the program progresses. (For example in the line Card[] shuffledCards = Dealer.shuffleCards(deck);) If you look at this class, you'll see that all of its methods are static, which is really just a preference thing. If you wanted to make a constructor like Dealer dealer = new Dealer(); for a dealer and view it more as an entity than a doer, you could.
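The round-robin logic in dealCards is worth highlighting on its own; a language-agnostic sketch of the same idea (Python used here only for brevity, with integers standing in for Card objects):

```python
# Deal card i to player i % num_players; each player ends up with
# deck_size / num_players cards (52 / 4 = 13 here).
deck = list(range(52))                 # stand-in for the 52 Card objects
players = [[] for _ in range(4)]
for i, card in enumerate(deck):
    players[i % len(players)].append(card)
```

Appending fills hand positions in order i // num_players; the version above instead uses i % cards_per_player, which also visits all 13 slots exactly once because gcd(4, 13) = 1.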
I'm sure I probably missed some stuff so if you have any questions let me know. All in all I think you did a really good job for a new developer. | {
"domain": "codereview.stackexchange",
"id": 32789,
"tags": "java, beginner, playing-cards, shuffle"
} |
What does the exact $\mu$-recursive program for minimization look like? | Question: The minimization of a given primitive recursive function $f$ is computed by the following expression:
$
\newcommand{\pr}[2]{\text{pr}^{#1}_{#2}}
\newcommand{\gpr}{\text{Pr}}
\newcommand{\sig}{\text{sgn}}
\text{Mn}[f] = \gpr[g, h] \circ (\pr{1}{1}, \pr{1}{1})
$
$g = \sig \circ f \circ (\pr{1}{1}, c^1_0)$
$h = \text{Cond}[\text{t}_\leqslant \circ (\pr{3}{3}, \pr{3}{2}), \pr{3}{3}, \text{Cond}[ \text{add} \circ (\text{t}_= \circ (\pr{3}{3}, \text{ suc} \circ \pr{3}{2}), f \circ (\pr{3}{1}, \text{suc} \circ \pr{3}{2})), \text{suc} \circ \pr{3}{2}, \text{suc} \circ \text{suc} \circ \pr{3}{2} ]]$
The first call to the function ($\text{Mn}[f] \circ (\pr{1}{1}, \pr{1}{1}) (n, k)$) will return at most k+1 and terminate for all inputs, if $f$ is total recursive.
A $\mu$-recursive search would not necessarily terminate if there is no $z \leqslant k$ such that $f(n, z) = 0$. But I fail to see how the corresponding $\mu$-recursive expression would look in contrast to the one above, that is: what would the part of the expression that makes the function not terminate look like?
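Operationally, the bounded search computed by the expression above behaves like the following sketch (my own illustration, not part of the formal recursion-theoretic construction):

```python
def bounded_mu(f, n, k):
    """Least z <= k with f(n, z) == 0; returns k + 1 when no such z exists."""
    for z in range(k + 1):
        if f(n, z) == 0:
            return z
    return k + 1

# Terminates for every total f, returning at most k + 1:
print(bounded_mu(lambda n, z: n - z, 3, 5))  # → 3
print(bounded_mu(lambda n, z: 1, 3, 5))      # → 6 (no zero below the bound)
```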
Answer: I think you're confused about definitions. What you represented above is bounded minimization where we look for the least number $z$ below a given bound $k$ such that $f(n,z) = 0$. Such bounded minimization can be defined without any extra gadgets, using only primitive recursion.
The so-called $\mu$ operation performs unbounded search. It is not something we can define using primitive recursion. It is a new primitive operation which we add to primitive recursive functions to obtain recursive functions.
That is, if $f$ is a function of several arguments then $\mu(f)$ is a partial function which satisfies
$$
\mu(f)(\vec{n}) = k \iff f(\vec{n},k) = 0 \land \forall j < k,\; f(\vec{n},j) \neq 0.
$$
If there is no such $k$ then $\mu(f)(\vec{n})$ is undefined.
Every primitive recursive function is total. However, $\mu$ allows us to create non-total functions, for instance $\mu(f)(1)$, where $f(n,k) = 1$, is everywhere undefined. Therefore, $\mu$ is not something that can be defined using primitive recursion.
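An operational sketch of $\mu$ (an illustration, not a formal definition): the unbounded while-loop is precisely what makes the result partial.

```python
def mu(f):
    """Unbounded minimization: least k with f(*n, k) == 0, undefined otherwise."""
    def search(*n):
        k = 0
        while f(*n, k) != 0:   # loops forever if f(n, k) is never 0
            k += 1
        return k
    return search

least = mu(lambda n, k: max(0, 5 - (n + k)))   # least k with n + k >= 5
print(least(2))  # → 3
```

Calling `mu(lambda n, k: 1)(1)` would never return, mirroring the everywhere-undefined example above.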
There are various mechanisms that exceed the power of primitive recursion which allow us to define $\mu$. One such is a general recursion, and another is a fixed-point operator. | {
"domain": "cs.stackexchange",
"id": 7069,
"tags": "computability, primitive-recursion, mu-recursion"
} |
Angular momentum and Schrodinger equation | Question: I'm studying stationary states and their orbital angular momentum in the 3D Schrödinger equation. I have tried to understand the situation by myself but I get lost. I think it might be useful to know which cases are possible before starting the derivation, so can you tell me:
Which of the following three situations are possible?
A stationary state in a central field that has an undetermined modulus of the orbital angular momentum.
A stationary state in a non-central field that has an undetermined modulus of the orbital angular momentum.
A stationary state in a non-central field that has a determined modulus of the orbital angular momentum.
Answer: By definition, a central force is spherically symmetric. From Noether's theorem, this implies that angular momentum is conserved. Equivalently, the angular momentum operator commutes with the Hamiltonian. This implies that $L^2$ and $H$ have simultaneous eigenvectors, so there are stationary states (i.e. eigenvectors of $H$) which have definite angular momentum (i.e. eigenvectors of $L^2$). Let's see what this argument implies about each of the three cases:
$L^2$ and $H$ must share some eigenstates. However, there's no guarantee that they share all eigenstates. When $H$ has repeated eigenvalues, there might exist eigenstates of $H$ which are not also eigenstates of $L^2$, so this is possible.
This is also possible, for more or less the same reason as 1.
Two operators can share one eigenstate without sharing a complete basis of eigenstates, so this is possible. | {
"domain": "physics.stackexchange",
"id": 69605,
"tags": "quantum-mechanics, angular-momentum, schroedinger-equation, quantum-states"
} |
Factory for constructing directed graphs in C++ | Question: I want to implement a directed graph which has 3 main kind of edges:
self referencing edges
non directional edges between two nodes
directional edges between two nodes
Also I want to be able to do global operations on the graph, e.g. counting or removing all edges fulfilling certain criteria.
The easiest program structure I came up with is:
#include <unordered_set>
using namespace std;
struct Edge;
struct Node
{
Node(std::string name):name(name){}
string name;
std::unordered_set<Edge*> edges;
};
struct Edge
{
Edge(Node * node1): node1(node1){}
double weight;
Node * node1;
};
struct EdgeBetweenTwoNodes: public Edge
{
EdgeBetweenTwoNodes(Node * node1, Node * node2 ): Edge(node1), node2(node2){}
Node * node2;
};
struct DirectionalEdge: public EdgeBetweenTwoNodes
{
DirectionalEdge(Node * node1, Node * node2, bool direction ): EdgeBetweenTwoNodes(node1,node2), direction(direction){}
bool direction;
};
struct EdgesContainer
{
std::unordered_set<Edge*> all_edges;
void register_new_edge(Edge * edge)
{
all_edges.insert(edge);
}
//do all kind of manipulations on all edges of a cetain type...
};
struct EdgeFactory
{
EdgeFactory(){}
void create_edge(Node* node1,EdgesContainer & edge_container)
{
Edge * edge = new Edge(node1);
node1->edges.insert(edge);
edge_container.register_new_edge(edge);
}
void create_edge(Node* node1, Node* node2, EdgesContainer & edge_container)
{
EdgeBetweenTwoNodes * edge = new EdgeBetweenTwoNodes(node1,node2);
node1->edges.insert(edge);
node2->edges.insert(edge);
edge_container.register_new_edge(edge);
}
void create_edge(Node* node1, Node* node2, bool direction, EdgesContainer & edge_container)
{
DirectionalEdge * edge = new DirectionalEdge(node1,node2,direction);
node1->edges.insert(edge);
node2->edges.insert(edge);
edge_container.register_new_edge(edge);
}
};
int main()
{
Node * A = new Node("A");
Node * B= new Node("B");
Node * C= new Node("C");
EdgesContainer edges;
EdgeFactory edge_factory;
edge_factory.create_edge(A,edges);
edge_factory.create_edge(A,B,edges);
edge_factory.create_edge(A,C,1,edges);
return 0;
}
What do you think about it? Is this the correct use of a so-called "factory"?
Answer: At the moment
your edges have an identity.
There are three types.
Abolishing both leads to a simpler implementation:
struct Edge {
Node* from;
Node* to;
double weight;
// optional: enum EdgeType type;
};
std::unordered_set<Edge> edges;
Circular edges have both pointing to the same node.
Bi-directional edges have a reversed duplicate.
And normal directional edges are simple.
If you actually need to differentiate between those often enough, adding an enum to cache that info is simplicity itself.
Until that day, revel in the simplicity of having only a single type.
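A minimal sketch of the single-type idea (Python used only for brevity; the names are illustrative):

```python
from dataclasses import dataclass

@dataclass(frozen=True)        # frozen => hashable, so edges fit in a set
class Edge:
    src: str
    dst: str
    weight: float = 1.0

edges = set()
edges.add(Edge("A", "A"))      # self-referencing edge: src == dst
edges.add(Edge("A", "C"))      # directional edge
edges.add(Edge("A", "B"))      # non-directional edge = a pair of
edges.add(Edge("B", "A"))      #   reversed duplicates

self_loops = {e for e in edges if e.src == e.dst}
```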
If you save your nodes in a container, using indices might be a nice idea.
Implementation:
Never import wholesale any namespace not designed for it, like std. Doing so can lead to conflicts which might silently or hopefully noisily break your code if the implementation changes even slightly.
The Nodes you leak might be a concern outside this toy example. At the very least, consider saving them in some container by value (careful of invalidation rules), or managing them with smart pointers.
Of more concern are the Edges. If registering an Edge fails, it is leaked. Allocate them with std::make_unique instead, to ensure they are always properly owned. | {
"domain": "codereview.stackexchange",
"id": 32236,
"tags": "c++, graph"
} |
Isn't this statement regarding projectile motion wrong? | Question: Isn't this statement regarding projectile motion wrong?
If a body is thrown at an angle to the horizontal with initial velocity $u$, then the displacement of the body as a function of time is $\vec{s}=\vec{u}t+\frac12\vec{g}t^2$. (Air drag is neglected.)
How can it be correct? Gravity acts in the downward direction, so wouldn't the displacement be $\sqrt{x^2+y^2}$?
Answer: Note the vector signs! The vector signs mean that direction is included in the equation. So $\vec{g}$ only has a $y$ component, but $\vec{u}$ may have components in any direction. | {
"domain": "physics.stackexchange",
"id": 14507,
"tags": "kinematics, vectors, projectile"
} |
Add values from a dictionary to subarrays using numpy | Question: I'm working with numpy arrays and dictionaries. The keys of the dictionary are coordinates in the numpy array and the values of the dictionary are lists of values I need to add at those coordinates; I also have a 3D list of coordinates that I use for reference. I was able to do it, but I'm creating unnecessary copies of some things along the way. I believe there is an easier way, but I really don't know how to do it; this is my code:
import numpy as np
arr = np.array([[[ 0., 448., 94., 111., 118.],
[ 0., 0., 0., 0., 0.],
[ 0., 6., 0., 6., 9.],
[ 0., 99., 4., 0., 0.],
[ 0., 31., 9., 0., 0.]],
[[ 0., 496., 99., 41., 20.],
[ 0., 0., 0., 0., 0.],
[ 0., 41., 0., 1., 6.],
[ 0., 34., 2., 0., 0.],
[ 0., 91., 4., 0., 0.]],
[[ 0., 411., 53., 75., 32.],
[ 0., 0., 0., 0., 0.],
[ 0., 45., 0., 3., 0.],
[ 0., 10., 3., 0., 7.],
[ 0., 38., 0., 9., 0.]],
[[ 0., 433., 67., 57., 23.],
[ 0., 0., 0., 0., 0.],
[ 0., 56., 0., 4., 0.],
[ 0., 7., 5., 0., 6.],
[ 0., 101., 0., 6., 0.]]])
#The first list in reference are the coordinates for the subarray [:,2:,2:] of the first two arrays in arr
#The second list in reference are the coordinates for the subarray [:,2:,2:] of the second two arrays in arr
reference = [[[2, 3], [2, 4], [3, 2], [4, 2]], [[2, 3], [3, 2], [3, 4], [4, 3]]]
#Dictionary whose keys matches the coordinates in the reference list
mydict = {(2, 3): [5, 1], (2, 4): [14, 16], (3, 2): [19, 1], (3, 4): [14, 30], (4, 2): [16, 9], (4, 3): [6, 2]}
#I extract the values of the dict if the key matches the reference and created a 3D list with the values
listvalues = [[mydict.get(tuple(v), v) for v in row] for row in reference]
#Output
listvalues = [[[5, 1], [14, 16], [19, 1], [16, 9]], [[5, 1], [19, 1], [14, 30], [6, 2]]]
#Then I create a numpy array with my aux list and transpose.
newvalues = np.array(listvalues).transpose(0, 2, 1)
newvalues = [[[ 5, 14, 19, 16],
[ 1, 16, 1, 9]],
[[ 5, 19, 14, 6],
[ 1, 1, 30, 2]]]
What I need is a copy of arr (arr has shape (4, 5, 5), so the copy, which I will call newarr, will have shape (8, 5, 5)). Then I need to use the array [5 14 19 16] in newvalues to add those numbers at the corresponding coordinates in the first two arrays of newarr, and the values [5 19 14 6] in the next two arrays of newarr; then (here the copy starts) add the values of [ 1 16 1 9] in the next two arrays and finally the values of [ 1 1 30 2] in the final two arrays. Here is the rest of the code.
newarr = np.tile(arr, (2, 1, 1)) #Here I repeat my original array
price = np.reshape(newvalues, (4, 4), order='F') #Here I reshape my 3D array of values to 2D and the order change
final = np.repeat(price, 2, axis =0) #And here I repeat the price so newarr and price have the same dimension in axis = 0
#And finally since they have the dimension in axis = 0 I add the values in the subarray.
index = newarr[:, 2:, 2:] #This is the slice of the subarray
index[index.astype('bool')] = index[index.astype('bool')] + np.array(final).ravel() #And this add values to the right places.
print(newarr)
Output
newarr=[[[ 0., 448., 94., 111., 118.],
[ 0., 0., 0., 0., 0.],
[ 0., 6., 0., 11., 23.],
[ 0., 99., 23., 0., 0.],
[ 0., 31., 25., 0., 0.]],
#In these two add the values of [5 14 19 16]
[[ 0., 496., 99., 41., 20.],
[ 0., 0., 0., 0., 0.],
[ 0., 41., 0., 6., 20.],
[ 0., 34., 21., 0., 0.],
[ 0., 91., 20., 0., 0.]],
[[ 0., 411., 53., 75., 32.],
[ 0., 0., 0., 0., 0.],
[ 0., 45., 0., 8., 0.],
[ 0., 10., 22., 0., 21.],
[ 0., 38., 0., 15., 0.]],
#In these two add the values of [5 19 14 6]
[[ 0., 433., 67., 57., 23.],
[ 0., 0., 0., 0., 0.],
[ 0., 56., 0., 9., 0.],
[ 0., 7., 24., 0., 20.],
[ 0., 101., 0., 12., 0.]],
#<-Here starts the copy of my original array
[[ 0., 448., 94., 111., 118.],
[ 0., 0., 0., 0., 0.],
[ 0., 6., 0., 7., 25.],
[ 0., 99., 5., 0., 0.],
[ 0., 31., 18., 0., 0.]],
#In these two add the values of [ 1 16 1 9]
[[ 0., 496., 99., 41., 20.],
[ 0., 0., 0., 0., 0.],
[ 0., 41., 0., 2., 22.],
[ 0., 34., 3., 0., 0.],
[ 0., 91., 13., 0., 0.]],
[[ 0., 411., 53., 75., 32.],
[ 0., 0., 0., 0., 0.],
[ 0., 45., 0., 4., 0.],
[ 0., 10., 4., 0., 37.],
[ 0., 38., 0., 11., 0.]],
#And finally in these two add the values of [ 1 1 30 2]
[[ 0., 433., 67., 57., 23.],
[ 0., 0., 0., 0., 0.],
[ 0., 56., 0., 5., 0.],
[ 0., 7., 6., 0., 36.],
[ 0., 101., 0., 8., 0.]],
It does what I need but, like I said, I think there are some unnecessary copies and the code is ugly. I believe there should be an easier way, exploiting the possibilities of the dictionary and the numpy array, but I just can't see it. Any help will be appreciated. This is just an example to show what's going on; the real arr can have more arrays and the value lists of the dictionary can be bigger.
Answer: This seems to do what you want, but is specific to the example and code you gave.
There are two pairs of subarrays in arr and two different sets of indices and data to add to the subarrays, so there are four combinations. These get figured out by the values of i, j, and k.
Because the data to be added is sparse, I'm going to use scipy.sparse.coo_matrix() to build arrays from reference and mydict.
The line data = ... converts the information in mydict and reference into a list of three tuples. data[0] is the values to be added, data[1] are the row coordinates, and data[2] are the col coordinates.
m = coo_matrix(...) builds the sparse matrix and converts it to an numpy.array.
x = arr[2*j:2*j+2] + m uses the numpy array broadcasting rules to add m to the subarrays of the arr slice. So x is a pair of subarrays with the values added to the selected coordinates.
All of the x arrays are gathered in a list newarr, and are vertically stacked at the end.
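In isolation, the coo_matrix scatter is equivalent to dense NumPy fancy-index assignment; a minimal sketch using the first coordinate set and values from the question:

```python
import numpy as np

rows, cols = [2, 2, 3, 4], [3, 4, 2, 2]   # coordinates from reference[0]
vals = [5, 14, 19, 16]                     # first dict value at each coordinate
m = np.zeros((5, 5))
m[rows, cols] = vals                       # same array as coo_matrix((vals, (rows, cols))).toarray()
```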
import numpy as np
from scipy.sparse import coo_matrix
newarr = []
for k in range(4):
i,j = divmod(k,2)
data = [*zip(*((mydict[tuple(coord)][i], *coord) for coord in reference[j]))]
m = coo_matrix((data[0],(data[1], data[2]))).toarray()
x = arr[2*j:2*j+2] + m
newarr.append(x)
newarr = np.vstack(newarr) | {
"domain": "codereview.stackexchange",
"id": 38970,
"tags": "python, numpy, hash-map"
} |
What are ketoximes? | Question: I am reading a book which is saying about the Syn-Anti nomenclature of oximes. It is showing this molecule to be Syn-methyl-ethyl ketoxime.
I can't understand what a ketoxime is or how the nomenclature is done. Can somebody please explain?
Answer: An oxime is a chemical compound belonging to the imines, with the general formula $\ce{(R^1)(R^2)C=N-OH}$.
How is the nomenclature of methyl-ethyl ketoxime done?
The answer is :
Look at your photo.
1. On the left side you can see "$\ce{CH3-CH2 -}$" (ethyl).
2. On the right side you can see "$\ce{CH3 -}$" (methyl).
3. On the middle you can see "$\ce{C=N-O-H}$" (ketoxime).
Now assemble (1, 2 & 3); then you will have the name "methyl-ethyl ketoxime".
For more you can see : https://en.wikipedia.org/wiki/Oxime | {
"domain": "chemistry.stackexchange",
"id": 3466,
"tags": "organic-chemistry, nomenclature"
} |
Hamiltonian from a differential equation | Question: In my differential equations course an example is given from the Lotka-Volterra system of equations:
$$ x'=x-xy$$
$$y'=-\gamma y+xy.\tag{1}$$
This is then transformed by the substitution: $q=\ln x, p=\ln y$.
$$ q'=1-e^p$$
$$p'=-\gamma +e^q.\tag{2}$$
Then without any explanation they say the Hamiltonian is then equal to:
$$H(p,q)=\gamma q -e^q+p-e^p\tag{3}$$
How is this Hamiltonian derived?
Answer: This is explained in part II of my Phys.SE answer here, which shows that a 2D system always has a Hamiltonian description locally.
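One can also verify directly that the Hamiltonian (3) reproduces the transformed equations (2): treating $(q,p)$ as canonical, Hamilton's equations give
$$ q'~=~\frac{\partial H}{\partial p}~=~1-e^p, \qquad p'~=~-\frac{\partial H}{\partial q}~=~-\gamma+e^q.$$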
It turns out that, before the non-canonical transformation $(x,y) \to (q,p)$, from the first pair of eoms (1) alone, the Hamiltonian and non-canonical Poisson bracket can be derived as $$H~=~\gamma \ln x -x +\ln y -y $$ and $$\{x,y\}_{PB} ~=~ xy,$$ respectively. Next, the canonical coordinates $(q,p)$ can be easily determined. | {
"domain": "physics.stackexchange",
"id": 30016,
"tags": "homework-and-exercises, hamiltonian-formalism, differential-equations, phase-space, poisson-brackets"
} |
Product of Doublet and Arbitrary Function | Question: We know that the product of the delta function and another function samples the latter function. That is,
$$
\delta(t-\tau)f(t)=\delta(t-\tau)f(\tau)
$$
Does the doublet function retain this same property? That is, does the following hold true:
$$
\delta'(t-\tau)f(t)=\delta'(t-\tau)f(\tau)
$$
My reasoning is that $\delta'(t-\tau)$ is only nonzero at $\tau$, and therefore the value of $f$ at any time other than $\tau$ does not matter, in the same manner as the delta function product.
One of this issues I have with this result though is that computing the Laplace transform of the doublet function does not seem to work out.
$$
\mathcal{L}\{\delta'(t)\} = \int_{-\infty}^{\infty} \delta'(t)e^{-st}dt
= \int_{-\infty}^{\infty} \delta'(t)e^{-s\times0}dt = \int_{-\infty}^{\infty} \delta'(t)dt = 0
$$
The final integral is zero since the doublet function is odd. However, we know that since the doublet is the derivative of the delta, its Laplace transform must be $\mathcal{L}\{\delta'(t)\}=s$.
I would appreciate if someone could offer some insight.
Answer: Your first equation is correct. For derivatives of the Dirac delta impulse you get slightly more involved expressions. For $\delta'(t)$ the following holds:
$$f(t)\delta'(t)=f(0)\delta'(t)-f'(0)\delta(t)\tag{1}$$
Of course we assume that $f(t)$ and $f'(t)$ are continuous at $t=0$.
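Equation $(1)$ can be checked by pairing both sides with a smooth test function $\varphi(t)$ and using the defining property of $\delta'(t)$:
$$\int_{-\infty}^{\infty}f(t)\delta'(t)\varphi(t)\,dt=-\frac{d}{dt}\big[f(t)\varphi(t)\big]\bigg|_{t=0}=-f'(0)\varphi(0)-f(0)\varphi'(0),$$
which is exactly what the right-hand side of $(1)$ yields when integrated against $\varphi(t)$.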
With $(1)$ you obtain the correct result for the Laplace transform of $\delta'(t)$:
$$\begin{align}\int_{-\infty}^{\infty}\delta'(t)e^{-st}dt&=\int_{-\infty}^{\infty}\big[\delta'(t)e^{-s\cdot 0}+se^{-s\cdot 0}\delta(t)\big]dt\\&=\int_{-\infty}^{\infty}\delta'(t)dt+s\int_{-\infty}^{\infty}\delta(t)dt\\&=s\end{align}$$
The general formula for the $n^{th}$ derivative of the Dirac delta impulse is [1]
$$f(t)\delta^{(n)}(t)=\sum_{k=0}^n(-1)^k\binom{n}{k}f^{(k)}(0)\delta^{(n-k)}(t)\tag{2}$$
[1] A. Papoulis, The Fourier Integral and Its Applications, p. 274. | {
"domain": "dsp.stackexchange",
"id": 10927,
"tags": "continuous-signals, laplace-transform, dirac-delta-impulse"
} |
Does it make sense to speak in a total derivative of a functional? Part I | Question: I would like to consider the problem of the total derivative of a given functional \begin{equation}
\mathcal{L}\bigg[\phi\big(x,y,z,t\big),\frac{\partial{\phi}}{\partial{x}}\big(x,y,z,t\big),\frac{\partial{\phi}}{\partial{y}}\big(x,y,z,t\big),\frac{\partial{\phi}}{\partial{z}}\big(x,y,z,t\big),\frac{\partial{\phi}}{\partial{t}}\big(x,y,z,t\big),x,y,z,t\bigg],\tag{I.1}\label{eq0}
\end{equation}
where all variables are independent of each other.
However, before stating my inquiry about the problem itself, I will make a brief preamble as motivation, or warm-up, as you wish. Throughout this exposition, all functions are taken to be continuous and differentiable to any order, that is, they are all of class $C^{\infty}$.
Let us consider the case in which $z$ is a function of two variables $x$ and $y$, say $z=f(x,y)$, while $x$ and $y$, in turn, are functions of two variables $u$ and $v$, so that $x=g(u,v)$ and $y=h(u,v)$. Then $z$ becomes a function of $u$ and $v$, namely, $z=f\big(g\big(u,v\big),h\big(u,v\big)\big)=f\big(u,v\big)$. Here, we consider $u$ and $v$ as independent variables.
As we know, the total differential of the function $z=f(x,y)$ with respect to $x$ and $y$ is given by
\begin{equation}
dz=\frac{\partial{f}}{\partial{x}}dx+\frac{\partial{f}}{\partial{y}}dy,\tag{I.2}\label{eq1}
\end{equation}
while the total differential of the functions $x$ and $y$ with respect to $u$ and $v$ are given by
\begin{align}
dx=\frac{\partial{g}}{\partial{u}}du+\frac{\partial{g}}{\partial{v}}dv,\tag{I.3}\label{eq2}\\
dy=\frac{\partial{h}}{\partial{u}}du+\frac{\partial{h}}{\partial{v}}dv.\tag{I.4}\label{eq3}
\end{align}
Now, let us replacing (\ref{eq2}) and (\ref{eq3}) in (\ref{eq1}), such that now we have
\begin{equation}
dz=\Bigg(\frac{\partial{f}}{\partial{x}}\frac{\partial{g}}{\partial{u}}+\frac{\partial{f}}{\partial{y}}\frac{\partial{h}}{\partial{u}}\Bigg)du+\Bigg(\frac{\partial{f}}{\partial{x}}\frac{\partial{g}}{\partial{v}}+\frac{\partial{f}}{\partial{y}}\frac{\partial{h}}{\partial{v}}\Bigg)dv.\tag{I.5}\label{eq4}
\end{equation}
Thus, knowing that the total differential of $z$ with respect to $u$ and $v$ is given by
\begin{equation}
dz=\frac{\partial{f}}{\partial{u}}du+\frac{\partial{f}}{\partial{v}}dv,\tag{I.6}\label{eq5}
\end{equation}
we can, by direct comparison, conclude that
\begin{align}
\frac{\partial{z}}{\partial{u}}=\frac{\partial{f}}{\partial{x}}\frac{\partial{g}}{\partial{u}}+\frac{\partial{f}}{\partial{y}}\frac{\partial{h}}{\partial{u}},\tag{I.7}\label{eq6}\\
\frac{\partial{z}}{\partial{v}}=\frac{\partial{f}}{\partial{x}}\frac{\partial{g}}{\partial{v}}+\frac{\partial{f}}{\partial{y}}\frac{\partial{h}}{\partial{v}}.\tag{I.8}\label{eq7}
\end{align}
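A quick numerical sanity check of (I.7), with concrete functions chosen only for illustration:

```python
def f(x, y): return x**2 * y          # z = f(x, y)
def g(u, v): return u + v             # x = g(u, v)
def h(u, v): return u * v             # y = h(u, v)

def z(u, v): return f(g(u, v), h(u, v))

u, v, eps = 1.0, 2.0, 1e-6
x, y = g(u, v), h(u, v)
# Right-hand side of (I.7): f_x * g_u + f_y * h_u
chain = (2 * x * y) * 1.0 + (x ** 2) * v
# Left-hand side via a central finite difference
numeric = (z(u + eps, v) - z(u - eps, v)) / (2 * eps)
# chain and numeric agree (both equal 30 at this point)
```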
And here arises my first question: Does it make sense to speak of the total derivative of $z$ with respect to each of the two variables $u$ and $v$?
If the answer is yes, and I think it is, then, from Eq. (\ref{eq5}), it is valid that
\begin{equation}
\frac{dz}{du}=\frac{\partial{f}}{\partial{u}}\frac{du}{du}+\frac{\partial{f}}{\partial{v}}\frac{dv}{du}=\frac{\partial{f}}{\partial{u}}\quad\text{and}\quad\frac{dz}{dv}=\frac{\partial{f}}{\partial{u}}\frac{du}{dv}+\frac{\partial{f}}{\partial{v}}\frac{dv}{dv}=\frac{\partial{f}}{\partial{v}}. \tag{I.9}\label{eq7a}
\end{equation}
If the answer is no, then the notation $dz/du$ and $dz/dv$ cannot be used and we can only speak of the validity of equations (\ref{eq6}) and (\ref{eq7}). Here, $$\dfrac{\partial{z}}{\partial{u}}\equiv\dfrac{\partial{f}}{\partial{u}} \quad\text{and}\quad\dfrac{\partial{z}}{\partial{v}}\equiv\dfrac{\partial{f}}{\partial{v}}.$$
The situation is similar when we consider coordinate transformations of the type:
\begin{align}
\begin{split}
x'=f\big(x,y,z,t),\\
y'=g\big(x,y,z,t),\\
z'=h\big(x,y,z,t),\\
t'=w\big(x,y,z,t),
\end{split}
\end{align}
where the primed coordinates are independent of each other; similarly, the unprimed coordinates are also independent of each other. The total differentials are then
\begin{align}
\begin{split}
dx'=&\frac{\partial{f}}{\partial{x}}dx+\frac{\partial{f}}{\partial{y}}dy+\frac{\partial{f}}{\partial{z}}dz+\frac{\partial{f}}{\partial{t}}dt,\\
dy'=&\frac{\partial{g}}{\partial{x}}dx+\frac{\partial{g}}{\partial{y}}dy+\frac{\partial{g}}{\partial{z}}dz+\frac{\partial{g}}{\partial{t}}dt,\\
dz'=&\frac{\partial{h}}{\partial{x}}dx+\frac{\partial{h}}{\partial{y}}dy+\frac{\partial{h}}{\partial{z}}dz+\frac{\partial{h}}{\partial{t}}dt,\\
dt'=&\frac{\partial{w}}{\partial{x}}dx+\frac{\partial{w}}{\partial{y}}dy+\frac{\partial{w}}{\partial{z}}dz+\frac{\partial{w}}{\partial{t}}dt,
\end{split}
\end{align}
and so we have found, for the case of $x'$ for example, that
\begin{equation}
\frac{dx'}{dx}=\frac{\partial{f}}{\partial{x}},\quad\frac{dx'}{dy}=\frac{\partial{f}}{\partial{y}}\quad\frac{dx'}{dz}=\frac{\partial{f}}{\partial{z}},\quad\text{and}\quad\frac{dx'}{dt}=\frac{\partial{f}}{\partial{t}}.
\end{equation}
And again, we ask ourselves: is it valid to use the $d/dx$, $d/dy$, $d/dz$ and $d/dt$ notation, since the function $x'$ has a dependence on the variables $x$, $y$, $z$ and $t$?
To finish this preamble, which is already very long and tiresome, let us consider that the variables $x$, $y$ and $z$ depend on $t$, i.e., we have $x\big(t\big)$, $y\big(t\big)$ and $z\big(t\big)$, so we can write:
\begin{align}
\begin{split}
\frac{dx'}{dt}=\frac{\partial{f}}{\partial{x}}\frac{dx}{dt}+\frac{\partial{f}}{\partial{y}}\frac{dy}{dt}+\frac{\partial{f}}{\partial{z}}\frac{dz}{dt}+\frac{\partial{f}}{\partial{t}},\\
\end{split}
\tag{I.10}\label{eq11}
\end{align}
where, for simplicity, we have considered only the total derivative of $x'$. Obviously, $y'$, $z'$ and $t'$ have analogous equations. If $x'$, $y'$, $z'$ and $t'$ do not depend explicitly on the variable $t$, then, of course, $$\frac{\partial{f}}{\partial{t}}=\frac{\partial{g}}{\partial{t}}=\frac{\partial{h}}{\partial{t}}=\frac{\partial{w}}{\partial{t}}=0.$$
We also point out that Eq. (\ref{eq11}) can be rewritten as
\begin{align}
\begin{split}
dx'=\Bigg(\frac{\partial{f}}{\partial{x}}\frac{dx}{dt}+\frac{\partial{f}}{\partial{y}}\frac{dy}{dt}+\frac{\partial{f}}{\partial{z}}\frac{dz}{dt}+\frac{\partial{f}}{\partial{t}}\Bigg)dt=\frac{df}{dt}dt.\\
\end{split}
\tag{I.11}\label{eq12}
\end{align}
After this exhaustive exposition, I want to return to the original problem of the functional (\ref{eq0}), whose total differential is given by
\begin{equation}
d\mathcal{L}=\frac{\partial{\mathcal{L}}}{\partial{\phi}}d\phi+\frac{\partial{\mathcal{L}}}{\partial{\big(\partial_i\phi\big)}}d\big(\partial_i\phi\big)+\frac{\partial{\mathcal{L}}}{\partial{x}}dx+\frac{\partial{\mathcal{L}}}{\partial{y}}dy+\frac{\partial{\mathcal{L}}}{\partial{z}}dz+\frac{\partial{\mathcal{L}}}{\partial{t}}dt.\tag{I.12}\label{eq15}
\end{equation}
Here, we can immediately think of the total derivative as (I will carry out the exposition only for the $x$ variable)
\begin{equation}
\frac{d\mathcal{L}}{dx}=\frac{\partial{\mathcal{L}}}{\partial{\phi}}\frac{\partial\phi}{\partial x}+\frac{\partial{\mathcal{L}}}{\partial{\big(\partial_i\phi\big)}}\frac{\partial\big(\partial_i\phi\big)}{\partial x}+\frac{\partial{\mathcal{L}}}{\partial{x}},\tag{I.13}\label{eq16}
\end{equation}
since $x$, $y$ and $z$ are independent of each other.
However, if we remember that
\begin{align}
\begin{split}
d\phi &=\frac{\partial\phi}{\partial x}dx+\frac{\partial\phi}{\partial y}dy+\frac{\partial\phi}{\partial z}dz+\frac{\partial\phi}{\partial t}dt,\\
d\big(\partial_i\phi\big) &=\frac{\partial\big(\partial_i\phi\big)}{\partial x}dx+\frac{\partial\big(\partial_i\phi\big)}{\partial y}dy+\frac{\partial\big(\partial_i\phi\big)}{\partial z}dz+\frac{\partial\big(\partial_i\phi\big)}{\partial t}dt,
\end{split}
\end{align}
we can, instead of immediately writing equation (\ref{eq16}), rewrite equation (\ref{eq15}) as
\begin{multline}
d\mathcal{L}=\Bigg(\frac{\partial\mathcal{L}}{\partial\phi}\frac{\partial\phi}{\partial{x}}
+ \frac{\partial\mathcal{L}}{\partial\big(\partial_i\phi\big)}\frac{\partial\big(\partial_i\phi\big)}{\partial{x}} + \frac{\partial\mathcal{L}}{\partial{x}}\Bigg)dx\\ + \Bigg(\frac{\partial\mathcal{L}}{\partial\phi}\frac{\partial\phi}{\partial{y}} + \frac{\partial\mathcal{L}}{\partial\big(\partial_i\phi\big)}\frac{\partial\big(\partial_i\phi\big)}{\partial{y}} + \frac{\partial\mathcal{L}}{\partial{y}}\Bigg)dy\\+ \Bigg(\frac{\partial\mathcal{L}}{\partial\phi}\frac{\partial\phi}{\partial{z}} + \frac{\partial\mathcal{L}}{\partial\big(\partial_i\phi\big)}\frac{\partial\big(\partial_i\phi\big)}{\partial{z}} + \frac{\partial\mathcal{L}}{\partial{z}}\Bigg)dz\\ + \Bigg(\frac{\partial\mathcal{L}}{\partial\phi}\frac{\partial\phi}{\partial{t}} + \frac{\partial\mathcal{L}}{\partial\big(\partial_i\phi\big)}\frac{\partial\big(\partial_i\phi\big)}{\partial{t}} + \frac{\partial\mathcal{L}}{\partial{t}}\Bigg)dt\tag{I.14}\label{eq18}
\end{multline}
Here's the dilemma! Since $\phi$ and $\partial_i\phi$ are functions of the variables $x$, $y$, $z$ and $t$, and, in addition, the functional $\mathcal{L}$ itself depends explicitly on these same variables, we could then think that the functional $\mathcal{L}$ is implicitly a function of the variables $x$, $y$, $z$ and $t$, and, therefore, $\mathcal{L}=\mathcal{L}\big(x,y,z,t\big)$. If so, then the ``implicit'' total differential of $\mathcal{L}$ would be given by
\begin{equation}
d\mathcal{L}=\frac{\partial{\mathcal{L}}}{\partial{x}}dx+\frac{\partial{\mathcal{L}}}{\partial{y}}dy+\frac{\partial{\mathcal{L}}}{\partial{z}}dz+\frac{\partial{\mathcal{L}}}{\partial{t}}dt
\end{equation}
But this is not right since it contradicts the Eq. (\ref{eq15})!
Based on this contradiction, I ask: what are the terms in parentheses in Eq. (\ref{eq18})? Is it possible to speak of a total derivative of the functional $\mathcal{L}$?
To conclude, I would like to justify this exposition, and its inquiries, by saying that the problem arises when I try to derive Noether's theorem. In a certain passage, similar terms arose, suggesting the use of a total derivative. However, I was unsure whether such a procedure would be correct or valid.
See Does it make sense to speak in a total derivative of a functional? Part II for additional motivation.
Answer:
Consider for simplicity a single real scalar field
$$\phi: \mathbb{R}^4\to \mathbb{R}\tag{A}$$
on a 4-dimensional spacetime $\mathbb{R}^4$. The Lagrangian density
$${\cal L}:~ \mathbb{R} \times \mathbb{R}^4 \times \mathbb{R}^4~~\to~~ \mathbb{R}\tag{B}$$ is a differentiable function. We can construct partial derivatives of the Lagrangian density ${\cal L}$ wrt. any of its 1+4+4=9 arguments. See also this & this related Phys.SE posts.
The integrand
$$\phi^{\ast}{\cal L}:\mathbb{R}^4\to \mathbb{R}\tag{C}$$
of the action functional
$$S[\phi]~:=~\int_{\mathbb{R}^4} \!d^4x~ (\phi^{\ast}{\cal L})(x)\tag{D}$$
is the pullback
$$x~~\mapsto~~ (\phi^{\ast}{\cal L})(x)~:=~{\cal L}(\phi(x),\partial\phi(x),x)\tag{E}$$
of the Lagrangian density ${\cal L}$ by the field $\phi$.
The derivative
$$ x~~\mapsto~~\frac{d(\phi^{\ast}{\cal L})(x)}{dx^{\mu}}\tag{F}$$
of the pullback (E) is by definition the total derivative [wrt. the spacetime coordinate $x^{\mu}$].
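To make the distinction concrete: applying the chain rule to the pullback (E), the total derivative (F) expands as (this is a spelled-out sketch consistent with the definitions above; summation over $\nu$ is implied)
$$\frac{d(\phi^{\ast}{\cal L})}{dx^{\mu}}~=~\frac{\partial {\cal L}}{\partial \phi}\,\partial_{\mu}\phi~+~\frac{\partial {\cal L}}{\partial (\partial_{\nu}\phi)}\,\partial_{\mu}\partial_{\nu}\phi~+~\frac{\partial {\cal L}}{\partial x^{\mu}},$$
where the last term is the explicit coordinate dependence of ${\cal L}$ and the first two carry the implicit dependence through the field. This combination is exactly what appears in each parenthesis of Eq. (I.14) in the question.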
Be aware that physics texts usually don't bother to spell out the difference between the Lagrangian density ${\cal L}$ and its pullback $\phi^{\ast}{\cal L}$, either in words or notation. It is implicitly understood. | {
"domain": "physics.stackexchange",
"id": 57788,
"tags": "lagrangian-formalism, field-theory, differentiation, education, mathematics"
} |
How are diazonium salts prepared without self coupling | Question: While preparing diazonium salts, what prevents the freshly formed diazonium salt from reacting with the still-unreacted amines left in the solution and performing diazo coupling?
If this happened, there wouldn't be an appreciable yield of the diazonium salt.
So what's the reason?
Answer: During diazonium ion preparation, sodium nitrite is first mixed with hydrochloric acid to produce nitrous acid. Under acidic conditions the nitrous acid can be protonated; it then loses water to produce the nitrosonium ion ($\ce{O=N+}$). This reaction is usually performed at or around $\pu{0 ^\circ C}$:
An aromatic amine can attack the electrophilic nitrosonium ion to form a nitrosoamine $(\ce{Ar-NH-N=O})$.
The nitrosoamine can undergo a proton transfer (tautomerism) to give the corresponding iminol $(\ce{Ar-N=N-OH})$.
Under acidic conditions, this $\ce{N=N-OH}$ group can be protonated and leave as a water molecule to give the diazonium salt, which is resonance stabilized $(\ce{(Ar-N#N)+Cl-})$.
The key point here is that all of these steps happen under acidic conditions. Therefore, the aromatic nucleus of the aniline derivative is not activated, because of protonation. Also, during the synthesis, the starting aniline derivative is used as the limiting reagent. In addition, many diazonium salts are unstable; hence, they are always prepared as needed under acidic conditions with good stirring, kept at or near $\pu{0 ^\circ C}$ to minimize side reactions (such as the reaction with water to produce the corresponding phenol) until used in the coupling reaction (usually immediately).
The diazonium salt reacts as an electrophile with an electron-rich coupling component (a phenol or an aniline derivative) through an electrophilic aromatic substitution mechanism under basic conditions (especially when the coupling reagent is a phenol derivative). To complete the synthesis of some azo dyes, heating is also required.
"domain": "chemistry.stackexchange",
"id": 14434,
"tags": "organic-chemistry, reaction-mechanism, amines, salt"
} |
Design a native video buffer in C++ to be shown in an Android application and shared with other modules in native space and Java | Question: I'm working on a design for a video buffer in C++ that will be used by many consumers, both in the native library space (C++) and on the Java side using JNI. My idea is the following:
Have a buffer manager that will receive frames directly from the hardware and allocate each frame only once on the heap, using a shared pointer, if it is aware of at least one consumer for this frame. When a consumer would like to receive video frames, it will call the bind method and specify the buffer size it wants, since not all consumers will process data at the same rate. I think it is better to have a dedicated buffer for each client/consumer. After binding, the buffer manager returns a DataBufferStream object that will be used by the consumer to read/fetch video frames. The DataBufferStream class is nothing more than a circular buffer of a certain type and size. Using JNI, I'm planning to pass a pointer to the buffer or copy the data to a byte array.
I would like to get some feedback from people here to see if there is anything I can improve on. Is there a better solution that you can recommend?
class BufferManager {
public:
std::shared_ptr<DataBufferStream<SHARED_ARRAY>> bind(const unsigned int bufferSize, const int requestorID);
void unbind(const int requestorID);
std::shared_ptr<BufferManager> getInstance();
private:
std::map<int,std::shared_ptr<DataBufferStream<SHARED_ARRAY>>> mDataBufferMap;
std::mutex mMutex;
};
std::shared_ptr<DataBufferStream<SHARED_ARRAY>> BufferManager::bind(const unsigned int bufferSize, const int requestorID) {
std::lock_guard<std::mutex> lockGuard(mMutex);
auto it = mDataBufferMap.find(requestorID);
if(it == mDataBufferMap.end()){
mDataBufferMap[requestorID] = std::make_shared<DataBufferStream<SHARED_ARRAY>>(bufferSize);
}
return mDataBufferMap[requestorID];
}
void BufferManager::unbind(const int requestorID) {
std::lock_guard<std::mutex> lock(mMutex);
mDataBufferMap.erase(requestorID);
}
#include "../include/boost/circular_buffer.hpp"
#include <mutex>
#include <array>
#include <condition_variable>
#include <boost/shared_array.hpp>
typedef boost::shared_array<char> SHARED_ARRAY;
template <class T>
class DataBufferStream {
private:
boost::circular_buffer<T> mCircularBuffer;
std::mutex mMutex;
std::condition_variable mConditionalVariable;
unsigned int mIndex;
public:
DataBufferStream(const unsigned int bufferSize);
DataBufferStream(DataBufferStream& other);
DataBufferStream() = delete;
virtual ~DataBufferStream();
void pushData(T data);
T fetchData();
T readData(unsigned int duration);
T operator[](const unsigned int index);
T operator*();
void operator++();
void operator--();
DataBufferStream<T> &operator=(DataBufferStream<T>& other);
void clear();
unsigned int size();
};
#include "DataBuffer.h"
template <class T>
DataBufferStream<T>::DataBufferStream(const unsigned int bufferSize):
mCircularBuffer(bufferSize),
mMutex(),
mConditionalVariable(),
mIndex(0)
{
}
template <class T>
T DataBufferStream<T>::fetchData() {
std::lock_guard<std::mutex> lock (mMutex);
if(mCircularBuffer.size()>0) {
auto ptr = mCircularBuffer.front();
mCircularBuffer.pop_front();
return ptr;
}
return nullptr;
}
template <class T>
void DataBufferStream<T>::pushData(T data) {
std::lock_guard<std::mutex> lock (mMutex);
mCircularBuffer.push_back(data);
mConditionalVariable.notify_all();
}
template<class T>
T DataBufferStream<T>::operator[](const unsigned int index) {
std::lock_guard<std::mutex> lock (mMutex);
return mCircularBuffer.size()>index ? mCircularBuffer[index] : nullptr;
}
template<class T>
DataBufferStream<T>::~DataBufferStream() {
}
template<class T>
T DataBufferStream<T>::operator*() {
std::lock_guard<std::mutex> lock(mMutex);
return mCircularBuffer.size() > mIndex ? mCircularBuffer[mIndex] : nullptr;
}
template<class T>
void DataBufferStream<T>::operator++() {
std::lock_guard<std::mutex> lock(mMutex);
mIndex = mCircularBuffer.size() < mIndex + 1 ? 0 : mIndex + 1;  // note: `mIndex++` on the right-hand side would leave mIndex unchanged
}
template<class T>
void DataBufferStream<T>::operator--() {
std::lock_guard<std::mutex> lock(mMutex);
mIndex = mIndex > 0 ? mIndex - 1 : 0;  // note: `mIndex--` on the right-hand side would leave mIndex unchanged
}
template<class T>
void DataBufferStream<T>::clear() {
std::lock_guard<std::mutex> lock(mMutex);
mCircularBuffer.clear();
mIndex = mCircularBuffer.size();
}
template<class T>
DataBufferStream<T>::DataBufferStream(DataBufferStream &other):
mMutex(),
mConditionalVariable(){
std::lock_guard<std::mutex> lock(other.mMutex);
this->mCircularBuffer = other.mCircularBuffer;
this->mIndex = other.mIndex;
}
template<class T>
DataBufferStream<T> &DataBufferStream<T>::operator=(DataBufferStream<T> &other) {
if(this!=&other){
std::unique_lock<std::mutex> myLock (mMutex,std::defer_lock);
std::unique_lock<std::mutex> otherLock(other.mMutex,std::defer_lock);
std::lock(myLock,otherLock);
mCircularBuffer = other.mCircularBuffer;
mIndex = other.mIndex;
}
return *this;
}
template<class T>
unsigned int DataBufferStream<T>::size() {
std::lock_guard<std::mutex> lock(mMutex);
return mCircularBuffer.size();
}
template<class T>
T DataBufferStream<T>::readData(unsigned int duration) {
auto data = fetchData();
std::unique_lock<std::mutex> lock(mMutex);
if((data == nullptr) &&
   (mConditionalVariable.wait_for(lock, std::chrono::milliseconds(duration)) == std::cv_status::no_timeout)) {
    if(mCircularBuffer.size() > 0) {
        data = mCircularBuffer.front();  // assign to the outer variable; a fresh `auto data` here would shadow it and be discarded
        mCircularBuffer.pop_front();
    }
}
}
return data;
}
template class DataBufferStream<SHARED_ARRAY>;
Answer: If your system is going to handle high-rate operations, as it looks like it will, I recommend preallocating the memory at application boot time, so that during the execution of your program you don't do any allocations. For example:
auto it = mDataBufferMap.find(requestorID);
if(it == mDataBufferMap.end()){
mDataBufferMap[requestorID] = std::make_shared<DataBufferStream<SHARED_ARRAY>>(bufferSize);
}
would become
auto it = mDataBufferMap.find(requestorID);
if(it == mDataBufferMap.end()){
mDataBufferMap[requestorID] = getDataBufferStreamFromPool();
}
And in unbind, release the block from the map but return it to the pool instead of freeing the memory. Hope it helps
"domain": "codereview.stackexchange",
"id": 36590,
"tags": "c++, android, stream, video, jni"
} |
Deducing orbital degeneracy in geometries apart from octahedral or tetrahedral | Question: How do you determine the crystal field splitting pattern when given the structure of a complex?
It's quite simple for structures like octahedral or tetrahedral, but I find that the reasoning behind the patterns for the trigonal bipyramidal structure, for example, is somewhat arbitrary.
Continuing the example of a trigonal bipyramidal structure, the $d_{xy}$ and the $d_{x^2-y^2}$ orbitals are supposed to be degenerate, but the $d_{x^2-y^2}$ fully overlaps with one of the ligands on the complex, while the $d_{xy}$ partly overlaps with two, and I don't think we have enough information to assume that the two "parts" add up to one.
Do I just have to accept that this is how it works (as in, is this just a "limitation" of the crystal field theory)? Or is there some aspect that I am missing/misunderstanding?
Answer: The easiest way to do it is to look at what other people before you have done.
If you look at the $D_\mathrm{3h}$ character table, you see that $(x^2 - y^2, xy)$ transform together as $E'$.
I don't think there is a way of looking at diagrams of the orbitals and concluding that they are necessarily degenerate. As you said, there isn't "enough information to assume that the two parts add up to one".
It isn't really a limitation of crystal field theory, either. Crystal field theory isn't about looking at how much orbitals overlap with each other and saying "oh, I think those look about the same, they're probably degenerate". If that were the case, then in an octahedral complex you would not be able to tell why $\mathrm{d}_{z^2}$ and $\mathrm{d}_{x^2-y^2}$ are degenerate, since the former only really overlaps with two ligands and the latter overlaps with four.
The degeneracy is predicted by symmetry, the application of which is integral to, and implicit in, crystal field theory. There is no CFT without symmetry considerations. | {
"domain": "chemistry.stackexchange",
"id": 5774,
"tags": "inorganic-chemistry, coordination-compounds, crystal-field-theory"
} |
Is it possible that the Rosetta orbiter moved the comet when it crashed? | Question: The Rosetta Comet Orbiter (RCO) crashed into the surface of a comet as the comet headed back out toward Jupiter's orbit, where it would be out of range for its antenna to communicate with Earth. So, the ESA made the difficult decision to just let go and crash the darn thing. (Talk about going out with a bang! Geez!) Anyway, I saw a mission called DART wreck into an asteroid on purpose in order to move it. It got me thinking: did Rosetta do the same thing? The DART main spacecraft was about the size of a refrigerator, with 8-meter-wide solar arrays. Rosetta was an aluminum box with two solar panels that extended out like wings. The box, which weighed about 6,600 lbs. (3,000 kilograms), measured about 9 by 6.8 by 6.5 feet (2.8 by 2.1 by 2 meters). It had a wingspan of 105 feet. Given Rosetta is MUCH heavier than DART, was it possible to move the comet with Rosetta? There is a slight factor that could mean all the difference: Rosetta's target was bigger than DART's.
I state my question one last time, is it possible that the Rosetta Orbiter might have moved the comet it crashed into?
Answer: Yes, it did. But not by much.
The comet has a mass of about $10^{13}$ kg. Rosetta had a mass (after its fuel had been used up) of about 1300 kg. The "impact" was at 0.9 m/s. This means that the spacecraft had a momentum of about 1200 kg m/s.
After the impact, and in the frame of the comet before the impact, the combined body would have the same momentum: 1200 kg m/s. But with a large mass the velocity would be small: $1200/10^{13}$ m/s. That is (having converted units) about 0.01 mm per day (or roughly an inch and a half per decade).
Now, the comet would have had a velocity relative to the Sun of about 7 km per second. A change of 0.01 mm per day is negligible.
"domain": "astronomy.stackexchange",
"id": 6713,
"tags": "asteroids, comets, impact"
} |
Why do phase transitions even exist? Why not smooth density change curves? | Question: Why do phase transitions even exist? Why not smooth density change curves? What properties of matter, quantum or otherwise, predict that matter will undergo phase changes at different pressures and temperatures?
Some materials have all phases, others are missing some.
If this were to be researched from very little existing knowledge, a great place to start would be by examining the differences between materials that are missing some phases, and comparing them with ones that aren't.
Answer: Phase transitions are many-body effects. You cannot generate a sharp transition with a finite number of degrees of freedom (or particles). However, as you add particles, the features of the system may become sharper. In the limit of infinitely many particles (the thermodynamic limit) you get a truly discontinuous transition.
In practice nothing is infinite. The typical number of atoms in normal matter is however 10^23, which is indistinguishable from infinity in the sense that phase transitions appear perfectly sharp.
The mechanism by which the transition occurs depends on the particular system and transition that you consider. For example, in the case of water freezing, you have a competition between disorder (temperature) and atom-atom interaction. The atoms want to stick together in an ordered pattern (a crystal), but the temperature wants them to have random positions. There is a critical temperature above which the disorder wins and below which the potential energy is stronger.
"domain": "physics.stackexchange",
"id": 19759,
"tags": "thermodynamics, statistical-mechanics, phase-transition"
} |