anchor | positive | source |
|---|---|---|
Detecting errors changing an odd number of bits using CRC | Question: I was studying CRC from lecture notes of Schwarzkopf but I am stuck at this statement:
If $G(x)$ is a factor of $E(x)$, then $G(1)$ would also have to be $1$.
What does this statement mean? It is the third point in the third-to-last block of the article whose link is given. I know a similar question has been asked, but that is a specific question; I want to understand this particular statement.
Answer: Recall that we are working modulo $2$. Thus $E(x)$ is a polynomial whose coefficients are $0,1$, and $E(1) \in \{0,1\}$.
By definition, $G(x)$ is a factor of $E(x)$ if there exists a polynomial $H(x)$ such that $E(x) = G(x) H(x)$.
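As a quick sanity check (example polynomials chosen arbitrarily), evaluating a GF(2) polynomial at $x=1$ just gives the parity of its number of nonzero coefficients, and this value is multiplicative across factors:

```python
# Small GF(2) sanity check: a polynomial's value at x = 1, mod 2, is the
# parity of its number of nonzero coefficients, and E = G*H gives
# E(1) = G(1)*H(1) mod 2.
def eval_at_1(coeffs):
    return sum(coeffs) % 2

def mul_gf2(g, h):          # polynomial product with coefficients mod 2
    out = [0] * (len(g) + len(h) - 1)
    for i, gi in enumerate(g):
        for j, hj in enumerate(h):
            out[i + j] ^= gi & hj
    return out

G = [1, 1, 1]               # x^2 + x + 1, so G(1) = 1 mod 2
H = [1, 1, 1]
E = mul_gf2(G, H)           # x^4 + x^2 + 1: an odd number of terms
print(eval_at_1(E), eval_at_1(G) * eval_at_1(H))   # both are 1
```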
In this case, we assume that $E(1) = 1$. Since $E(1) = G(1) H(1)$, this forces $G(1) = 1$. | {
"domain": "cs.stackexchange",
"id": 17779,
"tags": "crc"
} |
The robustness of the Frobenius and L2,1 norms to outliers | Question: I have a question about the properties of the Frobenius and L$_{2,1}$ norms. Why is the L$_{2,1}$ norm more robust to outliers than the Frobenius norm?
PS: For a matrix $A\in\mathbb{R}^{n\times d}$, it can be easily seen that
$$
\text{Frobenius norm:}\qquad\Vert A\Vert_F=
\left(\sum_{i=1}^{n}\sum_{j=1}^{d}\vert a_{i,j}\vert^2 \right)^{\frac{1}{2}}=\left(\sum_{i=1}^{n}\Vert A(i,:)\Vert_2^2\right)^{\frac{1}{2}},
$$
and
$$
L_{2,1}\,\,norm: \qquad\Vert A\Vert_{2,1}=\sum_{i=1}^{n}\left(\sum_{j=1}^{d}\vert a_{i,j}\vert^2 \right)^{\frac{1}{2}}=\sum_{i=1}^{n}\Vert A(i,:)\Vert_2,$$
where $A(i,:)$ is the $i$-th row of $A$.
I would be very grateful if someone could answer my question.
Answer: I have only a couple of hints:
The Frobenius norm, by definition, takes equal account of all the data in the matrix (all rows and columns), and when it is squared to form a loss, an outlier entry contributes quadratically.
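A small numerical experiment (made-up values) makes this concrete; note that the robustness claim is usually about the squared Frobenius norm used as a loss, which grows quadratically in an outlier, whereas the $L_{2,1}$ penalty grows only linearly:

```python
import numpy as np

def fro_sq(M):   # squared Frobenius norm -- the usual least-squares penalty
    return float((M ** 2).sum())

def l21(M):      # L_{2,1} norm: sum of row 2-norms
    return float(np.linalg.norm(M, axis=1).sum())

def with_outlier(t):
    M = np.ones((4, 3))
    M[0, 0] = t          # a single outlier entry of magnitude t in row 0
    return M

# Growing the outlier 10x inflates the squared-Frobenius penalty ~90x,
# but the L_{2,1} penalty only ~7x: one bad row dominates a
# Frobenius-type objective far more strongly.
fro_ratio = fro_sq(with_outlier(100.0)) / fro_sq(with_outlier(10.0))
l21_ratio = l21(with_outlier(100.0)) / l21(with_outlier(10.0))
print(round(fro_ratio, 1), round(l21_ratio, 1))
```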
The $L_{2,1}$ norm, by contrast, is a sum of per-row 2-norms, so an outlier row contributes only linearly and does not (equally) distort the contribution of the other rows. | {
"domain": "datascience.stackexchange",
"id": 8938,
"tags": "outlier"
} |
Renormalization group flow when temperature $T < T_C$, $T_C$ being critical point temperature | Question: Does renormalization group flow have to decrease temperature when $T<T_C$, with $T_C$ being critical point temperature? I think not, but my professor suggests something like that. Maybe I misunderstood him. I am asking whether temperature decrease has to happen along renormalization group flow in all reasonable circumstances, not just for specific circumstances.
What would happen for $T>T_C$? Would there be monotonic temperature increase or decrease in reasonable circumstances?
Answer: In the case of a single critical fixed point, I think that the statement is at least asymptotically correct.
On the one hand, the renormalization flow cannot cross the critical manifold. Therefore, if the flow starts from a point above the critical temperature of a system, it should remain above the critical temperature of all the systems described by the couplings corresponding to each point along a flow line, and similarly for points starting below the critical temperature.
On the other hand, sooner or later, since the system does not start on the critical manifold, it will move towards a trivial fixed point, high temperature or $T=0$, depending on where the flow started.
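For the single-fixed-point picture, this is easy to see in a toy recursion. The sketch below iterates the approximate $b=2$ Migdal–Kadanoff recursion for the 2D Ising coupling $K = J/k_BT$; since $K \propto 1/T$, a growing $K$ means the temperature flows down towards $T=0$, and a shrinking $K$ means it flows up towards the high-temperature fixed point:

```python
import math

# Toy b = 2 Migdal-Kadanoff recursion for the 2D Ising coupling K = J/(k_B T).
# This is an illustrative approximation with a nontrivial fixed point near
# K* ~ 0.30 (the exact 2D Ising value is K_c ~ 0.44).
def rg_step(K):
    return 0.5 * math.log(math.cosh(4.0 * K))

K_cold = 0.32   # K > K*: start below the critical temperature
K_hot  = 0.28   # K < K*: start above the critical temperature
for _ in range(10):
    K_cold, K_hot = rg_step(K_cold), rg_step(K_hot)

# K_cold has grown (flow towards T = 0); K_hot has shrunk (flow towards T -> infinity)
print(K_cold, K_hot)
```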
In the presence of more than one critical fixed point, I do not know whether it is possible to justify such behavior in general. | {
"domain": "physics.stackexchange",
"id": 56345,
"tags": "statistical-mechanics, temperature, renormalization"
} |
Why is Fuzzy Dark Matter Cold? | Question: Fuzzy dark matter (FDM) has a typical mass of $10^{-22}$ eV. With such a low mass, why is it typically assumed to be cold? That is, what keeps the FDM non-relativistic. With such a low mass, wouldn't almost any energy input cause a FDM particle to have a momentum significantly larger than its mass? Or is it assumed that it formed cold and then doesn't interact at all over a timescale longer than the age of the universe?
Answer: Long answer short: Yes, it is assumed to have formed with very low energy ("cold"), and further, that it continues to be decoupled from the rest of the Universe so much that it stays that way. Any dark matter model that predicts such very light particles but with different formation mechanisms (e.g. the common thermal production) or non-negligible couplings is ruled out by data.
Going through the questions individually:
With such a low mass, why is it typically assumed to be cold?
Any viable dark matter model has to be "cold" to agree with data, specifically with structure formation. So it's assumed to be cold to be a viable model in the first place.
That is, what keeps the FDM non-relativistic? With such a low mass, wouldn't almost any energy input cause a FDM particle to have a momentum significantly larger than its mass?
Indeed it's somewhat counter-intuitive that such very light dark matter would be low-energy enough, and would remain so. In the limit where your model decouples this dark matter completely from the rest of the universe that's not a problem though.
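A back-of-the-envelope check with typical (assumed) numbers illustrates how non-relativistic this is: even at galactic virial speeds the velocity is a tiny fraction of $c$, and the de Broglie wavelength comes out at roughly half a kiloparsec, which is exactly why such light dark matter behaves "fuzzily" on galactic scales:

```python
# Back-of-the-envelope numbers (assumed, not from the question): an FDM
# particle of m = 1e-22 eV/c^2 moving at a galactic virial speed of ~200 km/s.
h  = 6.626e-34           # Planck constant, J*s
c  = 2.998e8             # speed of light, m/s
eV = 1.602e-19           # joules per eV
pc = 3.086e16            # metres per parsec

m = 1e-22 * eV / c**2    # particle mass in kg
v = 2.0e5                # virial velocity, m/s

beta = v / c                         # ~7e-4: deeply non-relativistic
lam_kpc = h / (m * v) / (pc * 1e3)   # de Broglie wavelength, ~0.6 kpc
print(beta, lam_kpc)
```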
Or is it assumed that it formed cold
I'm not aware of a model where it isn't formed cold in the first place. Some sort of a "Mis-Alignment Mechanism" is a popular choice. If you were to form it hot or warm, /and/ have it decoupled from the rest of the universe such that once it's cold it stays cold, it's hard to see how it could cool in the early universe. Thus: make a model that forms it cold in the first place.
and then doesn't interact at all over a timescale longer than the age of the universe?
Right.
Somebody in the comments mentions boson condensates. That is a possibility but not necessary. For axions in particular there's a long debate whether they would form a condensate or not; I think the bottom line is simply that it depends on the quirks of your model whether you get a condensate or not. | {
"domain": "physics.stackexchange",
"id": 68429,
"tags": "cosmology, temperature, dark-matter"
} |
Does a chiral allene have stereogenic centres? | Question: This is a picture of what my professor taught in class.
According to the definition of a stereogenic centre, allene should have two of them, so I don't understand what I am interpreting wrong.
Answer: The definition of "stereogenic centre" is unhelpful and can be confusing.
The large majority of chiral molecules are chiral because a carbon in them has 4 different things attached to it (atoms or more complicated groups). The tetrahedral arrangement around carbon guarantees that the result will be a chiral molecule (with some complications if there are multiple such centres).
But the underlying cause of the chirality is the overall symmetry of the molecule (or, better, lack of certain types of symmetry like mirror planes). Carbon atoms with 4 different substituents are the commonest way to guarantee that type of symmetry. As a result those carbon atoms are often described as "stereogenic centres". But there are many ways to get chiral molecules (and no mirror planes) without the need for that type of stereogenic centre.
Allenes are an example where the chirality does not result from the arrangement around a single carbon. The Wikipedia definition is confusing and unhelpful and most chemists would not use the term "stereogenic centre" for any atom in the molecule.
A more obvious example of a chiral molecule with no stereogenic centre is hexahelicene (which is made from 6 ortho-fused benzene rings). The terminal benzene units can't exist in the same plane so the whole molecule twists into a spiral to minimise spatial interactions. The 3D structure is shown below.
The molecule is forced by spatial interactions into a spiral, which leads to a strongly chiral spatial structure, but one where no single atom causes the chirality and no definition of stereogenic centre makes sense.
Allene is just a simpler example of the general point that, to understand chirality, the overall symmetry of the molecule needs to be considered and not some count of stereogenic centres. | {
"domain": "chemistry.stackexchange",
"id": 17798,
"tags": "stereochemistry, allenes"
} |
Looking for suggestions on the model/algorithm for 2D row labeling | Question: As an experiment, I'd like to train a model to label a few rows of data on a 2D tensor. For example, on a black-and-white image, label the "darkest" and "lightest" row.
Is this a task for CNN or some simpler methods would do?
Does this task fall into one of existing categories?
Answer: Is the number of rows fixed or variable? Can the row predictions be made independently or do they require information from other rows?
If predictions are independent, you could take your input image of size (rows, cols, channels), reshape it to (rows, cols*channels) and run it through an MLP that predicts a value for each row separately. This would work for a fixed or variable number of rows.
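As a minimal sketch of this first suggestion (all shapes and values made up, and the linear scorer left untrained), the reshape plus a shared per-row scorer looks like:

```python
import numpy as np

rng = np.random.default_rng(0)
rows, cols, channels = 8, 16, 1
image = rng.random((rows, cols, channels))

# Flatten each row into one feature vector: shape (rows, cols*channels)
features = image.reshape(rows, cols * channels)

# A single (untrained, random) linear layer scores every row independently;
# in practice this would be an MLP trained on labelled examples.
W = rng.standard_normal((cols * channels, 1))
scores = features @ W                        # shape (rows, 1), one score per row

darkest_candidate = int(np.argmax(scores))   # e.g. pick the top-scoring row
print(scores.shape, darkest_candidate)
```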
If the number of rows is fixed and predictions are dependent, you could use a pretrained CNN to map the input image to a fixed-length embedding vector. Then send the embedding vector to an MLP that outputs a fixed number of predictions (fixed at the number of rows).
If rows are variable and predictions are dependent, you could do the same pretrained CNN -> embedding, but decode with an autoregressive model that can predict variable length outputs for each input. | {
"domain": "datascience.stackexchange",
"id": 11961,
"tags": "machine-learning, cnn, machine-learning-model"
} |
Replacing pole with delta function in Peskin and Schroeder | Question: On page 234 of their book, P&S replace a pole with a delta function to evaluate a contour integral (see scan of pages below), but I don't quite get how you can reproduce (7.54) this way. I assume with the delta function you can replace $m^2$ with $(k/2+q)^2$ in the first fraction, but that doesn't seem to work. Since they do not simply evaluate the integral using good old contour integration, I'm assuming this delta-function trick is easier, but how exactly do you use it? Any help is appreciated.
Answer: $\def\d{\delta}
\def\e{\varepsilon}
\def\p{\pi}
\def\l{\lambda}
\def\vq{\mathbf{q}}$The pole of interest
We solve $(k/2\mp q)^2-m^2+i\e=0$ with the result
$$q^0 = \begin{cases}
+k^0/2 + E_\vq-i\e \\
+k^0/2 - E_\vq+i\e \\
-k^0/2 + E_\vq-i\e \\
-k^0/2 - E_\vq+i\e.
\end{cases}$$
(Here we use that
$q^2 = (q^0)^2-\vq^2 +E_\vq^2-E_\vq^2 = (q^0)^2-E_\vq^2+m^2$.)
The text claims that the only contribution to the discontinuity comes from
$$q^0 = q^0_{-+} = -k^0/2 + E_\vq$$
which should be straightforward to verify using the techniques described below.
We have
\begin{align*}
i\d M \rightarrow \frac{\l^2}{2} \int \frac{d^3q}{(2\p)^4}\int dq^0
\frac{1}{(k/2-q)^2-m^2}(-2\p i)\d((k/2+q)^2-m^2)
[q^0=q^0_{-+}],
\end{align*}
where $[P]$, the Iverson bracket, is 1 if statement $P$ is true and $0$ if $P$ is false.
Note that if $g$ has simple zeros, $x_i$, then
\begin{align*}
\d(g(x)) &= \sum_i \frac{\d(x-x_i)}{|g'(x_i)|}.
\end{align*}
That is,
\begin{align*}
\d(g(x))[x=x_j] &= \frac{\d(x-x_j)}{|g'(x_j)|}.
\end{align*}
Thus, the factor $[q^0=q^0_{-+}]$ is necessary to pick out only the contribution from $q^0_{-+}$ (and to ignore that from $q^0_{--}$).
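The composition rule for $\d(g(x))$ can be sanity-checked numerically by smearing the delta into a narrow Gaussian; with the arbitrary example $g(x)=x^2-1$ (simple zeros at $x=\pm1$, $|g'(x_i)|=2$) and $f(x)=\cos x$, both sides should give $\left(\cos 1 + \cos(-1)\right)/2 = \cos 1$:

```python
import numpy as np

# Check delta(g(x)) = sum_i delta(x - x_i)/|g'(x_i)| numerically for
# g(x) = x^2 - 1 and f(x) = cos(x), approximating the delta by a narrow
# normalised Gaussian of width sigma.
sigma = 1e-3
def delta(t):
    return np.exp(-t**2 / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))

x = np.linspace(-3.0, 3.0, 600_001)        # grid fine enough to resolve sigma
dx = x[1] - x[0]

lhs = np.sum(np.cos(x) * delta(x**2 - 1.0)) * dx
rhs = (np.cos(1.0) + np.cos(-1.0)) / 2.0   # sum over zeros of f(x_i)/|g'(x_i)|
print(lhs, rhs)                            # both ~ cos(1) ~ 0.5403
```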
Continuing,
\begin{align*}
\frac{d}{dq^0}\left[(k/2+q)^2-m^2\right]|_{q^0 = q^0_{-+}}
&= 2(k/2+q)\cdot(1,\mathbf{0})|_{q^0 = q^0_{-+}} \\
&= 2(k^0/2+q^0)|_{q^0 = q^0_{-+}} \\
&= 2E_\vq.
\end{align*}
Also,
\begin{align*}
(k/2-q)^2-m^2|_{q^0=q^0_{-+}}
&= (k/2+q)^2-2k\cdot q-m^2|_{q^0=q^0_{-+}} \\
&= m^2-2k^0(-k^0/2+E_\vq)-m^2 \\
&= k^0(k^0-2E_\vq).
\end{align*}
Thus,
$$i\d M \rightarrow -2\p i \frac{\l^2}{2} \int \frac{d^3q}{(2\p)^4}
\frac{1}{2E_\vq}\frac{1}{k^0(k^0-2E_\vq)}$$
as claimed.
Appearance of the delta function
Consider the integral
$$I = \int_{-\infty}^\infty \frac{f(x)}{g(x)}dx,$$
where $f$ is holomorphic on the upper half-plane,
$g$ has one simple zero in the upper half-plane at $z=x_0+i\e$ ($x_0\in\mathbb{R}$, $0<\e\ll1$),
and where the integral on the upper half-circle vanishes.
Then
\begin{align*}
I &= \int_\gamma\frac{f(z)}{g(z)}dz \\
&= 2\pi i\frac{f(x_0)}{g'(x_0)}\quad (\e\rightarrow 0)
\end{align*}
where $\gamma$ is the upper semicircular contour,
which is equivalent to
\begin{align*}
I &= 2\pi i \int_{-\infty}^\infty f(x) \frac{\d(x-x_0)}{g'(x_0)}dx \\
&= 2\pi i \int_{-\infty}^\infty f(x) \frac{\d(x-x_0)}{|g'(x_0)|e^{i\arg g'(x_0)}}dx \\
&= 2\pi i \int_{-\infty}^\infty f(x)\d(g(x)) e^{-i \arg g'(x_0)} dx.
\end{align*}
Thus, up to a phase one can replace $1/g(x)$ with $2\pi i \d(g(x))$ in the integral.
If instead $f$ is holomorphic on the lower half-plane, $g(x)$ has a simple zero in the lower half-plane, and the integral on the lower half-circle vanishes, a minus sign will appear due to the change in the sense of the contour. | {
"domain": "physics.stackexchange",
"id": 78053,
"tags": "quantum-field-theory, feynman-diagrams, complex-numbers, dirac-delta-distributions, analyticity"
} |
Is there a difference between the solar elevation angle and sun declination? | Question: I need to use the sunrise equation but one of the variables is the sun declination. On the other hand, I have the values for solar elevation angle that I need. Are they the same thing?
Answer: Elevation and Declination are from different co-ordinate systems, so solar elevation is given in Alt/Az co-ordinates and refers to the elevation above the local horizon. Sun declination is measured in RA/Dec co-ordinates (which is an equatorial co-ordinate system) and measures the Sun's inclination above or below the equator on the celestial sphere. | {
"domain": "astronomy.stackexchange",
"id": 2341,
"tags": "the-sun, declination, sun-rays"
} |
Why are WAW and WAR hazards not possible in the MIPS architecture | Question: I have read about data hazards and came across the claim that the MIPS architecture doesn't allow WAR and WAW hazards. Can someone please help me understand this? The reason is not given in the book. The MIPS pipeline is divided into:
1. IF (instruction fetch)
2. ID (decode the instruction)
3. EX (execute instruction)
4. MEM (write to or read from memory)
5. WB (write back to the register file)
for eg in case of WAW hazards :
I1: LOAD R1,0(R2)
I2: ADD R1, R2, R3
I1: |IF|ID|EX|MEM|WB |
I2 |IF|ID|EX |MEM|WB|
The above is the expected way in which the instructions execute without data hazards,
but here the second instruction has to wait until the WB phase of instruction I1 to get the value of R1, so it will stall until the value of R1 is available in the register file, i.e. until the WB phase of I1. My doubt is: in case I2 takes fewer clock cycles than I1 to complete, can I2 access the register file by going to the WB phase directly, since it has nothing to do in the MEM phase? Would this give rise to a hazard?
Answer:
In case I2 takes fewer clock cycles than I1 to complete, can I2 access the register file by going to the WB phase directly, since it has nothing to do in the MEM phase? Would this give rise to a hazard?
I2 reads the register R1 at its ID phase. Therefore, if implemented directly, we must have time(I2.ID)>time(I1.WB).
A more sophisticated solution would bypass the register file and take the values directly from the Memory output. In this case we need
time(I2.ID)>time(I1.MEM) and the controller becomes more involved.
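A toy model of the lockstep pipeline (a sketch, not real MIPS timing) makes the underlying point: because every instruction occupies each of the five stages for exactly one cycle, a later instruction always reaches WB after an earlier one, so writes to the same register cannot complete out of order:

```python
# Toy model: in the classic 5-stage MIPS pipeline, instruction i enters IF at
# cycle i and, occupying one stage per cycle, reaches WB at cycle i + 4.
def wb_cycle(issue_index):
    return issue_index + 4

i1_wb = wb_cycle(0)   # I1: LOAD R1,0(R2) writes R1 at cycle 4
i2_wb = wb_cycle(1)   # I2: ADD  R1,R2,R3 writes R1 at cycle 5

# Writes to R1 happen in program order, so a WAW hazard cannot arise --
# which is also why letting I2 skip MEM and write back early would be unsafe.
assert i1_wb < i2_wb
print(i1_wb, i2_wb)
```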
If I understand your question correctly, you ask whether I2 must take 5 clocks although it doesn't use the MEM phase. The answer is YES; otherwise it may conflict with other instructions that try to use MEM at the same time (unless the pipeline controller stalls them when congestion appears). | {
"domain": "cs.stackexchange",
"id": 18660,
"tags": "computer-architecture, cpu"
} |
Recasting integrals from Lagrangian to Eulerian frame | Question: Working on a research problem in the continuum mechanics of fluids. For clarity, uppercase will be used for tensors in the reference configuration, and corresponding spatial items will be in lowercase.
For a moving body, define a map $\Phi:(X,t)\to{(x,t)}\;$ from a reference configuration to spatial configuration, whose deformation gradient is $F=\nabla\Phi$ with the determinant $J=\textrm{Det }F$.
What is the correct approach to convert a surface integral in reference configuration to one in spatial configuration? I have come across conflicting information in multiple sources: e.g. for a reference surface $A$ with area element $dS$,
(a) Using a change of variables (integration by substitution)
i.e. $\int_A T\;dS = \int_a J t\;ds$ where $a=\Phi(A)$, $ds=\Phi(dS)$ and $t=\Phi(T)$ for an arbitrary tensor $T$ in the reference configuration.
(b) Applying Piola transforms.
Certainly these approaches give very different answers unless I am missing something?
The actual integrals I am trying to convert to spatial integrals have the general forms:
$$I_1 = \int_A (N\cdot{J^k}t^k(F^{-\top})^kN)\;(U^{k+1}{\cdot}N)\;dS$$
$$I_2 = \int_A (U^{k+1}{\cdot}N)(V^{k+1}{\cdot}N)\;dS$$
$$I_3 = \int_A (U^{k+1}{\cdot}V^{k+1})\;dS$$
where $N$ is the unit normal to surface area element $dS$, $t$ is a symmetric second order tensor (in the spatial frame), and $U$ and $V$ are vectors in the reference frame. The superscripts $k$ and $k+1$ refer to two separate times.
Please any experienced insights would be very welcome.
Answer: I suspect that you hope to find a formula involving $\mathbf{F}$, but the deformation gradient is used primarily to re-write integrals over deformed configurations as integrals over the reference configuration. Expressing everything in terms of the reference configuration is what makes the Lagrangian approach to solid mechanics so convenient.
Again, let $\mathcal{B}$ be the portion of $\mathbb{R}^3$ occupied by the reference configuration, so that $\Phi_t(\mathcal{B})$ is the set that those (deformed) material points occupy at time $t$.
We use the symbol $\partial\mathcal{B}$ for the boundary of $\mathcal{B}$ and $\partial\Phi_t(\mathcal{B})$ for the boundary of $\Phi_t(\mathcal{B})$.
I think that your $I_3$ is
\begin{equation}
\int_{\partial\mathcal{B}}\mathbf{U}(\Phi_{t_{k+1}}(\mathbf{X}))\cdot\mathbf{V}(\Phi_{t_{k+1}}(\mathbf{X}))dS(\mathbf{X}).
\end{equation}
It would be more helpful to know what the vector fields $\mathbf{U}$ and $\mathbf{V}$ are, as we usually integrate fields whose immediate argument is $\mathbf{X}$ (as opposed to $\mathbf{x} = \Phi_t(\mathbf{X})$) and use transformations such as Nanson's formula to allow us to integrate functions of $\mathbf{X}$ over $\mathcal{B}$.
The most general approach I can offer without venturing into differential forms is to note that we assume there are parametrizations of the material points in the reference configuration. Since the surface $\partial\mathcal{B}$ is a 2-dimensional surface, there should be (at least locally) some real variables $s_1$ and $s_2$ (for example, the angles $\theta$ and $\phi$ on a spherical surface) such that a point on that surface can be described as $\mathbf{X}(s_1,s_2)$. Let $F$ be a smooth-enough function on $\partial\mathcal{B}$. Then in a small area around $\mathbf{X}(s_1,s_2)$ the differential area is
\begin{equation}
dA(\mathbf{X}(s_1,s_2)) = \left\Arrowvert\frac{\partial\mathbf{X}}{\partial s_1}(s_1,s_2)\times\frac{\partial\mathbf{X}}{\partial s_2}(s_1,s_2)\right\Arrowvert ds_1 ds_2,
\end{equation}
where we take the norm of the cross product of the partial derivatives of $\mathbf{X}(s_1,s_2)$. The integral of a function $F$ over a (perhaps small) patch would be the iterated integral
\begin{equation}
\int_{a}^{b}\int_{c}^{d}F(\mathbf{X}(s_1,s_2))\left\Arrowvert\frac{\partial\mathbf{X}}{\partial s_1}(s_1,s_2)\times\frac{\partial\mathbf{X}}{\partial s_2}(s_1,s_2)\right\Arrowvert ds_1 ds_2.
\end{equation}
But the co-ordinates $s_1$ and $s_2$ also parametrize the material points on the surface of the deformed body:
\begin{equation}
\mathbf{x}(s_1,s_2) = \Phi_t(\mathbf{X}(s_1,s_2)).
\end{equation}
Integrating a function $f$ on part of the surface of the deformed body has the same form:
\begin{equation}
\int_{a}^{b}\int_{c}^{d}f(\mathbf{x}(s_1,s_2))\underbrace{\left\Arrowvert\frac{\partial\mathbf{x}}{\partial s_1}(s_1,s_2)\times\frac{\partial\mathbf{x}}{\partial s_2}(s_1,s_2)\right\Arrowvert ds_1 ds_2}_{da(\mathbf{x}(s_1,s_2))}.
\end{equation}
I could re-write this using facts such as
\begin{equation}
\frac{\partial\mathbf{x}}{\partial s_i}(s_1,s_2) = \mathbf{F}(\mathbf{X}(s_1,s_2))\cdot\frac{\partial\mathbf{X}}{\partial s_i}(s_1,s_2),
\end{equation}
but that would change this to an integral over the reference configuration, which is not what you seek.
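As a concrete sanity check of Nanson's formula (all numbers made up; a constant deformation gradient for simplicity), the cross product of the pushed-forward tangents reproduces $J\,\mathbf{F}^{-\top}\mathbf{N}\,dS$:

```python
import numpy as np

# Sanity check of Nanson's formula  da n = J F^{-T} N dS  for a constant
# deformation gradient F and a flat reference patch with tangents E1, E2
# and unit normal N = E1 x E2 (so dS = 1).
F = np.array([[2.0, 0.5, 0.0],
              [0.0, 3.0, 0.2],
              [0.1, 0.0, 4.0]])
E1, E2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
N = np.cross(E1, E2)                 # reference area vector

# Pushed-forward tangents span the deformed area element:
a1, a2 = F @ E1, F @ E2
da_vec_direct = np.cross(a1, a2)     # deformed area vector, computed directly

J = np.linalg.det(F)
da_vec_nanson = J * np.linalg.inv(F).T @ N

print(da_vec_direct, da_vec_nanson)  # the two agree
```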
Do you have a parametrization of the reference configuration and some specific form for the kinds of deformations under consideration? Those details could make this discussion much more concrete. | {
"domain": "physics.stackexchange",
"id": 68083,
"tags": "fluid-dynamics, tensor-calculus, integration, continuum-mechanics"
} |
Extract changesets between last two tags in Mercurial | Question: I have written the following script that I'd like to have reviewed:
#!/usr/bin/python
import os
os.system('hg tags > tags.txt')
file = 'tags.txt'
path = os.path.join(os.getcwd(), file)
fp = open(path)
for i, line in enumerate(fp):
    if i == 1:
        latestTagLine = line
    elif i == 2:
        previousTagLine = line
    elif i > 4:
        break
fp.close()
revLatestTag = latestTagLine.split(':')
l = revLatestTag[0].split(' ')
revPreviousTag = previousTagLine.split(':')
p = revPreviousTag[0].split(' ')
command = 'hg log -r {}:{} > diff.txt'.format(l[-1],p[-1])
os.system(command)
Output of hg tags command:
tip 523:e317b6828206
TOOL_1.4 522:5bb1197f2e36
TOOL_1.3 515:7362c0effe40
TOOL_1.1 406:33379f244971
Answer: You didn't specify that you wanted to leave a tags.txt file as an intentional side-effect of your script. I'm going to assume that it's an unwanted temporary file. In that case, you can read the output of hg directly through a pipe. Furthermore, the file object can be used as an iterator to fetch just two lines, with no for-loop.
from subprocess import Popen, PIPE
# Read the first two lines of `hg tags`
with Popen(['hg', 'tags'], stdout=PIPE).stdout as hg_log:
    latestTagLine = next(hg_log)
    previousTagLine = next(hg_log)
The code to interpret each line is repeated, and therefore deserves to be put into a reusable function. Instead of splitting, use a capturing regular expression.
import re
def parse_tag_line(line):
    # non-greedy tag match, so multi-digit revision numbers parse correctly
    match = re.match(r'^(.+?)\s+(\d+):(\S+)', line)
    return dict([
        ('tag', match.group(1)),
        ('rev', match.group(2)),
        ('hash', match.group(3)),
    ])
Finish up by generating the output (using the parsed tags):
log_range = '%s:%s' % (latestTag['hash'], previousTag['hash'])
with open('diff.txt', 'w') as diff:
    Popen(['hg', 'log', '-r', log_range], stdout=diff).wait()
Personally, I would prefer to just dump the output to sys.stdout, and use the shell to redirect the script's output to a file (script > diff.txt) instead of hard-coding diff.txt in the script itself. Then, the epilogue would become
log_range = '%s:%s' % (latestTag['hash'], previousTag['hash'])
Popen(['hg', 'log', '-r', log_range]).wait()
Putting it all together:
import re
from subprocess import Popen, PIPE
def parse_tag_line(line):
    # non-greedy tag match, so multi-digit revision numbers parse correctly
    match = re.match(r'^(.+?)\s+(\d+):(\S+)', line)
    return dict([
        ('tag', match.group(1)),
        ('rev', match.group(2)),
        ('hash', match.group(3)),
    ])
# Read the first two lines of `hg tags`
with Popen(['hg', 'tags'], stdout=PIPE).stdout as hg_log:
latestTag = parse_tag_line(hg_log.next())
previousTag = parse_tag_line(hg_log.next())
log_range = '%s:%s' % (latestTag['hash'], previousTag['hash'])
# Write `hg log -r ...:...` to diff.txt
with open('diff.txt', 'w') as diff:
    Popen(['hg', 'log', '-r', log_range], stdout=diff).wait() | {
"domain": "codereview.stackexchange",
"id": 6218,
"tags": "python, parsing, child-process"
} |
Smallest Distance-5 Quantum Error Correction Code? | Question: Is it known/proven what the smallest quantum error correction code is that can correct arbitrary two-qubit Pauli errors? I can think of the nested/concatenated 5-qubit code or a 25-qubit version of the Shor (repetition) code, but I am not sure if there are codes requiring fewer qubits.
Answer: If you look in this paper, section 7, they give an [[11,1,5]] code, and show that it is the smallest you can have.
In general, for these sorts of questions, a great starting point is Gottesman's thesis. That's where I found this result stated. | {
"domain": "quantumcomputing.stackexchange",
"id": 1607,
"tags": "error-correction"
} |
How to study the effect of tau protein isoforms on microtubule-based transport? | Question: From what I read, A-beta plaques inhibit microtubule-based transport of mitochondria when tau protein is present in the cell. How would I be able to do a test to see if one isoform of tau is more effective at conferring this A-beta sensitivity than another isoform?
Answer: You can do a microscopy based assay to quantify the transport rate (both retrograde and anterograde). Mitochondria can be labeled with fluorescent proteins such as mito-dsRed, and its movement along the axon can be tracked by live cell imaging. You can check this paper; they have done this experiment. (Others have also done it but I remember this paper because someone told me about it recently).
How do you plan to see the effect of different isoforms? It may not be that easy. You will have to replace the common isoform with the others. It is not very easy to control splicing. So you may have to make lines that express only one variant (exon deletion). Though this study reports that alternative splicing can be controlled by using antisense oligonucleotides, the technology is not standardized yet; you can try it nonetheless, because making knockouts is difficult (you would have to make KO mice and then culture neurons from them). | {
"domain": "biology.stackexchange",
"id": 2195,
"tags": "cell-biology"
} |
How to evaluate a Deep Q-Network | Question: Good day, it's a pleasure having joined this Stack.
In my master thesis I have to expand a Deep Reinforcement Learning Network, to be precise a Deep Q-Network, which is used to control machines in an electrical grid for power quality management.
What would be the best way to evaluate if a network is doing a good job during training or not? Right now I have access to the reward function as well as the q_value function.
The rewards consist of 4 arrays, one for each learning criterion of the network. The first tuple is a hard criterion (adherence mandatory) while the latter 3 are soft criteria:
Episode: 1/3000 Step: 1/11 Reward: [[1.0, 1.0, -1.0], [0.0, 0.68, 1.0], [0.55, 0.55, 0.55], [1.0, 0.62, 0.79]]
Episode: 1/3000 Step: 2/11 Reward: [[-1.0, 1.0, 1.0], [0.49, 0.46, 0.67], [0.58, 0.58, 0.58], [0.77, 0.84, 0.77]]
Episode: 1/3000 Step: 3/11 Reward: [[-1.0, 1.0, 1.0], [0.76, 0.46, 0.0], [0.67, 0.67, 0.67], [0.77, 0.84, 1.0]]
The q_values are arrays which I do not fully understand yet. Could one of you explain them to me? I read the official definition of q-values (positive False Discovery Rate). Can these values be used to evaluate neural network training? These are the Q-values for step 1:
Q-Values: [[ 0.6934726 -0.24258053 -0.10599071 -0.44178435 0.5393113 -0.60132784
-0.07680141 0.97968364 0.7707691 0.57855517 0.16273917 0.44632837
0.00799532 -0.53355324 -0.45182624 0.9229134 -1.0455914 -0.0765233
0.37784138 0.14711905 0.10986999 0.08918551 -0.8189287 0.14438646
0.8869624 -0.43251887 0.7742889 -0.7671829 0.07737591 0.2569678
0.5102049 0.5132051 -0.31643414 -0.0042788 -0.66071266 -0.18251896
0.7762838 0.15322062 -0.06284399 0.18447408 -0.9609979 -0.4508798
-0.07925312 0.7503184 0.6858963 -1.0436649 -0.03167241 0.87660617
-0.43605536 -0.28459656 -0.5564517 1.2478396 -1.1418368 -0.9335588
-0.72871417 0.04163677 0.30343965 -0.30024529 0.08418611 0.19429305
0.44063848 -0.5541725 0.5740701 0.76789933 -0.9621064 0.0272104
-0.44953588 0.13415053 -0.07738207 -0.16188647 0.6667519 0.31965214
0.3241703 -0.27273563 -0.07130697 0.49683014 0.32996863 0.485767
0.39242893 0.40508035 0.3413986 -0.5895434 -0.05772913 -0.6172271
-0.12423459 0.2693861 0.32966745 -0.16036317 -0.36371914 -0.04342368
0.22878243 -0.09400887 -0.1134861 0.07647536 0.04724833 0.2907955
-0.70616114 0.71054566 0.35959414 -1.0539075 0.19137645 1.1948669
-0.21796732 -0.583844 -0.37989947 0.09840107 0.31991178 0.56294084]]
Are there other ways of evaluating DQNetworks? I would also appreciate literature about this subject. Thank you very much for your time.
Answer: Q-values represent the expected return after taking action $a$ in state $s$, so they do tell you how good it is to take an action in a specific state. Better actions have larger Q-values. Q-values can be used to compare actions, but they are not very meaningful for judging the performance of the agent, since you have nothing to compare them with: you don't know the true Q-values, so you can't conclude whether your agent is approximating them well or not. (Note that the q-value from statistics, the positive False Discovery Rate, is unrelated to the Q-values of reinforcement learning.)
A better performance metric would be the average reward per episode/epoch, or the average reward over the last $N$ timesteps for continuing tasks. If your agent is improving, its average reward should be increasing. You said that you have rewards per state and that some of them represent more important criteria than others. You could plot the average reward per episode using a weighted linear combination of the criteria rewards
\begin{equation}
\bar R = \bar R_1 w_1 + \bar R_2 w_2 + \bar R_3 w_3 + \bar R_4 w_4
\end{equation}
where $\bar R_i$ is the average episode reward for criterion $i$. That way you can give more importance to some specific criteria in your evaluation. | {
"domain": "ai.stackexchange",
"id": 1982,
"tags": "neural-networks, deep-learning, reinforcement-learning, dqn, software-evaluation"
} |
Execute a function n times, where n is known at compile time | Question: Motivation
In this question, a user asked whether it is possible to inline the following function:
-- simplified version
{-# INLINE nTimes #-}
nTimes :: Int -> (a -> a) -> a -> a
nTimes 0 f x = x
nTimes n f x = nTimes (n-1) f (f x)
Unfortunately, the answer seems to be no, since GHC sees a recursive function and gives up. Even if you use a compile-time constant, e.g. nTimes 1 (+1) x, you don't end up with x + 1, but with nTimes 1 (+1) x.
While it's of course fine to refuse inlining if the number of loops is unknown, it's also a hassle if it is known.
Code
As you can see in the question above, I've proposed the following solution:
{-# LANGUAGE TemplateHaskell #-}
module Times where
import Control.Monad (when)
import Language.Haskell.TH
-- > item under review begins here
nTimesTH :: Int -> Q Exp
nTimesTH n = do
f <- newName "f"
x <- newName "x"
when (n <= 0) (reportWarning "nTimesTH: argument non-positive")
when (n >= 1000) (reportWarning "nTimesTH: argument large, can lead to memory exhaustion")
let go k | k <= 0 = VarE x
go k = AppE (VarE f) (go (k - 1))
return $ LamE [VarP f,VarP x] (go n)
-- < item under review ends here
This should, for any n, create a function with patterns named f and x, and apply f to x with AppE n times:
$(nTimesTH 0) = \f x -> x
$(nTimesTH 1) = \f x -> f x
$(nTimesTH 2) = \f x -> f (f x)
$(nTimesTH 3) = \f x -> f (f (f x))
I can verify that the created function has the correct type:
$ ghci -XTemplateHaskell Times.hs
ghci> :t $(nTimesTH 0)
$(nTimesTH 0) :: r -> r1 -> r1
ghci> :t $(nTimesTH 1)
$(nTimesTH 1) :: (r1 -> r) -> r1 -> r
ghci> :t $(nTimesTH 2)
$(nTimesTH 2) :: (r -> r) -> r -> r
ghci> :t $(nTimesTH 3)
$(nTimesTH 3) :: (r -> r) -> r -> r
To all my knowledge, nTimesTH works exactly as expected.
Given that this is my first time dabbling with Template Haskell, does this follow best practices? Also, I'm using VarE, AppE and so on. Language.Haskell.TH also provides some combinators, so that one can write
let go k | k <= 0 = varE x
go k = appE (varE f) (go (k - 1))
lamE (map varP [f,x]) (go n)
Is this just personal preference, or is lamE preferred? As far as I can see, the expression combinators use the canonical implementation, e.g. varE = return . VarE, appE f x = liftA2 AppE f x and so on.
Tests
For completeness, here are the QuickCheck tests. They aren't really part of the review:
module Times where
import Test.QuickCheck
-- .. rest of module as above
genTestSingle :: Int -> Q Exp
genTestSingle n = do
f <- newName "gTSf"
x <- newName "gTSx"
lamE [varP f, varP x] $
appsE [ [| (==) |], appsE [nTimesTH n, varE f, varE x]
, appsE [ [| nTimesFoldr n |], varE f, varE x]]
genAllTest :: Int -> Q Exp
genAllTest n = do
f <- newName "gATf"
x <- newName "gATv"
lamE [varP f, varP x] $ doE $ (flip map) [1..n] $ \i ->
noBindS $ appE [| quickCheck |] $ appsE [genTestSingle i, varE f, varE x]
module Main where
main = $(genAllTest 100) sin 40
Answer: Your implementation is fine. But it is a little bit ironic: In your SO answer, you start off with "you can use better functions", yet you rewrite nTimes via go. If this was a real module, you would probably want to export both functions nTimesTH and nTimes, and implement the former via the latter:
{-# LANGUAGE TemplateHaskell #-}
module Times (nTimes, nTimesTH) where
import Control.Monad (when)
import Language.Haskell.TH
-- example implementation
nTimes :: Int -> (a -> a) -> a -> a
nTimes n f x = iterate f x !! max 0 n
nTimesTH :: Int -> Q Exp
nTimesTH n = do
when (n <= 0) (reportWarning "nTimesTH: argument non-positive")
when (n >= 1000) (reportWarning "nTimesTH: argument large, can lead to memory exhaustion")
f <- newName "f"
x <- newName "x"
lamE (map varP [f,x]) $ nTimes n (appE (varE f)) (varE x)
This removes the need for code duplication, and any future optimization you find for nTimes gets automatically applied in nTimesTH, although that only matters at compile time. After all, $(nTimesTH n) is completely evaluated during compilation.
Other than that, I moved the warnings on top of the function, since this brings the names f and x closer to their actual use. I also replaced the data constructors with their respective functions.
Note that for a real module you still want to add documentation:
-- | @'nTimes' n f x@ applies @f@ @n@ times over @x@.
-- It does not apply @f@ if @n@ is negative. In this case @x@ is returned.
--
-- prop> nTimes n f x == iterate f x !! max 0 n
nTimes :: Int -> (a -> a) -> a -> a
-- | @'nTimesTH' n@ returns a function that applies the first
-- argument @n@ times to the second argument.
-- @n@ must be known at compile time. Needs @-XTemplateHaskell@.
--
-- prop> $(nTimesTH n) f x == nTimes n f x
nTimesTH :: Int -> Q Exp | {
"domain": "codereview.stackexchange",
"id": 24480,
"tags": "haskell, template-meta-programming"
} |
How to kill master process 'rosout'? (will always respawn) | Question:
I used 'roscore &' to launch the ROS master server in a shell script. That might not have been a good idea. After invoking this script I failed to kill process rosout as it always respawns with a different pid. It looks like a watchdog process but I couldn't find one so far. Is there any possibility to kill rosout without having to restart my system?
Originally posted by Julius on ROS Answers with karma: 960 on 2011-03-08
Post score: 1
Answer:
It's a python script.
So killall python will work.
If you have other python stuff running:
ps aux|grep roscore and kill the python process with roscore in the name
Originally posted by dornhege with karma: 31395 on 2011-03-08
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by Julius on 2011-03-08:
Yes, you're totally right that my suggestion to use 'killall python' is nasty (I pointed this out, too). I removed my answer to avoid any further confusion and let quality rule over quantity.
Comment by kwc on 2011-03-08:
I strongly recommend against running "killall python". That will kill every python process on your system.
Comment by McMurdo on 2014-04-27:
killall python no longer works. Is there an elegant solution to this problem? | {
"domain": "robotics.stackexchange",
"id": 4994,
"tags": "roscore"
} |
Building a Pure CMake project with Catkin | Question:
We want to build a pure CMake project in our catkin workspace, but we want it to be built with non default options.
Are there examples of how to do this, or best practices for doing so?
At first, I imagined that you could add the CMake arguments to the package.xml that needs to be created for the package, but this doesn't look to be the case. Is there an example of how to do this? Documentation seems to be really lacking for using catkin with pure CMake projects, beyond stating that it is supported.
Originally posted by John Hoare on ROS Answers with karma: 765 on 2016-05-03
Post score: 3
Answer:
There is no support for this (passing arguments to exactly one package), but it has been discussed before:
https://github.com/catkin/catkin_tools/issues/205
There's even a proposal of how to do it. But no one has had time to implement it.
Short of implementing that feature (which would be appreciated :D), you might be able to work around the issue by passing the build options to all packages (using the --cmake-args option which is available in all the tools). So long as the options are fairly specific to your plain CMake package, then the other packages should ignore it (they will complain about being given an option that doesn't affect them). This isn't a great solution but it would work for most cases. The other option is to put your plain CMake package in a subfolder, then put the package.xml and another CMakeLists.txt in the folder above. Then in that CMakeLists.txt you can set options and then include the plain cmake package with add_subdirectory (https://cmake.org/cmake/help/v3.0/command/add_subdirectory.html).
Hopefully one of these options works for you.
Originally posted by William with karma: 17335 on 2016-05-03
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by John Hoare on 2016-05-04:
Thanks. We are going the route of placing the actual CMake project in a subdirectory. We will look at teaching catkin proper support possibly in the future.
Comment by John Hoare on 2016-05-04:
I think the best way to do what we want is to be able to provide your own pre-cache file specified in the package.xml https://cmake.org/cmake/help/v2.8.11/cmake.html#opt%3a-Cinitial-cache | {
"domain": "robotics.stackexchange",
"id": 24552,
"tags": "catkin, cmake"
} |
Interpretation of hierarchical clustering with bootstrapping | Question: I have data that includes 'cases' and 'controls' and have carried out hierarchical clustering.
I used the pvclust package in R to bootstrap the results; significant branches are highlighted with red rectangles (based on AU > 0.95).
What is clear is that no clustering occurs that separates 'cases' and 'controls', and this is in fact what we expected and want to show. We want to show that the measured variable does not distinguish between cases and controls.
Apart from saying that visually no clear clusters emerge that distinguish cases from controls, are there any objective measures that can be used to show that no significant clustering occurs between the two groups?
One observation I have here is that the AU values and the BP values are very different, even though both p-values should be interpreted in a similar fashion. Am I missing something?
Perhaps pvclust (bootstrapping) is not the right option here, is there a better way of showing quantifiably that no significant clustering occurs between two groups? Perhaps some kind of supervised clustering (I am not sure what this even means in this context)?
Answer: Good question. You have performed the first step via unsupervised learning: the AU bootstraps give great results, while the orthodox bootstraps give no significant clusters (or very few). The contrast between these approaches is unusual because they should be somewhere close to each other. If you can resolve that, your analysis is fine. So at present, IMO, the unsupervised learning needs further investigation.
... deleted
... antibodies ... oh I see. Trees, like you've done, don't work here because the cross-reactivity creates network phylogenies. This is the antonym of bifurcating trees. If someone has a robust bifurcating phylogeny with antibody data and gets good bootstraps .. utterly amazing. However, if you subject your data to a network analysis you'll find connections all over the place - in practical terms this really messes up bootstraps. That's the likely reason for your results. There is no point doing a complex bifurcating tree analysis because the antibody cross-reactivity will mess it up. k-means has been implemented as unsupervised learning in these instances and would provide the starting point in using clustering to define a supervised learning problem.
"domain": "bioinformatics.stackexchange",
"id": 955,
"tags": "r, clustering"
} |
Is $n^{1/\log \log n} = O(1)$? | Question:
Is $n^{1/\log \log n} = O(1)$ ?
Suppose that $n^{1/\log \log n} = c$ where $c$ is constant.
Taking logs of both sides,
$$\frac{1}{\log \log n}\log n = \log c.$$
I am not able to spot an error. Please help
Answer: The function $n^{1/\log \log n}$ tends to infinity, since
$$
n^{1/\log\log n} = e^{\log n/\log\log n},
$$
and $\log n/\log \log n \longrightarrow \infty$. | {
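As a quick numerical check of the identity above (a small illustrative script of my own, not part of the original answer), the following Python sketch evaluates $e^{\log n/\log\log n}$ for rapidly increasing $n$ and shows the values growing without bound:

```python
import math

def f(n: float) -> float:
    # n ** (1 / log log n) rewritten as exp(log n / log log n),
    # which avoids forming huge intermediate powers of n
    return math.exp(math.log(n) / math.log(math.log(n)))

for n in (1e2, 1e4, 1e8, 1e16, 1e32):
    print(f"n = {n:.0e}  ->  {f(n):.3e}")
```

Each printed value is larger than the last, consistent with $n^{1/\log\log n}$ tending to infinity rather than being $O(1)$.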
"domain": "cs.stackexchange",
"id": 15083,
"tags": "asymptotics"
} |
Kinect laser scan angle | Question:
How do you set the min and max laser scan angle for the Kinect? I have checked out dynamic reconfigure and do not see any way of changing the angles. In ROS Electric, the view angle was almost 180 degrees; in Groovy it seems to be only about 57 degrees. I need to have the same view angle that Electric had. How can I change the min and max view angle?
Originally posted by mickey11592 on ROS Answers with karma: 3 on 2013-07-10
Post score: 0
Answer:
The Kinect has a field of view of 57º; I do not know of any means to change that other than using an optical adapter. You can see the specs here.
A field of view of 180º for a Kinect seems like a bug...
If 57º is too narrow, the obvious solution if you can afford it would be to use one or more additional devices.
Originally posted by po1 with karma: 411 on 2013-07-10
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by mickey11592 on 2013-07-10:
Thank you for your reply. I do not think that 180 degrees was a bug though because my whole class used the turtlebot with kinect running ros electric for a few projects and we all received approximately 180 degrees as a viewing angle. I am hoping there is a way to receive that again in groovy.
Comment by Bill Smart on 2013-07-10:
As po1 said, there's no way for the kinect to physically see more than about 57 degrees. You can pad this on both sides with "don't know" readings to get 180 degrees of scan, but there isn't a way to fill these additional 133 degrees with meaningful readings.
Comment by mickey11592 on 2013-07-11:
I don't understand how the kinect had approximately 180 degree field of view when running ros electric then. They were not padded readings, they were accurate since we used them to drive forward and stay in the center of the hallway, and used rviz to view and test the scans.
Comment by Vinh K on 2016-07-18:
Micket11592, how did you implement the code for the turtlebot so that it stays in the middle of the hallway? could you share your code please? | {
"domain": "robotics.stackexchange",
"id": 14868,
"tags": "kinect, ros-groovy"
} |
Constructing product automaton with conjunctive conditions | Question: The question was to construct a DFA which accepts the language $$\{ x \in \{0,1\}^* \mid x\text{ starts with a }0\text{ and has at most one }1\}$$
So I first constructed a DFA for '$x$ starts with a $0$' and a DFA for 'has at most one $1$', then I tried getting the product automaton of these, but the quiz says the automaton should not accept the string "" (empty string). Does that mean I just change $q_0p_0$ from an accept state to a normal state, or have I done something else wrong?
Answer: The "DFA for 'has at most one $1$-symbol'" in the question is not correct. It accepts $101$, $0101$, etc.
The right one should be like the following.
In the product automaton, you should label state $p_iq_j$ as an accept state iff both $p_i$ and $q_j$ are accept states in their respective original automata, since the condition for a string $x$ to be in the language is " ... and ...".
Had the condition been "... or ...", then you should label state $p_iq_j$ as an accept state iff at least one of $p_i$ and $q_j$ is an accept state in its original automaton.
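To make the product construction concrete, here is a small Python sketch (the DFA encodings and names are my own illustration, not from the question) that builds the product automaton for 'starts with a 0 and has at most one 1', marking a pair state accepting iff both components accept:

```python
from itertools import product

def product_dfa(d1, d2, alphabet, mode="and"):
    """Product of two DFAs given as (states, delta, start, accept) tuples."""
    (s1, t1, q1, a1), (s2, t2, q2, a2) = d1, d2
    states = set(product(s1, s2))
    delta = {((p, q), c): (t1[p, c], t2[q, c])
             for (p, q) in states for c in alphabet}
    # accept iff both components accept ("and") or at least one does ("or")
    if mode == "and":
        accept = {(p, q) for (p, q) in states if p in a1 and q in a2}
    else:
        accept = {(p, q) for (p, q) in states if p in a1 or q in a2}
    return states, delta, (q1, q2), accept

def accepts(dfa, word):
    _, delta, state, accept = dfa
    for c in word:
        state = delta[state, c]
    return state in accept

# DFA 1: x starts with a 0 ("dead" is the reject sink after a leading 1)
d1 = ({"start", "seen0", "dead"},
      {("start", "0"): "seen0", ("start", "1"): "dead",
       ("seen0", "0"): "seen0", ("seen0", "1"): "seen0",
       ("dead", "0"): "dead", ("dead", "1"): "dead"},
      "start", {"seen0"})

# DFA 2: x has at most one 1 (count the 1s, capped at 2)
d2 = ({"0", "1", "2"},
      {("0", "0"): "0", ("0", "1"): "1",
       ("1", "0"): "1", ("1", "1"): "2",
       ("2", "0"): "2", ("2", "1"): "2"},
      "0", {"0", "1"})

both = product_dfa(d1, d2, "01")
```

With this construction, strings such as "0", "010", and "001" are accepted, while the empty string, "1", and "011" are rejected; the start pair is non-accepting because DFA 1's start state is not accepting.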
"domain": "cs.stackexchange",
"id": 19783,
"tags": "formal-languages, finite-automata"
} |
Movie tickets booking system (Frontend only) | Question: I developed this tickets booking system recently to showcase HTML5 features. I would like you to review it from the UI/UX perspective. Also if there is something more that I can add.
Is it good enough?
JS/HTML5/CSS3:
$(document).ready(function(){
// first check the movies already booked
checkMoviesBooked();
// apply jQuery UI Redmond theme to 'Book Tickets' button
$("#submit").button();
// calculateTotalPrice on keyup or on change of movie/date/tickets
$("#movie_name, #date, #tickets_quantity").keyup(calculateTotalPrice);
$("#movie_name, #date, #tickets_quantity").change(calculateTotalPrice);
// on form submit
$("#book_tickets").submit(function(event){
// prevent on submit page refresh
event.preventDefault();
// check locally stored data
if(window.localStorage){
var moviesListJson = localStorage.getItem('movies_list');
var movies_list = moviesListJson ? JSON.parse(moviesListJson) : [];
var movie = $("#movie_name").val();
movies_list.push(movie);
localStorage.setItem('movies_list', JSON.stringify(movies_list));
}
// clear the form
$( '#book_tickets' ).each(function(){
this.reset();
});
// reset (enter data first) message
$("#total_price").html("(enter data first)");
// update movies booked list
checkMoviesBooked();
});
// set minimum date in datepicker as today
var today = new Date().toISOString().split('T')[0];
document.getElementsByName("date")[0].setAttribute('min', today);
});
function calculateTotalPrice(){
if($("#tickets_quantity").val() != "" && $("#movie_name").val() != "" && $("#date").val() != ""){
if(window.Worker){
// create web worker
var blob = new Blob(
[document.querySelector("#worker").textContent],
{type: 'text/javascript'});
var worker = new Worker(window.URL.createObjectURL(blob));
worker.onmessage = function(event){
$("#total_price").html(event.data);
}
worker.onerror = function(errorObject){
$("#total_price").html("Error: " + errorObject.message);
}
// get date
var date = new Date($('#date').val());
// get day
var day = date.getDay();
// get number of booked shows
var number_booked_shows;
if(window.localStorage){
// check if movies_list is present already
if(localStorage.getItem('movies_list')){
var movieListJson = localStorage.getItem('movies_list');
var movies_list = JSON.parse(movieListJson);
number_booked_shows = movies_list.length;
}
else
number_booked_shows = 0;
}
// send JSON data to worker
var jsonData = {'day': day, 'number_booked_shows': number_booked_shows, 'tickets_quantity': Number($("#tickets_quantity").val())};
worker.postMessage(jsonData);
}
}
}
// fetch details of movies booked
function checkMoviesBooked(){
$("#movies_list").html("<span id='none'>(none)</span>");
if(window.localStorage){
if(localStorage.getItem('movies_list')){
$("#none").remove();
var movieListJson = localStorage.getItem('movies_list');
var movies_list = JSON.parse(movieListJson);
var sr_no = 0;
$.each(movies_list,function(key,value){
$("#movies_list").append(++sr_no + ". " + value + "<br>");
});
}
}
}
html{
height: 100%;
}
body{
font-family: "Arial", Helvetica, sans-serif;
position: relative;
}
#container{
text-align: center;
position: relative;
height: 100%;
}
#movies_booked, #form{
display: inline-block;
width: 40%;
height: 100%;
margin: 0 auto;
vertical-align:text-top;
}
fieldset, #movies_booked{
border:1px solid #AED0EA;
border-radius:8px;
box-shadow:0 0 10px #D7EBF9;
}
legend, #disount_title{
color: #2779AA;
font-size: 120%;
text-align: center;
background-color: white;
}
p{
overflow: hidden;
}
label{
width: 50%;
text-align: right;
float: left;
clear: both;
color: #2779AA;
}
p input, p select{
width: 40%;
-moz-box-sizing: border-box;
box-sizing: border-box;
float: left;
margin-left: 5%;
margin-right: 5%;
color: #2779AA;
}
#theaters, #total_price, #movies_list, li, #perTicketPrice, #note{
color: #2779AA;
}
p span, li{
text-align: left;
}
#submit_wrapper{
text-align: center;
}
#submit{
font-size: 13px;
}
#discount, #perTicketPrice{
text-align: left;
}
#dateNote{
font-size: 9px;
}
<div id="container">
<div id="form">
<form id="book_tickets">
<fieldset>
<legend>Booking Details</legend>
<p>
<label for="movie_name">Movie</label>
<select id="movie_name" name="movie_name" required autofocus>
<option></option>
<option value="Movie 1">Movie 1</option>
<option value="Movie 2">Movie 2</option>
<option value="Movie 3">Movie 3</option>
</select>
</p>
<p>
<label for="theaters">Theaters</label>
<select id="theaters" required>
<option></option>
<option value="Theater 1">Theater 1</option>
<option value="Theater 2">Theater 2</option>
<option value="Theater 3">Theater 3</option>
</select>
</p>
<p>
<label for="date">Date<br/><span id="dateNote">Firefox does not have a HTML5 datepicker <a href="https://support.mozilla.org/en-US/questions/986096">yet</a>.</span></label>
<input type="date" name="date" id="date" min="today" required />
</p>
<p>
<label for="email">Email</label>
<input type="email" name="email" id="email" required />
</p>
<p>
<label for="tickets_quantity"># Tickets</label>
<input type="number" min="1" name="tickets_quantity" id="tickets_quantity" required />
</p>
<p>
<label>Total Price</label>
<span id="total_price">(enter data first)</span>
</p>
<div id="submit_wrapper">
<input type="submit" id="submit" value="Book Tickets" />
</div>
</fieldset>
</form>
<p id="perTicketPrice">Per ticket price = ₹ 100.00</p>
<p id="discount">
<span id="disount_title">Discounts:</span>
<ul>
<li>5% discount if show is on weekday</li>
<li>10% discount if number of booked shows >= 10</li>
</ul>
</p>
</div>
<fieldset id="movies_booked">
<legend>Movies Booked Till Date</legend>
<span id="movies_list"></span>
</fieldset>
</div>
<script id="worker" type="javascript/worker">
self.onmessage = function msgWorkerHandler(event){
var jsonString = event.data;
var day = jsonString.day;
var number_booked_shows = jsonString.number_booked_shows;
var tickets_quantity = jsonString.tickets_quantity;
// set price of each ticket as Rs. 100
var totalPrice = tickets_quantity * 100;
// 5% discount if on weekday
if(day > 0 && day < 6){
totalPrice = totalPrice - 0.05 * totalPrice;
}
// 10% discount if number of booked shows >= 10
if(number_booked_shows >= 10){
totalPrice = totalPrice - 0.10 * totalPrice;
}
postMessage("₹ " + totalPrice);
}
</script>
Answer: UX (User Experience)
I'll start with a few possible improvements to the User Experience functionality.
In order to offer the best possible interface you have to provide the same or similar experience in all browsers; simply listing that something isn't supported is, in my mind, lazy and not a great UX/UI option. I'd look into webshims or a similar polyfill. Basically, you should provide a fallback or functional date picker in browsers that don't support the HTML5 feature. Check caniuse.com and html5please.com to see support levels and polyfill options for all these HTML5 features.
There is no mobile optimized content or media queries. An increasing number of movie goers are utilizing mobile devices in order to purchase tickets. Not providing a mobile optimized layout is a huge drawback to the UX for half the people that would utilize this tool. (source)
I'd look into Flying Focus to provide tabbed and arrow key users the ability to see where they are going when tabbing through fields.
I'd look into <datalist>'s (webshims provides support for it) to use in place of your <select>'s, but since support is shaky for this HTML5 feature you could also provide a search box in the dropdown similar to Select2 in browsers that don't support it or if you choose not to polyfill.
UI (User Interface)
To me, this tool is a bit bland, it has a fairly antiquated look/feel. I'll offer a few suggestions that may help you achieve a better UI. That said, it doesn't look bad, just old.
Since <legend> is difficult to style, I would visually hide it (see .visuallyhidden style below) and apply an <h2> or some other form of heading that is easier to style.
css
.visuallyhidden {
border: 0;
clip: rect(0,0,0,0);
clip: rect(0 0 0 0);
height: 1px;
overflow: hidden;
position: absolute;
width: 1px;
margin: -1px;
padding: 0}
html
<legend class="visuallyhidden">Booking Details</legend>
<h2>Booking Details</h2>
A possible option to bolster the appearance is to provide some contrast using color, having the border to separate the modules is fine, but perhaps you could look into using background as an additional way to ease some of that excessive white color. This Fiddle is a quick example of using color to separate items in a little more up to date fashion.
I'd recommend using a <div> to do the styling for the form areas, and having the <fieldset>'s styles removed. (see fieldset normalize & reset below)
css
/* from http://necolas.github.io/normalize.css/ */
fieldset {
border: 1px solid #c0c0c0;
margin: 0 2px;
padding: 0.35em 0.625em 0.75em}
/* remove default styles */
fieldset {
border: 0;
margin: 0;
padding: 0}
I'd add a greater padding on the form fields to offer interaction with touch screens. The code below should suffice.
input,select{padding:7px}
I'd also add some transitions transition:all .2s ease; to ease changes to different form states. :focus, :hover, onmouseenter, onmouseleave or something of the sort.
I'd also take advantage of the placeholder option and tabindex attributes. A good rule of thumb is the <label> is for the descriptor, the placeholder is for an example.
I won't touch base on the semantics too much, but I will say you should add ARIA support.
I like the idea, I just think the concept fell a little short. Providing a simple method of searching movie options in a easy to use form is great; however, the concept isn't used very often due to people being more visual these days, using forms for searchable content works in some cases, but I'm not entirely sure it is suitable for this particular instance.
Take a look at Flixter, I love the UI/UX on both the mobile and desktop versions. It's very intuitive and easy to navigate, seeing what's playing shows the actual movie cover instead of having a bland form to interact with.
Links list:
http://afarkas.github.io/webshim/demos/
http://nativemobile.com/mobile-playing-increasingly-vital-role-among-moviegoers-6253
https://github.com/NV/flying-focus
http://www.whatwg.org/specs/web-apps/current-work/multipage/the-button-element.html#the-datalist-element
http://afarkas.github.io/webshim/demos/#Forms-forms
http://html5please.com/#datalist
http://ivaynberg.github.io/select2/
http://necolas.github.io/normalize.css/
http://www.flixster.com/#/browse
http://caniuse.com/
http://html5please.com/
http://www.w3.org/WAI/intro/aria
http://jsfiddle.net/darcher/9u45Q/60/ | {
"domain": "codereview.stackexchange",
"id": 8152,
"tags": "javascript, jquery, css, html5, jquery-ui"
} |
Lengths and substitution in L-systems | Question: Am looking into writing up a Lindenmayer systems implementation. I've looked at a few example implementations and the one thing that's giving me trouble at this stage is how symbols and substitutions are meant to work (if this is even specified in the original treatise).
For example, the implementations I've seen tend in most cases to start with axioms based on A. Let's take the simplest example of this where the Axiom is (just) A.
When this is drawn (depth 0), what should I expect to see? If otherwise undefined, is A meant to render as anything at all? Or does it need to be defined as one full length (whichever symbol is used for this purpose; I have seen the pipe being used for it) before it will render out?
Taking a second example, let us say that A renders out, irrespective of what the answer to the above is. If part of F's definition is B, and B is not defined, should B render as a full length?
The way I would expect this to work is that without F being defined at all, nothing should be drawn at depth 0 and subsequently at no other depth, either.
Yet in L-systems Explorer (LSE) and in this web-based implementation, even where a symbol is not defined, it will still be rendered as a full length. The question is, why?
Answer: L-systems are more about the general notion of abstract rewriting systems than about any concrete application.
Agreed, a wide range of L-system applications is in computer graphics (generating nature, textures, etc.), but from a mathematical point of view they are still the same model.
As stated in the L-systems definition you referred to:
Using L-systems for generating graphical images requires that the symbols in the model refer to elements of a drawing on the computer screen.
Coming back to your question
Yet in (...) implementation, even where a symbol is not defined, it will still be rendered as a full length. The question is, why?
Answer: Because that's how those implementations' authors decided to map symbols to rendering commands.
You can use any other mapping in your implementation... and both can still be the same type of system - an L-system.
It's all about defining an isomorphism between the systems behind the implementations, or a homomorphism between those systems and the graphics they generate. If you make your own implementation that draws symbols differently, you will indirectly define your own homomorphism.
"domain": "cstheory.stackexchange",
"id": 1081,
"tags": "pl.programming-languages, grammars, term-rewriting-systems"
} |
JQuery Slide Show Simplification | Question: I'm trying to come up with the most basic example of making a JQuery slide show where you click on the current image you're viewing and you cycle through a gallery of photos. I know its probably not the most basic example, because if I want to add a new image I have to code more JQuery. Is there a more abstract approach where I don't have to code JQuery in terms of div id's and let classes take care of the work?
Here is my JQuery
$(document).ready(function() {
$("#pic1").click(function() {
$("#pic1").hide();
$("#pic2").show();
});
$("#pic2").click(function() {
$("#pic2").hide();
$("#pic3").show();
});
$("#pic3").click(function() {
$("#pic3").hide();
$("#pic1").show();
});
});
The rest is here.
http://jsfiddle.net/XjdTX/3/
Answer: This should do the trick:
http://jsfiddle.net/XjdTX/11/
I changed the <div id="slideframe">...</div> to use a class instead. This will allow you to have multiple slide shows functioning off the same code, as shown in the jsFiddle. For toggling through the pics, the pic class is used and cycles based on the index of the clicked image.
There is also code in the jsFiddle that will fade the images instead of just toggling their visibility.
$(document).ready(function() {
$('.slideframe').each(function() {
var $pics = $(this).find('.pic'),
max = $pics.length;
$pics.on('click.slideframe', function(e) {
var idx = $pics.index(this);
if (idx < 0) {
return false;
}
$(this).hide();
$pics.eq(++idx % max).show();
});
});
}); | {
"domain": "codereview.stackexchange",
"id": 2819,
"tags": "javascript, jquery, html, html5"
} |
Entropy of water drop | Question: The second law of thermodynamics states that the total entropy of an isolated system can never decrease over time. However, if the drop of water falls into water, splashes decrease and disappear over time, which looks like a decrease in entropy. How does it comply with the second law?
Answer: It seems to me that the potential energy of the original drop is ultimately converted to internal energy of the pond water (including the original drop) and the surrounding air (and everything else surrounding). So the entropy of everything afterwards is a little higher than before, the increase being roughly equal to the potential energy of the original drop divided by the absolute temperature of the pond, air, and greater surroundings. So, in this essentially isolated large-scale system, as expected, when a spontaneous process takes place, the entropy of the system increases.
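To put a rough number on that energy-over-temperature estimate, here is a back-of-the-envelope calculation using $\Delta S \approx mgh/T$; the drop mass, fall height, and temperature are assumed illustrative values, not figures from the answer:

```python
# Back-of-the-envelope entropy increase when a falling drop's
# potential energy is dissipated into the pond as heat.
g = 9.81    # m/s^2, gravitational acceleration
m = 5e-5    # kg, ~0.05 g drop (assumed)
h = 1.0     # m, fall height (assumed)
T = 293.0   # K, temperature of pond and surroundings (assumed)

Q = m * g * h      # J, dissipated potential energy
delta_S = Q / T    # J/K, entropy increase of the surroundings
print(f"dS ~ {delta_S:.2e} J/K")  # tiny but strictly positive
```

The result is on the order of microjoules per kelvin: far too small to notice, but positive, as the second law requires.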
"domain": "physics.stackexchange",
"id": 49568,
"tags": "thermodynamics, entropy"
} |
Is a perturbation of a tensor field a tensor field? | Question: Let say I take some $2$-tensor field $T_{\mu\nu}$ on some pseudo-Riemannian manifold. Now, often, we are interested in its linearization, which means that we take a family of tensor fields $T_{\mu\nu}(t)$ such that $T_{\mu\nu}(0)=T_{\mu\nu}$. Then, we expand in a Taylor series which yields
$$T_{\mu\nu}(t)=T_{\mu\nu}+t T_{\mu\nu}^{\prime}+\mathcal{O}(t^{2})$$
where the perturbation or linearization is defined by $T_{\mu\nu}^{\prime}:=\partial_{t}T_{\mu\nu}(t)\vert_{t=0}$. How do I show that $T_{\mu\nu}^{\prime}$ is a tensor field? I know that it has to be a tensor field, since this is for example used in linearized gravity, where one takes $T$ to be the metric $g$ and derives the linearized Einstein equations for the perturbation $g^{\prime}$.
My attempt:
If we ignore all terms of order $\mathcal{O}(t^{2})$, we get
$$T^{\prime}_{\mu\nu}\propto T_{\mu\nu}(t)-T_{\mu\nu},$$
which is the difference between two tensor fields, however, I don't think that I am in general allowed to ignore all additional terms of higher orders....
Answer: Yes, in fact perturbations of all orders, $\frac{d^k}{dt^k}(T(t))$ are still tensor fields. Let me first address this in more generality, then provide special cases, and other ways of thinking about this.
Let $(E,\pi,M)$ be a smooth vector bundle. Recall that a section of this vector bundle is by definition a mapping $\psi:M\to E$ such that $\pi\circ \psi=\text{id}_M$ (so far I only gave the definition of a section; you can also define smooth sections by requiring $\psi$ to be smooth). More intuitively, a vector bundle means you have a base manifold $M$ (think spacetime, or a configuration space of some mechanical system), and at each point $x\in M$, we have a vector space $E_x$ “attached at the point $x$”, and that the family of vector spaces $\{E_x\}_{x\in M}$ “vary smoothly” as you vary $x$. A section just means for each point $x\in M$, you have a vector $\psi_x\in E_x$ “attached at the point $x$”. The most important special case is when $E_x=T_xM$ is the tangent space to the manifold at the point $x$, in which case $E=TM$ is the tangent bundle.
Now, let us say we have a smooth one-parameter family of sections of $(E,\pi,M)$, $t\mapsto \Psi(t)$. So, for each $t\in\Bbb{R}$ and each point $x\in M$, we have a vector $\Psi_x(t)\in E_x$ (we can formulate smoothness in terms of $t$ alone, or jointly in $(t,x)$). Now, fix the point $x$, and let us vary $t$; so we have the mapping $\Psi_x:\Bbb{R}\to E_x$, whose value at a parameter $t$ is the vector $\Psi_x(t)\in E_x$. This is a very simple type of mapping because the domain is simply $\Bbb{R}$, and the target space is $E_x$, which is a real vector space. So, we can differentiate as usual (up to any order we like) to get the mapping $\frac{d^k\Psi_x}{dt^k}:\Bbb{R}\to E_x$, which we can further evaluate at $t=0$ you get the vector $\frac{d^k\Psi_x}{dt^k}\big\rvert_{t=0}\in E_x$. Since we have a vector at each point $x$, we have a section $\frac{d^k\Psi}{dt^k}\big\rvert_{t=0}$ of $E$, as desired. Thus, we have shown that derivatives with respect to the parameter still give us sections (and if you initially assume smoothness with respect to $(t,x)$, then that smoothness is still preserved).
In your special case, we’re considering $E=T^0_2(TM)$, the $(0,2)$ tensor bundle on the manifold. Sections of this bundle are by definition the $(0,2)$ tensor fields, so everything above can be applied here. The bottom line is that once you fix the point $x$, you’re working within a single vector space so it is basic calculus as usual. In particular with $k=1$, you see that the linearization of any tensor field is still a tensor field
If you want to think in terms of components and transformation laws, you can do that as well. Given a smoothly varying 1-parameter family of (say) $(0,2)$ tensor fields $T(t)$, in terms of a coordinate chart, we can write it as $T(t)=T_{\mu\nu}(t,x)\,dx^{\mu}\otimes dx^{\nu}$. We can then differentiate the components $T_{\mu\nu}(t,x)$ with respect to $t$ as many times as we like, and evaluate at $0$ to get $\frac{\partial^kT_{\mu\nu}}{\partial t^k}(0,x)$. Now, you can change coordinates and verify that the transformation laws still hold. This is because the $\mu\nu$ transformation law of the tensor is “not bothered” by the $t$-dependence. Explicitly, we have the following equation because each $T(t)$ is a $(0,2)$ tensor field
\begin{align}
T_{\mu\nu}(t,x)&=\tilde{T}_{\alpha\beta}(t,y)\frac{\partial y^{\alpha}}{\partial x^{\mu}}\frac{\partial y^{\beta}}{\partial x^{\nu}}.
\end{align}
Now, since the coordinate changes do not depend on the parameter $t$, we can differentiate both sides with respect to $t$ as many times as we wish, then evaluate at $0$ and the same relation holds:
\begin{align}
\frac{\partial^kT_{\mu\nu}}{\partial t^k}(0,x)&=\frac{\partial^k\tilde{T}_{\alpha\beta}}{\partial t^k}(0,y)\frac{\partial y^{\alpha}}{\partial x^{\mu}}\frac{\partial y^{\beta}}{\partial x^{\nu}}.
\end{align}
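Numerically, this commutation is easy to see: with a $t$-independent Jacobian, differentiating the transformed components in $t$ agrees with transforming the differentiated components. A small finite-difference sketch (the point, the tensor components, and the polar-to-Cartesian coordinate change are all arbitrary illustrations):

```python
import math

def jacobian(r, theta):
    # Jacobian of the polar -> Cartesian change of coordinates; it depends
    # on position but NOT on the parameter t
    return [[math.cos(theta), -r * math.sin(theta)],
            [math.sin(theta),  r * math.cos(theta)]]

def T(t):
    # components of an illustrative (0,2) tensor at one fixed point
    return [[1 + t**2, math.sin(t)],
            [math.exp(t), 2 * t]]

def transform(J, M):
    # contract both indices against the Jacobian: M'_{ab} = M_{mn} J_{ma} J_{nb}
    return [[sum(M[m][n] * J[m][a] * J[n][b] for m in range(2) for n in range(2))
             for b in range(2)] for a in range(2)]

def ddt(F, t, h=1e-6):
    # entrywise central finite difference in the parameter t
    Fp, Fm = F(t + h), F(t - h)
    return [[(Fp[i][j] - Fm[i][j]) / (2 * h) for j in range(2)] for i in range(2)]

J = jacobian(2.0, 0.7)                           # arbitrary point
lhs = ddt(lambda t: transform(J, T(t)), 0.5)     # d/dt of the transformed components
rhs = transform(J, ddt(T, 0.5))                  # transform of the d/dt components
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-5 for i in range(2) for j in range(2))
```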
This is the component proof that perturbations of all orders of tensor fields are again tensor fields (the smoothness with respect to $x$ being the same as before). | {
"domain": "physics.stackexchange",
"id": 93048,
"tags": "differential-geometry, metric-tensor, tensor-calculus, perturbation-theory, linearized-theory"
} |
Maxwell equations and quantized electromagnetic field | Question: When the electromagnetic field is quantized for a single mode, we first take the Maxwell equations and proceed to write the electric and magnetic field as a stationary wave, since we consider the electromagnetic field to be in a cavity. (Introductory quantum optics, Gerry and Knight).
After imposing some commutation rules, the electric and magnetic fields are written as operators.
Classically, the electric and magnetic field are wave functions, since they are solution of the wave equation and Maxwell equations.
So, they start classically as a wave function but when quantized they become observables. How's that possible? I can understand the position and momenta to be observables when quantized since those are the quantities we want to measure, but when talking about the electromagnetic field we are talking about the entity itself. I don't know if I'm just overthinking or it's really strange that a wave function becomes an observable.
$$\langle \psi| \hat{E}| \psi\rangle $$ is the expected value of the electric field but now there is another object taking the role of wave function, $\psi$. How can we understand this wave function $\psi$ in the classical description?
Answer: You're confusing the technical term wavefunction (i.e. a function $\psi:M\to\mathbb C$ defined on some configuration space $M$ which obeys a Schrödinger equation and which gives the probability of finding the system in some patch $N\subseteq M$ as $\int_N|\psi(q)|^2\mathrm d\mu(q)$) with the much more loosely-defined "function which obeys some form of linear wave equation". The electric field is the latter but it is definitely not the former. There is nothing crazy about quantizing it.
Quantization means taking our observables, which used to have definite values, and making them operators on some suitable Hilbert space. For the electromagnetic field, our observable was a function of position, so what you get is an operator-valued function of position. Nothing all that mysterious, really. If you find the idea weird, though, then welcome to quantum field theory!
In going forward, it's a lot easier to detach the wavefunction from its pedestal of The Description of a given quantum system. In general, you simply have a quantum state $|\psi⟩$ which lives in some abstract Hilbert space $\mathcal H$, and it doesn't need to represent a function of position. The wavefunction is simply one possible representation of the state, $⟨x|\psi⟩$, when that makes sense. When it doesn't, the state is just the state.
Doing this will slightly blunt the pain and confusion that you'll feel when the time comes for you to see quantum field theorists turn the actual wavefunction (i.e. $\psi(\mathbf r)$) into an operator $\hat \psi(\mathbf r)$ as well. | {
"domain": "physics.stackexchange",
"id": 25678,
"tags": "quantum-mechanics, electromagnetic-radiation, quantum-electrodynamics, quantum-optics"
} |
Why does moving air feel colder? | Question: If temperature is just the average kinetic energy of particles, why would moving air feel colder rather than warmer?
Answer: If the air were still, body heat would warm a thin layer of air next to the skin. This warm air would stay near the skin, separating it from the cold air. Wind, however, continuously blows away this warm bit of air, replacing it with the colder surrounding air. There's a similar effect on humidity. Evaporating sweat increases the humidity right next to the skin, decreasing the rate of evaporation. Wind removes this humid air and replaces it with the less humid surrounding air. This is why a fan can cool a person down even by blowing hot air at them.
I've also heard stories from soldiers driving tanks in the desert that remaining still can make 120$^\circ$F (49$^\circ$C) days more tolerable. Their bodies create a layer of 98$^\circ$F (37$^\circ$C) air next to their skin. | {
"domain": "physics.stackexchange",
"id": 67366,
"tags": "thermodynamics, temperature, everyday-life, air, evaporation"
} |
Rviz not displaying marker on first call of publish(marker) | Question:
Using the attached code for some reason the marker does not appear with the first call of publish() without a small delay inserted. If I echo the visualization_marker topic without the delay it does not display the msg associated with the first publish(), only the subsequent msgs. With the delay it displays all the msgs. Is there something in my code causing this or am I missing something as to why there would need to be a delay between setting the msg parameters and publishing it?
This is being run on a ubuntu 16.04.7, kernel 4.15.0-142 using VMWare 17, with ROS Kinetic
Thanks
#include <ros/ros.h>
#include <visualization_msgs/Marker.h>
int main( int argc, char** argv )
{
ros::init(argc, argv, "add_markers");
ros::NodeHandle n;
visualization_msgs::Marker marker;
ros::Publisher marker_pub = n.advertise<visualization_msgs::Marker>("visualization_marker", 1);
uint32_t shape = visualization_msgs::Marker::CUBE;
marker.header.frame_id = "map";
marker.header.stamp = ros::Time::now();
marker.ns = "add_markers";
marker.id = 0;
marker.type = shape;
marker.action = visualization_msgs::Marker::ADD;
marker.pose.position.x = 4; // Pick Up position 4, 4
marker.pose.position.y = 4;
marker.pose.position.z = 0;
marker.pose.orientation.x = 0.0;
marker.pose.orientation.y = 0.0;
marker.pose.orientation.z = 0.0;
marker.pose.orientation.w = 1.0;
marker.scale.x = 0.25;
marker.scale.y = 0.25;
marker.scale.z = 0.25;
marker.color.r = 0.0f;
marker.color.g = 1.0f;
marker.color.b = 0.0f;
marker.color.a = 1.0;
marker.lifetime = ros::Duration();
ros::Duration(0.1).sleep(); // Why is this needed?
marker_pub.publish(marker);
ROS_INFO("Marker Placed");
ros::Duration(3.0).sleep();
marker.color.a = 0;
marker_pub.publish(marker);
ROS_INFO("Marker Hide");
ros::Duration(3.0).sleep();
marker.pose.position.x = 3.5;
marker.pose.position.y = -4;
marker.color.a = 1.0;
marker_pub.publish(marker);
ROS_INFO("Marker Reappear");
ros::Duration(2.0).sleep();
return 0;
}
Originally posted by jtrubatch on ROS Answers with karma: 3 on 2023-02-08
Post score: 0
Answer:
There must be a delay. It takes time for the subscribers to be notified that the publisher has been created, and then more time for the subscriber to establish a network connection. The ROS policy for a publisher that has no active subscribers is to simply discard the message. This policy makes sense because a subscriber may never connect.
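To make the policy concrete, here is a toy model in plain Python (not ROS code — `ToyPublisher` is invented for illustration): a message published before any subscriber has finished connecting is simply dropped.

```python
class ToyPublisher:
    """Mimics the policy: messages go only to currently connected subscribers."""
    def __init__(self):
        self.subscribers = []

    def publish(self, msg):
        for callback in self.subscribers:   # no subscribers -> message is discarded
            callback(msg)

pub = ToyPublisher()
pub.publish("first marker")                 # lost: nobody is connected yet

received = []
pub.subscribers.append(received.append)     # a subscriber finishes connecting
pub.publish("second marker")

assert received == ["second marker"]        # the first message is gone for good
```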
Originally posted by Mike Scheutzow with karma: 4903 on 2023-02-09
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by jtrubatch on 2023-02-09:
Got it, thanks! | {
"domain": "robotics.stackexchange",
"id": 38270,
"tags": "ros, rviz, ros-kinetic, ubuntu, markers"
} |
How do two waves after approaching and neutralising each other give birth to two waves the very next moment? | Question:
My question is: after the third event (in the picture), how do the waves originate again?
An obvious answer might be that two waves simply propagate and just add up when they interfere with each other. But if we take a closer look, in the third position, all the velocities are perfectly cancelled out. Then why should another set of waves originate?
Wouldn't it be mysterious to an observer who starts observing from the third position?
Can anyone give me a mechanical interpretation? Where am I making the mistake?
Answer: At the third position the different parts of the string still have vertical velocities that aren't shown.
The wave equation is second order in time, so you need to specify the vertical velocities of each part. Not just the position. | {
"domain": "physics.stackexchange",
"id": 57088,
"tags": "classical-mechanics, waves"
} |
Problems with robotino_node when using roslaunch | Question:
Hello,
When I run roslaunch robotino_node robotino_node.launch, I get the following output which indicates that robotino_node & robotino_odometry_node are missing (when I look in robotino-ros-pkg/robotino_node/bin, there too they are missing):
SUMMARY
========
PARAMETERS
* /robot_description: <?xml version="1....
* /robot_state_publisher/publish_frequency: 20.0
* /robotino_camera_node/cameraNumber: 0
* /robotino_camera_node/hostname: 172.26.1.1
* /robotino_laserrangefinder_node/hostname: 172.26.1.1
* /robotino_laserrangefinder_node/laserRangeFinderNumber: 0
* /robotino_mapping_node/hostname: 172.26.1.1
* /robotino_node/downsample_kinect: True
* /robotino_node/hostname: 172.26.1.1
* /robotino_node/leaf_size_kinect: 0.04
* /robotino_node/max_angular_vel: 3.0
* /robotino_node/max_linear_vel: 0.5
* /robotino_node/min_angular_vel: 0.1
* /robotino_node/min_linear_vel: 0.05
* /robotino_odometry_node/hostname: 172.26.1.1
* /rosdistro: indigo
* /rosversion: 1.11.9
NODES
/
robot_state_publisher (robot_state_publisher/state_publisher)
robotino_camera_node (robotino_node/robotino_camera_node)
robotino_laserrangefinder_node (robotino_node/robotino_laserrangefinder_node)
robotino_mapping_node (robotino_node/robotino_mapping_node)
robotino_node (robotino_node/robotino_node)
robotino_odometry_node (robotino_node/robotino_odometry_node)
auto-starting new master
process[master]: started with pid [11321]
ROS_MASTER_URI=http://localhost:11311
setting /run_id to 10ff797c-8d13-11e4-8424-001e6537844e
process[rosout-1]: started with pid [11334]
started core service [/rosout]
ERROR: cannot launch node of type [robotino_node/robotino_node]: can't locate node [robotino_node] in package [robotino_node]
ERROR: cannot launch node of type [robotino_node/robotino_odometry_node]: can't locate node [robotino_odometry_node] in package [robotino_node]
process[robotino_laserrangefinder_node-4]: started with pid [11351]
process[robotino_camera_node-5]: started with pid [11368]
process[robot_state_publisher-6]: started with pid [11380]
/opt/ros/indigo/lib/robot_state_publisher/state_publisher
[ WARN] [1419607376.930340318]: The 'state_publisher' executable is deprecated. Please use 'robot_state_publisher' instead
[ WARN] [1419607376.934934125]: The root link base_link has an inertia specified in the URDF, but KDL does not support a root link with an inertia. As a workaround, you can add an extra dummy link to your URDF.
process[robotino_mapping_node-7]: started with pid [11418]
[ INFO] [1419607378.754853544]: LaserRangeFinder0 disconnected from Robotino.
[ INFO] [1419607379.158983565]: Mapping disconnected from Robotino.
[ INFO] [1419607379.495352459]: Camera0 disconnected from Robotino.
Can someone help?
Originally posted by sam3891 on ROS Answers with karma: 46 on 2014-12-26
Post score: 0
Answer:
I had checked out robotino-ros-pkg from svn and run 'rosmake robotino'. This is when I got the aforementioned error. But I managed to solve it by excluding all occurrences of KinectROS.cpp which was causing the problems. So now when i run 'rosmake robotino', it generates all executables.
Originally posted by sam3891 with karma: 46 on 2015-01-07
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 20436,
"tags": "ros, roslaunch, robotino"
} |
Justification of root mean square | Question: In the top answer to the question Why do we use Root Mean Square (RMS) values when talking about AC voltage, the following was stated:
This RMS is a mathematical quantity (used in many math fields) used to compare both alternating and direct currents (or voltage). In other words (as an example), the RMS value of AC (current) is the direct current which when passed through a resistor for a given period of time would produce the same heat as that produced by alternating current when passed through the same resistor for the same time.
(By Waffle's Crazy Peanut)
The RMS value, specifically applied to a sinusoidal voltage source $V_\mathrm{p}$ is given by:
$$V_{\mathrm{RMS}} = {V_\mathrm{p} \over {\sqrt 2}}$$
Here is where my intuition conflicts (where it probably goes off).
I'd imagine the average voltage that would be "felt" (direction insignificant / i.e. absolute value) by the circuit would equate to the true average value of the voltage, which is given by integrating over a half period and dividing by the length of that half period (the mathematical procedure of finding the average height of a function over a given interval)
I.e. the avg. Voltage, it seems to me, should be given by the equation:
$$V_{\mathrm{avg.}} = {2V_\mathrm{p} \over {π}}$$
I know that the two conversion coefficients are close but I simply cannot see why the RMS value is the one that conforms to reality. Please enlighten me! :)
Answer: The instantaneous power expended in a resistor is proportional to $V^2$ (i.e. independent of its direction), so the average power expended is given by the mean square voltage!
The square of the average absolute voltage will not yield the power expended in the resistor.
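Both conversion coefficients, and the fact that only the RMS one reproduces the average power, are easy to verify numerically with a Riemann sum over one period (illustrative values for $V_p$ and $R$):

```python
import math

Vp, R, N = 1.0, 10.0, 100000
v = [Vp * math.sin(2 * math.pi * i / N) for i in range(N)]   # one full period

mean_abs = sum(abs(x) for x in v) / N            # "felt" average voltage
rms = math.sqrt(sum(x * x for x in v) / N)       # root mean square

assert abs(mean_abs - 2 * Vp / math.pi) < 1e-3   # 2Vp/pi ≈ 0.637
assert abs(rms - Vp / math.sqrt(2)) < 1e-3       # Vp/sqrt(2) ≈ 0.707

# average power in a resistor is <v^2>/R, which equals Vrms^2/R by construction,
# and is NOT reproduced by (mean |v|)^2 / R
mean_power = sum(x * x for x in v) / N / R
assert abs(mean_power - rms ** 2 / R) < 1e-9
assert abs(mean_power - mean_abs ** 2 / R) > 1e-3
```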
Edit: And actually this close-to-duplicate question does have more extensive answers (though not the top one!) that essentially make the same point.
Why do we use Root Mean Square (RMS) values when talking about AC voltage
i.e. for a given resistance in an AC circuit with a peak voltage $V_p$, the power expended is $P = V_{p}^{2}/2R = V_{\rm RMS}^{2}/R$, where $V_{\rm RMS}=V_{p}/\sqrt{2}$. | {
"domain": "physics.stackexchange",
"id": 16945,
"tags": "electricity, electric-circuits, voltage"
} |
How to contribute python versions of tutorials? e.g. Tutorials for arm_navigation | Question:
I sometimes find myself converting the C++ versions of the tutorials into python.
e.g. http://code.google.com/p/gt-ros-pkg/source/browse/trunk/hrl/advait_sandbox/arm_navigation_tutorials/src/arm_navigation_tutorials/
What, if any, would be a good way to add links to python sample code to the tutorials?
Originally posted by Advait Jain on ROS Answers with karma: 140 on 2011-03-05
Post score: 0
Answer:
I can think of two things you can do:
First, you can make a wiki page for your python tutorial. The general ROS tutorials simply put up two versions for the same tutorial:
http://www.ros.org/wiki/ROS/Tutorials
For example,
Writing a Simple Publisher and Subscriber (Python)
Writing a Simple Publisher and Subscriber (C++)
Second, you can contact the stack maintainer, and maybe they'll include your tutorial package for release along with the rest of the stack.
Originally posted by Ivan Dryanovski with karma: 4954 on 2011-03-05
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 4953,
"tags": "ros, python, arm-navigation, tutorials"
} |
ROS beginner tutorials building nodes issue | Question:
I am working through the ROS begnner tutorials, specifically on this section I am running into an issue:
http://wiki.ros.org/ROS/Tutorials/WritingPublisherSubscriber%28c%2B%2B%29
I am using ROS kinetic and Ubuntu16.04 LTS.
In section 3, I have added the extra lines below to the CMakeLists.txt file:
add_executable(talker src/talker.cpp)
target_link_libraries(talker ${catkin_LIBRARIES})
add_dependencies(talker beginner_tutorials_generate_messages_cpp)
add_executable(listener src/listener.cpp)
target_link_libraries(listener ${catkin_LIBRARIES})
add_dependencies(listener beginner_tutorials_generate_messages_cpp)
Once these are added, I have saved the .txt file and have then run the following command to add dependencies for the executable targets to message generation targets:
add_dependencies(talker beginner_tutorials_generate_messages_cpp)
Note that the above command had to be edited to escape the brackets; it now looks like below:
add_dependencies(talker beginner_tutorials_generate_messages_cpp)
However, when I run this command I get the following error:
add_dependencies(talker: command not found
I am unsure why this is, I have seen similar issues on forums such as here https://answers.ros.org/question/67415/beginner-tutorial-catkin_make-error-directory-not-found/?answer=257676?answer=257676#post-id-257676, but could not find a solution for mine, maybe I have missed the solution somewhere.
Has anyone come across this before and has a solution?
Below is my CMakeLists.txt file (note that I have put a . between the # so that the text does not look so large).
cmake_minimum_required(VERSION 2.8.3)
project(beginner_tutorials)
.#.# Find catkin and any catkin packages
find_package(catkin REQUIRED COMPONENTS
roscpp
rospy
std_msgs
message_generation
)
.#.#. Declare ROS messages and services
add_message_files(
FILES
Num.msg
)
add_service_files(
FILES
AddTwoInts.srv
)
.#.# Generate added messages and services
generate_messages(
DEPENDENCIES
std_msgs
)
.#.# Declare a catkin package
catkin_package(
CATKIN_DEPENDS message_runtime
)
.#.# Build talker and listener
include_directories(
${catkin_INCLUDE_DIRS}
)
add_executable(
talker src/talker.cpp
)
target_link_libraries(
talker ${catkin_LIBRARIES}
)
add_dependencies(
talker beginner_tutorials_generate_messages_cpp
)
add_executable(
listener src/listener.cpp
)
target_link_libraries(
listener ${catkin_LIBRARIES}
)
add_dependencies(
listener beginner_tutorials_generate_messages_cpp
)
Originally posted by jimc91 on ROS Answers with karma: 29 on 2020-05-08
Post score: 0
Original comments
Comment by gvdhoorn on 2020-05-08:
Note that I have put a . between the # so as the text does not look so large.
that is because you are using >, which are for quoting ordinary text, not code (or build scripts, or terminal copy-pastes, etc).
If you want to format blocks verbatim (as would be needed for code, console text, etc), then paste the text into your question, select all the lines and press the Preformatted Text button (the one with 101010 on it), or press ctrl+k. That should format everything correctly.
Answer:
I have [..] then run the following command to add dependencies for the executable targets to message generation targets:
add_dependencies(talker beginner_tutorials_generate_messages_cpp)
Similar to your other question (#q351468): what you show is not a command you run in the terminal, but another statement to add to the CMakeLists.txt of the package.
From the CMakeLists.txt you show it would appear it's already there, so things should be OK like this.
Originally posted by gvdhoorn with karma: 86574 on 2020-05-08
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 34923,
"tags": "ros-kinetic, ubuntu, cmake, ubuntu-xenial"
} |
Synchronized Queue Wrapper C++11 | Question: I am trying to write a SynchronizedQueue wrapper class to provide a simple synchronized interface to underlying standard std::queue.
Could you have a review and let me know if any pitfalls and improvements you see ?
Too many mutexes/locks? I wanted to use condition variables to wait rather than mutexes for push & pop notifications.
Any insight is highly appreciated, as I have rarely written multi-threaded programs.
#pragma once
#include <mutex>
#include <condition_variable>
#include <queue>
#include <cstdint>
template<typename T>
class SynchronizedQueue
{
public:
SynchronizedQueue(size_t maxItems)
: synchronizedQueue_()
, queueMutex_()
, pushMutex_()
, popMutex_()
, pushCV_()
, popCV_()
, MAX_ITEMS(maxItems)
{
}
void push(const T& item)
{
std::unique_lock<std::mutex> scopedLock(pushMutex_);
while (isFull())
{
std::unique_lock<std::mutex> pushLock(queueMutex_);
popCV_.wait(pushLock);
}
{
std::unique_lock<std::mutex> scopedLock(queueMutex_);
synchronizedQueue_.push(item);
pushCV_.notify_one();
}
}
T pop()
{
std::unique_lock<std::mutex> scopedLock(popMutex_);
while (isEmpty())
{
std::unique_lock<std::mutex> scopedLock(queueMutex_);
pushCV_.wait(scopedLock);
}
T t;
{
std::unique_lock<std::mutex> scopedLock(queueMutex_);
t = synchronizedQueue_.front();
synchronizedQueue_.pop();
popCV_.notify_one();
}
return t;
}
private:
bool isEmpty()
{
std::unique_lock<std::mutex> scopedLock(queueMutex_);
return synchronizedQueue_.empty();
}
bool isFull()
{
std::unique_lock<std::mutex> scopedLock(queueMutex_);
return synchronizedQueue_.size() == MAX_ITEMS;
}
std::queue<T> synchronizedQueue_;
std::mutex queueMutex_, pushMutex_, popMutex_;
std::condition_variable pushCV_, popCV_;
size_t MAX_ITEMS;
};
Answer: Synchronization
pushMutex_ and popMutex_ aren't needed. Even worse, they prevent other threads from waiting on a condition variable! Just lock queueMutex_ instead (that also prevents having to re-lock queueMutex_).
Assume that the queue is full, and thread 1 and 2 both want to push an item into the queue. Thread 1 was faster and got the lock on pushMutex_, so it proceeds to wait on pushCV_. Thread 2 in the meanwhile spins trying to lock pushMutex_ - which is still held by thread 1!
Avoid using std::condition_variable::notify_one() while still holding the corresponding lock! The notified thread will wake up, and the first action it has to do is reacquire the lock that is still held by the caller of notify_one(), so it will instantly block again (this time on acquiring the lock though, not wait()).
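For comparison, here is a sketch of the single-shared-lock design in Python rather than C++ (purely illustrative; note that Python's threading.Condition must be notified while its lock is held, so the C++ advice about notifying outside the lock does not carry over literally):

```python
import threading
from collections import deque

class BoundedBlockingQueue:
    """One lock shared by two condition variables, per the advice above."""
    def __init__(self, max_items):
        self.items = deque()
        self.max_items = max_items
        lock = threading.Lock()
        self.not_full = threading.Condition(lock)
        self.not_empty = threading.Condition(lock)

    def push(self, item):
        with self.not_full:
            while len(self.items) >= self.max_items:
                self.not_full.wait()
            self.items.append(item)
            self.not_empty.notify()      # wake one waiting consumer

    def pop(self):
        with self.not_empty:
            while not self.items:
                self.not_empty.wait()
            item = self.items.popleft()
            self.not_full.notify()       # wake one waiting producer
            return item

q = BoundedBlockingQueue(max_items=2)
results = []
consumer = threading.Thread(target=lambda: [results.append(q.pop()) for _ in range(4)])
consumer.start()
for i in range(4):                       # push blocks whenever the queue is full
    q.push(i)
consumer.join()
assert results == [0, 1, 2, 3]
```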
Implementation
The outer scopedLock in push() and pop() gets shadowed by the inner scopedLock. This can lead to extra confusion in already complex code!
Since MAX_ITEMS isn't supposed to change once the object is initialized, consider marking it as const. (It can still be set with the initialization list of the constructor, but not changed afterwards.)
Further considerations
Maybe add fast returning bool try_push(const T&) and bool try_pop(T&) member functions, so users of the queue don't have to wait if they don't need to? | {
"domain": "codereview.stackexchange",
"id": 28287,
"tags": "c++, c++11, multithreading, thread-safety"
} |
Why do unstable nuclei form? | Question: Why do unstable nuclei form? Is it that we simply find unstable nuclei in nature and understand what these nuclei do in order to become more stable?
I feel like textbooks gloss over this question when addressing radioactivity.
Answer: There are a few different ways that unstable nuclei are produced:
Nuclear fusion is quite a common way to produce unstable nuclei in nature. At high enough energies, stable nuclei can fuse together to create unstable ones. For example, one step of one of the usual hydrogen-burning sequences in stars combines a helium-3 nucleus and a helium-4 nucleus (both of which are stable) into a beryllium-7 nucleus, which is unstable, with a half-life of roughly 53 days. Stars generally use nuclear fusion to produce most elements from boron up to roughly iron in their lifetimes. It's also how we produce many of the heavy synthetic elements in the laboratory, when we collide ion beams with a fixed target.
Neutron capture can also turn a stable nucleus into an unstable one. Since neutrons are uncharged, they are unaffected by the Coulomb repulsion of the protons of the nucleus and can incorporate themselves rather easily into even a stable nucleus at the right energy. Even common building materials like concrete and steel can become radioactive in the presence of enough neutron radiation at the right energy. Neutron capture can even induce nuclear fission, and in fact this is the mechanism by which nuclear fission reactors operate. Oftentimes these reactors are kickstarted using a "neutron gun" which injects neutrons of the right energy into the reactor core.
Neutron capture plays a prominent role in the creation of even heavier elements, in more extreme conditions like supernovae, neutron star mergers, and other cataclysmic events. In a stellar nucleosynthesis process such as the r process (short for "rapid neutron capture process"), seed nuclei capture neutrons to move to heavier and heavier masses. Those heavy isotopes are unstable, and beta decay toward stability. Most of those heavy nuclei have extremely-short half-lives, but some r-process nuclei are long-lived enough to be found on Earth.
Decay of other unstable nuclei is a rather obvious one, but still needs to be included as it's a distinct process. Most of the nuclei we see on Earth with short half-lives are themselves the decay products of unstable nuclei with longer half-lives. For example, the radon gas that accumulates in basements is one of the decay products of uranium-238 that has been in the soil basically since the Earth was formed.
Neutrino interactions are a tiny, but notable, contribution to nucleosynthesis. A high-energy neutrino has a small, but nonzero, probability of knocking a proton or neutron out of a nucleus. In supernovae, there is an absolutely staggering number of neutrinos produced (obligatory xkcd what-if: https://what-if.xkcd.com/73/); since there are so many high-energy neutrinos flying around, there are actually a non-negligible number of neutrino-induced nuclear reactions that happen, and it's currently believed that neutrino-induced nucleosynthesis partly explains the observed abundances of some light odd-numbered nuclei like fluorine-19.
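As a small quantitative aside, the beryllium-7 half-life quoted above (roughly 53 days) translates into simple exponential decay:

```python
import math

half_life = 53.0                      # days, beryllium-7 (value quoted above)
lam = math.log(2) / half_life         # decay constant

def fraction_remaining(t_days):
    return math.exp(-lam * t_days)

assert abs(fraction_remaining(53.0) - 0.5) < 1e-12    # one half-life
assert abs(fraction_remaining(106.0) - 0.25) < 1e-12  # two half-lives
# after one year, under 1% of the original beryllium-7 survives
assert fraction_remaining(365.0) < 0.01
```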
This is not necessarily an exhaustive list, but you'll notice that it contains both examples found in nature and examples produced in the laboratory. | {
"domain": "physics.stackexchange",
"id": 58209,
"tags": "nuclear-physics, radioactivity, stability, binding-energy"
} |
calibrating webcam | Question:
I'm trying to calibrate my webcam to remove distortions. I run
$ rosrun usb_cam usb_cam_node
Then this error and warning message is printed out:
[ERROR] [1384614141.059240716]: Unable to open camera calibration file [/home/robz/.ros/camera_info/head_camera.yaml]
[ WARN] [1384614141.060021982]: Camera calibration file /home/robz/.ros/camera_info/head_camera.yaml not found.
Not sure what that means. I wouldn't expect a calibration file to exist for a camera that I haven't been able to calibrate yet...
In any case, the node is still able to run despite the error and warning message. It publishes to a topic, and I can see it with image_view just fine. So then I run
$ ROS_NAMESPACE=usb_cam rosrun image_proc image_proc
Then, this error message is printed out:
[ERROR] [1384614331.794465524]: Rectified topic '/usb_cam/image_rect_color' requested but camera publishing '/usb_cam/camera_info' is uncalibrated
This is very confusing for me. I was under the impression that the image_proc node does the calibration. Why does it require a calibrated camera in order to run?
Originally posted by robzz on ROS Answers with karma: 328 on 2013-11-16
Post score: 1
Answer:
image_proc does not perform camera calibration. It uses a calibration to produce things like rectified images. To calibrate your camera, see the MonocularCalibration tutorial.
Originally posted by Dan Lazewatsky with karma: 9115 on 2013-11-16
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 16179,
"tags": "ros, image-pipeline"
} |
Deriving Drude Theory from Plasma Fluid Equations | Question: Does anyone have experience in looking at Drude theory from the perspective of plasma physics instead of the standard, condensed-matter, "electrons in a metal" sort of thing and can point the way? I'm a grad student working on this topic for my masters thesis and I've spent the last hour or so reading the "Plasma Fluid Theory" section of this website, but the math is going over my head and I feel lost. The Ph.D student who is helping me has said I should keep three things in mind:
The plasma is cold, $P = 0$
Density deformations are small, $\dfrac{dn}{dt} = 0$
The ions are stationary, microwave frequency >> ion plasma frequency
It seems like there are so many different ways to formulate a fluid theory of plasmas, and I can't grasp what the right starting point is.
Answer: I once derived something similar starting from the momentum equation of a multi-fluid model:
\begin{equation}
m_{\alpha}n_{\alpha}\left[\partial_{t}\mathbf{v}_{\alpha}+\left(\mathbf{v}_{\alpha}\cdot\nabla\right)\mathbf{v}_{\alpha}\right]=n_{\alpha}q_{\alpha}\left(\mathbf{E}+\mathbf{v}_{\alpha}\times\mathbf{B}\right)-\nabla\cdot\boldsymbol{P}_{\alpha}+\sum_{\beta}\mathbf{R}_{\alpha\beta}\label{eq:2fluid_1}
\end{equation}
Where $m$ and $q$ denote mass and charge of the fluid elements respectively.
This equation describes the conservation of momentum for each particle
species $\alpha$ (i.e. Newton's 2nd law). On the left hand
side we have inertial forces due to temporal or convective changes.
On the right hand side we have forces acting on the fluids. From left
to right these are the Lorentz force, internal forces described by
the pressure tensor (pressure or viscosity) and a term which allows
the momentum transfer from species $\alpha$ to species $\beta$.
Depending on your exact situation you can now cross terms out if they are not that important, like in your case the pressure. Since I am working on magnetic confinement in fusion devices, it was safe for me to neglect the whole left side since it is way smaller in these conditions than the $\mathbf{v}\times\mathbf{B}$ term on the right. Depending on your situation you may do the same.
If we assume that the plasma is made of electrons and one species of
ions, the term respecting transfer of momentum to electrons by electron-ion
collisions can be written as
\begin{equation}
\mathbf{R}_{ei}=-m_{e}n_{e}\nu_{ei}\left(\mathbf{v}_{e}-\mathbf{v}_{i}\right),
\end{equation}
where $\nu_{ei} = \tau_{ei}^{-1}$ is the collisionality or inverse collision time. This is called the friction force and already looks similar to a current $\mathbf{J}=-n_{e}e\left(\mathbf{v}_{e}-\mathbf{v}_{i}\right)$. With only electrons and ions you will have two versions of the momentum equation above, one for each species. This is usually called the 2-fluid model.
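As a sanity check of this route (with made-up unit-free numbers, not real plasma parameters), integrating the electron momentum equation with only the electric-field and friction terms relaxes onto the drift velocity $v=-eE/(m\nu)$, i.e. Ohm's law $J=\sigma E$ with exactly the combination of constants that appears below:

```python
# m dv/dt = -e E - m nu v : cold electrons, stationary ions, no B, no pressure
m, e, E, nu, n = 1.0, 1.0, 2.0, 5.0, 3.0   # made-up unit-free values
v, dt = 0.0, 1e-4
for _ in range(200000):                     # integrate well past 1/nu
    v += dt * (-e * E / m - nu * v)

v_drift = -e * E / (m * nu)                 # steady state of the equation above
assert abs(v - v_drift) < 1e-6

sigma = n * e ** 2 / (m * nu)               # Drude-type conductivity
J = -n * e * v                              # electron current density
assert abs(J - sigma * E) < 1e-5            # Ohm's law J = sigma E
```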
I think you may use this as a starting point since there is now a current and the electric field in the equations and the combination of constants will make up a conductivity
\begin{equation}
\sigma =\frac{n_{e}e^{2}}{m_{e}\nu_{ei}},
\end{equation}
in the end. | {
"domain": "physics.stackexchange",
"id": 68858,
"tags": "fluid-dynamics, plasma-physics, research-level, metals"
} |
Does the velocity vector always point in the same direction as the momentum vector? | Question: I was told that the angular velocity vector does not always have to point in the same direction as the angular momentum vector. This is due to the fact that they are related by the equation $L=I \omega$. But in general, $I$ is a tensor and so the result might not be in the same direction as $\omega$. Because the equations for linear and angular motion are very symmetrical that leads me to ask -- does the velocity vector always point in the same direction as the momentum vector?
Answer:
Because the equations for linear and angular motion are very
symmetrical
In Newtonian mechanics, linear momentum is a vector while angular momentum is pseudo-vector which hints at its true nature as a higher rank tensor object.
In relativistic mechanics, four-momentum is a four-vector while angular momentum is a (rank 2) four-tensor.
So, the 'symmetry' isn't really there. Linear momentum and angular momentum are different kinds of objects.
does the velocity vector always point in the same direction as the
momentum vector
As far as I know, linear kinetic momentum and linear velocity are parallel in both classical (three-vector) mechanics
$$\mathbf p = m \mathbf v$$
and relativistic (four-vector) mechanics.
$$\mathbf P = m \mathbf U $$
However, as yuggib points out in a comment, the canonical momentum
$$\mathbf P = \frac{\partial \mathcal L}{\partial \dot {\mathbf q}}$$
is not generally parallel to $\dot {\mathbf q}$. For example, the canonical momentum of a non-relativistic charged particle is
$$\mathbf P = m\mathbf v + q\mathbf A = \mathbf p + q\mathbf A$$
where $\mathbf A$ is the magnetic vector potential. | {
"domain": "physics.stackexchange",
"id": 20753,
"tags": "newtonian-mechanics, momentum, tensor-calculus"
} |
Where can I find a complete and accurate table of CPK colours? | Question: I have been using several chemical drawing software packages, including MarvinSketch, Jmol, Accelrys DS Visualizer, Avogadro, etc., all of which I have set to CPK colouring. Despite this supposedly common setting I have noticed that fluorine atoms are coloured rather inconsistently from software to software. For example, MarvinSketch uses orange, Accelrys uses cyan (as does Avogadro), and Jmol uses algae green for fluorine atoms. I have tried Googling for the answer but I have only found incomplete tables, aside from the Wikipedia article, but I am dubious about using Wikipedia for this information due to its reputation for inaccuracy.
Answer: Indeed, this is annoying. Because of this problem, several years ago, one of the Jmol developers and I sat down and worked out a color scheme. It was similar to Accelrys, although we tried to make certain "known colors" match (e.g., rust-like for Fe, golden for Au, etc.)
Certainly, there's been an effort to keep color consistency between Jmol and Avogadro
These colors are open source and available, e.g., through Open Babel
If there are differences between Jmol and Avogadro, I'm sure both the Jmol developers and the Avogadro developers (including myself) would be happy to reconcile them. | {
"domain": "chemistry.stackexchange",
"id": 2021,
"tags": "software, color"
} |
Is it possible that a star is the center of a galaxy? | Question: I read that the black hole at the center of the galaxy has much less mass than the galaxy itself and that it is somehow held together by dark matter.
So now I am wondering if it is possible that there are galaxies with a star at the center?
Answer:
It won't stay in the center for long.
Galaxy nuclei are full of stars. Any star passing by will exchange momentum with the central star and will perturb its position. Stars of similar mass will be able to completely eject the central star from its privileged position.
It won't live this long.
Massive stars (and it has to be massive, see point 1) tend to live a few million years and die, forming (in the general case) a stellar-mass black hole. On the other hand, galaxies live for billions of years. So we end up with a very young galaxy with a small black hole in the center.
More black holes will sink down, promoting the central black hole growth.
Massive stars, as well as their remnant black holes, tend to sink towards the center of the clusters of stars (and the galaxy nucleus has every reason to possess the same dynamics). The object being a massive star is not of great importance, because they quickly (on the timescale of the galaxy formation) convert to black holes.
In short, the modern consensus is that galaxies do have a black hole in the center. But even if we start with a galaxy that does not have one (we know that galaxy mergers sometimes eject the central black holes), it will grow its brand new central black hole soon. | {
"domain": "astronomy.stackexchange",
"id": 6204,
"tags": "galaxy, galaxy-center"
} |
Mapping column values of one DataFrame to another DataFrame using a key with different header names | Question: I have two data frames df1 and df2 which look something like this.
cat1 cat2 cat3
0 10 25 12
1 11 22 14
2 12 30 15
all_cats cat_codes
0 10 A
1 11 B
2 12 C
3 25 D
4 22 E
5 30 F
6 14 G
I would like a DataFrame where each column in df1 is kept but its values are replaced with cat_codes. Column header names are different. I have tried join and merge but my number of rows is inconsistent. I am dealing with a huge number of samples (100,000). My output should ideally be this:
cat1 cat2 cat3
0 A D C
1 B E Y
2 C F Z
The resulting columns should be appended to df1.
Answer: You can convert df2 to a dictionary and use that to replace the values in df1
cat_1 = [10, 11, 12]
cat_2 = [25, 22, 30]
cat_3 = [12, 14, 15]
df1 = pd.DataFrame({'cat1':cat_1, 'cat2':cat_2, 'cat3':cat_3})
all_cats = [10, 11, 12, 25, 22, 30, 15]
cat_codes = ['A', 'B', 'C', 'D', 'E', 'F', 'G']
df2 = pd.DataFrame({'all_cats':all_cats, 'cat_codes':cat_codes})
rename_dict = df2.set_index('all_cats').to_dict()['cat_codes']
df1 = df1.replace(rename_dict)
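The same lookup-with-fallback logic can be sketched without pandas at all; the following pure-Python illustration uses the tables from the question (the function name and `default` parameter are illustrative, not part of the original answer):

```python
def map_codes(rows, codes, default='Z'):
    """Replace each value via the lookup table, falling back to `default`."""
    return [[codes.get(value, default) for value in row] for row in rows]

# The question's df1 values and df2 lookup table.
df1_rows = [[10, 25, 12], [11, 22, 14], [12, 30, 15]]
code_table = {10: 'A', 11: 'B', 12: 'C', 25: 'D', 22: 'E', 30: 'F', 14: 'G'}

print(map_codes(df1_rows, code_table))
# [['A', 'D', 'C'], ['B', 'E', 'G'], ['C', 'F', 'Z']]
```

Note that 15 is absent from the lookup table, so it falls through to the default, which is exactly the role the regex replace plays below.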
If you still have some values that aren't in your dictionary and want to replace them with Z, you can use a regex to replace them.
df1.astype('str').replace({r'\d+': 'Z'}, regex=True) | {
"domain": "datascience.stackexchange",
"id": 4288,
"tags": "python, pandas, dataframe"
} |
Adding a timeout to an UDP socket server | Question: I use a thread to read from a UDP socket. Afterwards the string gets parsed.
In another thread I send to every connected client via this socket. I would like to keep latency and resources for running the script as low as possible.
It would be nice to add a time-out.
Every attempt to read should be cancelled if it takes longer than e.g. 20 ms, to keep the response time for other clients low, because I believe that the current attempt to read from this socket is blocking the loop. I was reading that some people use select(); is there an advantage?
def trnm_thr(): # trnm_thr() sends commands to Arduino
global udp_clients
udp_client = None
while pySerial is not None:
if not pySerial.writable():
continue
try:
udp_msg, udp_client = udp_sock.recvfrom(512) # Wait for UDP packet from ground station
if udp_client is not None: udp_clients.add(udp_client) # Add new client to client list
if not udp_msg: continue # If message is empty continue without parsing
except socket.timeout:
logging.error("Write timeout on socket") # Log the problem
else:
try:
p = json.loads(udp_msg) # parse JSON string from socket
except (ValueError, KeyError, TypeError):
logging.debug("JSON format error: " + udp_msg.strip() )
else:
# Serial device stuff ..
Answer: First a quick question: Does this code run? Currently I see no declaration for pySerial or udp_sock. If the interpreter hit any line in your code using either of those two variables, a NameError would have been raised.
In order to review the valid content you have, I am going to assume this was a copy-paste error.
I have a few implementation comments:
If you wanted to set a timeout, you can use the aptly named socket.settimeout() function. I imagine this should suffice for your purposes. You could look into the socket.setblocking and select as this SO post says. However, unless you are listening with multiple sockets per thread, you shouldn't really have to worry whether a socket is blocking or not.
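For completeness, here is a minimal sketch of the `select`-based variant the question asks about; the function name, default timeout, and buffer size are illustrative, not taken from the original code:

```python
import select
import socket

def recv_or_timeout(sock, timeout=0.02, bufsize=512):
    """Wait at most `timeout` seconds for a datagram.

    Returns (message, address), or (None, None) if nothing arrived in time,
    so the caller can go service other work instead of blocking forever.
    """
    readable, _, _ = select.select([sock], [], [], timeout)
    if not readable:
        return None, None
    return sock.recvfrom(bufsize)
```

With a single socket per thread this buys you little over `settimeout()`; `select` pays off when one loop must watch several sockets at once.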
Instead of using try ... else and creating another level of indentation, use continue in your except blocks.
Don't use global unless absolutely necessary (which will practically be never). A simple fix would be to pass udp_clients to your function.
Now, onto some style comments.
I enjoy reading and understanding 'low-level' (sockets, OS stuff, etc.) code like this. However, for some reason, most of the code that I read that is this low-level, has one glaring problem: the writers truncate EVERYTHING: a simple socket becomes sock, an address becomes addr, etc.
Many of these names have become conventionalized through constant and consistent use. However, this goes against the general Python convention that it's better to be too verbose than too terse. Instead of sock, take the time to type the two extra characters to make socket.
Also, make sure your variable/function names describe what they hold/do. The name p tells us nothing about what it holds and your function name trnm_thr is so condensed I have no clue what it is supposed to do. Don't sacrifice clarity and readability for conciseness.
Don't use inline statements after if statements. This breaks the flow of the program and can throw readers off. The same goes for inline comments.
Technically your indentation level is fine. However, Pythonic convention is to use 4 spaces.
Be as specific with your except blocks as possible. Unfortunately, I don't know enough about what errors json.loads throws, so I cannot suggest anything better for:
except (ValueError, KeyError, TypeError):
than the general case mentioned above.
Here is a PEP8 compliant version of your code (with my other recommendations):
def do_something_with_sockets(udp_clients, py_serial, udp_socket):
udp_client = None
udp_socket.settimeout(.02)
while py_serial is not None:
if not py_serial.writable():
continue
try:
udp_message, udp_client = udp_socket.recvfrom(512)
except socket.timeout:
logging.error("Write timeout on socket")
continue
if udp_client is not None:
udp_clients.add(udp_client)
if not udp_message:
continue
try:
json_data = json.loads(udp_message)
except (ValueError, KeyError, TypeError):
logging.debug('JSON format error: {}'.format(udp_message.strip()))
continue
# Serial device stuff .. | {
"domain": "codereview.stackexchange",
"id": 8281,
"tags": "python, socket"
} |
Confused about getting XV11 laser data into rviz | Question:
Okay, so I finally know enough about ROS to totally tie myself up in knots here.
I've got my Neato laser up and running and pumping out data to the /scan topic, but not seeing anything in rviz.
I'm running Groovy on a Beaglebone w/ Precise LTS (12.04).
I'm running rviz remotely on a separate laptop, also running Groovy on Precise LTS.
I have a Neato LIDAR hooked up to it, running the cwru-ros-pkg XV11 laser driver.
rostopic shows the laser successfully publishing to a topic called "/scan":
ubuntu@arm:~$ rostopic list
/rosout
/rosout_agg
/scan
ubuntu@arm:~$
I'm getting data from rostopic echo on both the Beaglebone and the laptop I'm running rviz on:
armadilo@talon:~$ rostopic echo /scan
header:
seq: 15555
stamp:
secs: 1367714272
nsecs: 996305089
frame_id: neato_laser
angle_min: 0.0
angle_max: 6.28318548203
angle_increment: 0.0174532923847
time_increment: 0.000163100005011
scan_time: 0.0
range_min: 0.0599999986589
range_max: 5.0
ranges: [0.2980000078678131, 0.3059999942779541, 0.3140000104904175, 0.3240000009536743, 0.3330000042915344, 0.34200000762939453, 0.35100001096725464, 0.3610000014305115
But nothing is showing up in rviz.
I'm using the cwru-ros-pkg xv11_laser_driver which publishes to the /scan topic, not sensor_msgs/LaserScan. I'm sure I'm missing a big chunk of something here but am not exactly sure what.
Any help from those who've been able to get data from their XV11 laser into rviz or other parts of the ROS stack would be appreciated.
Thanks!
'dillo
Originally posted by jetdillo on ROS Answers with karma: 41 on 2013-05-05
Post score: 0
Answer:
The problem showing the data in RViz was most likely caused by having "fixed frame" in rviz set to the wrong thing. You can see the frame_id in the laser message headers is "neato_laser". RViz either needs to have that as the fixed frame, or it needs TF messages (on topic /tf) being published which show the relationship between the "neato_laser" coordinate frame and the fixed frame. Typically these are specified in a URDF describing the robot's various links and sensors (each with its own coordinate frame name) and published by robot_state_publisher. Then there is "amcl" which does localization which can give the relationship between the robot's base_link and the map. Since "map" is a very common fixed frame to use, that is the default in rviz.
Originally posted by hersh with karma: 1351 on 2013-06-18
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 14068,
"tags": "ros, rviz, beaglebone, xv-11-laser-driver, ros-groovy"
} |
Calculating SNR from Frequency Domain | Question: As I am a newbie to signals, I have collected an appliance signal in the frequency domain as follows:
(PSD: Power Spectral Density). So, I need to calculate the SNR (Signal to Noise Ratio) and BER (Bit Error Rate) caused by this signal in powerline communication.
So, I have written this code in Matlab, but what I am stuck with is calculating the SNR and BER.
bits=10000; % number of bits
b=randi([0,1],1,bits); % generate random [0,1]
t=0:1/30:1-1/30; % Time period allocated for the signal
%ASK Carrier Signals
carrier_signa_l= sin(2*pi*t);
E1=sum(carrier_signa_l.^2);
carrier_signa_l=carrier_signa_l/sqrt(E1); %unit energy
carrier_signal_0 =0 * sin(2*pi*t); % zeros for 0 bits in the carrier signal
%MODULATION
ask=[];
for i=1:bits
if b(i)==1 % If bit = 1
ask=[ask carrier_signa_l];
else
ask=[ask carrier_signal_0];
end
end
Answer: If it is a sinusoidal signal, there will be peak (among the frequency bins) in the frequency spectrum corresponding to the tone's frequency.
The ratio of the magnitude of this peak to the sum of the magnitudes of all other bins (which are noise) corresponds to the Signal to Noise Ratio.
But when it's a non-sinusoidal signal (like the one in your plot) you have to consider the relevant band of the signal instead of a single peak. You can quantify it by specifying a frequency bin corresponding to the major frequency component in the signal plus some leakage (related to bandwidth) into the nearby bins.
Please take a look in the following matlab code.
N = 8192; % FFT length
leak = 50;
% considering a leakage of signal energy to 50 bins on either side of major freq component
fft_s = fft(inptSignal,N); % analysing freq spectrum
abs_fft_s = abs(fft_s);
[~,p] = max(abs_fft_s(1:N/2));
% Finding the peak in the freq spectrum
sigpos = [p-leak:p+leak N-p-leak:N-p+leak];
% finding the bins corresponding to the signal around the major peak
% including leakage
sig_pow = sum(abs_fft_s(sigpos)); % signal power = sum of magnitudes of bins corresponding to signal
abs_fft_s([sigpos]) = 0; % making all bins corresponding to signal zero:==> what ever that remains is noise
noise_pow = sum(abs_fft_s); % sum of rest of components == noise power
SNR = 10*log10(sig_pow/noise_pow); | {
"domain": "dsp.stackexchange",
"id": 6577,
"tags": "matlab, signal-analysis, modulation, snr"
} |
Unknown CMake command "rospack" | Question:
I'm trying to build old ROS package (originally made in 2009 by a third party). make returns this error:
CMake Error at CMakeLists.txt:5 (rospack):
Unknown CMake command "rospack".
1st 5 lines of CMakeLists.txt:
cmake_minimum_required(VERSION 2.6)
include($ENV{ROS_ROOT}/core/rosbuild/rosbuild.cmake)
rospack(navgtn_common)
rospack runs from terminal with admin privilege.
rosmake seems also stuck at rospack in the same manner (output).
Google tells me that using find_package might be a solution (reference) but then I haven't figured out which package to call within that function.
Env: diamondback, Ubuntu 10.10
Any idea? Thanks.
Originally posted by 130s on ROS Answers with karma: 10937 on 2011-08-29
Post score: 0
Answer:
You are looking for a ROS package, right?
Then have a look at the rosbuild find_ros_package docs.
Originally posted by dornhege with karma: 31395 on 2011-08-29
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by 130s on 2011-08-29:
@dornhege thanks. Replacing "rospack" with "rosbuild_find_ros_package" has cleared the error. Didn't know CMake macros and command-line tools are different... | {
"domain": "robotics.stackexchange",
"id": 6551,
"tags": "cmake, rospack, make"
} |
Capability of observing transits with terrestrial telescopes of various sizes? | Question: I have access to my university's telescope, Dearborn Observatory, an 18.5 inch refractor on the shore of Lake Michigan, just north of Chicago (yes, it's an atrocious location, but the telescope still works fantastically given this), and I was wondering if it would be possible to detect exoplanet transits. I have access to a CCD. Obviously, normally this would take quite a while, but I was looking at Wikipedia's list of transiting exoplanets and several of them have periods of less than 10 hours or so.
Are there any good resources about the math for this? Or is the approximation that size of the change in light observed by the transit is just using the ratio of the area that the planet occludes to the area of the host star a reasonably good approximation?
Or is observing the transits of any known exoplanets completely unfeasible with this size of telescope? From previous nights of photographing the sky, it seems like the telescope's limit is roughly magnitude 13-15, beyond which the signal-to-noise ratio is just too low to get meaningful data.
Answer: A bright, nearby star with a large exoplanet would be best. For example, the first star to have a transiting planet observed was HD 209458, in Pegasus. It has a magnitude of 7.65. When the large "hot Jupiter" transits the star it dims to a relative flux of 0.984. That corresponds to a change in magnitude of about 0.016. I.e. it changes from a magnitude 7.65 star to a 7.67 star.
If you know the radii you can calculate the relative flux during a transit by assuming the planet blocks all the light.
The radius of the planet HD209458b is about 100000km and the star 800000km, so the planet has an area 1/64 of the area of the star's disc, and as expected 1 - 1/64 ≈ 0.984, which is in good agreement with the light curve.
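A quick sketch of that ratio-of-areas estimate, using the round radii quoted above (assumed approximate values, not precise catalogue figures):

```python
import math

# Assumed round numbers for HD 209458 and its planet, as quoted above.
r_planet = 1.0e5   # km
r_star = 8.0e5     # km

depth = (r_planet / r_star) ** 2        # fraction of the stellar disc blocked
flux = 1.0 - depth                       # relative flux mid-transit
delta_mag = -2.5 * math.log10(flux)      # corresponding magnitude change

print(f"depth = {depth:.4f}, relative flux = {flux:.3f}, dm = {delta_mag:.3f} mag")
# depth = 0.0156, relative flux = 0.984, dm = 0.017 mag
```

So the photometry must be stable to well under two hundredths of a magnitude over the transit, which sets the precision target for the CCD work.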
That won't be noticeable with the naked eye, and is challenging for an amateur. A couple of links:
http://www.britastro.org/vss/ccd_photometry.htm general information on photometry. (summary: you need some kit)
http://brucegary.net/book_EOA/EOA.pdf (summary: it can be done, with a 14 inch telescope and Arizonan dark skies.) | {
"domain": "astronomy.stackexchange",
"id": 1616,
"tags": "exoplanet, planetary-transits, refractor-telescope"
} |
Find the cheapest order placed out of all of the stores visited - follow-up #2 | Question: Made some adjustments to the previous code thanks to the helpful and thoughtful review by @G.Sliepen.
Problem statement: This program should ask for the total number of shops that will be visited. At each shop, ask for the number of ingredients that need to be purchased. For each ingredient, ask for the price. Keep track of the total for the order so that you can write it down before leaving the shop. This program should also track which order was the cheapest and which shop the cheapest order was at.
Original code: Find the cheapest order placed out of all of the stores visited
1st adjustments to the original code: Adjustments based on review #1; Find the cheapest order placed out of all of the stores visited
Any and all hints, tips, tricks, advice, suggestions are greatly appreciated!
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <stdbool.h>
int read_positive_int(const char* prompt)
{
printf("%s", prompt);
int n;
while (true)
{
if (scanf("%d", &n) != 1)
{
fprintf(stderr, "Invalid input!\n");
return 1;
}
return n;
}
}
float read_real_positive(const char* prompt)
{
printf("%s", prompt);
float m;
while (true)
{
if (scanf("%f", &m) != 1)
{
fprintf(stderr, "Invalid input!\n");
return 1;
}
return m;
}
}
int main(void)
{
int num_shops = read_positive_int("How many shops will be visited? ");
float total_cost[num_shops];
float cheapest_order;
int cheapest_shop = 1;
for (int i = 0; i < num_shops; i++)
{
printf("You are at shop #%d.\n", i+1);
int num_ingredients = read_positive_int("How many ingredients are needed? ");
float cost_ingredient[num_ingredients];
total_cost[i] = 0;
for (int j = 0; j < num_ingredients; j++)
{
printf("What is the cost of ingredient #%d", j+1);
cost_ingredient[j]= read_real_positive("? ");
total_cost[i] += cost_ingredient[j];
}
printf("The total cost at shop #%d is $%.2f.\n", i+1, total_cost[i]);
if (i == num_shops - 1)
{
cheapest_order = total_cost[0];
for (int k = 1; k < num_shops; k++)
{
if (total_cost[k] < cheapest_order)
{
cheapest_order = total_cost[k];
cheapest_shop = k + 1;
}
}
}
}
printf("The cheapest order was at shop #%d, and the total cost of the order was $%0.2f\n", cheapest_shop, cheapest_order);
return 0;
}
Answer: I see some things that may help you further improve your program.
Eliminate arrays by carefully rethinking the problem
At the end of the program, all that is needed is the cheapest_order and the cheapest_shop. All of the arrays could be eliminated by simply keeping, as each shop total is calculated, the current cheapest order. There is no need to keep any data from a shop that is not cheapest, nor any reason to store the ingredient costs in an array.
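To illustrate the running-minimum idea (sketched in Python for brevity, not as a drop-in replacement for the C program): only the best candidate seen so far needs to be kept.

```python
def cheapest(shop_totals):
    """Track only the cheapest total seen so far; no per-shop arrays needed."""
    best_shop, best_total = None, float('inf')
    for shop_number, total in enumerate(shop_totals, start=1):
        if total < best_total:
            best_shop, best_total = shop_number, total
    return best_shop, best_total

print(cheapest([12.50, 9.00, 11.25]))  # (2, 9.0)
```

The same pattern applies inside the shop loop for ingredient costs: accumulate a total as each price is read instead of storing every price.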
Fix the bugs
If the scanf fails, as it does if the user inputs a letter instead of a number, the program acts as if the number 1 had been entered, and the user gets no chance to correct it. Also, when I entered -1 for the number of shops to be visited, I got this result:
How many shops will be visited? -1
The cheapest order was at shop #1, and the total cost of the order was $81082112.00
Test your code thoroughly
One way to test the code is to simply try it multiple times. A better way is to automate the process by writing a test driver. There are many ways to do this; one simple way is to manually create a few files with test input and then feed them to the program.
Don't use floating point for money
There is a problem using floating point (that is, float or double types) to represent money values. See this question for a thorough explanation for why that's the case.
An alternative is to keep a number of cents as an integer value internally. For more depth about floating point issues, I'd recommend the excellent article "What every computer scientist should know about floating-point arithmetic" by David Goldberg. | {
"domain": "codereview.stackexchange",
"id": 32213,
"tags": "c"
} |
What velocity must a mass have, when launched tangentially on one side of the moon, to arrive at a position diametrically opposite on the moon? | Question: If we launch an object on one side of the moon (the velocity being tangent to the surface of the moon), how big must the velocity be to let the object arrive at a position on the moon diametrically opposite to the launching site (in some device to catch the mass)? The mass is very, very small compared to the moon's mass.
The acceleration due to the moon's gravitation is, of course, dependent on the distance to the moon (inverse square law).
Because of this dependence on the distance of the acceleration, it's more difficult to compute a trajectory, especially this particular one that also involves the curvature of the moon. I guess integration is involved, but I don't see how. For sure the velocity has to be higher than the velocity (which is easy to calculate) to let the mass make a circle around the moon. If we didn't catch the mass on the other side, the trajectory would resemble something that looks like an ellipse. This trajectory doesn't consist of two (opposite) parabolas because the trajectory in the question isn't a parabola, again because the acceleration depends on the distance of the mass to the moon.
Is there someone who knows how to do the calculation?
Answer: The moon is not an easy example body because its gravitational field has significant higher-order multipoles, so I will answer for an idealized spherically symmetric body.
Your first issue is to recognize that—unless you intend to power the craft somewhere along the trajectory—you are constructing an orbit, and as such it has the usual features:
It is elliptical (of semi-major axis $a$ and semi-minor axis $b$).
It has the center of the body at one focus.
You want to go from one side of the spherically shaped body to the antipodes, which means that you want to connect two points on the diameter of a circle around the same center.
We expect a whole family of such curves. We need a reason to prefer one over all the others. More on that later.
Now, the line perpendicular to the major axis and through a focus of a conic section is called the latus rectum. The length of the semi-latus-rectum of an ellipse is $\ell = \frac{b^2}{a}$. In this case $\ell = r$ where $r$ is the radius of the moon or planet, leading to $ar = b^2$ for orbits meeting our needs.
We might want to select the orbit with the lowest energy demand.
The energy needed $E$ is the difference between the energy of the orbit
$$ E(a) = -G\frac{Mm}{2a} \;,$$
and the potential energy of craft at rest on the surface
$$ U_0 = -G\frac{Mm}{r} \;.$$
We get
\begin{align}
E
&= E(a) - U_0 \\
&= GMm\left[ \frac{1}{r} - \frac{1}{2a} \right]\\
&= GMm\left[ \frac{a}{b^2} - \frac{1}{2a} \right] \;.
\end{align}
Minimizing with respect to $a$ we get
$$
0 = GMm\left[ \frac{1}{b^2} + \frac{1}{2a^2} \right] \;,
$$
which is only satisfied in the limit that $a$ and $b$ both are allowed to increase without bound. This represents a demonstration of what StephenG says: our lowest energy-demand orbit is, oddly enough, an escape trajectory meaning infinite time will be required, and making this option unfeasible for most purposes.
But this is a minimum, meaning that any bound curve won't meet the requirements.
Discussion: Our calculation actually assumed we wanted to come down opposite the launching point relative to the fixed stars. On a rotating body we can aim at some other point relative to the fixed stars by rigging our time-of-flight so that our target geography is where we land when we get there. For locations on the equator a straight-up and straight-down trajectory will actually do the job, assuming you get the timing right.
Further discussion: This oddity might be seen as a problem for the designers of ballistic missile systems, except that the missiles don't accelerate instantaneously as (also) assumed here, but instead have a non-trivial boost phase. The height and down-range travel attained while boosting mean that antipode-targeting remains an option.
"domain": "physics.stackexchange",
"id": 45301,
"tags": "homework-and-exercises, newtonian-mechanics, acceleration, velocity"
} |
Speed of light paradox | Question: If I send two rockets from the Earth in opposite directions, at, say, 60% of the speed of light relative to the Earth, then relative to each other they are travelling at 120% of the speed of light. What is my problem in reasoning?
Is it to do with the fact that due to SR their relative velocities are not 120%, in either of their reference frames? If this is the case, what is the maximum relative velocity that two bodies can have from any reference frame?
Answer: You've assumed the law for adding velocities is the same in special relativity as it is in Galilean kinematics. This is wrong (as the paradox you've reached shows). It's possible to derive a way to "add" velocities which is compatible with relativity.
Your version is:
\begin{equation}
s=v_1 - v_2
\end{equation}
The correct version is:
\begin{equation}
s= \frac{v_1 - v_2}{1- \frac{v_1v_2}{c^2}}
\end{equation}
With your velocities ($v_1=0.6c$, $v_2=-0.6c$) we get:
\begin{align}
s &= \frac{1.2c}{1+ 0.6^2}\\
&\approx0.88c
\end{align}
This is how things actually work and is significantly different to the version you used when the velocities are large enough. In particular if you use it you'll always find that two observers will see each other move at less than the speed of light if any third observer sees them both moving less than the speed of light.
On the other hand when the velocities in question are small then this version is very similar to the Galilean one, which is why our instinct works for everyday speeds. For example with $v_1=-v_2 = 0.01c$:
\begin{align}
s&= \frac{(0.01+0.01)c}{1+0.01^2}\\
&\approx 0.019998c
\end{align}
Very close to the Galilean answer of $0.02c$.
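The formula is easy to check numerically; a small sketch with velocities in units of $c$:

```python
def relativistic_sum(v1, v2):
    """Relative speed of v1 as seen from v2, collinear motion, units of c."""
    return (v1 - v2) / (1.0 - v1 * v2)

print(relativistic_sum(0.6, -0.6))    # ~0.882, never exceeds 1
print(relativistic_sum(0.01, -0.01))  # ~0.019998, close to the Galilean 0.02
```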
Edit: I should point out that these formulae are only correct for motion along the same line. More general motion has a more general (and messier) formula.
"domain": "physics.stackexchange",
"id": 22373,
"tags": "special-relativity, speed-of-light, faster-than-light"
} |
Need help in rospy programming | Question:
hi,
I wish to program my robot such that whenever the Lidar detects an obstacle, a 'stop' will be printed every 1 second.
However, whenever I remove the obstacle, the output will not stop printing 'stop'.
Can someone kindly help me in this programming?
Below is my code:
#!/usr/bin/env python
from __future__ import print_function
import sys
import rospy
import time
from sensor_msgs.msg import LaserScan
import os
def stop():
print('stop')
time.sleep(1)
class obstacle_detector:
def __init__(self):
self.laser_scan_sub = rospy.Subscriber("/scan_raw",LaserScan,self.lasercallback)
def lasercallback(self,data):
angle = 285
self.laser_scan = 10000000
for x in range (90):
if data.ranges[angle] < self.laser_scan:
self.laser_scan = data.ranges[angle]
angle = angle + 1
if self.laser_scan < 1.5:
stop()
else:
print('go')
def main(args):
ic = obstacle_detector()
rospy.init_node('obstacle_detector', anonymous=True)
try:
rospy.spin()
Originally posted by loguna on ROS Answers with karma: 98 on 2021-02-17
Post score: 0
Answer:
The problem lies here
for x in range (90): # --> loop 90 times
# --> check if range at specific angle is lowest visited so far
if data.ranges[angle] < self.laser_scan:
self.laser_scan = data.ranges[angle]
angle = angle + 1 # --> increment angle
# WITHOUT LEAVING FOR LOOP call stop() or print go
if self.laser_scan < 1.5:
stop()
else:
print('go')
This means you'd print either stop or go 90 times PER SCAN!
The indentation of the last if clause is wrong.
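A sketch of the corrected flow: take the minimum over the sector first, then decide once per scan (the function name and defaults here are illustrative, not the asker's full node):

```python
def classify_scan(ranges, start=285, width=90, threshold=1.5):
    """Scan one sector of the LaserScan ranges, then decide ONCE per message."""
    nearest = min(ranges[start:start + width])
    return 'stop' if nearest < threshold else 'go'

clear = [10.0] * 360
blocked = list(clear)
blocked[300] = 1.0                  # one obstacle reading inside the sector
print(classify_scan(clear), classify_scan(blocked))  # go stop
```

With the decision outside the loop, removing the obstacle makes the very next scan print 'go' instead of repeating 'stop' 90 times per message.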
Originally posted by mgruhler with karma: 12390 on 2021-02-17
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by loguna on 2021-02-18:
Thank You. Never realize is my mistake on that part.
Comment by mgruhler on 2021-02-19:
great that this helps. Please mark the answer as correct (if it is) by clicking the checkmark next to it. thanks. | {
"domain": "robotics.stackexchange",
"id": 36097,
"tags": "ros, python, ros-melodic"
} |
Geometric constraint in between two joints | Question:
Hi,
does there already exist a possibility to couple two joints by a geometric constraint like a chain drive, e.g. setting the same angle? If yes, how can I access it?
Thank you!
Originally posted by anbe on Gazebo Answers with karma: 1 on 2013-01-28
Post score: 0
Answer:
There currently is nothing like a chain drive in gazebo. Feel free to contribute, or file a ticket for this feature on bitbucket.
Originally posted by nkoenig with karma: 7676 on 2013-07-23
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 2974,
"tags": "gazebo"
} |
Speech segmentation for speaker recognition | Question: I'm trying to implement a speaker recognition system and want to make sure I'm aware of the latest trends in speech segmentation. I've read a number of very different methods at a higher level but I'm not sure which one would be best for my particular situation. I'll list what I found so far and I'd appreciate either (a) comments on the research I've done so far or (b) popular methods that I didn't mention that I should know about. Then I'll take it from there in terms of determining what will work best for my situation.
Techniques for Speech Segmentation
(that I've identified so far)
I've read that strong long-term modulation frequencies between 1-16 Hz are indicative of speech activity, but haven't been able to find any good explanations of what exactly these are. What I've been able to determine so far is that they are the temporal spectrum of specific frequency bands. (Unsupervised speech/non-speech detection for automatic speech recognition in meeting rooms by Maganti et al)
Using energy variance (high variance implies speech-like segments) independently or with other methods to build a Gaussian mixture model with two components: one for speech and one for noise, and then re-classifying windows as either speech or non-speech. (An unsupervised, sequential learning algorithm for the segmentation of speech waveforms with multiple speakers by Siu et al)
Using the Voting Experts algorithm, which utilizes inter- and intra-window entropy (a window in this sense is a 'window of windows') to determine where "chunks" are using a feature that has high inter-segment entropy, and then to classify these chunks as either speech or non-speech using 2-means clustering, where each cluster of chunks contains all chunks of one of the two classes. (Voting Experts: an unsupervised algorithm for segmenting sequences Cohen et al)
The short version of my question
Ultimately, I'd really like to know what is standard in the industry as of right now. Then I can determine what will work best for my particular situation once I have a little more base knowledge.
Answer: I'm going to answer my own question, for those who are looking for the same information as I was. Adaptive Thresholding is the most common way to segment audio, and it gives you the most bang for the buck. While there are more accurate methods out there for segmenting, adaptive thresholding is fast and very accurate, and uses simple features such as zero-crossings, energy, and entropy. I'm going to have to look into this more myself but this should be enough to get started. | {
"domain": "dsp.stackexchange",
"id": 1056,
"tags": "speech, segmentation"
} |
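The adaptive thresholding the answer above recommends can be sketched in a few lines of Python. This is a toy illustration using only short-time energy against a running noise-floor estimate (the parameter values and adaptation rule are made up for the sketch, not taken from any of the cited papers):

```python
def short_time_energy(frame):
    # Mean squared amplitude of one frame
    return sum(x * x for x in frame) / len(frame)

def segment_speech(signal, frame_len=400, hop=160, alpha=0.95, ratio=3.0):
    """Toy adaptive-threshold VAD: a frame counts as speech when its
    energy exceeds `ratio` times a running noise-floor estimate,
    which adapts only on non-speech frames."""
    flags, floor = [], None
    for start in range(0, len(signal) - frame_len + 1, hop):
        e = short_time_energy(signal[start:start + frame_len])
        if floor is None:
            floor = e              # bootstrap the floor from the first frame
        speech = e > ratio * floor
        if not speech:             # track the noise floor while in non-speech
            floor = alpha * floor + (1 - alpha) * e
        flags.append(speech)
    return flags
```

A real system would add zero-crossing rate and entropy features, plus hangover smoothing, but the threshold-against-adaptive-floor loop is the core of the approach.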
Error in the Algorithm Design Manual? | Question: Take a look at this excerpt from the Algorithm Design Manual, 2nd Edition by Skiena.
I have circled the confusing bit. Basically, everything is understandable to me, until that part. How can $Increment([m+1/2])$ be rewritten as $Increment([m])$? Clearly, if m is an integer, then the algorithm will not treat $m+1/2$ as $m$ and return $m+1$. Instead, $Increment$ is going to go into the $else$ clause and return $m+1/2+1$, which is clearly not $m+1$. Am I missing something here?
Answer:
How can $Increment([m+1/2])$ be rewritten as $Increment([m])$?
Recall that $[x]$ means the greatest integer less than or equal to $x$.
Since $m$ is an integer, $m$ is less than $m+0.5$ and $m+0.5 < m+1$. Thus the greatest integer less than or equal to $m+0.5$ is $m$. | {
"domain": "cs.stackexchange",
"id": 9944,
"tags": "algorithm-analysis"
} |
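The floor argument in the answer above is easy to check numerically (Python sketch; the helper name is just for illustration):

```python
import math

# For any integer m: m <= m + 0.5 < m + 1, so floor(m + 0.5) == m.
def floor_of_m_plus_half(m: int) -> int:
    return math.floor(m + 0.5)

# Holds for negative integers too, since floor rounds toward -infinity
assert all(floor_of_m_plus_half(m) == m for m in range(-10, 11))
```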
Take input of delimited file of integers and output result | Question: I wanted to learn some new language so I developed a really simple vb.net "console application" (was that the right type?) that takes text input, looks at it and multiplies it if it's even and then reads each result into a msgbox.
test.txt = 1 2 3 4 5 6 7 8 9
result should be an array {1 4 3 8 5 12 7 16 9}
This would be my first vb.net programming attempt
Module MyFirstModule
Sub Main()
HelloNumbers()
End Sub
Private Sub HelloNumbers()
Dim inputFile As String
inputFile = "c:\temp\test.txt"
Dim i As Integer
Dim readResult As String() = Split(System.IO.File.ReadAllText(inputFile))
Dim multipliedResult As Integer()
Dim lower As Integer = LBound(readResult)
Dim upper As Integer = UBound(readResult)
ReDim multipliedResult(0 To upper)
For i = lower To upper
multipliedResult(i) = MultiplyEvenNumbers(CInt(readResult(i)))
Next
For i = lower To upper
If multipliedResult(i) Mod 2 = 0 Then
MsgBox(readResult(i) & " was multiplied by two for " & multipliedResult(i))
Else : MsgBox(readResult(i) & " is original")
End If
Next
End Sub
Private Function MultiplyEvenNumbers(ByVal inputNumber As Integer) As Integer
If inputNumber Mod 2 = 0 Then inputNumber = inputNumber * 2
MultiplyEvenNumbers = inputNumber
End Function
End Module
Answer: This is a good start. You created two functions to separate parts of the logic but there is more that can be refactored. The HelloNumbers sub is too big and does too much. Let's try to improve.
First, the path shouldn't be hardcoded inside it, so we pass it as a parameter:
Sub Main()
HelloNumbers("c:\temp\test.txt")
End Sub
Now we take parts of the big HelloNumbers and move them into functions that do only one thing - we always strive for this because it's easier to maintain when you only have to care about one thing at a time (see SRP).
Private Sub HelloNumbers(ByVal inputFile As String)
Dim numbers As Integer() = ReadNumbers(inputFile)
Dim multipliedNumbers = numbers.Select(Function(number) MultiplyEvenNumbers(number))
PrintNumbers(multipliedNumbers)
End Sub
So we can extract the following functions:
We need to read the numbers. They are in a file. So let's require a path and after reading them we convert each number-string into an Integer with linq:
Private Function ReadNumbers(ByVal inputFile As String) As IEnumerable(Of Integer)
ReadNumbers = Split(File.ReadAllText(inputFile)).Select(Function(s, i) CInt(s))
End Function
Then we need to multiply them. This one was already there so we leave it as is.
Private Function MultiplyEvenNumbers(ByVal inputNumber As Integer) As Integer
If inputNumber Mod 2 = 0 Then inputNumber = inputNumber * 2
MultiplyEvenNumbers = inputNumber
End Function
Finally you want to print the results. Let's do it in PrintNumbers. If you decide to write the output to the console, you now just have to adjust this small function.
Private Sub PrintNumbers(ByVal numbers As IEnumerable(Of Integer))
For Each number As Integer In numbers
If number Mod 2 = 0 Then
MsgBox(String.Format("{0} was multiplied by two for {1}", number / 2, number))
Else : MsgBox(String.Format("{0} is original", number))
End If
Next
End Sub | {
"domain": "codereview.stackexchange",
"id": 21975,
"tags": "beginner, vb.net"
} |
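As a sanity check of the transformation in the review above, the same split/parse/multiply pipeline can be sketched in Python (the file I/O is replaced by a plain string, so this is an illustration of the logic, not a port):

```python
def multiply_even(n: int) -> int:
    # Double even numbers, leave odd ones untouched
    return n * 2 if n % 2 == 0 else n

def process(text: str) -> list:
    # split -> parse -> transform: one responsibility per step
    return [multiply_even(int(tok)) for tok in text.split()]

# "1 2 3 4 5 6 7 8 9" -> [1, 4, 3, 8, 5, 12, 7, 16, 9]
```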
Design a Pushdown automaton for $L = \{a^nb^m | n \le m \le 3n \} $ | Question: $L = \{a^nb^m | n \le m \le 3n \} $
This is by far the hardest pushdown automaton I had to design. I literally have no idea where to start. Here's my thought process. Firstly, I thought that for each valid pair $(n,m)$ the difference between the number of $b$s and $a$s would be the same. That wasn't true, as the number of possible strings rose for each new value of $n$.
Next, I tried adding $a$ on top of the stack when I read $a$, and then removing them when I read $b$s. That would only cover the case when $m=n$. I can produce an automaton where $m=n$, $m=2n$ or $m=3n$, but how do I cover all the cases where $m \in [n, n+1, n+2, ..., 3n]$?
Answer: First of all, your language is a CFL but not a DCFL, because the machine faces a push ambiguity: it cannot know in advance how many symbols to push for each $a$. Therefore a DPDA design is not possible; only an NPDA has the power to accept your language.
You have to understand what the grammar for your language is. Suppose $n=0$; then $m=0$ and the string is $\epsilon$, so your language contains $\epsilon$. For $n=1$, $m\in\{1,2,3\}$ and the strings are $ab, abb, abbb$. For $n=2$, $m\in\{2,3,4,5,6\}$ and the strings are $aabb, aabbb$, etc.
So your grammar will be $S\to aSb|aSbb|aSbbb|\epsilon.$
This time I haven't drawn the NPDA as I did last time; I am only writing the transition functions. The start state is $q_0$, the stack bottom is $Z_0$, and the final state is $q_f$.
$\delta(q_0,a,Z_0)=(q_0,aZ_0),(q_0,aaZ_0),(q_0,aaaZ_0).$
$\delta(q_0,a,a)=(q_0,aa),(q_0,aaa),(q_0,aaaa).$
$\delta(q_0,\epsilon,Z_0)=(q_f,Z_0). $
$\delta(q_0,b,a)=(q_1,\epsilon).$
$\delta(q_1,b,a)=(q_1,\epsilon).$
$\delta(q_1,\epsilon,Z_0)=(q_f,Z_0).$ | {
"domain": "cs.stackexchange",
"id": 20114,
"tags": "formal-languages, automata, pushdown-automata, stacks"
} |
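The transition functions above can be exercised with a small nondeterministic simulation. Since the stack only ever holds $a$-markers on top of $Z_0$, a configuration collapses to (state, input position, number of markers); this is a sketch of the answer's NPDA, not a general PDA simulator:

```python
def accepts(s):
    """Breadth-first simulation of the NPDA for {a^n b^m | n <= m <= 3n}."""
    frontier, seen = {("q0", 0, 0)}, set()
    while frontier:
        cfg = frontier.pop()
        if cfg in seen:
            continue
        seen.add(cfg)
        state, pos, h = cfg
        if pos == len(s) and h == 0:      # epsilon-move to q_f on Z_0
            return True
        if pos == len(s):
            continue
        c = s[pos]
        if state == "q0" and c == "a":    # push 1, 2 or 3 markers per a
            for k in (1, 2, 3):
                frontier.add(("q0", pos + 1, h + k))
        if c == "b" and h > 0:            # each b pops one marker
            frontier.add(("q1", pos + 1, h - 1))
    return False
```

The marker count is bounded by 3 times the input length, so the configuration space is finite and the search terminates.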
"in" and "out" states in Weinberg's QFT | Question: In Weinberg's QFT Page 109, he defines the "in" and "out" states as
the 'in' and 'out' states* $\Psi_{\alpha}^{+}$ and $\Psi_{\alpha}^{-}$ will be found to contain the particles described by the label $\alpha$ if observations are made at $t \rightarrow-\infty$ or $t \rightarrow+\infty$, respectively.
And then he claims that
Note how this definition is framed. To maintain manifest Lorentz invariance, in the formalism we are using here, state-vectors do not change with time $-$ a state-vector $\Psi$ describes the whole spacetime history of a system of particles. (This is known as the Heisenberg picture, in distinction with the Schrödinger picture, where the operators are constant and the states change with time.) Thus we do not say that $\Psi_{\alpha}^{\pm}$ are the limits at $t \rightarrow \mp \infty$ of a time-dependent state-vector $\Psi(t)$
However, implicit in the definition of the states is a choice of the inertial frame from which the observer views the system; different observers see equivalent state-vectors, but not the same state-vector. In particular, suppose that a standard observer $\mathcal{O}$ sets his or her clock so that $t=0$ is at some time during the collision process, while some other observer $\mathcal{O}^{\prime}$ at rest with respect to the first uses a clock set so that $t^{\prime}=0$ is at a time $t=\tau ;$ that is, the two observers' time coordinates are related by $t^{\prime}=t-\tau .$ Then if $\mathcal{O}$ sees the system to be in a state $\Psi, \mathcal{O}^{\prime}$ will see the system in a state $U(1,-\tau) \Psi=\exp (-i H \tau) \Psi .$ Thus the appearance
Now my question is: since we are talking about a state-vector $\Psi$ in the Heisenberg picture, which does not evolve in time, why does the state vector change under a change of observers with different time settings?
Answer: I think I got the point.
In the Heisenberg picture, the state-vectors do not change according to the Schrödinger equation governing the time evolution of the state, since the different pictures are defined by how operators and state vectors change under the $\textbf{time evolution equation}$.
But the state vectors $\textbf{do}$ change under symmetry transformations such as Lorentz transformations. One such transformation is "time translation", which coincides with the time evolution operator of the Schrödinger equation, but the physical meaning is different.
Now when we perform a "change of the inertial frame observing the system", what we are doing is a "Lorentz transformation" rather than a "time evolution", so the state vectors do change, and they happen to change in the same way as under time evolution.
"domain": "physics.stackexchange",
"id": 72990,
"tags": "quantum-field-theory, special-relativity, scattering, time-evolution"
} |
Implementation of java.util.stream.Stream (and friends) that reads lines from the internet without requiring you to manage the resources | Question: This streams lines of information over the internet using BufferedReader::lines.
However, what makes this special (and thus, extraordinarily complicated imo) is that ALL resource management is done internally -- the end user does not need to handle the resources at all.
So, no try-with-resources, no try-catch-finally, none of that. The user can fearlessly use this stream (or at least, as fearlessly as they can use any other non-resource stream).
As a result, I would like this review to focus on making sure my claim is as bullet proof as I make it out to be. Priority #1 is to make sure that this implementation cannot leak resources.
Other than that, I want the obvious things like correctness/efficiency/readability/maintainability/etc.
One potential pain point that I want to highlight -- I chose to open a connection to the URL at the last possible moment -- during terminal operations. However, that technically makes certain introspective operations a little dubious. For example, isParallel. As is, I am opening a connection to the internet just to check if my stream is parallel. I don't know how terrible that is, but I also don't see a better way to work around that pain point. Special attention to this method would be appreciated.
And finally, you may be wondering what maniac would go to such lengths when we have TWR (try with resources).
The reason why is because forgetting to do TWR is not a compiler error nor a compiler warning. And I am leading a team of (entry-level) devs to handle a fairly lofty personal project. To make their lives a little easier, I wanted to build something that wouldn't leak. Hence, this monstrosity was created.
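(As an aside, the core idea here — hand the caller something lazy whose resource is opened late and closed from the inside — has a compact analogue in Python generators. This sketch is only an analogy to motivate the design, not part of the reviewed Java:)

```python
def lines(path):
    """Yield lines from `path`; the generator owns the file handle.
    The caller never writes a with-block: the handle closes when the
    generator is exhausted, .close()d, or garbage-collected."""
    with open(path) as f:
        for line in f:
            yield line.rstrip("\n")

# e.g. first 10 lines, short-circuiting like Stream.limit(10):
# import itertools
# for line in itertools.islice(lines("data.csv"), 10):
#     print(line)
```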
import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.net.URI;
import java.net.URL;
import java.util.Arrays;
import java.util.Comparator;
import java.util.DoubleSummaryStatistics;
import java.util.IntSummaryStatistics;
import java.util.Iterator;
import java.util.List;
import java.util.LongSummaryStatistics;
import java.util.Objects;
import java.util.Optional;
import java.util.OptionalDouble;
import java.util.OptionalInt;
import java.util.OptionalLong;
import java.util.PrimitiveIterator;
import java.util.Spliterator;
import java.util.Spliterators;
import java.util.function.*;
import java.util.stream.*;
public class StreamLinesOverInternet<T>
implements Stream<T>
{
private final Supplier<Stream<T>> encapsulatedStream;
public static void main(final String[] args)
{
final String url =
//THIS IS A >5 GIGABYTE FILE
"https://raw.githubusercontent.com/nytimes/covid-19-data/master/us.csv"
;
final Stream<String> linesFromCsv = StreamLinesOverInternet.stream(url);
//Grabs the first 10 lines from the CSV File
linesFromCsv
.limit(10)
.forEach(System.out::println)
;
//Normally, a file this size would take several seconds, if not minutes, to download, and then process.
//But, because we are processing data as soon as we fetch it, we can short-circuit once we have as much
//as we need. This is thanks to java.util.Stream, java.io.InputStream, and java.io.BufferedReader.
//In the above example, we are streaming lines from the CSV File. So, once the Stream has determined
//it can terminate early because it has enough info to correctly evaluate (called short-circuiting),
//the Stream closes the BufferedReader, which in turn, closes the other resources.
}
private StreamLinesOverInternet(final Supplier<Stream<T>> encapsulatedStream)
{
Objects.requireNonNull(encapsulatedStream);
this.encapsulatedStream = encapsulatedStream;
}
public static StreamLinesOverInternet<String> stream(final URI uri)
{
Objects.requireNonNull(uri);
final URL url;
try
{
url = uri.toURL();
}
catch (final Exception exception)
{
throw new IllegalStateException(exception);
}
final Supplier<Stream<String>> stream =
() ->
{
try
{
final InputStream inputStream = url.openStream();
final InputStreamReader inputStreamReader = new InputStreamReader(inputStream);
final BufferedReader bufferedReader = new BufferedReader(inputStreamReader);
final Stream<String> encapsulatedStream =
bufferedReader
.lines()
.onClose
(
() ->
{
try
{
bufferedReader.close();
System.out.println("CLOSED THE BUFFERED READER");
}
catch (final Exception exception)
{
throw new IllegalStateException(exception);
}
}
)
;
return encapsulatedStream;
}
catch (final Exception exception)
{
throw new IllegalStateException(exception);
}
}
;
return new StreamLinesOverInternet<>(stream);
}
public static StreamLinesOverInternet<String> stream(final String uriString)
{
Objects.requireNonNull(uriString);
if (uriString.isBlank())
{
throw new IllegalArgumentException("uri cannot be blank!");
}
try
{
final URI uri = new URI(uriString);
return stream(uri);
}
catch (final Exception exception)
{
throw new RuntimeException(exception);
}
}
private
static
<A, B, C>
C
convertStream
(
final Supplier<A> currentStreamSupplier,
final Function<A, ? extends B> function,
final Function<Supplier<B>, C> constructor
)
{
Objects.requireNonNull(currentStreamSupplier);
Objects.requireNonNull(function);
Objects.requireNonNull(constructor);
final Supplier<B> nextStreamSupplier =
() ->
{
final A currentStream = currentStreamSupplier.get();
final B nextStream = function.apply(currentStream);
return nextStream;
}
;
return constructor.apply(nextStreamSupplier);
}
private <U> StreamLinesOverInternet<U> continueStreamSafely(final Function<Stream<T>, Stream<U>> function)
{
return
convertStream
(
this.encapsulatedStream,
function,
StreamLinesOverInternet::new
)
;
}
private <U> U terminateWithValueSafely(final Function<Stream<T>, U> function)
{
try
(
final Stream<T> stream = this.encapsulatedStream.get();
)
{
Objects.requireNonNull(function);
return function.apply(stream);
}
catch (final Exception exception)
{
throw new RuntimeException(exception);
}
}
private void terminateSafely(final Consumer<Stream<T>> consumer)
{
try
(
final Stream<T> stream = this.encapsulatedStream.get();
)
{
Objects.requireNonNull(consumer);
consumer.accept(stream);
}
catch (final Exception exception)
{
throw new RuntimeException(exception);
}
}
@Override
public Optional<T> findAny()
{
return this.terminateWithValueSafely(Stream::findAny);
}
@Override
public Optional<T> findFirst()
{
return this.terminateWithValueSafely(Stream::findFirst);
}
@Override
public boolean noneMatch(final Predicate<? super T> predicate)
{
return this.terminateWithValueSafely(stream -> stream.noneMatch(predicate));
}
@Override
public boolean allMatch(final Predicate<? super T> predicate)
{
return this.terminateWithValueSafely(stream -> stream.allMatch(predicate));
}
@Override
public boolean anyMatch(final Predicate<? super T> predicate)
{
return this.terminateWithValueSafely(stream -> stream.anyMatch(predicate));
}
@Override
public long count()
{
return this.terminateWithValueSafely(Stream::count);
}
@Override
public Optional<T> max(final Comparator<? super T> comparator)
{
return this.terminateWithValueSafely(stream -> stream.max(comparator));
}
@Override
public Optional<T> min(final Comparator<? super T> comparator)
{
return this.terminateWithValueSafely(stream -> stream.min(comparator));
}
@Override
public <R, A> R collect(final Collector<? super T, A, R> collector)
{
return this.terminateWithValueSafely(stream -> stream.collect(collector));
}
@Override
public <R> R collect(final Supplier<R> supplier, final BiConsumer<R, ? super T> accumulator, BiConsumer<R, R> combiner)
{
return this.terminateWithValueSafely(stream -> stream.collect(supplier, accumulator, combiner));
}
@Override
public <U> U reduce(final U identity, final BiFunction<U,? super T,U> accumulator, final BinaryOperator<U> combiner)
{
return this.terminateWithValueSafely(stream -> stream.reduce(identity, accumulator, combiner));
}
@Override
public Optional<T> reduce(final BinaryOperator<T> accumulator)
{
return this.terminateWithValueSafely(stream -> stream.reduce(accumulator));
}
@Override
public T reduce(final T identity, final BinaryOperator<T> accumulator)
{
return this.terminateWithValueSafely(stream -> stream.reduce(identity, accumulator));
}
@Override
public <A> A[] toArray(final IntFunction<A[]> generator)
{
return this.terminateWithValueSafely(stream -> stream.toArray(generator));
}
@Override
public Object[] toArray()
{
return this.terminateWithValueSafely(Stream::toArray);
}
@Override
public void forEachOrdered(final Consumer<? super T> action)
{
this.terminateSafely(stream -> stream.forEachOrdered(action));
}
@Override
public void forEach(final Consumer<? super T> action)
{
this.terminateSafely(stream -> stream.forEach(action));
}
@Override
public StreamLinesOverInternet<T> skip(final long n)
{
return this.continueStreamSafely(stream -> stream.skip(n));
}
@Override
public StreamLinesOverInternet<T> limit(final long maxSize)
{
return this.continueStreamSafely(stream -> stream.limit(maxSize));
}
@Override
public StreamLinesOverInternet<T> peek(final Consumer<? super T> action)
{
return this.continueStreamSafely(stream -> stream.peek(action));
}
@Override
public StreamLinesOverInternet<T> sorted(final Comparator<? super T> comparator)
{
return this.continueStreamSafely(stream -> stream.sorted(comparator));
}
@Override
public StreamLinesOverInternet<T> sorted()
{
return this.continueStreamSafely(Stream::sorted);
}
@Override
public StreamLinesOverInternet<T> distinct()
{
return this.continueStreamSafely(Stream::distinct);
}
@Override
public DoubleVersion flatMapToDouble(final Function<? super T, ? extends DoubleStream> function)
{
return
convertStream
(
this.encapsulatedStream,
stream -> stream.flatMapToDouble(function),
DoubleVersion::new
)
;
}
@Override
public LongVersion flatMapToLong(final Function<? super T, ? extends LongStream> function)
{
return
convertStream
(
this.encapsulatedStream,
stream -> stream.flatMapToLong(function),
LongVersion::new
)
;
}
@Override
public IntVersion flatMapToInt(final Function<? super T, ? extends IntStream> function)
{
return
convertStream
(
this.encapsulatedStream,
stream -> stream.flatMapToInt(function),
IntVersion::new
)
;
}
@Override
public <R> StreamLinesOverInternet<R> flatMap(final Function<? super T, ? extends Stream<? extends R>> function)
{
Objects.requireNonNull(function);
return this.continueStreamSafely(stream -> stream.flatMap(function));
}
@Override
public DoubleVersion mapToDouble(final ToDoubleFunction<? super T> toDoubleFunction)
{
return
convertStream
(
this.encapsulatedStream,
stream -> stream.mapToDouble(toDoubleFunction),
DoubleVersion::new
)
;
}
@Override
public LongVersion mapToLong(final ToLongFunction<? super T> toLongFunction)
{
return
convertStream
(
this.encapsulatedStream,
stream -> stream.mapToLong(toLongFunction),
LongVersion::new
)
;
}
@Override
public IntVersion mapToInt(final ToIntFunction<? super T> toIntFunction)
{
return
convertStream
(
this.encapsulatedStream,
stream -> stream.mapToInt(toIntFunction),
IntVersion::new
)
;
}
@Override
public <R> StreamLinesOverInternet<R> map(final Function<? super T, ? extends R> function)
{
Objects.requireNonNull(function);
return this.continueStreamSafely(stream -> stream.map(function));
}
@Override
public StreamLinesOverInternet<T> filter(final Predicate<? super T> predicate)
{
Objects.requireNonNull(predicate);
return this.continueStreamSafely(stream -> stream.filter(predicate));
}
@Override
public void close()
{
this.terminateSafely(Stream::close);
System.out.println("closed " + this);
}
@Override
public StreamLinesOverInternet<T> onClose(final Runnable runnable)
{
Objects.requireNonNull(runnable);
return this.continueStreamSafely(stream -> stream.onClose(runnable));
}
@Override
public StreamLinesOverInternet<T> unordered()
{
return this.continueStreamSafely(Stream::unordered);
}
@Override
public StreamLinesOverInternet<T> parallel()
{
return this.continueStreamSafely(Stream::parallel);
}
@Override
public StreamLinesOverInternet<T> sequential()
{
return this.continueStreamSafely(Stream::sequential);
}
@Override
//THIS FEELS LIKE A HORRIBLE IDEA, and yet, it doesn't seem that bad.
public boolean isParallel()
{
return this.terminateWithValueSafely(Stream::isParallel);
}
@Override
public Spliterator<T> spliterator()
{
final List<T> list = this.toList();
return list.spliterator();
}
@Override
public Iterator<T> iterator()
{
final Spliterator<T> spliterator = this.spliterator();
return Spliterators.iterator(spliterator);
}
public static class IntVersion
implements IntStream
{
private final Supplier<IntStream> encapsulatedStream;
private IntVersion(final Supplier<IntStream> encapsulatedStream)
{
Objects.requireNonNull(encapsulatedStream);
this.encapsulatedStream = encapsulatedStream;
}
private IntVersion continueStreamSafely(final UnaryOperator<IntStream> function)
{
return
convertStream
(
this.encapsulatedStream,
function,
IntVersion::new
)
;
}
private <U> U terminateWithValueSafely(final Function<IntStream, U> function)
{
try
(
final IntStream stream = this.encapsulatedStream.get();
)
{
Objects.requireNonNull(function);
return function.apply(stream);
}
catch (final Exception exception)
{
throw new RuntimeException(exception);
}
}
private void terminateSafely(final Consumer<IntStream> consumer)
{
try
(
final IntStream stream = this.encapsulatedStream.get();
)
{
Objects.requireNonNull(consumer);
consumer.accept(stream);
}
catch (final Exception exception)
{
throw new RuntimeException(exception);
}
}
@Override
public Spliterator.OfInt spliterator()
{
final int[] array = this.terminateWithValueSafely(IntStream::toArray);
return Arrays.spliterator(array);
}
@Override
public PrimitiveIterator.OfInt iterator()
{
final Spliterator.OfInt spliterator = this.spliterator();
return Spliterators.iterator(spliterator);
}
@Override
public StreamLinesOverInternet.IntVersion parallel()
{
return this.continueStreamSafely(IntStream::parallel);
}
@Override
public StreamLinesOverInternet.IntVersion sequential()
{
return this.continueStreamSafely(IntStream::sequential);
}
@Override
public StreamLinesOverInternet<Integer> boxed()
{
return
convertStream
(
this.encapsulatedStream,
IntStream::boxed,
StreamLinesOverInternet::new
)
;
}
@Override
public OptionalInt findAny()
{
return this.terminateWithValueSafely(IntStream::findAny);
}
@Override
public OptionalInt findFirst()
{
return this.terminateWithValueSafely(IntStream::findFirst);
}
@Override
public boolean noneMatch(final IntPredicate intPredicate)
{
Objects.requireNonNull(intPredicate);
return this.terminateWithValueSafely(intStream -> intStream.noneMatch(intPredicate));
}
@Override
public boolean allMatch(final IntPredicate intPredicate)
{
Objects.requireNonNull(intPredicate);
return this.terminateWithValueSafely(intStream -> intStream.allMatch(intPredicate));
}
@Override
public boolean anyMatch(final IntPredicate intPredicate)
{
Objects.requireNonNull(intPredicate);
return this.terminateWithValueSafely(intStream -> intStream.anyMatch(intPredicate));
}
@Override
public IntSummaryStatistics summaryStatistics()
{
return this.terminateWithValueSafely(IntStream::summaryStatistics);
}
@Override
public OptionalDouble average()
{
return this.terminateWithValueSafely(IntStream::average);
}
@Override
public long count()
{
return this.terminateWithValueSafely(IntStream::count);
}
@Override
public OptionalInt max()
{
return this.terminateWithValueSafely(IntStream::max);
}
@Override
public OptionalInt min()
{
return this.terminateWithValueSafely(IntStream::min);
}
@Override
public int sum()
{
return this.terminateWithValueSafely(IntStream::sum);
}
@Override
public <R> R collect(final Supplier<R> supplier, final ObjIntConsumer<R> accumulator, BiConsumer<R, R> combiner)
{
Objects.requireNonNull(supplier);
Objects.requireNonNull(accumulator);
Objects.requireNonNull(combiner);
return this.terminateWithValueSafely(intStream -> intStream.collect(supplier, accumulator, combiner));
}
@Override
public OptionalInt reduce(final IntBinaryOperator intBinaryOperator)
{
Objects.requireNonNull(intBinaryOperator);
return this.terminateWithValueSafely(intStream -> intStream.reduce(intBinaryOperator));
}
@Override
public int reduce(final int identity, final IntBinaryOperator intBinaryOperator)
{
Objects.requireNonNull(intBinaryOperator);
return this.terminateWithValueSafely(intStream -> intStream.reduce(identity, intBinaryOperator));
}
@Override
public int[] toArray()
{
return this.terminateWithValueSafely(IntStream::toArray);
}
@Override
public void forEachOrdered(final IntConsumer intConsumer)
{
Objects.requireNonNull(intConsumer);
this.terminateSafely(intStream -> intStream.forEachOrdered(intConsumer));
}
@Override
public void forEach(final IntConsumer intConsumer)
{
Objects.requireNonNull(intConsumer);
this.terminateSafely(intStream -> intStream.forEach(intConsumer));
}
@Override
public IntVersion skip(final long n)
{
return this.continueStreamSafely(intStream -> intStream.skip(n));
}
@Override
public IntVersion limit(final long n)
{
return this.continueStreamSafely(intStream -> intStream.limit(n));
}
@Override
public IntVersion peek(final IntConsumer intConsumer)
{
Objects.requireNonNull(intConsumer);
return this.continueStreamSafely(intStream -> intStream.peek(intConsumer));
}
@Override
public IntVersion sorted()
{
return this.continueStreamSafely(IntStream::sorted);
}
@Override
public IntVersion distinct()
{
return this.continueStreamSafely(IntStream::distinct);
}
@Override
public IntVersion flatMap(final IntFunction<? extends IntStream> intFunction)
{
Objects.requireNonNull(intFunction);
return this.continueStreamSafely(intStream -> intStream.flatMap(intFunction));
}
@Override
public DoubleVersion asDoubleStream()
{
final Supplier<DoubleStream> nextStreamSupplier =
() ->
{
final IntStream currentStream = this.encapsulatedStream.get();
final DoubleStream nextStream = currentStream.asDoubleStream();
return nextStream;
}
;
return new DoubleVersion(nextStreamSupplier);
}
@Override
public LongVersion asLongStream()
{
final Supplier<LongStream> nextStreamSupplier =
() ->
{
final IntStream currentStream = this.encapsulatedStream.get();
final LongStream nextStream = currentStream.asLongStream();
return nextStream;
}
;
return new LongVersion(nextStreamSupplier);
}
@Override
public DoubleVersion mapToDouble(final IntToDoubleFunction intToDoubleFunction)
{
Objects.requireNonNull(intToDoubleFunction);
final Supplier<DoubleStream> nextStreamSupplier =
() ->
{
final IntStream currentStream = this.encapsulatedStream.get();
final DoubleStream nextStream = currentStream.mapToDouble(intToDoubleFunction);
return nextStream;
}
;
return new DoubleVersion(nextStreamSupplier);
}
@Override
public LongVersion mapToLong(final IntToLongFunction intToLongFunction)
{
Objects.requireNonNull(intToLongFunction);
final Supplier<LongStream> nextStreamSupplier =
() ->
{
final IntStream currentStream = this.encapsulatedStream.get();
final LongStream nextStream = currentStream.mapToLong(intToLongFunction);
return nextStream;
}
;
return new LongVersion(nextStreamSupplier);
}
@Override
public <U> StreamLinesOverInternet<U> mapToObj(final IntFunction<? extends U> intFunction)
{
Objects.requireNonNull(intFunction);
final Supplier<Stream<U>> nextStreamSupplier =
() ->
{
final IntStream currentStream = this.encapsulatedStream.get();
final Stream<U> nextStream = currentStream.mapToObj(intFunction);
return nextStream;
}
;
return new StreamLinesOverInternet<>(nextStreamSupplier);
}
@Override
public IntVersion map(final IntUnaryOperator intUnaryOperator)
{
Objects.requireNonNull(intUnaryOperator);
return this.continueStreamSafely(intStream -> intStream.map(intUnaryOperator));
}
@Override
public IntVersion filter(final IntPredicate intPredicate)
{
Objects.requireNonNull(intPredicate);
return this.continueStreamSafely(intStream -> intStream.filter(intPredicate));
}
@Override
public void close()
{
this.terminateSafely(IntStream::close);
System.out.println("closed " + this);
}
@Override
public IntVersion onClose(final Runnable runnable)
{
Objects.requireNonNull(runnable);
return this.continueStreamSafely(intStream -> intStream.onClose(runnable));
}
@Override
public IntVersion unordered()
{
return this.continueStreamSafely(IntStream::unordered);
}
@Override
public boolean isParallel()
{
try
(
final IntStream stream = this.encapsulatedStream.get()
)
{
return stream.isParallel();
}
catch (final Exception exception)
{
throw new IllegalStateException(exception);
}
}
}
public static class LongVersion
implements LongStream
{
private final Supplier<LongStream> encapsulatedStream;
private LongVersion(final Supplier<LongStream> encapsulatedStream)
{
Objects.requireNonNull(encapsulatedStream);
this.encapsulatedStream = encapsulatedStream;
}
private LongVersion continueStreamSafely(final UnaryOperator<LongStream> function)
{
Objects.requireNonNull(function);
final Supplier<LongStream> nextStreamSupplier =
() ->
{
final LongStream currentStream = this.encapsulatedStream.get();
final LongStream nextStream = function.apply(currentStream);
return nextStream;
}
;
return new LongVersion(nextStreamSupplier);
}
private <U> U terminateWithValueSafely(final Function<LongStream, U> function)
{
try
(
final LongStream stream = this.encapsulatedStream.get();
)
{
Objects.requireNonNull(function);
return function.apply(stream);
}
catch (final Exception exception)
{
throw new RuntimeException(exception);
}
}
private void terminateSafely(final Consumer<LongStream> consumer)
{
try
(
final LongStream stream = this.encapsulatedStream.get();
)
{
Objects.requireNonNull(consumer);
consumer.accept(stream);
}
catch (final Exception exception)
{
throw new RuntimeException(exception);
}
}
@Override
public Spliterator.OfLong spliterator()
{
final long[] array = this.terminateWithValueSafely(LongStream::toArray);
return Arrays.spliterator(array);
}
@Override
public PrimitiveIterator.OfLong iterator()
{
final Spliterator.OfLong spliterator = this.spliterator();
return Spliterators.iterator(spliterator);
}
@Override
public StreamLinesOverInternet.LongVersion parallel()
{
return this.continueStreamSafely(LongStream::parallel);
}
@Override
public StreamLinesOverInternet.LongVersion sequential()
{
return this.continueStreamSafely(LongStream::sequential);
}
@Override
public StreamLinesOverInternet<Long> boxed()
{
final Supplier<Stream<Long>> nextStreamSupplier =
() ->
{
final LongStream currentStream = this.encapsulatedStream.get();
final Stream<Long> nextStream = currentStream.boxed();
return nextStream;
}
;
return new StreamLinesOverInternet<>(nextStreamSupplier);
}
@Override
public OptionalLong findAny()
{
return this.terminateWithValueSafely(LongStream::findAny);
}
@Override
public OptionalLong findFirst()
{
return this.terminateWithValueSafely(LongStream::findFirst);
}
@Override
public boolean noneMatch(final LongPredicate longPredicate)
{
Objects.requireNonNull(longPredicate);
return this.terminateWithValueSafely(longStream -> longStream.noneMatch(longPredicate));
}
@Override
public boolean allMatch(final LongPredicate longPredicate)
{
Objects.requireNonNull(longPredicate);
return this.terminateWithValueSafely(longStream -> longStream.allMatch(longPredicate));
}
@Override
public boolean anyMatch(final LongPredicate longPredicate)
{
Objects.requireNonNull(longPredicate);
return this.terminateWithValueSafely(longStream -> longStream.anyMatch(longPredicate));
}
@Override
public LongSummaryStatistics summaryStatistics()
{
return this.terminateWithValueSafely(LongStream::summaryStatistics);
}
@Override
public OptionalDouble average()
{
return this.terminateWithValueSafely(LongStream::average);
}
@Override
public long count()
{
return this.terminateWithValueSafely(LongStream::count);
}
@Override
public OptionalLong max()
{
return this.terminateWithValueSafely(LongStream::max);
}
@Override
public OptionalLong min()
{
return this.terminateWithValueSafely(LongStream::min);
}
@Override
public long sum()
{
return this.terminateWithValueSafely(LongStream::sum);
}
@Override
public <R> R collect(final Supplier<R> supplier, final ObjLongConsumer<R> accumulator, BiConsumer<R, R> combiner)
{
Objects.requireNonNull(supplier);
Objects.requireNonNull(accumulator);
Objects.requireNonNull(combiner);
return this.terminateWithValueSafely(longStream -> longStream.collect(supplier, accumulator, combiner));
}
@Override
public OptionalLong reduce(final LongBinaryOperator longBinaryOperator)
{
Objects.requireNonNull(longBinaryOperator);
return this.terminateWithValueSafely(longStream -> longStream.reduce(longBinaryOperator));
}
@Override
public long reduce(final long identity, final LongBinaryOperator longBinaryOperator)
{
Objects.requireNonNull(longBinaryOperator);
return this.terminateWithValueSafely(longStream -> longStream.reduce(identity, longBinaryOperator));
}
@Override
public long[] toArray()
{
return this.terminateWithValueSafely(LongStream::toArray);
}
@Override
public void forEachOrdered(final LongConsumer longConsumer)
{
Objects.requireNonNull(longConsumer);
this.terminateSafely(longStream -> longStream.forEachOrdered(longConsumer));
}
@Override
public void forEach(final LongConsumer longConsumer)
{
Objects.requireNonNull(longConsumer);
this.terminateSafely(longStream -> longStream.forEach(longConsumer));
}
@Override
public LongVersion skip(final long n)
{
return this.continueStreamSafely(longStream -> longStream.skip(n));
}
@Override
public LongVersion limit(final long n)
{
return this.continueStreamSafely(longStream -> longStream.limit(n));
}
@Override
public LongVersion peek(final LongConsumer longConsumer)
{
Objects.requireNonNull(longConsumer);
return this.continueStreamSafely(longStream -> longStream.peek(longConsumer));
}
@Override
public LongVersion sorted()
{
return this.continueStreamSafely(LongStream::sorted);
}
@Override
public LongVersion distinct()
{
return this.continueStreamSafely(LongStream::distinct);
}
@Override
public LongVersion flatMap(final LongFunction<? extends LongStream> longFunction)
{
Objects.requireNonNull(longFunction);
return this.continueStreamSafely(longStream -> longStream.flatMap(longFunction));
}
@Override
public DoubleVersion asDoubleStream()
{
final Supplier<DoubleStream> nextStreamSupplier =
() ->
{
final LongStream currentStream = this.encapsulatedStream.get();
final DoubleStream nextStream = currentStream.asDoubleStream();
return nextStream;
}
;
return new DoubleVersion(nextStreamSupplier);
}
@Override
public DoubleVersion mapToDouble(final LongToDoubleFunction longToDoubleFunction)
{
return
convertStream
(
this.encapsulatedStream,
stream -> stream.mapToDouble(longToDoubleFunction),
DoubleVersion::new
)
;
}
@Override
public IntVersion mapToInt(final LongToIntFunction longToIntFunction)
{
return
convertStream
(
this.encapsulatedStream,
stream -> stream.mapToInt(longToIntFunction),
IntVersion::new
)
;
}
@Override
public <U> StreamLinesOverInternet<U> mapToObj(final LongFunction<? extends U> longFunction)
{
return
convertStream
(
this.encapsulatedStream,
stream -> stream.mapToObj(longFunction),
StreamLinesOverInternet<U>::new
)
;
}
@Override
public LongVersion map(final LongUnaryOperator longUnaryOperator)
{
Objects.requireNonNull(longUnaryOperator);
return this.continueStreamSafely(longStream -> longStream.map(longUnaryOperator));
}
@Override
public LongVersion filter(final LongPredicate longPredicate)
{
Objects.requireNonNull(longPredicate);
return this.continueStreamSafely(longStream -> longStream.filter(longPredicate));
}
@Override
public void close()
{
this.terminateSafely(LongStream::close);
}
@Override
public LongVersion onClose(final Runnable runnable)
{
Objects.requireNonNull(runnable);
return this.continueStreamSafely(longStream -> longStream.onClose(runnable));
}
@Override
public LongVersion unordered()
{
return this.continueStreamSafely(LongStream::unordered);
}
@Override
public boolean isParallel()
{
return this.terminateWithValueSafely(LongStream::isParallel);
}
}
public static class DoubleVersion
implements DoubleStream
{
private final Supplier<DoubleStream> encapsulatedStream;
private DoubleVersion(final Supplier<DoubleStream> encapsulatedStream)
{
Objects.requireNonNull(encapsulatedStream);
this.encapsulatedStream = encapsulatedStream;
}
private DoubleVersion continueStreamSafely(final UnaryOperator<DoubleStream> function)
{
return
convertStream
(
this.encapsulatedStream,
function,
DoubleVersion::new
)
;
}
private <U> U terminateWithValueSafely(final Function<DoubleStream, U> function)
{
try
(
final DoubleStream stream = this.encapsulatedStream.get();
)
{
Objects.requireNonNull(function);
return function.apply(stream);
}
catch (final Exception exception)
{
throw new RuntimeException(exception);
}
}
private void terminateSafely(final Consumer<DoubleStream> consumer)
{
try
(
final DoubleStream stream = this.encapsulatedStream.get();
)
{
Objects.requireNonNull(consumer);
consumer.accept(stream);
}
catch (final Exception exception)
{
throw new RuntimeException(exception);
}
}
@Override
public Spliterator.OfDouble spliterator()
{
final double[] array = this.terminateWithValueSafely(DoubleStream::toArray);
return Arrays.spliterator(array);
}
@Override
public PrimitiveIterator.OfDouble iterator()
{
final Spliterator.OfDouble spliterator = this.spliterator();
return Spliterators.iterator(spliterator);
}
@Override
public StreamLinesOverInternet.DoubleVersion parallel()
{
return this.continueStreamSafely(DoubleStream::parallel);
}
@Override
public StreamLinesOverInternet.DoubleVersion sequential()
{
return this.continueStreamSafely(DoubleStream::sequential);
}
@Override
public StreamLinesOverInternet<Double> boxed()
{
return
convertStream
(
this.encapsulatedStream,
DoubleStream::boxed,
StreamLinesOverInternet<Double>::new
)
;
}
@Override
public OptionalDouble findAny()
{
return this.terminateWithValueSafely(DoubleStream::findAny);
}
@Override
public OptionalDouble findFirst()
{
return this.terminateWithValueSafely(DoubleStream::findFirst);
}
@Override
public boolean noneMatch(final DoublePredicate doublePredicate)
{
Objects.requireNonNull(doublePredicate);
return this.terminateWithValueSafely(doubleStream -> doubleStream.noneMatch(doublePredicate));
}
@Override
public boolean allMatch(final DoublePredicate doublePredicate)
{
Objects.requireNonNull(doublePredicate);
return this.terminateWithValueSafely(doubleStream -> doubleStream.allMatch(doublePredicate));
}
@Override
public boolean anyMatch(final DoublePredicate doublePredicate)
{
Objects.requireNonNull(doublePredicate);
return this.terminateWithValueSafely(doubleStream -> doubleStream.anyMatch(doublePredicate));
}
@Override
public DoubleSummaryStatistics summaryStatistics()
{
return this.terminateWithValueSafely(DoubleStream::summaryStatistics);
}
@Override
public OptionalDouble average()
{
return this.terminateWithValueSafely(DoubleStream::average);
}
@Override
public long count()
{
return this.terminateWithValueSafely(DoubleStream::count);
}
@Override
public OptionalDouble max()
{
return this.terminateWithValueSafely(DoubleStream::max);
}
@Override
public OptionalDouble min()
{
return this.terminateWithValueSafely(DoubleStream::min);
}
@Override
public double sum()
{
return this.terminateWithValueSafely(DoubleStream::sum);
}
@Override
public <R> R collect(final Supplier<R> supplier, final ObjDoubleConsumer<R> accumulator, BiConsumer<R, R> combiner)
{
Objects.requireNonNull(supplier);
Objects.requireNonNull(accumulator);
Objects.requireNonNull(combiner);
return this.terminateWithValueSafely(doubleStream -> doubleStream.collect(supplier, accumulator, combiner));
}
@Override
public OptionalDouble reduce(final DoubleBinaryOperator doubleBinaryOperator)
{
Objects.requireNonNull(doubleBinaryOperator);
return this.terminateWithValueSafely(doubleStream -> doubleStream.reduce(doubleBinaryOperator));
}
@Override
public double reduce(final double identity, final DoubleBinaryOperator doubleBinaryOperator)
{
Objects.requireNonNull(doubleBinaryOperator);
return this.terminateWithValueSafely(doubleStream -> doubleStream.reduce(identity, doubleBinaryOperator));
}
@Override
public double[] toArray()
{
return this.terminateWithValueSafely(DoubleStream::toArray);
}
@Override
public void forEachOrdered(final DoubleConsumer doubleConsumer)
{
Objects.requireNonNull(doubleConsumer);
this.terminateSafely(doubleStream -> doubleStream.forEachOrdered(doubleConsumer));
}
@Override
public void forEach(final DoubleConsumer doubleConsumer)
{
Objects.requireNonNull(doubleConsumer);
this.terminateSafely(doubleStream -> doubleStream.forEach(doubleConsumer));
}
@Override
public DoubleVersion skip(final long n)
{
return this.continueStreamSafely(doubleStream -> doubleStream.skip(n));
}
@Override
public DoubleVersion limit(final long n)
{
return this.continueStreamSafely(doubleStream -> doubleStream.limit(n));
}
@Override
public DoubleVersion peek(final DoubleConsumer doubleConsumer)
{
Objects.requireNonNull(doubleConsumer);
return this.continueStreamSafely(doubleStream -> doubleStream.peek(doubleConsumer));
}
@Override
public DoubleVersion sorted()
{
return this.continueStreamSafely(DoubleStream::sorted);
}
@Override
public DoubleVersion distinct()
{
return this.continueStreamSafely(DoubleStream::distinct);
}
@Override
public DoubleVersion flatMap(final DoubleFunction<? extends DoubleStream> doubleFunction)
{
Objects.requireNonNull(doubleFunction);
return this.continueStreamSafely(doubleStream -> doubleStream.flatMap(doubleFunction));
}
@Override
public LongVersion mapToLong(final DoubleToLongFunction doubleToLongFunction)
{
Objects.requireNonNull(doubleToLongFunction);
return
convertStream
(
this.encapsulatedStream,
stream -> stream.mapToLong(doubleToLongFunction),
LongVersion::new
)
;
}
@Override
public IntVersion mapToInt(final DoubleToIntFunction doubleToIntFunction)
{
Objects.requireNonNull(doubleToIntFunction);
return
convertStream
(
this.encapsulatedStream,
stream -> stream.mapToInt(doubleToIntFunction),
IntVersion::new
)
;
}
@Override
public <U> StreamLinesOverInternet<U> mapToObj(final DoubleFunction<? extends U> doubleFunction)
{
Objects.requireNonNull(doubleFunction);
return
convertStream
(
this.encapsulatedStream,
stream -> stream.mapToObj(doubleFunction),
StreamLinesOverInternet<U>::new
)
;
}
@Override
public DoubleVersion map(final DoubleUnaryOperator doubleUnaryOperator)
{
Objects.requireNonNull(doubleUnaryOperator);
return this.continueStreamSafely(doubleStream -> doubleStream.map(doubleUnaryOperator));
}
@Override
public DoubleVersion filter(final DoublePredicate doublePredicate)
{
Objects.requireNonNull(doublePredicate);
return this.continueStreamSafely(doubleStream -> doubleStream.filter(doublePredicate));
}
@Override
public void close()
{
this.terminateSafely(DoubleStream::close);
System.out.println(this);
}
@Override
public DoubleVersion onClose(final Runnable runnable)
{
Objects.requireNonNull(runnable);
return this.continueStreamSafely(doubleStream -> doubleStream.onClose(runnable));
}
@Override
public DoubleVersion unordered()
{
return this.continueStreamSafely(DoubleStream::unordered);
}
@Override
public boolean isParallel()
{
return this.terminateWithValueSafely(DoubleStream::isParallel);
}
}
}
```
Answer: try-with-resources and exception handling are features, not bugs. The vastly less complicated approach is to use Java as Java was intended - write an AutoCloseable utility class. For a very typical example of a situation where the JDK itself operates this way, see DirectoryStream. This is how Java is designed to work. Attempting otherwise is going to be some mix of non-constructive, difficult, non-idiomatic, or "the bad kind of surprise".
I am opening a connection to the internet just to check if my stream is parallel. I don't know how terrible that is
Very.
TWR keeps getting forgotten.
If someone keeps forgetting to hold the steering wheel, the solution is not to remove the steering wheel and build a more complicated, less accessible steering wheel behind the dashboard. Your entry-level developers should be learning how to write idiomatic Java, rather than learning (or - as the case may be - not learning) a custom framework that attempts to be too clever. There are standard tools, some mentioned in the comments, that flag when resources are mismanaged. Learn to use these tools.
To quote a separate commenter,
I would like to consume a stream shared between threads. The stream should autodetect when it is not used any more
It's only safe to close the stream once we can guarantee that all threads are done (i.e. after a join). This can still use AutoCloseable. And what does it even mean to auto-detect when a stream isn't used any more? When it hits EOF? No: because then we break the case where a user wants to perform special logic on EOF, or seek away from EOF. How about when it "goes out of scope"? Other than TWR, this is not possible, since (unlike, say, C++) Java scope is not directly coupled to memory management. The clearest and least-surprising mechanism to declare that we're done with a stream is not automatic, but explicit.
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.URI;
import java.net.URISyntaxException;
import java.util.stream.Stream;
public class Main {
public static class HTTPLineStream implements AutoCloseable {
public final URI uri;
private final BufferedReader reader;
public HTTPLineStream(URI uri) throws IOException {
this.uri = uri;
reader = new BufferedReader(
new InputStreamReader(
uri.toURL().openStream()
)
);
}
public HTTPLineStream(String uri) throws URISyntaxException, IOException {
this(new URI(uri));
}
public Stream<String> lines() {
return reader.lines();
}
@Override
public void close() throws IOException {
// also closes the inner stream
reader.close();
}
}
public static void main(String[] args) {
try (HTTPLineStream stream = new HTTPLineStream(
"https://raw.githubusercontent.com/nytimes/covid-19-data/master/us.csv"
)) {
stream.lines()
.limit(10)
.forEach(System.out::println);
}
catch (IOException e) {
e.printStackTrace();
}
catch (URISyntaxException e) {
e.printStackTrace();
}
}
}
date,cases,deaths
2020-01-21,1,0
2020-01-22,1,0
2020-01-23,1,0
2020-01-24,2,0
2020-01-25,3,0
2020-01-26,5,0
2020-01-27,5,0
2020-01-28,5,0
2020-01-29,5,0 | {
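The multi-threaded point above — join every consumer first, then let try-with-resources close — can be sketched the same way. This is a stand-in demo, not part of the original answer: a StringReader replaces the network reader, and the two-threads-times-two-lines split is purely illustrative.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;
import java.io.UncheckedIOException;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class SharedStreamDemo {

    // Two threads each consume two lines from one shared reader.
    // BufferedReader.readLine() is synchronized, so every line is
    // delivered to exactly one consumer.
    static List<String> readWithTwoThreads(String input) {
        List<String> seen = new CopyOnWriteArrayList<>();
        try (BufferedReader reader = new BufferedReader(new StringReader(input))) {
            Runnable worker = () -> reader.lines().limit(2).forEach(seen::add);
            Thread t1 = new Thread(worker);
            Thread t2 = new Thread(worker);
            t1.start();
            t2.start();
            t1.join();              // all consumers are guaranteed done here...
            t2.join();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException(e);
        }                           // ...so try-with-resources may now close
        return seen;
    }

    public static void main(String[] args) {
        List<String> seen = readWithTwoThreads("a\nb\nc\nd");
        if (seen.size() != 4) {
            throw new AssertionError("expected 4 lines, got " + seen);
        }
        System.out.println("all four lines consumed exactly once: " + seen);
    }
}
```

The interleaving between the two threads is nondeterministic, but the close is not: it is explicit, and it happens strictly after both joins.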
"domain": "codereview.stackexchange",
"id": 45553,
"tags": "java, stream, lazy"
} |
Running xinput to change touchpad settings | Question: I am looking for some feedback on the code below, mainly for efficiency or mechanism correctness (like eval vs. subprocess). I am also curious if .find() is the best mechanism to use here. I am not familiar with regex, so my crutch (or benefit, depending) is using .find() over learning a sublanguage like regex.
import subprocess
get_device_id = subprocess.Popen(["xinput", "list", "SynPS/2 Synaptics TouchPad"], stdout=subprocess.PIPE)
gdi_str = str(get_device_id.stdout.read())
gdi_id_find = (gdi_str[gdi_str.find('id='):])
gdi_len = 3
gdi = (gdi_id_find[gdi_len:5])
if gdi.isdigit():
device_id = gdi
else:
pass
get_prop_id = subprocess.Popen(["xinput", "list-props", "15"], stdout=subprocess.PIPE)
r = str(get_prop_id.stdout.read())
if "2 Synaptics TouchPad" in r:
b = (r[r.find('libinput Tapping Enabled ('):])
bLen = len('libinput Tapping Enabled (')
b = (b[bLen:b.find(')')])
if b.isdigit():
prop_id = b
else:
pass
subprocess.run(["xinput", "set-prop", device_id, prop_id, "1"])
This code parses the output of:
xinput list SynPS/2 Synaptics TouchPad
to get a numerical device ID, in this case 15:
SynPS/2 Synaptics TouchPad id=15 [slave pointer (2)]
The code then runs:
xinput list-props 15 # 15 is our parsed id
and parses this output:
Device 'SynPS/2 Synaptics TouchPad':
Device Enabled (147): 1
Coordinate Transformation Matrix (149): 1.000000, 0.000000, 0.000000, 0.000000, 1.000000, 0.000000, 0.000000, 0.000000, 1.000000
libinput Tapping Enabled (300): 0
libinput Tapping Enabled Default (301): 0
libinput Tapping Drag Enabled (302): 1
libinput Tapping Drag Enabled Default (303): 1
libinput Tapping Drag Lock Enabled (304): 0
libinput Tapping Drag Lock Enabled Default (305): 0
libinput Tapping Button Mapping Enabled (306): 1, 0
libinput Tapping Button Mapping Default (307): 1, 0
libinput Natural Scrolling Enabled (282): 0
libinput Natural Scrolling Enabled Default (283): 0
libinput Disable While Typing Enabled (308): 1
libinput Disable While Typing Enabled Default (309): 1
libinput Scroll Methods Available (286): 1, 1, 0
libinput Scroll Method Enabled (287): 1, 0, 0
libinput Scroll Method Enabled Default (288): 1, 0, 0
libinput Click Methods Available (310): 1, 1
libinput Click Method Enabled (311): 1, 0
libinput Click Method Enabled Default (312): 1, 0
libinput Middle Emulation Enabled (291): 0
libinput Middle Emulation Enabled Default (292): 0
libinput Accel Speed (293): 0.000000
libinput Accel Speed Default (294): 0.000000
libinput Left Handed Enabled (298): 0
libinput Left Handed Enabled Default (299): 0
libinput Send Events Modes Available (267): 1, 1
libinput Send Events Mode Enabled (268): 0, 0
libinput Send Events Mode Enabled Default (269): 0, 0
Device Node (270): "/dev/input/event10"
Device Product ID (271): 2, 7
libinput Drag Lock Buttons (284): <no items>
libinput Horizontal Scroll Enabled (285): 1
for the purposes of getting the prop-id for "libinput Tapping Enabled", in this case 300:
libinput Tapping Enabled (300): 0
Once the value is established, the code does a basic check for .isdigit() and if True, presents the values to:
subprocess.run(["xinput", "set-prop", device_id, prop_id, "1"])
Which ultimately sets the value to 1:
libinput Tapping Enabled (300): 1
Answer: There's no need for a redundant pass after your if; these can be deleted:
else:
pass
xinput --list and xinput --list-props aren't long-running commands. It's simpler to subprocess.run() them, with standard output redirected to a variable:
import subprocess
import sys

get_device_id = subprocess.run(["xinput", "list",
                                "SynPS/2 Synaptics TouchPad"],
                               capture_output=True, text=True)
if get_device_id.returncode != 0:
    sys.exit(get_device_id.returncode)
gdi_str = get_device_id.stdout
I don't believe that first command is actually required - xinput --list-props is quite happy to accept a device name instead of a property:
xinput --list-props 'SynPS/2 Synaptics TouchPad' | {
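Putting those suggestions together — run() with captured output, list-props by device name, and a regex anchored on the "(digits):" pattern instead of chained find()s — the whole script collapses to something like the sketch below. The parsing helper is mine, not from the original code, and the actual xinput calls are wrapped in a function so nothing touches the system on import:

```python
import re
import subprocess

PROP_NAME = "libinput Tapping Enabled"


def parse_prop_id(listing: str, prop: str = PROP_NAME) -> str:
    """Extract '300' from a line like 'libinput Tapping Enabled (300): 0'.

    Anchoring on '<name> (<digits>):' also keeps us from matching the
    separate '... Enabled Default (301)' property.
    """
    match = re.search(re.escape(prop) + r"\s+\((\d+)\):", listing)
    if match is None:
        raise ValueError(f"property {prop!r} not found")
    return match.group(1)


def enable_tapping(device: str = "SynPS/2 Synaptics TouchPad") -> None:
    # list-props accepts the device name directly, so the separate
    # `xinput list` id lookup from the original script is unnecessary.
    listing = subprocess.run(
        ["xinput", "list-props", device],
        capture_output=True, text=True, check=True,
    ).stdout
    subprocess.run(
        ["xinput", "set-prop", device, parse_prop_id(listing), "1"],
        check=True,
    )


# The parser works on the captured output shown in the question:
print(parse_prop_id("libinput Tapping Enabled (300): 0"))  # -> 300
```

check=True makes subprocess.run raise on a non-zero exit status, replacing the manual returncode check.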
"domain": "codereview.stackexchange",
"id": 31423,
"tags": "python, beginner, python-3.x, linux, child-process"
} |
Adding classes from a given array or an object | Question: I want to add classes to elements based on a source array or object.
For example, it can be:
classesArr = ["class1", "class2", "class3"];
or
classesObj = {class1: "class1", class2: "class2", class3: "class3"};
the way I currently do it is:
// for arrays
element = document.getElementById("myelement");
classesArr.forEach(className => {
element.classList.add(className);
});
or
// for objects
element = document.getElementById("myelement");
for (const [key, className] of Object.entries(classesObj)) {
element.classList.add(className)
}
But perhaps there are some JavaScript tricks that can do it simpler (maybe even in 1 line?)
Answer: You’re not using the keys of the returned 2D array of Object.entries, so you could use the flat Object.values instead.
More importantly, the DOMTokenList.add method takes a list as its argument already, so there is no need to explicitly iterate as you can just spread the array for it (element.classList.add(...classesArr)).
As a function, you could have the element being worked on be the argument so there wouldn’t be those loosely floating references to elements. | {
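Both one-liners, runnable end to end. Outside a browser there is no document, so a tiny stand-in object with the same variadic add() signature as DOMTokenList is used here — the stand-in is only for the demo:

```javascript
// In a browser: const element = document.getElementById("myelement");
// Stand-in with the same variadic add() so this also runs under Node:
const element = {
  classList: {
    tokens: new Set(),
    add(...names) { names.forEach(n => this.tokens.add(n)); },
  },
};

const classesArr = ["class1", "class2", "class3"];
const classesObj = { class1: "class1", class2: "class2", class3: "class3" };

// classList.add already accepts any number of tokens, so spreading
// replaces both explicit loops:
element.classList.add(...classesArr);
element.classList.add(...Object.values(classesObj));

console.log([...element.classList.tokens].join(" ")); // class1 class2 class3
```

Duplicate tokens are ignored, exactly as a real classList would ignore them.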
"domain": "codereview.stackexchange",
"id": 44404,
"tags": "javascript"
} |
Determine standard enthalpy of formation of salt given other reactions | Question: I'm a little confused by this question, perhaps my understanding of enthalpy isn't as good as it should be.
For the compound $\ce{KNO3}$, which of the following reactions is used to determine the standard enthalpy of formation, $\Delta H^\circ_\mathrm f$?
$$\ce{K(s) + N (g) + O3(g) -> KNO3 (s)\\
2K + N2 + 2O3 -> 2KNO3\\
2K + N2 +3O2 -> 2KNO3\\
K + 1/2 N2 + 3/2 O2 -> KNO3}$$
I know that $\Delta H^\circ_\mathrm f(\ce{KNO3})= \pu{−494.6 kJ/mol}$ from reading a table; I'm just confused about how to get from that number to the equation for the formation of the compound.
Answer: The standard enthalpy of formation $\Delta H_\mathrm f$ refers to the formation of 1 mole of product, with the educts and product being in their respective standard states. The fourth equation is therefore correct, as it gives 1 mole $\ce{KNO3}$ as the product. Equation 3 is equation 4 multiplied by 2, which would give $2\Delta H_\mathrm f$. The first two equations can be ruled out because they do not contain the educts nitrogen and/or oxygen in their standard states, which are $\ce{N2}$ and $\ce{O2}$, respectively. | {
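For concreteness, plugging in the tabulated value quoted in the question (this arithmetic is added for illustration, not part of the original answer):

$$\ce{K + 1/2 N2 + 3/2 O2 -> KNO3}: \quad \Delta H^\circ = \Delta H^\circ_\mathrm f = \pu{−494.6 kJ/mol}$$
$$\ce{2K + N2 + 3O2 -> 2KNO3}: \quad \Delta H^\circ = 2\,\Delta H^\circ_\mathrm f = \pu{−989.2 kJ}$$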
"domain": "chemistry.stackexchange",
"id": 1704,
"tags": "physical-chemistry, thermodynamics"
} |
knowrob_omics & comp_ros not found | Question:
Hello,
I am working with knowrob and so far everything works fine. The problem I have is related to knowrob_omics & comp_ros packages which are not included in knowrob.
I tried locating them using roslocate without any success. I am following the tutorials at the moment and they are packages that are referenced inside the tutorials.
I even tried downloading roboearth but still no luck.
Any help would be appreciated.
Thanks in advance for your reply.
Originally posted by George P on ROS Answers with karma: 11 on 2012-07-23
Post score: 1
Answer:
Hi,
those packages are not yet contained in the public distribution. Please send me an email (http://ias.cs.tum.edu/people/tenorth) and I can give you access to them.
best,
Moritz
P.S. Please tag future questions with "knowrob", then I'll get an email notifier whenever a new question is posted and my response times are faster.
Originally posted by moritz with karma: 2673 on 2012-08-07
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 10318,
"tags": "knowrob"
} |
What if you 'placed' the Chicxulub asteroid onto the Earth? | Question: We all know what sort of devastating impact an asteroid could have if it collided with the Earth.
But I'm curious to know what the effects would be if you removed the speed element from it, and were somehow able to casually 'place' a giant asteroid onto our planet?
I think I read somewhere that the Chicxulub asteroid was most likely around 9 miles long, and that the end of it would have extended beyond our atmosphere if it hit the Earth lengthways.
But would this have any effect if it was travelling at, say, 2mph? There would be nothing burning in the atmosphere or shockwaves etc.
Would it have any effect at all apart from squashing whatever was underneath?
If the answer is no... how about placing a moon-sized object onto the Earth?
Answer: An asteroid resting on Earth would be a mountain. Or, for smaller asteroids, a pile of gravel.
Mountains are limited in altitude by the strength of stone to resist compression: a too tall mountain would sink down as the base crumbled and spread out. The limit on Earth is about 10 km. Besides the strength issue mountains are also floating ("isostasy") on the fluid but denser interior of Earth, so even if it doesn't fall apart it may sink down partially.
So the impactor would not hold together that well. A big ball of rock 10 km in diameter would first definitely sink down several kilometres into the crust, and almost definitely break. That means that something like $10^{15}$ kg will descend several kilometres, giving an energy $mgh \approx 10^{19}$ J (and there are estimates 100 times larger for the mass). That is "just" like a big hurricane, but will mostly be released as localized seismic energy and heating. The end result would be a massive pile that likely destabilizes the crust in the vicinity a lot: far less than an actual high velocity impact, but still a major disaster.
If you do the same with the moon you will break up the crust, boil off the oceans and wipe out all life on the planet. | {
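A quick order-of-magnitude check of the $mgh$ figure above, with assumed round numbers (10 km diameter sphere, rock density of about 3000 kg/m³, sinking a few kilometres):

```python
import math

radius = 5_000.0    # m: a 10 km diameter rock sphere
density = 3_000.0   # kg/m^3: typical rock
g = 9.8             # m/s^2
h = 3_000.0         # m: "several kilometres" of sinking

mass = density * (4.0 / 3.0) * math.pi * radius**3
energy = mass * g * h

print(f"mass   ~ {mass:.1e} kg")   # prints "mass   ~ 1.6e+15 kg"
print(f"energy ~ {energy:.1e} J")  # prints "energy ~ 4.6e+19 J"
```

Both numbers land on the orders of magnitude quoted in the answer: roughly $10^{15}$ kg of rock releasing a few times $10^{19}$ J.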
"domain": "astronomy.stackexchange",
"id": 5309,
"tags": "asteroids, size, impact, modeling"
} |
about polarization of light | Question: In a circuits and electronics course, I remember an experiment that shows the polarization of a wave as Lissajous figures. I am wondering, for a polarized laser, is there any way to visualize the polarization in a similar way? I tried using a beam splitter to split the (circularly) polarized light into two perpendicular beams, used photodetectors to receive the beams, and sent the signals into the X and Y channels of an oscilloscope. But the scope doesn't really show the Lissajous figure. So I am wondering if this is the right way to visualize the polarized light? Thanks
Answer: Circularly polarized light should create a circle on the oscilloscope, which is a type of Lissajous curve. Any polarization of light produces a Lissajous curve with the restriction that the two frequencies of the x and y parameters are the same. So the only possibilities are a circular, elliptical, and linear polarization.
If your oscilloscope doesn't show a circle for circularly polarized light, something has gone wrong in the experiment.
Edit: this does assume your laser is a simple harmonic wave, so that the electric field varies sinusoidally and we therefore have in-phase x and y components. Also, see polarization. | {
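A stdlib-only numeric check of that claim: with equal frequencies on both channels, a quarter-period phase offset traces a circle and zero offset traces a line (the specific phases below are illustrative):

```python
import math

for k in range(1000):
    t = 2.0 * math.pi * k / 1000.0
    x = math.cos(t)                     # X-channel signal
    y_circ = math.cos(t - math.pi / 2)  # Y-channel, quarter-period behind
    y_lin = math.cos(t)                 # Y-channel, in phase
    # Circular polarization: the scope point stays on the unit circle.
    assert abs(x * x + y_circ * y_circ - 1.0) < 1e-9
    # Linear polarization: the scope point stays on the line y = x.
    assert abs(y_lin - x) < 1e-9

print("equal frequencies: 90 deg offset -> circle, 0 deg offset -> line")
```

Any intermediate phase offset gives the remaining case, an ellipse — the full set of equal-frequency Lissajous curves.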
"domain": "physics.stackexchange",
"id": 5443,
"tags": "polarization"
} |
Best approach for waiting on an elevator door to open | Question:
Hello,
I am working on a ROS module that allows our robot to travel between two floors using an elevator.
The elevator uses a sliding door.
Currently navigation is good enough to get into position in front of the door and wait.
I was thinking I would do something like
Position in front of elevator (done)
Detect that the door is closed
Wait until the door opens
Enter the elevator/ turn to face the door (allow the door to close again)
.... And get out of the elevator/ Switch floor maps and so on.
My problem is steps 2) and 3).
Some of my questions:
I am unsure how to tell ROS that I want it to wait for the obstacle at position x,y to clear, then continue. <-- This is a big question for me (If you could explain how this is done I would be very grateful)
How do I keep the planner from quitting if the robot has to wait an hour for the door to open?
How do I keep the planner from trying to take alternate routes inside the elevator? (this may be a moot point as the robot's planner will only see one route into the elevator)
If this approach is just bad overall feel free to suggest another more robust version.
P.S.
This is 2D navigation/obstacle detection using a LIDAR sensor. I have a premade, accurate map of the environment that the robot is using AMCL to localize from. Knowing this, I am able to say location x,y is a door on the map.
Originally posted by goatfig on ROS Answers with karma: 13 on 2014-01-09
Post score: 1
Answer:
I guess the straight-forward approach is to not instruct the robot to drive into the elevator directly. Split that and have it drive to the door first, wait on your own, and then send the command to drive inside.
Originally posted by dornhege with karma: 31395 on 2014-01-10
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by goatfig on 2014-01-10:
Right now I can already do that. I drive up infront of the door.
Once I am in front of the door I am not sure how to tell ros that the "obstacle" in front of it is special. That the obstacle will clear by itself. And then to wait for the obstacle to clear.
Comment by goatfig on 2014-01-10:
For instance I want to know how to do:
DoorPoint = x,y // point on map that is the entry point of the door
While (DoorPoint == occupied) { wait in one place}
// Door has opened
Set new navigation goal inside elevator.
Comment by goatfig on 2014-01-10:
Sorry for all the comments...
Can I just look at the costmap at the point DoorPoint, and when that is clear, go ahead and set the next navigation goal inside the elevator?
Does this sound like a reasonable way to use ROS, or am I "hacking" at the problem by using the costmap?
Comment by dornhege on 2014-01-12:
Yes, looking at the costmap is the correct way as that is what move_base will do to decide if it can execute or not.
Comment by goatfig on 2014-01-13:
Thank you, this is what I will do then. Is there a "correct" way to interface with the costmap? For example, if I want to ask whether there are obstacles in a 5 meter radius around the robot's current position? (Basically I am looking for helper methods to deal with the complexity of the HUGE costmap)
Comment by dornhege on 2014-01-13:
Best check out the costmap documentation. Those distances are somewhat convolved in the costs. | {
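Here is how the "look at the costmap at DoorPoint" idea from the comments can be sketched. The index arithmetic follows the nav_msgs/OccupancyGrid layout (row-major data, world origin at info.origin); the topic name in the trailing comment is an assumption that depends on your particular move_base configuration:

```python
def cell_value(grid, x, y):
    """Return the occupancy value at world coordinates (x, y).

    `grid` mimics nav_msgs/OccupancyGrid: it needs .info.resolution,
    .info.origin.position.{x,y}, .info.width and .data (row-major,
    values -1 for unknown or 0..100 for cost).
    """
    col = int((x - grid.info.origin.position.x) / grid.info.resolution)
    row = int((y - grid.info.origin.position.y) / grid.info.resolution)
    return grid.data[row * grid.info.width + col]


def door_is_clear(grid, door_x, door_y, free_below=50):
    # Unknown cells (-1) count as "not clear"; the 50 threshold is arbitrary.
    return 0 <= cell_value(grid, door_x, door_y) < free_below


# In a node, roughly (topic name depends on your move_base setup):
#   rospy.Subscriber("/move_base/global_costmap/costmap",
#                    OccupancyGrid, callback)
# and inside the callback, check door_is_clear(msg, DOOR_X, DOOR_Y)
# before sending the next goal inside the elevator.
```

Checking a small neighbourhood of cells around DoorPoint instead of a single cell makes the test robust against localization jitter.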
"domain": "robotics.stackexchange",
"id": 16626,
"tags": "ros"
} |
Do Sulfuric Acid and Calcium Hydroxide Completely Neutralize One Another? | Question: I just completed a unit test at school in which there was a simple fill in the blanks question which asked:
Would 1 mol of sulfuric acid and 1 mol calcium hydroxide in water form an acidic, basic, or neutral solution?
The answer I wrote was neutral.
My reasoning was that sulfuric acid ($\ce{H2SO4}$) has two protons it can donate, and calcium hydroxide ($\ce{Ca(OH)2}$) has two protons it can accept. However, after the test I realized that $\ce{H2SO4}$ dissociates into $\ce{HSO4-}$, which is a weak acid and won't dissociate completely.
After some research I have consistently found that the neutralization equation is written as: $\ce{Ca(OH)2 + H2SO4 -> CaSO4 + 2H2O}$
Do sulfuric acid and calcium hydroxide completely neutralize one another? Or was I wrong to say that the resulting solution is neutral.
Could someone please explain why I am right or wrong?
Answer: The first proton transfer is simple enough and you have no doubts.
$$\ce{H2SO4 + Ca^2+ + 2 OH- -> HSO4- + Ca^2+ + OH- + H2O}\tag{1}$$
Now, you wonder whether hydrogensulphate is a strong enough acid to combine with the remaining hydroxide. Well, hydrogensulphate may be a weak acid but hydroxide is a strong base. So even the second step will go to completion:
$$\ce{HSO4- + Ca^2+ + OH- + H2O -> SO4^2- + Ca^2+ + 2 H2O}\tag{2}$$
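One way to quantify "weak acid, but the strong base wins" is to compute the equilibrium constant of step (2) as $K_a(\ce{HSO4-})/K_w$. A quick sketch, where the $\mathrm{p}K_a \approx 1.99$ for $\ce{HSO4-}$ and $\mathrm{p}K_w = 14$ (at 25 °C) are assumed textbook values, not quantities from the question:

```python
# Back-of-the-envelope check that step (2) goes to completion: its
# equilibrium constant is Ka(HSO4-) / Kw, because subtracting the water
# autoionization from the HSO4- dissociation yields reaction (2).
Ka_HSO4 = 10 ** -1.99    # HSO4-  <=>  H+ + SO4^2-   (assumed pKa ~ 1.99)
Kw      = 10 ** -14.0    # H2O    <=>  H+ + OH-      (pKw = 14 at 25 C)
K_step2 = Ka_HSO4 / Kw   # HSO4- + OH-  <=>  SO4^2- + H2O
print(f"K = {K_step2:.1e}")   # ~1e12, overwhelmingly product-favoured
```

With $K \sim 10^{12}$, essentially no hydroxide survives, which is exactly the "goes to completion" claim above.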
Frankly, it does not end there. Calcium and sulphate form a salt with a very low solubility so this salt will precipitate out of solution.
$$\ce{SO4^2- + Ca^2+ + 2 H2O -> CaSO4 v + 2 H2O}\tag{3}$$
So the only thing you are left with in reasonable amounts is water and a precipitate. This, of course, is a neutral solution.
However, you don’t really always have to go that far. Check if there are strong acids and bases (don’t forget hydroxides), cancel them away and then check what remains. If you had done that, you would have noticed that there are as many hydroxide ions as acidic protons, overall giving an approximately neutral solution. | {
"domain": "chemistry.stackexchange",
"id": 9334,
"tags": "acid-base"
} |
Why does the star product satisfy the "Bopp Shift relations": $f(x,p)\star g(x,p)=f(x+\frac{i}{2}\partial_p,p-\frac{i}{2}\partial_x) g(x,p)$? | Question: In (Curtright, Fairlie, Zachos 2014), the authors mention (Eq. (14) in this online version) the following relation, known as "Bopp shifts":
$$f(x,p)\star g(x,p)=f\left(x+\frac{i}{2}\partial_p,p-\frac{i}{2}\partial_x\right) g(x,p),\tag1$$
where the $\star$-product is defined as
$$\star\equiv\exp\left[ \frac{i}{2}(\partial_x^L\partial_p^R - \partial_p^L \partial_x^R)\right],\tag2$$
and I'm denoting with $\partial_i^L,\partial_i^R$ the partial derivative $\partial_i$ applied to the left or right, respectively.
I'm trying to get a better understanding of where this comes from. As far as I understand, $f$ and $g$ are regular functions here (usually Wigner functions I suppose), so $f\star g$ should produce another "regular" function.
If this is so, what do the derivatives in the argument of $f$ mean exactly?
If I were to simply apply (2) to $f\star g$, I would naively get the following expression:
$$f\star g =
\sum_{s=0}^\infty \frac{(i/2)^s}{s!} \sum_{k=0}^s (-1)^k \binom{s}{k}
(\partial_x^{s-k}\partial_p^k f) (\partial_x^k \partial_p^{s-k}g). \tag3
$$
How is this compatible with (1)?
In fairness, if I were to very handwavily apply (2) to (1) without being too careful, I could think of $\partial_p^R$ and $\partial_x^R$ in the exponential as $c$ numbers, and the $\star$ operator as only acting on $f$, so that $\frac{i}{2}\partial_x^L\partial_p^R$ would be the operator enacting the translation $x\mapsto x+\frac{i}{2}\partial_p$, and similarly for the other term in the exponential. At a purely formal level this seems to make sense, but more concretely I'm not sure what the expression (1) is even supposed to represent, and how it is consistent with (3).
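For what it's worth, the series expansion of (2), including the binomial coefficient $\binom{s}{k}$ that expanding the exponential generates, can be checked symbolically against the standard Moyal results $x\star p = xp + i\hbar/2$ and $p\star x = xp - i\hbar/2$. A minimal sympy sketch with $\hbar = 1$:

```python
import sympy as sp

x, p = sp.symbols('x p', real=True)

def dd(h, nx, npp):
    """Apply d/dx nx times and d/dp npp times."""
    for _ in range(nx):
        h = sp.diff(h, x)
    for _ in range(npp):
        h = sp.diff(h, p)
    return h

def star(f, g, order=6):
    """Moyal star product f * g via the series expansion of (2), hbar = 1.
    For polynomials the series terminates, so a finite `order` is exact."""
    out = 0
    for s in range(order + 1):
        for k in range(s + 1):
            out += ((sp.I / 2) ** s / sp.factorial(s) * sp.binomial(s, k)
                    * (-1) ** k * dd(f, s - k, k) * dd(g, k, s - k))
    return sp.expand(out)

print(star(x, p))               # x*p + I/2
print(star(p, x))               # x*p - I/2
print(star(x, p) - star(p, x))  # I, the canonical commutator with hbar = 1
```

The same routine applied to higher-degree polynomials reproduces the noncommutative but associative structure of the star product.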
Answer: TL;DR: The underlying basic identity behind the Bopp shift is a Taylor expansion, which amounts to a translation/shift, $$e^{\hat{A}\partial_x}f(x)~=~f(x+\hat{A})\tag{A}$$
Here we assume the operator $\hat{A}$ does not depend on $x$.
Sketched proof:
$$\begin{align} (f\star g)(x,p)~=~&\left. e^{\frac{i\hbar}{2}(\partial_p\partial_{x^{\prime}}-\partial_x\partial_{p^{\prime}})}f(x^{\prime},p^{\prime})g(x,p)\right|_{x^{\prime}=x,p^{\prime}=p}\cr
~\stackrel{(A)}{=}~&\left. f\left(x^{\prime}+\frac{i\hbar}{2}\partial_p,p^{\prime}-\frac{i\hbar}{2}\partial_x\right)g(x,p)\right|_{x^{\prime}=x,p^{\prime}=p}
\cr
~=~& f\left(x+\frac{i\hbar}{2}\partial_p,p-\frac{i\hbar}{2}\partial_x\right)g(x,p).
\end{align}\tag{B}$$
$\Box$ | {
"domain": "physics.stackexchange",
"id": 71302,
"tags": "quantum-mechanics, operators, quantum-optics, phase-space, wigner-transform"
} |
How is a unidirectional lamina transversely isotropic? | Question:
What I don't understand specifically is this: if there happened to be more fibers in the $x_2$ direction than in the $x_3$ direction, wouldn't that make the material properties in those directions different? That would violate the transverse isotropy.
I'm thinking that the opposite ought to be true. The material properties along the fibers should be the same and different in the transverse directions.
I would appreciate it if the answer was given in such as way that it relates physically to the situation.
Answer: You have to distinguish between a microscopic and a macroscopic point of view. The diagram is only a symbol for a lot of fibers evenly distributed in the $x_2-x_3$ plane. The material itself is transverse isotropic. But if you are creating a flat-bar with this material, then the flat-bar as a whole is clearly not transverse isotropic anymore. It would be orthotropic.
For example, steel is isotropic (microscopic scale) but an I-beam made of steel (macroscopic scale) isn't. | {
"domain": "physics.stackexchange",
"id": 28642,
"tags": "material-science, continuum-mechanics, solid-mechanics"
} |
See behind the black hole | Question: Why, in this video, does the second black hole appear to change size, looking larger the farther away it gets? How can you see behind it? http://www.youtube.com/watch?v=ENd8Sz0AFOk
Answer: Black holes bend light passing them and this means they act as a lens. The phenomenon is called gravitational lensing. The way black holes bend light is different to the way a conventional lens, for example in a magnifying glass, bends light and as a result there can be some very odd visual effects. In this case the lensing by the black hole at the front is magnifying the black hole at the back and distorting it into an Einstein ring.
Working out exactly what the gravitational lensing does is exceedingly complicated. If anyone is interested some details of the calculation shown in this image are described in this paper. | {
"domain": "physics.stackexchange",
"id": 27257,
"tags": "black-holes, spacetime, curvature, gravitational-lensing"
} |
Strength of a welded steel gate with vertical bars vs. crossed diagonal bars | Question: Looking for some hints on how to create a rough estimate for the following problem.
Given two steel gates with the same dimensions and the same material, i.e. everything is the same. The only difference is that the middle parts have different structures.
When applying some force to the top, the gate will start to be more and more deformed, and at some force the gate will touch the ground at the place where the blue arrow points.
I'm looking for a rough estimate of how much more force is needed for the second gate - i.e. how much more "sturdy" is the second gate.
I really don't need any exact calculation, but probably will need some material data, so:
common steel thin-walled beam (25 mm x 25 mm, 2 mm wall thickness)
each joint point is welded; we can simplify and assume that the welds are exactly as strong as the material itself
the suspension points can hold infinite force
and any other possible simplification - this isn't rocket science, just the settling of an evening debate with a friend.
Answer: As grfrazee said, you won't know for sure until you do a finite element analysis. I was intrigued by this question as a colleague and I got into a discussion about this. While we both agreed the diagonal bracing would be better at resisting deflection, we wondered by what factor it would be better.
We were really curious so we settled the debate and did a quick structural analysis on SkyCiv Structural 3D (can try for free for one month if anyone is wondering). It took around an hour to set up both gates and analyze them mainly because we had to generate the node positions from scratch. Anyways here are the results of the linear static analysis which take into account the assumptions and simplifications you made. We applied a 5 kN POINT LOAD at both F1 and F2 and made each support a pin support at the locations you specified. Note that in the 3D colored results the deflection is 12X greater than the actual deflection of the gate in both scenarios - it is exaggerated so you can see the deflected shape of the gates.
Gate #1
$\text{y-deflection at the bottom-left of the gate} = 31.74\text{ mm}$
$\text{Max total deflection} = 32.10\text{ mm}$
Gate #2
$\text{y-deflection at the bottom-left of the gate} = 7.84\text{ mm}$
$\text{Max total deflection} = 7.55\text{ mm}$
Diagonal bracing (Gate #2) is clearly the winner. So when both gates are subjected to the same load it looks like Gate #2 resists deflection better (i.e. is more stiff) by a factor of 4.25.
Some more interesting points:
There's a pretty high bending stress at that top right support in both scenarios ~ 350 MPa.
The analysis didn't take into account self-weight of the gates.
Also let me add that there looks to be a scaling issue with the diagonal grid you have drawn, because when I modeled it I found that there were far fewer points than what was suggested by your diagram. I ensured that the parallel spacing between each rhombus was 300 mm. This means the diagonal of each rhombus is roughly 424 mm. Your gate is 3300 mm in length, so around 8 rhombi should fit across your gate in the x-direction, but you've drawn around 12. Just thought I'd let you know. | {
"domain": "engineering.stackexchange",
"id": 363,
"tags": "structural-engineering, beam, stresses, reinforcement"
} |
Did the Spectr-R space-based radio telescope use on-board accelerometer to measure non-gravitational acceleration for baseline correction? | Question: This answer to Why is space-based VLBI scattering sub-structure "Hopefully, a new promising tool to reconstruct the true image of observed background target(s)"? summarizes the contribution of the RadioAstron mission, a VLBI collaboration of radio telescopes using the Spectr-R space-based 10 m dish in high Earth orbit to produce "very-VLBI" observations.
Figure 2 of Gwinn et al. (2014): Discovery of Substructure in the Scatter-Broadened Image of Sgr A* shown below shows projected baseline distances out past 260,000 km!
What's notably different in space-based VLBI is that some of the radio telescopes are not rotating with the Earth but are basically doing their own thing, flying through space unattached to a (mostly) rigid planet.
For orbits at this distance it's absolutely necessary to consider gravity from the Earth, Moon and Sun to reconstruct even short segments of trajectory. Luckily present day ephemerides make this possible, and combining those with fringe optimization algorithms (example) one can get a good idea how to build a baseline trajectory for an observation.
However, large spacecraft are also subject to non-gravitational forces that also affect their trajectory.
An accelerometer will not register gravitational orbital perturbations as it's subject to the same gravity as the rest of the spacecraft, but things like photon pressure from the Sun which is quite difficult to model accurately can in principle be directly measured in real time with a sufficiently sensitive accelerometer.
From http://www.asc.rssi.ru/radioastron/ I found On the optimization of the RadioAstron mission by using advanced observing methods at ground radio telescopes and tracking stations, and the advantages of using on-board H-maser frequency standard and on-board accelerometer (Astro Space Center, Moscow, June 2003). 5. High accuracy orbit determination and anomalous acceleration nicely describes the fringe-fitting technique and concerns about residual accelerations due to both
solar wind pressure
solar photon pressure
Note that a residual error of only 2 millimeters over the coherent integration time (typically of order 1000 seconds) results in a 10% loss of fringe visibility!
High accuracy orbit determination and anomalous acceleration
[...]SuperSTAR accelerometer (AM) recently developed and tested by ONERA provides an accuracy of $10^{-10}$ m/s² along all three axes of the spacecraft [7]. The evaluations presented above have shown that solar pressure, solar wind (variable in strength and direction) especially inside the magnetosphere, and evaporation of gas from the spacecraft will cause SRT accelerations in the range of $10^{-10}$ to $10^{-8}$ m/s².
Conclusion: AM will provide a possibility to reduce considerably the effects of errors in SRT acceleration thus increasing coherent integration time from several minutes to several hours when new reference-sources observations could be done. AM will also help to decrease time of fringe search at the correlator because of smaller values in uncertainty of delay and fringe rate.
Kellerman, K.I., Vermeulen, R.C., Zensus, J.A., & Cohen, M.H., AJ, 115, 1295-1318, 1998.
and later
Conclusion
On-board H-maser frequency standard and high accuracy on-board accelerometer included into the scientific payload of RadioAstron mission will permit us to increase the coherent integration time up to 5-30 minutes at the correlator before fringe detection. This will be resulted in 2-5 times improvement in sensitivity by increasing the coherence time up to 5-30 minutes. To reach these potential figures we propose advanced observing methods using the measurements of the atmospheric path delay variations by the monitoring of 22 GHz water vapor line emission along a line of sight to the observing source (WLM) and/or by using reference radio telescope located at high mountain. Additional gain in sensitivity can be obtained by applying self-calibration in fringe-fitting procedure during image reconstruction.
As it is known from regular ground VLBI observations, maximum coherence time at 22 GHz is about 80 seconds. WLM observing technique or/and usage of reference radio telescope on high mountain (HMRT) will increase the integration time by 2-3 times. On-board H-maser frequency standard will also provide the possibility to increase the integration time by 2-3 times. On-board accelerometer will provide necessary accuracy of orbit determination to realize potential maximum integration time by 2-3 and to simplify fringe search at the correlator.
Here it is, my question:
This is all written in 2003 and in future tense. Was the accelerometer actually used in data analysis throughout Spektr-R's long-lived VLBI mission, or were non-gravitational acceleration models eventually used which were likely to be smoother than noisy accelerometer data?
Answer: A "tentative answer".
I did not find any indication that the accelerometer was actually installed on this satellite. On the contrary, I found a number of articles on improving the accuracy of the satellite's position determination with mathematical methods. For example, try googling the string "Ballistic and navigation support for “Spektr-R” spacecraft".
1. In Russian:
The paper describes developed models and techniques for orbit determination and prediction of a spacecraft whose motion undergoes significant perturbations due to variable solar radiation pressure and occasional firings of stabilization system thrusters. For comparison, the orbit of the spacecraft has been determined and predicted in several ways using real tracking and on-board data.
To obtain additional information about the disturbances from the unloading of the flywheel motors and the variable light pressure, we will use onboard measurements, including star-sensor data on the spacecraft's orientation in space, the operating parameters of the stabilization thrusters, as well as the rotation speeds of the flywheel motors.
https://lppm3.ru/files/journal/XXX/MathMontXXX-Borovin.pdf
In English: "DETERMINATION AND PREDICTION OF ORBITAL PARAMETERS OF THE RADIOASTRON MISSION"
Direct solar radiation pressure (SRP) and operation of stabilization thrusters during unloadings of reaction wheels are the major uncertainties that affect the Radioastron. Both effects have significant impact on the orbit and cannot be ignored since accurate orbit is vital for correct processing of interferometric observations. This paper introduces developed direct SRP model, which allows to calculate both acceleration and torque due to impacting solar radiation. The spacecraft is not equipped with accelerometers, but a telemetry of the reaction wheels can be used to measure perturbing torque and to obtain additional information about unknown parameters of the SRP model. The paper shows how the SRP model along with consideration of unloadings significantly improves the accuracy of the orbit.
https://issfd.org/ISSFD_2014/ISSFD24_Paper_S18-5_zakhvatkin.pdf
Determination and Prediction of Orbital Parameters of the Radioastron Mission
Summary
Adjustable Radioastron SRP model was developed and tested.
Parameters of the SRP model were estimated by using both the motion of the center of mass and the motion around the center of mass.
Determined orbits are successfully used for correlation of the Radioastron observations.
An unloading prediction approach, important for future Sun-Earth L2 missions (Spektr-RG, Millimetron) based on the same platform, was tested on the Radioastron data
http://www.kiam1.rssi.ru/pubs/20140509_ISSFD24_Zakhvatkin.pdf
Navigation Support for the RadioAstron Mission
A developed method of determination of orbital parameters allows one to estimate, along with orbit elements, some additional parameters that characterize solar radiation pressure and perturbing accelerations due to unloadings of reaction wheels. A parameterized model of the perturbing action of solar radiation pressure on the spacecraft motion is described (this model takes into account the shape, reflecting properties of surfaces, and spacecraft attitude). Some orbit determination results are presented, obtained by the joint processing of radio measurements of slant range and Doppler, laser range measurements used to calibrate the radio measurements, optical observations of right ascension and declination, and telemetry data on spacecraft thrusters' firings during an unloading of reaction wheels
This fact places the following demands on the accuracy of the determination of spacecraft motion parameters: in position, Δr = ±600 m; in velocity, ±2 cm/s; and in acceleration, Δw = ±10⁻⁸ m/s²
http://www.asc.rssi.ru/radioastron/publications/articles/cr_2014,52,342.pdf | {
"domain": "astronomy.stackexchange",
"id": 5707,
"tags": "radio-astronomy, space-telescope, radio-telescope, vlbi"
} |
Electromagnetism as curvature similar to gravity? | Question: My knowledge of Physics is very surface-level, so sorry if this question doesn't make sense. Before Einstein's theory of relativity, people would think of the gravitational field as a force exerted by massive objects. Now we have a new interpretation where mass simply curves space-time and it is the movement of objects through the curved spacetime that causes the illusion of a force.
Now, the electromagnetic force looks a lot like the gravitational force, with both following an inverse-square law. Is it possible, then, to interpret the electromagnetic force as the consequence of movement through a curved spacetime as well? Is there any attempt at this already?
Answer: Yes indeed, as pointed out by many responders in their comments here. Kaluza found that if Einstein's equations for general relativity in three dimensions of space and one of time were rewritten to include an extra spatial dimension, he could obtain from that Maxwell's equations for electrodynamics, which Einstein considered a significant breakthrough. Later, Klein proposed the idea that the extra dimension had the shape of a circle and was compactified, rendering it invisible to us.
It was also demonstrated that by rewriting Maxwell's equations in five dimensions instead of four, one could mathematically extract the equations of general relativity from the result.
However, these formulations all contained features that did not square with the real world and which could not be fixed in the math. So as appealing as the idea may have been initially, Kaluza-Klein theory wound up being a dead end as far as unifying gravity with electromagnetism- but the basic idea of "extra" compactified dimensions with complex topologies lives on in string theory. | {
"domain": "physics.stackexchange",
"id": 81652,
"tags": "electromagnetism, general-relativity, curvature"
} |
Mathematical explanation of quantum teleportation | Question: I am now studying quantum teleportation. I get what the process is like but I'm wondering why it happens this way.
You've got two particles A and B whose wavefunctions are entangled. You also have a third particle C, which is the one you want to teleport. You get C entangled with either of the two, and the final result is that the wavefunction of one of A and B (depending on which you entangled C with) becomes the same as C's wavefunction was.
Why does it occur this way (mathematically explained, if possible)? Why does the wavefunction of A or B become that of C?
What is the equation explaining this process (addition of wavefunctions, maybe)?
Answer: If I had to summarize quantum teleportation in one equation, I would write
$$
|\psi\rangle \otimes |\beta_{00}\rangle
= \displaystyle \frac{1}{2} \sum_{z,x \in \{0,1\}}
|\beta_{zx}\rangle \otimes X^x Z^z |\psi\rangle
$$
You can verify this by explicitly writing out all terms on the right-hand side. Here $|\psi\rangle = \alpha |0\rangle + \beta |1\rangle$ is an arbitrary qubit state that we want to teleport, $X$ and $Z$ are Pauli matrices and
$$
|\beta_{zx}\rangle
= \dfrac{|0,x\rangle + (-1)^z |1,\bar{x}\rangle}{\sqrt{2}}
$$
are the Bell states for $z, x \in \{0,1\}$ where $\bar{x}$ is the negation of $x$.
Intuitively, this equation says that if Alice has $|\psi\rangle$ and a half of a maximally entangled state $|\beta_{00}\rangle$, then that's the same as her having one of the four Bell states $|\beta_{zx}\rangle$ and Bob having $X^x Z^z |\psi\rangle$. You can think of Bob's state as a "corrupt" version of $|\psi\rangle$. If Alice measures her two qubits in Bell basis, she gets bits $x$ and $z$ which she sends to Bob, who can apply $Z^z X^x$ to recover $|\psi\rangle$. The coefficient 1/2 represents the fact that Alice's bits $x$ and $z$ are uniformly random.
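For what it's worth, the identity can be verified numerically in a few lines. A minimal numpy sketch (the random $|\psi\rangle$ and the seed are arbitrary, since the identity holds for every qubit state):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])   # Pauli X
Z = np.array([[1, 0], [0, -1]])  # Pauli Z
ket = lambda b: np.eye(2)[b]     # computational basis vector |b>

def bell(z, x):
    """|beta_zx> = (|0,x> + (-1)^z |1,~x>) / sqrt(2)"""
    return (np.kron(ket(0), ket(x))
            + (-1) ** z * np.kron(ket(1), ket(1 - x))) / np.sqrt(2)

# an arbitrary normalized qubit |psi> = alpha|0> + beta|1>
rng = np.random.default_rng(7)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)

lhs = np.kron(psi, bell(0, 0))   # |psi> (x) |beta_00>
rhs = 0.5 * sum(
    np.kron(bell(z, x),
            np.linalg.matrix_power(X, x) @ np.linalg.matrix_power(Z, z) @ psi)
    for z in (0, 1) for x in (0, 1))

print(np.allclose(lhs, rhs))     # True: both sides agree for any |psi>
```

Replacing the random state with any hand-picked $\alpha, \beta$ leaves the check unchanged.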
Teleportation becomes less mysterious if you know that the one-time pad is its classical analogue. The one-time pad lets you use shared randomness to transmit private random bits over a public channel in the same way as teleportation lets you use shared entanglement to transmit quantum bits over a classical channel.
More precisely, the analogy goes as follows. Alice has a private bit and shares a perfectly correlated pair of random bits with Bob ($00$ and $11$ with probability 1/2). If she XORs her private bit with her half of the shared random bit and sends the result publicly to Bob, he can recover Alice's private bit by XORing the received bit with his half of the shared random bit. Teleportation works very similarly, except we have to send two classical bits, since we are teleporting a pure qubit state which has two degrees of freedom.
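The classical analogue just described fits in a few lines; a one-bit sketch:

```python
import secrets

# One-bit one-time pad: a pre-shared random bit lets Alice send a
# private bit over a public channel without revealing it.
private = 1                       # Alice's private bit
shared = secrets.randbits(1)      # both parties hold a copy of this bit
public = private ^ shared         # what Alice broadcasts
recovered = public ^ shared       # Bob undoes the mask with his copy
print(recovered == private)       # True for either value of the shared bit
```

Teleportation does the same with two classical bits and one shared Bell pair in place of one shared random bit.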
You can see my blog post for more details: Teleportation and superdense coding. | {
"domain": "physics.stackexchange",
"id": 37367,
"tags": "quantum-mechanics, wavefunction, quantum-teleportation"
} |
High pressure collapsing time of Water, Air or any medium by size and speed of waves | Question: An explosion in water (or air) creates high pressure and pushes the medium outward, and after that the cavity collapses back.
How can one calculate the time or speed of the collapse, if the wave propagation speed of water or air and the diameter of the explosion are known?
Answer: The Rayleigh-Plesset equation models the bubble dynamics under the assumption that the bubble remains spherical in an incompressible liquid. The Wikipedia page on the derivation expands on the treatment in the book Brennen, Christopher E. (1995). Cavitation and Bubble Dynamics. Oxford University Press. ISBN 978-0-19-509409-1. This book has a corrected edition on Amazon.
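If viscosity, surface tension, and the gas inside the cavity are neglected, the Rayleigh-Plesset equation reduces to Rayleigh's classical empty-cavity problem, whose collapse time has the closed form $t \approx 0.915\,R_0\sqrt{\rho/\Delta p}$. A small Python sketch; the 1 mm radius and 1 atm pressure difference are illustrative assumptions, not values from the question:

```python
import numpy as np
from scipy.integrate import quad

# Rayleigh's empty-cavity collapse time, a limiting case of the
# Rayleigh-Plesset equation (no viscosity, surface tension, or interior gas).
rho = 1000.0      # water density, kg/m^3
dp  = 101325.0    # ambient minus internal pressure, Pa (assumed 1 atm)
R0  = 1e-3        # initial cavity radius, m (assumed 1 mm)

# Energy balance gives Rdot^2 = (2 dp / 3 rho) * ((R0/R)^3 - 1), so the
# collapse time is t = R0 * sqrt(3 rho / 2 dp) * Int_0^1 db / sqrt(b^-3 - 1).
integral, _ = quad(lambda b: 1.0 / np.sqrt(b ** -3 - 1.0), 0.0, 1.0)
t_collapse = R0 * np.sqrt(1.5 * rho / dp) * integral

print(t_collapse)                          # ~9.1e-5 s for a 1 mm cavity
print(0.91468 * R0 * np.sqrt(rho / dp))    # Rayleigh's textbook prefactor
```

The collapse time scales linearly with the initial radius, so doubling the cavity size doubles the time, which gives a quick way to estimate other geometries.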
The Wikipedia page Cavitation as well as its See Also section of links have a comprehensive treatment on this topic. | {
"domain": "physics.stackexchange",
"id": 74862,
"tags": "fluid-dynamics, pressure, acoustics, explosions"
} |
kinect failed openni_launch | Question:
Hello everyone,
I'm using Ubuntu 11.04 and ROS Electric.
I'm trying to use my Kinect to "play" with rviz, but when I run "roslaunch openni_launch openni.launch" I get a lot of errors:
[ERROR] [1338444270.129531457]: Failed to load nodelet [/camera/depth/metric_rect] of type [depth_image_proc/convert_metric]: Failed to load library /opt/ros/electric/stacks/openni_kinect/depth_image_proc/lib/libdepth_image_proc.so. Make sure that you are calling the PLUGINLIB_REGISTER_CLASS macro in the library code, and that names are consistent between this macro and your XML. Error string: Cannot load library: /opt/ros/electric/stacks/perception_pcl/pcl/lib/libpcl_visualization.so.1.1: symbol __cxa_pure_virtual, version libmysqlclient_16 not defined in file libmysqlclient.so.16 with link time reference
[FATAL] [1338444270.129984353]: Service call failed!
[camera/depth/metric_rect-8] process has died [pid 13516, exit code 255].
log files: /home/ken/.ros/log/6cbc0d00-aae6-11e1-a5f0-001422ad9fc2/camera-depth-metric_rect-8*.log
[ERROR] [1338444270.508399698]: Failed to load nodelet [/camera/depth/metric] of type [depth_image_proc/convert_metric]: Failed to load library /opt/ros/electric/stacks/openni_kinect/depth_image_proc/lib/libdepth_image_proc.so. Make sure that you are calling the PLUGINLIB_REGISTER_CLASS macro in the library code, and that names are consistent between this macro and your XML. Error string: Cannot load library: /opt/ros/electric/stacks/perception_pcl/pcl/lib/libpcl_visualization.so.1.1: symbol __cxa_pure_virtual, version libmysqlclient_16 not defined in file libmysqlclient.so.16 with link time reference
[FATAL] [1338444270.508836324]: Service call failed!
[camera/depth/metric-9] process has died [pid 13525, exit code 255].
log files: /home/ken/.ros/log/6cbc0d00-aae6-11e1-a5f0-001422ad9fc2/camera-depth-metric-9*.log
[ERROR] [1338444271.056389706]: Failed to load nodelet [/camera/depth/points] of type [depth_image_proc/point_cloud_xyz]: Failed to load library /opt/ros/electric/stacks/openni_kinect/depth_image_proc/lib/libdepth_image_proc.so. Make sure that you are calling the PLUGINLIB_REGISTER_CLASS macro in the library code, and that names are consistent between this macro and your XML. Error string: Cannot load library: /opt/ros/electric/stacks/perception_pcl/pcl/lib/libpcl_visualization.so.1.1: symbol __cxa_pure_virtual, version libmysqlclient_16 not defined in file libmysqlclient.so.16 with link time reference
[FATAL] [1338444271.056824167]: Service call failed!
[camera/depth/points-10] process has died [pid 13532, exit code 255].
log files: /home/ken/.ros/log/6cbc0d00-aae6-11e1-a5f0-001422ad9fc2/camera-depth-points-10*.log
[ERROR] [1338444272.950893886]: Failed to load nodelet [/camera/driver] of type [openni_camera/driver]: Failed to load library /opt/ros/electric/stacks/openni_kinect/openni_camera/lib/libopenni_nodelet.so. Make sure that you are calling the PLUGINLIB_REGISTER_CLASS macro in the library code, and that names are consistent between this macro and your XML. Error string: Cannot load library: /opt/ros/electric/stacks/perception_pcl/pcl/lib/libpcl_visualization.so.1.1: symbol __cxa_pure_virtual, version libmysqlclient_16 not defined in file libmysqlclient.so.16 with link time reference
[FATAL] [1338444272.953776888]: Service call failed!
[ERROR] [1338444273.133584291]: Failed to load nodelet [/camera/register_depth_rgb] of type [depth_image_proc/register]: Failed to load library /opt/ros/electric/stacks/openni_kinect/depth_image_proc/lib/libdepth_image_proc.so. Make sure that you are calling the PLUGINLIB_REGISTER_CLASS macro in the library code, and that names are consistent between this macro and your XML. Error string: Cannot load library: /opt/ros/electric/stacks/perception_pcl/pcl/lib/libpcl_visualization.so.1.1: symbol __cxa_pure_virtual, version libmysqlclient_16 not defined in file libmysqlclient.so.16 with link time reference
[FATAL] [1338444273.137115493]: Service call failed!
[camera/driver-2] process has died [pid 13487, exit code 255].
log files: /home/ken/.ros/log/6cbc0d00-aae6-11e1-a5f0-001422ad9fc2/camera-driver-2*.log
[ERROR] [1338444273.468703498]: Failed to load nodelet [/camera/depth_registered/metric_rect] of type [depth_image_proc/convert_metric]: Failed to load library /opt/ros/electric/stacks/openni_kinect/depth_image_proc/lib/libdepth_image_proc.so. Make sure that you are calling the PLUGINLIB_REGISTER_CLASS macro in the library code, and that names are consistent between this macro and your XML. Error string: Cannot load library: /opt/ros/electric/stacks/perception_pcl/pcl/lib/libpcl_visualization.so.1.1: symbol __cxa_pure_virtual, version libmysqlclient_16 not defined in file libmysqlclient.so.16 with link time reference
[camera/register_depth_rgb-11] process has died [pid 13539, exit code 255].
log files: /home/ken/.ros/log/6cbc0d00-aae6-11e1-a5f0-001422ad9fc2/camera-register_depth_rgb-11*.log
[FATAL] [1338444273.473603163]: Service call failed!
[ERROR] [1338444273.642909828]: Failed to load nodelet [/camera/points_xyzrgb_depth_rgb] of type [depth_image_proc/point_cloud_xyzrgb]: Failed to load library /opt/ros/electric/stacks/openni_kinect/depth_image_proc/lib/libdepth_image_proc.so. Make sure that you are calling the PLUGINLIB_REGISTER_CLASS macro in the library code, and that names are consistent between this macro and your XML. Error string: Cannot load library: /opt/ros/electric/stacks/perception_pcl/pcl/lib/libpcl_visualization.so.1.1: symbol __cxa_pure_virtual, version libmysqlclient_16 not defined in file libmysqlclient.so.16 with link time reference
[FATAL] [1338444273.646029612]: Service call failed!
[camera/depth_registered/metric_rect-13] process has died [pid 13550, exit code 255].
log files: /home/ken/.ros/log/6cbc0d00-aae6-11e1-a5f0-001422ad9fc2/camera-depth_registered-metric_rect-13*.log
[ERROR] [1338444273.819169199]: Failed to load nodelet [/camera/depth_registered/metric] of type [depth_image_proc/convert_metric]: Failed to load library /opt/ros/electric/stacks/openni_kinect/depth_image_proc/lib/libdepth_image_proc.so. Make sure that you are calling the PLUGINLIB_REGISTER_CLASS macro in the library code, and that names are consistent between this macro and your XML. Error string: Cannot load library: /opt/ros/electric/stacks/perception_pcl/pcl/lib/libpcl_visualization.so.1.1: symbol __cxa_pure_virtual, version libmysqlclient_16 not defined in file libmysqlclient.so.16 with link time reference
[FATAL] [1338444273.819648350]: Service call failed!
[camera/points_xyzrgb_depth_rgb-15] process has died [pid 13556, exit code 255].
log files: /home/ken/.ros/log/6cbc0d00-aae6-11e1-a5f0-001422ad9fc2/camera-points_xyzrgb_depth_rgb-15*.log
[ERROR] [1338444273.988537941]: Failed to load nodelet [/camera/disparity_depth] of type [depth_image_proc/disparity]: Failed to load library /opt/ros/electric/stacks/openni_kinect/depth_image_proc/lib/libdepth_image_proc.so. Make sure that you are calling the PLUGINLIB_REGISTER_CLASS macro in the library code, and that names are consistent between this macro and your XML. Error string: Cannot load library: /opt/ros/electric/stacks/perception_pcl/pcl/lib/libpcl_visualization.so.1.1: symbol __cxa_pure_virtual, version libmysqlclient_16 not defined in file libmysqlclient.so.16 with link time reference
[FATAL] [1338444273.993103764]: Service call failed!
[camera/depth_registered/metric-14] process has died [pid 13554, exit code 255].
log files: /home/ken/.ros/log/6cbc0d00-aae6-11e1-a5f0-001422ad9fc2/camera-depth_registered-metric-14*.log
[ERROR] [1338444274.165960486]: Failed to load nodelet [/camera/disparity_depth_registered] of type [depth_image_proc/disparity]: Failed to load library /opt/ros/electric/stacks/openni_kinect/depth_image_proc/lib/libdepth_image_proc.so. Make sure that you are calling the PLUGINLIB_REGISTER_CLASS macro in the library code, and that names are consistent between this macro and your XML. Error string: Cannot load library: /opt/ros/electric/stacks/perception_pcl/pcl/lib/libpcl_visualization.so.1.1: symbol __cxa_pure_virtual, version libmysqlclient_16 not defined in file libmysqlclient.so.16 with link time reference
[FATAL] [1338444274.173001454]: Service call failed!
[camera/disparity_depth-16] process has died [pid 13562, exit code 255].
log files: /home/ken/.ros/log/6cbc0d00-aae6-11e1-a5f0-001422ad9fc2/camera-disparity_depth-16*.log
[camera/disparity_depth_registered-17] process has died [pid 13565, exit code 255].
log files: /home/ken/.ros/log/6cbc0d00-aae6-11e1-a5f0-001422ad9fc2/camera-disparity_depth_registered-17*.log
I have run rostopic and I can see some /camera/depth or camera/rgb topics, so I tried to see something by running rosrun image_view image_view image:=/camera/depth/image but the window is empty.
I have run rviz and added an "image" display and chosen the topic /camera/rgb/image_color but it says "No image received".
I may be doing something wrong; I only started reading the ROS tutorials 2 weeks ago.
Hope I'm not saying anything crazy.
Cheers
Brioche
Originally posted by Brioche on ROS Answers with karma: 115 on 2012-05-30
Post score: 3
Original comments
Comment by marco.puni on 2012-05-30:
I had the same errors yesterday and now I fixed. I'm not a master, and I use fuerte and ubuntu 12.04 so I don't know if it works also for you, but I noticed that for me works only kinect for xbox, kinect for pc tell me no devices connected.
Comment by marco.puni on 2012-05-30:
then in my case I also see, don't know why, that works just once, then I have same errors more or less, and i have to restart ubuntu and works again.. try this, if not, somebody more expert than me maybe can help..
Comment by Brioche on 2012-06-03:
I have just upgraded my ubuntu and now it works fine ( rviz is lagging but it works!!)
Answer:
Hi,
Try openni_camera
roslaunch openni_camera openni_node.launch
It works for me.
Thanks,
Karthik
Originally posted by karthik with karma: 2831 on 2012-05-30
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 9606,
"tags": "kinect, rviz, openni-launch"
} |
Emulating boolean circuits using addition and multiplication (mod 5) | Question: I'm trying to use gates that do addition and multiplication modulo 5 to emulate logic gates.
Assuming false and true are mapped to 0 and 1 respectively (with 2, 3, and 4 being invalid), I figured out I can map the operations like this:
a and b -> a*b (mod 5)
a or b -> 2*(a+b)*(a+b+2) (mod 5)
I was wondering if there was a simpler approach.
For the application I have in mind, a toy example of secure multi party computation using secret sharing, I haven't shown/discovered/figured-out yet if it's safe to re-use private values. If I have to recompute a, b, and a+b two times in order to do an or, costs would be exponential in the length of the circuit. (I'm only using tiny circuits so that's not a big deal, but it would be interesting to know if it was just a non-issue via a clever transformation.)
Answer: There are many other solutions. For example, even keeping your True and False, you can have $a\lor b = 1 - (1-a)(1-b) = a+b-ab = a + (1-a)b$ and so on. | {
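Both the questioner's formulas and the simpler ones can be checked exhaustively over the valid inputs {0, 1}. A quick Python verification (function names are just for illustration):

```python
# Verify the mod-5 emulations of AND and OR over the valid inputs {0, 1}.
P = 5

def and_gate(a, b):
    return (a * b) % P

def or_gate_original(a, b):
    # The questioner's formula: 2*(a+b)*(a+b+2) mod 5
    return (2 * (a + b) * (a + b + 2)) % P

def or_gate_simpler(a, b):
    # The answer's formula: a + b - a*b mod 5
    return (a + b - a * b) % P

for a in (0, 1):
    for b in (0, 1):
        assert and_gate(a, b) == (a and b)
        assert or_gate_original(a, b) == (a or b)
        assert or_gate_simpler(a, b) == (a or b)
```

Note that the simpler form $a+b-ab$ needs only one addition, one multiplication, and one subtraction, which matters when every arithmetic gate is costly.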
"domain": "cs.stackexchange",
"id": 1991,
"tags": "circuits, modular-arithmetic"
} |
Proof that Energy Momentum Tensor of Scalar Field Theory satisfies Weak Energy Condition | Question: It's a question on Sean Carroll's Spacetime and Geometry, where we are supposed to prove that the energy momentum tensor of scalar field theory satisfies Weak Energy Condition (WEC). The energy momentum tensor is
$$T_{\mu\nu}=\nabla_{\mu}\phi\nabla_{\nu}\phi-\frac{1}{2}g_{\mu\nu}\left(\nabla_{\lambda}\phi\nabla^{\lambda}\phi+V(\phi)\right),$$
and the condition for WEC is
$$T_{\mu\nu} U^\mu U^\nu \geq 0,$$
where $U^\mu$ is an arbitrary non-spacelike vector(=timelike or null).
But how can this be proved when there are no known properties about the scalar field variable $\phi$ and potential $V(\phi)$?
Answer: I don't have the book, so can't check out his assumptions, so this might not quite answer your question, since you're asking about arbitrary 4 vectors $U^{\mu}$, but I'll offer it in case some of it is useful. In proving the weak energy condition (which is part way to proving the dominant energy condition), the 4 vectors in question are timelike. If this is the case, I might try the following:
Assume a signature (- + + + )
Starting with
$$T_{\mu\nu}=\nabla_{\mu}\phi\nabla_{\nu}\phi-\frac{1}{2}g_{\mu\nu}\left(\nabla_{\lambda}\phi\nabla^{\lambda}\phi+V(\phi)\right)$$
if $U^{\mu}$ is timelike and future pointing, then at any given point we can work in an orthonormal frame for which the components are $U^{\mu}=(1,0,0,0)$
If we then demonstrate the positivity of $T_{\mu\nu}U^{\mu}U^{\nu}$ in that frame, then it will hold in any frame since it's a scalar.
So, plugging in the components of $U^{\mu}$, we get
$$T_{\mu\nu}U^{\mu}U^{\nu}=(\nabla_{0}\phi)^{2}+\frac{1}{2}(g^{\lambda\rho}\nabla_{\lambda}\phi\nabla_{\rho}\phi+V(\phi))$$
$$=\frac{1}{2}\left((\nabla_{0}\phi)^{2}+\delta^{ij}\nabla_{i}\phi\nabla_{j}\phi+V(\phi)\right)$$
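Spelling out the contraction in that last step (this is just the algebra made explicit, using the stated (−+++) signature with $g^{00}=-1$):

```latex
g^{\lambda\rho}\nabla_{\lambda}\phi\nabla_{\rho}\phi
  = -(\nabla_{0}\phi)^{2} + \delta^{ij}\nabla_{i}\phi\nabla_{j}\phi ,
\qquad\text{so}\qquad
T_{\mu\nu}U^{\mu}U^{\nu}
  = (\nabla_{0}\phi)^{2}
  + \tfrac{1}{2}\left(-(\nabla_{0}\phi)^{2}
  + \delta^{ij}\nabla_{i}\phi\nabla_{j}\phi + V(\phi)\right)
```

Every surviving term is manifestly non-negative once $V(\phi)\ge 0$ and $\phi$ is real.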
So provided $V(\phi)$ is positive and $\phi$ is a real field (which it surely is otherwise you'd have had complex conjugates in the energy momentum tensor), then in that frame, at that point $T_{\mu\nu}U^{\mu}U^{\nu}$ is positive.
But this is just the weak energy condition; you'd have to work a bit harder to prove the dominant energy condition. | {
"domain": "physics.stackexchange",
"id": 2188,
"tags": "general-relativity, energy, spacetime, field-theory"
} |
State of $N$-body system after time $t$ (under gravity and inelastic collision) | Question: Given the centers of gravity of $n$ spherical bodies of unit mass, $p_1$, $p_2$, ...$p_n$, and assuming perfectly inelastic collisions, how does one find the location of the bodies after time $t$?
Note: If $t$ is long enough, I think all bodies will agglomerate to a single body of mass $n$ (but I'm not sure whether the location of this single body will be the same as the center of gravity of the initial state).
Just to clarify, everything has zero velocity at time $t = 0$.
Answer: There very likely isn't an analytic way to show where the $n$ bodies will be after time $t$, depending on how large $n$ is. The best you can hope for is doing a numerical simulation.
Essentially you are evolving two equations (one to get the new position & one to get the new velocity) for each of the particles in the system. That is, for each body you solve,
$$\mathbf{x}(t+\mathrm{d}t)\simeq\mathbf{x}(t)+\mathbf{v}(t)\,\mathrm{d}t\\
\mathbf{v}(t+\mathrm{d}t)\simeq\mathbf{v}(t)+\mathbf{a}(t)\,\mathrm{d}t$$
where $\mathbf{a}(t)=\mathbf{F}(t)/m$ with $\mathbf{F}(t)$ the gravitational force, $\mathbf{x}$ and $\mathbf{v}$ the position & velocity vectors and $\mathrm{d}t$ an increment in time. More advanced techniques are called symplectic integration techniques, which are generally recommended for $n$-body simulations because they conserve energy whereas other common methods (e.g., Euler scheme) do not.
Assuming you create a class Body that stores the mass, position & velocity of each particle, your time-evolution algorithm, using a symplectic method, would be something like,
// initializes bodies, t_end, dt, etc.
while (t <= t_end) {
    // Compute accel. at the current positions
    for (auto& body : bodies) {
        accel[body] = calc_force(body, bodies) / body.mass;
    }
    // Compute new position
    for (auto& body : bodies) {
        body.position += (body.velocity + 0.5 * accel[body] * dt) * dt;
    }
    // Compute accel. at the new positions
    for (auto& body : bodies) {
        accel2[body] = calc_force(body, bodies) / body.mass;
    }
    // Compute new velocity from the average of the two accelerations
    for (auto& body : bodies) {
        body.velocity += 0.5 * (accel[body] + accel2[body]) * dt;
    }
    t += dt;
}
(I have other details here and here; see also search: verlet or search: symplectic integrator for more)
The function calc_force can be optimized a bit since $F_{i,j}=-F_{j,i}$, but you get the idea. Now note that this is still pretty slow since you have a double sum, which is an $\mathcal{O}(n^2)$ operation. There are faster techniques that can drop this to $\mathcal{O}(n\log n)$ (e.g., Barnes-Hut), but this may be a bit more than what you want, depending on the size of $n$.
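That $F_{i,j}=-F_{j,i}$ symmetry roughly halves the work: loop over unordered pairs only and accumulate the force into both bodies at once. A minimal Python sketch of this idea (not the pseudocode above; the small softening term eps is an assumption I've added to avoid division by zero at close encounters):

```python
import math

def calc_forces(positions, masses, G=1.0, eps=1e-9):
    """Pairwise gravitational forces in 3D, exploiting F_ij = -F_ji.

    positions: list of (x, y, z) tuples; masses: list of floats.
    Returns a list of [fx, fy, fz] force vectors, one per body.
    """
    n = len(positions)
    forces = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):  # unordered pairs only
            dx = positions[j][0] - positions[i][0]
            dy = positions[j][1] - positions[i][1]
            dz = positions[j][2] - positions[i][2]
            r2 = dx * dx + dy * dy + dz * dz + eps
            f = G * masses[i] * masses[j] / r2   # force magnitude
            inv_r = 1.0 / math.sqrt(r2)
            for k, d in enumerate((dx, dy, dz)):
                fk = f * d * inv_r
                forces[i][k] += fk   # force on i, pointing toward j
                forces[j][k] -= fk   # Newton's third law
    return forces
```

A convenient sanity check is that the forces sum to zero component-wise, i.e. total momentum is conserved by construction.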
If you want the particles to aggregate, then you'd need to account for this during the integration. For instance, if the distance between any two bodies is below some threshold, then you need to figure out how the merge algorithm (e.g., do collision analysis to get new velocity & do simple addition for masses) should be inserted in the evolution algorithm (probably check after each time step, but could also do every few steps). You may want to see if there are any questions on collision detection on GameDev.SE to help you on that, as I don't think there are any here on Physics.SE. | {
"domain": "physics.stackexchange",
"id": 57085,
"tags": "newtonian-mechanics, newtonian-gravity, orbital-motion, computational-physics, many-body"
} |
Why L is defined as L = SPACE$( \log n)$ instead of L = SPACE$(\log^2 n)$ or L = SPACE$(\sqrt n)$? | Question: $L$ is the class of languages that are decideable in logarithmic space on a deterministic Turing machine. In other words,
L = SPACE$( \log n)$
But why $\log n$, instead of $\log^2 n$ or $\sqrt n$? This is what I found in the Theory of Computation book by Michael Sipser, Chapter 8:
Logarithmic space is just large enough to solve a number of interesting computational problems, and it has attractive mathematical properties such as robustness even when mathematical model and input encoding method change.
I am not able to completely understand how these mathematical properties and the input encoding method are related to the definition of the complexity class L.
Answer: The complexity class $\mathsf{L}$ satisfies many desirable properties:
It is closed under concatenation and iteration.
The corresponding function class is closed under composition.
The same complexity class is obtained for any number of work tapes.
It is resilient under "reasonable" input transformations with polynomial blowup (see below).
It can accommodate most known NP-hardness reductions.
It is a subset of $\mathsf{P}$.
It supports pointers indexing the input.
What is an input transformation? Consider the case of graphs. We can encode a graph either as an adjacency matrix of as adjacency lists. We can convert between the two in logspace, and so a graph problem which is in $\mathsf{L}$ under one of them is also in $\mathsf{L}$ under the other.
Logspace is the natural space analog of $\mathsf{P}$, considering the containment $\mathsf{SPACE}(f(n)) \subseteq \mathsf{TIME}(2^{f(n)})$.
It also shows up in the refined Schaefer's dichotomy theorem, as the lowest non-trivial complexity class. | {
"domain": "cs.stackexchange",
"id": 8562,
"tags": "complexity-theory, space-complexity"
} |
Model to handle all the data/networking from Foursquare API in IOS | Question: I posted not too long ago, asking for tips/improvements I could use on my models. I have changed quite a lot, so I thought I'll give it another go.
My model is supposed to handle all the data I get from the Foursquare API:
venueService.h
typedef void (^TNVenueServiceCompletionBlock)(UIImage *image, NSError *error);
@class TNVenueImageData;
@interface TNVenueService : NSObject
@property (nonatomic) NSString *clientID;
@property (nonatomic) NSString *clientSecret;
@property (nonatomic) NSString *radius;
@property (nonatomic) NSArray *venueObject;
@property (nonatomic) NSArray *imageData;
+ (instancetype)venueService;
- (NSDictionary *)mainCategoryKeys;
- (NSArray *)mainCategories;
- (void)performVenueLocationRequest:(CLLocation *)location identifier:(NSString *)identifier;
- (void)performPhotoDetailsRequest:(NSString *)identifier;
- (void)downloadImage:(TNVenueImageData *)imageData completionBlock:(TNVenueServiceCompletionBlock)completion;
@end
venueService.m
@implementation TNVenueService
+ (instancetype)venueService
{
TNVenueService *venueService = [[TNVenueService alloc] init];
venueService.clientID = @"MY_CLIENT_ID";
venueService.clientSecret = @"MY_CLIENT_SECRET";
venueService.radius = @"2000"; //start-radius
return venueService;
}
- (NSURLRequest *)URLRequestWithFormat:(NSString *)attributes, ...
{
va_list arguments;
va_start(arguments, attributes);
NSString *urlPath = [[NSString alloc] initWithFormat:attributes arguments:arguments];
va_end(arguments);
NSURL *url = [NSURL URLWithString:urlPath];
NSURLRequest *request = [NSURLRequest requestWithURL:url];
return request;
}
- (void)performVenueLocationRequest:(CLLocation *)location identifier:(NSString *)identifier
{
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_async(queue, ^{
NSError *error = nil;
NSURLResponse *response = nil;
NSData *data = [NSURLConnection sendSynchronousRequest:[self buildRequestForVenueLocation:location identifier:identifier]
returningResponse:&response error:&error];
if (!error) {
NSDictionary *responseDictionary = [NSJSONSerialization JSONObjectWithData:data options:kNilOptions error:&error];
NSMutableArray *tempArray = [NSMutableArray array];
for (NSDictionary *data in [responseDictionary valueForKeyPath:@"response.venues"]) {
TNVenueObject *venueObject = [[TNVenueObject alloc] initVenueWithName:data[@"name"]
location:data[@"location"]
contact:data[@"contact"]
identifier:data[@"id"]];
[tempArray addObject:venueObject];
}
NSSortDescriptor *distanceSortDiscriptor = [NSSortDescriptor sortDescriptorWithKey:@"distance"
ascending:YES
selector:@selector(localizedStandardCompare:)];
[tempArray sortUsingDescriptors:@[distanceSortDiscriptor]];
_venueObject = [NSArray arrayWithArray:tempArray];
[[NSNotificationCenter defaultCenter] postNotificationName:@"didFinishStoringVenueObject" object:nil];
} else {
NSLog(@"%@", error.localizedDescription);
}
});
}
- (NSDictionary *)mainCategoryKeys
{
return @{@"Arts & Entertainment" : @"4d4b7104d754a06370d81259",
@"Colleges & Universities" : @"4d4b7105d754a06372d81259",
@"Events" : @"4d4b7105d754a06373d81259",
@"Food" : @"4d4b7105d754a06374d81259",
@"Nightlife Spots" : @"4d4b7105d754a06376d81259",
@"Outdoors & Recreation" : @"4d4b7105d754a06377d81259",
@"Professional & Other Places" : @"4d4b7105d754a06375d81259",
@"Residences" : @"4e67e38e036454776db1fb3a",
@"Shops & Services" : @"4d4b7105d754a06378d81259",
@"Travel & Transport" : @"4d4b7105d754a06379d81259"};
}
- (NSArray *)mainCategories
{
return @[@"Arts & Entertainment",
@"Colleges & Universities",
@"Events",
@"Food",
@"Nightlife Spots",
@"Outdoors & Recreation",
@"Professional & Other Places",
@"Residences",
@"Shops & Services",
@"Travel & Transport"];
}
- (void)performPhotoDetailsRequest:(NSString *)identifier
{
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_async(queue, ^{
NSError *error = nil;
NSURLResponse *response = nil;
NSData *data = [NSURLConnection sendSynchronousRequest:[self buildRequestForVenuePhotoDetails:identifier] returningResponse:&response error:&error];
if (!error) {
NSDictionary *responseDictionary = [NSJSONSerialization JSONObjectWithData:data options:kNilOptions error:&error];
NSMutableArray *tempArray = [NSMutableArray array];
if ([[[responseDictionary valueForKeyPath:@"response.venue.photos.groups"] valueForKey:@"items"] count] > 0) {
for (NSDictionary *data in [[responseDictionary valueForKeyPath:@"response.venue.photos.groups"] valueForKey:@"items"][0]) {
TNVenueImageData *imageData = [[TNVenueImageData alloc] initWithPrefix:data[@"prefix"]
width:data[@"width"]
height:data[@"height"]
suffix:data[@"suffix"]
userInfo:data[@"user"]];
[tempArray addObject:imageData];
}
_imageData = [NSArray arrayWithArray:tempArray];
[[NSNotificationCenter defaultCenter] postNotificationName:@"didFinishStoringImageData" object:nil];
} else {
NSLog(@"No Images...");
}
} else {
NSLog(@"%@", error.localizedDescription);
}
});
}
- (void)downloadImage:(TNVenueImageData *)imageData completionBlock:(TNVenueServiceCompletionBlock)completion
{
NSError *error = nil;
NSURL *urlString = [NSURL URLWithString:[self constructImageURL:imageData]];
NSData *data = [NSData dataWithContentsOfURL:urlString options:NSDataReadingUncached error:&error];
__block UIImage *image;
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_async(queue, ^{
image = [UIImage imageWithData:data];
completion(image, error);
});
}
- (NSString *)getTodaysDate
{
NSDate *date = [NSDate date];
NSDateFormatter *formatter = [[NSDateFormatter alloc] init];
[formatter setDateFormat:@"yyyMMdd"];
return [formatter stringFromDate:date];
}
- (NSString *)constructImageURL:(TNVenueImageData *)imageData
{
return [NSString stringWithFormat:@"%@%@x%@%@", imageData.photoPrefix,
imageData.photoWidth, imageData.photoHeight, imageData.photoSuffix];
}
- (NSURLRequest *)buildRequestForVenueCategory
{
return [self URLRequestWithFormat:@"https://api.foursquare.com/v2/venues/categories?client_id=%@&client_secret=%@&v=%@",
_clientID, _clientSecret, [self getTodaysDate]];
}
- (NSURLRequest *)buildRequestForVenueLocation:(CLLocation *)location identifier:(NSString *)identifier
{
return [self URLRequestWithFormat:@"https://api.foursquare.com/v2/venues/search?ll=%f,%f&radius=%@&categoryId=%@&client_id=%@&client_secret=%@&v=%@", location.coordinate.latitude, location.coordinate.longitude, _radius, identifier, _clientID, _clientSecret, [self getTodaysDate]];
}
- (NSURLRequest *)buildRequestForVenuePhotoDetails:(NSString *)identifier
{
return [self URLRequestWithFormat:@"https://api.foursquare.com/v2/venues/%@?client_id=%@&client_secret=%@&v=%@",
identifier, _clientID, _clientSecret, [self getTodaysDate]];
}
@end
Any further help as to what can be improved once more, would be very helpful! :)
Answer: First, I'll focus on some of the easy, obvious things I see. I know the main point of this code is some of the networking stuff, but there are some other things I want to comment on first.
First and foremost, this line:
@class TNVenueImageData;
It's fine to declare the class in the .h and wait until the .m to import the file... but you must have clipped off some lines, because I don't see the actual import anywhere.
But at the end of the day, it doesn't actually hurt to import the file in the .h. After all, anyone who imports this file is likely going to also import the other file. Double importing doesn't hurt (it's only imported once, no matter how many times you try). And now if I'm using this class, I could just import venueService and not have to import the image data file, because the venue service file is importing it for me.
Next, what's this stuff exactly?
venueService.clientID = @"MY_CLIENT_ID";
venueService.clientSecret = @"MY_CLIENT_SECRET";
venueService.radius = @"2000"; //start-radius
Surely "MY_CLIENT_..." can't be the actual values here. But the method is simply venueService and takes no arguments, so these are definitely constant values. They're also public properties.
So, first of all, we can start by defining these as constants within the .m file.
Outside of the @interface or @implementation, up near the imports, define some constants:
static NSString * const kVenueServiceClientID = @"MY_CLIENT_ID";
static NSString * const kVenueServiceClientSecret = @"MY_CLIENT_SECRET";
static NSString * const kVenueServiceRadius = @"2000";
Now if you need to use these values within the class, simply call the constant variable.
If you want to use the variables outside the class, you can create readonly properties for each.
In the .h, set up your properties as such:
@property (nonatomic,readonly) NSString *clientID;
@property (nonatomic,readonly) NSString *clientSecret;
@property (nonatomic,readonly) NSString *radius;
Now in the .m, overwrite the default accessor for each:
- (NSString *)clientID {
return kVenueServiceClientID;
}
And repeat for the other two.
Now, as for these methods: mainCategories and mainCategoryKeys, I have a handful of problems.
The first problem is that the method names are confusing from an Objective-C standpoint. mainCategoryKeys returns a dictionary of values... while mainCategories returns an array which confusingly enough happens to be the same values you'd get if you called allKeys on the dictionary returned by mainCategoryKeys.
I assume the strange values in the value part of the dictionary are keys used with the API.
So, first of all, let's see what we can do to fix the method names here.
The method that returns a dictionary should be named accordingly, so how about mainCategoryKeyDictionary, which makes it clearer that the values are actually keys to something else.
As for the other method, some name options include mainCategoryNames, or mainCategoryDescriptions, but honestly, the fact that an identical array can be produced simply by calling allKeys on the dictionary makes me think this method is kind of extraneous.
Now that we've got them renamed, there are still other problems. First, constants again. The same logic as above applies.
But the bigger issue I see here is that this should probably be a class level method rather than an instance method. In another language like Java, this might not even be a method, but just a public static variable. Objective-C doesn't offer class level variables, but there's a commonly used pattern to achieve the same effect in Objective-C. It looks like this:
+ (NSDictionary *)mainCategoryKeyDictionary {
@synchronized(self) {
static NSDictionary *keyDictionary;
if (!keyDictionary) {
keyDictionary = @{ /*stuff*/ };
}
return keyDictionary;
}
}
Where /*stuff*/ is replaced with the actual keys and values of course.
This is more efficient in terms of speed and memory usage.
As for the methods that do the actual networking, NSNotificationCenter is the 3rd best option out of 3 ways I know for doing what you're trying to do with it.
If you are going to use NSNotificationCenter, there's two important things you need to do with the notification name. First, use the reverse domain format for the actual name.
com.example.yourNotificationsActualName
The same naming scheme, by the way, needs to be applied to any threads you create. You don't want to accidentally register for some other notification that coincidentally has the same name you picked... and reverse domain naming is a good way to avoid that.
Second of all, these notification names absolutely must be defined constants available to anyone that has imported the .h for this file. How else will they know what notifications to register for when using your code?
But ultimately, these are two reasons why the other options are better.
The other options I know of are completion blocks and the delegate-protocol pattern.
First of all, based on your previous question, I know you have some experience with completion blocks. I'm personally not a huge fan of completion blocks, but they're there, and despite my personal bias, I consider them equally as good as protocol-delegate pattern.
The protocol-delegate pattern is something you're already familiar with in Objective-C even if you don't realize it.
A protocol-delegate setup consists of three parts.
The protocol. There are lots of already existing protocols in Cocoa. One of the most common ones everyone gets familiar with is UITableViewDelegate and UITableViewData source. A protocol is simply a listing of required and/or optional method and/or property declarations. Objects can choose to conform to a protocol, and those that do are guaranteed to respond to any of the required methods and can potentially respond to the optional ones.
Here's an example of what a protocol definition looks like:
@protocol FooBarDelegate <NSObject>
@required - (void)fooBaringDidCompleteWithData:(NSData *)fooBarData;
@optional - (void)fooBaringDidFailWithError:(NSError *)error;
@end
So this is a protocol definition. The protocol name is FooBarDelegate, and this protocol conforms to the NSObject protocol. This simply means that this protocol includes EVERYTHING in the NSObject protocol as well as the listed stuff.
The protocol also includes two methods, one required, one optional.
The delegator. The delegator is simply any class that has a reference to an object conforming to the protocol. The delegator has to know about the protocol. In most cases (personal experience), the delegator's file is the one that defines the protocol, though it's perfectly acceptable for it to be defined in another file and imported into the delegator's file.
Typically, the delegator will have a property called delegate, and it's typically (although not always, depending on what the delegate is actually needed for) defined simply as id (conforming to the protocol). Although you can be more specific as to what type the object is (UIViewController conforming to a specific protocol isn't completely uncommon). Example:
@property (nonatomic, weak) id<FooBarDelegate> delegate;
The weak is VERY important here. Using strong typically creates a retain cycle because typically the delegate has strong reference to the delegator. And weak is better than assign because weak will auto-zero preventing us from calling methods to a deallocated object.
The delegate. In the @interface part of the delegate (either .h or .m), we have to declare our delegate as conforming to the protocol.
Example:
@interface MyFooBarDelegateExampleClass : NSObject <FooBarDelegate>
Now, Xcode will warn us that we're missing any method that FooBarDelegate defines as required, so we implement the required methods.
So, we've got an idea of the three parts. Let's go back to the delegator.
When some operation is complete and we're ready to let the delegate know, here's how we do it.
[self.delegate fooBaringDidCompleteWithData:myFooBarData];
We've called a method in that class and sent it the data we built in this class. It's now free to do whatever it wants with the data.
Now that was a required method. If it's an optional method, we need to do some checking first:
if ([self.delegate respondsToSelector:@selector(fooBaringDidFailWithError:)]) {
[self.delegate fooBaringDidFailWithError:error];
}
Now, if the delegate has implemented this method, we've sent it the error information and it can do with it what it wants. If it didn't implement the method, we just move on.
Getting comfortable with the delegate-protocol pattern is very important in Objective-C and will help you think about a lot of problems in a much different way that can make lots of things much easier for you. | {
"domain": "codereview.stackexchange",
"id": 7642,
"tags": "objective-c, ios"
} |
combine two features in dataset? | Question: I have a data set containing the number of security gaps and the level of that gap for a specific website.
Now suppose I have 2 features in this data set, the first feature is the number of a specific security gap and the second feature is the risk of this gap for a specific website.
How can I combine these two features into one?
What is the best way to apply feature engineering to these features?
Thanks
Answer: So basically you have three values per security gap:
The type of gap (i.e. its label)
The risk of that gap for that specific website. As I understand it this value is different for each website, even if the type of the security gap is the same.
The number of occurrences of the gap
One reasonable way to combine the features is to make a feature vector where the indices are the type of the gap, and the values are the risk of the gap multiplied with the occurrences. However, this would mean information loss, due to a gap with risk 1 occurring 3 times being identical to a gap with risk 3 occurring once.
A different way to combine the features is just to make a feature vector that consists of the gap risk values and the gap occurrences appended (for a feature vector with a length of 2 * num(type_of_gaps)). Assuming a model with a fully connected structure, the pattern between the connected values may be determined by the model itself during learning.
A third way would be to make a feature vector filled with tuples, which is not really feature engineering as much as it is restructuring the data into a manageable data type. However, you can combine this, when flattened, with an algorithm using a 1D convolution layer with kernel size 2 and stride 2 in order to process the tuples together, instead of all the values separately.
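To make the first two options concrete, here is a small Python sketch (all names and numbers are invented for illustration):

```python
def combine_weighted(risks, counts):
    """Option 1: one value per gap type - risk times occurrence count.

    risks and counts are equal-length lists indexed by gap type.
    Note the information loss: (risk=1, count=3) and (risk=3, count=1)
    both map to 3.
    """
    return [r * c for r, c in zip(risks, counts)]

def combine_concat(risks, counts):
    """Option 2: keep both signals - vector of length 2 * num_gap_types."""
    return list(risks) + list(counts)

risks = [1.0, 3.0, 0.5]   # per-website risk of each gap type
counts = [3, 1, 0]        # occurrences of each gap type

weighted = combine_weighted(risks, counts)   # [3.0, 3.0, 0.0]
concat = combine_concat(risks, counts)       # [1.0, 3.0, 0.5, 3, 1, 0]
```

The concatenated form preserves the distinction that the weighted form throws away, at the cost of doubling the input dimension.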
"domain": "ai.stackexchange",
"id": 3472,
"tags": "machine-learning, datasets, data-preprocessing, feature-maps"
} |
Why does S11 seem to be independent of transmission media in this RF transmission experiment? | Question: Hi friends, I am starting to characterize a system in which we plan to measure the response of a sample of liquid to radio waves. In order to do this I am exciting two antennas with the ports of an Agilent two-port network analyzer as shown in the image below.
So far I see no significant difference in the data when the container of liquid is present or absent between the two antennas. The data shows very high values for S11 and S22, and low values for S12 and S21, mostly independent of the sample presence.
May I know if I should expect some difference due to the transmission media, or what I might test or change in the setup in order to become more sensitive to the sample?
Answer: I've added your images back to your question, slightly modified, I hope you don't mind. Now you might consider those arrows I've added, then go back and check to see if you are using ~950 MHz antennas (which are $\lambda/4$ and will resonate at integer multiples) and then ask if that's really the best thing to use. Away from resonance, they will naturally reflect most of the power.
You are mostly measuring the bandwidth of the antennas, except near the resonance where they can effectively radiate.
The network analyzer will measure the whole system, and so unless you use really good antennas that are nearly perfectly matched, mostly you will see reflections between the coax and the antenna connected to it.
An excellent, isolated antenna would dip down much deeper than -20 dB, but for a cheap WiFi antenna that's not bad, it means that roughly 99% of the power in the coax is leaving the driven antenna. However, at that point, still only 1% of the power is being received by the adjacent antenna and coupled into the other cable, so most of the power is probably going into free space.
As a first step:
What you can try is to focus on only the frequency where the antenna resonates naturally, which look like about 950 MHz. Scan only around there, with many small steps in frequency in order to make a smooth curve.
I think that if you use patience and study this narrow range, (say 800 to 1100 MHz or so) you might really see a difference between sample-in and sample-out.
However, the difference might be related to the effect of the dielectric sample on the antenna resonance, so you can also repeat the experiment with your sample near one antenna but away from the other; to the left of the left antenna, and to the right of the right antenna.
This may give you a better understanding of the limitations of your current set-up, and some ideas how to improve it. I'd recommend reading further as well.
Good luck, and have fun! | {
"domain": "physics.stackexchange",
"id": 54736,
"tags": "antennas, radio-frequency, network"
} |
How to get ROS2 parameter hosted by another Node | Question:
Hello,
I have been looking through the documentation and cannot find any means to get or set parameters of nodes other than the one that has declared the parameters. Is this by design, or is there a way to get the parameter with something like node.get_parameter(other_node_name, parameter_name)? I currently have parameters duplicated across configuration files because I can't figure out how to get the nodes to reach each other's configurations. Any help is greatly appreciated!
I am using ROS2 dashing.
Thanks
Originally posted by mequi on ROS Answers with karma: 111 on 2019-12-23
Post score: 3
Answer:
In ROS 2 parameters are available via service interfaces:
root@d0a03d7984eb:/# ros2 run demo_nodes_cpp listener &
root@d0a03d7984eb:/# ros2 service list
/listener/describe_parameters
/listener/get_parameter_types
/listener/get_parameters
/listener/list_parameters
/listener/set_parameters
/listener/set_parameters_atomically
In rclpy I believe there is no helper and you need to create a service client yourself and call it, similar to what is done in ros2 service
In rclcpp there are ParameterClient classes
on which you can call functions like get_parameters and it will handle the service calls
the use looks like:
auto parameters_client = std::make_shared<rclcpp::SyncParametersClient>(my_node, "remote_node_name");
while (!parameters_client->wait_for_service(1s)) {
if (!rclcpp::ok()) {
RCLCPP_ERROR(this->get_logger(), "Interrupted while waiting for the service. Exiting.");
rclcpp::shutdown();
}
RCLCPP_INFO(this->get_logger(), "service not available, waiting again...");
}
auto parameters = parameters_client->get_parameters({"remote_param1", "remote_param2"});
you can then access them:
std::stringstream ss;
// Get a few of the parameters just set.
for (auto & parameter : parameters)
{
ss << "\nParameter name: " << parameter.get_name();
ss << "\nParameter value (" << parameter.get_type_name() << "): " <<
parameter.value_to_string();
}
There is also a non-blocking / asynchronous version rclcpp::AsyncParametersClient
Originally posted by marguedas with karma: 3606 on 2020-01-25
This answer was ACCEPTED on the original site
Post score: 6
Original comments
Comment by lin404 on 2020-02-05:
@marguedas I tried it with rclpy, but no luck. Do you mind to give me a sample in python for this? Also, seems there is no SyncParametersClient ore AsyncParametersClient like method in rclpy ? Thank you!
Comment by marguedas on 2020-02-05:
Also, seems there is no SyncParametersClient ore AsyncParametersClient like method in rclpy ?
Yes, that is what I tried to say when saying "there is no helper and you need to create a service client yourself and call it". The fact that they don't exist makes the example more verbose than in C++.
Do you mind to give me a sample in python for this?
I replied on https://answers.ros.org/question/343322/ros2rclpy-how-to-set-parameter-hosted-by-another-node-via-service/ with a correction to your example
Comment by lin404 on 2020-02-05:
@marguedas Thanks for your help a lot!
Comment by marguedas on 2020-04-12:
@mequi Did this answer solved your question?
If not can you comment with what information is missing.
If yes can you please accept the answer by clicking the checkmark on the left. This will remove the question from the unanswered list
Comment by kmilo7204 on 2020-07-09:
Just additional information for anyone who needs it parameter.get_value();
Comment by rela on 2023-05-23:
I wanted to re-use infrastructure that expected a normal service client (not a rclcpp::SyncParametersClient), so I ended up building it like a service client to /my_node_name/get_parameters, called it with request->names.push_back("my_param_name"), and got the result from the response at response->values[0] (in my case response->values[0].string_value; check the size before accessing it). | {
"domain": "robotics.stackexchange",
"id": 34191,
"tags": "ros, ros2, parameters, rclpy, parameter-server"
} |
openni_camera compilation error | Question:
Hi,
I'm trying to compile openni_camera with rosmake, but an error occured :
XnCppWrapper.h : No such file or directory
I don't know what to do...
Thank you !
Originally posted by AnSooooo on ROS Answers with karma: 16 on 2013-01-09
Post score: 0
Answer:
I solved it by installing the OpenNI libraries and including the directory openni/build/openni/Include in the CMakeLists.txt of openni_camera.
Originally posted by AnSooooo with karma: 16 on 2013-01-14
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 12335,
"tags": "kinect, openni, openni-camera"
} |
How can I measure the frequency of vibration of a surface/object? | Question: I'd like to know if there is a solution that allows me to measure the frequency of vibration of a surface/object. Primarily I am seeking to improve my speaker/home audio setup. The vibrometers I've found (and can afford) can give me the amplitude of vibration, speed of vibration and acceleration of vibration. I want to be able to measure what frequencies an object is vibrating at, though. If anyone could help me out, either by pointing me towards tools that can meet my needs, or letting me know how to derive the frequency from other data, I'll be grateful.
Answer: An easy and cheaper way to do this is with a microphone (basically an uncalibrated vibration sensor!) and an oscilloscope, upon which you display the microphone output. The frequency of vibration can be extracted from the oscilloscope display by measuring the time between peaks of the waveform; the inverse of that time is the frequency of the vibration. | {
"domain": "physics.stackexchange",
"id": 68437,
"tags": "frequency, measurements, home-experiment, vibrations"
} |
Camera position w.r.t. basketball court | Question: I am new in Computer Vision. Suppose you have 8 points (players in a basketball court) in a 2D image (png photo), that is we know the pixel coordinates of each. I would like to compute camera position w.r.t. the field. We have the actual dimensions of the field. Also, I can write the equations of two or three transversal lines in the field that are parallels lines from top view. With this data can we find a position vector of the camera w.r.t. the field ?
Any idea where to read about this?
Answer: You said you could see lines, so I assume you could see their intersections. If you can assume that at least 4 fixed points of the court are always visible (the corners might be a good choice), this problem is similar to the 3D pose estimation problem, since you are essentially looking for a pose (position and orientation) of a known 3D object in a 2D image.
You might find it useful to read about POSIT, an iterative algorithm published in 1992, and there is also this algorithm from 1999. Both algorithms solve the 3D pose estimation problem. | {
"domain": "cs.stackexchange",
"id": 10466,
"tags": "computer-vision"
} |
How does CO2 contribute to global warming? | Question: CO2 is a common agent in fire extinguishers, which suppresses fire. How does this same chemical contribute to global warming when it also puts out fires?
Answer: It seems that you are wondering how a chemical that prevents heat (putting out a fire) can also cause heating (global warming). These two things happen through unrelated processes.
Fire extinguishers that use CO2 act by displacing oxygen. The CO2 doesn't directly interact with the fire but just takes up space so that oxygen cannot interact with the fire. There is a secondary effect that the CO2 exiting the extinguisher is cooler than the air it is displacing so it helps to remove heat from the fire (by putting it somewhere else).
These are reasons that CO2 extinguishers are not suitable for all kinds of fire suppression, particularly for fires that have oxidizers and do not rely on atmospheric oxygen for combustion.
The contribution to global warming from CO2 (and all other greenhouse gases) is a radiative effect. Every molecule has specific frequencies of electromagnetic radiation they absorb and emit. The earth (and all things) emit electromagnetic radiation based upon its surface temperature. This energy would be lost to space (cooling the earth) except that the greenhouse gases absorb radiation at the same wavelengths the earth emits at. This means that, e.g. CO2, intercepts this energy headed for space and instead absorbs it, causing the CO2 to warm up a little. This CO2 also emits radiation but will emit less than it absorbs (and some of its emission is back toward the earth, not space), so there is a net energy gain due to atmospheric GHG, which causes warming.
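The wavelength overlap behind this radiative effect can be made concrete with Wien's displacement law, lambda_peak = b/T; the temperatures below are rough textbook values I'm supplying for illustration, not figures from this answer:

```python
WIEN_B = 2.898e-3  # Wien displacement constant, metre-kelvin

def peak_wavelength_um(temperature_k):
    """Peak blackbody emission wavelength in micrometres."""
    return WIEN_B / temperature_k * 1e6

# The Sun (~5800 K) peaks in the visible (~0.5 um); Earth's surface (~288 K)
# peaks near 10 um, in the thermal infrared where CO2 and other greenhouse
# gases absorb strongly.
print(peak_wavelength_um(5800))  # ~0.5 um
print(peak_wavelength_um(288))   # ~10 um
```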
As you can see, the mechanisms that CO2 contributes to global warming and fire suppression are unrelated (radiative vs displacement) which is why this molecule can be used in these seemingly contradictory applications. | {
"domain": "earthscience.stackexchange",
"id": 517,
"tags": "climate-change"
} |
Angular momentum definition? | Question: The definition of linear momentum is this: Momentum is a vector quantity defined as the product of an object's mass, $m$, and its velocity, $\vec v$.
So according to that definition, the definition of angular momentum should be this: "Angular momentum is the product of the angular velocity of the body or system and its moment of inertia with respect to the rotation axis, and that is directed along the rotation axis".
But how can we define angular momentum as this:
"a pseudovector $\vec r \times \vec p$, the cross product of the particle's position vector $\vec r$ (relative to some origin) and its momentum vector $\vec p = m\vec v$"?
Answer:
So the definition of angular momentum should be this: "Angular momentum is the product of the angular velocity of the body or system and its moment of inertia with respect to the rotation axis, and that is directed along the rotation axis".
That's not a useful definition at all, because (i) it does not specify what this "moment of inertia" thing is, and (ii) it's not even correct in the general case.
To be more specific, the 'general case' means a rigid body with an arbitrary shape, in which case the moment of inertia is a full 3×3 matrix $I$, which relates the angular momentum $\vec L$ and the angular velocity $\vec \omega$ via a linear transformation,
$$\vec L=I\vec \omega\tag1$$
which in component language reads
$$L_k=\sum_{j=1}^3I_{kj}\omega_j.\tag 2$$
In certain specific frames (e.g. with axes along the axes of symmetry if those exist) the matrix $I_{kj}$ can be diagonal, but in general it's not.
Note, in particular, that having nonzero off-diagonal elements of $I_{kj}$ means that $\vec \omega$ and $\vec L$ need not be parallel. That is, the angular momentum is not always directed along the axis of rotation. (For a rotating free body of arbitrary shape, since $\vec L$ is conserved, this means that the instantaneous axis of rotation changes over time, which is again in line with how things are.)
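A tiny numeric check of Eq. (2), using made-up inertia values chosen only to show that $\vec L$ and $\vec \omega$ need not be parallel:

```python
# Arbitrary symmetric inertia matrix with nonzero off-diagonal terms (made up)
I = [[2.0, 1.0, 0.0],
     [1.0, 3.0, 0.0],
     [0.0, 0.0, 4.0]]
omega = [1.0, 0.0, 0.0]  # rotation purely about the x-axis

# L_k = sum_j I_kj * omega_j  (Eq. 2)
L = [sum(I[k][j] * omega[j] for j in range(3)) for k in range(3)]
print(L)  # [2.0, 1.0, 0.0] -- not parallel to omega = [1.0, 0.0, 0.0]
```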
But how can we define angular momentum as this: "a pseudovector $\vec r \times \vec p$, the cross product of the particle's position vector $\vec r$ (relative to some origin) and its momentum vector $\vec p = m\vec v$"?
As always with questions of the form "why do we define X as Y?", the answer is "because we can". More specifically, we can define anything we want, and the definition only 'sticks' if it's useful. The definition of $\vec L=\vec r\times\vec p$ for a free particle sticks because it is useful: it is still conserved for a system that's isolated or under the influence of a central force, and it can be used in broader contexts than just a body in pure rotational motion.
In addition, it reduces to the alternative definition in terms of a moment of inertia and an angular velocity $\omega$ when the latter exists (i.e. when the particle is in a rotational motion, defined by $\vec v=\vec \omega\times \vec r$, which requires that $\vec v\cdot\vec r=0$, which is not always the case), via standard manipulations which are found in any suitable classical mechanics textbook (e.g. Goldstein).
More generally, though, one should have a thorough understanding of the existing definitions before claiming that they 'should' be something else, particularly if that something else is incorrect. | {
"domain": "physics.stackexchange",
"id": 30304,
"tags": "newtonian-mechanics, angular-momentum, definition, rotational-kinematics"
} |
Python 3 calculator with tkinter | Question: I have been programming for a long time. Only recently have I decided to take a stab at Python (I should be working with C# as I am in school for it, but I don't care for Windows, long story).
I was on this site and it showed a source for a calculator. I took it, and put it in PyCharm and started to study. By the time I was done I had changed the source significantly. I had added keyboard binding and reduced a lot of the redundant code in it.
My question is simple: is this code that I wrote efficient from a python standard viewpoint?
# -*-coding: utf-8-*-
# !/usr/bin/python3.5
from tkinter import Tk, Button, Entry, END
import math
class Calc:
def getandreplace(self): # replace x, + and % to symbols that can be used in calculations
# we wont re write this to the text box until we are done with calculations
self.txt = self.e.get() # Get value from text box and assign it to the global txt var
self.txt = self.txt.replace('÷', '/')
self.txt = self.txt.replace('x', '*')
self.txt = self.txt.replace('%', '/100')
def evaluation(self, specfunc): # Evaluate the items in the text box for calculation specfunc = eq, sqroot or power
self.getandreplace()
try:
self.txt = eval(str(self.txt)) # evaluate the expression using the eval function
except SyntaxError:
self.displayinvalid()
else:
if any([specfunc == 'sqroot', specfunc == 'power']): # Square Root and Power are special
self.txt = self.evalspecialfunctions(specfunc)
self.refreshtext()
def displayinvalid(self):
self.e.delete(0, END)
self.e.insert(0, 'Invalid Input!')
def refreshtext(self): # Delete current contents of textbox and replace with our completed evaluatioin
self.e.delete(0, END)
self.e.insert(0, self.txt)
def evalspecialfunctions(self, specfunc): # Calculate square root and power if specfunc is sqroot or power
if specfunc == 'sqroot':
return math.sqrt(float(self.txt))
elif specfunc == 'power':
return math.pow(float(self.txt), 2)
def clearall(self): # AC button pressed on form or 'esc" pressed on keyboard
self.e.delete(0, END)
self.e.insert(0, '0')
def clear1(self, event=None):
# C button press on form or backspace press on keyboard event defined on keyboard press
if event is None:
self.txt = self.e.get()[:-1] # Form backspace done by hand
else:
self.txt = self.getvalue() # No need to manually delete when done from keyboard
self.refreshtext()
def action(self, argi: object): # Number or operator button pressed on form and passed in as argi
self.txt = self.getvalue()
self.stripfirstchar()
self.e.insert(END, argi)
def keyaction(self, event=None): # Key pressed on keyboard which defines event
self.txt = self.getvalue()
if any([event.char.isdigit(), event.char in '/*-+%().']):
self.stripfirstchar()
elif event.char == '\x08':
self.clear1(event)
elif event.char == '\x1b':
self.clearall()
elif event.char == '\r':
self.evaluation('eq')
else:
self.displayinvalid()
return 'break'
def stripfirstchar(self): # Strips leading 0 from text box with first key or button is pressed
if self.txt[0] == '0':
self.e.delete(0, 1)
def getvalue(self): # Returns value of the text box
return self.e.get()
def __init__(self, master): # Constructor method
self.txt = 'o' # Global var to work with text box contents
master.title('Calulator')
master.geometry()
self.e = Entry(master)
self.e.grid(row=0, column=0, columnspan=6, pady=3)
self.e.insert(0, '0')
self.e.focus_set() # Sets focus on the text box text area
# Generating Buttons
Button(master, text="=", width=10, command=lambda: self.evaluation('eq')).grid(row=4, column=4, columnspan=2)
Button(master, text='AC', width=3, command=lambda: self.clearall()).grid(row=1, column=4)
Button(master, text='C', width=3, command=lambda: self.clear1()).grid(row=1, column=5)
Button(master, text="+", width=3, command=lambda: self.action('+')).grid(row=4, column=3)
Button(master, text="x", width=3, command=lambda: self.action('x')).grid(row=2, column=3)
Button(master, text="-", width=3, command=lambda: self.action('-')).grid(row=3, column=3)
Button(master, text="÷", width=3, command=lambda: self.action('÷')).grid(row=1, column=3)
Button(master, text="%", width=3, command=lambda: self.action('%')).grid(row=4, column=2)
Button(master, text="7", width=3, command=lambda: self.action('7')).grid(row=1, column=0)
Button(master, text="8", width=3, command=lambda: self.action('8')).grid(row=1, column=1)
Button(master, text="9", width=3, command=lambda: self.action('9')).grid(row=1, column=2)
Button(master, text="4", width=3, command=lambda: self.action('4')).grid(row=2, column=0)
Button(master, text="5", width=3, command=lambda: self.action('5')).grid(row=2, column=1)
Button(master, text="6", width=3, command=lambda: self.action('6')).grid(row=2, column=2)
Button(master, text="1", width=3, command=lambda: self.action('1')).grid(row=3, column=0)
Button(master, text="2", width=3, command=lambda: self.action('2')).grid(row=3, column=1)
Button(master, text="3", width=3, command=lambda: self.action('3')).grid(row=3, column=2)
Button(master, text="0", width=3, command=lambda: self.action('0')).grid(row=4, column=0)
Button(master, text=".", width=3, command=lambda: self.action('.')).grid(row=4, column=1)
Button(master, text="(", width=3, command=lambda: self.action('(')).grid(row=2, column=4)
Button(master, text=")", width=3, command=lambda: self.action(')')).grid(row=2, column=5)
Button(master, text="√", width=3, command=lambda: self.evaluation('sqroot')).grid(row=3, column=4)
Button(master, text="x²", width=3, command=lambda: self.evaluation('power')).grid(row=3, column=5)
# bind key strokes
self.e.bind('<Key>', lambda evt: self.keyaction(evt))
# Main
root = Tk()
obj = Calc(root) # object instantiated
root.mainloop()
I don't really care for some of the function and variable names. I like using descriptive names, so names like self.e would have been called self.textbox or something. These things are leftovers from the web copy I found, and I haven't changed them.
Per request, the original code for this is below.
#-*-coding: utf-8-*-
from Tkinter import *
import math
class calc:
def getandreplace(self):
"""replace x with * and ÷ with /"""
self.expression = self.e.get()
self.newtext=self.expression.replace(self.newdiv,'/')
self.newtext=self.newtext.replace('x','*')
def equals(self):
"""when the equal button is pressed"""
self.getandreplace()
try:
self.value= eval(self.newtext) #evaluate the expression using the eval function
except SyntaxError or NameErrror:
self.e.delete(0,END)
self.e.insert(0,'Invalid Input!')
else:
self.e.delete(0,END)
self.e.insert(0,self.value)
def squareroot(self):
"""squareroot method"""
self.getandreplace()
try:
self.value= eval(self.newtext) #evaluate the expression using the eval function
except SyntaxError or NameErrror:
self.e.delete(0,END)
self.e.insert(0,'Invalid Input!')
else:
self.sqrtval=math.sqrt(self.value)
self.e.delete(0,END)
self.e.insert(0,self.sqrtval)
def square(self):
"""square method"""
self.getandreplace()
try:
self.value= eval(self.newtext) #evaluate the expression using the eval function
except SyntaxError or NameErrror:
self.e.delete(0,END)
self.e.insert(0,'Invalid Input!')
else:
self.sqval=math.pow(self.value,2)
self.e.delete(0,END)
self.e.insert(0,self.sqval)
def clearall(self):
"""when clear button is pressed,clears the text input area"""
self.e.delete(0,END)
def clear1(self):
self.txt=self.e.get()[:-1]
self.e.delete(0,END)
self.e.insert(0,self.txt)
def action(self,argi):
"""pressed button's value is inserted into the end of the text area"""
self.e.insert(END,argi)
def __init__(self,master):
"""Constructor method"""
master.title('Calulator')
master.geometry()
self.e = Entry(master)
self.e.grid(row=0,column=0,columnspan=6,pady=3)
self.e.focus_set() #Sets focus on the input text area
self.div='÷'
self.newdiv=self.div.decode('utf-8')
#Generating Buttons
Button(master,text="=",width=10,command=lambda:self.equals()).grid(row=4, column=4,columnspan=2)
Button(master,text='AC',width=3,command=lambda:self.clearall()).grid(row=1, column=4)
Button(master,text='C',width=3,command=lambda:self.clear1()).grid(row=1, column=5)
Button(master,text="+",width=3,command=lambda:self.action('+')).grid(row=4, column=3)
Button(master,text="x",width=3,command=lambda:self.action('x')).grid(row=2, column=3)
Button(master,text="-",width=3,command=lambda:self.action('-')).grid(row=3, column=3)
Button(master,text="÷",width=3,command=lambda:self.action(self.newdiv)).grid(row=1, column=3)
Button(master,text="%",width=3,command=lambda:self.action('%')).grid(row=4, column=2)
Button(master,text="7",width=3,command=lambda:self.action('7')).grid(row=1, column=0)
Button(master,text="8",width=3,command=lambda:self.action(8)).grid(row=1, column=1)
Button(master,text="9",width=3,command=lambda:self.action(9)).grid(row=1, column=2)
Button(master,text="4",width=3,command=lambda:self.action(4)).grid(row=2, column=0)
Button(master,text="5",width=3,command=lambda:self.action(5)).grid(row=2, column=1)
Button(master,text="6",width=3,command=lambda:self.action(6)).grid(row=2, column=2)
Button(master,text="1",width=3,command=lambda:self.action(1)).grid(row=3, column=0)
Button(master,text="2",width=3,command=lambda:self.action(2)).grid(row=3, column=1)
Button(master,text="3",width=3,command=lambda:self.action(3)).grid(row=3, column=2)
Button(master,text="0",width=3,command=lambda:self.action(0)).grid(row=4, column=0)
Button(master,text=".",width=3,command=lambda:self.action('.')).grid(row=4, column=1)
Button(master,text="(",width=3,command=lambda:self.action('(')).grid(row=2, column=4)
Button(master,text=")",width=3,command=lambda:self.action(')')).grid(row=2, column=5)
Button(master,text="√",width=3,command=lambda:self.squareroot()).grid(row=3, column=4)
Button(master,text="x²",width=3,command=lambda:self.square()).grid(row=3, column=5)
#Main
root = Tk()
obj=calc(root) #object instantiated
root.mainloop()
Answer:
When I was learning Python, I found The Zen of Python quite helpful.
Formatting
I agree about renaming self.e to self.textbox. Descriptive names are generally better, unless this results in an overly long and unwieldy name. In addition to that there are a few more formatting issues. (You may find the PEP 8 Style Guide helpful)
Redundant comments. (See https://www.python.org/dev/peps/pep-0008/#inline-comments) For example, in this line:
obj = Calc(root) # object instantiated
The comment is not particularly helpful here as we can see that Calc(root) clearly instanciates a new Calc object. The comment on the following line, on the other hand, is more helpful:
self.txt = self.getvalue() # No need to manually delete when done from keyboard
Method names that do not use underscores to separate words. For example, instead of stripfirstchar we should have strip_first_char
While I could not find any mention of this in PEP 8, in my experience a class's __init__ method is almost always placed before any other methods.
Use docstrings instead of comments to document entire functions. For example:
def getvalue(self): # Returns value of the text box
return self.e.get()
becomes
def getvalue(self):
"""Returns value of the text box"""
return self.e.get()
There is no need to wrap a method in a lambda when it can be used on its own. For example, this:
Button(master, text='AC', width=3, command=lambda: self.clearall()).grid(row=1, column=4)
can be rewritten as:
Button(master, text='AC', width=3, command=self.clearall).grid(row=1, column=4)
Use or instead of any when all conditions are known beforehand. For example:
any([event.char.isdigit(), event.char in '/*-+%().'])
can be rewritten as
event.char.isdigit() or event.char in '/*-+%().'
Practical Issues
The C button in the GUI does not work properly. This is because of an indentation issue in the method clear1. The call to self.refresh_text should be outside the else block.
If I remove all characters in the text, then try to type something, the program will raise an IndexError. This can be fixed by changing the condition in the if statement in the strip_first_char method to
len(self.txt) > 0 and self.txt[0] == '0'
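The short-circuit behaviour that makes this fix safe can be checked directly (a quick sketch; the variable names are mine):

```python
txt = ''
# txt[0] on an empty string would raise IndexError, but `and` short-circuits:
# the right-hand side is never evaluated when len(txt) > 0 is False.
starts_with_zero = len(txt) > 0 and txt[0] == '0'
print(starts_with_zero)  # False, and no exception is raised
```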
Open the window only if this program is being run as __main__. Check if __name__ == '__main__' before opening the window. This is to be sure that this will not happen if someone is trying to use this program as a library. (e.g. embedding this calculator in another application)
% should be a special function. As it is, if I type 1%1, the program will interpret this as 1/1001 when it should cause some sort of syntax error. There are other ways to fix this, but this seems to be both the easiest to implement and the way most calculators I have seen handle this.
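The % problem is easy to demonstrate: with the blanket replace from the reviewed code's getandreplace method, the modulo expression 1%1 silently becomes a completely different calculation:

```python
# Reproduces the substitution done in the reviewed code's getandreplace method
expr = '1%1'.replace('%', '/100')
print(expr)        # '1/1001'
print(eval(expr))  # ~0.000999, which is neither 1 % 1 (= 0) nor "1 percent of 1"
```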
Using eval is usually a very bad idea
I see no way to remove eval from this code without significant changes. Letting the user type their math directly makes it harder to not use eval here, otherwise you could store the math in an internal, easy to parse form, and convert that to a more user-friendly string before displaying it, but this would require rewriting almost all of the program. | {
"domain": "codereview.stackexchange",
"id": 32722,
"tags": "python, beginner, python-3.x, calculator, tkinter"
} |
How do you quantify the "much greater" operator "$\gg$"? | Question: I'm specifically asked to compute the charge in the Earth-Moon system knowing that the gravitational force between the two bodies is much greater ($\gg$) than the electrostatic one. However, I don't know how many orders of magnitude should I take in order to make a good estimation.
Does anybody know, not just in this specific case, but perhaps in a more general one, how can one quantify these comparisons?
Answer: The most common case where this comes up is when you're dealing with a problem where it's helpful to linearize using a Taylor expansion. For example, a decaying exponential
$$
e^{-t/t_0} = 1
- \frac{t}{t_0}
+ \frac12\left(\frac{t}{t_0}\right)^2
- \frac1{3!}\left(\frac{t}{t_0}\right)^3
+ \cdots
$$
or a smallish logarithm
$$
\ln(1+x) = x - \frac{x^2}{2}
+ \frac{x^3}{3} - \frac{x^4}{4}
+ \cdots
$$
or the sine of an angle
$$
\sin\theta = \theta - \frac1{3!}\theta^3 + \frac{1}{5!} \theta^5
- \frac{1}{7!} \theta^7
+ \cdots
$$
or an actual binomial, like the acceleration due to gravity near Earth's surface
$$
g(h) = \frac{GM_\oplus}{(R_\oplus + h)^2}
= \frac{GM_\oplus}{R_\oplus^2} \left(
1 - 2 \frac{h}{R_\oplus}
+ \frac{(-2)(-3)}{2!} \left( \frac{h}{R_\oplus} \right)^2
+ \frac{(-2)(-3)(-4)}{3!} \left( \frac{h}{R_\oplus} \right)^3
+ \cdots
\right)
$$
Now, when we compute a physically interesting quantity, like
$$
g\approx 9.806\,65\,\rm m/s^2 \approx 9.81\, m/s^2 \approx 9.8\,m/s^2 \approx 10\,m/s^2,
$$
we have a built-in way to indicate the precision that we intend: we truncate the "insignificant" digits from the end of the decimal representation. It's pretty common to limit a result to two or three significant figures, which corresponds to an implied precision of somewhere under 1%.
In the Taylor expansions, there is always some dimensionless parameter which is raised to a different power in every term ($h/R_\oplus$ in the last example). If that dimensionless parameter is smaller than 0.1, then each term in the Taylor expansion corresponds, roughly, to a single significant digit in the final result. And if the dimensionless parameter is smaller than 0.01, you can expect to get your three-ish significant digits from the first two terms in the series! This is why you hear people, when pressed, say that "$\gg$" means something like "different by a factor of ten or more."
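This rule of thumb is easy to check numerically; here is a sketch comparing the two-term truncation 1 - x of the decaying exponential against the exact value:

```python
import math

def exp_two_terms(x):
    """First two terms of the Taylor series of e^-x, i.e. 1 - x."""
    return 1 - x

for x in (0.5, 0.1, 0.01):
    exact = math.exp(-x)
    rel_err = abs(exp_two_terms(x) - exact) / exact
    print(f"x = {x}: relative error {rel_err:.2e}")
# x = 0.5 leaves an error near 18% (barely one digit), x = 0.1 about 0.5%,
# and x = 0.01 about 5e-5 -- several significant digits from just two terms.
```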
For your specific example: we have data on the moon's orbit from lunar laser ranging that tells us the orbit is described by general relativity to a precision of about a centimeter. A one-centimeter shift in the orbit of the moon would change the moon-earth gravitational potential energy by
$$
\delta U
= GM_\oplus M_\text{moon}
\left(
\frac{1}{d} - \frac{1}{d+1\,\rm cm}
\right)
= \frac{GM_\oplus M_\text{moon}}{d}
\left(
\frac{1\,\rm cm}{d}
+ \cdots
%\mathcal O\left(
%\frac{1\,\rm cm}{d}
%\right)^2
\right)
$$
where $d$ is the average Earth-Moon distance. I think your numerical intuition should confirm that $d\gg 1\,\rm cm$.
In jabirali's answer he shows that the electrostatic and gravitational interactions would have roughly equal strength if the earth-moon system had $\sqrt{Qq}\approx 10^{14}\,\rm C$. By specifying how well we know the earth-moon distance we can put up a better limit:
\begin{align}
\sqrt{Qq} &\lesssim 10^{14}\,\rm C \frac{1\,cm}{384,400\,km}
\\&\lesssim 10^3\,\rm C
\end{align}
The feebleness of the gravitational force is pretty well-discussed, but I'm actually surprised at how small that is. | {
"domain": "physics.stackexchange",
"id": 19863,
"tags": "order-of-magnitude"
} |
Help with cob_trajectory_controller | Question:
Hello, I am working with cob_trajectory_controller and MoveIt!. Currently I can generate and plan a new trajectory in MoveIt, and upon trying to execute the trajectory the cob_controller fails every time with an out_of_range error.
Here is the error from the cob_trajectory_controller:
core service [/rosout] found
process[arm_controller/joint_trajectory_controller-1]: started with pid [27186]
[ INFO] [1532628301.414878883]: getting JointNames from parameter server: //arm_controller
[ INFO] [1532628301.420821815]: starting controller with DOF: 7 PTPvel: 0.400000 PTPAcc: 0.200000 maxError 0.150000
[ INFO] [1532628301.427403982]: Setting controller frequency to 100.000000 HZ
[ INFO] [1532628416.297625889]: Received new goal trajectory with 20 points
[ INFO] [1532628416.317896242]: Calculated 21 zwischenPunkte
[ INFO] [1532628416.319265583]: Calculated 21 splinepoints
terminate called after throwing an instance of 'std::out_of_range'
what(): vector::_M_range_check
[arm_controller/joint_trajectory_controller-1] process has died [pid 27186, exit code -6, cmd /home/ralab/Documents/ros/devel/lib/cob_trajectory_controller/cob_trajectory_controller __name:=joint_trajectory_controller __log:=/home/ralab/.ros/log/1f275a8e-90fb-11e8-bbfd-7085c274b189/arm_controller-joint_trajectory_controller-1.log].
log file: /home/ralab/.ros/log/1f275a8e-90fb-11e8-bbfd-7085c274b189/arm_controller-joint_trajectory_controller-1*.log
I am using ROS Indigo (cob_trajectory_controller is from indigo-dev), Ubuntu 14.04 32bit, Schunk PowerCube and modular robotics are from Indigo
Any help would be great! Thank you
Kyle
Originally posted by PatFGarrett on ROS Answers with karma: 31 on 2018-07-26
Post score: 0
Answer:
SOLVED: The problem was not with the cob_trajectory_controller but with the /arm_controller/joint_trajectory_controller/follow_joint_trajectory/goal topic that is published by MoveIt!. The planning group I created in the setup assistant did not include all of the arm's joints, and therefore the vector of positions that MoveIt! published was too small, causing the cob_trajectory_controller to continuously run out of range.
Originally posted by PatFGarrett with karma: 31 on 2018-08-01
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 31378,
"tags": "moveit, ros-indigo"
} |
Safe string formatting in Python | Question: I am a fan of f"{strings}" in Python.
However, building f-strings from user input can be dangerous, potentially leaking API keys or even allowing code execution!
So I decided to make something to mitigate these attacks. It works by searching the input string with regular expressions, and returns an empty string if it detects any evil piece of code.
It uses the fmt package
import fmt as f
import re
import os # This is for checking the system call
import doctest
# Setup
SECRET_GLOBAL = 'this is a secret'
class Error:
def __init__(self):
pass
def a_function():
return SECRET_GLOBAL
# Here is where the code begins
DANGEROUS_CODES = [
re.compile(r'(\.system\(.*\))'), # Any call to system
re.compile(r'(__[\w]+__)'), # Any internals
re.compile(r'([\w\d]+\(\))') # Functions
]
def safe_format(st):
'''
Safe python f-string formatting
This will detect evil code in the f-string, making formatting safe.
args:
st (str): The f-string
returns:
Empty string, if dangerous code is found
Executes the fstring normally if no dangerous code is found
Test globals access
>>> safe_format('{Error().__init__.__globals__[SECRET_GLOBAL]}')
''
Test function access
>>> safe_format('a_function()')
''
Test code execution via import
>>> safe_format('{__import__("os").system("dir")}')
''
Test code execution with imported os
>>> safe_format('{os.system("dir")}')
''
Test no stack trace
>>> safe_format('{0/0}')
''
Test acceptable fstring
>>> safe_format('{6 * 6}')
'36'
'''
if any(re.search(danger, st) for danger in DANGEROUS_CODES):
return ''
try:
return f(st)
except Exception as e:
return ''
if __name__ == '__main__':
doctest.testmod()
Did I miss anything?
Is this approach acceptable?
Answer: If you forbid function calls you cannot use getters, which is a big restriction. If you allow them, it is hard to make the formatting safe. Also, your regexes will cause collateral damage and filter legal expressions. So I think your approach is not so good. Don't allow users to mess with format strings; use Template from the string module for that purpose.
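A minimal sketch of that Template approach (the placeholder names here are illustrative, not from the question):

```python
from string import Template

# Template only substitutes $name / ${name} placeholders from the mapping
# you pass in -- no attribute access, no function calls, no expression
# evaluation -- so the f-string attack surface simply isn't there.
t = Template("Hello $user, you have $count messages")
print(t.substitute(user="alice", count=3))  # Hello alice, you have 3 messages

# A malicious "placeholder" does not match Template's identifier pattern,
# so substitute() rejects it instead of evaluating anything:
try:
    Template("${__import__('os').system('dir')}").substitute({})
except ValueError as e:
    print("blocked:", e)
```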
On your regexes:
if you have a regex for function calls you do not need one for system
[\w] already includes [\d]
your function regex catches only calls without arguments
the function and system regexes do not account for whitespace and can be bypassed
all regexes do collateral damage when matching outside the format braces
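A quick check of the function-call pattern from the question confirms both the whitespace/argument gaps and the false positives:

```python
import re

# The function-call pattern from the question, reproduced for the demo:
func_call = re.compile(r'([\w\d]+\(\))')

# Whitespace before the parentheses defeats it:
print(re.search(func_call, 'a_function ()'))       # None -> slips through
# So does any argument, since \(\) only matches empty parentheses:
print(re.search(func_call, 'a_function(666)'))     # None -> slips through
# Yet a harmless call mentioned outside the braces still trips it:
print(re.search(func_call, 'f() gives {5}') is not None)  # True -> false positive
```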
here are some more tests that currently fail
'''
some attacks
>>> safe_format('a_function(666)')
''
>>> safe_format('a_function ()')
''
>>> safe_format('{os. system("dir")}')
''
some collateral damage
>>> safe_format('...system(left) {5}')
'...system(left) 5'
>>> safe_format('f() gives {5}')
'f() gives 5'
''' | {
"domain": "codereview.stackexchange",
"id": 32075,
"tags": "python, python-3.x, strings, security"
} |
Is synesthesia caused by crossing the circuitry of different sensory inputs? | Question: I have a question about human perception. After reading the book "Sensory Perceptual Issues in Autism and Asperger Syndrome" and knowing about synesthesia (https://en.wikipedia.org/wiki/Synesthesia) something occurred to me.
If unique circuits of neurons are responsible for each of our senses, and the circuits can cross with each other (synesthesia), and given that autistic people can switch off a sense or may have problems filtering inputs, and all impulses are of one kind, how can the brain know where these inputs are coming from? For example if input is coming only from the eye, can a person think that it is coming from the ear? Can we see with our ears, or hear with our eyes, through some sort of circuit-crossing?
Answer: Short Answer
Synesthesia happens at a point during processing where we are not dealing with "raw visual input" or "raw auditory input" anymore, but already with more abstract constructs such as "colors" or "sounds".
Full Answer
Your question seems to consider sensory perception as a matter of "inputs" and the brain detecting those inputs, such that for example the only difference, from the brain's point of view, between visual inputs and auditory inputs is that one comes from the ear and the other from the eye.
A more accurate way of seeing it is that our sensory perceptions are really the brain constructing a coherent reality from the information it got from our senses. And I do mean "information" in the most abstract, computer-sciency sense possible. It's not like our eye is a photographic plate that contains "what we see", and the brain just has to reconstruct that. The processing of visual information, by which I mean picking out various features from the light received and deriving meaning from them, starts right in the retina. Some examples from the "14.5 Visual Processing in the Retina" section in this page:
http://nba.uth.tmc.edu/neuroscience/s2/chapter14.html
On bipolar cells:
The two bipolar cell types have different functional properties.
The off bipolar cells function to detect dark objects in a lighter background.
The on bipolar cells function to detect light objects in a darker background.
On horizontal cells:
The surround effect, produced by the horizontal cells, enhances brightness contrasts to produce sharper images, to make an object appear brighter or darker depending on the background and to maintain these contrasts under different illumination levels.
On the retinal ganglion cells:
The retinal ganglion cells provide information important for detecting the shape and movement of objects.
Type P retinal ganglion cells are color-sensitive object detectors.
Type M retinal ganglion cells are color-insensitive motion detectors.
On amacrine cells:
There are 20 or more types of amacrine cells based on their morphology and neurochemistry. The roles of three types have been identified. One type
is responsible for producing the movement sensitive (rapidly adapting) response of the Type M ganglion cells.
enhances the center-surround effect in ganglion cell receptive fields.
connects rod bipolar cells to cone bipolar cells, thus allowing ganglion cells to respond to the entire range of light levels, from scotopic to photopic.
Similarly in our ears, the cilia cells that vibrate in response to sound are frequency-dependent, and it is information about those frequencies that gets transmitted to the rest of the brain for processing. In other words the very organ that detects sound also performs a kind of Fourier transform on the sound waves it detects before anything else.
See for example:
https://psychology.stackexchange.com/questions/15274/how-does-the-inner-ear-encode-sound-intensity
This parsing, combining and interpreting of information continues throughout the perceptual circuits of the brain, and not in a straightforward way either - it is hypothesized that there are two independent or semi-independent processing streams for visual information, the ventral and dorsal streams, one which is for identifying objects and the other for guiding actions:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4678292/
There is also information flowing the other way, where more "advanced" brain regions draw conclusions about what's actually there and feed those conclusions back into the more "basic perception" parts to ensure that that's what we see. I.e. perception has both "feedforward" and "feedback" processes, or "bottom-up" and "top-down".
All of this is to say, there is absolutely no way for the brain to confuse information from the retina and information from the ear; the early processing in both of those is too tightly linked to the organ itself, and perception as a whole is inextricably linked to cognition. It's not like a mouse and keyboard that you can plug into different sockets; the retina technically is part of the brain in the first place!
This is not to say visual and auditory senses cannot be confused, obviously synesthesia happens, but it likely happens at a point in the processing where we're not dealing with "raw visual input" or "raw auditory input" anymore, but already more abstract constructs such as "colors" or "sounds".
For example this paper (gotten from the Wikipedia page https://en.wikipedia.org/wiki/Neural_basis_of_synesthesia) :
https://web.archive.org/web/20060527085838/http://psy.ucsd.edu/~edhubbard/papers/JCS.pdf
Hypothesizes that the synaesthetic association of letters with colors is caused by cross-wiring between the area of the visual cortex that processes letters and the one that processes colors.
This paper from the same source:
https://academic.oup.com/brain/article-abstract/118/3/661/321747?redirectedFrom=fulltext
Finds that color-word synaesthetes had, when told words the associated with colors, activation in the language and "advanced" visual processing areas but not in more basic visual processing areas.
Similarly for autism, given that sensory perception happens at every level in the brain, including conscious thought, those perceptions can be blocked or filtered (or not) by the brain at any stage. No need to assume it is at the "raw input" stage. | {
"domain": "biology.stackexchange",
"id": 8409,
"tags": "cell-biology, neuroscience, perception, synesthesia"
} |
Reduce the Number of Intensity Levels of a Grayscale Image in MATLAB | Question: I have written a Matlab script to reduce the number of intensity levels of each pixel of a grayscale image from 256 to some power of 2.
img_color = imread('photo.jpg');
img_gray = rgb2gray(img_color);
imshow(img_gray);
[rows, cols] = size(img_gray);
noOfDesiredIntensityLevels = 2; % test data; will check for 4, 8, 16, 32, etc.
bitsNeededToRepresentIntensityLevels = log2(noOfDesiredIntensityLevels);
new_img = img_gray;
for i = 1 : rows
for j = 1 : cols
new_img(i,j) = floor(img_gray(i,j)/(2^(8-bitsNeededToRepresentIntensityLevels)));
end
end
figure
imshow(new_img);
On execution, the script returns a black image. My expectation was that the image will be turned into black-and-white (intensity value for each pixel will be either a 0 or a 1).
What am I missing here?
P.S: I am a novice in Matlab and Image Processing. So, please ignore any mistakes in my understanding.
Answer: I think by "number of levels" you mean dividing the image's full grey-scale range piece-wise into the given number of levels.
For example: -
If number of levels = 2, then you want only two grey-scales in your image i.e. (0 and 128) or (128 and 255) depending upon if you are using floor or ceil within the range
If number of levels = 4, then you want 4 different levels in your image i.e. (0, 64, 128, 192)
Solution -
This operation can be done in a single line
new_img = ceil(img_gray./step)*step;
Here I have divide the entire range into desired number of parts (levels) with size as step and used ceil function to restrict the result to the upper bounds (you can use floor for lower bounds)
Full Code -
img_color = imread('peppers.png');
img_gray = rgb2gray(img_color);
imshow(img_gray);
[rows, cols] = size(img_gray);
noOfDesiredIntensityLevels = 2;
step = ceil(255/(noOfDesiredIntensityLevels - 1));
new_img = ceil(img_gray./step)*step;
figure
imshow(new_img);
% optional code to show the levels
allSteps = 0;
currStep = 0;
while(currStep < 255)
currStep = currStep + step;
allSteps = [allSteps currStep];
end
allSteps
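The same step arithmetic can be sanity-checked outside MATLAB; here is a small Python sketch (math.ceil standing in for MATLAB's ceil; the sample pixel values are illustrative):

```python
import math

def quantize(values, n_levels):
    """Map 0-255 grey values onto n_levels steps, mirroring the
    ceil(img./step)*step approach from the answer."""
    step = math.ceil(255 / (n_levels - 1))
    return [math.ceil(v / step) * step for v in values]

# With 4 levels, step = ceil(255/3) = 85, so outputs land on 0, 85, 170, 255:
print(quantize([0, 60, 128, 200, 255], 4))  # [0, 85, 170, 255, 255]
```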
Edit 1 - I have included a few extra lines which show you the different levels (steps) in an array. It's optional, but nice to know the exact values of the levels. | {
"domain": "dsp.stackexchange",
"id": 7808,
"tags": "image-processing, matlab, discretization"
} |
No laser scan received | Question:
I'm trying to use amcl to localize my robot using a bag file. Steps I do are these:
Publishing the map: rosrun map_server map_server map.yaml
Running amcl: rosrun amcl amcl scan:=base_scan _odom_frame:=odom_combided
Playing the bag file: rosbag play small.bag
Published topics are:
/amcl/parameter_descriptions
/amcl/parameter_updates
/amcl_pose
/base_odometry/odom
/base_odometry/odometer
/base_odometry/state
/base_scan
/clock
/initialpose
/map
/map_metadata
/move_base_simple/goal
/particlecloud
/robot_pose_ekf/odom_combined
/rosout
/rosout_agg
/tf
/tilt_scan
/torso_lift_imu/data
/torso_lift_imu/is_calibrated
But amcl node prints out this message continuously: [ WARN] [1391522838.082207618]: No laser scan received (and thus no pose updates have been published) for 1391522838.082082 seconds. Verify that data is being published on the /base_scan topic.
And /amcl_pose publishes nothing.
rxgraph output:
Originally posted by maysamsh on ROS Answers with karma: 139 on 2014-02-04
Post score: 0
Original comments
Comment by Procópio on 2014-02-06:
can you post your tf tree?
Comment by hawesie on 2014-06-02:
It's not just a typo odom_combided != odom_combined?
Answer:
If you are playing back a bagfile, you should probably use simulated time, since the timestamps on the recorded and played back messages will not match the current system time (the "no laser scan recieved for huge number of seconds" is a good indicator of this). To use sim time, you should set the "use_sim_time" parameter to true before starting AMCL:
rosparam set use_sim_time true
And when starting your bagfile, you should tell it to broadcast the clock (so that the current system time matches the timestamps on your messages being played):
rosbag play small.bag --clock
Originally posted by fergs with karma: 13902 on 2014-02-04
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by maysamsh on 2014-02-04:
Now it says: [ WARN] [1391568852.036509093, 1302532602.518714771]: MessageFilter [target=/odom ]: Dropped 100.00% of messages so far. Please turn the [ros.amcl.message_notifier] rosconsole logger to DEBUG for more information.
[ WARN] [1391568852.036509093, 1302532602.518714771]: MessageFilter [target=/odom ]: Dropped 100.00% of messages so far. Please turn the [ros.amcl.message_notifier] rosconsole logger to DEBUG for more information.
Comment by fergs on 2014-02-05:
Have you given AMCL a starting localization, either via parameter or from RVIZ?
Comment by maysamsh on 2014-02-05:
@fergs, no should I do it? if yes, how?
Comment by fergs on 2014-02-06:
http://wiki.ros.org/navigation/Tutorials/Using%20rviz%20with%20the%20Navigation%20Stack shows how to send an initial pose using RVIZ | {
"domain": "robotics.stackexchange",
"id": 16879,
"tags": "ros, navigation, amcl-pose, localilzation, amcl"
} |
What's the basic premise of General Relativity? | Question: What is the basic assumption(s) required to explore general relativity?
For example, if one merely assumes that the speed of light $c$ is the same for all observers, and the laws of physics are the same for all observers, then it's possible using a mix of algebra and some calculus to derive every SR result from time dilation to length contraction, $E = mc^2$, and so on.
What is the basic premise of general relativity? I assume that all of SR is included but is there anything else?
Answer: The basic premise behind general relativity is the equivalence principle, the idea that an object moving in an accelerating (non-inertial) reference frame is indistinguishable from one moving under the influence of a gravitational field.
(As an aside, Einstein's original proofs of time dilation, length contraction, even $E=mc^2$ don't involve any calculus - they are quite simple and elegant. GR on the other hand is rather calculus-intensive) | {
"domain": "physics.stackexchange",
"id": 8112,
"tags": "general-relativity, differential-geometry, metric-tensor, equivalence-principle, covariance"
} |
Why don't metals disintegrate in light? | Question: I've been learning about photoelectricity. An electron can gain the energy from a single photon, and if that energy is greater than the work function of the metal, the electron can leave the metal. However, I was under the impression that the electrons in the metal are responsible for holding the nuclei in the metal together?
Surely if the metal is exposed to light, where the photons have more energy than the work function, for long enough (or very intense light), all/majority of the electrons will leave the metal and no longer be holding the nuclei together.
Also will the work function increase with time, as there are less and less electrons, so the ratio of positive nuclei to negative electrons increases and the electrons are more attracted to the nuclei, so more energy is required to break that attraction?
(Additionally, how come the photoelectric effect is not noticed in daily life, with metals becoming positively charged just by being in light?)
Answer: Metals do disintegrate in light, just very slowly. Light is often a very weak source and its effect is quite unnoticeable. There are many systems that use highly concentrated light beams, lasers, for etching purposes. http://en.wikipedia.org/wiki/Laser_engraving
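As a rough numeric illustration of why everyday light barely does this: the E[eV] ≈ 1240 / λ[nm] shortcut is standard, but the work function below (zinc, ~4.3 eV) is an approximate textbook value used purely for illustration, not a figure from the answer.

```python
# Photon energy from E = h*c/lambda, in the convenient eV/nm form.
def photon_energy_ev(wavelength_nm):
    return 1240.0 / wavelength_nm

WORK_FUNCTION_ZN = 4.3  # eV, approximate textbook value for zinc

for wl in (250, 400, 700):  # UV, violet, red
    e = photon_energy_ev(wl)
    print(f"{wl} nm -> {e:.2f} eV, above Zn work function: {e > WORK_FUNCTION_ZN}")
```

Only the UV photon clears the threshold, which is one reason ordinary daylight does not visibly charge most metals.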
Note, however, that as electrons are removed, it becomes roughly exponentially harder to remove them as the material becomes positively charged. It is very unlikely in any process for a significant portion of electrons to be removed without removing the nuclei. | {
"domain": "physics.stackexchange",
"id": 3446,
"tags": "photoelectric-effect"
} |
Best Practice: URDF descriptions, real robots, gazebo plugins and dependencies | Question:
Most complex URDF models use quite a few gazebo plugins. This often means that a dependency to "gazebo_plugins" (or other plugins) exists in the manifest.xml, which in turn means the inclusion of gazebo itself as a dependency. On a real robot or any setup that never simulates the robot model, this is a quite needless and heavyweight dependency. Are there any best practices for working around this in a systematic fashion, so one has a basic URDF model without gazebo dependencies, as well as a simulation version with all gazebo controllers?
I have a few ideas involving two packages, using ENV tags in xacro files and so on, but that seems rather cumbersome to me.
/edit: So here's the scenario that motivated my question: I have a vehicle model in the hector_ugv_description package. This contains the basic urdf file. For this model, I want to pull the low poly hokuyo UTM-30LX urdf model from the hector_sensors_description package. I also want to use a sonar sensor from another package on the vehicle, which uses a custom plugin (that has to be built for the sensor to work in gazebo). So from what I gather, I have to make 6 packages:
Not depending on anything gazebo, for use on real robot (only urdf/xacro and containing gazebo tags):
hector_ugv_description
hector_sensors_description
hector_sonar_description
Depending on gazebo for use in simulation (so plugins are properly built when building with rosmake)
hector_ugv_gazebo (depends on needed plugins, hector_ugv_description, hector_sensors_gazebo and hector_sonar_gazebo)
hector_sensors_gazebo (depends on needed plugins and hector_sensors_description)
hector_sonar_gazebo (depends on needed plugins and hector_sonar_description)
Originally posted by Stefan Kohlbrecher on ROS Answers with karma: 24361 on 2012-02-08
Post score: 7
Answer:
Perhaps I'm not interpreting your question correctly, but if you wanted to be able to load your urdf in a real-robot situation, you could use the line:
<param name="robot_description" command="$(find xacro)/xacro.py '$(find my_robot_description)/urdf/my_robot.urdf.xacro'" />
or something similar. What I've always seen done is that there are two main files: one urdf that describes only the solid bodies of the robot (named with the .urdf.xacro extension), and one with only the controller descriptions (named with the .gazebo.xacro extension). You then provide two separate wrapper urdf files to be included in your launch files.
<param name="robot_description" command="$(find xacro)/xacro.py '$(find my_robot_description)/urdf/robot_real_world.urdf.xacro'" />
or
<param name="robot_description" command="$(find xacro)/xacro.py '$(find my_robot_description)/urdf/robot_simulator.urdf.xacro'" />
The file "robot_real_world.urdf.xacro" would look like this:
<robot name="robot"
xmlns:xacro="http://www.ros.org/wiki/xacro"
xmlns:xi="http://www.w3.org/2001/XInclude">
<include filename="$(find my_robot_description)/urdf/body.urdf.xacro" />
</robot>
And the file "robot_simulator.urdf.xacro" would look like this:
<robot name="robot"
xmlns:xacro="http://www.ros.org/wiki/xacro"
xmlns:xi="http://www.w3.org/2001/XInclude">
<include filename="$(find my_robot_description)/urdf/body.urdf.xacro" />
<include filename="$(find my_robot_description)/urdf/body_controllers.gazebo.xacro" />
</robot>
If you have all of these urdf files in a separate package, your real world robot doesn't need to depend on gazebo at all. You can just reference these files by name, regardless of whether that package is built or not.
EDIT:
If you want to avoid large dependencies, you don't have to depend on "{}_description" packages. They don't get built, so there's no point in having them as a dependency. It will throw a run-time error if the package doesn't exist though. That's your only problem.
Originally posted by DimitriProsser with karma: 11163 on 2012-02-08
This answer was ACCEPTED on the original site
Post score: 7
Original comments
Comment by Stefan Kohlbrecher on 2012-02-09:
Ok makes sense, after looking at how it's done for pr2 and turtlebot I adapted our stuff to look similar. Thanks!
Comment by Stefan Kohlbrecher on 2012-02-08:
Hi, I did an edit with a example scenario. Thanks for the answer so far :)
Comment by kump on 2018-10-03:
What is supposed to be in the body_controllers.gazebo.xacro file? Can you post some example? Would transmission tags be in the gazebo.xacro file or the urdf.xacro file?
Comment by kump on 2018-10-09:
I guess everything that is in the tags, right? So in my case just gazebo-ros controll plugin.
Comment by kump on 2018-10-09:
What is the use case for robot description URDF file other than for simulation? | {
"domain": "robotics.stackexchange",
"id": 8148,
"tags": "gazebo, simulation, simulator-gazebo, dependencies, best-practices"
} |
Launching empty world segfault in ROS | Question:
I am on an Ubuntu 12.04 LTS 64bit and running off of an Intel 915 chipset. My current gazebo distribution is 1.6.16
Running empty_world.launch would segfault, so I ran debug.launch and this is the result:
roslaunch gazebo_worlds debug.launch
... logging to /home/falkor/.ros/log/5f384930-0f1f-11e2-a67f-5891cf52dcee/roslaunch-pc-16477.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is ignored
process[master]: started with pid [16493]
ROS_MASTER_URI=http://127.0.0.1:11311
setting /run\_id to 5f384930-0f1f-11e2-a67f-5891cf52dcee
Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in ignored
process[rosout-1]: started with pid [16506]
started core service [/rosout]
Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in ignored
process[gazebo-2]: started with pid [16520]
Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in ignored
process[gazebo_gui-3]: started with pid [16530]
GNU gdb (Ubuntu/Linaro 7.4-2012.04-0ubuntu2) 7.4-2012.04
Copyright (C) 2012 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
For bug reporting instructions, please see:
...
Reading symbols from /opt/ros/fuerte/stacks/simulator_gazebo/gazebo/gazebo/bin/gzserver...Gazebo multi-robot simulator, version 1.0.2
Copyright (C) 2011 Nate Koenig, John Hsu, Andrew Howard, and contributors.
Released under the Apache 2 License.
http://gazebosim.org
(no debugging symbols found)...done.
[tcsetpgrp failed in terminal_inferior: Inappropriate ioctl for device]
[tcsetpgrp failed in terminal_inferior: Inappropriate ioctl for device]
[tcsetpgrp failed in terminal_inferior: Inappropriate ioctl for device]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
[tcsetpgrp failed in terminal_inferior: Inappropriate ioctl for device]
Gazebo multi-robot simulator, version 1.0.2
Copyright (C) 2011 Nate Koenig, John Hsu, Andrew Howard, and contributors.
Released under the Apache 2 License.
http://gazebosim.org
[tcsetpgrp failed in terminal_inferior: Inappropriate ioctl for device]
[tcsetpgrp failed in terminal_inferior: Inappropriate ioctl for device]
[tcsetpgrp failed in terminal_inferior: Inappropriate ioctl for device]
[tcsetpgrp failed in terminal_inferior: Inappropriate ioctl for device]
[tcsetpgrp failed in terminal_inferior: Inappropriate ioctl for device]
[tcsetpgrp failed in terminal_inferior: Inappropriate ioctl for device]
[tcsetpgrp failed in terminal_inferior: Inappropriate ioctl for device]
[tcsetpgrp failed in terminal_inferior: Inappropriate ioctl for device]
[New Thread 0x7fffe6e65700 (LWP 16549)]
[tcsetpgrp failed in terminal_inferior: Inappropriate ioctl for device]
[tcsetpgrp failed in terminal_inferior: Inappropriate ioctl for device]
[tcsetpgrp failed in terminal_inferior: Inappropriate ioctl for device]
[tcsetpgrp failed in terminal_inferior: Inappropriate ioctl for device]
[tcsetpgrp failed in terminal_inferior: Inappropriate ioctl for device]
[tcsetpgrp failed in terminal_inferior: Inappropriate ioctl for device]
[tcsetpgrp failed in terminal_inferior: Inappropriate ioctl for device]
[New Thread 0x7fffe5e3a700 (LWP 16550)]
[New Thread 0x7fffe5639700 (LWP 16552)]
[New Thread 0x7fffe4e38700 (LWP 16553)]
[New Thread 0x7fffd7fff700 (LWP 16554)]
[New Thread 0x7fffd77fe700 (LWP 16561)]
[New Thread 0x7fffd6ffd700 (LWP 16562)]
[New Thread 0x7fffd67fc700 (LWP 16563)]
[ INFO] [1349463833.537501371]: waitForService: Service [/gazebo/set_physics_properties] has not been advertised, waiting...
Msg Waiting for master
Msg Connected to gazebo master @ http://localhost:11345
Msg Waiting for master
Msg Connected to gazebo master @ http://localhost:11345
[tcsetpgrp failed in terminal_inferior: Inappropriate ioctl for device]
[tcsetpgrp failed in terminal_inferior: Inappropriate ioctl for device]
[tcsetpgrp failed in terminal_inferior: Inappropriate ioctl for device]
[tcsetpgrp failed in terminal_inferior: Inappropriate ioctl for device]
[tcsetpgrp failed in terminal_inferior: Inappropriate ioctl for device]
[tcsetpgrp failed in terminal_inferior: Inappropriate ioctl for device]
[tcsetpgrp failed in terminal_inferior: Inappropriate ioctl for device]
[tcsetpgrp failed in terminal_inferior: Inappropriate ioctl for device]
[tcsetpgrp failed in terminal_inferior: Inappropriate ioctl for device]
[tcsetpgrp failed in terminal_inferior: Inappropriate ioctl for device]
[tcsetpgrp failed in terminal_inferior: Inappropriate ioctl for device]
[tcsetpgrp failed in terminal_inferior: Inappropriate ioctl for device]
[tcsetpgrp failed in terminal_inferior: Inappropriate ioctl for device]
[New Thread 0x7fffc1117700 (LWP 16566)]
[New Thread 0x7fffc0916700 (LWP 16567)]
[New Thread 0x7fffb7fff700 (LWP 16568)]
[New Thread 0x7fffb77fe700 (LWP 16569)]
[New Thread 0x7fffb6ffd700 (LWP 16570)]
[New Thread 0x7fffb67fc700 (LWP 16571)]
[New Thread 0x7fffb5ffb700 (LWP 16572)]
[New Thread 0x7fffb57fa700 (LWP 16573)]
[tcsetpgrp failed in terminal_inferior: Inappropriate ioctl for device]
[tcsetpgrp failed in terminal_inferior: Inappropriate ioctl for device]
[ INFO] [1349463850.488025154]: joint trajectory plugin missing , defaults to 0.0 (as fast as possible)
[New Thread 0x7fffb4dd0700 (LWP 16577)]
[New Thread 0x7fff97fff700 (LWP 16608)]
[New Thread 0x7fff977fe700 (LWP 16609)]
[New Thread 0x7fff96ffd700 (LWP 16619)]
[ INFO] [1349463856.300181864, 0.214000000]: waitForService: Service [/gazebo/set_physics_properties] is now available.
[ INFO] [1349463856.332807161, 3.872000000]: Starting to spin physics dynamic reconfigure node...
/opt/ros/fuerte/stacks/simulator_gazebo/gazebo/scripts/gui: line 2: 16534 Segmentation fault `rospack find gazebo`/gazebo/bin/gzclient -g `rospack find gazebo`/lib/libgazebo_ros_paths_plugin.so
[gazebo_gui-3] process has died [pid 16530, exit code 139, cmd /opt/ros/fuerte/stacks/simulator_gazebo/gazebo/scripts/gui __name:=gazebo_gui __log:=/home/falkor/.ros/log/5f384930-0f1f-11e2-a67f-5891cf52dcee/gazebo_gui-3.log].
log file: /home/falkor/.ros/log/5f384930-0f1f-11e2-a67f-5891cf52dcee/gazebo_gui-3*.log
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7fffe6e65700 (LWP 16549)]
0x00007ffff76d1955 in boost::asio::basic_socket >::close() ()
from /opt/ros/fuerte/stacks/simulator_gazebo/gazebo/gazebo/lib/libgazebo_transport.so.1
(gdb) ni
[tcsetpgrp failed in terminal_inferior: Inappropriate ioctl for device]
[Thread 0x7fffb6ffd700 (LWP 16570) exited]
[Thread 0x7fffb67fc700 (LWP 16571) exited]
[Thread 0x7fff97fff700 (LWP 16608) exited]
[Thread 0x7fffb5ffb700 (LWP 16572) exited]
[Thread 0x7fffb77fe700 (LWP 16569) exited]
[Thread 0x7fffb7fff700 (LWP 16568) exited]
[Thread 0x7fffb4dd0700 (LWP 16577) exited]
[Thread 0x7fff96ffd700 (LWP 16619) exited]
[Thread 0x7fff977fe700 (LWP 16609) exited]
[Thread 0x7fffb57fa700 (LWP 16573) exited]
[Thread 0x7fffc1117700 (LWP 16566) exited]
[Thread 0x7fffd67fc700 (LWP 16563) exited]
[Thread 0x7fffd6ffd700 (LWP 16562) exited]
[Thread 0x7fffd77fe700 (LWP 16561) exited]
[Thread 0x7fffd7fff700 (LWP 16554) exited]
[Thread 0x7fffe4e38700 (LWP 16553) exited]
[Thread 0x7fffe5639700 (LWP 16552) exited]
[Thread 0x7fffe5e3a700 (LWP 16550) exited]
[Thread 0x7fffe6e65700 (LWP 16549) exited]
[Thread 0x7ffff7fb9800 (LWP 16535) exited]
Program terminated with signal SIGSEGV, Segmentation fault.
The program no longer exists.
(gdb) q
[gazebo-2] process has finished cleanly
log file: /home/falkor/.ros/log/5f384930-0f1f-11e2-a67f-5891cf52dcee/gazebo-2*.log
---
[Originally posted](https://answers.gazebosim.org/question/48/launching-empty-world-segfault-in-ros/) by [falcon1](https://answers.gazebosim.org/users/39/falcon1/) on Gazebo Answers with karma: 1 on 2012-10-05
Post score: 0
---
### Original comments
**Comment by [SL Remy](https://answers.gazebosim.org/users/43/sl-remy/) on 2012-10-21**:\
Do you also get a segfault when running stageros?
**Comment by [gerkey](https://answers.gazebosim.org/users/74/gerkey/) on 2013-01-11**:\
Can you post a backtrace from where the segfault occurs?
Answer:
This error:
[tcsetpgrp failed in terminal_inferior: Inappropriate ioctl for device]
Looks like a system level problem. As in you have a fundamental issue with hardware or software outside the scope of gazebo.
Originally posted by nkoenig with karma: 7676 on 2012-10-18
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by gerkey on 2013-01-11:
It looks like the tcsetpgrp error is coming from gdb, something to do with how it's trying to manipulate the console. I don't think that it's related to the segfault. | {
"domain": "robotics.stackexchange",
"id": 2761,
"tags": "gazebo"
} |
Are nucleic acids found in cell membranes? | Question: I've found various results online and I was recently marked in on an important test as wrong when I made the assumption they were not found in the cell membrane. Does anyone know what the correct answer is in this case? Thanks in advance.
Answer: Nucleic acids are not structural components of cell membranes (ribosomes and nucleus are the main places where they are found).
However, being open system, cell exchanges chemicals with its surrounding and cellular membrane can transport nucleic acids as well. This is why these acids can potentially be detected in the cell membranes.
If you have the exact question expression it can help with more precise answer. | {
"domain": "biology.stackexchange",
"id": 3705,
"tags": "cell-biology, cell-membrane"
} |