Changing variables between two different metric ansatzes in the calculation of the Klein-Gordon equation | Question: My question concerns changing variables in the calculation of the Klein-Gordon equation for a scalar field given two different "guesses" for the metric.
I consider the following Einstein tensor, which describes an action with a scalar field $\phi(r)$ with potential $V(\phi)$:
$$
R_{\mu \nu}-\kappa \left(\partial_\mu \phi\partial_\nu \phi+g_{\mu \nu}V(\phi)\right)=0
$$
Now, we'll propose a metric of the form
$$
ds^2=-f(r)dt^2+f^{-1}(r)dr^2+a^2(r)d\sigma^2
$$
This gives us a Klein-Gordon equation of the form (Eqn. 1)
$$
\square \phi=g^{11}\phi''-(g^{00}\Gamma_{00}^1+g^{11}\Gamma_{11}^1+g^{22}\Gamma_{22}^1+g^{33}\Gamma_{33}^1)\phi'\notag\\
=f(r)\phi''(r)+\left(f'(r)+2\frac{a'(r)}{a(r)}f(r)\right)\phi'(r)=\frac{dV}{d\phi}
$$
Now, we can also define the metric in the form
$$
ds^2=p(r')\left\{
-b(r')dt^2+\frac{1}{b(r')}dr'^2+r'^2d\sigma^2
\right\}
$$
The Klein-Gordon equation now yields (Eqn. 2)
$$\frac{b(r')}{p(r')}\phi''(r')+\phi'(r')\left\{\frac{b(r')p'(r')}{p^2(r')}+\frac{2b(r')}{r'p(r')}+\frac{b'(r')}{p(r')}\right\}=\frac{dV}{d\phi} $$
where now the derivative is taken with respect to r'.
My goal is to go from Eqn. 1 to Eqn. 2 with the following change of variables:
$$f(r')=p(r')b(r'),\quad a(r')=r'\sqrt{p(r')},\quad \frac{dr'}{dr}=\frac{1}{p(r')}$$
For example, for the second derivative term,
$$f\phi''=pb \frac{d^2\phi}{dr^2}\notag\\
=pb\frac{d}{dr}\frac{dr'}{dr}\frac{d\phi}{dr'}\notag\\
=pb \frac{d}{dr'}\left(\frac{1}{p}\frac{d\phi}{dr'}\right)\frac{1}{p}\notag\\
=-\frac{p'b}{p^2}\phi'+\frac{b}{p}\phi'' $$
where the variables are implied and primed-derivative notation is reserved for the $r'$ coordinate. The term proportional to $\phi''$ matches the form in Eqn. 2, but the term proportional to $\phi'$ seems a bit off. Doing the exact same thing for the other terms in $\phi'$ and adding them all together, I find that
$$f'+2\frac{a'}{a}f=\frac{2b}{r'}+\frac{2bp'}{p}+b' $$
Putting this all together, I find
$$\frac{b(r')}{p(r')}\phi''(r')+\phi'(r')\left\{
-\frac{b(r')p'(r')}{p^2(r')}+\frac{2b(r')}{r'}+\frac{2b(r') p'(r')}{p(r')}+b'(r')
\right\} $$
Although I am close, I am off by a few signs and factors. However, I have checked it several times and all seems to be fine. Is my mapping incorrect? When I change variables in the derivative, am I doing something incorrect, or am I missing something more subtle?
Answer: I have discovered the issue: the first derivative $\phi'$ was not transformed into the $r'$ coordinate properly. The correct calculation is given below:
$$\left(f'+2\frac{a'}{a}f\right)\phi'\notag\\
=\left(b\frac{p'}{p}+b'+\frac{2b}{r'}+\frac{bp'}{p}\right)\frac{1}{p}\phi'\notag\\
=\left(\frac{bp'}{p^2}+\frac{b'}{p}+\frac{2b}{r'p}+\frac{bp'}{p^2}\right)\phi' $$
The first term cancels the $\frac{d\phi}{dr'}$ term that pops out of $\frac{d^2\phi}{dr^2}$, yielding the correct Eqn. 2 given in the question. | {
"domain": "physics.stackexchange",
"id": 68772,
"tags": "general-relativity, black-holes, metric-tensor, klein-gordon-equation, no-hair-theorem"
} |
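The change of variables in the answer above can also be checked symbolically. Below is a minimal sympy sketch (not from the original exchange), writing the $r'$ coordinate as `rp` and implementing $\frac{d}{dr}=\frac{1}{p}\frac{d}{dr'}$:

```python
import sympy as sp

rp = sp.symbols("rp", positive=True)  # the r' coordinate
p, b, phi = (sp.Function(n)(rp) for n in ("p", "b", "phi"))

def Dr(expr):
    """d/dr, rewritten via dr'/dr = 1/p as (1/p) d/dr'."""
    return expr.diff(rp) / p

f = p * b            # f(r') = p(r') b(r')
a = rp * sp.sqrt(p)  # a(r') = r' sqrt(p(r'))

# Left-hand side of Eqn. 1, with every r-derivative expressed through Dr
lhs = f * Dr(Dr(phi)) + (Dr(f) + 2 * Dr(a) / a * f) * Dr(phi)

# Left-hand side of Eqn. 2
rhs = (b / p) * phi.diff(rp, 2) + phi.diff(rp) * (
    b * p.diff(rp) / p**2 + 2 * b / (rp * p) + b.diff(rp) / p
)

assert sp.simplify(lhs - rhs) == 0  # the two forms agree identically
```

The assertion passing confirms that, once the $1/p$ factor is applied to every $r$-derivative, Eqn. 1 reproduces Eqn. 2 exactly.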
Euclidean geometry theorem proving complexity | Question: Euclidean geometry is complete, so the problem of determining whether a statement $A$ is provable is computable. Do we know its time complexity?
Answer: It is common to prove the decidability of first-order Euclidean geometry by encoding the language of Euclidean geometry into the language of real closed fields and then showing that the latter is decidable. A singly-exponential space upper bound on deciding the first-order theory of real closed fields was proven in Ben-Or, Kozen, and Reif (1986). This implies a doubly-exponential time upper bound.
I believe this is the best known complexity for the decision problem for general first-order sentences in the language of real closed fields. However, I am not sure whether deciding real closed fields is equivalent (bidirectionally) to deciding Euclidean geometry, since a typical encoding of (say) the language of Tarski's axioms into the language of real closed fields only uses a small subset of the possible polynomials. So maybe first-order Euclidean geometry can be decided with lower complexity than this. It is at least PSPACE-hard, since TQBF is easily encoded into first-order Euclidean geometry. | {
"domain": "cs.stackexchange",
"id": 16468,
"tags": "time-complexity, computational-geometry"
} |
Do both ends of a muscle contract? | Question: I was under the impression that both ends of a muscle contract. For instance, the fibers of the biceps run parallel to the humerus so I thought they pulled toward the middle.
But now I'm confused because it sounds like the contraction of the fibers doesn't necessarily have to parallel the motion of the muscle itself. In other words, just because the fibers contract doesn't mean that the muscle lengthens or shortens in a particular direction.
Can someone clarify if muscles contract in both directions?
Or perhaps a better question is: do muscles contract in a particular direction, and is this the same direction in which the fibers contract, or does the muscle as a whole move in a different direction from the one in which the fibers are contracting?
Answer: The muscle fibers (or, more exactly, the actin and myosin filaments) contract towards each other. This makes them "move to the middle", building up a force on both sides of the muscle. See this image (from Wikipedia), which illustrates this process:
There is also an animated picture available, which illustrates this process even better (from here):
The process of contraction applies an equal force to both endpoints of the muscle. | {
"domain": "biology.stackexchange",
"id": 3507,
"tags": "muscles"
} |
Give a grammar for a language on Σ={a,b,c} that accepts all strings containing exactly one a | Question: I have created the following solution but its left recursive:
S--> a|bSc|cSb|Sbc
Also, it does not accept "ab", "cba", "abb", or "abc".
Somebody please guide me.
Zulfi.
Answer: $$S\rightarrow aA, bS,cS$$
$$A\rightarrow bA, cA, \epsilon$$
Note that you need at least two non-terminals for this language, since you need to differentiate two states, namely whether an $a$ has already been added/found or not. | {
"domain": "cs.stackexchange",
"id": 15107,
"tags": "regular-languages, formal-grammars"
} |
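To double-check the answer's grammar (a quick sketch added here, not from the original thread), one can simulate it as the two-state machine it encodes and compare against the regular expression `[bc]*a[bc]*`, which describes the same language:

```python
import itertools
import re

# Exactly one 'a' over the alphabet {a, b, c}
one_a = re.compile(r"^[bc]*a[bc]*$")

def derives(s):
    """Simulate S -> aA | bS | cS ; A -> bA | cA | eps as a state machine."""
    state = "S"
    for ch in s:
        if state == "S":
            state = "A" if ch == "a" else ("S" if ch in "bc" else None)
        elif state == "A":
            state = "A" if ch in "bc" else None
        if state is None:
            return False
    return state == "A"  # only A can derive epsilon, so only A accepts

# Exhaustive check over all strings up to length 5
for n in range(6):
    for s in map("".join, itertools.product("abc", repeat=n)):
        assert derives(s) == bool(one_a.match(s))
```

The two states here are exactly the two non-terminals the answer says are necessary: "no `a` seen yet" (S) versus "one `a` already seen" (A).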
Calculating match % and ranking according to that | Question: I'm creating a website where users will answer some yes/no questions set by me; it's up to them how many of those questions they want to answer. After a user submits his answers, he will be shown his top 5 matches along with their match percentages. If two users have 10 questions in common and their answers match on 8 of those questions, then their match % will be 80%.
I can build this, but my concern is efficiency. One way of doing it: if a user wants to see his top matches, the match % (or match ratio) is calculated for him against every other user in the system, the results are stored in a temporary array, the array is sorted, and the top 5 matches from the array are displayed.
Any less resource intensive way to calculate and show top matches?
Edit: If the top matches can be calculated without first calculating match percentages, then I'm open to that.
Answer: There is probably no algorithm that is significantly better than doing a pairwise comparison between all pairs of users.
You can, as you say, memoize (remember) the top-5 match for each user, so that you never have to recompute the top-5 for that user again in the future, but you'll still need to compare that user to each other user at least once. Basically, each time a new user appears, you compare them to all other users, to compute their top-5 matches (and update the top-5 matches of other users by adding this user to the top-5 lists of other users where appropriate).
That's basically the brute-force algorithm. I don't expect there to be an alternative that is better in practice.
In principle, there are a variety of methods. One method is to use a locality-sensitive hash. The simplest form of LSH is to choose a random subset of questions, then for each user extract their answers to those questions, hash that, and store the user in a bucket associated with the user's hash value. Then two users who are very similar have a decent chance of ending up in the same bucket. If you construct 100 such data structures, each with a different subset of questions, and if you choose the parameters right, you might have a decent chance of finding all of the top-5 matches in this way (along with some other extraneous matches, but those can be filtered out by doing a full comparison for each potential match). However, this is pretty fiddly with respect to parameters, and if people tend to answer only some of the questions, I suspect it won't perform that well in practice. | {
"domain": "cs.stackexchange",
"id": 15431,
"tags": "algorithms, complexity-theory, approximation"
} |
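The random-subset bucketing scheme described in the answer above can be sketched in a few lines of Python (illustrative only: the function names and parameter values are my own, and as the answer warns, real parameter choices need tuning):

```python
import random
from collections import defaultdict

def build_lsh_tables(answers, num_tables=20, subset_size=5, seed=0):
    """answers: {user: {question_id: bool}}.
    Each table hashes users by their answers to one random question subset."""
    rng = random.Random(seed)
    questions = sorted({q for a in answers.values() for q in a})
    tables = []
    for _ in range(num_tables):
        subset = rng.sample(questions, min(subset_size, len(questions)))
        buckets = defaultdict(list)
        for user, ans in answers.items():
            key = tuple(ans.get(q) for q in subset)  # None marks "unanswered"
            buckets[key].append(user)
        tables.append(buckets)
    return tables

def candidates(tables, user):
    """Users sharing a bucket with `user` in any table: potential top matches."""
    out = set()
    for buckets in tables:
        for members in buckets.values():
            if user in members:
                out.update(members)
    out.discard(user)
    return out

answers = {
    "alice": {1: True, 2: False, 3: True},
    "bob":   {1: True, 2: False, 3: True},
    "carol": {1: False, 2: True, 3: False},
}
tables = build_lsh_tables(answers, num_tables=4, subset_size=3)
print(candidates(tables, "alice"))  # alice and bob answered identically
```

Candidates found this way still need a full pairwise comparison to compute exact match percentages and filter out extraneous collisions, as the answer notes.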
How can the class of tail recursive functions be compared to the classes of PR and R? | Question: How can the class of tail recursive functions (TR) be compared to the classes of primitive recursive functions (PR) and recursive functions (R)?
The computation of a PR function always halts. This does not apply to TR functions.
Given a tail recursive function $f$:
$$f(x) = f(x)$$
The function $f$ obviously will result in an endless recursion and therefore $TR \not\subseteq PR $.
Also a PR function has a finite time and space complexity, where a TR function can have a finite space and infinite time complexity.
For this reason I assume:
$$PR \subset TR \subset R $$
Is this assumption correct?
Is the Ackermann function $\in TR$?
Is there a well known function which is in $R$, but not in $TR$?
Answer: Every computable function can be expressed in continuation-passing-style, in which all calls are tail-calls.
The trick is to add a "continuation" parameter to every function. Instead of making a non-tail-call to a function, you make a tail call to that function with a modified continuation, describing what to do with the result. All instances where a value is directly returned (such as recursion base cases) are replaced by calling the continuation in tail-position with the result as an argument.
Transformation from the lambda calculus into CPS can be done mechanically. So, $TR=R$.
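For a concrete illustration (a sketch added here, not from the original answer), here is factorial written in continuation-passing style in Python; every recursive call is a tail call, and the continuation `k` records what remains to be done with the result:

```python
def fact_cps(n, k=lambda x: x):
    """Factorial in CPS: all calls are in tail position."""
    if n == 0:
        return k(1)                             # base case: hand 1 to the continuation
    return fact_cps(n - 1, lambda r: k(n * r))  # tail call with an extended continuation

print(fact_cps(5))  # 120
```

The non-tail multiplication that a direct recursive factorial would perform after the recursive call has been folded into the continuation, which is exactly the mechanical transformation described above.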
EDIT: addressing the comments about higher-order functions:
Tail-calls are almost always discussed in the context of higher order functions and the lambda calculus. So the problem is, what precisely is our definition of $TR$?
You can certainly add a wrapper around a CPS function to give it the type $\mathbb{N}^n \to \mathbb{N}$, by giving it an initial continuation of $\lambda k \ldotp k$. If higher-order functions are allowed internally, then the result that $TR=R$ still holds.
If higher order functions aren't allowed internally, what is our definition of $TR$? If we define it in the same way as $PR$, then it is going to only contain primitive recursive problems by definition (since it's just the restriction of $PR$ to tail-recursion). If we add $\mu$ for infinite search, I think we're just going to get $R$, since we can encode higher-order functions using integers. So, I'm not sure there's a meaningful question to be asked in the non-higher-order case.
EDIT 2:
As for the class of first-order functions that only allow tail recursion, with Constant, Successor, Projection and Composition functions, and extension by tail recursion:
h(x1 ... xn) =
if c(x1 ... xn) = 0 then
h(g1(x1), ..., gn(xn))
else
f(x1, ..., xn)
where $c$, $g_i$ and $f$ are all tail-recursive functions, I think we can prove that it's Turing Complete, by solving Post's Correspondence Problem, which is undecidable but semi-decidable:
Assume that we've got nice functions for dealing with strings encoded as integers, with concatenation, etc.
Let $pcpInst(k, n)$ be a function which takes an integer $k$ and returns the $k$th string over the alphabet $\{1, \ldots, n \}$.
Let $c(k, x_1, \ldots, x_n)$ be a function, where $k$ is an integer, and each $x_i$ is a pair containing two strings over a binary alphabet. This function does the following:
Computes $k_1 \cdots k_p = pcpInst(k,n)$, the $k$th possible PCP solution indices.
Constructs $s_1=\pi_1(x_{k_1}) \cdots \pi_1(x_{k_p})$. This is the string we get by concatenating the first strings of the arguments indexed by our $k_i$ sequence. We define $s_2$ with $\pi_2$ similarly.
Return $0$ if $s_1 \neq s_2$, return $1$ otherwise.
Now, we'll define our function to solve a PCP instance with $n$ strings:
$h(k, x_1, \ldots, x_n) = h(S(k), x_1, \ldots, x_n)$ if $c(k, x_1, \ldots, x_n) = 0 $
$h(k, x_1, \ldots, x_n) = 0$ otherwise
Now we define $h'(x_1, \ldots, x_n) = h(0, x_1, \ldots, x_n)$.
It is clear to see that $h'(x_1, \ldots, x_n)$ returns 0 if and only if there is a solution to the correspondence problem defined by pairs of strings $x_1, \ldots, x_n$. If there is a solution, we eventually iterate to it by increasing $k$, and return $0$ when our $c$ function returns 1. If there is no solution, we never return.
The trick here is ensuring that $c$ is itself tail-recursive. I am fairly confident that it is, but that proving so would be tedious. But since it is performing simple string manipulations and equality checks, I would be very surprised if it is not tail recursive. | {
"domain": "cs.stackexchange",
"id": 5700,
"tags": "computability, recursion, primitive-recursion, tail-recursion"
} |
Is it possible to blur an image in such a way that a person with sight problems could see it sharp? | Question: If someone has short or long sight, is it possible to tune an image on a computer monitor in such a way that the person could see it sharp, as if they were wearing glasses? If not, would a 3D monitor make it possible?
Answer: Let's take a simple original picture to look at - just two nearby dots on a white background. If you have bad vision, the dots look blurred.
The way good vision works is to ensure that all the light hitting any particular small area of your retina comes from the same direction in front of you. Conversely, all the light coming from one direction hits one specific spot on your retina.
When you have bad vision, the light from a locus of nearby directions all hits on the same part of your retina, and the light from a particular direction is smeared out over an area on your retina. Hence, blurred vision is an averaging effect. When you look at the dots, you'll see them smear out into each other.
You might try to compensate for this by making a "counter-blurred" image where the source dots are smaller, but if the original dots are close enough that light from the center of one dot is spilling over to overlap light from the center of the second dot, making the dots smaller won't fix that problem. Hence, the dots will always appear blurred. You can't create the impression that the original has for someone with good vision.
A photograph is really just a bunch of nearby dots, and so the same problem applies.
I don't know about the 3D monitor, though. I suppose if it can control the direction of light coming off it, it could be modified to focus the light some and create a sharp image for someone with blurred vision. | {
"domain": "physics.stackexchange",
"id": 24426,
"tags": "optics, vision"
} |
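The "averaging" argument in the answer above can be made concrete with a one-dimensional toy model (added here for illustration; the numbers are arbitrary): convolving two nearby dots with a blur kernel fills in the gap between them, and once the spreads overlap, the gap never goes back to zero.

```python
import numpy as np

# 1-D toy model: two nearby "dots" and a box-blur kernel (a crude defocused eye)
signal = np.zeros(40)
signal[18] = signal[22] = 1.0   # two dots, 4 samples apart
kernel = np.ones(9) / 9.0       # blur spread of 9 samples, wider than the gap

blurred = np.convolve(signal, kernel, mode="same")

# Between the dots the blurred intensity stays at the peak level:
# the two dots have merged into one indistinguishable blob.
assert blurred[18:23].min() > 0.4 * blurred.max()
```

Since light from the center of one dot lands on the same retinal region as light from the other, no rearrangement of the source intensities (the "counter-blurred" image) can restore a dark gap, which is the answer's point.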
What is the status of stacks versus packages versus metapackages? | Question:
I'm wondering what the status and recommendations are regarding stacks versus packages versus metapackages.
In the Groovy documentation it says that "the concept of stacks has been removed", per REP 127.
http://wiki.ros.org/groovy#Removal_of_Stacks
http://ros.org/reps/rep-0127.html
However everything else on ros.org seems not to have noticed (e.g. http://wiki.ros.org/ROS/Concepts).
Is there an updated "ROS concepts" page somewhere I'm not aware of? Has the decision been changed? Is the documentation simply having trouble keeping up?
Originally posted by leblanc_kevin on ROS Answers with karma: 357 on 2013-09-05
Post score: 1
Answer:
The Concepts page has not been updated as it's still correct for older versions and stacks are being slowly phased out. Until they are completely phased out we need to keep the documentation. Updates to note the phaseout would be helpful for completeness.
Metapackages are a specific type of package that has no content, just dependencies. This replaces the concept of the stack but connects much more closely to the metapackage concept in other package managers.
Originally posted by tfoote with karma: 58457 on 2013-09-05
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by leblanc_kevin on 2013-09-06:
Thanks for the clarification. I definitely think it would be a good idea to highlight the fact that stacks are (or will soon be) deprecated, especially in the introductory pages and tutorials. Not only for completeness, but to avoid new stacks being created. | {
"domain": "robotics.stackexchange",
"id": 15432,
"tags": "ros, stacks, packages, release"
} |
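As an illustration (a sketch added here, not from the original thread; the package and maintainer names are made up), a catkin metapackage of the REP 127 era is an ordinary `package.xml` whose only special content is its dependencies plus the `<metapackage/>` export tag:

```xml
<?xml version="1.0"?>
<package>
  <name>my_stack</name>
  <version>0.1.0</version>
  <description>Metapackage grouping the former my_stack stack</description>
  <maintainer email="dev@example.com">Example Maintainer</maintainer>
  <license>BSD</license>
  <buildtool_depend>catkin</buildtool_depend>
  <!-- run_depend entries list the packages the old stack contained -->
  <run_depend>some_package_a</run_depend>
  <run_depend>some_package_b</run_depend>
  <export>
    <metapackage/>
  </export>
</package>
```

This matches the "no content, just dependencies" description: installing the metapackage pulls in its `run_depend` packages, exactly like a metapackage in other package managers.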
How can I find a solution to the BCS gap equation around the critical point $T_c$? | Question: The BCS gap equation is $$1=gn\int_0^{\Delta\epsilon} d\epsilon \frac{1}{\sqrt{\epsilon^2+\Delta^2}}\tanh\frac{\sqrt{\epsilon^2+\Delta^2}}{2kT}.$$ At the critical point we have $\Delta=0$, so we can calculate $T_c$. Now I want to investigate the relation $$\frac{\Delta(T)}{\Delta(0)}=1.73\left(1-\frac{T}{T_c}\right)^{\frac{1}{2}}.$$ But I can't find a good approximation that leads to this answer, and I didn't find a solution on the net. Can anyone help me?
Answer: The notation is different, but the key points seem clear. Let
$$\frac{1}{V_0\rho_F}=\int_{0}^{\omega_D}d\epsilon\frac{\tanh(\lambda(\epsilon)/2T)}{\lambda(\epsilon)},\quad \lambda(\epsilon)=\sqrt{\Delta(T)^2+\epsilon^2}.$$
Let $T=T_c-\Delta T$ and we expand near $T_c$ w.r.t. $\Delta T$ and $\Delta$,
$$1\approx V_0\rho_F\int_{0}^{\omega_D}d\xi\frac{\tanh(\xi/2T)}{\xi}+\frac{V_0\rho_F\Delta T}{2T_c^2}\int_{0}^{\omega_D}\frac{d\xi}{\cosh^2(\xi/2T_c)}+\\+\frac{V_0\rho_F\Delta^2}{4T_c}\int_{0}^{\omega_D}\frac{d\xi}{\xi^2}\left(\frac{1}{\cosh^2(\xi/2T_c)}-\frac{2T_c\tanh(\xi/2T_c)}{\xi}\right).$$
Let $\kappa=\xi/2T_c$ and then notice that $\omega_D/T_c\rightarrow\infty$, so
$$0\approx\frac{V_0\rho_F\Delta T}{T_c}\int_{0}^{\infty}\frac{d\kappa}{\cosh^2\kappa}+\frac{V_0\rho_F\Delta^2}{8T_c^2}\int_{0}^{\infty}\frac{d\kappa}{\kappa^2}\left(\frac{1}{\cosh^2\kappa}-\frac{\tanh\kappa}{\kappa}\right).$$
The last integral is
$$\int_{0}^{\infty}\frac{d\kappa}{\kappa^2}\left(\frac{1}{\cosh^2\kappa}-\frac{\tanh\kappa}{\kappa}\right)=-\frac{7\zeta(3)}{\pi^2}.$$
Finally, we find
$$\frac{7\zeta(3)}{\pi^2}\frac{V_0\rho_F\Delta^2}{8T_c^2}=\frac{V_0\rho_F\Delta T}{T_c}\rightarrow\boxed{\Delta^2=\frac{8\pi^2}{7\zeta(3)}T_c^2\left(1-\frac{T}{T_c}\right).}$$
For $\Delta(T=0)$ is known that
$$\frac{\Delta(0)}{T_c}=\pi e^{-\gamma_E},$$
where $\gamma_E\approx 0.577$ is the Euler–Mascheroni constant. For $\Delta(T)$ near $T_c$ we write
$$\Delta(T)=\left\{\frac{8\pi^2}{7\zeta(3)}\right\}^{1/2}T_c\left(1-\frac{T}{T_c}\right)^{1/2}.$$
Dividing this expression by the expression for $\Delta(T=0)$, we find
$$\frac{\Delta(T)}{\Delta(0)}=\left\{\frac{8}{7\zeta(3)}\right\}^{1/2}\frac{e^{\gamma_E}T_c}{T_c}\left(1-\frac{T}{T_c}\right)^{1/2},$$
$$\frac{\Delta(T)}{\Delta(0)}\approx 1.737\left(1-\frac{T}{T_c}\right)^{1/2}$$
Appendix: computation of integral
To evaluate the integral, we use the Weierstrass factorization theorem,
$$\cosh(x)=\prod_{n\geq 0}\left(1+\frac{4x^2}{\pi^2(2n+1)^2}\right),$$
and then take twice logarithmic derivative, $\partial_x(\ln(\cdot))$, $\partial_x^2(\ln(\cdot))$, which gives
$$\frac{\tanh x}{x}=8\sum_{n\geq 0}\frac{1}{(2n+1)^2\pi^2+4x^2},\quad \frac{1}{\cosh^2x}=\sum_{n\geq 0}\frac{8[(2n+1)^2\pi^2-4x^2]}{[(2n+1)^2\pi^2+4x^2]^2}.$$
Considering all the above, we obtain
$$\int_{0}^{\infty}\frac{d\kappa}{\kappa^2}\left(\frac{1}{\cosh^2\kappa}-\frac{\tanh\kappa}{\kappa}\right)=-\sum_{n\geq 0}\frac{8}{\pi^2(2n+1)^3}=\\=-\frac{8}{\pi^2}\left(\zeta(3)-\frac{1}{8}\zeta(3)\right)=-\frac{7\zeta(3)}{\pi^2}.$$
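The closed forms above can be cross-checked numerically (a sketch added here, not part of the original answer) with mpmath:

```python
from mpmath import mp, quad, inf, cosh, tanh, sqrt, exp, pi, zeta, euler

mp.dps = 40  # extra digits: the integrand suffers cancellation near kappa = 0

# The appendix integral should equal -7 zeta(3) / pi^2
integrand = lambda k: (1 / cosh(k) ** 2 - tanh(k) / k) / k ** 2
val = quad(integrand, [0, inf])
assert abs(val + 7 * zeta(3) / pi ** 2) < 1e-8

# The coefficient in Delta(T)/Delta(0): sqrt(8/(7 zeta(3))) * e^gamma_E ~ 1.74
coeff = sqrt(8 / (7 * zeta(3))) * exp(euler)
assert abs(coeff - mp.mpf("1.7366")) < 1e-3
```

Both assertions pass, confirming the $-7\zeta(3)/\pi^2$ value and the $\approx 1.74$ prefactor quoted in the question.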
Appendix: relation between gap at $T=0$ and $T_c$
From gap equation, we see that with $T\rightarrow 0$, the gap $\Delta\rightarrow\Delta(T=0)\equiv \Delta_0$, so
$$1=V_0\rho_F\int_{0}^{\omega_D}\frac{d\epsilon}{\sqrt{\epsilon^2+\Delta_0^2}}.$$
Performing the integration, we find
$$\frac{1}{V_0\rho_F}=\operatorname{arcsinh}\left(\frac{\omega_D}{\Delta_0}\right)\approx\ln\frac{2\omega_D}{\Delta_0}\rightarrow \boxed{\Delta_0=2\omega_D\exp\left\{-\frac{1}{V_0\rho_F}\right\}}.$$
Now let $T\rightarrow T_c$, so $\Delta\rightarrow 0$, then
$$1=V_0\rho_F\int_{0}^{\omega_D}\frac{d\epsilon}{\epsilon}\tanh\left\{\frac{\beta_c\epsilon}{2}\right\},$$
$$1=V_0\rho_F\int_{0}^{\beta_c\omega_D/2}\frac{dx\,\tanh x}{x}.$$
Integrating by parts, we obtain
$$\int_{0}^{\beta_c\omega_D/2}\frac{dx\,\tanh x}{x}=\left.\tanh x\ln x\right|_{0}^{\beta_c\omega_D/2}-\int_{0}^{\infty}\frac{dx\,\ln x}{\cosh^2x},$$
where in the last integral we assumed $\beta_c\omega_D/2\sim\infty$. The answer for this integral is
$$\int_{0}^{\infty}\frac{dx\,\ln x}{\cosh^2x}=-\gamma_E+\ln\frac{\pi}{4}$$
and this can be verified by differentiation w.r.t. a parameter. So, we have
$$\int_{0}^{\beta_c\omega_D/2}\frac{dx\,\tanh x}{x}=\ln\frac{\beta_c\omega_D}{2}-\ln\frac{\pi}{4e^{\gamma_E}}=\ln\frac{2e^{\gamma_E}\omega_D}{\pi T_c},$$
which gives us
$$\boxed{T_c=\frac{2e^{\gamma_E}\omega_D}{\pi}\exp\left\{-\frac{1}{V_0\rho_F}\right\}.}$$
Comparing expression for $\Delta_0$ and $T_c$, we find that
$$\boxed{\frac{\Delta_0}{T_c}=\pi e^{-\gamma_E}\approx 1.76.}$$
References:
A. Altland & B. Simons, "Condensed Matter Field Theory", Ch. 7, problems for ch. 7 | {
"domain": "physics.stackexchange",
"id": 98979,
"tags": "homework-and-exercises, phase-transition, superconductivity, approximations"
} |
The Bucketizer Script | Question: This code is written in an SSIS Script Component that basically accomplishes what I previously had as a T-SQL script, which was reviewed here.
I need to split an 80-character string that contains 20 buckets of 4 characters each, without any delimiter, and generate a new row for each bucket. I'm using a script component to achieve this in SSIS - here's the code:
using System;
using System.Linq;
using System.Text.RegularExpressions;
using System.Data;
using Microsoft.SqlServer.Dts.Pipeline;
using Microsoft.SqlServer.Dts.Pipeline.Wrapper;
using Microsoft.SqlServer.Dts.Runtime.Wrapper;
[SSISScriptComponentEntryPoint]
public class ScriptMain : UserComponent
{
/// <summary>
/// This code splits a row's [Sizes] column into buckets of 4 characters,
/// and adds a new row to the output buffer for each non-empty bucket found.
/// </summary>
/// <param name="Row">The row that is currently passing through the component</param>
public override void Input0_ProcessInputRow(Input0Buffer Row)
{
var buckets = Regex.Match(Row.Sizes, "^(?<bucket>.{4})+$")
.Groups["bucket"].Captures
.Cast<Capture>()
.Select(capture => capture.Value)
.ToArray();
try
{
for (int index = 0; index < buckets.Length; index++)
{
var bucket = buckets[index].Trim();
if (bucket.Contains(' '))
{
// size codes don't contain whitespaces; something went really wrong if that's the case:
throw new FormatException("Size code '" + bucket +"' contains a whitespace. This is interpreted as a parse error.");
}
if (string.IsNullOrEmpty(bucket))
{
// don't output a row for empty buckets
continue;
}
OutputSizesBuffer.AddRow();
OutputSizesBuffer.ETLStart = Row.ETLStart;
OutputSizesBuffer.SourceId = Row.SourceId;
OutputSizesBuffer.SourceTable = Row.SourceTable;
OutputSizesBuffer.SizeRangeId = Row.SizeRangeId;
OutputSizesBuffer.SizeRangeIndex = index + 1; // make it 1-based.
OutputSizesBuffer.Code = bucket;
}
}
catch (Exception exception)
{
var cancel = false;
ComponentMetaData.FireError(0, "Script Component", exception.ToString(), string.Empty, 0, out cancel);
}
}
}
I would like this code to be reviewed from a performance perspective, because I will be needing very similar script components soon, to process hundreds of thousands of rows; this specific instance works well, but only processes the "master data", which is only a dozen rows or so: if there's a performance issue with this code, it's only later that I'm going to find out - I'd rather get it peer reviewed now and make adjustments before I use similar code to process many, many rows.
Is the error/exception handling overkill? Is the regex pattern self-documenting? (wait, is that even possible?)
Answer: The regex silently fails on inputs whose length is not a multiple of 4; i.e. no results are returned. Things that fail silently are incredibly hard to debug. It's easy to imagine trying to figure out why no rows are being inserted for the input
jvqbjqbxvvlxlcwkuivasoeljaxamrthcecbudnxvqdboxroudfvrwoureqisaldcduhauoqtxdclcw
until you realise that it only has 79 characters.
You would want to measure the performance, but I would suggest not using a regex and writing a simple method, like so
private static IEnumerable<string> ToBuckets(string input)
{
const int Length = 4;
for (var i = 0; i < input.Length; i += Length)
{
yield return input.Substring(i, Length);
}
}
We should add input validation to make it easier to find out exactly what went wrong.
if (input == null)
{
throw new ArgumentNullException("input");
}
if (input.Length % Length != 0)
{
throw new ArgumentException(string.Format("Argument length must be a multiple of {0}", Length));
}
I made a microbenchmark comparing the two methods that you can find here. For 100,000 calls, the regex solution takes ~2.5s compared to ~0.2s for this version.
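As an aside (my own Python analogue, not the linked C# benchmark): .NET's repeated-capture groups have no counterpart in Python's `re`, which is one more argument for plain slicing, and the regex-vs-slicing comparison is easy to reproduce:

```python
import re
import timeit

def buckets_regex(s, n=4):
    # Python's re keeps only the last repeated capture, so findall is the analogue;
    # note it silently drops a trailing partial chunk - the same failure mode as above.
    return re.findall(".{%d}" % n, s)

def buckets_slice(s, n=4):
    if len(s) % n != 0:
        raise ValueError("length must be a multiple of %d" % n)
    return [s[i:i + n] for i in range(0, len(s), n)]

s = "abcd" * 20  # 80 characters = 20 buckets
assert buckets_regex(s) == buckets_slice(s)

# Slicing wins comfortably here too (exact numbers are machine-dependent):
print(timeit.timeit(lambda: buckets_regex(s), number=50_000))
print(timeit.timeit(lambda: buckets_slice(s), number=50_000))
```

The validating slice version also fails loudly on a 79-character input instead of silently returning nothing, which addresses the debugging concern raised above.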
It looks like you're using exceptions for control flow with the FormatException. I would suggest making the try-catch more specific, both in the code that is executed within the try, and in the exceptions that you're catching.
If we split out the OutputSizesBuffer parts to its own method, we end up with something like this, which I find a bit cleaner (illustrative code only)
var buckets = ToBuckets(Row.Sizes).Select(bucket => bucket.Trim()).ToArray();
for (int index = 0; index < buckets.Length; index++)
{
var bucket = buckets[index];
// Don't output a row for empty buckets.
if (bucket.Length == 0)
{
continue;
}
if (bucket.Contains(' '))
{
// Size codes don't contain whitespace; something went really wrong if that's the case.
FireError(string.Format(Resources.BucketContainsWhiteSpace, bucket));
return;
}
try
{
AddRow(index + 1, bucket);
}
catch (SomeSpecificException e)
{
FireError(e.Message);
return;
}
} | {
"domain": "codereview.stackexchange",
"id": 10826,
"tags": "c#, performance, parsing, regex, ssis"
} |
Simple iPhone Notes app | Question: I'm a beginner developer, learning online (mostly from the Apple documentation guides), and a couple of days ago I started a notes app project. It's the first app I'm building without a guide.
It's very simple so I'd love to post it here and get your feedback on how to improve my code to be more efficient.
The app has two view controllers:
NotesListViewController
CreateNotesViewController
I use Core Data, so I created an entity called Note that has one string attribute called content; after that I created the model class from the entity section via Editor/Create NSManagedObject Subclass.
So these are my classes:
NotesListViewController.h:
#import <UIKit/UIKit.h>
@interface NotesListViewController : UITableViewController
- (IBAction) unwindToList:(UIStoryboardSegue *)segue;
@end
NotesListViewController.m:
#import "NotesListViewController.h"
#import "Note.h"
#import "CreateNotesViewController.h"
@interface NotesListViewController ()
@property (nonatomic, strong) NSMutableArray *notes;
@property (nonatomic) NSInteger editedRow;
@end
@implementation NotesListViewController
- (NSManagedObjectContext *) managedObjectContext
{
NSManagedObjectContext *context = nil;
id delegate = [[UIApplication sharedApplication] delegate];
if ([delegate performSelector:@selector(managedObjectContext)]){
context = [delegate managedObjectContext];
}
return context;
}
- (void) prepareForSegue:(UIStoryboardSegue *)segue sender:(id)sender
{
if ([[segue identifier] isEqualToString:@"editSegue"]) {
CreateNotesViewController *destination = (CreateNotesViewController *)[segue destinationViewController];
NSInteger indx = [self.tableView indexPathForCell:sender].row;
Note *noteToPass = self.notes[indx];
destination.note = noteToPass;
self.editedRow = indx;
NSManagedObject *selectedNote = [self.notes objectAtIndex:[self.tableView indexPathForSelectedRow].row];
destination.editNote = selectedNote;
} else if ([[segue identifier] isEqualToString:@"addSegue"]) {
self.editedRow = -1;
}
}
- (IBAction)unwindToList:(UIStoryboardSegue *)segue
{
CreateNotesViewController *source = (CreateNotesViewController *)[segue sourceViewController];
Note *recieivedNote = source.note;
if (recieivedNote != nil && self.editedRow == -1) {
[self.notes addObject:recieivedNote];
} else if (recieivedNote != nil)
{
[self.notes replaceObjectAtIndex:self.editedRow withObject:recieivedNote];
}
[self.tableView reloadData];
}
- (id)initWithStyle:(UITableViewStyle)style
{
self = [super initWithStyle:style];
if (self) {
// Custom initialization
}
return self;
}
- (void)viewDidLoad
{
[super viewDidLoad];
self.notes = [[NSMutableArray alloc] init];
}
- (void) viewWillAppear:(BOOL)animated
{
[super viewWillAppear:animated];
NSManagedObjectContext *managedObjectContext = [self managedObjectContext];
NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] initWithEntityName:@"Note"];
self.notes = [[managedObjectContext executeFetchRequest:fetchRequest error:nil] mutableCopy];
[self.tableView reloadData];
}
- (void)didReceiveMemoryWarning
{
[super didReceiveMemoryWarning];
// Dispose of any resources that can be recreated.
}
#pragma mark - Table view data source
- (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView
{
return 1;
}
- (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section
{
// Return the number of rows in the section.
return self.notes.count;
}
- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath
{
static NSString *CellIdentifier = @"Cell";
UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier forIndexPath:indexPath];
// Configure the cell...
NSManagedObject *noteToDisplay = [self.notes objectAtIndex:indexPath.row];
[cell.textLabel setText:[noteToDisplay valueForKey:@"content"]];
return cell;
}
- (BOOL) tableView:(UITableView *)tableView canEditRowAtIndexPath:(NSIndexPath *)indexPath
{
return YES;
}
- (void)tableView:(UITableView *)tableView commitEditingStyle:(UITableViewCellEditingStyle)editingStyle forRowAtIndexPath:(NSIndexPath *)indexPath
{
NSManagedObjectContext *context = [self managedObjectContext];
if (editingStyle == UITableViewCellEditingStyleDelete)
{
[context deleteObject:[self.notes objectAtIndex:indexPath.row]];
NSError *error = nil;
if ([context save:&error]) {
NSLog(@"Can't Delete! %@ %@", error, [error localizedDescription]);
}
[self.notes removeObjectAtIndex:indexPath.row];
[self.tableView deleteRowsAtIndexPaths:[NSArray arrayWithObject:indexPath] withRowAnimation:UITableViewRowAnimationLeft];
}
}
@end
CreateNotesViewController.h:
#import <UIKit/UIKit.h>
#import "Note.h"
@interface CreateNotesViewController : UIViewController
@property (nonatomic, strong) Note *note;
@property (strong) NSManagedObject *editNote;
@end
CreateNotesViewController.m:
#import "CreateNotesViewController.h"
@interface CreateNotesViewController ()
@property (weak, nonatomic) IBOutlet UIBarButtonItem *saveButton;
@property (weak, nonatomic) IBOutlet UITextView *myTextView;
@end
@implementation CreateNotesViewController
- (NSManagedObjectContext *) managedObjectContext
{
NSManagedObjectContext *context = nil;
id delegate = [[UIApplication sharedApplication] delegate];
if ([delegate performSelector:@selector(managedObjectContext)]){
context = [delegate managedObjectContext];
}
return context;
}
- (void) prepareForSegue:(UIStoryboardSegue *)segue sender:(id)sender
{
if (sender != self.saveButton) return;
if (self.myTextView.text.length > 0) {
self.note.content = self.myTextView.text;
NSManagedObjectContext *context = [self managedObjectContext];
if (self.editNote) {
[self.editNote setValue:self.myTextView.text forKey:@"content"];
} else {
// creating a new managed object
NSManagedObject *newNote = [NSEntityDescription insertNewObjectForEntityForName:@"Note" inManagedObjectContext:context];
[newNote setValue:self.myTextView.text forKey:@"content"];
}
NSError *error = nil;
if (![context save:&error]) {
NSLog(@"Can't Save! %@ %@", error, [error localizedDescription]);
}
else
{
NSLog(@"Saved! %@ %@", error, self.myTextView.text);
}
}
}
- (void)viewWillAppear:(BOOL)animated
{
[super viewWillAppear:animated];
self.myTextView.text = self.note.content;
// listen for keyboard hide/show notifications so we can properly adjust the table's height
[[NSNotificationCenter defaultCenter] addObserver:self
selector:@selector(keyboardWillShow:)
name:UIKeyboardWillShowNotification
object:nil];
[[NSNotificationCenter defaultCenter] addObserver:self
selector:@selector(keyboardWillHide:)
name:UIKeyboardWillHideNotification
object:nil];
}
- (void)adjustViewForKeyboardReveal:(BOOL)showKeyboard notificationInfo:(NSDictionary *)notificationInfo
{
// the keyboard is showing so adjust the text view's height
CGRect keyboardRect = [[notificationInfo objectForKey:UIKeyboardFrameEndUserInfoKey] CGRectValue];
NSTimeInterval animationDuration =
[[notificationInfo objectForKey:UIKeyboardAnimationDurationUserInfoKey] doubleValue];
CGRect frame = self.myTextView.frame;
// the keyboard rect's width and height are reversed in landscape
NSInteger adjustDelta = UIInterfaceOrientationIsPortrait(self.interfaceOrientation) ? CGRectGetHeight(keyboardRect) : CGRectGetWidth(keyboardRect);
if (showKeyboard)
frame.size.height -= adjustDelta;
else
frame.size.height += adjustDelta;
[UIView beginAnimations:@"ResizeForKeyboard" context:nil];
[UIView setAnimationDuration:animationDuration];
self.myTextView.frame = frame;
[UIView commitAnimations];
}
- (void)keyboardWillShow:(NSNotification *)aNotification
{
[self adjustViewForKeyboardReveal:YES notificationInfo:[aNotification userInfo]];
}
- (void)keyboardWillHide:(NSNotification *)aNotification
{
[self adjustViewForKeyboardReveal:NO notificationInfo:[aNotification userInfo]];
}
- (id)initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil
{
self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil];
if (self) {
// Custom initialization
}
return self;
}
- (void)viewDidLoad
{
[super viewDidLoad];
// Do any additional setup after loading the view.
}
- (void)didReceiveMemoryWarning
{
[super didReceiveMemoryWarning];
// Dispose of any resources that can be recreated.
}
@end
Note.h:
#import <Foundation/Foundation.h>
#import <CoreData/CoreData.h>
@interface Note : NSManagedObject
@property (nonatomic, retain) NSString * content;
@end
In the Note.m file I didn't add anything:
#import "Note.h"
@implementation Note
@dynamic content;
@end
And I use storyboard to create my interface that looks like this:
So it's very basic, you can only:
create a note (saved using core data)
view your notes list
edit a note (updated using core data)
delete a note (deleted using core data)
I really enjoy programming in Objective-C and want to get better and better, so if you could be kind enough to give me your feedback on the code that would be awesome.
Answer: Look into the NSFetchedResultsController to power your table view. It's something specifically designed to be used with UITableViews that are powered by Core Data.
In essence, it uses a fetch request and returns how many objects are in each section, how many sections there are, what header/footer information they have etc. It has the added benefit that if the underlying managed object context changes (for example when a new object is added, updated or deleted) the fetched results controller is updated automatically. When that happens you can send your table view a reloadData message so it's updated as if by magic. An NSFetchedResultsController adds another level of complexity to Core Data, but since you're already confidently using the framework my guess is you'll like it ;-)
As a side note: look into Protocols. UIKit uses them extensively, they can be very useful and they're easy to create. With Protocols you can put one controller in charge of presenting and dismissing another view controller without having to rely on an unwind segue. The benefit is that one controller can hold on to a managed object, it gets changed in another view controller, and when that's dismissed the originating controller can use the changed object (to save or evaluate it).
I've got a full working project on GitHub which illustrates both the NSFetchedResultsController and uses a Protocol to dismiss the Detail View. It's similar to what you're building (a CRUD Project - as in Create, Read, Update, Delete):
https://github.com/versluis/Master-Detail-CRUD
Then of course there's adding a search feature to your table view... but that's for another time. When you're ready, I have a demo project for that too: https://github.com/versluis/TableSearch2014
Let me know if it helps, and have fun with Objective C!
PS: Kudos for sharing, and for not starting from a template. Well done! | {
"domain": "codereview.stackexchange",
"id": 7196,
"tags": "beginner, objective-c, ios"
} |
Simulate Dust Layer on PV Cell? | Question: Is it possible to simulate a layer of dust deposited on top of a PV cell in a controlled manner?
One potential method I had was to put a given weight of a particulate/powder (chalk powder, talcum powder, etc.) into water, and pour the water on top of the solar panel, letting it dry and thus leaving the powder on top of the panel. Would this be viable, and if so, what would be the best powder to use?
Answer: A PV cell's system efficiency will vary exactly linearly with the amount of light blocked by the dust. For the particle sizes and wavelengths involved, refraction and multiple scattering effects are minimal. What this leaves you with is a way to "map" dust collection to the equivalent uniform, neutral density filter as SolarMike implied in his comment.
I would recommend setting up two cells, one as a clean reference and the other with arbitrary dust levels. Make a few measurements and generate a graph of, say, power loss vs. total volume of powder deposited. You may want to repeat a few times to guarantee a more or less uniform distribution of powder over the active area.
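To make that mapping concrete, here is a toy model (my own sketch, not part of the answer above): it assumes Beer-Lambert-style attenuation, where each additional unit of areal powder density blocks a fixed fraction of the remaining light, and the coefficient `k` is a hypothetical value you would fit from your clean-vs-dusty reference measurements:

```python
import math

def relative_power(mass_per_area, k=0.8):
    """Fraction of clean-cell power remaining under a given dust load.

    Toy Beer-Lambert-style model: transmitted light decays exponentially
    with areal powder density. k is a HYPOTHETICAL attenuation coefficient
    (area per unit mass) to be fitted from the reference-cell data.
    Power loss is taken as exactly linear in blocked light, as argued above.
    """
    return math.exp(-k * mass_per_area)

# Tabulate the expected power loss for a few powder loads
for m in [0.0, 0.5, 1.0, 2.0]:
    print(f"{m:.1f} g/m^2 -> {100 * (1 - relative_power(m)):5.1f}% power loss")
```

With a fitted `k`, the same curve predicts the equivalent neutral-density-filter strength for any intermediate powder load.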
"domain": "engineering.stackexchange",
"id": 1714,
"tags": "simulation, experimental-physics, photovoltaics"
} |
Student Showdown - Q and A battle game, part 1: The Champions | Question: Just a self challenge/fun thing.
I have a scene where the player picks a character, of course it works but I'm repeating myself an awful lot. I would appreciate any formatting suggestions and making it more streamlined in general.
Champion ranger = new Champion.Generator(
"Legolas' Lover", // name
new Image("/Resources/Ranger.gif"), // icon
new Image("/Resources/Ranger.jpg")) // portrait
.attack(4)
.armor(2)
.health(8)
.description("Has less health than most, but decent armor, and strong attacks!")
.generate();
Champion dragon = new Champion.Generator(
"Scion of Daenerys", // name
new Image("/Resources/Dragon.gif"), // icon
new Image("/Resources/Dragon.jpg")) // portrait
.attack(4)
.armor(2)
.health(8)
.description("Is a motherloving dragon")
.generate();
Champion angel = new Champion.Generator(
"Full Metal Bitch", // name
new Image("/Resources/Angel.gif"), // icon
new Image("/Resources/Angel.jpg")) // portrait
.attack(4)
.armor(2)
.health(8)
.description("Really likes rock music, tons of defense")
.generate();
Champion spirit = new Champion.Generator(
"Forlorn Phantasm", // name
new Image("/Resources/Spirit.gif"), // icon
new Image("/Resources/Spirit.jpg")) // portrait
.attack(4)
.armor(2)
.health(8)
.description("So much health, find yourself spirited away~")
.generate();
Champion knight = new Champion.Generator(
"Dark Lancelot", // name
new Image("/Resources/Knight.gif"), // icon
new Image("/Resources/Knight.jpg")) // portrait
.attack(4)
.armor(2)
.health(8)
.description("He's flipping round tables!")
.generate();
Button rangerButton = new Button("Choose the Ranger.", new ImageView(ranger.portrait()));
rangerButton.setTooltip(new Tooltip(ranger.description()));
rangerButton.setContentDisplay(ContentDisplay.TOP);
Button dragonButton = new Button("Choose the Dragon.", new ImageView(dragon.portrait()));
dragonButton.setTooltip(new Tooltip(dragon.description()));
dragonButton.setContentDisplay(ContentDisplay.TOP);
Button angelButton = new Button("Choose the Angel.", new ImageView(angel.portrait()));
angelButton.setTooltip(new Tooltip(angel.description()));
angelButton.setContentDisplay(ContentDisplay.TOP);
Button spiritButton = new Button("Choose the Spirit.", new ImageView(spirit.portrait()));
spiritButton.setTooltip(new Tooltip(spirit.description()));
spiritButton.setContentDisplay(ContentDisplay.TOP);
Button knightButton = new Button("Choose the Knight.", new ImageView(knight.portrait()));
knightButton.setTooltip(new Tooltip(knight.description()));
knightButton.setContentDisplay(ContentDisplay.TOP);
VBox rangerLayout = new VBox(10);
rangerLayout.setAlignment(Pos.CENTER);
rangerLayout.getChildren().add(rangerButton);
VBox dragonLayout = new VBox(10);
dragonLayout.setAlignment(Pos.CENTER);
dragonLayout.getChildren().add(dragonButton);
VBox angelLayout = new VBox(10);
angelLayout.setAlignment(Pos.CENTER);
angelLayout.getChildren().add(angelButton);
VBox spiritLayout = new VBox(10);
spiritLayout.setAlignment(Pos.CENTER);
spiritLayout.getChildren().add(spiritButton);
VBox knightLayout = new VBox(10);
knightLayout.setAlignment(Pos.CENTER);
knightLayout.getChildren().add(knightButton);
HBox mainLayout = new HBox();
mainLayout.getChildren().addAll(
rangerLayout,
dragonLayout,
angelLayout,
spiritLayout,
knightLayout
);
stage.setScene(new Scene(mainLayout));
stage.show();
In case it helps, the Champion class:
public class Champion {
private String name;
private String description;
private Image icon;
private Image portrait;
private int armor;
private int attack;
private int health;
private int maxHealth;
public Image portrait() {
return portrait;
}
public Image icon() {
return icon;
}
public String name() {
return name;
}
public String description() {
return description;
}
Champion(Generator generator) {
name = generator.name;
description = generator.description;
portrait = generator.portrait;
icon = generator.icon;
attack = generator.attack;
armor = generator.armor;
health = generator.health;
maxHealth = generator.health;
}
static class Generator {
String name;
String description = "Please add a description for me Legato!";
Image icon;
Image portrait;
int attack = 4;
int armor = 0;
int health = 15;
Generator(String name, Image icon, Image portrait){
this.name = name;
this.icon = icon;
this.portrait = portrait;
}
public Generator attack(int val){ attack = val;return this; }
public Generator armor(int val){ armor = val; return this; }
public Generator health(int val){ health = val; return this; }
public Generator description(String text) { description = text; return this; }
public Champion generate(){ return new Champion(this); }
}
}
Answer: You can create a couple of functions to avoid duplications. You already abstracted a lot away by introducing the Champion class, so this is actually rather easy:
private Button getButton(Champion champion, String name) {
Button button = new Button("Choose the " + name + ".", new ImageView(champion.portrait()));
button.setTooltip(new Tooltip(champion.description()));
button.setContentDisplay(ContentDisplay.TOP);
return button;
}
private VBox getBox(Button button) {
VBox layout = new VBox(10);
layout.setAlignment(Pos.CENTER);
layout.getChildren().add(button);
return layout;
}
If you add the name string to the Champion class, you wouldn't need to pass it to the method.
Then use it like this:
HBox mainLayout = new HBox();
mainLayout.getChildren().addAll(
getBox(getButton(ranger, "Ranger")),
[...]
); | {
"domain": "codereview.stackexchange",
"id": 13268,
"tags": "java, javafx, battle-simulation"
} |
Hardy weinberg equilibrium and Wright Fisher model | Question: Is the hardy weinberg equilibrium derived by using a model similar to the Wright Fisher model, just without assuming genetic drift and finite pop size? Both seem to use the same assumptions except Wright Fisher takes into account finite pop size and genetic drift. What is the model used to derive Hardy-Weinberg equilibrium called?
Answer: Welcome to Biology.SE
When you talk about the 'Wright-Fisher model' I suppose you are referring to the 'Wright-Fisher model of genetic drift (WF)' involving the use of a binomial distribution.
You are correct that the two models make very similar assumptions. However, those assumptions are generally speaking found in a large set of different models, just because they often simplify the math. I think it is quite wise to consider these two models as very much independent.
Shared assumptions
Both the WF and the Hardy-Weinberg model (HW) assume
non-overlapping generation
panmictic population
absence of selection, migration and mutation
random mating
no sex-ratio bias
Of course, violations of some of these assumptions are relatively easy to model, but such complications are often not taught in intro classes on population genetics.
Wright-Fisher model derivation
Let $2N$ be the total number of haplotypes, where $N$ is the population size. I here just assumed a diploid population but the derivation for a haploid population is just as easy. Let $k$ be the number of occurrences of allele A, so that the frequency of the allele A is $p=\frac{k}{2N}$. Assuming sexual reproduction only, the number of occurrences $k'$ of the allele A in the next generation follows a binomial distribution:
$$P(k' = i) = {2N \choose i} p^i (1-p)^{2N-i}$$
Quick side note: Effective population size
The variance of this distribution is $var(k') = 2N p (1-p)$. The variance of the allele frequency in the next generation is therefore $var(p') = var(\frac{k'}{2N}) = \frac{1}{4N^2} var(k') = \frac{2N p (1-p)}{4N^2} = \frac{p(1-p)}{2N}$.
Replacing $N$ by $N_e$ and solving for $N_e$ yields
$$N_e = \frac{p(1-p)}{2 var(p')},$$
which is the definition of the effective population size $N_e$.
Wright-Fisher model assumptions
The WF model does not per se assume a finite population size; it is just that when the population size is infinite, the model loses all of its appeal!
Hardy-Weinberg model derivation
See here for a more complete discussion
The derivation of the Hardy-Weinberg genotype frequencies is simple enough to not rely on any pre-existing model. Let's consider the simple version of the HW model, the one with a bi-allelic locus. The two alleles A and B exist at frequencies $p$ and $1-p$ respectively. Imagine you have to draw an allele from this population. The probability of drawing a given allele is equal to its frequency. Now draw a second allele. Assuming the population is large enough, or assuming that selfing is allowed, the probability of drawing a given allele at the second draw is the same as at the first draw.
As such the probability of drawing the allele A twice is simply $p p = p^2$. Similarly the probability of drawing the allele B twice is simply $(1-p)(1-p)=(1-p)^2$. The probability of first drawing the A allele and then the B allele is $p(1-p)$ and the probability of first drawing the B allele and then the A allele is $(1-p)p$. If we don't make the discrimination between AB and BA genotypes and just call them all AB, then the frequency of the AB genotypes is $2p(1-p)$. Of course $p^2 + 2p(1-p) + (1-p)^2 = 1$.
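The two-draw argument above is easy to check by simulation (a sketch of mine, with arbitrary parameter choices):

```python
import random

rng = random.Random(1)
p, n = 0.6, 100_000

counts = {0: 0, 1: 0, 2: 0}
for _ in range(n):
    # Random mating from an effectively infinite gamete pool:
    # each offspring draws two alleles independently.
    copies_of_A = (rng.random() < p) + (rng.random() < p)
    counts[copies_of_A] += 1

freq_AA = counts[2] / n
freq_AB = counts[1] / n
freq_BB = counts[0] / n

print(freq_AA, freq_AB, freq_BB)          # observed genotype frequencies
print(p**2, 2 * p * (1 - p), (1 - p)**2)  # Hardy-Weinberg expectations
```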
Hardy-Weinberg model assumptions
It is common to say that HW assumes an infinite population size. This is correct if you make from HW the interpretation that the genotype frequencies are exactly at $p^2$, $2p(1-p)$ and $(1-p)^2$ and that no evolutionary change will occur. But if you interpret HW as indicating the expected genotype frequencies, from which deviations exist, then you do not make the assumption of infinite population size. Under this second interpretation, one can test deviation from HW expectations with a binomial test (or, most often, when the population size is large enough, with a $\chi^2$ goodness-of-fit test).
Related post
Solving Hardy Weinberg problems is very related and offers a good tutorial. | {
"domain": "biology.stackexchange",
"id": 5897,
"tags": "population-genetics"
} |
Simple string tokenizer wrapper | Question: Is this orthogonal?
#ifndef TOKENIZER_H
#define TOKENIZER_H
#include <iostream>
#include <string>
#include <sstream>
#include <vector>
class StringTokenizer
{
private:
std::vector<std::string> tokens;
unsigned int index;
public:
StringTokenizer(std::string const & str, const char delim)
{
index = 0;
// Populate the token vector...
std::istringstream stream(str);
std::string token;
while (std::getline(stream, token, delim))
{
tokens.push_back(token);
}
}
~StringTokenizer(void) { };
bool HasMoreTokens() { return index < tokens.size(); };
std::string NextToken() { return TokenAtIndex(index++); };
std::string PreviousToken() { return TokenAtIndex(index--); };
std::string TokenAtIndex(int x)
{
std::string token = "";
try
{
token = tokens.at(x);
}
catch (const std::out_of_range& range_error)
{
std::cerr << "[!!] Out of Range: " << range_error.what() << std::endl;
}
return token;
}
void Clear() { index = 0; };
};
#endif
Answer: Your constructor is nigh-identical to this up-voted answer for splitting a string.
The rest of the class doesn't add much value: IMO the vector provides the same API as your class and more besides; so a user might prefer to have the vector than to have your StringTokenizer instance which wraps/encapsulates/hides the vector.
vector::at returns a reference (a const reference if the vector is const) instead of returning by value; perhaps your methods should too.
"domain": "codereview.stackexchange",
"id": 6444,
"tags": "c++, design-patterns, strings"
} |
What is the Stinespring dilation of $T\otimes I$ for some CPTP map $T$? | Question: Let $T: \mathcal{H}_A \rightarrow \mathcal{H}_B$ be a CPTP map with Stinespring extension $U: \mathcal{H}_{A} \rightarrow \mathcal{H}_{B} \otimes \mathcal{H}_E$.
That is $U$ is an isometry such that for all states $\rho_A$, we have $Tr_{E} \left( U \rho_A U^\dagger\right)= T(\rho_A)$.
I am interested in Stinespring dilation of $T \otimes \mathbb{I}_C$ where $C$ is some additional register. My guess is $U \otimes \mathbb{I}_C$ should work but I am unable to prove.
That is, consider a state $\rho_{AC}$ and $\sigma_{BC}= T_A \otimes \mathbb{I}_C (\rho_{AC})$. Will the following equality hold:
$Tr_{E} \left( (U\otimes \mathbb{I}_C) \rho_{AC} (U\otimes \mathbb{I}_C)^\dagger\right)= \sigma_{BC}$.
Ultimately, I want to extend this to $U \otimes V$ but I suppose I can break it up to ($U \otimes \mathbb{I}) (\mathbb{I} \otimes V)$.
Answer: Yes. Note that in general
${\rm Tr}_1(\rho_{12} \otimes \rho_3) = {\rm Tr}_1(\rho_{12}) \otimes \rho_3$.
It's easy to verify your equality for $\rho_{AC} = \rho_A \otimes \rho_C$ :
$$
{\rm Tr}_{E} \left( (U\otimes \mathbb{I}_C) \rho_{AC} (U\otimes \mathbb{I}_C)^\dagger\right)
= {\rm Tr}_{E} \left( U \rho_{A} U^\dagger \otimes \rho_C \right)
= {\rm Tr}_{E} \left( U \rho_{A} U^\dagger \right) \otimes \rho_C
= (T \otimes \mathbb{I}_C)(\rho_{AC}).
$$
But since both sides are linear in $\rho_{AC}$ (linearity is common in QM), and any $\rho_{AC}$ can be written as a linear combination of product states, the equality is true for any $\rho_{AC}$.
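For the skeptical reader, the identity ${\rm Tr}_1(\rho_{12} \otimes \rho_3) = {\rm Tr}_1(\rho_{12}) \otimes \rho_3$ can also be checked numerically. Below is a dependency-free sketch; the specific matrices (a Bell-state density matrix and an arbitrary qubit state) are just test data:

```python
def kron(A, B):
    """Kronecker product of two square matrices given as lists of lists."""
    m = len(B)
    d = len(A) * m
    return [[A[i // m][j // m] * B[i % m][j % m] for j in range(d)]
            for i in range(d)]

def partial_trace_first(rho, d1, d2):
    """Trace out the first (d1-dimensional) factor of a (d1*d2)-dim matrix."""
    return [[sum(rho[a * d2 + b][a * d2 + bp] for a in range(d1))
             for bp in range(d2)]
            for b in range(d2)]

# Bell-state density matrix on systems 1-2, arbitrary state on system 3
rho12 = [[0.5, 0.0, 0.0, 0.5],
         [0.0, 0.0, 0.0, 0.0],
         [0.0, 0.0, 0.0, 0.0],
         [0.5, 0.0, 0.0, 0.5]]
rho3 = [[0.7, 0.1],
        [0.1, 0.3]]

lhs = partial_trace_first(kron(rho12, rho3), 2, 4)   # Tr_1(rho12 (x) rho3)
rhs = kron(partial_trace_first(rho12, 2, 2), rho3)   # Tr_1(rho12) (x) rho3

max_diff = max(abs(lhs[i][j] - rhs[i][j]) for i in range(4) for j in range(4))
print(max_diff)   # 0.0
```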
"domain": "quantumcomputing.stackexchange",
"id": 3507,
"tags": "quantum-state, quantum-operation, linear-algebra, kraus-representation"
} |
Is there any theoretically proven optimal compression algorithm? | Question: Is Huffman coding always optimal since it uses Shanon's ideas?
What about text, image, video, ... compression?
Is this subject still active in the field? What classical or modern references should I read?
Answer: Huffman coding is optimal for a symbol-to-symbol coding where the probabilities of every symbol are independent and known before-hand. However, when these conditions are not satisfied (as in image, video), other coding techniques such as LZW, JPEG, etc. are used. For more details, you can go through the book "Introduction to Data Compression" by Khalid Sayood. | {
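To illustrate the first sentence concretely, here is a small sketch (mine, not from the answer): a heap-based Huffman construction whose average code length equals the entropy exactly when the symbol probabilities are dyadic (powers of 1/2), but can only come within one bit per symbol otherwise:

```python
import heapq, math, itertools

def huffman_code_lengths(probs):
    """Return a code length per symbol from a Huffman tree (heapq-based)."""
    counter = itertools.count()  # tie-breaker so heapq never compares lists
    heap = [(p, next(counter), [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)
        p2, _, s2 = heapq.heappop(heap)
        for sym in s1 + s2:          # every merge adds one bit to these symbols
            lengths[sym] += 1
        heapq.heappush(heap, (p1 + p2, next(counter), s1 + s2))
    return lengths

def avg_length(probs):
    return sum(p * l for p, l in zip(probs, huffman_code_lengths(probs)))

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs)

dyadic = [0.5, 0.25, 0.125, 0.125]
print(avg_length(dyadic), entropy(dyadic))   # equal: 1.75 and 1.75

skewed = [0.9, 0.1]                          # the symbol-per-symbol limit:
print(avg_length(skewed), entropy(skewed))   # 1.0 vs ~0.469
```

Arithmetic coding and the dictionary methods mentioned in the answer get around this per-symbol limit by effectively coding blocks of symbols at once.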
"domain": "cs.stackexchange",
"id": 450,
"tags": "algorithms, information-theory, data-compression"
} |
Python node with more than one publisher/subscriber | Question:
Is it possible to make a ros python node that has more than one subscriber and more than one publisher? Are there examples of this?
Originally posted by david.c.liebman on ROS Answers with karma: 125 on 2014-01-20
Post score: 0
Answer:
Yes.
I'm not aware of any good examples offhand, but the basic principle is that you simply instantiate multiple copies of the publisher and subscriber objects.
(I also believe that this question has been asked and answered before)
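A minimal sketch of what that looks like in rospy (this needs a running ROS environment; the topic names and message types here are made up, so adapt them to your setup):

```python
#!/usr/bin/env python
import rospy
from std_msgs.msg import String, Int32

rospy.init_node('multi_pub_sub')

# Two publishers: just instantiate two Publisher objects
pub_a = rospy.Publisher('topic_a', String, queue_size=10)
pub_b = rospy.Publisher('topic_b', Int32, queue_size=10)

# Two subscribers, each with its own callback
rospy.Subscriber('input_a', String, lambda msg: pub_a.publish(msg))
rospy.Subscriber('input_b', Int32, lambda msg: pub_b.publish(msg))

rospy.spin()
```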
Originally posted by ahendrix with karma: 47576 on 2014-01-20
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 16702,
"tags": "ros, python, node, publisher"
} |
common lisp: nested loop beautification (checkboard texture generator) | Question: I've written a small code snippet that generates checkboard texture using cl-opengl.
(defun init-texture (tex)
(gl:bind-texture :texture-2d tex)
(gl:tex-parameter :texture-2d :texture-min-filter :nearest)
(gl:tex-parameter :texture-2d :texture-mag-filter :nearest)
(print *test-texture*)
(let* ((tex-w 16) (tex-h 16) (pixel-size 4)
(pixel-data (make-array (* (* tex-w tex-h) pixel-size) :element-type '(unsigned-byte 8) :adjustable nil :initial-element 0)))
(loop for y from 0 to (- tex-h 1) do
(let ((line-offset (* (* tex-w pixel-size) y)))
(loop for x from 0 to (- tex-w 1) do
(let ((x-offset (+ line-offset (* x pixel-size))) (c (if (oddp (+ x y)) 255 0)))
(setf (aref pixel-data x-offset) 255)
(setf (aref pixel-data (+ x-offset 1)) c)
(setf (aref pixel-data (+ x-offset 2)) c)
(setf (aref pixel-data (+ x-offset 3)) 255)))))
(gl:tex-image-2d :texture-2d 0 :rgba tex-w tex-h 0 :rgba :unsigned-byte pixel-data)))
Problem:
it looks ugly and is unnecessarily verbose, especially (- tex-h 1) and 4 setf in a row.
How can I "beautify"/simplify this?
Program logic:
generate a 1d array of (unsigned-byte 8). The array size is tex-w*tex-h*pixel-size, where tex-w is 16, tex-h is 16 and pixel-size is 4. That's 16x16 texture data in rgba format, where every pixel takes 4 elements.
Fill array. If (oddp (+ x y)), put white pixel, otherwise put blue pixel.
Answer: Not much better:
(defun init-texture (tex &aux (tex-w 16) (tex-h 16) (pixel-size 4))
(gl:bind-texture :texture-2d tex)
(gl:tex-parameter :texture-2d :texture-min-filter :nearest)
(gl:tex-parameter :texture-2d :texture-mag-filter :nearest)
(print *test-texture*)
(let ((pixel-data (make-array (* tex-w tex-h pixel-size)
:element-type '(unsigned-byte 8)
:adjustable nil
:initial-element 0)))
(loop for y below tex-h
for line-offset = (* tex-w pixel-size y)
do (loop for x below tex-w
for x-offset = (+ line-offset (* x pixel-size))
for c = (if (oddp (+ x y)) 255 0)
do (setf (aref pixel-data (+ x-offset 0)) 255
(aref pixel-data (+ x-offset 1)) c
(aref pixel-data (+ x-offset 2)) c
(aref pixel-data (+ x-offset 3)) 255)))
(gl:tex-image-2d :texture-2d 0 :rgba tex-w tex-h 0 :rgba :unsigned-byte pixel-data)))
To get rid of the arefs is not really possible. One way would be to use the function REPLACE:
(replace pixel-data (vector 255 c c 255) :start1 x-offset)
But now it allocates a vector for that. Then one might want to wish that the vector would be allocated on the stack:
(let ((new-data (vector 255 c c 255)))
(declare (dynamic-extent new-data))
(replace pixel-data new-data :start1 x-offset))
Another way is not to write the arefs, but to hide it behind a macro:
(defmacro vector-put-at ((vector start) &rest data)
`(progn
,@(loop for d in data and i from 0
collect `(setf (aref ,vector (+ ,start ,i)) ,d))))
The you can write above as:
(defun init-texture (tex &aux (tex-w 16) (tex-h 16) (pixel-size 4))
(gl:bind-texture :texture-2d tex)
(gl:tex-parameter :texture-2d :texture-min-filter :nearest)
(gl:tex-parameter :texture-2d :texture-mag-filter :nearest)
(print *test-texture*)
(let ((pixel-data (make-array (* tex-w tex-h pixel-size)
:element-type '(unsigned-byte 8)
:adjustable nil
:initial-element 0)))
(loop for y below tex-h
for line-offset = (* tex-w pixel-size y)
do (loop for x below tex-w
for x-offset = (+ line-offset (* x pixel-size))
for c = (if (oddp (+ x y)) 255 0)
do (vector-put-at (pixel-data x-offset)
255 c c 255)))
(gl:tex-image-2d :texture-2d 0 :rgba tex-w tex-h 0 :rgba :unsigned-byte pixel-data)))
"domain": "codereview.stackexchange",
"id": 4287,
"tags": "common-lisp, opengl"
} |
Importing a costmap layer from a pgm | Question:
I've got a costmap that uses a static, obstacle, inflation, and another custom layer that I wrote. Is there an easy way to load an additional layer drawn on a PGM grayscale image? If not, I could probably manage to analyze the image with python or C++ and add values to a new costmap layer from there. Thanks!
Originally posted by mattc_vec on ROS Answers with karma: 50 on 2015-08-03
Post score: 0
Answer:
This is actually what the map_server is for. See the documentation for more details.
It takes a grayscale image (.pgm should work), where whiter pixels are free and darker pixels are occupied.
Remember to also configure the respective yaml file correctly, as described in the documentation.
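For reference, the YAML that accompanies the image typically looks like this (the values below are placeholders; see the map_server documentation for the meaning of each field):

```yaml
image: my_layer.pgm          # the grayscale image file
resolution: 0.05             # meters per pixel
origin: [0.0, 0.0, 0.0]      # [x, y, yaw] of the lower-left pixel
negate: 0                    # with 0, whiter pixels are free
occupied_thresh: 0.65        # occupancy above this is considered occupied
free_thresh: 0.196           # occupancy below this is considered free
```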
This map can then be loaded in another static map layer, as described here.
Originally posted by mgruhler with karma: 12390 on 2015-08-04
This answer was ACCEPTED on the original site
Post score: 3 | {
"domain": "robotics.stackexchange",
"id": 22366,
"tags": "ros, navigation, costmap, pgm, layered-costmap"
} |
roslaunch ocl deployer node does not recognize '-s foo.ops --' | Question:
When I run:
rosrun ocl deployer-gnulinux -s foo.ops
This works just fine, however when I try this from within a *.launch file
(documented here: http://www.ros.org/wiki/ocl)
I get:
Failed to parse arguments: Unknown option -s
It looks like the arguments are being treated differently when called this way.
Originally posted by brianw1 on ROS Answers with karma: 26 on 2011-11-01
Post score: 0
Answer:
Hey Brian,
you are definitively using the correct syntax. All I can think of is something goes wrong with combining it with the gnome-terminal command. I'm using very similar syntax, but with the x-terminal instead, so maybe you could try that one and see if that gives you any better results:
<node name="orocos" pkg="ocl" type="deployer-gnulinux" launch-prefix="xterm -e" args="-s $(find orocos)/system.ops -- " />
Originally posted by Steven Bellens with karma: 735 on 2011-11-03
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 7152,
"tags": "ros, roslaunch, orocos"
} |
Is it possible to reconstruct convolutional layers' input using transposed convolution? | Question: I've been trying to visualize internal activations in CNN and came across this paper: "Visualizing and Understanding Convolutional Networks" by Zeiler & Fergus.
In the paper they mentioned reconstructing the input image from internal convnet activations using deconvnet. Specifically, on reversing the convolutional (filter) layers, they said:
To invert this, the deconvnet uses transposed versions of
the same filters, but applied to the rectified maps, not
the output of the layer beneath. In practice this means
flipping each filter vertically and horizontally.
I believe they are referring to just a simple transposed convolution operation, since convolution with flipped weights is equivalent to applying the transposed convolution operation.
My question is that transposed convolution is not an inverse of the convolution operation. This simple snippet shows just that:
import torch
import torch.nn as nn
i = torch.randn((10,10)).unsqueeze(0)
c = nn.Conv2d(1, 1, 2, bias=False)
ct = nn.ConvTranspose2d(1, 1, 2, bias=False)
ct.weight = nn.Parameter(c.weight)
torch.isclose(i, ct(c(i))) # not true
So I don't really understand how they claim that the output from deconvnet is a representation of the internal activations of the convnet.
Answer: You're not able to reconstruct convolutional layers' inputs using transposed convolutions (in most cases). The term invert is a bit confusing here -- I interpret this to mean inverting the space of inputs and outputs, not the values themselves. If you look at section 2.1, for example, they state: "We present a novel way to map these activities back to the input pixel space, showing what input pattern originally caused a given activation in the feature maps."
In your code snippet, even though the values of i and ct(c(i)) are different, the shape should be the same, as ct transforms the activations from post-convolutional space to pre-convolutional space.
You can see this in your snippet, but also in the mathematical formulation of convolutions and transposed convolutions. Let $W \in \mathbb{R}^{(n,m)}$ be the sparse matrix representation of the convolutional kernel, then, as you mentioned, $W^T \in \mathbb{R}^{(m,n)}$ is the transposed convolutional kernel (see the previous link for specifics on how this works).
For some input $x \in \mathbb{R}^n$, if you do $W^T W x$, you get another $\mathbb{R}^n$ vector back but $W^T W \neq I$ in most cases (i.e., unless $W$ is orthogonal).
Now, let's think about this in the context of this paper (i.e., what's the point of these deconvolution operations?). A CNN can be thought of as a feature extractor which converts the high-dimensional input representation into a low(er) dimensional, dense representation that contains the most important features in the image for the classification task such that you should be able to linearly separate the target classes in this dense, lower dimensional feature space. Because this feature space is of much lower dimension than the input image, you can't fit all of the information in the input image into this vector. So, each layer should be iteratively extracting important features from their inputs. By design, the convolutional layers aren't invertible -- they're selecting the important features and throwing away the unimportant ones.
Because of this, if you reverse the convolution operations, yes, the image now looks different, but this difference tells you what the convolution operation is doing! | {
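To see the space-vs-values distinction explicitly, here is a small dependency-free sketch in 1D (my own illustration, using a "valid" sliding-window product with an arbitrary kernel): the matrix $W$ maps $\mathbb{R}^8 \to \mathbb{R}^7$, and $W^T$ maps back to $\mathbb{R}^8$, restoring the shape but not the values:

```python
def conv_matrix(kernel, n):
    """Matrix form W of a 1D 'valid' sliding-window product over R^n."""
    m = n - len(kernel) + 1          # output dimension
    W = [[0.0] * n for _ in range(m)]
    for i in range(m):
        for j, w in enumerate(kernel):
            W[i][i + j] = w
    return W

def matvec(M, v):
    return [sum(a * b for a, b in zip(row, v)) for row in M]

def transpose(M):
    return [list(col) for col in zip(*M)]

x = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]   # input in R^8
W = conv_matrix([1.0, -1.0], len(x))           # W : R^8 -> R^7

y = matvec(W, x)                   # forward pass: length 7
x_back = matvec(transpose(W), y)   # "transposed convolution": length 8

print(len(y), len(x_back))   # 7 8 -- the input *space* is recovered
print(x_back == x)           # False -- the input *values* are not
```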
"domain": "ai.stackexchange",
"id": 3962,
"tags": "convolutional-neural-networks, pytorch, data-visualization"
} |
hg clone Permission denied | Question:
$ hg clone https://bitbucket.org/osrf/gazebo_models
destination directory: gazebo_models
abort: Permission denied: gazebo_models
Originally posted by benshaul on Gazebo Answers with karma: 5 on 2013-04-05
Post score: 0
Answer:
Looks like you're running the hg clone command in a directory where you don't have permission to create files. Try it in your home directory, e.g.:
cd $HOME
hg clone https://bitbucket.org/osrf/gazebo_models
Originally posted by gerkey with karma: 1414 on 2013-04-07
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by benshaul on 2013-04-07:
thanks.... | {
"domain": "robotics.stackexchange",
"id": 3187,
"tags": "gazebo"
} |
Tolerance on dowel pin length | Question: Is the length of a steel dowel pin standardized to a certain tolerance? McMaster-Carr provides tolerances on the diameter of dowel pins but not the length. In the event where I want a dowel pin to fit into two blind locating holes, I'd like to know if I need to leave a little extra "wiggle room" in the hole depth.
For example, I have two parts that need to meet as close together as possible. One part has a 5 mm deep locating hole for a dowel pin. If I have a 10 mm dowel, can I specify a 5 mm deep locating hole on the second part (assuming the dowel is actually <10 mm long)? Or should I choose something deeper (5.5 or 6 mm perhaps)?
Answer: Usually the nominal length of the dowel is the total overall length, including the ground pin section plus the rounded end and the beveled end. Lengths are usually not toleranced, and pins tend to run slightly long, so your hole should be deeper than the nominal length.
ALSO....If you are using a dowel pin to locate two blind holes, make sure to use a pull dowel, and grind a "vent" on the side of the pin so that it can be removed. Better still if one of the holes doesn't have to be blind, so the pin can be pushed out. A regular dowel pin pressed into a blind hole is a real chore to try to remove. | {
"domain": "engineering.stackexchange",
"id": 2298,
"tags": "mechanical-engineering, design, machining"
} |
Get random sets of 5 from 50 | Question: From a deck of 50 I need to get sets of 5, as random as possible and as fast as possible. The thought here is to shuffle once and take 10 sets of five from it, with no collisions.
Int1 and Int2 will not change during a run. If they did, other bad stuff would happen.
public MainWindow()
{
InitializeComponent();
Stopwatch sw = new Stopwatch();
sw.Start();
for(int i = 0; i < 100000; i++ )
{
//Debug.WriteLine(string.Join(", ", NextFive(11, 12)));
int count = NextFive(11, 12).Count();
}
sw.Stop();
Debug.WriteLine(sw.ElapsedMilliseconds.ToString("N0"));
}
Random rand = new Random();
private int[] deckBase;
int nextFiveLastStart = 45;
private IEnumerable<int> NextFive(int int1, int int2)
{
if (int1 == int2)
throw new IndexOutOfRangeException("int1 == int2");
if (int1 > 51 || int2 > 51)
throw new IndexOutOfRangeException("int1 > 51 || int2 > 51");
if (deckBase == null)
{
deckBase = new int[50];
int j = 0;
for (int i = 0; i < 52; i++)
{
if (i == int1 || i == int2)
continue;
deckBase[j] = i;
j++;
}
}
if (nextFiveLastStart >= 45)
{
nextFiveLastStart = 0;
// Yates shuffle
for (int i = 49; i >= 1; i--)
{
int j = rand.Next(i + 1);
if (j != i)
{ // exchange values
int curVal = deckBase[i];
deckBase[i] = deckBase[j];
deckBase[j] = curVal;
}
}
}
else
nextFiveLastStart += 5;
return deckBase.Skip(nextFiveLastStart).Take(5);
}
Answer:
private IEnumerable<int> NextFive(int int1, int int2)
The names int1 and int2 give absolutely no clue about what they are good for. I tried to read the code but I cannot figure it out. Looking at if (i == int1 || i == int2) I guess it means exclude, but who knows.
throw new IndexOutOfRangeException("int1 == int2");
This isn't an IndexOutOfRangeException but rather an ArgumentException. The arguments have invalid values but they might be within the allowed range.
throw new IndexOutOfRangeException("int1 > 51 || int2 > 51");
I agree this is the right type of exception, but I would prefer checking each value separately to give the user a hint about which of the parameters is incorrect.
Random rand = new Random();
private int[] deckBase;
int nextFiveLastStart = 45;
Inconsistent access modifier usage. private implicit, private explicit, private implicit... Pick one ;-)
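Stepping back from style, the overall pattern the question implements -- shuffle once with Fisher-Yates, deal the deck in batches of five, and reshuffle only when it runs out -- can be sketched in a few lines (Python here, purely as an illustration of the pattern, not a drop-in replacement):

```python
import random

def deal_in_fives(deck):
    """Generator: yields 5-card hands; reshuffles whenever the deck is exhausted."""
    pos = len(deck)                      # force a shuffle on first use
    while True:
        if pos + 5 > len(deck):
            random.shuffle(deck)         # Fisher-Yates under the hood
            pos = 0
        yield deck[pos:pos + 5]          # slice is a copy, safe across reshuffles
        pos += 5

deck = [c for c in range(52) if c not in (11, 12)]   # exclude two fixed cards
hands = deal_in_fives(deck)
first_ten = [next(hands) for _ in range(10)]         # one full pass, no collisions
```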
int j = 0;
for (int i = 0; i < 52; i++)
{
if (i == int1 || i == int2)
continue;
deckBase[j] = i;
j++;
}
There's no need for the int j = 0; you can put it inside the for:
for (int i = 0, j = 0; i < 52; i++)
{
if (i == int1 || i == int2)
continue;
deckBase[j] = i;
j++;
} | {
"domain": "codereview.stackexchange",
"id": 23149,
"tags": "c#, .net"
} |
Is it possible to control joint torque using the position input and torque feedback? | Question: So I am working with a UR10 manipulator which doesn't have a direct torque interface. However, it provides torque/velocity/position feedback for each joint as well as position/velocity interfaces for joint control.
I have a feeling the answer is "yes", but I've been having trouble finding examples and comments on the feasibility of this approach.
Thanks!
Answer: If you're trying to do torque control, then you'll probably get the best results if you can work with joint accelerations, because:
$$
\tau = I \alpha \\
$$
Where $\tau$ is joint torque, $I$ is the moment of inertia, and $\alpha$ is the joint angular acceleration. It's a linear relationship and should be pretty straightforward to control with a PID controller.
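For instance (a rough sketch with made-up gains and timestep, no plant model, and not tuned for a UR10), a torque-error PID whose acceleration output is numerically integrated into a velocity command could look like:

```python
def make_torque_to_velocity_controller(kp, ki, kd, dt):
    """PID on torque error -> acceleration reference, integrated to a velocity."""
    state = {"e_int": 0.0, "e_prev": 0.0, "v_ref": 0.0}
    def step(tau_ref, tau_fbk):
        e = tau_ref - tau_fbk
        state["e_int"] += e * dt                 # integral term
        e_dot = (e - state["e_prev"]) / dt       # derivative term
        state["e_prev"] = e
        a_ref = kp * e + ki * state["e_int"] + kd * e_dot
        state["v_ref"] += a_ref * dt             # numeric integration of accel
        return state["v_ref"]                    # send to the joint speed interface
    return step

# Illustrative gains and an 8 ms control period (both arbitrary):
ctrl = make_torque_to_velocity_controller(kp=0.5, ki=0.1, kd=0.01, dt=0.008)
v_cmd = ctrl(tau_ref=2.0, tau_fbk=0.0)           # first control tick
```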
You don't have an acceleration input for your joint, though. What you have is a speed input for your joint. So, what you could do is to set up a PID controller with torque error as your input, joint acceleration reference as your output, and then perform a numeric integration of the output to get a speed reference. This would look like:
$$
\tau_{\mbox{err}} = \tau_{\mbox{ref}} - \tau_{\mbox{fbk}} \\
e_P = \tau_{\mbox{err}} \\
e_I = e_I + e_P*\Delta t \\
e_D = \frac{e_P - e_{P_{\mbox{prev}}}}{\Delta t} \\
e_{P_{\mbox{prev}}} = e_P \\
a_{\mbox{ref}} = (K_P e_P) + (K_I e_I) + (K_D e_D) \\
v_{\mbox{ref}} = v_{\mbox{ref}} + a_{\mbox{ref}} * \Delta t \\
$$ | {
"domain": "robotics.stackexchange",
"id": 1374,
"tags": "control, robotic-arm, ros, torque, manipulator"
} |
Where does this relativistic relation involving the delta function come from? | Question: \begin{equation}
\int\delta(E^2-\mathbf{p}^2-m^2)dE=\frac{1}{2E_\mathbf{p}}
\end{equation}
Shouldn't integrating the delta function like this just give 1?
Answer: With help from the comments, this now makes sense.
\begin{equation}
\int \delta(E^2-p^2-m^2)dE
\end{equation}
With $$E_p^2-p^2-m^2=0$$
Use substitutions
\begin{equation}
f(E)=E^2-p^2-m^2\quad df=2EdE
\end{equation}
\begin{equation}
\int \delta(f)\frac{df}{2E(f)}=\frac{1}{2E_p}
\end{equation}
$E(f)$ is easily found by inverting $f$
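As a quick numerical sanity check (the values of $p$, $m$, the Gaussian width, and the integration grid below are arbitrary), replacing the delta with a narrow Gaussian and integrating over $E>0$ reproduces $1/(2E_p)$:

```python
import math

# Check  ∫ δ(E² - p² - m²) dE = 1/(2·E_p)  over E > 0,
# using a narrow Gaussian as a delta approximation.
p, m = 3.0, 4.0
E_p = math.sqrt(p**2 + m**2)              # = 5.0

sigma = 0.01                              # delta width (arbitrary, small)
def delta_approx(u):
    return math.exp(-u**2 / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

dE = 1e-4                                 # grid spacing on E in [4, 6]
integral = sum(delta_approx(E**2 - p**2 - m**2) * dE
               for E in (4.0 + k * dE for k in range(20000)))
# integral ≈ 1/(2*E_p) = 0.1
```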
Thanks! | {
"domain": "physics.stackexchange",
"id": 22316,
"tags": "homework-and-exercises, notation, integration, dirac-delta-distributions"
} |
Brainfuck Interpreter in Java | Question: Description
To increase the awareness of my previous brainfuck question here's also a brainfuck interpreter.
This is written with Java 8
Class Summary (298 lines in 4 files, making a total of 7409 bytes)
BrainF.java: Represents a Brainfuck program/execution
BrainFCommand.java: Enum for the various Brainfuck commands
Code
This code can also be found on github
BrainF.java: (198 lines, 4575 bytes)
public class BrainF {
public static final int DEFAULT_MEMORY_SIZE = 0x1000;
public BrainF(int memorySize, String code, Stream<Byte> in) {
this(memorySize, in);
addCommands(code);
}
public BrainF(int memorySize, Stream<Byte> in) {
memory = new byte[memorySize];
input = in.iterator();
}
private final List<BrainFCommand> commands = new ArrayList<>();
private final Iterator<Byte> input;
private int commandIndex;
private final byte[] memory;
private final StringBuilder output = new StringBuilder();
private int memoryIndex;
public void addCommands(String string) {
string.chars().mapToObj(i -> BrainFCommand.getCommand((char) i)).filter(obj -> obj != null).forEachOrdered(commands::add);
}
private void changeMemory(int i) {
checkMemoryIndex();
memory[memoryIndex] += i;
}
private void findMatching(BrainFCommand decrease, BrainFCommand increase, int direction) {
int matching = 1;
while (true) {
commandIndex += direction;
BrainFCommand current = commands.get(commandIndex);
if (current == decrease) {
matching--;
if (matching == 0) {
break;
}
}
else if (current == increase) {
matching++;
}
}
}
public byte getMemory() {
return memory[memoryIndex];
}
public void runToEnd() {
while (commandIndex < commands.size()) {
step();
}
}
public BrainFCommand step() {
if (commandIndex >= commands.size()) {
return null;
}
BrainFCommand command = commands.get(commandIndex);
perform(command);
commandIndex++;
return command;
}
public void setMemory(byte value) {
memory[memoryIndex] = value;
}
public String getOutput() {
return output.toString();
}
public int getMemoryIndex() {
return memoryIndex;
}
public int getCommandIndex() {
return commandIndex;
}
public void perform(BrainFCommand command) {
switch (command) {
case ADD:
changeMemory(1);
break;
case END_WHILE:
if (getMemory() != 0) {
findMatching(BrainFCommand.WHILE, BrainFCommand.END_WHILE, -1);
}
break;
case NEXT:
memoryIndex++;
checkMemoryIndex();
break;
case PREVIOUS:
memoryIndex--;
checkMemoryIndex();
break;
case READ:
byte value = input.next();
setMemory(value);
break;
case SUBSTRACT:
changeMemory(-1);
break;
case WHILE:
if (getMemory() == 0) {
findMatching(BrainFCommand.END_WHILE, BrainFCommand.WHILE, 1);
}
break;
case WRITE:
char write = (char) getMemory();
output.append(write);
break;
case NONE:
default:
break;
}
}
private void checkMemoryIndex() {
if (memoryIndex < 0) {
memoryIndex += memory.length;
}
if (memoryIndex >= memory.length) {
memoryIndex -= memory.length;
}
}
public byte[] getMemoryArray(int fromIndex, int length) {
return Arrays.copyOfRange(memory, fromIndex, fromIndex + length);
}
public void setCommands(String text) {
commands.clear();
addCommands(text);
}
public void reset() {
Arrays.fill(memory, (byte) 0);
commandIndex = 0;
memoryIndex = 0;
output.setLength(0);
}
public int getMemorySize() {
return memory.length;
}
public byte getMemory(int index) {
return memory[index];
}
public static BrainF createFromCodeAndInput(int memorySize, String code, String input) {
return createFromCodeAndInput(memorySize, code, input.chars().mapToObj(i -> (byte) i ));
}
public static BrainF createFromCodeAndInput(int memorySize, String code, Stream<Byte> inputStream) {
return new BrainF(DEFAULT_MEMORY_SIZE, code, inputStream);
}
public static BrainF createFromCode(String code) {
return createFromCodeAndInput(DEFAULT_MEMORY_SIZE, code, "");
}
public static BrainF createFromCode(int memorySize, String code) {
return createFromCodeAndInput(memorySize, code, "");
}
public static BrainF createWithDefaultSize() {
return createUsingSystemInputWithMemorySize(DEFAULT_MEMORY_SIZE);
}
public static BrainF createUsingSystemInputWithMemorySize(int memorySize) {
Stream<Byte> in = Stream.generate(() -> {
try {
return (byte) System.in.read();
}
catch (Exception e) {
throw new RuntimeException(e);
}
});
return new BrainF(memorySize, in);
}
}
BrainFCommand.java: (27 lines, 619 bytes)
public enum BrainFCommand {
NONE((char) 0), NEXT('>'), PREVIOUS('<'), WRITE('.'), READ(','), ADD('+'), SUBSTRACT('-'), WHILE('['), END_WHILE(']');
private final char ch;
private static final Map<Character, BrainFCommand> commands = new HashMap<>();
static {
for (BrainFCommand comm : BrainFCommand.values()) {
commands.put(comm.ch, comm);
}
}
private BrainFCommand(char ch) {
this.ch = ch;
}
public static BrainFCommand getCommand(char ch) {
return commands.getOrDefault(ch, NONE);
}
}
Usage / Test
Just a simple test to prove it works. A simple GUI can be found on Github (be sure to click the 'Save Code' button if you use it)
BrainTest.java: (67 lines, 1992 bytes)
public class BrainTest {
@Test
public void gotoCorrectEndWhile() {
BrainF brain = BrainF.createWithDefaultSize();
brain.addCommands(">+>[-]+ ");
brain.addCommands("++[-->++]--> Find next 254 and go one step beyond it");
brain.addCommands(" Loop through all 254s");
brain.addCommands("+++[--- Make sure that we are not at 253 (end)");
brain.addCommands("++[--<++]-- ");
assertEquals(BrainFCommand.NEXT, brain.step());
assertEquals(BrainFCommand.ADD, brain.step());
assertEquals(BrainFCommand.NEXT, brain.step());
assertEquals(BrainFCommand.WHILE, brain.step());
assertEquals(6, brain.getCommandIndex());
assertEquals(BrainFCommand.ADD, brain.step());
}
@Test
public void simpleLoopMultiplication() {
BrainF brain = BrainF.createWithDefaultSize();
brain.addCommands("++[>+++<-]>>>");
brain.runToEnd();
assertArrayEquals(new byte[] { 0, 6, 0, 0, 0, 0, 0, 0, 0, 0 },
brain.getMemoryArray(0, 10));
}
@Test
public void printAlphabet() {
BrainF brain = BrainF.createWithDefaultSize();
brain.addCommands("++++++[>++++++++++>++++<<-]>+++++>++[-<.+>]");
brain.runToEnd();
assertEquals("ABCDEFGHIJKLMNOPQRSTUVWXYZ", brain.getOutput());
}
@Test
public void input() {
BrainF brain = BrainF.createFromCodeAndInput(BrainF.DEFAULT_MEMORY_SIZE, "+++,.", "a");
brain.runToEnd();
assertEquals("a", brain.getOutput());
}
@Test
public void simpleCommands() {
BrainF abc = BrainF.createWithDefaultSize();
abc.addCommands("+>++>+++<");
abc.runToEnd();
assertEquals(9, abc.getCommandIndex());
assertEquals(1, abc.getMemoryIndex());
assertEquals(2, abc.getMemory());
abc.perform(BrainFCommand.PREVIOUS);
assertEquals(1, abc.getMemory());
abc.perform(BrainFCommand.NEXT);
abc.perform(BrainFCommand.NEXT);
assertEquals(3, abc.getMemory());
}
}
Questions
Is the Stream<Byte> a good choice for the input? I was considering several but this seemed to me like the best option
Any other comments welcome, but I would prefer more 'higher-level' comments than stylistic and naming details. I don't think I made any style or naming mistakes, but if you really think I have then feel free to comment on those as well.
Answer: Move the memory into its own class. The concept of the tape which the BF program is operating on can be cleanly made its own class with a limited interface.
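For example, a tape with a deliberately small interface might look like this (sketched in Python rather than Java for brevity; the names are only illustrative, and the wrap-around mirrors the original checkMemoryIndex behavior):

```python
class Tape:
    """The BF memory: a fixed-size byte tape with a movable head (wraps around)."""
    def __init__(self, size=0x1000):
        self._cells = bytearray(size)   # all cells start at zero
        self._head = 0

    def move(self, step):               # '>' is move(+1), '<' is move(-1)
        self._head = (self._head + step) % len(self._cells)

    def add(self, delta):               # '+' is add(+1), '-' is add(-1)
        self._cells[self._head] = (self._cells[self._head] + delta) % 256

    def read(self):                     # current cell, for '.' and '['/']' tests
        return self._cells[self._head]

    def write(self, value):             # ',' stores one input byte
        self._cells[self._head] = value % 256

tape = Tape(size=8)
tape.add(1); tape.move(1); tape.add(3)
tape.move(-2)                           # head wraps around to the last cell
```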
Make private or eliminate interface cruft: perform, getMemoryIndex, reset, addCommands, setCommands, step. For a BF interpreter, it really only makes sense to set a particular program and then run it. Manipulating or inspecting the state beyond that isn't something that makes sense for an interpreter. As it stands, you've got an interface with way too many options.
Don't pass input to the constructor. If this were at all a serious interpreter, you'd probably want to be able to run the same program against different input. So it really makes more sense to have input be a parameter to a run function.
checkMemoryIndex and findMatching have poor names. Both change the state of the BF interpreter but neither name makes me think it will. I expect check* to assert if the state went bad and find* to return the result. | {
"domain": "codereview.stackexchange",
"id": 9371,
"tags": "java, interpreter, brainfuck"
} |
RGB-6D-SLAM performance on a robot | Question:
Hello folks,
I have a Kinect mounted on a Create (not a turtlebot, but you get the idea), and I was hoping to build some 3D maps of my lab with that platform and the RGBD-6D-SLAM package. Sadly (even with the newly-released version) it doesn't work very well; even when driving (slowly!) a loop of only six or so meters, I get serious object doubling and (later) a catastrophic matching failure that throws one half of the map entirely out-of-plane; it's not even topologically correct.
My question is this: what's the correct next step? Fidgeting with RGBD-SLAM parameters? On a loop this tiny, I predict that gmapping + pointcloud_to_laserscan would have no trouble; it seems to me that graph techniques with full 3D and RGB should be able to do better.
I'm happy to provide the offending bagfile to anybody who would like to try this for themselves.
Thoughts?
[Edit]
I figured I'd put the bagfiles up:
First bag (435MB)
Second bag (1GB) (Higher frame rate)
Third bag (1GB) (Higher frame rate, different camera angle)
I'm curious to see what people think; it would be really handy to get this working. @Felix, any thoughts?
Thanks!
Originally posted by Mac on ROS Answers with karma: 4119 on 2011-04-20
Post score: 2
Original comments
Comment by Mac on 2011-04-26:
Hi Felix. Thanks for taking a look. I've tried it with USE_GICP_CODE turned on; no improvement. (This was several days ago; IIRC, the behavior changed, but didn't get better.)
Comment by Felix Endres on 2011-04-26:
Hi Mac,
I'll have a look into it. Meanwhile, please try to enable the use of USE_GICP_CODE at the top CMakeLists.txt.
Answer:
I tried your files today. Performance is really bad. The first and second contain a difficult scenario. Most features aren't visible for long and sometimes there are few salient features. At other times there are objects occurring multiple times, which leads to undesired matches and transformations. The third set is similar but somewhat more benign.
I managed to get reasonable results by changing some settings (and fixing a bug or two) that were not in global_definitions.cpp yet. I will commit the changes to the freiburg_tools repository tomorrow.
Additionally to the new parameters and bugfixes, I suggest you increase the number of ...min/max_features... and decrease the values for ...min_translation... and ...min_rotation... However, I haven't tried running on default parameters after the new changes, so it might as well work with default parameters.
One thing I noticed is that your bag files contain badly synced images and point clouds. Usually the timestamp of depth image and point cloud should be identical and the timestamp of the corresponding monochrome image shouldn't be more than 1/30 seconds away. This can lead to bad feature locations when projecting them to 3D. This might be a problem of bag file recording, but to make sure, I introduced a parameter that can be set to true to block such asynchronous frames. Also the visualization now overlays the monochrome image and the depth image in different colors, so you can check whether this is a problem.
Originally posted by Felix Endres with karma: 6468 on 2011-05-02
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by Mac on 2011-06-02:
Rather than have the (underpowered) netbook on my robot generate the point clouds, I used openni_camera_unstable to bag the rgb and depth images, and then generate the point clouds in an offline pass; that greatly improved the synchronization; I only drop a few frames due to sync errors now.
Comment by Felix Endres on 2011-05-31:
Cool. Could you elaborate on the "careful bagging". What do you do different now?
Comment by Mac on 2011-05-31:
Felix, your theory about synchronization was spot-on: openni_camera_unstable plus some careful bagging produced much better results. There are still issues with frame-to-frame tracking, but hey, progress!
Comment by Felix Endres on 2011-05-11:
Make sure openni is NOT running, then "rosbag play third.bag" in a terminal, "rosrun rgbdslam rgbdslam" in another (use space to start processing), and then the experimental octomap server, as described elsewhere in this forum.
Comment by ngidgas on 2011-05-10:
Can you explain how you used these bags to generate the map? I am trying to debug my rgbdslam -> octomap setup.
Comment by Mac on 2011-05-06:
Ah, my mistake. Your link suggests checking out from openslam; I missed the distinction. I'll give this a try (possible after ICRA) and let you know.
Comment by Felix Endres on 2011-05-06:
As mentioned above i committed the changes to the freiburg tools repository. I will only update the openslam.org repository for major changes. Sorry for the inconvenience. There is still a huge issue with spurious feature matches that is hard to address. Let me know whether this helps you so far.
Comment by Mac on 2011-05-05:
Thanks for taking a look! Have these changes hit the repository yet? My just-update checkout from openslam.org shows a last-updated of April 26th. | {
"domain": "robotics.stackexchange",
"id": 5413,
"tags": "ros, slam, navigation, kinect, create-robot"
} |
Querying VizieR using SQL interface on their website | Question: I'm trying to query http://tapvizier.u-strasbg.fr/adql/?%20V/147/sdss12 using the following query, built by clicking Construct your query and Columns and constraints, and then selecting the columns and constraints I want. My goal is to get a csv file of photometric magnitudes and redshift data from SDSS DR12.
-- output format : csv
SELECT TOP 100 "V/147/sdss12".RA_ICRS, "V/147/sdss12".DE_ICRS, "V/147/sdss12".mode, "V/147/sdss12".q_mode, "V/147/sdss12".class, "V/147/sdss12".SDSS12, "V/147/sdss12".m_SDSS12,
"V/147/sdss12".objID, "V/147/sdss12"."Sp-ID", "V/147/sdss12".ObsDate, "V/147/sdss12".Q, "V/147/sdss12".umag, "V/147/sdss12".e_umag, "V/147/sdss12".gmag, "V/147/sdss12".e_gmag,
"V/147/sdss12".rmag, "V/147/sdss12".e_rmag, "V/147/sdss12".imag, "V/147/sdss12".e_imag, "V/147/sdss12".zmag, "V/147/sdss12".e_zmag, "V/147/sdss12".zsp, "V/147/sdss12".zph,
"V/147/sdss12".e_zph, "V/147/sdss12"."<zph>", "V/147/sdss12".e_zsp, "V/147/sdss12".gPmag, "V/147/sdss12".gpmag, "V/147/sdss12".ipmag, "V/147/sdss12".iPmag, "V/147/sdss12".rpmag,
"V/147/sdss12".rPmag, "V/147/sdss12".upmag, "V/147/sdss12".uPmag, "V/147/sdss12".zpmag, "V/147/sdss12".zPmag
FROM "V/147/sdss12"
WHERE "V/147/sdss12".e_zsp<0.01
AND "V/147/sdss12".Q=3
but when I press Test, I get the following error message:
adql.db.exception.UnresolvedIdentifiersException: 11 unresolved identifiers: V/147/sdss12 [l.5 c.72 - l.5 c.92], V/147/sdss12 [l.5 c.95 - l.5 c.115], V/147/sdss12 [l.5 c.118 - l.5 c.138], V/147/sdss12 [l.5 c.141 - l.5 c.161], V/147/sdss12 [l.5 c.164 - l.5 c.184], V/147/sdss12 [l.6 c.1 - l.6 c.21], V/147/sdss12 [l.6 c.24 - l.6 c.44], V/147/sdss12 [l.6 c.47 - l.6 c.67], V/147/sdss12 [l.6 c.70 - l.6 c.90], V/147/sdss12 [l.6 c.93 - l.6 c.113]! - Ambiguous table reference ""V/147/sdss12"" in ""V/147/sdss12".gPmag" ! It may come (at least) from "V/147/sdss12" or from "V/147/sdss12". - Ambiguous table reference ""V/147/sdss12"" in ""V/147/sdss12".gpmag" ! It may come (at least) from "V/147/sdss12" or from "V/147/sdss12". - Ambiguous table reference ""V/147/sdss12"" in ""V/147/sdss12".ipmag" ! It may come (at least) from "V/147/sdss12" or from "V/147/sdss12". - Ambiguous table reference ""V/147/sdss12"" in ""V/147/sdss12".iPmag" ! It may come (at least) from "V/147/sdss12" or from "V/147/sdss12". - Ambiguous table reference ""V/147/sdss12"" in ""V/147/sdss12".rpmag" ! It may come (at least) from "V/147/sdss12" or from "V/147/sdss12". - Ambiguous table reference ""V/147/sdss12"" in ""V/147/sdss12".rPmag" ! It may come (at least) from "V/147/sdss12" or from "V/147/sdss12". - Ambiguous table reference ""V/147/sdss12"" in ""V/147/sdss12".upmag" ! It may come (at least) from "V/147/sdss12" or from "V/147/sdss12". - Ambiguous table reference ""V/147/sdss12"" in ""V/147/sdss12".uPmag" ! It may come (at least) from "V/147/sdss12" or from "V/147/sdss12". - Ambiguous table reference ""V/147/sdss12"" in ""V/147/sdss12".zpmag" ! It may come (at least) from "V/147/sdss12" or from "V/147/sdss12". - Ambiguous table reference ""V/147/sdss12"" in ""V/147/sdss12".zPmag" ! It may come (at least) from "V/147/sdss12" or from "V/147/sdss12". 
- 10 unresolved identifiers: V/147/sdss12 [l.5 c.72 - l.5 c.92], V/147/sdss12 [l.5 c.95 - l.5 c.115], V/147/sdss12 [l.5 c.118 - l.5 c.138], V/147/sdss12 [l.5 c.141 - l.5 c.161], V/147/sdss12 [l.5 c.164 - l.5 c.184], V/147/sdss12 [l.6 c.1 - l.6 c.21], V/147/sdss12 [l.6 c.24 - l.6 c.44], V/147/sdss12 [l.6 c.47 - l.6 c.67], V/147/sdss12 [l.6 c.70 - l.6 c.90], V/147/sdss12 [l.6 c.93 - l.6 c.113]! - Ambiguous table reference ""V/147/sdss12"" in ""V/147/sdss12".gPmag" ! It may come (at least) from "V/147/sdss12" or from "V/147/sdss12". - Ambiguous table reference ""V/147/sdss12"" in ""V/147/sdss12".gpmag" ! It may come (at least) from "V/147/sdss12" or from "V/147/sdss12". - Ambiguous table reference ""V/147/sdss12"" in ""V/147/sdss12".ipmag" ! It may come (at least) from "V/147/sdss12" or from "V/147/sdss12". - Ambiguous table reference ""V/147/sdss12"" in ""V/147/sdss12".iPmag" ! It may come (at least) from "V/147/sdss12" or from "V/147/sdss12". - Ambiguous table reference ""V/147/sdss12"" in ""V/147/sdss12".rpmag" ! It may come (at least) from "V/147/sdss12" or from "V/147/sdss12". - Ambiguous table reference ""V/147/sdss12"" in ""V/147/sdss12".rPmag" ! It may come (at least) from "V/147/sdss12" or from "V/147/sdss12". - Ambiguous table reference ""V/147/sdss12"" in ""V/147/sdss12".upmag" ! It may come (at least) from "V/147/sdss12" or from "V/147/sdss12". - Ambiguous table reference ""V/147/sdss12"" in ""V/147/sdss12".uPmag" ! It may come (at least) from "V/147/sdss12" or from "V/147/sdss12". - Ambiguous table reference ""V/147/sdss12"" in ""V/147/sdss12".zpmag" ! It may come (at least) from "V/147/sdss12" or from "V/147/sdss12". - Ambiguous table reference ""V/147/sdss12"" in ""V/147/sdss12".zPmag" ! It may come (at least) from "V/147/sdss12" or from "V/147/sdss12".
What am I doing wrong here?
Answer: You didn't do anything wrong.
The last 10 column names differ only in upper/lower case,
the query generator didn't disambiguate them,
and the error message was inaccurate.
You can fix it by putting the ambiguous column names in quotes.
Omitting the unnecessary table qualifiers:
SELECT TOP 100 RA_ICRS, DE_ICRS, mode, q_mode, class, SDSS12, m_SDSS12,
objID, "Sp-ID", ObsDate, Q, umag, e_umag, gmag, e_gmag,
rmag, e_rmag, imag, e_imag, zmag, e_zmag, zsp, zph,
e_zph, "<zph>", e_zsp, "gPmag", "gpmag", "ipmag", "iPmag", "rpmag",
"rPmag", "upmag", "uPmag", "zpmag", "zPmag"
FROM "V/147/sdss12"
WHERE e_zsp < 0.01
AND Q = 3 | {
"domain": "astronomy.stackexchange",
"id": 6444,
"tags": "photometry, sky-survey"
} |
Are the non-standard one-way speed of light conventions just transformations of coordinates? | Question: There are a lot of posts and confusion regarding the fact that different standards of simultaneity result in different one-way speeds of light (OWSOL) (that may be non-isotropic). Of course, the theory ends up being equivalent to standard SR, which is why we can't measure the one-way speed of light alone and it is left as a convention.
I am trying to understand this, and as far as I can tell, it seems to me that the difference between these different conventions is just a matter of simple coordinate transformations. Does it not just come down to starting in a standard inertial reference frame $(ct, x)$, taking the coordinate transformation
$$ ct' = ct + \kappa x\qquad\text{ and }\qquad x' = x \qquad (\kappa\in [-1, 1]), $$
and then declaring the planes $(\text{const}, x')$ as the new simultaneity planes? In this formulation, the speeds of light are $c' = c/|1\pm \kappa|$.
The line element here would end up as
$$ ds^{2} = c^{2}dt^{2} - dx^{2} = c^{2}dt'^{2}-2\kappa c\, dx' dt' - (1-\kappa^{2})dx'^{2}. $$
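A quick numerical check of this algebra (with $c=1$ and an arbitrary $\kappa$ and displacement; just a sketch to confirm the transformed line element and the one-way speeds):

```python
# Numerical check of the transform  ct' = ct + kappa*x,  x' = x  (units with c = 1).
kappa = 0.3

def to_primed(t, x):
    return t + kappa * x, x

# 1) The line element agrees for an arbitrary coordinate displacement:
dt, dx = 0.7, -1.9
dtp, dxp = to_primed(dt, dx)
ds2_unprimed = dt**2 - dx**2
ds2_primed = dtp**2 - 2 * kappa * dtp * dxp - (1 - kappa**2) * dxp**2
# ds2_unprimed == ds2_primed (up to roundoff)

# 2) One-way speeds of light: the rays x = +t and x = -t map to
#    |dx'/dt'| = 1/(1 + kappa) and 1/(1 - kappa) respectively.
tp_right, xp_right = to_primed(1.0, +1.0)   # right-moving ray
tp_left, xp_left = to_primed(1.0, -1.0)     # left-moving ray
speed_right = abs(xp_right / tp_right)      # = 1/(1 + kappa)
speed_left = abs(xp_left / tp_left)         # = 1/(1 - kappa)
```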
My question is, is my understanding correct? Or am I missing something here?
To go further with my question, if the non-standard OWSOL comes from a change of coordinates, then can we really say the speed of light is different? Given that the metric (the underlying object, not the representation) is supposed to be coordinate independent, light still follows null geodesics, so how can we claim the speed of light is any different in this point of view?
Answer:
My question is, is my understanding correct? Or am I missing something here?
This is correct. Your convention is the one used by Anderson. Other conventions are possible, such as the one used by Reichenbach. But Anderson's seems the most convenient to me.
To go further with my question, if the non-standard OWSOL comes from a change of coordinates, then can we really say the speed of light is different?
Yes.
Similarly, a Lorentz transform is also a change in coordinates. When we do the Lorentz transform of a baseball then we can really say that the speed of the baseball is different. Changing the coordinates really does change coordinate-dependent quantities like speed.
Given that the metric (the underlying object, not the representation) is supposed to be coordinate independent, light still follows null geodesics, so how can we claim the speed of light is any different in this point of view?
The norm of a tangent vector is invariant. The speed is coordinate dependent. They are conceptually different things entirely.
The fact that worldlines with null tangent vectors have speed $c$ is only true in certain coordinate systems (e.g. inertial coordinates). This equivalence fails in other coordinates. In such coordinates the speed is no longer $c$ even though the worldline is still null. | {
"domain": "physics.stackexchange",
"id": 96143,
"tags": "special-relativity, metric-tensor, coordinate-systems, speed-of-light, one-way-speed-of-light"
} |
Structuring the program that demonstrates collision detection within a field of view | Question: I recently needed to detect agents within a certain-field of view and came across this question in gamedev stack exchange. To learn how it works, I followed the first answer's guidance and decided to make a program that demonstrates how a "a field of view like collision detection" is done. But throughout the process, I struggled a lot with how to structure the program. Here is the code.
#define OLC_PGE_APPLICATION
#include "olcPixelGameEngine.h"
#define PI 3.14159f
#define MAX(a, b) a > b ? a : b
#define MIN(a, b) a > b ? b : a
struct Point
{
Point()
{
}
Point(olc::vf2d _position, float _directionAngle, float _rotationAngle) :
position(_position), directionAngle(_directionAngle), rotationAngle(_rotationAngle)
{
}
olc::vf2d position = { 0.0f, 0.0f };
float directionAngle = 0.0f;
float rotationAngle = 0.0f;
bool withinSensoryRange = false;
olc::Pixel color;
};
struct Triangle
{
Triangle()
{
}
Triangle(olc::vf2d _p1, olc::vf2d _p2, olc::vf2d _p3) :
p1(_p1), p2(_p2), p3(_p3)
{
}
olc::vf2d p1 = { 0.0f, -7.0f };
olc::vf2d p2 = { -5.0f, 5.0f };
olc::vf2d p3 = { 5.0f, 5.0f };
Triangle TranslateAndRotate(const float rotationAngle, olc::vf2d offset)
{
Triangle tri;
tri.p1.x = cosf(rotationAngle) * p1.x - sinf(rotationAngle) * p1.y + offset.x;
tri.p1.y = sinf(rotationAngle) * p1.x + cosf(rotationAngle) * p1.y + offset.y;
tri.p2.x = cosf(rotationAngle) * p2.x - sinf(rotationAngle) * p2.y + offset.x;
tri.p2.y = sinf(rotationAngle) * p2.x + cosf(rotationAngle) * p2.y + offset.y;
tri.p3.x = cosf(rotationAngle) * p3.x - sinf(rotationAngle) * p3.y + offset.x;
tri.p3.y = sinf(rotationAngle) * p3.x + cosf(rotationAngle) * p3.y + offset.y;
return tri;
}
};
class PlayGround : public olc::PixelGameEngine
{
public:
PlayGround()
{
sAppName = "PlayGround";
}
private:
bool debug = true;
private:
Triangle agent1;
float rotationAngle1 = 0.0f;
float sensoryRadius1 = 50.0f;
float fov1 = PI;
float agent1Speed = 120.0f;
float directionPointDistance1 = 60.0f;
olc::vf2d position1 = { 300.0f, 150.0f };
private:
olc::Pixel offWhite = olc::Pixel(200, 200, 200);
private:
float pointsSpeed = 10.0f;
int nPoints = 1000;
std::vector<std::unique_ptr<Point>> points;
private:
float GetDistance(float x1, float y1, float x2, float y2)
{
return sqrtf((x2 - x1) * (x2 - x1) + (y2 - y1) * (y2 - y1));
}
float DirectionAngle(float rotationAngle)
{
return rotationAngle - (PI / 2.0f);
}
private:
bool OnUserCreate() override
{
for (int i = 0; i < nPoints; i++)
{
//4 random floats between 0 and 1 for initializing x, y and rotation angle and direction angle for point
float rx = static_cast <float> (rand()) / static_cast <float> (RAND_MAX);
float ry = static_cast <float> (rand()) / static_cast <float> (RAND_MAX);
float rra = static_cast <float> (rand()) / static_cast <float> (RAND_MAX);
float rda = static_cast <float> (rand()) / static_cast <float> (RAND_MAX);
std::unique_ptr<Point> point = std::make_unique<Point>(olc::vf2d(rx * 600, ry * 300), rda * (PI * 2), rra * (PI * 2));
points.push_back(std::move(point));
}
return true;
}
bool OnUserUpdate(float elapsedTime) override
{
//USER CONTROLS
if (GetKey(olc::UP).bHeld)
{
position1.x += cosf(DirectionAngle(rotationAngle1)) * elapsedTime * agent1Speed;
position1.y += sinf(DirectionAngle(rotationAngle1)) * elapsedTime * agent1Speed;
}
if (GetKey(olc::RIGHT).bHeld)
rotationAngle1 += 3.0f * elapsedTime;
if (GetKey(olc::LEFT).bHeld)
rotationAngle1 -= 3.0f * elapsedTime;
if (GetKey(olc::Q).bHeld)
fov1 -= 3.0f * elapsedTime;
if (GetKey(olc::W).bHeld)
fov1 += 3.0f * elapsedTime;
if (GetKey(olc::A).bHeld)
sensoryRadius1 -= 50.0f * elapsedTime;
if (GetKey(olc::S).bHeld)
sensoryRadius1 += 50.0f * elapsedTime;
if (GetKey(olc::D).bPressed)
debug = !debug;
fov1 = MAX(MIN(fov1, PI), 0);
sensoryRadius1 = MAX(MIN(sensoryRadius1, 200), 0);
//TRANSFORMATIONS FOR TRIANGLE
Triangle transformedAgent1 = agent1.TranslateAndRotate(rotationAngle1, position1);
//point that connects to the triangle to show the direction vector
olc::vf2d direction1;
direction1.x = (cosf(DirectionAngle(rotationAngle1)) * directionPointDistance1) + position1.x;
direction1.y = (sinf(DirectionAngle(rotationAngle1)) * directionPointDistance1) + position1.y;
//these are the two field of view points one at angle + fov and other at angle - fov
olc::vf2d fovPoints11;
olc::vf2d fovPoints12;
//calculating position based on the position of triangle, fov and the sensory range
fovPoints11.x = (cosf(DirectionAngle(rotationAngle1 + fov1)) * sensoryRadius1) + position1.x;
fovPoints11.y = (sinf(DirectionAngle(rotationAngle1 + fov1)) * sensoryRadius1) + position1.y;
fovPoints12.x = (cosf(DirectionAngle(rotationAngle1 - fov1)) * sensoryRadius1) + position1.x;
fovPoints12.y = (sinf(DirectionAngle(rotationAngle1 - fov1)) * sensoryRadius1) + position1.y;
//COLLISION DETECTION
//within the sensory radius
for (auto& point : points)
{
float distance = GetDistance(point->position.x, point->position.y, position1.x, position1.y);
if (distance < sensoryRadius1)
point->withinSensoryRange = true;
else
{
point->color = olc::BLACK;
point->withinSensoryRange = false;
}
}
//within the field of view
for (auto& point : points)
{
if (point->withinSensoryRange)
{
olc::vf2d normalizedForwardVector = (direction1 - position1).norm();
olc::vf2d normalizedPointCentreVector = (point->position - position1).norm();
float dot = normalizedPointCentreVector.dot(normalizedForwardVector);
if (dot >= cosf(fov1))
debug ? point->color = olc::RED : point->color = olc::WHITE;
else
debug ? point->color = olc::GREEN : point->color = olc::BLACK;
}
}
//RENDERING
Clear(olc::Pixel(52, 55, 54));
if (debug)
{
//draw control instructions
DrawString(2, 40, "This is a toy program made to demonstrate how collision\ndetection within "
"a field of view works. Black flies represent the\npoints that are completely out "
"of range. In debug mode,\ngreen ones represent the ones that are within the sensory\nradius. The "
"ones in the sensory radius are tested to\nsee if they are in the field of view, and "
"if they\nare, they appear red.\n\nWhen debug mode is off, white flies\nrepresent the flies that can "
"be seen", offWhite);
DrawString(2, 10,
"Press up, right and left keys for movement.\n"
"Press w to increase FOV and q to reduce it.\n"
"Press s to increase sensory range and a to decrease it.", offWhite);
}
DrawString(2, 290, "Press d to toggle text and geometric debug data.", olc::Pixel(200, 250, 200));
//display info
std::ostringstream fovValue;
fovValue << "FOV: " << round(fov1 * 2.0f * (180 / PI)) << " degrees";
DrawString(440, 280, fovValue.str(), offWhite);
std::ostringstream sensoryRangeValue;
sensoryRangeValue << "Sensory Range: " << round(sensoryRadius1);
DrawString(440, 265, sensoryRangeValue.str(), offWhite);
//transform (wobble while moving forward) and draw all the points
for (auto& point : points)
{
point->rotationAngle += 0.05f;
point->directionAngle -= 0.05f;
point->position.x += cosf(point->directionAngle) * sinf(point->rotationAngle) * elapsedTime * pointsSpeed;
point->position.y += sinf(point->directionAngle) * sinf(point->rotationAngle) * elapsedTime * pointsSpeed;
if (point->rotationAngle > PI * 2)
point->rotationAngle = 0;
if (point->rotationAngle < 0)
point->rotationAngle = PI * 2;
if (point->directionAngle > PI * 2)
point->directionAngle = 0;
if (point->directionAngle < 0)
point->directionAngle = PI * 2;
if (point->position.x > 600)
point->position.x = 0;
if (point->position.x < 0)
point->position.x = 600;
if (point->position.y > 300)
point->position.y = 0;
if (point->position.y < 0)
point->position.y = 300;
Draw((int)point->position.x, (int)point->position.y, point->color);
}
if (debug)
{
//lines from centre of triangle to fov points
DrawLine((int)position1.x, (int)position1.y, (int)fovPoints11.x, (int)fovPoints11.y, olc::RED);
DrawLine((int)position1.x, (int)position1.y, (int)fovPoints12.x, (int)fovPoints12.y, olc::RED);
//field of view points
FillCircle((int)fovPoints11.x, (int)fovPoints11.y, 2, olc::RED);
FillCircle((int)fovPoints12.x, (int)fovPoints12.y, 2, olc::RED);
//color the points between the two fov points in red
float tempAngle = DirectionAngle(rotationAngle1 + fov1);
while (tempAngle > DirectionAngle(rotationAngle1 - fov1))
{
for (int i = 0; i < 3; i++)
Draw((int)(cosf(tempAngle) * (sensoryRadius1 + i)) + position1.x,
(int)(sinf(tempAngle) * (sensoryRadius1 + i)) + position1.y, olc::RED);
tempAngle -= 0.01f;
}
//draw sensory radius
DrawCircle((int)position1.x, (int)position1.y, sensoryRadius1, olc::GREEN);
//the straight line signifying direction
DrawLine((int)position1.x, (int)position1.y, (int)direction1.x, (int)direction1.y, offWhite);
}
//Draw the main triangle body
FillTriangle(
(int)transformedAgent1.p1.x, (int)transformedAgent1.p1.y,
(int)transformedAgent1.p2.x, (int)transformedAgent1.p2.y,
(int)transformedAgent1.p3.x, (int)transformedAgent1.p3.y,
offWhite);
return true;
}
};
int main()
{
PlayGround playGround;
if (playGround.Construct(600, 300, 2, 2))
playGround.Start();
}
I am sure it's bad and tons of optimizations can be made, but for this particular question, I want to focus on how I could have structured it better. Thank you.
It is made using the Pixel Game Engine, so if you wish to test it out, it needs this. It's a single-file library, so it's easy to set up.
Answer: Avoid using macros
Try to avoid macros where possible; usually a better solution is available. For constants, prefer using constexpr:
static constexpr float PI = 3.14159...f;
Instead of MIN and MAX, just use std::min() and std::max(). Or if you can use C++17, use std::clamp().
Constructors and member initialization
If you need to explicitly add a default constructor, prefer doing that using = default.
When initializing members to zero, you can do that very concisely using value initialization, which looks like this:
olc::vf2d position{};
Separate translation and rotation
Instead of having a TranslateAndRotate() function, consider splitting that up into a separate Translate() and Rotate(). This is more flexible, and it also removes an ambiguity: does your function translate first and then rotate, or the other way around? You can also greatly simplify these functions, especially when using a little helper to rotate single Points:
Triangle Translate(olc::vf2d offset)
{
return {p1 + offset, p2 + offset, p3 + offset};
}
Triangle Rotate(float angle)
{
static const auto rotate = [](olc::vf2d p, float angle) -> olc::vf2d {
return {
p.x * std::cos(angle) - p.y * std::sin(angle),
p.x * std::sin(angle) + p.y * std::cos(angle)
};
};
return {rotate(p1, angle), rotate(p2, angle), rotate(p3, angle)};
}
I used a lambda here, but you could also write that as a regular function. You can now use this as follows:
Triangle transformedAgent1 = agent1.Rotate(rotationAngle1).Translate(position1);
Avoid using smart pointers unnecessarily
There is no reason to use std::unique_ptr to store Points in a vector. Instead just write:
std::vector<Point> points;
When adding points to this vector, you can write:
for (int i = 0; i < nPoints; i++)
{
float rx = ...;
float ry = ...;
float rra = ...;
float rda = ...;
points.emplace_back(olc::vf2d{rx * 600, ry * 300}, rda * PI * 2, rra * PI * 2);
}
Use proper random number generators
Since C++11 there are proper random number generation facilities in <random>; there is no need to use the rather ugly rand().
Use std::
Teach yourself to use std:: consistently when using functions from the standard library, and avoid relying on using namespace std. Also note that using math functions from std:: has the benefit of automatically using the right overload. For example, instead of sinf(...), write std::sin(...), if the argument is a float it will pick the overload for floats. That's one less thing to worry about.
Be consistent using olc::vf2d
If you have a proper type for 2D vectors, you rarely have to pass coordinates in separate floats anymore. For example, GetDistance can be rewritten to:
float GetDistance(olc::vf2d p1, olc::vf2d p2)
{
return std::hypot(p2.x - p1.x, p2.y - p1.y);
}
And use it like so in OnUserUpdate():
float distance = GetDistance(point.position, position1);
Avoid repetition
Avoid repeating yourself where possible, even if it's small things, like the ternaries in OnUserUpdate():
if (dot >= cosf(fov1))
debug ? point.color = olc::RED : point.color = olc::WHITE;
This can be rewritten to:
if (dot >= std::cos(fov1))
point.color = debug ? olc::RED : olc::WHITE;
Split the code into more functions
The function OnUserUpdate() is quite long. Consider splitting it up into multiple functions, so that OnUserUpdate() just gives you a high-level overview of what it is doing:
bool OnUserUpdate(float elapsedTime) override
{
HandleInput(elapsedTime);
UpdateState(elapsedTime);
DrawScreen();
return true;
}
Each of these functions could be split into multiple parts as well if necessary, for example updating the state could look like:
void UpdateState(float elapsedTime)
{
CollisionDetection();
WobblePoints(elapsedTime);
}
And drawing could look like:
void DrawScreen()
{
Clear({52, 55, 54});
if (debug)
DrawDebugInformation();
DrawInformation();
DrawPoints();
DrawBody();
}
Wrapping values
If you want to clamp a variable between a low and a high value, you can use the std::max()+std::min() trick, or C++17's std::clamp(). However, you also have a few variables that you want to let wrap. You are doing this four times, so you should already have created a function for it to reduce code duplication.
Your code also checks if a value is larger than, say, PI * 2, and if so resets it to zero. However, suppose the value was actually PI * 2 + 0.1; then ideally after wrapping the result should be 0.1, not 0. You could change your code to subtract PI * 2 until it is in range, but we can avoid if and while statements altogether by making use of std::floor, like so:
float wrap(float value, float max)
{
value /= max;
value -= std::floor(value); // value is now in the range [0, 1)
value *= max;
return value;
}
And with this you can write:
point.rotationAngle = wrap(point.rotationAngle, PI * 2);
point.directionAngle = wrap(point.directionAngle, PI * 2);
point.position.x = wrap(point.position.x, 600);
point.position.y = wrap(point.position.y, 300); | {
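The same floor-based wrap translates directly to other languages. Here is a quick Python sketch (my illustration, using the same formula as the C++ above) showing the edge cases it handles:

```python
import math

def wrap(value, maximum):
    # Map value into [0, maximum) with no if/while statements.
    value /= maximum
    value -= math.floor(value)  # value is now in the range [0, 1)
    return value * maximum

# A value just past the top wraps to the remainder, not to zero:
print(wrap(2 * math.pi + 0.1, 2 * math.pi))  # ~0.1
# Negative values wrap up into range:
print(wrap(-0.25, 2 * math.pi))  # ~2*pi - 0.25
```

Values already in range pass through unchanged (up to floating-point rounding).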
"domain": "codereview.stackexchange",
"id": 41309,
"tags": "c++"
} |
How is the average distance between 2 objects orbiting around a third object calculated? | Question: I want to know the average distance between Io and Europa knowing that Io's semi-major axis (around Jupiter of course) is 421,800 km and Europa's semi-major axis is 671,100 km.
At first I thought it's the average of the closest approach (671,100 - 421,800 = 249,300 km) and the farthest approach (671,100 + 421,800 = 1,092,000 km), but the average of those is exactly 671,100 km, which is Europa's semi-major axis, and I find that hard to believe.
Assuming the orbits are co-planar and perfectly circular, which they more or less are, surely there must be a simple formula to calculate the average distance?
I asked this on another site and the answer was "run a simulation!", which is a very good answer, and I would've already done that if I knew how lol.
Answer: If we set the radius of Io's orbit to 1, then Europa's is about $a=1.591$. Since they are in a 2:1 orbital resonance, we would expect that number raised to the 3/2 power to be exactly 2. It's close (2.007), but there's enough of a difference to make it interesting; since Jupiter rotates rapidly, I'm guessing that it's related to this.
Okay, based on @JamesK's idea of keeping one moon fixed, I tried to get the analytical integral
$$\langle r_{12}\rangle = \frac{1}{2 \pi}\int_0^{2 \pi}\sqrt{(a-\cos \theta)^2 + \sin^2 \theta} \ d\theta$$
from Wolfram Alpha but I got a
Standard computation time exceeded...
message, which I've never seen before (screenshot), so I did it in Python.
While @JamesK's answer gives "the greater of the two", which here would be 1.591 (or 671,100 kilometers), I get a different value, because this is a 2D and not a 1D problem, and Pythagoras has something to say.
('ratio: ', 1.5910384068278804)
('d.mean(): ', 1.7524934914237922)
('James_K: ', 1.5910384068278804)
The Python script below returns 1.752, or about 739,200 kilometers (solid line), versus the mean projected one-dimensional distance (dashed line).
import numpy as np
import matplotlib.pyplot as plt
a = 671100. / 421800
print('ratio: ', a)
theta = np.linspace(0, 2*np.pi, 100001)[::-1] # don't double-count the endpoints
d = np.sqrt((a - np.cos(theta))**2 + np.sin(theta)**2)
print('d.mean(): ', d.mean())
theta_degs = (180/np.pi) * theta
plt.plot(theta_degs, d)
plt.xlabel('phase angle (deg)')
plt.ylabel("distance normalized to Io's SMA")
plt.plot(theta_degs, d.mean() * np.ones_like(theta_degs), '-k')
plt.plot(theta_degs, a * np.ones_like(theta_degs), '--k')
plt.ylim(0, None)
plt.show()
James_K = ((a-1.) + (a+1.)) / 2.
print('James_K: ', James_K) | {
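As a dependency-free cross-check of the figure above (my addition, not part of the original answer): expanding the integrand gives $\sqrt{a^2 + 1 - 2a\cos\theta}$, and a plain left-endpoint sum over one full period, which is very accurate for smooth periodic integrands, reproduces the ~1.752 result. (If I have the algebra right, the integral also has a closed form, $\frac{2(a+1)}{\pi}E(m)$ with $m = 4a/(a+1)^2$, where $E$ is the complete elliptic integral of the second kind.)

```python
import math

a = 671100.0 / 421800.0  # Europa's SMA in units of Io's

# Average of sqrt(a^2 + 1 - 2*a*cos(theta)) over one full period.
N = 200_000
mean = sum(
    math.sqrt(a * a + 1.0 - 2.0 * a * math.cos(2.0 * math.pi * k / N))
    for k in range(N)
) / N

print(mean)           # ~1.7525 (in units of Io's semi-major axis)
print(mean * 421800)  # ~739,200 km
```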
"domain": "astronomy.stackexchange",
"id": 4623,
"tags": "orbit"
} |
Why did LED lit up while soldering? | Question: I was soldering an LED when accidentally the soldering wire touched the LED's other Pin while the soldering iron was touching the other and the LED lit up, not bright but bright enough to be seen.
I did it again to see if it would actually light up, and it did. I noticed my bare feet were touching the ground, so I raised them to observe; the LED got dim.
Another thing I observed was that no matter what pin (cathode or anode) touched what (soldering iron or soldering wire), it lit up.
Why did it light up?
I know that heat causes particles (electrons in this case) to move (kinetic molecular theory), but do they move enough to make an LED light up? As a diode, it has a voltage threshold above which it allows current to flow; was the heat enough to get past that limiting voltage? (I don't remember the actual term.)
Why did the polarity not matter?
I have no clue about this. An LED, like any other diode, is biased, meaning it allows current to flow in only one direction, so why does this happen? Does the direction of electron movement not matter in this case?
Answer: Because the diode lit for both ways of connecting your finger and soldering tip to the diode, the soldering tip has an AC voltage on it, and one side of the AC is connected to your building ground (the floor your bare feet were touching). The solder tip is heated by a coil which, it sounds like, has 120 VAC on it. There is a short through the insulation of that coil (less than a few tens of kilohms of resistance) to the solder tip. Your feet must also have less than a few tens of kilohms to ground. This would provide the ~mA needed to light the diode.
You are lucky you didn't get a shock. Measure the voltage between your solder tip and a water pipe in your house with an AC voltmeter. You probably need a new soldering iron.
PS: Good description and wonderful experimental method in checking both polarities ... but like some of the great scientists of history, you risked your life! About 15 mA through your chest would have put your heart into fibrillation.
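To put rough numbers on the "~mA" figure in the answer, here is a back-of-the-envelope Ohm's-law estimate (all resistance values are my assumptions, not measurements):

```python
# Order-of-magnitude estimate of the leakage current path:
# mains -> heater-coil insulation fault -> solder tip -> LED ->
# solder wire -> hand/body -> bare feet -> building ground.
v_mains = 120.0  # VAC on the heater coil
r_leak = 20e3    # assumed insulation leakage resistance, ohms
r_body = 10e3    # assumed body + floor path to ground, ohms

i = v_mains / (r_leak + r_body)
print(i * 1000, "mA")  # a few mA, enough to dimly light an LED
```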
"domain": "physics.stackexchange",
"id": 24745,
"tags": "thermodynamics, electricity, light-emitting-diodes"
} |
What are the available obstacle avoidance algorithms in ROS? | Question:
Hello
What obstacle avoidance algorithms are available in ROS for implementing an unmanned ground vehicle (UGV)?
Hiba...
Originally posted by hiba on ROS Answers with karma: 3 on 2014-11-10
Post score: 0
Answer:
Hiba,
See the nav_core documentation. Specifically have a look at BaseLocalPlanner interface.
Rick
Originally posted by Rick Armstrong with karma: 567 on 2014-11-10
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 20014,
"tags": "ros, obstacle-avoidance"
} |
Selection Rules for RF and microwave transition | Question: Consider we want coupling between two hyperfine levels both in the ground state of 87Rb atom. Is the dipole transition rules still valid if we use RF or microwave?
Answer: The selection rules concern the conservation of angular momentum when considering the photon + atom system. Thus, the selection rules are the same for RF and microwave illumination. However, RF and microwave photons have orders of magnitude different frequencies and this is very relevant for using them to drive atomic transitions.
The two hyperfine levels in Rb are separated by 6.8 GHz. This is a frequency which is squarely in the realm of microwave, not RF. I consider anything between 1 MHz and a few hundred MHz to be definitely RF. Outside of that range it may also be RF, but it gets hazier. In any case, if you try to drive a hyperfine transition using RF illumination at a few hundred MHz, it will not work because you are too far off resonance. Thus, the reason you can't drive hyperfine transitions with RF isn't that it is forbidden by selection rules, but rather that it is highly suppressed spectroscopically.
RF is rather used to drive transitions between different sublevels within either the $F=1$ or $F=2$ hyperfine manifolds. | {
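A toy two-level estimate of how strong that spectroscopic suppression is (my illustration with an assumed Rabi frequency; the answer itself gives no numbers): far off resonance, the steady excited-state population scales roughly as $(\Omega/\Delta)^2$, where $\Omega$ is the Rabi frequency and $\Delta$ the detuning.

```python
f_hfs = 6.834e9   # Rb-87 ground-state hyperfine splitting, Hz
f_drive = 300e6   # "a few hundred MHz" RF drive, Hz
rabi = 1e5        # assumed 100 kHz Rabi frequency, Hz

detuning = f_hfs - f_drive
suppression = (rabi / detuning) ** 2
print(suppression)  # ~2e-10: the transition is effectively not driven
```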
"domain": "physics.stackexchange",
"id": 52152,
"tags": "quantum-mechanics, atomic-physics, quantum-optics, spectroscopy"
} |
What causes primordial gravitational wave in the early universe? | Question: Is it collision between primordial black holes or perhaps big bang itself, I read that if we can observe very first polarized light e.g. CMBR etc then inflation theory is proven. So my question is what's polarizing those very first light and how was it possible? Actually I'm more interested could gravitational wave polarize light? If so how?
P.s. kindly use equation sparingly
Answer: This is the present cosmological model, the Big Bang model:
In the very beginning, before the standard model particles with their charges and other quantum numbers appear (before 10^-32 seconds from the origin), quantum gravity is supposed to reign supreme, with a quantum mechanical particle/field called the inflaton which homogenizes everything to an accuracy of 10^-5 (taken from the cosmic microwave background radiation, which is homogeneous to that accuracy).
At the time of 10^-32 seconds the standard model particles start appearing, with charges etc., the photon among them. All particles interact gravitationally, by construction of the theory. The gravitational waves generated during the inflation period by the inflaton field will also, within the standard theory, interact with the existing soup of primordial particles.
The spatial polarization of photons will be different if they come from an interaction with a gravitational wave than with quarks or electrons, and this can be modeled and checked against the radiation patterns of the CMB. This is the objective of the BICEP experiments.
If seen, it will be a confirmation of the existence of gravitational waves, which in the present model come from the inflation period, as black holes, neutron stars, and their mergers come much later in the timeline of the Big Bang, after the formation of neutral hydrogen, which leads to stars etc.
"domain": "physics.stackexchange",
"id": 47453,
"tags": "gravitational-waves, polarization"
} |
Finding customer searches and graphing their items | Question: One of the reports I am working on requires me to locate searches a customer makes, and graph them in certain manners.
I have one query I'm particularly concerned about. This one takes the most amount of time (relative to the batch) and seems quite ugly to me.
Essentially, this should graph the search results by the day of the week and give me: the total number of searches, the average searches per week day, the number of searches that had no returned results, and the percent of searches that have no results.
SELECT
DATENAME(WEEKDAY, [AtDateTime]) AS [Day of Week]
,COUNT(*) AS [Number of Searches]
,CAST(CAST(COUNT(*) AS DECIMAL(10, 2)) / COUNT(DISTINCT CONVERT(DATE, [AtDateTime])) AS DECIMAL(10, 2)) AS [Average Searches per Day]
,SUM(CASE WHEN [NumFound] = 0 THEN 1 ELSE 0 END) AS [Number of Searches with no Results]
,CAST(CAST(SUM(CASE WHEN [NumFound] = 0 THEN 1 ELSE 0 END) AS DECIMAL(10, 2)) / COUNT(*) AS DECIMAL(10, 4)) AS [Percent of Searches with no Results]
FROM [DB].[dbo].[SearchHistory]
WHERE NOT EXISTS
(
SELECT 1 FROM [DB].[dbo].[CustomerExclusions]
WHERE [CustomerNumber] = [SearchHistory].[CustomerNumber]
AND [ExcludeFromSearch] = 1
)
GROUP BY DATENAME(WEEKDAY, [AtDateTime]), DATEPART(WEEKDAY, [AtDateTime])
ORDER BY DATEPART(WEEKDAY, [AtDateTime])
Any advice/critique is welcome.
DDL for the SearchHistory table:
CREATE TABLE [dbo].[SearchHistory](
[CustomerNumber] [char](8) NOT NULL,
[Username] [varchar](16) NOT NULL,
[AtDateTime] [datetime2](7) NOT NULL,
[Terms] [varchar](128) NOT NULL,
[NumFound] [int] NOT NULL,
[NumInStock] [int] NOT NULL
) ON [PRIMARY]
ALTER TABLE [dbo].[SearchHistory] ADD CONSTRAINT [DF_SearchHistory_AtDateTime] DEFAULT (sysdatetime()) FOR [AtDateTime]
GO
DDL for the CustomerExclusions table:
CREATE TABLE [dbo].[CustomerExclusions](
[CustomerNumber] [char](8) NOT NULL,
[ExcludeFromSearch] [bit] NOT NULL,
[ExcludeFromErrors] [bit] NOT NULL,
[ExcludeFromPageHits] [bit] NOT NULL,
CONSTRAINT [PK_CustomerExclusions] PRIMARY KEY CLUSTERED
(
[CustomerNumber] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
The indexes on SearchHistory are as follows:
CREATE NONCLUSTERED INDEX [IX_SearchHistory_CustomerNumber_Username_AtDateTime_Terms] ON [dbo].[SearchHistory]
(
[CustomerNumber] ASC,
[Username] ASC,
[AtDateTime] ASC,
[Terms] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
CREATE NONCLUSTERED INDEX [IX_SearchHistory_NumFound] ON [dbo].[SearchHistory]
(
[NumFound] ASC
)
INCLUDE ( [CustomerNumber],
[Terms]) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
CREATE NONCLUSTERED INDEX [IX_SearchHistory_NumFound_CustomerNumber] ON [dbo].[SearchHistory]
(
[NumFound] ASC,
[CustomerNumber] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
The CustomerExclusions table has no indexes on it, apart from the PK.
This is for SQL Server 2012.
There are two records in CustomerExclusions, and almost 22k records in SearchHistory. When both filters are applied, both records in CustomerExclusions match ExcludeFromSearch = 1, and ~17k records in SearchHistory are returned when the CustomerExclusions filter is applied.
Answer: Regarding performance the code looks fine. Possibly you are missing useful indexes.
As far as the code itself is concerned I have the following observations.
Remove three part names.
[DB].[dbo].[SearchHistory] and [DB].[dbo].[CustomerExclusions]
It is common to want to run queries against different versions of the database. Hardcoding the database name in to SQL makes this harder.
If this SQL lives inside a stored proc in a copy of the database, DB_copy, on the same instance as DB, you may be unaware for some time that it is in fact referencing a different database. Similarly, if the SQL is sent by an application, it is likely that most people will assume that changing the "Initial Catalog" in the connection string will change the tables being queried.
Add table aliases and use them for all columns in the correlated sub query
/*Original Code*/
FROM [DB].[dbo].[SearchHistory]
WHERE NOT EXISTS
(
SELECT 1 FROM [DB].[dbo].[CustomerExclusions]
WHERE [CustomerNumber] = [SearchHistory].[CustomerNumber]
AND [ExcludeFromSearch] = 1
)
/*Rewritten Code*/
FROM dbo.SearchHistory sh
WHERE NOT EXISTS (SELECT *
FROM dbo.CustomerExclusions ce
WHERE ce.CustomerNumber = sh.CustomerNumber
AND ce.ExcludeFromSearch = 1)
If the table CustomerExclusions does not have any columns called CustomerNumber or ExcludeFromSearch but these do exist in SearchHistory then the original correlated sub query will resolve them from the outer scope giving you incorrect results rather than an error message. Using two part names as in the second example would give an error in that case as well as making the code more explicitly self documenting. I have also removed the square brackets as they are not required and I find them distracting. For reasons of personal preference I have replaced the 1 with a *. It makes no difference to the execution plan.
Other changes.
I used a CTE to remove some of the repeated formulas scattered around the code.
I replaced your use of SUM()/COUNT(*) with AVG
It is only necessary to cast one of the operands to avoid integer division - not both.
I changed the column names returned by the query so they meet the rules for standard identifiers and don't need to be quoted.
I changed to the non standard column_alias = expression syntax as I find it makes the query easier to read when the select list contains long expressions.
WITH CTE
AS (SELECT AtDateTime,
/*Use CASE if < SQL Server 2012*/
IIF(NumFound = 0, 1, 0) AS NoResults,
DATENAME(WEEKDAY, AtDateTime) AS WeekDayName,
DATEPART(WEEKDAY, AtDateTime) AS WeekDayNumber
FROM dbo.SearchHistory sh
WHERE NOT EXISTS (SELECT *
FROM dbo.CustomerExclusions ce
WHERE ce.CustomerNumber = sh.CustomerNumber
AND ce.ExcludeFromSearch = 1))
SELECT WeekDayName,
NumberOfSearches
= COUNT(*),
AverageSearchesPerDay
= COUNT(*) / CAST(COUNT(DISTINCT CONVERT(DATE, AtDateTime)) AS DECIMAL(10, 2)),
NumberOfSearchesWithNoResults
= SUM(NoResults),
PercentOfSearchesWithNoResults
= CAST(AVG(CAST(NoResults AS DECIMAL(10, 2))) AS DECIMAL(10, 4))
FROM CTE
GROUP BY WeekDayNumber,
WeekDayName
ORDER BY WeekDayNumber | {
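A quick way to convince yourself the SUM(CASE ...)/COUNT(*) to AVG substitution is equivalent (illustrated here with Python's built-in SQLite for portability; the original code targets SQL Server):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE SearchHistory (NumFound INTEGER)")
con.executemany("INSERT INTO SearchHistory VALUES (?)",
                [(0,), (3,), (0,), (5,)])

# Original style: SUM over a CASE flag, divided by COUNT(*).
(ratio_sum,) = con.execute("""
    SELECT CAST(SUM(CASE WHEN NumFound = 0 THEN 1 ELSE 0 END) AS REAL)
           / COUNT(*)
    FROM SearchHistory
""").fetchone()

# Rewritten style: AVG of the same flag.
(ratio_avg,) = con.execute("""
    SELECT AVG(CASE WHEN NumFound = 0 THEN 1.0 ELSE 0.0 END)
    FROM SearchHistory
""").fetchone()

print(ratio_sum, ratio_avg)  # both 0.5
```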
"domain": "codereview.stackexchange",
"id": 17692,
"tags": "sql, sql-server, t-sql"
} |
Regex to match phone numbers, with comments | Question: Regular expressions are one of the worst aspects of debugging hell. Since they are contained in string literals, It's hard to comment how they work when the expressions are fairly long.
I have the following regular expression:
\b\d{3}[-.]?\d{3}[-.]?\d{4}\b
Source: RegExr.com
I commented it like this:
import java.util.regex.Pattern;
public class Main {
public static void main(String[] args) {
String regexStr = "";
regexStr += "\\b"; //Begin match at the word boundary(whitespace boundary)
regexStr += "\\d{3}"; //Match three digits
regexStr += "[-.]?"; //Optional - Match dash or dot
regexStr += "\\d{3}"; //Match three digits
regexStr += "[-.]?"; //Optional - Match dash or dot
regexStr += "\\d{4}"; //Match four digits
regexStr += "\\b"; //End match at the word boundary(whitespace boundary)
if (args[0].matches(regexStr)) {
System.out.println("Match!");
} else {
System.out.println("No match.");
}
}
}
What would be the best way to go about commenting regular expressions to make them readable for beginners? Is there a better way than what I have shown?
Answer: This is a difficult topic to pin down; there will be a lot of opinions, without any clear "right" answers. Additionally, a given match might contextually be better-commented in one fashion vs. a different fashion. That said, I will do my best to try and give rationale for making some commenting choices over others.
While I know about embedded regex comments, they tend to make things more confusing, rather than less. Using them causes subtle changes in how whitespace is treated in the regex, and they're visually rather intrusive. Unless you are allowed to pass only a regex around, with no other attendant code or comments, I would avoid using embedded regex comments. The only time I have ever used these is with regexes that were consumed by a client application (I had no other means by which to comment my expressions), and when I had to write some regexes that needed to be carried back and forth between two languages.
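For reference (my addition), here is what that embedded-comment style looks like in a language with clean support for it: Python's re.VERBOSE mode, applied to the phone-number pattern from the question. Java has an analogous Pattern.COMMENTS flag.

```python
import re

phone = re.compile(r"""
    \b          # begin match at a word boundary
    \d{3}       # area code
    [-.]?       # optional dash or dot separator
    \d{3}       # first three digits of the local number
    [-.]?       # optional dash or dot separator
    \d{4}       # last four digits
    \b          # end match at a word boundary
""", re.VERBOSE)

print(bool(phone.search("call 555-123-4567 now")))  # True
print(bool(phone.search("12-34")))                  # False
```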
Line-by-line commenting in the enclosing language, as in your selection, is the next option. Most programming/scripting environments support breaking strings up onto more than one line and commenting them. This can be somewhat less visually intrusive than directly embedded regex comments, especially given that the regex's whitespace isn't compromised in any way, but there is still additional "noise" overhead in terms of extra quotes and joining syntax (such as + signs in C# and Java, and << in C++). While the underlying strategy is not inherently a bad one, commenting every single atom of the regex is probably too extreme -- try to break the comments down into larger functional groups, and only go atom-by-atom for particularly tricky stuff. There is an inherent downside to the multiline comment scheme, though, and that is not being able to see the whole regex in one piece. What I see happen in practice is that people write the regex in a single place, then come back and re-work it into multiple lines like this as they go through and comment up their finished code. Ironically, the next person tends to wind up putting it all back into one line so they can more readily edit it. :)
I very recently wrote a shell script that did a huge number of complicated regexes. The route that I took was a hybrid -- above my sed commands, I broke down useful matching units and explained them, but the actual pipeline remained in its normal context, as in this example snippet:
#!/bin/bash
# Rename the files to reflect the class of tests they contain.
# head -n5 "$FILE" - Grab the first five lines of the file, which hold (in order) the values for key length, IV length, text length, AAD length, and tag length for all the test entries contained in that file.
# ^.* 0x(.?.) 0x(.?.) - Match the two 1-2 digit hex numbers at the end of the lines
# ibase=16; \2\1 - Put the bytes back into big-endian, and strip the 0x (prep for BC)
# { while read; do echo $REPLY | bc; done; } - Pipe each line to BC one by one, converting the hex values back to decimal
# :a - Label "a"
# N - Append another line to the buffer
# $!ba - If this is NOT the last line, branch to A
# s/\n/-/g - Replace all the newlines in the processing space with dashes
mv "$FILE" "$BASEFILE"`head -n5 "$FILE" | sed -re 's/^.* 0x(.?.) 0x(.?.)/ibase=16; \2\1/g' | { while read; do echo $REPLY | bc; done; } | sed -re ':a' -e 'N' -e '$!ba' -e 's/\n/-/g'`.mspd
The upside to this is that you get the benefit of comments tied to specific chunks of regex, while also being able to see all the parts of the regex in their full context. The downside, rather obviously, is that when you update the regex in their full context, you then have to update your comment-copy of those parts to match. In practice, however, I found it easier to alter/fix my regexes in the full context, and then fix up the comments in my "go through and comment up the finished code" phase, than to try and wrangle with the chopped-up regexes.
As with all workflows, your preferences may vary. At the very least, however, I would recommend you comment larger chunks of your regex at more logical points, like so:
import java.util.regex.Pattern;
public class Main {
public static void main(String[] args) {
String regexStr = "";
regexStr = "\\b" //Begin match at the word boundary(whitespace boundary)
+ "\\d{3}[-.]?" //Match three digits, then an optional . or - separator (area code)
+ "\\d{3}[-.]?" //Repeat (three more digits)
+ "\\d{4}" //Last four digits
+ "\\b"; //End match at the word boundary(whitespace boundary)
if (args[0].matches(regexStr)) {
System.out.println("Match!");
} else {
System.out.println("No match.");
}
}
}
(Note how the "+" are aligned with the assignment, making it easy to visually track the span, and also keeping them away from both the regex data and the comments).
Or, using my preferred method:
import java.util.regex.Pattern;
public class Main {
public static void main(String[] args) {
// \b Begin match at the word boundary
// \d{3}[-.]? Match three digits, then an optional . or -
// Repeat (first one is area code, second is first three digits of the local number)
// \d{4} Last four digits of the local number
// \b End match at the word boundary
String regexStr = "\\b\\d{3}[-.]?\\d{3}[-.]?\\d{4}\\b";
if (args[0].matches(regexStr)) {
System.out.println("Match!");
} else {
System.out.println("No match.");
}
}
} | {
"domain": "codereview.stackexchange",
"id": 7157,
"tags": "java, strings, regex"
} |
Finding a certain prefix of a string | Question: Let $\Sigma = \{ \sigma_1 , ..., \sigma_t \}$ and let $S$ be a string from $\Sigma^*$. Denote: $n=|S|$, that is $S$ has $n$ letters. I'd like to find the shortest prefix $T$ of $S$ such that $S$ is a prefix of $T^n$ ($T^n= T \cdot .... \cdot T$, $n$ times).
I tried building the suffix tree for $S$ and then checking each node level by level - from the lowest level to the top level, since the corresponding prefix is shortest at the lowest level of the suffix tree and gets longer above it - and then, for each node, checking whether this property holds for the prefix of the current suffix node I'm looking at. But that would not necessarily give me the shortest prefix, because the suffix tree may have suffixes of different lengths on the same level.
I'm not sure as to how I'll be able to find the shortest prefix $T$? Perhaps there is a different way of using a suffix tree?
Or maybe there is a way to solve this efficiently without using suffix trees at all?
Answer:
Repetitions and periods in strings constitute one of the most fundamental areas of string combinatorics [1].
If the shortest prefix $T$ of $S$ satisfies
$S = T^k \cdot Q$, with $k \ge 1$ and $Q$ some prefix of $T$, then $|T|$ is called the period of the string.
We say that string $w$ has period $p$ if $w[i] = w[i + p]$ [1].
Conventionally, finding $T$ can be done with the Knuth–Morris–Pratt algorithm as well as the Boyer–Moore string search algorithm.
Proving the correctness is beyond the scope of this answer (more precisely, beyond my scope). See [1] for a better idea.
Before I came to know this, I attempted to solve it using KMP, and later, luckily, others confirmed the approach. This could be helpful if one is wondering how one could possibly get the idea that KMP would work: I first tried to figure out the pattern and thought KMP could do the job.
Wondering how? Our aim is to make $S$ a prefix of $T^n$, i.e. $TT \cdot .... \cdot T$, $n$ times. You can see that $T$ is repeating itself; so should some prefix of $S$.
E.g. take $S = abc$, the only way you can have $TTT$ and $S$ as a prefix of $TTT$ ($T^3$) is if $T = abc$, then $abc$ is a prefix of $abcabcabc$. The minimum part which is repeating is the complete string $abc$. Hence, $T = abc$.
But, if $S = aba$, then $T$ can be $ab$, and $aba$ is a prefix of $TTT$ i.e. $ababab$. The minimum part which is repeating is just the string $ab$. Hence, $T = ab$.
You're roughly getting the idea that finding that part of prefix of $S$, where it is repeating itself in the rest of $S$ and ending with some part of it.
Final e.g. would be $S = abababa$, then again $T$ would remain $ab$, i.e, the prefix which is getting repeated and ending with some part of it. We could form $S$ as $T^k.Q: s.t. k >= 1, Q =$ some prefix of $T$, here $a$.
Now your job is pretty easy. For every prefix in $S$, check if it is present in the rest of $S$ as described above. The shortest one you find would be the answer.
KMP has an essential part called "Prefix Function". Below is the code for the Prefix function in C / C++. When I ran it over a couple of strings S, I found good results.
void make_prefix_fun(char pat[], int a[], int m){
    // i is moving forward, and j is staying behind.
    a[0] = 0 ;
    for ( int i = 1, j = 0 ; i < m ; i++ ){
        while ( pat[j] != pat[i] && j > 0 )
            j = a[j-1];
        if ( pat[j] == pat[i] )
            j++;
        a[i] = j;
    }
}
Sample Cases:
aba
a b a
0 0 1
// ^ T = S[1..(|S| - 1)]
adbashdbhabsjd
a d b a s h d b h a b s j d
0 0 0 1 0 0 0 0 0 1 0 0 0 0
// T is the complete S ^
abcdabcda
a b c d a b c d a
0 0 0 0 1 2 3 4 5
// ^. T = S[1..(|S| - 5)]
// Cool. Found something here.
abababababababababa
a b a b a b a b a b a b a b a b a b a
0 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
// Cool. Found something here. ^. T = S[1..(|S| - 17)]
abababababababababaaaa
a b a b a b a b a b a b a b a b a b a a a a
0 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 1 1 1
// Working exactly as expected. ^
Found a pattern? Yes, we do.
If you observe the array, after the first couple of consecutive 0s:
if we get the series 1, 2, 3, ..., x strictly up to the end, we can have our T = S[1..(|S| - x)], and the length of your required T is N - a[N] where N = |S|;
else T = the entire string S.
The required $T$ is the shortest prefix of $S$ that repeats itself through $S$, with $S$ ending in some prefix of $T$; in other words $T = S[1...|T|]$, where $|T|$ is called the period of the string.
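To make the recipe concrete, here is a short Python sketch (mine, not from the original answer) of the prefix function and the resulting period formula $|T| = N - a[N]$:

```python
def prefix_function(s):
    """KMP prefix (failure) function: a[i] is the length of the longest
    proper prefix of s[:i+1] that is also a suffix of s[:i+1]."""
    a = [0] * len(s)
    j = 0
    for i in range(1, len(s)):
        while j > 0 and s[i] != s[j]:
            j = a[j - 1]
        if s[i] == s[j]:
            j += 1
        a[i] = j
    return a

def shortest_repeating_prefix(s):
    # The smallest period of s is |s| - a[|s|]; the prefix of that length
    # is the shortest T such that s is a prefix of T repeated enough times.
    return s[: len(s) - prefix_function(s)[-1]]

print(shortest_repeating_prefix("aba"))        # ab
print(shortest_repeating_prefix("abcdabcda"))  # abcd
```

This reproduces the sample cases above: "aba" gives "ab", "abcdabcda" gives "abcd", and "abc" returns the whole string.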
References:
[1] Repetitions in strings: algorithms and combinatorics, Crochemore et al.
Credits:
devnum, Mihai Calancea, Hendrik Jan. | {
"domain": "cs.stackexchange",
"id": 5914,
"tags": "algorithms, optimization, strings, substrings, suffix-trees"
} |
If $f(m_1,m_2-1)=f(m_1-1,m_2)$, does it mean that $f(m_1,m_2) $ is independent of $m_1$ and $m_2$? | Question: In this article Theory of Complex Spectra. II Giulio Racah defines $f(m_{1} m_{2} ; jm)$ by
$$
\left(m_{1} m_{2} \mid j m\right)=(-1)^{j_{1}-m_{1}} f\left(m_{1} m_{2} ; j m\right)\left[\left(j_{1}+m_{1}\right) !\left(j_{2}+m_{2}\right) !(j+m) !\right]^{\frac{1}{2}} /\left[\left(j_{1}-m_{1}\right) !\left(j_{2}-m_{2}\right) !(j-m) !\right]^{\frac{1}{2}}
$$
where $\left(m_{1} m_{2} \mid j m\right)$ are the Clebsch-Gordan coefficients. Then, he shows that
$$
(j-m)(j+m+1) f\left(m_{1} ~~m_{2} ; j m+1\right)=f\left(m_{1}~~ m_{2}-1 ; j m\right)-f\left(m_{1}-1~~ m_{2} ; j m\right) \tag 1
$$
Now he claims that if we set $m=j$ in $\left(1\right)$, we see that $f\left(m_{1}~~ m_{2} ; j j\right)$ is independent of $m_{1}$ and $m_{2}$, so we may write
$$
f\left(m_{1}~~ m_{2} ; j j\right)=A_{j}.
$$
My question is: why is $f\left(m_{1}~~ m_{2} ; j j\right)$ independent of $m_{1}$ and $m_{2}$?
Answer: You appreciate that $f\left(m_{1}~~~ m_{2} ; j m\right)$ is a function of 3, not 4 variables, since $m_1+m_2=m$, and the second argument, whatever it is, is always the difference of the 4th minus the first, so can be eliminated. So, define
$$
g(m_1;j,m)\equiv f\left(m_{1}~~~ m_{2} ; j m\right),
$$
so, obviously, (1) implies that its right-hand side $g(m_1;j,m)-g(m_1-1;j,m)$ vanishes for $m\to j$,
$$
g(m_1;j,j)=g(m_1-1;j,j),
$$
That is, it is independent of the integrally-spaced $m_1$ and, hence, of the superfluous $m_2$,
so that
$$
g(m_1;j,j)=A_j.
$$ | {
"domain": "physics.stackexchange",
"id": 76084,
"tags": "quantum-mechanics, angular-momentum, mathematical-physics, representation-theory"
} |
Is there a good method to extract a reference sin/cos from a group of signals? | Question: For the removal of a pure interference from a group of sampled signals, if I know the exact frequency of the interference I can simply use an I/Q demodulation technique (i.e., a phasor projection) to calculate it and remove it from the signals of interest. But I don’t have a pure version of the interference so I need to derive it from the data.
I know this is related to clock recovery, but with limited signal record lengths any control loop (e.g., PLL or DLL) would be problematic.
So far I have simply hacked together an ad-hoc relatively long sequence of bandpass filtering, limiting, bandpass filtering, normalization for the I reference and afterwards all-pass filtering/Hilbert transform and normalization for the Q reference. It seems to work, but it’s too much of a hack. And it gets complicated if there are multiple interferers.
Is there a better/simpler way to extract these references?
If not, what would be a good alternative for simple bandpass filters? (as the requirements for these are quite stringent.)
If this was implemented in a real-time system (continuous data stream) instead, would you go the PLL/DLL route, or would this technique be good enough or even equivalent?
Although the most common method to remove these types of interference is to use a notch filter, the ringing in the impulse response of such filters is unacceptable for many applications.
This is particularly true for applications in which an intentional impulsive large artifact is present.
Answer: The best way to accomplish your desired result is to estimate the parameters of the interfering tone(s) and remove it(them) from the signal in the time domain.
To get you started on how to go about this, I recommend you read these two articles of mine:
Two Bin Exact Frequency Formulas for a Pure Real Tone in a DFT
Phase and Amplitude Calculation for a Pure Real Tone in a DFT: Method 1
This needs to be done frame by frame, so there are a whole bunch of implementation details that come into play. I'm really busy right now, so I will come back later with a fuller explanation.
For the two bin frequency formula, for best results, you will want to define your frame length on an integer + one half number of cycles. That way you only need to calculate two DFT bins. The amplitude/phase method will work on just two bins, but you will probably want to do four for that.
Given what you have said, this approach should be very plausible.
I would state that fact as: you have 250 samples per cycle. So suppose you aimed for 2 1/2 cycles; that would mean a frame length of 625 samples. Any size in that range will put the interfering bin solidly between bin 2 and 3 (zero based indexing), and those are the only DFT bins you need, so calculating a full DFT via FFT just for two bins is inefficient. The longer the duration, the more bin separation you get between frequencies, but the less well the DFT works unless the tones are steady. All the DFT cares about is sample counts, not the underlying time scale.
The proper units for frequency are cycles per frame, which corresponds to the bin index, or cycles per sample, which you have given as 1/250.
Sample code is coming, and so is a further explanation. You need to think in terms of frames, constructing a copy of your interfering tone in a separate set of buffer because you will need to have overlapped buffers. Further details also forthcoming.
Still working on my two bin coding. Hope you are too.
You want to think of your signal as a series of frames. For each frame you are going to take a reading of the interfering tone. Some frames will be good reads and some frames will be bad reads, and it is important to have a metric which tells you the quality of each read so you can set a threshold.
You will read the frequency and the amplitude and phase are best done as a cartesian complex number. You will want to interpolate these values over a span of the good frames to give you parameters to generate a copy of the interfering signal so you can subtract it from your signal.
The reconstruction can be done by overlapping frames with averaging in the time domain as I said before, or a better way is to do a higher degree approximation of the angle in the sinusoidal function.
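The frame-based read-and-subtract idea can be sketched in a few lines of Python (my own illustration, not the article's exact method; it assumes the interferer sits exactly on an integer bin of the frame, whereas an off-bin tone needs the two-bin frequency formulas referenced above):

```python
import cmath
import math

N = 600  # frame length in samples
k = 3    # interferer frequency: exactly k cycles per frame

# Signal to keep (on a different integer bin) plus an interfering tone.
wanted = [0.3 * math.sin(2 * math.pi * 7 * n / N) for n in range(N)]
tone = [0.5 * math.sin(2 * math.pi * k * n / N + 0.7) for n in range(N)]
x = [w + t for w, t in zip(wanted, tone)]

# Project the frame onto bin k to read the tone's complex amplitude:
# for a sine input A*sin(theta + phi), c = A * exp(j*(phi - pi/2)).
c = (2.0 / N) * sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))

# Rebuild the tone from the reading and subtract it in the time domain.
rebuilt = [(c * cmath.exp(2j * math.pi * k * n / N)).real for n in range(N)]
cleaned = [xi - ri for xi, ri in zip(x, rebuilt)]
```

For on-bin, steady tones the subtraction is exact up to rounding; drifting tones are where the per-frame readings and interpolation described above come in.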
This method should work quite well for a slowly varying pure tone. (and possibly the basis of a music compression algorithm?) | {
"domain": "dsp.stackexchange",
"id": 7634,
"tags": "matlab, signal-analysis, frequency"
} |
How to extract characteristics from text using machine learning? | Question: I would like to develop some kind of model/algorithm that allows me to extract the characteristics of a given product name. (let's say the brand, model and color).
I am looking for a solution similar to the one offered by MonkeyLearn and its model Laptop Feature Extract.
For example:
Given the item "Apple iPhone 6s, 64GB Silver", It should compute:
{
brand: "iPhone",
model: "6s",
capacity: "64Gb",
color: "Silver"
}
Any suggestions will be appreciated.
Thank you.
Answer: What you need to look for is called "Named Entity recognition". From Wikipedia
Named-entity recognition (NER) (also known as entity identification,
entity chunking and entity extraction) is a subtask of information
extraction that seeks to locate and classify named entity mentions in
unstructured text into pre-defined categories such as the person
names, organizations, locations, medical codes, time expressions,
quantities, monetary values, percentages, etc.
There are already trained models for that, but most of them are for generic usage. For example in Python
import spacy
nlp = spacy.load('en_core_web_sm')
doc = nlp('European authorities fined Google a record $5.1 billion on Wednesday for abusing its power in the mobile phone market and ordered the company to alter its practices')
print([(X.text, X.label_) for X in doc.ents])
The output is
[('European', 'NORP'),
('Google', 'ORG'),
('$5.1 billion', 'MONEY'),
('Wednesday', 'DATE')]
Source of code: TowardsDataScience
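The generic model above only tags broad categories (ORG, MONEY, and so on). For product attributes like capacity and color, a quick rule-based baseline can be sketched before investing in NER training (the patterns and brand/color lists below are my own illustrative assumptions, not MonkeyLearn's model):

```python
import re

# Tiny hand-made lookups; a trained NER would replace these.
KNOWN_BRANDS = {"Apple", "Samsung", "Huawei"}
COLORS = {"Silver", "Gold", "Black", "White"}

def extract_features(name):
    tokens = re.split(r"[\s,]+", name)
    features = {}
    for tok in tokens:
        if re.fullmatch(r"\d+\s*[GT][Bb]", tok):   # e.g. 64GB, 1Tb
            features["capacity"] = tok
        elif tok in COLORS:
            features["color"] = tok
        elif tok in KNOWN_BRANDS:
            features["brand"] = tok
    return features

print(extract_features("Apple iPhone 6s, 64GB Silver"))
# e.g. {'brand': 'Apple', 'capacity': '64GB', 'color': 'Silver'}
```

Such rules break quickly on messy catalog data, which is exactly why a trained NER model is the more robust route.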
In your case, you have to either train an NER yourself for phone specifications or find one that is available in public. | {
"domain": "datascience.stackexchange",
"id": 4526,
"tags": "machine-learning, python, deep-learning, machine-learning-model"
} |
Using ROS java on android for robot control | Question:
I'm brand new to ROS so bear with me:
I was hoping for some advise as to how to approach a project I am attempting. I have logic control written in Java (as an Android application) that I want to implement on a Dr. Robot Jaguar 4x4 robot (compatible with ROS Fuerte). My plan was to create a simple Android app that could just act as a passthrough converting java commands into ROS instructions, however as far as I can tell, Ros Java was designed to use Catkin and I can't figure out how to build the Jaguar drivers with Catkin. I really appreciate any suggestions.
Originally posted by navy_robots on ROS Answers with karma: 1 on 2014-09-11
Post score: 0
Answer:
You should also target at least ROS Hydro, since that is where rosjava/android really matured and became usable.
You can use a rosbuild workspace together with a catkin workspace, or you can catkinize the provided Dr. Robot package by rewriting the manifest.xml->package.xml and updating CMakeLists.txt.
There may also be some dependency issues with migrating the code from fuerte to hydro, but from secondhand experience it's not bad, just stick to the migration guides.
Originally posted by paulbovbel with karma: 4518 on 2014-09-12
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 19379,
"tags": "ros, robot, rosjava, android"
} |
Can ethyl acetate isolate cinnamaldehyde from cinnamon oil? | Question: I recently watched a video on extracting cinnamaldehyde from cinnamon using dichloromethane as a solvent. I would like to attempt the process. However, I would prefer not to use haloalkanes because of the health risks. Would ethyl acetate be a suitable replacement solvent for dichloromethane in extracting cinnamaldehyde from cinnamon oil?
Answer: Yes, ethyl acetate can be used to isolate cinnamaldehyde from cinnamon oil, though there are better non-haloalkane options. In a study that aimed to optimize the extraction of cinnamaldehyde from cinnamon powder, it was found that from methanol, ethanol, ethyl acetate, and water, methanol was the most efficient. The exact method used in the study was:
Collection of plant material: Bark of cinnamon was collected from a local market at Visakhapatnam, AP. The bark was cleaned and dried under sunlight for $24\ \mathrm{h}$. The dried bark was powdered and used as a raw material and stored in an air tight container. Cinnamon powder was sieved by using different particle sizes ranging from 354 to 125 microns.
Preparation of the extract: Cinnamon powder ($2\ \mathrm{g}$) was added with ethanol ($25\%$) and methanol ($25\%$) in different flasks and the volume was made $50\ \mathrm{mL}$. The solution was soaked for $1\ \mathrm{d}$ and $3\ \mathrm{d}$ respectively. After the soaking time, the solution was filtered using Whatman No.1 filter paper and the filtrate solution was heated to $78^\circ \mathrm{C}$ and $65^\circ \mathrm{C}$ and made up to $50\ \mathrm{mL}$ with distilled water and hexane and incubated for $2\ \mathrm{h}$. | {
"domain": "chemistry.stackexchange",
"id": 5731,
"tags": "organic-chemistry, home-experiment, solvents, extraction"
} |
SPOJ problem - The last digit of a number to a power (follow up) | Question: As a follow up to my post here. Please refer to the problem statement in that post.
I have edited this code to make it time efficient, but doing so the size of the source file has crossed the limit of 700 bytes. How do I reduce the file size?
#include <stdio.h>
int main()
{
    int t;
    scanf("%d",&t);
    while(t--)
    {
        long long int base,exponent;
        scanf("%lld%lld",&base,&exponent);
        if(exponent==0)
        {
            printf("1\n");
        }
        else
        {
            int i;
            int dig[5];
            dig[0]=1;
            for(i=1;i<=4;i++)
            {
                dig[i]=(base*dig[i-1])%10;
            }
            if(exponent%4==0)
                printf("%d\n",dig[4]);
            else if(exponent%4==1)
                printf("%d\n",dig[1]);
            else if(exponent%4==2)
                printf("%d\n",dig[2]);
            else if(exponent%4==3)
                printf("%d\n",dig[3]);
        }
    }
    return 0;
}
Answer: if(exponent%4==0)
    printf("%d\n",dig[4]);
else if(exponent%4==1)
    printf("%d\n",dig[1]);
else if(exponent%4==2)
    printf("%d\n",dig[2]);
else if(exponent%4==3)
    printf("%d\n",dig[3]);
can be written (storing value in variable)
int exp_mod = exponent%4;
if(exp_mod==0)
    printf("%d\n",dig[4]);
else if(exp_mod==1)
    printf("%d\n",dig[1]);
else if(exp_mod==2)
    printf("%d\n",dig[2]);
else if(exp_mod==3)
    printf("%d\n",dig[3]);
which can be written (replacing hard-coded value)
int exp_mod = exponent%4;
if(exp_mod==0)
    printf("%d\n",dig[4]);
else
    printf("%d\n",dig[exp_mod]);
which can be written (using a single printf)
int exp_mod = exponent%4;
int index = (exp_mod==0) ? 4 : exp_mod;
printf("%d\n",dig[index]);
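As a quick cross-check of the digit-cycle trick (an aside in Python, separate from the byte-limited C submission): last digits of b**e repeat with period dividing 4, so mapping the exponent into 1..4 reproduces pow(b, e, 10) for e >= 1.

```python
def last_digit(base, exponent):
    # exponent 0 is the special case the C code handles first
    if exponent == 0:
        return 1
    # map exponent onto 1..4, the length of the last-digit cycle
    return pow(base % 10, (exponent - 1) % 4 + 1, 10)
```

Exhaustively comparing this against direct modular exponentiation over a range of inputs confirms the cycle-of-four assumption the table dig[1..4] relies on.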
Maybe this could be written using mathematical tricks but it's already much shorter (and you could remove the index variable). | {
"domain": "codereview.stackexchange",
"id": 13178,
"tags": "c, programming-challenge"
} |
Detect when costmap2DROS is updated | Question:
I am using a Costmap2DROS object to access a occupancy grid created with gmapping.
I want to execute some algorithm on the map whenever it gets updated. I figured the easiest way to do this is to have a subscriber to the "/map" topic with a callback that executes the algorithm.
I am worried about the overhead associated with the subscriber. I don't want the "/map" data to be cached in some queue somewhere, since it is being cached by Costmap2DROS.
I guess I would want a queue_size = zero.
I don't want ROS to send the occupancyGrid message to my callback function because I don't need it. I can get the data from Costmap2DROS.
Originally posted by Sebastian on ROS Answers with karma: 363 on 2015-08-07
Post score: 0
Answer:
That's an interesting case.
You could subscribe to the topic_tools::ShapeShifter msg on the map topic, which would AFAIK not deserialize the message automatically.
Another simple solution would to boost::bind to your subscription callback but discard the _1 argument. Something like:
class Node
{
    void init(){
        // topic name and queue size are required by subscribe();
        // they were omitted in the original sketch
        nh.subscribe<nav_msgs::OccupancyGrid>("map", 1, boost::bind(&Node::mapCb, this));
    }
    void mapCb(){
        //do stuff
    }
};
I'm not 100% if this would skip the (de)serialization, or merely prevent the shared object from being created.
Originally posted by paulbovbel with karma: 4518 on 2015-08-07
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Sebastian on 2015-08-07:
Am I supposed to be subscribing to the /map topic directly if I want to do something like this?
It really seems like costmap2DROS should provide some way of achieving this behavior. Isn't updating after new map data common design decision?
Comment by paulbovbel on 2015-08-12:
FYI the costmap's publish and update frequencies are actually separate parameters. If you're looking to insert something into the costmap's update loop, writing a costmap_2d plugin may be the way to go.
Comment by paulbovbel on 2015-08-12:
This is the route I went down with frontier_exploration, and I can tell you that it was the wrong approach. Just read the map data from the costmap object periodically, and run your algorithm as needed. If your frequencies are synced, you may OCCASIONALLY get a stale or dropped frame.
Comment by Sebastian on 2015-08-14:
Ok, I think I will go with the periodic route, for now at least. Thanks. | {
"domain": "robotics.stackexchange",
"id": 22399,
"tags": "ros, navigation, costmap, roscpp, costmap-2d-ros"
} |
What is the difference between subvolcanic rocks and plutonic rocks? | Question: I'm studying Plutonic (Intrusive) rocks and I really don't understand the difference between Plutonic Rocks that form in Sills and Dikes from the so called subvolcanic rocks, also known as a hypabyssal rocks. Aren't they the same? Why?
Answer: The main difference between a subvolcanic and a plutonic rock is the depth at which the rock solidified from its molten state. Plutonic implies a greater depth than subvolcanic by definition, though there is likely some overlap between the shallow end of plutonic emplacement and subvolcanic emplacement. The depth of emplacement for plutonic rocks is mostly much greater than 2 km.
Plutonic is an igneous rock formed by solidification at considerable depth beneath the earth's surface.
Subvolcanic rock, also known as a hypabyssal rock, is an intrusive igneous rock that is emplaced at medium to shallow depths (<2 km) within the crust. | {
"domain": "earthscience.stackexchange",
"id": 1645,
"tags": "geology, rocks, magmatism"
} |
Checking whether a number is a Smith number | Question: I have created a Java program to check if the number entered by the user is a Smith number (a number whose sum of digits equals the sum of the digits of its prime factorization). The program works fine but takes too long to check numbers like 4937775. How do I reduce the time taken by it to execute?
import java.util.*;
class check
{
    static boolean a;
    static boolean checkprime(long n)
    {
        int i, factors= 0;
        for(i = 1;i<=n ;i++)
        {
            if(n%i==0)
                factors++;
        }
        if(factors == 2)
            return true;
        else
            return false;
    }
    static long sumfinder(long n)
    {
        long sum = 0;
        while(n!=0)
        {
            sum = sum + n%10;
            n= n/10;
        }
        return sum;
    }
    static boolean checksmith(long n)
    {
        long sum = 0;
        if(checkprime(n)== true)
        {
            System.out.println(n + " is a prime number ");
            return false;
        }
        else
        {
            System.out.println(n + " is not a prime number ");
            //generate prime factors:
            long num = n;
            outer:
            for(long i = 1; i<= num ; i++)
            {
                if(checkprime(i)== true)
                {
                    if(num%i== 0)
                    {
                        sum= sum+ sumfinder(i);
                        num = num/i;
                        i = 1;
                        if (num == 1)
                        {
                            break outer;
                        }
                    }
                }
            }
        }
        System.out.println(sum);
        System.out.println(sumfinder(n));
        if (sumfinder(n)== sum)
            return true;
        else
            return false;
    }
    static void display()throws InputMismatchException
    {
        Scanner sc = new Scanner(System.in);
        System.out.println("Enter a number");
        long n = sc.nextInt();
        if(checksmith(n)== true)
        {
            System.out.println("Smith number");
        }
        else
            System.out.println("Not a Smith number");
    }
    public static void main(String [] args)
    {
        display();
    }
}
Answer:
class check
Java naming convention is that class names should start with an uppercase letter. Check is also not very informative: what does it check?
static boolean a;
Is this used anywhere?
static boolean checkprime(long n)
Again, Java naming convention is that (a) camel-case should be used (checkPrime instead of checkprime); (b) Boolean-valued properties should generally have names beginning is (isPrime instead of checkPrime). In this case isPrime reads quite naturally in context (e.g. if (isPrime(n)) ...).
int i, factors= 0;
for(i = 1;i<=n ;i++)
{
if(n%i==0)
factors++;
}
if(factors == 2)
return true;
else
return false;
Why give i such a wide scope?
Why use factors at all? If you find a divisor which is not 1 or n you can return false immediately.
What's going on with the whitespace in for(i = 1;i<=n ;i++)? The semicolon separates expressions: I can't understand why you'd want whitespace inside the expression i = 1 but not after the semicolon which separates the 1 from i<=n.
There are much faster ways to test primality. Aside from the comments made in another answer about only testing up to sqrt(n), look around for discussion on BPSW or just use BigInteger.isProbablePrime.
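To make the speed-up concrete, here is a sketch (in Python for brevity, not the poster's Java) of the whole Smith test using trial division only up to sqrt(n), which already makes numbers like 4937775 instant:

```python
def digit_sum(n):
    return sum(int(d) for d in str(n))

def is_smith(n):
    if n < 2:
        return False
    target, m, factor_digits = digit_sum(n), n, 0
    d = 2
    while d * d <= m:
        while m % d == 0:          # peel off each prime factor with multiplicity
            factor_digits += digit_sum(d)
            m //= d
        d += 1
    if m == n:                     # no factor found: n is prime, not a Smith number
        return False
    if m > 1:                      # leftover prime factor larger than sqrt(n)
        factor_digits += digit_sum(m)
    return factor_digits == target
```

The same structure translates directly to Java; the essential changes versus the original are dividing out factors as they are found and stopping trial division at sqrt of the remaining cofactor.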
static long sumfinder(long n)
Again, names. If the name of a method is a noun, I expect it to describe the object returned: but the long returned is not a sum-finder. It's the sum. I suggest that a better name for this method would be digitSum.
{
long sum = 0;
while(n!=0)
{
sum = sum + n%10;
n= n/10;
}
return sum;
}
I understand this to be at the level of throwaway code rather than production: if it were production code, I'd expect to see either some code to handle the case n < 0 or a comment to explain why that is unnecessary.
static boolean checksmith(long n)
See previous comments on names.
Existing answers propose some better ways to write the body of this method; I want to focus as much on general points of style as on algorithms, although I do have one or two points to add there too.
long sum = 0;
Which sum? There are two relevant sums, right?
if(checkprime(n)== true)
== true is a crime against KISS.
System.out.println(n + " is a prime number ");
That's an unexpected side effect.
//generate prime factors:
I'm not sure what exactly the scope of this comment is, but I wouldn't describe the following loop as generating prime factors.
outer:
?? There's no inner loop, so what purpose does this label serve?
for(long i = 1; i<= num ; i++)
That should strike terror into the heart of anyone who wants their program to terminate in a reasonable time. There are three possible cases: (a) i could perfectly well be an int; (b) i gets incremented by more than 1 elsewhere; (c) this loop will execute more than $2^{31}$ times. Here it's not case (b), so which of the other two is it?
(Aside: the rewrites correctly observe that you only need to test up to sqrt(n), but if n is a long that could still be about $2^{31.5}$. It might be worth looking at faster factorisation methods, starting with Pollard's rho).
if(checkprime(i)== true)
{
if(num%i== 0)
In addition to previous comments, why check whether i is prime? That check is slower than the num % i check, and if the second check succeeds then the first one must also for basic number-theoretic reasons.
{
sum= sum+ sumfinder(i);
Here you could have an early-abort if the partial digit sum of prime factors already exceeds the digit sum of the original number. That same idea carries over into the suggested rewrites of the method.
if (sumfinder(n)== sum)
return true;
else
return false;
Those six lines could be simplified to return sumfinder(n) == sum;
static void display()throws InputMismatchException
Um. If the method is called display, I expect it to do output. Why is it throwing an input-related exception? | {
"domain": "codereview.stackexchange",
"id": 28744,
"tags": "java, time-limit-exceeded, primes"
} |
Interfaces of ROS classes for mocking and dependency injection | Question:
I am trying to write unit tests for my class that interacts with various ROS ServiceClient and SimpleActionClient. Since I'm fairly confident in ROS and don't want to handle the indeterminacy of network latency in my unit tests, I plan to have my unit tests cover the interface between my class and ROS only.
Hence, I would need to mock the ServiceClient and SimpleActionClient. In order to mock, the things I am mocking need to implement an interface (lest I be forced to use this uglier method).
My question is, does the ServiceClient and SimpleActionClient implement any interface that I can mock? If not, is there any best practices for mocking common ROS classes?
My current implementation is to use dependency injection to allow me to separate and swap out the concrete ROS stuff with (hopefully) mocked ROS stuff
PayloadController::PayloadController(
ros::ServiceClient& service_client,
actionlib::SimpleActionClient<...>& action_client)
PS:: To my blessing, tf2_ros::Buffer implements the BufferInterface which allowed me to mock and unit test code that used TF2 easily
Originally posted by Rufus on ROS Answers with karma: 1083 on 2020-11-25
Post score: 0
Answer:
I resorted to writing my own wrapper to achieve my needs. Below is an example of wrapping a SimpleActionClient
#pragma once
#include <actionlib/client/simple_action_client.h>
#include "actionlib/client/action_client.h"
#include "actionlib/client/simple_goal_state.h"
#include "actionlib/client/simple_client_goal_state.h"
#include "actionlib/client/terminal_state.h"
template <class ActionSpec>
class SimpleActionClientInterface
{
protected:
ACTION_DEFINITION(ActionSpec);
public:
virtual void sendGoal(const Goal& goal) = 0;
virtual void waitForResult() = 0;
virtual actionlib::SimpleClientGoalState getState() = 0;
};
template <class ActionSpec>
class SimpleActionClientWrapper : public SimpleActionClientInterface<ActionSpec>
{
protected:
ACTION_DEFINITION(ActionSpec);
public:
SimpleActionClientWrapper(actionlib::SimpleActionClient<ActionSpec>& client) :
client(client)
{};
virtual void sendGoal(const Goal& goal)
{
client.sendGoal(goal);
};
virtual void waitForResult()
{
client.waitForResult();
};
virtual actionlib::SimpleClientGoalState getState()
{
return client.getState();
};
private:
actionlib::SimpleActionClient<ActionSpec>& client;
};
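As an aside, the same inject-an-interface pattern is nearly free in a dynamically typed language, which can clarify the design goal: the controller depends only on the client's call surface, so a test double slots straight in. A Python sketch using the standard library's unittest.mock (names like PayloadController are illustrative, not rosjava/roscpp APIs):

```python
from unittest import mock

class PayloadController:
    """Depends on anything exposing send_goal / wait_for_result / get_state."""
    def __init__(self, action_client):
        self.action_client = action_client

    def deliver(self, goal):
        self.action_client.send_goal(goal)
        self.action_client.wait_for_result()
        return self.action_client.get_state()

# In a unit test, inject a mock instead of a real action client.
client = mock.Mock()
client.get_state.return_value = "SUCCEEDED"
controller = PayloadController(client)

assert controller.deliver({"x": 1}) == "SUCCEEDED"
client.send_goal.assert_called_once_with({"x": 1})
```

In C++ the wrapper above plays the role that duck typing plays here: it gives gmock (or a hand-rolled fake) a virtual interface to implement.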
Originally posted by Rufus with karma: 1083 on 2021-01-18
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 35805,
"tags": "ros-melodic"
} |
How does Gradient Descent treat multiple features? | Question: As far as I know, when you reach the step, in a gradient descent algorithm, to calculate step_size, you calculate learning_rate * slope
Now, slope is obtained by calculating the derivative of the cost_function with respect to the feature you want to find the optimal coefficient for.
Let's say that the cost function for the purposes of this question is the sum of squared residuals.
My question is, how are coefficients of other features treated in the differentiation of the equation? For instance, if I have the equation $y = b_0 + x_1 + x_2$, then by calculating the derivative of the cost function with respect to $x_1$, one gets:
$\frac{d}{d\left(x_1\right)}\left(\left(\hat{y}\:-\:\left(b_0\:+\:x_1+x_2\right)\right)^2\right) =$
$-2\times\:\left(\hat{y}\:-\:b_{0\:}-x_1-x_2\right)\left(\frac{d}{d\left(x_1\right)}(x_2)\:+\:1\right)$
In this case, how is a value obtained by substituting a value for $x_1$ while $\frac{d}{d(x_1)}(x_2)$ is still in the formula?
I watched a YouTube video (it starts at the right point) that says that $x_2$ is a constant (while it's a different feature) and, therefore, when differentiating, $\frac{d}{d(x_1)}(x_2)$ is omitted and we are left with $-2\times \:\left(\hat{y}\:-\:b_{0\:}-x_1-x_2\right)(1)$. Is this the case or am I missing something?
Answer: That's correct. The derivative of $x_2$ with respect to $x_1$ is 0.
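This is easy to confirm numerically: the analytic partial derivative that treats the other variable as a constant agrees with a finite-difference estimate. A sketch (my own, with hypothetical numbers; the optimized variables are written as weights w1, w2, the usual setup):

```python
# Squared residual for y_hat = b0 + w1*x1 + w2*x2 at fixed data values.
y, b0, x1, x2 = 5.0, 1.0, 2.0, 3.0
w1, w2 = 0.4, 0.7

def cost(w1, w2):
    return (y - (b0 + w1 * x1 + w2 * x2)) ** 2

# Analytic partial w.r.t. w1: the w2 term differentiates to zero.
analytic = -2 * (y - (b0 + w1 * x1 + w2 * x2)) * x1

# Central finite difference in w1 only, holding w2 fixed.
h = 1e-6
numeric = (cost(w1 + h, w2) - cost(w1 - h, w2)) / (2 * h)
```

The two values match to numerical precision, which is exactly the statement that the cross-term vanishes.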
A little context: with words like derivative and slope, you are describing how gradient descent works in one dimension (with only one feature / one value to optimize). In multiple dimensions (multiple features / multiple variables you are trying to optimize), we use the gradient and update all of the variables in each step. That said, yes, this is basically equivalent to separately updating each variable in the one-dimensional way that you describe. | {
"domain": "cs.stackexchange",
"id": 17668,
"tags": "machine-learning, gradient-descent"
} |
'*' present in the print statement-PYTHON | Question: for i in range(1,int(input())+1):
    print(((10**i -1)//9)*((10**i -1)//9))
print('lol just need to fill one more line to post the question IGNORE IT')
What is the meaning of '*' in between ((10**i -1)//9) and ((10**i -1)//9) in the above print statement?
Answer: Let's first look at what you are printing:
((10**i -1)//9)*((10**i -1)//9)
#that is the math task, from which you are printing the result
You asked what * means.
This is the multiplication operator...
Let's look at the other things, so you get a better understanding of what is going on
result = 10**i - 1
Here you are doing 10 to the power of i and then minus one.
The ** operator means to the power of...
So 2**4 means 2^4 or two to the power of four.
Then you are dividing the result of that task by 9: (result)//9
You are using //, which is the floor division. This will cut off the decimal places.
The right side after the * is just the same as the first one.
You multiply them with the * operator.
result = (10**i -1)//9
print( result * result ) #Multiplication
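As a side illustration (my own addition, not something the original snippet requires): the factor (10**i - 1)//9 produces the "repunits" 1, 11, 111, ..., so the full expression prints their palindromic squares:

```python
for i in range(1, 5):
    repunit = (10 ** i - 1) // 9   # 10**i - 1 is 9, 99, 999, ...; //9 gives 1, 11, 111, ...
    print(repunit * repunit)       # 1, 121, 12321, 1234321
```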
I hope that this helped you. | {
"domain": "codereview.stackexchange",
"id": 37950,
"tags": "python, python-3.x, community-challenge"
} |
Why are two Higgs doublets required in SUSY? | Question: I can't really understand why two Higgs doublets are required in SUSY.
From the literature, I have found opaque explanations that say something along the lines of: the superpotential $W$ must be a holomorphic function of the chiral supermultiplets and thus we need to introduce another chiral supermultiplet.
I also know that this is somehow related to an anomaly cancellation. Can somebody help to make this a bit more clear?
Answer: The anomalies in four dimensions are calculated from a triangular Feynman diagram with a chiral (left-right-asymmetric, when it comes to the couplings with the gauge bosons or gravitons) fermion running in the loop and three gauge bosons (and/or graviton[s]) attached at the vertices. For the Standard Model, all the gauge anomalies cancel (both leptons and quarks must be included, otherwise it wouldn't work: it's somewhat nontrivial although the cancellation may be reduced to one simple observation in GUT theories, among other fast methods to see why it works). They must also cancel in the supersymmetric models and in order to see what these anomalies are, we may look at the difference between the anomalies in the non-supersymmetric and supersymmetric models.
The Minimal Supersymmetric Standard model has almost the same spectrum of chiral spin-1/2 particles, the quarks and leptons: their anomalies cancel. Their new superpartners are scalars which don't contribute to the anomaly. The new superpartners of gauge bosons are Majorana fermions which are left-right-symmetric and contribute zero to the anomalies, too.
The only new particles in the supersymmetric theory that may be running in the loop that contribute to the anomaly are higgsinos, the superpartners of the Higgs boson (the whole doublet). The anomaly (for various combinations of gauge bosons) from one higgsino, one new Weyl fermion, is nonzero. It must be cancelled because gauge anomalies are inconsistencies (preventing us from decoupling negative-probability time-like polarizations of gauge bosons).
So the MSSM deals with that by adding two opposite higgsinos whose charges are opposite to each other so all the anomalies cancel in between them.
There's one more supersymmetric reason why we need two Higgs doublets: the Yukawa couplings must be holomorphic, arising from a superpotential $W=y\cdot h \bar q_L q_R$, and when one distinguishes chiral superfields and antichiral superfields (their complex conjugates), one finds that only the up-type quarks (or only the down-type quarks) could get masses from one Higgs doublet (the charges wouldn't add up to zero if you added the opposite quarks). So the opposite, complex conjugate Higgs doublet superfield (whose higgsinos have the opposite handedness for the same sign of the supercharge and the weak isospin) is needed to give masses to the remaining one-half of the quarks.
"domain": "physics.stackexchange",
"id": 5602,
"tags": "supersymmetry, higgs, quantum-anomalies"
} |
Why is 1,2,3,4,5,8-hexahydronaphthalene more stable than 1,4,4a,5,8,8a-hexahydronaphthalene? | Question: Why is 1,2,3,4,5,8-hexahydronaphthalene (1) more stable than 1,4,4a,5,8,8a-hexahydronaphthalene (2)?
A follow-up question is about stability being estimated by counting the α-hydrogens.
In the first case, do we consider α-hydrogens individually for each double bond, or are common hydrogens only taken once? If it is the former, then the answer would be 12, or else 8. Which one is correct?
If it is the latter, both have an equal number of hyperconjugating structures, i.e. H = 8. Therefore the stability of both must be equal. Is this wrong?
Answer: The number of hyperconjugating structures for a pi-bond is the same as the total number of alpha-hydrogens associated with it.
We consider alpha-hydrogens individually for each double bond when counting hyperconjugating structures.
In the first compound, there would be 12 hyperconjugating structures.
This is because alpha-hydrogens that are common to both pi-bonds hyperconjugate with each of them, contributing two hyperconjugating structures each. So the total number of hyperconjugating structures is 12.
However, if you only need to count the number of alpha-H, then the answer is given by considering the common hydrogens only once. So the number of alpha-H is 8, where the 4 alpha-H at C1 and C2 contribute 2 hyperconjugating structures each.
However, the second compound has both its pi-bonds at its ends, each having 4 hyperconjugating alpha-hydrogen interactions associated with it. So the total number of hyperconjugating structures in this case is 8.
Hence, (1,2,3,4,5,8-hexahydronaphthalene) is more stable than (1,4,4a,5,8,8a-hexahydronaphthalene). | {
"domain": "chemistry.stackexchange",
"id": 14518,
"tags": "organic-chemistry, stability, hyperconjugation"
} |
Change in centripetal acceleration if tangential acceleration is non-zero | Question: I recently read about circular motion. They showed that acceleration $\vec a$ of a object in circular motion is given by
$$\vec a = -\omega^2r\vec e_r + \frac{dv}{dt}\vec e_t$$
where $r$ is the radius, $\omega$ is the angular velocity, $v$ is the magnitude of velocity, $\vec e_r = \hat i\cos \theta + \hat j\sin \theta$ and $\vec e_t = -\hat i \sin \theta + \hat j\cos \theta$ are the unit vectors along radius and tangent respectively.
The textbook showed that in uniform circular motion, since the speed doesn't change (i.e., $\frac{dv}{dt} = 0$), the acceleration reduces to $-\omega^2r\vec e_r$. Then I wondered about non-uniform circular motion: is it possible to have a circular path even with changing speed? I got the answer in a question on this site. It says that the centripetal acceleration must change in accordance with the speed to keep the path circular. So my question is: by how much does the centripetal acceleration have to change to keep the path circular if the acceleration along the tangent is non-zero?
I tried to find out. But I was not even able to find out from where to start.
Answer: The centripetal acceleration is $r\omega^2$ provided by a force $mr\omega^2$.
If the trajectory is to be a circle the radius $r$ must stay constant whilst $\omega$ is changing.
If it were a satellite orbiting the Earth such a change could not happen as for the same radius of orbit (equal to the separation between satellite and Earth) the gravitational force of attraction between the Earth and the satellite would need to change.
In terms of a rocket attached to a merry-go-round it can happen because in such a case the force applied to the rocket by the merry-go-round could increase in order to compensate for the increasing speed of the rocket (and merry-go-round).
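Differentiating $a = v^2/r$ at constant $r$ gives $\dot a = (2v/r)\,dv/dt$; a quick numerical check with illustrative made-up values ($r = 2$, $v(t) = 3t$, so $dv/dt = 3$) confirms this:

```python
r = 2.0

def v(t):
    return 3.0 * t          # made-up speed profile, dv/dt = 3

def a(t):
    return v(t) ** 2 / r    # centripetal acceleration v^2 / r

t, eps = 1.5, 1e-6
numeric = (a(t + eps) - a(t - eps)) / (2 * eps)   # numerical da/dt
closed_form = 2 * v(t) / r * 3.0                  # (2v/r) * dv/dt
print(abs(numeric - closed_form) < 1e-6)  # True
```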
As $a=\frac{v^2} {r}$ the rate of change of centripetal acceleration is $\dot a = \frac{2v}{r}\frac {dv}{dt}$. | {
"domain": "physics.stackexchange",
"id": 85559,
"tags": "newtonian-mechanics, acceleration, velocity"
} |
Could not find a package configuration file provided by "cv_bridge" | Question:
I created my catkin package like so:
catkin_create_pkg dcp_processing sensor_msgs cv_bridge roscpp std_msgs image_transport
I don't edit anything, then I compile the package:
catkin_make
I get that error the cv_bridge is missing. I didn't add or edit anything in the package so everything is by default as it is created with catkin_create_pkg.
I'm using Ubuntu 16.04 Kinetic ROS Distribution.
CMake Warning at /opt/ros/kinetic/share/catkin/cmake/catkinConfig.cmake:76 (find_package): Could not find a package configuration file provided by "cv_bridge" with any of the following names:
cv_bridgeConfig.cmake
cv_bridge-config.cmake
Add the installation prefix of "cv_bridge" to CMAKE_PREFIX_PATH or set "cv_bridge_DIR" to a directory containing one of the above files. If "cv_bridge" provides a separate development package or SDK, be sure it has been installed. Call Stack (most recent call first): dcp_processing/CMakeLists.txt:10 (find_package)
-- Could not find the required component 'cv_bridge'. The following CMake error indicates that you either need to install the package with the same name or change your environment so that it can be found. CMake Error at /opt/ros/kinetic/share/catkin/cmake/catkinConfig.cmake:83 (find_package): Could not find a package configuration file provided by "cv_bridge" with any of the following names:
cv_bridgeConfig.cmake
cv_bridge-config.cmake
Add the installation prefix of "cv_bridge" to CMAKE_PREFIX_PATH or set "cv_bridge_DIR" to a directory containing one of the above files. If "cv_bridge" provides a separate development package or SDK, be sure it has been installed. Call Stack (most recent call first): dcp_processing/CMakeLists.txt:10 (find_package)
-- Configuring incomplete, errors occurred! See also "/home/jaouadros/catkin_ws/build/CMakeFiles/CMakeOutput.log". See also "/home/jaouadros/catkin_ws/build/CMakeFiles/CMakeError.log". Makefile:640: recipe for target 'cmake_check_build_system' failed make: *** [cmake_check_build_system] Error 1
Originally posted by ROSkinect on ROS Answers with karma: 751 on 2017-10-03
Post score: 0
Original comments
Comment by gvdhoorn on 2017-10-03:\
I get that error the cv_bridge is missing.
well, is it? Do you have ros-kinetic-cv-bridge installed? What is the output of rospack find cv_bridge?
Adding it to your package as a dependency does not automagically make it available.
Comment by ROSkinect on 2017-10-03:
Yes I know but I did install ros full version so as I don't miss any package.
I get [rospack] Error: package 'cv_bridge' not found
Answer:
Yes I know but I did install ros full version so as I don't miss any package.
Installing ros-$distro-desktop-full does not result in all packages getting installed. The various metapackages provide you with a convenient way to install a select set of them, but definitely not all. See REP-142: ROS Indigo and Newer Metapackages - Desktop variants.
I get
[rospack] Error: package 'cv_bridge' not found
That would seem to indicate that cv_bridge is not installed on your machine. Check with dpkg -l | grep cv-bridge. If that also doesn't result in cv_bridge being found, then install it with (on Ubuntu):
sudo apt-get install ros-kinetic-cv-bridge
and try again.
Edit: cv_bridge should be installed as part of vision_opencv which is (through perception) part of desktop_full, so not having cv_bridge would be strange if you did in fact install desktop_full.
Originally posted by gvdhoorn with karma: 86574 on 2017-10-03
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by ROSkinect on 2017-10-03:
dpkg -l | grep cv-bridge doesn't result anything so it is not installed. I just installed it and it works. Thanks! I want to know why it didn't get installed the first time when I installed ROS? usually I don't need to installed cv_bridge separately.
Comment by gvdhoorn on 2017-10-03:
I don't know why it didn't get installed. What is the output of dpkg -l | grep desktop-full?
Comment by ROSkinect on 2017-10-03:
It doesn't show anything just like with dpkg -l | grep cv-bridge
Comment by gvdhoorn on 2017-10-03:
Then I guess you didn't "install ros full version" as you claimed earlier. Only ros-kinetic-desktop-full would install cv_bridge automatically.
Comment by ROSkinect on 2017-10-03:
I just checked with this: grep ros-kinetic ~/.bash_history I found that I launched sudo apt-get install ros-kinetic-desktop-full
Comment by gvdhoorn on 2017-10-03:
According to everything I can find and verify desktop_full should install cv_bridge, so I'm not sure what happened.
You could see whether /var/log/apt/history.log or /var/log/apt/term.log contains anything that can shed some light.
Comment by Sam123 on 2022-03-10:
hi @gvdhoorn
I'm getting same error Could not find a package configuration file provided by "cv_bridge" with any of the following names
cv_bridgeConfig.cmake
cv_bridge-config.cmake
i am working with ROS2 foxy ,18.04.1-Ubuntu on AWS ec2 server.
after using this command $ dpkg -l | grep cv-bridge
output shows:ii libcv-bridge1d:amd64 1.12.3+ds-2 amd64 cv_bridge Robot OS package
ii python-cv-bridge 1.12.3+ds-2 amd64 cv_bridge ROS package - Python bindings
when i use this command $ dpkg -l | grep desktop-full
it shows nothing. | {
"domain": "robotics.stackexchange",
"id": 28989,
"tags": "cv-bridge"
} |
Basic PHP template loader | Question: For a project, I need to write a basic template manager that will be responsible for loading the requested HTML or PHP template when the user clicks on a menu link. It's like a page loader, since all the loaded templates are static HTML pages. I want to improve it if possible. Here is the code.
On the main index of the project I have jQuery code like this:
<script>
$(document).ready(function(){
$('.contents').load('templates/home.php');
$(document).on('click','a',function(e){
e.preventDefault();
$('#nav-modal').modal('hide');
var url = $(this).attr('href');
if(url == 'booking/'){
window.location.href= 'booking/';
} else {
$.ajax({
type: 'GET',
url: 'templates/TemplateController.php'+url,
cache: false,
dataType: 'html',
success: function(response){
$('.contents').empty()
.html(response);
}
});
}
});
});
</script>
On the PHP 'controller' side I have this code, which will only call a class to obtain the needed files. The URL passed from AJAX to the controller is something like this: ?tpl=atemplate
<?php
require_once 'Autoloader.php';
if(isset($_GET['tpl'])){
$tpl = new TemplateLoader;
echo $tpl->render($_GET['tpl']);
}
?>
And then in the simple class, I have only a switch control that will select what is the requested template to load on the index
<?php
class TemplateLoader{
public function __construct(){
#$this->path = $path;
}
public function render($tpl){
switch($tpl){
case 'home':
echo file_get_contents('home.php');
break;
case 'about':
echo file_get_contents('about.php');
break;
case 'services':
echo file_get_contents('services.php');
break;
case 'contacts':
echo file_get_contents('contacts.php');
break;
case 'prices':
echo file_get_contents('prices.php');
break;
}
}
}
?>
Answer: Technical mistakes
echo void
echo $tpl->render($_GET['tpl']);
This echo will display an empty string (null), because you've already output the string with echo inside the method, which has a void return type (an implicit return null). Make the render() method return a string, because you want to defer output as much as you can; otherwise you won't be able to do anything with it afterwards (including headers management). Eventually you might want to use a PSR-7 response object with a stream and plug yourself into some framework - a class with a method that returns a string can be left untouched (Open-Closed Principle) and wrapped with an object that turns the string into a stream.
closing tag
?>
Simply don't use it at the end of a file. It causes hard-to-debug problems with undesired output - more here.
Algorithm
The render method works on hardcoded strings, which is rarely a good idea. You can make it more generic, as the logical pattern is quite distinct: append the path and file extension to the given name and get the file contents if the file exists. The only thing you need to be careful about is a path traversal attack - if you allow only lowercase letters it won't be possible. Some error handling will also be required.
OOP
Api
First of all, your class API is perfect. Giving only the template name, without implying that a file system is involved, makes it an encapsulated abstraction, and that's what OOP is about - don't be fooled by magic tricks and utilities. It is really the hard part, one that often goes into the area of so-called "unknown unknowns" if you get it wrong, and it is really easy to do so.
Naming
You can write a book about that, so I'll point a few things:
Avoid naming your classes with names ending with -or or -er. It fits interfaces pretty well - they're focused on delivering something specific (from client's perspective) in unspecified way.
On the other hand avoid mentioning IO gateway names in data focused interfaces. It fits class names - they're interface implementations delivering something by talking to concrete IO mechanism.
Your TemplateLoader is a concrete kind of loader, but you should be able to switch it with an implementation that delivers strings from a database or from memory (when testing the class that uses it). You won't see the point while the global namespace is the client using it, but once additional layers wrap it up, swapping implementations won't require messing with code beside the place where object composition happens.
A render() method usually implies that some data processing is involved - here you are simply returning the template contents.
Refactored
Since this class is small (as it should be), I've written a refactored version based on the things I've pointed out, with type hints for PHP 7:
interface TemplateLoader
{
/**
* @param string $templateName
*
* @throws InvalidArgumentException|TemplateNotFoundException
*
* @return string
*/
public function contents(string $templateName): string;
}
class TemplateFiles implements TemplateLoader
{
private $templatesDirectory;
public function __construct(string $templatesDirectory = '') {
$this->templatesDirectory = $templatesDirectory;
}
public function contents(string $templateName): string
{
$filename = $this->templateFilename($templateName);
return file_get_contents($this->existingFile($filename));
}
private function templateFilename(string $templateName): string
{
$file = $this->validName($templateName) . '.php';
return rtrim($this->templatesDirectory, '/') . '/' . $file;
}
private function validName(string $template): string
{
if (preg_match('/[^a-z]/', $template)) {
throw new InvalidArgumentException('...');
}
return $template;
}
private function existingFile(string $filename): string
{
if (!file_exists($filename)) {
throw new TemplateNotFoundException('...');
}
return $filename;
}
}
You would have to implement the two exceptions and put the method call into a try-catch block. There are two of them because one may come from a potentially malicious request (you might want to log these), while the other may be a dev error or a user bypassing the UI (it's not a control-flow exception, which should be avoided).
"domain": "codereview.stackexchange",
"id": 31166,
"tags": "php, template, ajax"
} |
Water and Ice - density | Question: We know that ice has a lower density than water despite both having the same [molecular] mass. I know that as water turns to ice, it expands. As far as I was taught, I know that it has something to do with hydrogen bonding. I would like to know in terms of hydrogen bonding, why ice actually expands?
Answer: The most straightforward answer is that water in the form of ice I (or $\ce{I_h}$) has an open hexagonal structure (see figure) whereas liquid water does not. In ice the oxygens are precisely tetrahedrally positioned where each oxygen is hydrogen bonded by four neighbouring oxygens with an O..O distance of approx 0.28 nm. The H atoms lie very closely along the O-O axes. Two of the H bonds are short, as in liquid water, and two are long. This is expected from residual entropy calculations and also from thermal and neutron diffraction experiments.
As this ice structure has a rather open lattice, this is the reason its density at the freezing point is less than that of liquid water, rather than it being due to a high density of water. One could argue that the reason the density of water is a maximum at $4\,^\circ\mathrm{C}$ (277 K) is that the structure of liquid water is changing from that of ice $\ce{I_h}$ to the short-range structure of the liquid.
Liquid water is, of course, also highly hydrogen bonded but the extra energy it contains means that its structure is far less regular and so becomes slightly denser than ice. As the water molecules can move about more in the liquid than in ice, the hydrogen bonds will bend a little or break as molecules rotate. This will lead to a shorter O - O distance and hence slightly higher density in liquid than in the solid.
see also answers to this post Why really is ice less dense than water?
This figure shows the hexagonal ice structure; The O atoms are shown as dots and the lines are the (almost linear) O-H-O bonds.(Image from Murrell & Jenkins 'Properties of liquids & solutions')
(Ice also exhibits eight different phases depending on temperature and pressure of which $\ce{I_h}$ is the one found at normal atmospheric pressure. In these other phases, angles are distorted from being tetrahedral) | {
"domain": "chemistry.stackexchange",
"id": 6498,
"tags": "water, hydrogen-bond"
} |
Definite energy states for a single non-relativistic particle with a time dependent potential | Question: Do definite energy states exist for a single particle when its potential itself changes with time?
I tried solving it and the equations seem to show that they do not exist. But then I am confused as to what energies will be observed when it is measured. What does the expectation value of energy mean in this case?
As a specific example, consider a 1D infinite potential well with $V(x,t)=t$. What energies are observed and with what probability when the system's energy is measured?
Answer: In this case, the eigenstates of the Hamiltonian are not useful to solve the problem, and one has to work with the Schrödinger equation directly:
$$
i\hbar \, \partial_t \psi(x,t)=\frac{-\hbar^2}{2m}\partial_x^2\psi(x,t)+P\,t\,\psi(x,t)
$$
Using a Fourier transform in the variable $x$ you can show that the general solution is
$$
\psi(x,t) = \frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}} \hat{\psi}(k,0)\,\exp\left(\frac{-i\hbar\,k^2\,t}{2m} - \frac{i\,P\,t^2}{2\hbar}+ikx \right)\,dk
$$
where $\hat{\psi}(k,0)\in L^2(\mathbb{R})$ is the Fourier transform of the initial condition $\psi(x,0)$. Note that the kernel of the integral operator
$$
\exp\left(\frac{-i\hbar\,k^2\,t}{2m} - \frac{i\,P\,t^2}{2\hbar}+ikx \right)
$$
is an eigenfunction of the Hamiltonian $\hat{H}(t)$ for all $t$, with eigenvalue
$$
E(t) = \frac{\hbar^2k^2}{2m} + Pt
$$
This is because $[\hat{H}(t),\hat{H}(t')]=0$ for different times $t$ and $t'$. When you make a measurement at time $t$, the state will collapse to an instantaneous eigenvector of $\hat{H}(t)$. Suppose now you have an eigenvector $\psi(x,t_0)$ of $\hat{H}(t_0)$ with eigenvalue $E(t_0)$; how will it evolve? Using the formula from before I get
$$
\psi(x,t) = \psi(x,t_0) \, \exp\left(\frac{-i\,E(t_0)\,(t-t_0)}{\hbar} -\frac{i\,P\,(t-t_0)^2}{2\hbar} \right)
$$
So $\psi$ will be an instantaneous eigenvector of $\hat{H}(t)$ for all times. In this sense the state is "stationary", since you will always find the state of the system in the corresponding eigenvector, but the value of energy I measure will depend on time since the eigenvalues of the Hamiltonian depend on time too.
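A quick numerical sanity check of the plane-wave kernel above, in units where $\hbar = m = P = 1$ (my own illustration: a free plane wave on the whole line, ignoring the infinite well's boundary conditions), confirms it satisfies $i\hbar\,\partial_t\psi = -\frac{\hbar^2}{2m}\partial_x^2\psi + Pt\,\psi$:

```python
import cmath

# Units chosen so hbar = m = P = 1 (illustrative only).
hbar = m = P = 1.0
k = 1.3

def psi(x, t):
    # kernel exp(i k x - i hbar k^2 t / (2 m) - i P t^2 / (2 hbar))
    phase = k * x - hbar * k ** 2 * t / (2 * m) - P * t ** 2 / (2 * hbar)
    return cmath.exp(1j * phase)

x, t, eps = 0.7, 0.4, 1e-4

# lhs: i hbar d(psi)/dt, by central finite difference
lhs = 1j * hbar * (psi(x, t + eps) - psi(x, t - eps)) / (2 * eps)

# rhs: -hbar^2/(2m) d2(psi)/dx2 + P t psi
d2x = (psi(x + eps, t) - 2 * psi(x, t) + psi(x - eps, t)) / eps ** 2
rhs = -hbar ** 2 / (2 * m) * d2x + P * t * psi(x, t)

print(abs(lhs - rhs) < 1e-5)  # True
```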
I'd like to add that for time-dependent Hamiltonians with the property $[\hat{H}(t),\hat{H}(t')]=0$ the evolution operator can be written
$$
\hat{U}(t_1,t_0) = \exp\left(-\frac{i}{\hbar} \int_{t_0}^{t_1} \hat{H}(t) \, dt \right)
$$ | {
"domain": "physics.stackexchange",
"id": 16102,
"tags": "energy, time, schroedinger-equation, potential"
} |
$\mathrm{BPEXP} = \mathrm{BPP} \iff \mathrm{BPEE} = \mathrm{BPE}$ | Question: Concerning about a wide variety of complexity classes, I have come up with the above conjecture.
Please, establish the claim in the title formally.
Answer: This is true because both sides of the $\iff$ are false.
$\mathrm{BPEXP}\neq \mathrm{BPP}$ because if not, then $\mathrm{BPP} = \mathrm{EXP}$ and by padding, we have $\mathrm{BPEXP} = \mathrm{EEXP} \neq \mathrm{EXP} = \mathrm{BPP}$
$\mathrm{BPEE}\neq \mathrm{BPE}$ because if not, then by padding, we have $\mathrm{BPEEXP} = \mathrm{BPEXP}$, this implies that $\mathrm{BPEXP} = \mathrm{EEXP}$, and by padding again, we have $\mathrm{BPEEXP} = \mathrm{EEEXP} \neq \mathrm{EEXP} = \mathrm{BPEXP}$ | {
"domain": "cs.stackexchange",
"id": 12101,
"tags": "complexity-theory"
} |
Vector-like type for type that can't be moved or copied | Question: I was in need of a vector-like type that could hold types that couldn't be copied or moved so I implemented one. I tried to make it similar to standard STL containers.
Is the use of the macro to throw the exception for the at member function legitimate to prevent code repetition between the const and non const version of at?
I didn't include all the code as there is a lot of repetitions for getters, and I don't have support for allocators yet.
Do you see design or implementation flaws?
Any advice to improve it?
#define FIXED_BUFF_THROW_OUT_RANGE(pos, size) (throw std::out_of_range("out of range : " + std::to_string(pos) + " >= " + std::to_string(size) + " in " + std::string(__PRETTY_FUNCTION__)))
//__PRETTY_FUNCTION__ is a clang/gcc extension similar to __func__ but with the full function signature
template<typename T>
class fixed_buffer {
public:
using value_type = T;
using size_type = std::size_t;
using difference_type = std::ptrdiff_t;
using reference = value_type&;
using const_reference = const value_type&;
using iterator = value_type*;
using const_iterator = const value_type*;
private:
static constexpr bool is_noexcept_destructible = noexcept(std::declval<value_type>().~value_type());
private:
iterator _begin;
iterator _end;
iterator _alloc;
public:
fixed_buffer() noexcept {
_begin = nullptr;
_end = nullptr;
_alloc = nullptr;
}
explicit fixed_buffer(size_type size) {
_begin = static_cast<iterator>(std::aligned_alloc(alignof(value_type), sizeof(value_type) * size));
_end = _begin;
_alloc = _begin + size;
}
fixed_buffer(const fixed_buffer&) = delete;
fixed_buffer operator =(const fixed_buffer&) = delete;
fixed_buffer(fixed_buffer&& other) noexcept {
_begin = other._begin;
_end = other._end;
_alloc = other._alloc;
other._begin = nullptr;
other._end = nullptr;
other._alloc = nullptr;
}
fixed_buffer& operator= (fixed_buffer&& other) noexcept(is_noexcept_destructible) {
if (_begin) {
clear();
free(_begin);
}
_begin = other._begin;
_end = other._end;
_alloc = other._alloc;
other._begin = nullptr;
other._end = nullptr;
other._alloc = nullptr;
return *this;
}
reference operator [](size_type pos) {
return _begin[pos];
}
const_reference operator [](size_type pos) const {
return _begin[pos];
}
reference at(size_type pos) {
if (pos < size())
return this->operator[](pos);
FIXED_BUFF_THROW_OUT_RANGE(pos, size());
}
const_reference at(size_type pos) const {
if (pos < size())
return this->operator[](pos);
FIXED_BUFF_THROW_OUT_RANGE(pos, size());
}
template<typename ... Ts>
void remplace_at(size_type pos, Ts&& ...ts) {
_begin[pos].~value_type();
new(_begin + pos) value_type(std::forward<Ts>(ts)...);
}
template<typename ... Ts>
void emplace(Ts&& ... ts) {
new(_end++) value_type(std::forward<Ts>(ts)...);
}
void pop_back() {
back().~value_type();
_end--;
}
void clear() noexcept(is_noexcept_destructible) {
for (iterator it = _begin; it != _end; it++) {
it->~value_type();
}
_end = _begin;
}
void swap(fixed_buffer& other) noexcept(is_noexcept_destructible) {
fixed_buffer tmp = std::move(other);
other = std::move(*this);
*this = std::move(tmp);
}
size_type size() const noexcept {
return _end - _begin;
}
size_type capacity() const noexcept {
return _alloc - _begin;
}
iterator begin() noexcept {
return _begin;
}
const_iterator begin() const noexcept {
return _begin;
}
iterator end() noexcept {
return _end;
}
const_iterator end() const noexcept {
return _end;
}
reference front() noexcept {
return *_begin;
}
const_reference front() const noexcept {
return *_begin;
}
reference back() noexcept {
return _end[-1];
}
const_reference back() const noexcept {
return _end[-1];
}
~fixed_buffer() noexcept(noexcept(std::declval<value_type>().~value_type())) {
if (_begin) {
clear();
free(_begin);
}
}
};
#undef FIXED_BUFF_THROW_OUT_RANGE
Answer: Your code is quite good, with some room for improvement:
Be more expressive
Uninitialized memory management isn't the best-known or the most often used area of C++. It does even look arcane to many C++ programmers. What I suggest is to make it more accessible:
for "placement destructor call", there already is a standard function: std::destroy_at, which is arguably more readable than pointer->~Type(). And there's a range version (std::destroy) that would fit very well in your destructor.
for placement new, you could write a mirror function, construct_at.
Then your remplace_at becomes very clear:
std::destroy_at(_begin+pos);
construct_at (_begin+pos, std::forward<Args>(args)...);
I would also advise against negative array subscription, like return _end[-1] which, although correct, is unsettling. More generally, you should be consistent in the use of your iterators: use either subscription or pointer-like syntax. My personal taste would be:
// front()
return *_begin; // or better: return *begin() once you've defined it
// back()
return *std::prev(_end);
Be more concise
There's a lot of code repetition you could avoid.
You can now initialize your class variables where you declare them, so there is no need to do it in your default constructor, which can remain default:
private:
iterator _begin = nullptr;
iterator _end = nullptr;
iterator _alloc = nullptr;
You can use swap to define your assignment operator and your value constructor, instead of doing it the other way around:
fixed_buffer(fixed_buffer&& other) noexcept {
swap(other, *this);
}
// ...
fixed_buffer& operator= (fixed_buffer&& other) noexcept(is_noexcept_destructible) {
// other is correctly destroyed at the end of this function,
// taking care of previous *this resources
auto tmp = std::move(other);
swap(tmp, *this);
return *this;
}
You could keep your pointers inside a struct to make it trivially swappable - you wouldn't even have to write swap then.
Macros should disappear from your code. If you want to avoid code duplication, write a function that computes the right message and pass its results to the exception's constructor:
std::string out_of_range_message(std::size_t pos, std::size_t size, const std::string& function_name);
// ...
// inside `at`
throw std::out_of_range(out_of_range_message(pos, size, __PRETTY_FUNCTION__)); | {
"domain": "codereview.stackexchange",
"id": 31678,
"tags": "c++, c++11"
} |
Is rosdoc supposed to be working in groovy with catkin pakages? | Question:
Is rosdoc supposed to be working in groovy with catkin pakages? I tried to use it but got meesages like:
WARN: Package 'lrs_msgs' does not have a valid manifest.xml file, manifest information will not be included in docs
But catkin packages do not have a manifest.xml file.
Or should I generate the documentation in some other way?
Originally posted by TommyP on ROS Answers with karma: 1339 on 2013-01-31
Post score: 0
Answer:
Rosdoc has been replaced by rosdoc_lite, which is automatically run for packages in repositories that have rosinstall files listed in the rosdistro repository.
Originally posted by joq with karma: 25443 on 2013-01-31
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by TommyP on 2013-01-31:
I want to generate private documentation. I cannot find rosdoc_lite in my groovy installation. So how am I supposed to install it?
Comment by joq on 2013-01-31:
Please ask that as a separate question, so others can find it in the future.
Comment by TommyP on 2013-01-31:
Now asked as a separate question. | {
"domain": "robotics.stackexchange",
"id": 12659,
"tags": "ros, rosdoc, ros-groovy, catkin"
} |
Current Status of the Monte Carlo Sign Problem | Question: I've been reading about the Monte Carlo sign problem, and I am a little confused about its current status. Specifically, after reading this post
When is the "minus sign problem" in quantum simulations an obstacle?
I am confused about whether or not we can simulate "fermi hamiltonians away from special symmetry points". For example, in this article, the sign problem is solved for "a class of lattice field theories involving massless fermions". However, in this paper by Ceperley and Wagner, they discuss a first-principles Monte Carlo for correlated electron systems, and make no mention of the former solution. In this paper by Ferris, an unbiased Monte Carlo is introduced that "has been shown to mitigate the sign problem given a sufficiently large bond dimension". Finally, there is also Majorana Monte Carlo, which uses the Majorana representation to simulate "a class of spinless fermion models on bipartite lattices at half filling and with arbitrary range of (unfrustrated) interactions".
So my question is this: the sign problem in Monte Carlo seems to be partially solved at this point. Not only can we simulate simple fermionic systems, but recent progress in the field has led us to understand more complex and varied models. Therefore, at this point in time, what are the limitations of Monte Carlo in simulating fermionic systems? That is, what can tensor network techniques like DMRG and PEPS do that QMC can't?
Answer: The most general sign problem was shown to be NP hard http://arxiv.org/abs/cond-mat/0408370, so there is no general solution in sight, and indeed no general solution expected.
This does not prevent us from solving specific sign problems, or studying models that are sign-free when written down in suitable variables (the attractive Hubbard model, for example).
However, the most interesting sign problems remain unsolved. This includes, in particular, the repulsive Hubbard model and QCD at finite baryon density.
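As a toy illustration of why a small average sign is fatal (my own sketch, not taken from the papers cited above): with reweighting one estimates $\langle O\rangle = \langle O s\rangle/\langle s\rangle$, so everything hinges on how far $\langle s\rangle$ sits from zero.

```python
import random

def average_sign(n_samples, flip_prob, rng):
    """Toy model: each sampled configuration carries sign -1 with
    probability flip_prob, +1 otherwise; return the sample mean of s."""
    return sum(-1 if rng.random() < flip_prob else 1
               for _ in range(n_samples)) / n_samples

rng = random.Random(42)
mild = average_sign(10_000, 0.05, rng)    # <s> near 0.9: reweighting is cheap
severe = average_sign(10_000, 0.49, rng)  # <s> near 0.02: dividing by almost 0
```

Since the relative error of the ratio scales like $1/\langle s\rangle$, and $\langle s\rangle$ typically decays exponentially with system size and inverse temperature, brute-force reweighting needs exponentially many samples, consistent with the NP-hardness result.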
Some interesting models can be studied using work-arounds, like DMRG or DMFT, but one should remember that these are approximate methods. | {
"domain": "physics.stackexchange",
"id": 32469,
"tags": "simulations, computational-physics, fermions, quantum-statistics"
} |
Finding similar values within a list of lists in Python | Question: I am working on a machine learning problem with object detection. For now, I am trying to find GPS coordinates that are close to other GPS coordinates. If they are close, I want to make note of it by index. So in my example below, with test data, these two areas are not actually close to one another, so their 'close_points_index' should be just their index. But my actual data set has ~100k observations.
This code is slow with 100k observations. I am looking for some help optimizing this code as I can get correct output but would like it if someone could point out any inefficiencies.
My data looks like:
[{'area_name': 'ElephantRock', 'us_state': 'Colorado', 'url': 'https://www.mountainproject.com/area/105746486/elephant-rock', 'lnglat': [38.88463, -106.15182], 'metadata': {'lnglat_from_parent': False}}, {'area_name': 'RaspberryBoulders', 'us_state': 'Colorado', 'url': 'https://www.mountainproject.com/area/108289128/raspberry-boulders', 'lnglat': [39.491, -106.0501], 'metadata': {'lnglat_from_parent': False}}]
My code solution is below. I avoided using two for loops, but realize that a map() is likely just syntactic sugar for a for loop. Note that latLongDistance I assume is fairly optimized, but if not I don't mind. My focus is on my findClusters() function.
from math import cos, asin, sqrt, pi
from functools import partial
def latLongDistance(coord1, coord2):
    lat2 = coord2[0]
    lat1 = coord1[0]
    lon1 = coord1[1]
    lon2 = coord2[1]
    p = pi/180
    a = 0.5 - cos((lat2-lat1)*p)/2 + cos(lat1*p) * cos(lat2*p) * (1-cos((lon2-lon1)*p))/2
    kmDistance = 12742 * asin(sqrt(a))
    return kmDistance
def findClusters(listOfPoints, thresholdValueM = 800):
    coords = [x['lnglat'] for x in listOfPoints]
    for index, data in enumerate(listOfPoints):
        lngLat = data['lnglat']
        modifiedLLDistance = partial(latLongDistance, coord2 = lngLat)
        listOfDistances = list(map(modifiedLLDistance, coords))
        meterDistance = [x*1000 for x in listOfDistances]
        closePoints = [i for i in range(len(meterDistance)) if meterDistance[i] < thresholdValueM]
        listOfPoints[index]['close_points_index'] = closePoints
    return listOfPoints
After the function is run, see below. Note that these have multiple indices as I ran this output on the actual data set. If I were to run just these two points, their indices should be [0] and [1] respectively.
[{'area_name': 'ElephantRock', 'us_state': 'Colorado', 'url': 'https://www.mountainproject.com/area/105746486/elephant-rock', 'lnglat': [38.88463, -106.15182], 'metadata': {'lnglat_from_parent': False}, 'close_points_index': [0]}, {'area_name': 'RaspberryBoulders', 'us_state': 'Colorado', 'url': 'https://www.mountainproject.com/area/108289128/raspberry-boulders', 'lnglat': [39.491, -106.0501], 'metadata': {'lnglat_from_parent': False}, 'close_points_index': [1]}]
I've experimented with a few things, but am coming up short. Primarily, I am a bit inexperienced with finding speed increases as I am relatively new to Python. Any critical input would be helpful. I have not posted here so let me know if I need some more information for it to be reproducible.
Answer: First some general comments
In the dataset, the key lnglat is confusing, because the data is clearly latitude and then longitude. That is just asking for a bug.
lng, lon, and long are all used as abbreviations for longitude in the code, pick one.
Take a look at PEP8 to see what people expect to see when looking at Python code, e.g., latlongdistance or lat_long_distance
Use sequence unpacking:
lat1, lon1 = coord1
list(map(....)) is a bit of an anti-pattern. The whole point of using map is to generate values as needed rather than create a list of all the values. If you want a list, many people find a list comprehension clearer.
enumerate() works in comprehensions too:
closePoints = [i for i, distance in enumerate(meterDistance)
               if distance < thresholdValueM]
Each of listOfDistances and meterDistance create a long list of distances (100k of them), only to discard most of the distances when creating closePoints. Use a generator expression to avoid creating the lists.
Instead of multiplying each distance by 1000, divide thresholdvalue by 1000 just once outside of the for-loop. That's 1 division instead of 10G multiplications (100k loops and 100k multiplies in the list comprehension).
The code calculates each distance twice. For example, in the first loop iteration it calculates the distance from the first coord to the second coord, then on the second loop iteration it calculates the distance from the second to the first.
So something like this would be somewhat more efficient (untested code).
def findclusters(points, threshold=800):
    coords = [x['lnglat'] for x in points]
    # convert the threshold to kilometres to match latlongdistance
    threshold /= 1000
    for index, data in enumerate(points):
        lnglat = data['lnglat']
        # every point is trivially close to itself
        points[index].setdefault('close_points_index', []).append(index)
        # this is a generator expression; only later points are considered,
        # so each pair is computed once
        distances = (latlongdistance(lnglat, coord) for coord in coords[index + 1:])
        for i, d in enumerate(distances, index + 1):
            if d < threshold:
                points[index]['close_points_index'].append(i)
                points[i].setdefault('close_points_index', []).append(index)
    return points
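Going further than in-place fixes means only comparing each point against nearby candidates via a spatial index. Below is a hedged, stdlib-only sketch using a uniform grid of buckets; a KD-tree would be the more standard tool, the function names here are my own, and the longitude cell width assumes the data sits in a modest band of latitudes:

```python
import math
from collections import defaultdict

def haversine_km(a, b):
    """Same great-circle formula as latLongDistance above, for (lat, lon) pairs."""
    (lat1, lon1), (lat2, lon2) = a, b
    p = math.pi / 180
    h = (0.5 - math.cos((lat2 - lat1) * p) / 2
         + math.cos(lat1 * p) * math.cos(lat2 * p)
         * (1 - math.cos((lon2 - lon1) * p)) / 2)
    return 12742 * math.asin(math.sqrt(h))

def find_clusters_grid(points, threshold_m=800):
    """Bucket points into cells at least threshold-sized, so any pair within
    threshold_m lies in the same cell or an adjacent one."""
    threshold_km = threshold_m / 1000
    mean_lat = sum(p['lnglat'][0] for p in points) / len(points)
    lat_cell = threshold_km / 111.0                                 # ~111 km per degree of latitude
    lon_cell = threshold_km / (111.0 * math.cos(math.radians(mean_lat)))
    grid = defaultdict(list)
    for idx, p in enumerate(points):
        lat, lon = p['lnglat']
        grid[(int(lat // lat_cell), int(lon // lon_cell))].append(idx)
    for p in points:
        lat, lon = p['lnglat']
        ci, cj = int(lat // lat_cell), int(lon // lon_cell)
        # only the 3x3 block of neighbouring cells needs a real distance check
        close = [j
                 for di in (-1, 0, 1)
                 for dj in (-1, 0, 1)
                 for j in grid.get((ci + di, cj + dj), ())
                 if haversine_km(p['lnglat'], points[j]['lnglat']) * 1000 < threshold_m]
        p['close_points_index'] = sorted(close)
    return points
```

Building the grid is O(n), and each query only touches the handful of points in nine cells, so for spread-out data the total work is close to linear rather than quadratic.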
But the biggest efficiency issue is that findClusters() has O(n^2) complexity. The for index, data loop runs for each point in listOfPoints. Inside the loop, each of these lines also loops over the entire list.
listOfDistances = list(map(modifiedLLDistance,coords))
meterDistance = [x*1000 for x in listOfDistances]
That's n * n. To get significant speedups, a different approach is needed. There are various data structures that can be built in O(n * log n) time and then queried to find nearby points. I mentioned KDTrees in a comment to the question, but there are others. | {
"domain": "codereview.stackexchange",
"id": 42080,
"tags": "python, performance, python-3.x"
} |
A comparison of extractors in terms of tradeoffs between time, randomness and space ? | Question: Is there a good survey that compares different extractors, concentrators and superconcentrators and lays out the best methods in terms of the tradeoff between randomness, time and space ?
Answer: The default reference is Ronen Shaltiel's survey. This predates the important results of [Barak-Impagliazzo-Wigderson '04], [Barak-Kindler-Shaltiel-Sudakov-Wigderson '05], [Barak-Rao-Shaltiel-Wigderson '06] etc. I believe Anup Rao's Ph.D. thesis is a good recent reference that describes these developments. | {
"domain": "cstheory.stackexchange",
"id": 6,
"tags": "reference-request, randomness, derandomization"
} |
Adding 3 electron spins | Question: I've learned how to add two 1/2-spins, which you can do with C-G-coefficients. There are 4 states (one singlet, three triplet states). States are symmetric or antisymmetric and the quantum numbers needed are total spin and total z-component.
But how do you add three 1/2-spins? It should yield 8 different eigenstates. Which quantum numbers do you need to characterise the 8 states?
It is not as easy as using C-G coefficients and the usual quantum numbers, as for the total momentum the doubly degenerate 1/2 state and the quadruply degenerate 3/2 state can describe only 6 of the 8 states. You will need an additional quantum number for the degeneracy.
So how do you get the result?
(I actually tried it out myself with a large 8x8 matrix. The total spin 1/2 states are each doubly degenerate. For the additional quantum number I chose the cyclic permutation. Spin 1/2 states are neither symmetric nor antisymmetric. But what is the usual way to derive this?)
EDIT: For reference I'm adding my results for up to 4 spins from some time ago:
If you recall the basics of quantum mechanics with matrices it is actually a straightforward matrix diagonalization and requires no specialized knowledge. However, you still need to find an additional operator which breaks degeneracy. I chose the cyclic permutation, which seems to do the job. Please refer to the below answer, since I haven't checked all details.
Answer: I looked in Edmonds, which is usually the standard reference, and he doesn't mention any standard approach at breaking the degeneracy.
You need two linearly independent $s=1/2,\,m=1/2$ solutions, and you can get three different solutions by first coupling one of the three different pairs to the singlet $s=0$ state and then adding an up state. This yields the three vectors $\newcommand{\ket}[1]{\left|#1\right\rangle}$
$$\ket{\psi_1}={1\over\sqrt{2}}\left(\ket{\uparrow\uparrow\downarrow}-\ket{\uparrow\downarrow\uparrow}\right),$$
$$\ket{\psi_2}={1\over\sqrt{2}}\left(\ket{\downarrow\uparrow\uparrow}-\ket{\uparrow\uparrow\downarrow}\right),$$
$$\ket{\psi_3}={1\over\sqrt{2}}\left(\ket{\uparrow\downarrow\uparrow}-\ket{\downarrow\uparrow\uparrow}\right),$$
which add to zero so only two are linearly independent.
Edmonds shows, in particular, that there is a unitary transformation linking any of the three representations linked to the three vectors above (which is of course no surprise) and that this unitary transformation is independent of spatial orientation (which is not automatic but by the Wigner-Eckart theorem ought to happen). He then goes on to define appropriate invariant transformation coefficients (the Wigner $6j$ symbols) and spends a good deal of time exploring them, but he doesn't say how to (canonically) break the degeneracy.
If it's a basis you want, then take any two of the three above. If you need (like you should!) an orthonormal basis, then you can take linear combinations like
$$\ket{\psi_{23}}={1\over\sqrt{6}}\left(\ket{\uparrow\uparrow\downarrow}-2\ket{\downarrow\uparrow\uparrow}+\ket{\uparrow\downarrow\uparrow}\right)$$
which obeys $\langle\psi_1|\psi_{23}\rangle=0$.
However, I don't think there is any way to treat the problem symmetrically in the three electrons. I had a quick go and I think one can prove there are no linear combinations of the three states that are symmetric or antisymmetric w.r.t. all three electron exchanges.
One way to see this is noting that you have three linearly dependent, unit-norm vectors that span a two-dimensional vector space and sum to zero. This is like having three unit vectors on a plane, symmetrically arranged at $120^\circ$ to each other. (The analogy is precise: the Gram matrices, $G_{ij}=\langle\psi_i|\psi_j\rangle=-\frac12+\frac32\delta_{ij}$, coincide, and these encode all the geometrical information about any set of vectors - see problem 8.5 in these notes by F. Jones at Rice.) There is then no way to choose a basis for the plane that is symmetric in the three "electron" exchanges, i.e. one whose symmetry group is the same as the three original vectors, including all three reflections.
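The claimed Gram matrix can be checked directly by expanding the three states in the 8-dimensional product basis (a quick stdlib sketch; the three-letter strings label the kets $|s_1 s_2 s_3\rangle$ with 'u'/'d' for up/down):

```python
import math

s = 1 / math.sqrt(2)
# The three singlet-pair states, stored as sparse dicts {basis ket: amplitude}.
psi = [
    {'uud':  s, 'udu': -s},   # |psi_1> = (|uud> - |udu>)/sqrt(2)
    {'duu':  s, 'uud': -s},   # |psi_2> = (|duu> - |uud>)/sqrt(2)
    {'udu':  s, 'duu': -s},   # |psi_3> = (|udu> - |duu>)/sqrt(2)
]

def inner(a, b):
    """<a|b> for real-amplitude states stored as sparse dicts."""
    return sum(a.get(k, 0.0) * b.get(k, 0.0) for k in set(a) | set(b))

gram = [[inner(a, b) for b in psi] for a in psi]
```

Each entry comes out as $G_{ij}=-\frac12+\frac32\delta_{ij}$, and the three states sum to zero componentwise, matching the picture of three unit vectors at $120^\circ$ in a plane.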
On the other hand, there are two approaches to this problem that do retain some of the exchange symmetry. One is to form an electron-exchange invariant resolution of the identity, of the form
$$
\frac{2}{3}\sum_{j=1}^3\ket{\psi_j}\langle\psi_j|=1|_{S={1\over2},m= +{1\over2}} $$
This also holds for the three vectors in the plane and expresses the fact that they form a tight vector space frame for $\mathbb{R}^2$. This is also a consequence of Schur's lemma, as both vector spaces carry irreducible representations of the exchange group of three electrons; the sum above is the Haar integral over the orbit of any one state and commutes with all matrices in the representation.
The other approach is due to the OP, who provided this image (with slight errors), and which I'll write in full here for completeness. An alternative basis for the plane, which does play well with the electron exchange group - though not as symmetric as one might wish - is to use a complex-valued basis (which is of course perfectly all right) and which corresponds to the circular polarization basis if we think of the plane as the Jones vectors for the polarization of an EM wave. In this analogy, the vectors in the image represent polarizations about those directions. Circular polarization is then invariant - up to a phase - under rotations, but individual electron exchange reflections will flip left$\leftrightarrow$right circular polarizations.
To cut the waffle, the trick in the plane is to take as basis vectors
$$
\mathbf{e}_L=\begin{pmatrix}1\\i\end{pmatrix}
=\frac23\sum_{j=1}^3 e^{\frac{2\pi i}{3}(j-1)}v_j
\text{ and }
\mathbf{e}_R=\begin{pmatrix}1\\-i\end{pmatrix}
=\frac23\sum_{j=1}^3 e^{-\frac{2\pi i}{3}(j-1)}v_j.
$$
These are taken to each other, up to a phase, by the reflections, and to themselves up to a phase by the rotations.
Similarly, for the three electrons you can take the combinations
$$
|\psi_+\rangle
=\frac{1}{\sqrt{3}} \left[\ket{\uparrow\uparrow\downarrow}+e^{2\pi i/3}\ket{\uparrow\downarrow\uparrow}+e^{-2\pi i/3}\ket{\downarrow\uparrow\uparrow}\right]
=\frac{\sqrt{2}}{3}e^{-i\pi/6}\sum_{j=1}^3e^{-\frac{2\pi i}{3}(j-1)}|\psi_j\rangle
$$
and
$$
|\psi_-\rangle
=\frac{1}{\sqrt{3}} \left[\ket{\uparrow\uparrow\downarrow}+e^{-2\pi i/3}\ket{\uparrow\downarrow\uparrow}+e^{2\pi i/3}\ket{\downarrow\uparrow\uparrow}\right]
=\frac{\sqrt{2}}{3}e^{+i\pi/6}\sum_{j=1}^3e^{+\frac{2\pi i}{3}(j-1)}|\psi_j\rangle
$$
which are eigenvectors of the cyclic permutations with eigenvalue $e^{\pm 2\pi i/3}$, and for which the individual exchanges act as
$$P_{12}|\psi_+\rangle=|\psi_-\rangle,
\ P_{23}|\psi_+\rangle=e^{\frac{2\pi i}{3}}|\psi_-\rangle,
\text{ and }P_{31}|\psi_+\rangle=e^{\frac{-2\pi i}{3}}|\psi_-\rangle
.
$$
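These relations are easy to verify numerically by expanding in the product basis (a stdlib sketch, not from the original answer; 'u'/'d' label up/down and the cyclic permutation is taken as $|abc\rangle \to |cab\rangle$):

```python
import cmath

w = cmath.exp(2j * cmath.pi / 3)   # primitive cube root of unity
r3 = 3 ** 0.5

# amplitudes over the basis kets |s1 s2 s3>, written as 'u'/'d' strings
psi_plus  = {'uud': 1 / r3, 'udu': w / r3,             'duu': w.conjugate() / r3}
psi_minus = {'uud': 1 / r3, 'udu': w.conjugate() / r3, 'duu': w / r3}

def relabel(state, perm):
    """Permute the three spin slots: perm=(2,0,1) sends |abc> to |cab>."""
    return {''.join(k[i] for i in perm): v for k, v in state.items()}

def scale(c, state):
    return {k: c * v for k, v in state.items()}

def same(a, b, tol=1e-12):
    return all(abs(a.get(k, 0) - b.get(k, 0)) < tol for k in set(a) | set(b))
```

With these helpers one checks that the cyclic permutation has eigenvalue $e^{2\pi i/3}$ on $|\psi_+\rangle$, while the pair exchanges map $|\psi_+\rangle$ to $|\psi_-\rangle$ up to exactly the phases quoted above.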
So, in conclusion: this method is not perfect, as it does not give a way to lift the degenerate subspace into two distinct subspaces which are invariant under the full electron exchange group, and which therefore carry separate representations of it. However, it does give a basis that's got a definite action under the exchange group. I would be interested to know what the formal analysis of this action is, and how this generalizes to more than three spins. Maybe for another time! | {
"domain": "physics.stackexchange",
"id": 85577,
"tags": "quantum-mechanics, angular-momentum, atomic-physics, quantum-spin, group-representations"
} |
Calculating neuron outputs and derivatives | Question: This function runs very often. cudaMemcpy is at the start and works very slowly. How can I change this function to avoid this? I already have inputs in device memory.
void OpenNNL::calculateNeuronsOutputsAndDerivatives(double * inputs, double * deviceOutputs, double * deviceDerivatives)
{
int inputsCount = _inputsCount;
double * deviceTemp;
double * deviceInputs;
cudaCall(cudaMalloc ( (void**)&deviceInputs, inputsCount*sizeof(double) ));
cudaCall(cudaMemcpy ( deviceInputs, inputs, inputsCount*sizeof(double), cudaMemcpyDeviceToDevice ));
for(int i=0;i<_layersCount;i++)
{
cudaCall(cudaMalloc((void**)&deviceTemp, _neuronsPerLayerCount[i]*inputsCount*sizeof(double)));
dim3 threadsMul = dim3(BLOCK_SIZE, 1);
int blocksCount = floor((double) _neuronsPerLayerCount[i]*inputsCount / threadsMul.x) + 1;
dim3 blocksMul = dim3(blocksCount, 1);
weighting<<<blocksMul, threadsMul>>>(deviceTemp, deviceInputs, _neuronsInputsWeights, _inputsInPreviousLayers[i], inputsCount, _neuronsPerLayerCount[i]);
cudaCall(cudaFree(deviceInputs));
cudaCall(cudaMalloc((void**)&deviceInputs, _neuronsPerLayerCount[i]*sizeof(double)));
dim3 threadsSum = dim3(BLOCK_SIZE, 1);
blocksCount = floor((double) _neuronsPerLayerCount[i] / threadsSum.x) + 1;
dim3 blocksSum = dim3(blocksCount, 1);
calculateOutputsAndDerivatives<<<blocksSum, threadsSum>>>(deviceOutputs, deviceDerivatives, deviceInputs, deviceTemp, _neuronsBiases, inputsCount, _neuronsPerLayerCount[i], _neuronsInPreviousLayers[i]);
inputsCount = _neuronsPerLayerCount[i];
cudaCall(cudaFree(deviceTemp));
}
cudaCall(cudaFree(deviceInputs));
}
Answer: Try to minimize memory allocations.
Allocate memory for deviceTemp and deviceInputs only once (in the constructor, for example):
cudaCall(cudaMalloc ( (void**)&deviceInputs, some_big_value * sizeof(double) ));
cudaCall(cudaMalloc((void**)&deviceTemp, some_big_value * sizeof(double)));
And in calculateNeuronsOutputsAndDerivatives, reallocate memory only if needed:
if (cur_deviceInputs_size < inputsCount)
{
    cudaCall(cudaFree(deviceInputs));
    cudaCall(cudaMalloc ( (void**)&deviceInputs, inputsCount*sizeof(double) ));
    cur_deviceInputs_size = inputsCount;
} | {
"domain": "codereview.stackexchange",
"id": 9514,
"tags": "c++, performance, memory-management, neural-network, cuda"
} |
Pseudoscalar current of Majorana fields | Question: Consider a Majorana spinor
$$
\Phi=\left(\begin{array}{c}\phi\\\phi^\dagger\end{array}\right)
$$
and an pseudoscalar current $\bar\Phi\gamma^5\Phi$. This term is invariant under hermitian conjugation:
$$
\bar\Phi\gamma^5\Phi\to\bar\Phi\gamma^5\Phi
$$
but if I exploit the two component structure
$$
\bar\Phi\gamma^5\Phi=-\phi\phi+\phi^\dagger\phi^\dagger
$$
the invariance under hermitian conjugation seems lost
$$
-\phi\phi+\phi^\dagger\phi^\dagger\to\phi\phi-\phi^\dagger\phi^\dagger.
$$
Where is the catch?
Answer: OK, I found the (silly) error:
$$
\bar\Phi\gamma^5\Phi=\Phi^\dagger\gamma^0\gamma^5\Phi
$$ so under hermitian conjugation this becomes
$$
\Phi^\dagger\gamma^5\gamma^0\Phi=-\Phi^\dagger\gamma^0\gamma^5\Phi=-\bar\Phi\gamma^5\Phi
$$
which implies
$$
\bar\Phi\gamma^5\Phi+h.c.=0
$$
the same result that we found by exploiting the two-component structure. | {
"domain": "physics.stackexchange",
"id": 26594,
"tags": "spinors, majorana-fermions, dirac-matrices"
} |
How fast does the water travel down river when the discharge gates of a large dam are opened? | Question: How fast does the water travel down river when the discharge gates of a large dam are opened? Can the discharge-wave travel downstream faster than the water? Is this likely to occur for a real river and an actual sudden discharge from a large reservoir?
Answer: The Manning Formula is an empirical equation that describes uniform open channel flow. It depends upon several factors, including roughness and sinuosity of the river channel.
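To put rough numbers on this (an illustrative sketch with plausible but made-up parameter values, not figures taken from the references here): in SI units Manning's equation gives the mean velocity as $V = \frac{1}{n}R^{2/3}S^{1/2}$, where $n$ is the roughness coefficient, $R$ the hydraulic radius, and $S$ the channel slope.

```python
def manning_velocity(n, hydraulic_radius_m, slope):
    """Mean flow velocity from Manning's equation (SI units, m/s)."""
    return (1.0 / n) * hydraulic_radius_m ** (2.0 / 3.0) * slope ** 0.5

# Illustrative values for a rough natural channel:
# n ~ 0.035, hydraulic radius 3 m, slope 0.0005 (0.5 m of drop per km)
v = manning_velocity(0.035, 3.0, 0.0005)   # roughly 1.3 m/s
```

A value around 1.3 m/s is about 3 mph, the same order as the measured Colorado River velocities quoted below.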
This paper: The Colorado River in Grand Canyon; how fast does it flow? describes some dye experiments performed in the Colorado River in the 1990s. Several releases (at different volumetric rates) were made from Lake Powell, and the times to reach different distances downriver were measured. These experiments showed that the water velocity downriver was higher for the higher-volume releases (~3 miles/hr for ~30,000 cubic feet/sec). The discharge waves traveled faster than the water, covering a distance of ~235 miles in ~1.6 days rather than ~4.3 days. | {
"domain": "physics.stackexchange",
"id": 11194,
"tags": "fluid-dynamics, geophysics"
} |
Minimum number of cuts to divide a rectangle | Question: We're given a big rectangle, and a list of small rectangles contained inside it, with their vertex coordinates.
We want a list of the minimum number of lines defined by a pair of points (x,y) that cut up the big rectangle into the small ones.
For example for this case:
The minimum number of cuts would be 7, and they are represented in the following picture.
Any idea to achieve this? (The rectangles are not always touching the borders.)
Answer: Each edge of an inner rectangle (that isn't on the exterior of the big rectangle) is a line segment. Your problem now becomes: given a collection of line segments, find the minimum number of lines to cover all of the line segments (and nothing more).
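For axis-aligned cuts (the situation in rectangle subdivisions), this covering can be computed by grouping collinear segments and merging the ones that touch or overlap, which is the procedure the next paragraph describes. A hedged stdlib sketch:

```python
from collections import defaultdict

def count_cuts(segments):
    """segments: iterable of ((x1, y1), (x2, y2)) axis-aligned cut pieces.
    Returns the number of maximal lines needed to cover them all, merging
    collinear pieces that touch or overlap."""
    horiz = defaultdict(list)   # y -> intervals of x
    vert = defaultdict(list)    # x -> intervals of y
    for (x1, y1), (x2, y2) in segments:
        if y1 == y2:
            horiz[y1].append((min(x1, x2), max(x1, x2)))
        elif x1 == x2:
            vert[x1].append((min(y1, y2), max(y1, y2)))
        else:
            raise ValueError("only axis-aligned segments handled here")

    def merged(groups):
        count = 0
        for intervals in groups.values():
            intervals.sort()
            reach = None
            for lo, hi in intervals:
                if reach is None or lo > reach:   # gap: a new cut starts
                    count += 1
                    reach = hi
                else:                             # touching/overlapping: extend
                    reach = max(reach, hi)
        return count

    return merged(horiz) + merged(vert)
```

Sorting each group dominates, so the whole count runs in O(k log k) for k segments.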
This problem can be solved as follows. For each line segment, extend the line as far as you can in both directions, until it no longer is touching any line segment. Add that line to your collection of lines. Repeat until you've covered all of the line segments. In other words, merge two line segments if they are touching and go in the same direction (since in that case they can be merged into a single line that covers both). Repeat until you can't merge anything any longer. | {
"domain": "cs.stackexchange",
"id": 9828,
"tags": "algorithms, minimum-cuts"
} |
What is Hawking radiation? | Question: Like the radiation the Sun gives out to sustain its circular shape, will Hawking radiation sustain the black hole's shape?
Answer: In a word, no.
A black hole's border, or event horizon, is just curved spacetime where the escape velocity reaches the speed of light. Because the inner mass is thought to be very uniform, with essentially zero mass concentrations, it should be one of the most spherical objects in the universe, even more so than the Sun. A black hole's event horizon's spherical shape has nothing to do with Hawking radiation, which is a quantum effect just outside the event horizon.
That said, a rotating black hole has two distinct event horizons. The inner one is spherical, the outer one is not, and that may be a more accurate picture of black holes, so they may not be as spherical as highly spherical stars depending on which event horizon you're taking into account. | {
"domain": "astronomy.stackexchange",
"id": 5522,
"tags": "black-hole, supermassive-black-hole, hawking-radiation"
} |
How to create a quantum algorithm that produces 2 n-bit sequences with equal number of 1-bits? | Question: I am interested in a quantum algorithm that has the following characteristics:
output = 2n bits OR 2 sets of n bits (e.g. 2 x 3 bits)
the number of 1-bits in the first set of n-bits must be equal to the number of 1-bits in the second set. E.g. correct output = 0,0,0, 0,0,0 (both 3-bit sets have zero 1-bits); 1,0,0, 0,1,0 (both 3-bit sets have one 1-bit); 1,1,0, 0,1,1 (both 3-bit sets have two 1-bits)
Each time the quantum algorithm runs it must randomly return one of the possible solutions.
Any idea how I can best implement such an algorithm on a quantum computer?
FYI I have tried the following algorithm (where n = 2 ) but it missed the 2 answers 0110 and 1001:
Answer: There are probably better ways than this, but here’s one you could try:
Start as you have done, with Hadamards on every qubit of the first register, then controlled nots between matching pairs of qubits across the two registers. This creates a uniform superposition of terms $|x\rangle|x\rangle$.
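For reference, the output distribution the full construction aims at (a uniformly random $x$ in the first register together with a uniformly random rearrangement of it in the second) can be sampled classically. This is a sketch for sanity-checking the expected statistics, not a circuit, and it uses a perfectly uniform shuffle where the ancilla construction below is only guessed to be sufficiently random:

```python
import random

def sample_pair(n, rng):
    """Return two n-bit lists with equal numbers of 1-bits: a uniformly
    random x, and a copy of x with its bits uniformly permuted."""
    x = [rng.randint(0, 1) for _ in range(n)]
    y = x[:]
    rng.shuffle(y)
    return x, y

rng = random.Random(7)
pairs = [sample_pair(2, rng) for _ in range(2000)]
```

Every sampled pair has matching Hamming weight by construction, and for n = 2 the outcomes 0110 and 1001 that the questioner's attempt missed appear with probability 1/8 each.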
Now you need to somehow perform a random permutation on the second register. Introduce $\binom{n}2$ ancillary qubits. Apply Hadamard on each, and use each qubit to control the application of a swap between a different pair of qubits on the second register. Then forget about the ancillary qubits, and just measure the first two registers. (I’m guessing this gives you a sufficiently random permutation.) | {
"domain": "quantumcomputing.stackexchange",
"id": 186,
"tags": "quantum-algorithms"
} |
Hole diameter vs Screw Diameter | Question: I am working with a standardized ball-screw drive where the nut has holes for screws to fasten it onto the moving part. The catalogue says these holes have diameter 13.5 mm. What is the appropriate screw size (metric) to use on such a part?
Answer: I would suggest you use the largest diameter bolt you can find that fits through the hole, and that you can be sure you'll be able to install properly. By "install properly" I mean that you leave enough clearance between the bolt and 13.5mm hole to account for misplacement of the holes in whatever you're mounting the nut to.
In my work, when I design a clearance hole for a bolt, I usually go 0.5mm above the nominal size of a bolt, and we feel comfortable going down to 0.2mm above nominal if we need a more snug fit. A 13.5mm hole for a 12mm bolt is a lot of slop, so I would suggest a small washer if you go this route.
If you have imperial sizing available, a 1/2" bolt will have a nominal diameter of 12.7mm, so that would probably actually be your best bet. 13mm bolts do exist, but they're not common. | {
"domain": "engineering.stackexchange",
"id": 1675,
"tags": "design"
} |
Image Processing Application in C | Question: I'm making an image processing application in C from scratch using P6 PPM images. I want some input on my code before I start adding more features, to prevent it from falling apart if it's not well structured.
sample output
improc.h
#ifndef IMPROC_H
#define IMPROC_H
#include <stdint.h>
typedef struct {
int size;
double *weights;
} Kernel;
typedef struct {
uint8_t r;
uint8_t g;
uint8_t b;
} Pixel;
typedef struct {
double r;
double g;
double b;
} Accumulator;
typedef struct {
int height;
int width;
Pixel *pixels;
} Image;
typedef struct {
double re;
double im;
} Complex;
Image *create_image(int width, int height);
void free_image(Image *image);
Image *load_image(char *filename);
int save_image(char *filename, Image image);
Image *grayscale(Image image);
Image *perceptual_grayscale(Image image);
double kernel_min(Kernel kernel);
double kernel_max(Kernel kernel);
Accumulator convolve(Image image, Kernel kernel, int row, int col);
Image *apply_kernel(Image image, Kernel kernel);
Image *sobel(Image image);
Kernel sobel_y(int n);
Kernel sobel_x(int n);
unsigned modulo(int value, unsigned m);
#endif
improc.c
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <string.h>
#include <math.h>
#include <unistd.h>
#include <errno.h>
#include "improc.h"
Image *create_image(int width, int height)
{
Image *image = malloc(sizeof(Image));
if (image == NULL) {
fprintf(stderr, "Could not allocate memory to Image: %s\n", strerror(errno));
return NULL;
}
image->width = width;
image->height = height;
image->pixels = malloc(width*height*sizeof(Pixel));
if (image->pixels == NULL) {
free(image);
fprintf(stderr, "Could not allocate memory to pixels: %s\n", strerror(errno));
return NULL;
}
return image;
}
void free_image(Image *image)
{
if (image != NULL) {
if (image->pixels != NULL)
free(image->pixels);
free(image);
}
}
Image *load_image(char *filename)
{
char magic[3];
int maxval, width, height;
FILE *fp = fopen(filename, "rb");
if (fp == NULL) {
fprintf(stderr, "Error opening file: %s\n", strerror(errno));
return NULL;
}
fscanf(fp, "%2s", magic);
if (magic[0] != 'P' || magic[1] != '6') {
fprintf(stderr, "Not a valid P6 ppm file: %s\n", strerror(errno));
fclose(fp);
return NULL;
}
fscanf(fp, "%d%d%*c", &width, &height);
Image *image = create_image(width, height);
fscanf(fp, "%d%*c", &maxval);
fread(image->pixels, sizeof(Pixel),image->width*image->height, fp);
fclose(fp);
return image;
}
int save_image(char *filename, Image image)
{
FILE *fp = fopen(filename, "wb");
if (fp == NULL) {
fprintf(stderr, "Error opening file: %s\n", strerror(errno));
return -1;
}
fprintf(fp, "P6\n%d %d\n255\n", image.width, image.height);
fwrite(image.pixels, sizeof(Pixel), image.width*image.height, fp);
fclose(fp);
return 0;
}
Image *grayscale(Image image)
{
Image *gray = create_image(image.width, image.height);
Pixel pix;
int index;
uint8_t avg;
for (int row = 0; row < image.height; row++) {
for (int col = 0; col < image.width; col++) {
index = row*image.width + col;
pix = image.pixels[index];
avg = (uint8_t) ((pix.r + pix.g + pix.b)/3.0);
gray->pixels[index] = (Pixel) {avg, avg, avg};
}
}
return gray;
}
Image *perceptual_grayscale(Image image)
{
Image *gray = create_image(image.width, image.height);
Pixel pix;
uint8_t bt_601;
int index;
for (int row = 0; row < image.height; row++) {
for (int col = 0; col < image.width; col++) {
index = row*image.width + col;
pix = image.pixels[index];
bt_601 = (uint8_t) (0.2126*pix.r + 0.7152*pix.g + 0.0722*pix.b);
gray->pixels[index] = (Pixel) {bt_601, bt_601, bt_601};
}
}
return gray;
}
double kernel_min(Kernel kernel)
{
double min = 0;
for (int index = 0; index < kernel.size*kernel.size; index++) {
if (kernel.weights[index] < 0)
min += kernel.weights[index];
}
return min*255;
}
double kernel_max(Kernel kernel)
{
double max = 0;
for (int index = 0; index < kernel.size*kernel.size; index++) {
if (kernel.weights[index] > 0)
max += kernel.weights[index];
}
return max*255;
}
Accumulator convolve(Image image, Kernel kernel, int row, int col)
{
Accumulator accumulator = {0, 0, 0};
for (int r_off = -kernel.size/2; r_off <= kernel.size/2; r_off++) {
for (int c_off = -kernel.size/2; c_off <= kernel.size/2; c_off++) {
int ir = modulo(row + r_off, image.height);
int ic = modulo(col + c_off, image.width);
int kr = r_off + kernel.size/2;
int kc = c_off + kernel.size/2;
int index = ir*image.width + ic;
Pixel pixel = image.pixels[index];
accumulator.r += pixel.r*kernel.weights[kr*kernel.size + kc];
accumulator.g += pixel.g*kernel.weights[kr*kernel.size + kc];
accumulator.b += pixel.b*kernel.weights[kr*kernel.size + kc];
}
}
return accumulator;
}
Image *apply_kernel(Image image, Kernel kernel)
{
Image *conv = create_image(image.width, image.height);
double max = kernel_max(kernel);
double min = kernel_min(kernel);
double alpha = max - min;
for (int row = 0; row < image.height; row++) {
for (int col = 0; col < image.width; col++) {
Accumulator accumulator = convolve(image, kernel, row, col);
accumulator.r = 255*(accumulator.r - min)/alpha;
accumulator.g = 255*(accumulator.g - min)/alpha;
accumulator.b = 255*(accumulator.b - min)/alpha;
conv->pixels[row*image.width + col].r = accumulator.r;
conv->pixels[row*image.width + col].g = accumulator.g;
conv->pixels[row*image.width + col].b = accumulator.b;
}
printf("%lf\r", 100.0*(1.0*row)/image.height);
fflush(stdout);
}
return conv;
}
Kernel sobel_x(int n)
{
Kernel sx;
sx.size = 3;
sx.weights = malloc(sizeof(double)*sx.size*sx.size);
sx.weights[0] = -1;
sx.weights[1] = -2;
sx.weights[2] = -1;
sx.weights[3] = 0;
sx.weights[4] = 0;
sx.weights[5] = 0;
sx.weights[6] = 1;
sx.weights[7] = 2;
sx.weights[8] = 1;
return sx;
}
Kernel sobel_y(int n)
{
Kernel sy;
sy.size = 3;
sy.weights = malloc(sy.size*sy.size*sizeof(double));
sy.weights[0] = -1;
sy.weights[1] = 0;
sy.weights[2] = 1;
sy.weights[3] = -2;
sy.weights[4] = 0;
sy.weights[5] = 2;
sy.weights[6] = -1;
sy.weights[7] = 0;
sy.weights[8] = 1;
return sy;
}
Image *sobel(Image image)
{
Image *conv = create_image(image.width, image.height);
Image *sobx, *soby;
Kernel sx = sobel_x(3);
Kernel sy = sobel_y(3);
sobx = apply_kernel(image, sx);
soby = apply_kernel(image, sy);
save_image("sobel_x.ppm", *sobx);
save_image("sobel_y.ppm", *soby);
double max = kernel_max(sx);
double min = kernel_min(sx);
double alpha = max - min;
for (int row = 0; row < image.height; row++) {
for (int col = 0; col < image.width; col++) {
Accumulator x = convolve(image, sx, row, col);
Accumulator y = convolve(image, sy, row, col);
x.r = 255*(x.r)/alpha;
x.g = 255*(x.g)/alpha;
x.b = 255*(x.b)/alpha;
y.r = 255*(y.r)/alpha;
y.g = 255*(y.g)/alpha;
y.b = 255*(y.b)/alpha;
Accumulator gradient = {
sqrt(x.r*x.r + y.r*y.r),
sqrt(x.g*x.g + y.g*y.g),
sqrt(x.b*x.b + y.b*y.b),
};
gradient.r = (gradient.r > 255)? 255: gradient.r;
gradient.g = (gradient.g > 255)? 255: gradient.g;
gradient.b = (gradient.b > 255)? 255: gradient.b;
conv->pixels[row*image.width + col].r = (uint8_t) gradient.r;
conv->pixels[row*image.width + col].g = (uint8_t) gradient.g;
conv->pixels[row*image.width + col].b = (uint8_t) gradient.b;
}
printf("%lf\r", 100.0*(1.0*row)/image.height);
fflush(stdout);
}
return conv;
}
unsigned modulo(int value, unsigned m)
{
int mod = value % (int) m;
if (mod < 0)
mod += m;
return mod;
}
main.c
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include "improc.h"
int main(int argc, char **argv)
{
(void) argc;
Image *ema = load_image(argv[1]);
save_image("identity.ppm", *ema);
Image *sob = sobel(*ema);
save_image("sobel.ppm", *sob);
}
build script
#!/bin/bash
gcc improc.c main.c -O3 -g -Wall -Wpedantic -lm -Wextra -o main
Answer: Let's talk about the API. I've worked with multiple imaging libraries over the past ~25 years, and have significant experience designing them too.
API:
Let's start with the image container, the core of any imaging library.
typedef struct {
int height;
int width;
Pixel *pixels;
} Image;
//…
Image *ema = load_image(argv[1]);
Image *sob = sobel(*ema);
free_image(ema); /* (missing from the code, but should be there.) */
//…
Most of the time when you refer to an image, you use *name, except in a few places where it's just name. This is confusing, and unnecessarily complicated. First of all, passing a struct (even if it's small now) by value (i.e. its values are copied) is more expensive than passing its pointer. Second, when you create the image object you malloc it, and it's awkward to then copy its contents to the stack to call a function. Third, as the library grows, so will this struct (a flag for RGB vs grayscale, or maybe an int for the number of channels, maybe a flag for which color space it's in, maybe a flag for the data type, maybe a flag for whether the struct owns the pixel data or not, ...). At some point copying the struct will be prohibitive.
So, let's enforce passing the struct by pointer, and let's make it easy to do so!
typedef struct {
int height;
int width;
Pixel *pixels;
} *Image;
//…
Image ema = load_image(argv[1]);
Image sob = sobel(ema);
free_image(ema);
//…
Now Image is always a pointer to the struct. There's no type you can use to refer to the struct itself, you can only refer to the pointer. But it looks nice in the code, the user doesn't even need to know it's a pointer!
For Kernel you're doing something totally different: when you create a kernel (e.g. sobel_x) you create the struct on the stack and return it by value. Why the distinction? A kernel is, after all, just an image with a different type for the pixels.
Why is the kernel always square? This is an important limitation. At some point you'll be looking to use a 15x1 kernel, and you'll have to create a 15x15 kernel with lots of zeros, which will be 15 times as expensive to use.
The function convolve doesn't convolve two images, it only computes the result of the convolution at one pixel. This is surprising. Your function apply_kernel should be called convolve (or maybe convolution). The single pixel sub-function should, IMO, be private.
Likewise, Accumulator could be private until you have a reason to make it public. The fewer things you make public initially, the easier it will be to improve on the API. Making something public (i.e. putting it into improc.h) sort of fixes it in perpetuity. As soon as people start using that API, you can't change it any more*, but you can always add to it.
* At least not without forcing users to update their code. And you're a user too. As soon as you start using this library to write programs, it will become hard to make changes to the API, you'll have to update all the programs that use it too.
kernel_min and kernel_max have the wrong name. I was reading the code, and wondering why you were using addition and not max(). Later I came to realize that you use these functions to determine what the minimum and maximum possible values of the output image will be when you compute the convolution with that kernel.
You could instead consider adding offset and scale arguments to your convolution function, and clip the result of the convolution before writing it to the uint8 output. This makes the function more flexible: the maximum and minimum possible values are rarely attained, so your scaling is a bit too drastic; the result is a very dim image and a strongly quantized derivative. A user might want to pick a smaller scaling value.
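That suggestion is easy to prototype; here is a minimal sketch in Python (the function name and defaults are my own, not part of the library under review):

```python
def scale_clip(acc, scale=1.0, offset=0.0):
    """Scale and offset a raw convolution sum, then clip to the uint8 range."""
    v = acc * scale + offset
    return 0 if v < 0 else 255 if v > 255 else int(v)
```

The caller now chooses the mapping: scale_clip(acc, scale=0.5, offset=128), for instance, centers a signed derivative on mid-gray instead of dimming everything by the worst-case range.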
improc.h, which defines your API, should contain documentation for the functions and types it makes public. You can document in a separate file, but it's always easier to put the documentation directly in the header. Your user will be able to easily find the documentation in their IDE, and many IDEs will even show this documentation in a tooltip when you hover over a function call with the mouse. I suggest you use the Doxygen style for documentation. Doxygen is a nice tool to generate HTML from the documentation in the header files, though it has some downsides as well (many people, including me, have written alternatives, but most of these will use the same style for the documentation source).
Efficiency:
The convolution tests, for each pixel, whether a neighbor is inside the image or not (you use modulo for this, a neat solution, but it still has a branch in it). It is actually (in my experience) faster to copy the image into a larger buffer, and pick some padding scheme to fill those values outside the original image. The convolution now doesn't need to do any tests at all.
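The pad-then-convolve idea is language-independent; a minimal Python sketch with clamp-to-edge padding (plain nested lists; all names here are mine, not from the posted library):

```python
def pad_clamp(img, r):
    """Pad img by r pixels on every side, replicating the edge values."""
    h, w = len(img), len(img[0])
    clamp = lambda v, lo, hi: max(lo, min(v, hi))
    return [[img[clamp(y - r, 0, h - 1)][clamp(x - r, 0, w - 1)]
             for x in range(w + 2 * r)]
            for y in range(h + 2 * r)]

def convolve(img, kernel):
    """Apply a square kernel; the inner loops need no bounds tests at all."""
    k = len(kernel)
    padded = pad_clamp(img, k // 2)
    h, w = len(img), len(img[0])
    return [[sum(padded[y + ky][x + kx] * kernel[ky][kx]
                 for ky in range(k) for kx in range(k))
             for x in range(w)]
            for y in range(h)]
```

The inner loops now index the padded buffer directly, with no per-pixel branch; the padding scheme (clamp, wrap, zero) is chosen once, when the buffer is filled.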
You can also consider reducing the amount of coordinate computation you do within the loop:
Accumulator convolve(Image image, Kernel kernel, int row, int col)
{
Accumulator accumulator = {0, 0, 0};
int r_offset = row - kernel.size / 2;
int c_offset = col - kernel.size / 2;
int kindex = 0;
for (int kr = 0; kr < kernel.size; kr++) {
int ir = modulo(r_offset + kr, image.height);
int iindex = ir * image.width;
for (int kc = 0; kc < kernel.size; kc++, kindex++) {
int ic = modulo(c_offset + kc, image.width);
Pixel pixel = image.pixels[iindex + ic];
accumulator.r += pixel.r * kernel.weights[kindex];
accumulator.g += pixel.g * kernel.weights[kindex];
accumulator.b += pixel.b * kernel.weights[kindex];
}
}
return accumulator;
}
kindex, the index into the kernel, increases by 1 every inner loop iteration, so just increment it, don't compute kr*kernel.size + kc (and certainly don't compute that 3 times, even though your compiler will likely optimize that out). ir doesn't change during the inner loop, so compute it outside that loop. And a lot of the remaining computation you did was because your loop goes from -size/2 to size/2, rather than from 0 to size, and so you needed to add an offset again to index into the kernel.
(By the way, your code has a bug if kernel.size is even)
Style:
Please use an empty line in between functions. Vertical space is very important for readability. | {
"domain": "codereview.stackexchange",
"id": 45241,
"tags": "c, image, signal-processing"
} |
Current and voltage - Incompatibility between Ohm's Law and Power Law! | Question: Ohm's Law: I $=\frac{V}{R}$: Increasing voltage increases current.
Power Law: P $={V}*{I}$: Increasing voltage decreases current.
Am I missing something?
Answer: As the others say, you are implicitly holding one factor constant and seeing how the others are related but a different constant factor in each case.
A real world example may help. Consider an old style filament light bulb. It is intended to be used in a $220V$ country and consume $110W$. So, it should draw $0.5A$ and needs a resistance of $440 \Omega$.
We can keep the resistance constant and vary the voltage by taking it to a $110V$ country. Ohm's law tells us that it will now draw only $0.25A$ hence the power will be only $27.5W$.
For the constant power scenario, imagine that the manufacturer wants to make a similar $110W$ bulb for $110V$ countries. He needs it to draw $1A$ so he must arrange for its resistance to be $110 \Omega$. Note that this will be a different bulb.
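Both bulbs can be checked with a three-line calculation; a quick Python sketch treating the bulb as a fixed resistance (the function name is mine):

```python
def bulb(V_design, P_design, V_applied):
    """Resistance fixed by the design point; current and power at the applied voltage."""
    R = V_design ** 2 / P_design   # from P = V^2 / R at the design point
    I = V_applied / R              # Ohm's law at the applied voltage
    return R, I, V_applied * I     # P = V * I

print(bulb(220, 110, 110))  # 220 V bulb in a 110 V country -> (440.0, 0.25, 27.5)
```

which reproduces the $440 \Omega$, $0.25A$ and $27.5W$ figures above.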
For some fun, take the bulb intended for $110V$ to a $220V$ country. I leave that as an exercise. | {
"domain": "physics.stackexchange",
"id": 65629,
"tags": "electromagnetism, electricity, electric-circuits, electric-current, voltage"
} |
Constructive interference from a reflector | Question: Would it be possible to construct a reflector such that, for a given wavelength (perhaps part of the microwave spectrum?), the reflected wave interferes constructively with itself?
Ideally, this would work for any two arbitrary reflection points that are "close" to one another.
My question has two very interrelated ideas:
Could such a shape even exist?
How would you go about finding the defining curve?
Notes:
I initially assumed a parabolic reflector because one nicety of this thought experiment was to produce a collimated beam in addition to the interference pattern, and I remembered that parabolic dishes are widely used to roughly focus light into beams. In actuality I realize that if such an shape does exist, it probably wouldn't have such a nice defining equation.
I tried considering the problem from the $\Delta L =m\lambda$ standpoint, in two dimensions first to simplify things.
If the emitter is considered to be a point source at the focus of the parabola $y=ax^2$ then for any two rays with reflection points $(x,ax^2)$ and $(x+h,a(x+h)^2)$ then the path length difference is $\Delta L=\sqrt{x^2+(ax^2-\frac{1}{4a})^2}-\sqrt{(x+h)^2+(a(x+h)^2-\frac{a}{4a})^2}+a((x+h)^2-x^2)$, presuming I did my math right.
If I understand what I have correctly, it will give me the equation of the reflector where rays separated in the x-direction by h units interfere constructively. Ideally, however, there's a shape that will reflect a given wavelength with constructive interference for every h (within realistic constraints of course. Infinitely large reflectors can't exist, etc ).
Answer: One way to look at optics is that the reason the paraboloid focuses a plane wave onto a point is that there is constructive interference for reflections converging on that point and destructive interference for beams going any other direction. So the parabolic reflector is already using interference to do its job.
Beyond that, you could consider doing things like etching patterns on an optical reflector or making your microwave reflector from patterns of wire that produce some further diffractive effect in the reflected beam. You could also probably make a flat mirror that has the converging properties of a parabolic one, at least within some fairly narrow frequency band (and with some leakage into 2nd-order beams). | {
"domain": "physics.stackexchange",
"id": 45502,
"tags": "homework-and-exercises, electromagnetic-radiation, reflection, interference"
} |
how to delete object in gazebo | Question:
Hello:
I have a question about how to delete an object in Gazebo. I am able to use "gzfactory spawn -f model.sdf" to spawn an object, but I don't know how to delete an object using a command instead of using the GUI and right-clicking to delete it. Anybody know how to do that? Thanks!
Best wishes
Cherry
Originally posted by Cherry on Gazebo Answers with karma: 17 on 2013-03-13
Post score: 1
Answer:
Try:
gzfactory delete -m model_name
where model_name is the name of the model you want to delete
Originally posted by iche033 with karma: 1018 on 2013-03-14
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by Cherry on 2013-03-14:
HI, thanks. I have another question. Because I am doing this in a loop, I would like the model to be spawned and deleted multiple times. After I spawned the object twice, I could not delete it in the scene, although the object's name disappeared from the model list on the left. Do you know how to delete and spawn multiple times?
Comment by Cherry on 2013-03-14:
HI, I just figured out it was the rendering problem. I don't actually need to see the object in the scene as long as it exists in the collision model.
Comment by skhan on 2013-03-15:
@iche033
How it is possible to delete model from launched ROS gazebo package? Because it crashes on deleting last model | {
"domain": "robotics.stackexchange",
"id": 3124,
"tags": "gazebo"
} |
A translator class in Python | Question: This class can be used to access the Google Translate web interface and get translation results for large texts (split up into sentences).
The Signal class is copied from here.
#!/usr/bin/env python3
import urllib.request
import urllib.parse
import re
import traceback
from multiprocessing.dummy import Pool
from bs4 import BeautifulSoup
class Signal:
def __init__(self):
"""
:type self: Signal
:return:
"""
self.__subscribers = []
""":type: list of [function]"""
def emit(self, *args, **kwargs):
for sub in self.__subscribers:
sub(*args, **kwargs)
def connect(self, func):
self.__subscribers.append(func)
def disconnect(self, func):
try:
self.__subscribers.remove(func)
except ValueError:
print("Warning: function %s not removed from signal %s" % (func, self))
# signal = Signal()
# def callback():
# print("Calling back...")
# signal.connect(callback)
# signal.emit()
class Translator(Signal):
allowed_lang = ("nl", "fr", "de", "en")
def __init__(self, from_lang, to_lang, orig_str = None, filename = None):
"""
A translation class for accessing google translate
:type self: Translator
:type from_lang: str
:type to_lang: str
:type orig_str: str
:type filename: str
:param from_lang:
:param to_lang:
:param orig_str:
:param filename:
:return:
"""
super().__init__()
self.connect(self.report)
self._from_lang = from_lang
self._to_lang = to_lang
self.agent = {'User-Agent': "Mozilla/4.0"}
self.linkroot = "http://translate.google.com/m?sl=%s&hl=%s&q=" % (self.from_lang, self.to_lang)
if orig_str != None:
self.orig_str = str(orig_str)
elif filename != None:
with open(filename) as fh:
self.orig_str = fh.read()
else:
raise Exception("You must provide orig_str or filename")
# Clean up the input string
# todo: preserve punctuation
self.orig_str = self.orig_str.replace("\n", " ").replace("\r", "")
self.orig_str = re.compile(r"(?<=[.!?;])\s+").split(self.orig_str)
self.orig_str = [x.strip() for x in self.orig_str]
self.orig_str = [x for x in self.orig_str if x]
self.n_sentences = len(self.orig_str)
self.n_translated = 0
@property
def from_lang(self):
return self._from_lang
@from_lang.setter
def from_lang(self, new_lang):
print("Setting from_lang")
if new_lang not in self.allowed_lang:
raise Exception("%s not valid language option" % new_lang)
self._from_lang = new_lang
@property
def to_lang(self):
return self._to_lang
@to_lang.setter
def to_lang(self, new_lang):
print("Setting to_lang")
if new_lang not in self.allowed_lang:
raise Exception("%s not valid language option" % new_lang)
self._to_lang = new_lang
def translate_sentence(self, sentence):
"""
Translate one sentence
:param sentence:
:return:
"""
query = urllib.parse.quote(sentence)
link = self.linkroot + query
try:
request = urllib.request.Request(link, headers=self.agent)
webpage = urllib.request.urlopen(request).read()
soup = BeautifulSoup(webpage)
res = soup.find_all("div", class_="t0")[0].string
except Exception as e:
traceback.print_exc()
res = "Failed to fetch translation from google."
self.n_translated += 1
self.emit()
return res
def translate(self, n_threads=4):
"""
Parallelization using multiprocessing
:return:
"""
pool = Pool(n_threads)
self.trans_str = pool.map(self.translate_sentence, self.orig_str)
def contrast(self):
return zip(self.orig_str, self.trans_str)
def report(self):
print("\rTranslated %d/%d sentences..." % (self.n_translated, self.n_sentences), end="")
def __str__(self):
"""
Output a plain text string
:type self: Translator
:rtype: str
:return:
"""
res = ""
for i, o in self.contrast():
res += i + "\n" + o + "\n\n"
return res
def prettify(self):
"""
Output an html string
:type self: Translator
:rtype: str
:return:
"""
# import pdb
# pdb.set_trace()
res = ""
for i, o in self.contrast():
res += "<div>\n<p>\n" + i + "\n</p>\n<p>\n<i>\n" + o + "\n</i></p>\n</div>\n\n"
return res
if __name__ == "__main__":
mystring = """
Dat maakt het Openbaar Ministerie (OM) in Amsterdam bekend?
De 56-jarige Holleeder wordt verdacht van het medeplegen van moord en deelneming aan een criminele organisatie. Hij wordt waarschijnlijk begin komende week voorgeleid!
Zijn arrestatie volgt op onderzoek dat is gedaan naar de verklaringen van de nieuwe kroongetuige, Fred Ros in het omvangrijke Passageproces over de liquidaties. "Hij heeft belastend over Holleeder verklaard", aldus een woordvoerster van het OM.
"""
trans = Translator(from_lang="nl", to_lang="en", orig_str=mystring)
trans.translate()
print(trans)
A new version of this class is available here: A translator class in Python v2
Answer: PEP8
Please follow PEP8, the official style guide of Python.
In parameter lists, don't put spaces around = like this:
def __init__(self, from_lang, to_lang, orig_str = None, filename = None):
Write like this:
def __init__(self, from_lang, to_lang, orig_str=None, filename=None):
Put one blank line before method definitions inside a class.
Instead of this:
class Translator(Signal):
allowed_lang = ("nl", "fr", "de", "en")
def __init__(self, from_lang, to_lang, orig_str=None, filename=None):
Write like this:
class Translator(Signal):
allowed_lang = ("nl", "fr", "de", "en")
def __init__(self, from_lang, to_lang, orig_str=None, filename=None):
Don't use != or == when comparing with None like this:
if filename != None:
Write like this:
if filename is not None:
Actually, since filename is supposed to be a string,
this would be the most natural way to write this condition:
if filename:
Avoid using bare except clauses like this as much as possible:
except:
res = "Failed to fetch translation from google."
This is somewhat better:
except Exception:
res = "Failed to fetch translation from google."
But it's best to use as specific exception type as possible.
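Applied to the network call in translate_sentence, the narrowest useful handler might look like this (a sketch; the wrapper name is mine, and the fallback string matches the original):

```python
import urllib.error

def fetch_translation(fetch):
    """Run fetch(); swallow only network-level failures, let real bugs surface."""
    try:
        return fetch()
    except urllib.error.URLError as err:
        return "Failed to fetch translation from google. (%s)" % err.reason
```

A bare except: here would also hide, say, an AttributeError caused by a BeautifulSoup API change; you would rather see that as a traceback.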
In this code:
def translate(self):
"""
Parallelization using multiprocessing
:return:
"""
pool = multiprocessing.Pool()
self.trans_str = pool.map(self.translate_sentence, self.orig_str)
It's not recommended to define attributes outside of __init__.
It would be better to initialize the value in the constructor.
It's strange that the docstring says:
:type orig_str: str
But then you have this logic:
if orig_str is not None:
self.orig_str = str(orig_str)
If the parameter is supposed to be a string,
then you can simply do this:
if orig_str:
self.orig_str = orig_str | {
"domain": "codereview.stackexchange",
"id": 11098,
"tags": "python, google-translate"
} |
What is the best way to replace ndt localization with gps localization in AutowareAuto? | Question:
I am planning on replacing ndt localization with rtk corrected gps localization for AutowareAuto release 1.0.0.
I see that there is a new package called gnss_conversion_nodes which is not in 1.0.0 and it converts WGS84 coordinates to ECEF then fills up a autoware_auto_msgs::msg::RelativePositionWithCovarianceStamped and publishes it. https://gitlab.com/autowarefoundation/autoware.auto/AutowareAuto/-/blob/master/src/tools/gnss_conversion_nodes/design/gnss_conversion_nodes-design.md. The published message contains position in 3D in ECEF earth coordinates but no orientation.
Also I have noticed that ndt based localization publishes its pose as a geometry_msgs::msg::PoseWithCovarianceStamped, which is not the same as the autoware_auto_msgs::msg::RelativePositionWithCovarianceStamped that gnss_conversion_node publishes. Shouldn't the two msg types be the same, so that gnss_conversion_nodes can be a direct replacement for ndt_nodes?
Assuming we get gnss_conversion_nodes running, next step is to provide a static_transform_publisher to provide the conversion from ECEF earth coordinates to map coordinates similar to PCD map loader.
ros2 run tf2_ros static_transform_publisher x y z r p y earth map
Originally posted by asobhy on ROS Answers with karma: 31 on 2021-07-10
Post score: 0
Answer:
Here is what I have done:
Create a GPS/IMU node. The node combines the GPS position and IMU orientation into a geometry_msgs::msg::PoseWithCovarianceStamped and publishes this data. In addition, the node also needs to publish the map-to-odom transform in order for the tf tree to be complete. The map-to-odom transform can be obtained through this chain of transformations: map to earth, then earth to base_link, then base_link to odom, which are all already available through tf2_ros::BufferClient::lookupTransform.
Additionally, a static earth-to-map publisher needs to publish the tf; this is done from the command line for now.
Just a heads up for anyone doing the same project the GPS data on LGSVL is not aligned properly and I have created an issue for it here https://github.com/lgsvl/simulator/issues/1597
Originally posted by asobhy with karma: 31 on 2021-07-15
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by sujithvemi on 2021-12-28:
Hi, I had implemented the node according to your suggestions, it was really helpful. But one thing is not clear to me, is the lookupTransform capable of finding the /map->/odom transform by itself? Because in my implementaiton /earth->/base_link is not readily available as you have mentioned in your answer. Should I publish that transform using the GNSS data we get, if so would the orientation be IMU data directly received? It would be really helpful if you can shed some more light on the transforms part. Thank you. | {
"domain": "robotics.stackexchange",
"id": 36684,
"tags": "ros"
} |
Need motion equations for falling ball for simulating | Question: I am currently an MS student in artificial intelligence (AI) working on my thesis on reinforcement learning, in which a robot should learn by itself how to control a quadcopter.
Long story short, I have come up with a method for such purposes, but before I jump into the quadcopter and deal with its non-linear control problems, which could take a month to program, I need to test the method to see if it's really a solution for this kind of problem. I've defined a simplified version of my problem, as follows:
Simplified scenario:
I have a ball dropped at an altitude and the program must learn the force it needs to apply on the ball to hold it still in the air at a pre-defined altitude.
(i.e. if the target altitude is higher than the altitude at which the ball is dropped, it should apply an upward force to the ball until it reaches that altitude; otherwise, it should moderately stabilize the ball at the target altitude).
In this scenario it doesn't matter how the force is applied to the ball, i.e. there is no rope attached, etc.; the force just gets applied.
The last time I read something related to physics was 6-7 years ago, so the motion equation of this problem is beyond my specialty, but it seems to me that the defined scenario is a classic physic problem.
Questions:
What are the equations to formulate the scenario (falling ball with an external force besides gravity) to be able to write its simulator?
What about adding some noise to the problem, like an air resistance factor or wind? How would the equations look then?
I would really appreciate it if someone helped a C.S. fellow here.
Thanks in advance.
PS: Please let the equations be simple as they could be, I am not quite good at reading complicated physic equations.
UPDATE:
From this site and the details @Jodes kindly explained, I have come up with the following equations:
$$
x = {1 \over 2}at^2 + v_0t + x_0
$$
where $a = g + \sum {F_i \over m_{\text{ball}}}$ and $F_i$ are the external forces applied to the ball.
If I'm right, the noises could also be folded into the equation as external forces.
Is this formulation valid for the defined scenario?
If so, how do I make it work in a 2D or 3D environment (does simply vectorizing $\vec{x}, \vec{a}, \vec{v}$ do the job?)?
Answer: The force equation of an object in a uniform gravitational field with a mysterious compensating force is
$$
ma=F_c-mg
$$
Where $m$ is the mass of the object, $g$ is the acceleration due to gravity, $F_c$ is the force applied by your controller, and $a$ is the net acceleration. For simulation purposes, you probably want to express this as a differential equation
$$
\frac{d^2}{dt^2}y=\frac{F_c}{m} - g.
$$
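For the simulator, this ODE can be integrated with a simple explicit Euler step; a Python sketch (the PD controller standing in for the learned policy is my own illustrative choice, not part of the answer):

```python
def simulate(force, y0=0.0, v0=0.0, m=1.0, g=9.81, dt=0.01, steps=1000):
    """Integrate d^2y/dt^2 = F_c/m - g with explicit Euler."""
    y, v = y0, v0
    for _ in range(steps):
        a = force(y, v) / m - g   # net acceleration; a noise force would add F_n/m here
        v += a * dt
        y += v * dt
    return y, v

def pd_controller(target, m=1.0, g=9.81, kp=20.0, kd=8.0):
    """Gravity feed-forward plus proportional-derivative feedback."""
    return lambda y, v: m * g + kp * (target - y) - kd * v

y, v = simulate(pd_controller(5.0), y0=10.0)  # ball dropped above the target altitude
```

After ten simulated seconds the ball hovers near y = 5 with v close to 0; a reinforcement-learned policy would simply replace pd_controller.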
If you want to add a random fluctuating force due to noise, $F_n$, then you can simply add it in on the RHS of the equation
$$
\frac{d^2}{dt^2}y=\frac{F_c}{m} - g + \frac{F_n}{m}.
$$ | {
"domain": "engineering.stackexchange",
"id": 839,
"tags": "mechanical-engineering, simulation, experimental-physics"
} |
How to relate second variation to quadratic terms? | Question: I am working on a physics problem, but my issue is math-related. My professor skips some steps based on 'intuition' that I lack:
In a conservative system, to find out the nature of equilibrium points, we are looking at the potential energy functional, in general form given by $P[f(s)] = \int F(f(s)) ds$. (The problem is about deflection of beams, $s$ being the coordinate of the deflected elastic line.)
To me, it makes sense to perturb the function $f(s)$ to $f(s)+\epsilon g(s)$, $\epsilon$ being a small number. Then, $P[f + \epsilon g]$ is an ordinary function of $\epsilon$, so Taylor expansion works:
$$
\Delta P = P[f+\epsilon g] - P[f] = \epsilon \frac{dP[f+\epsilon g]}{d\epsilon}\big|_{\epsilon=0} +
\frac{1}{2}\epsilon^2 \frac{d^2P[f+\epsilon g]}{d\epsilon^2}\big|_{\epsilon=0}
+O(\epsilon^3).
$$
Equilibrium requires that the first term on the RHS is zero, and for a stable equilibrium, the potential energy should be minimal, so it is necessary that
$$
\frac{1}{2}\epsilon^2 \frac{d^2P[f+\epsilon g]}{d\epsilon^2}\big|_{\epsilon=0} \geq 0 \qquad (1)
$$
for all $g$.
My problem arises when my professor says the above is more formal and involved, and that it would suffice to just collect powers of $f(s)$ and write the functional as
$$
P[f(s)] = P_0 + P_1 + P_2 + \cdots,
$$
where $P_i$ has $i$th-order terms in $f$. Then, apparently, equilibrium dictates that $P_1=0$ and a minimum requires that
$$
P_2 \geq 0. \qquad (2)
$$
Could anyone please show me how to go from (1) to (2)? I have tried a second Taylor expansion, now in $F$, but it confuses me, especially since it is not a variable but a function.
Answer: I will try to provide some insight.
Stationarity of the energy functional (in your case, beams, elastic energy plus work potential I suppose) is equivalent to equilibrium: in your case this can be verified quickly by observing how stationarity (nullity of the first variation) leads to the Euler beam equation (I am assuming you are not dealing with more elaborate beam models, but the essence would still be unaffected).
Stability (against infinitesimal perturbations, global stability is left aside at this stage) is a more stringent requirement: on top of equilibrium you want all your nearby accessible configurations to have a higher energy than the equilibrium (the physical picture of a ball in a well or on a saddle is as valid as ever...at least for me).
That is the intuition behind requiring that the quadratic term in the Taylor expansion be positive (hinting at some "convexity" property related to stability): if it were not so, an infinitesimal perturbation at the second order would find nearby states of lower energy (as the ball on top of a half-sphere).
This verbose prologue of mine might well be obvious, for which I apologise.
Now to your direct question, how to go from (1) to (2).
(1) contains a derivative of a function (with respect to $\epsilon$).
Expand $\hat{P}(\epsilon;f;g)=P[f+\epsilon g]$ as your professor suggested:
$$\hat{P}(\epsilon;f;g)=\hat{P}_0 +\epsilon\hat{P}_1(0;f;g) +\epsilon^2\hat{P}_2(0;f;g) +\cdots$$
plug this expression into (1) and perform the derivatives term by term: at $\epsilon=0$ only the quadratic coefficient survives, $\frac{d^2\hat{P}}{d\epsilon^2}\big|_{\epsilon=0}=2\hat{P}_2$, so condition (1) reduces to $\epsilon^2\hat{P}_2\geq 0$ for all $g$, i.e. $P_2\geq 0$, which is (2). One would have to define what is meant by Taylor expansion of a functional to be fully rigorous, but at the intuitive level I hope the above is of some help (one can always "imagine" a functional as a function of infinite variables...) | {
"domain": "physics.stackexchange",
"id": 12047,
"tags": "variational-calculus, functional-derivatives"
} |
Venturimeter - Doubt | Question: I was reading about venturimeters, and this is what I found on the net:
Now my question is: where exactly do we "select" the two points required to apply Bernoulli's equation? For example, I select two points as follows:
If I do this, then I will have a horizontal level and there will be no need to include the potential head in the equation. However, the problem is: how will we calculate the pressure at these points? The pressure will not remain constant throughout the height... Also, if I select any two points as follows:
There will be a problem regarding the height difference between the points being considered... so could anyone please help?
Answer: Actually in a Venturi meter we usually make the approximation that height difference within the tube can be neglected. There are two reasons why this is valid:
Venturi tubes are of small diameters (of the order of a few centimetres) and hence height difference can be neglected.
due to the high speed of water through the tube, the pressure is also considerably large and so height difference does not affect it a lot | {
"domain": "physics.stackexchange",
"id": 99774,
"tags": "newtonian-mechanics, forces, fluid-dynamics, pressure, bernoulli-equation"
} |
why didn't ASCII/utf-8 use 00000000 to represent char 0 (or any other character) | Question: I mean, why is the binary value 0 not used by the ASCII encoding to represent the character '0' (or anything else)? Why did they use it to represent NULL?
Answer: In the modern digital era, character code 0 could have been any useful thing, such as "0" or "A". The use of character code 0 as "null," has a long history, as far back as Murray codes back in 1901. This had a particular advantage on punched tape, where a fresh tape with no punches in it read as all 0's. Distinguishing between "0" and no character proved very valuable. Instead, ASCII prepended all of the binary coded decimal characters for 0-9 with 011 and reserved the first 32 symbols for control characters like null. | {
"domain": "cs.stackexchange",
"id": 21881,
"tags": "binary"
} |
const Eigen::Affine3d; rotation() | Question:
Hey,
I have solved the forward kinematics with moveit.
kinematic_state->setJointGroupPositions(joint_model_group, joint_values);
const Eigen::Affine3d &arm_link_5_state = kinematic_state->getGlobalLinkTransform("arm_link_5");
x = arm_link_5_state.translation().x();
y = arm_link_5_state.translation().y();
z = arm_link_5_state.translation().z();
xx = arm_link_5_state.rotation().??;
xy = arm_link_5_state.rotation().??;
xz = arm_link_5_state.rotation().??;
I don't know how to assign the value of xx, xy, xz of the rotation matrix.
I tried:
xx = arm_link_5_state.rotation().xx();
xx = arm_link_5_state.rotation().x().x();
xx = arm_link_5_state.rotation().nx();
xx = arm_link_5_state.rotation(0,0);
Error: const LinearMatrixType has no member named xx
I don't find the class where it is declared.
It would be great if someone can help me out.
Best regards,
Jonas
Originally posted by JonasG on ROS Answers with karma: 1 on 2018-06-14
Post score: 0
Original comments
Comment by stevejp on 2018-06-14:
The accessor for the linear part of the affine transform is .linear(), not .rotation(). See docs here.
Comment by JonasG on 2018-06-14:
.linear().xx()? I don't find it in the doc.
Comment by stevejp on 2018-06-14:
Check out this tutorial and if you still can't figure it out update your question.
Comment by JonasG on 2018-06-14:
In my code I print the rotation with
ROS_INFO_STREAM("Rotation: " << arm_link_5_state.rotation());
and that works well.
I want to extract the individual values of the rotation matrix.
Thanks for your help @stevejp
Comment by JonasG on 2018-06-14:
Now it works.
Thanks for that @stevejp
Answer:
Ah I misspoke earlier, .rotation() is fine for pulling out the rotation matrix. To access individual elements you can do Eigen::Matrix3d rot = arm_link_5_state.rotation(), and then xx = rot(0,0), etc.
Originally posted by stevejp with karma: 929 on 2018-06-14
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 31016,
"tags": "moveit, ros-hydro"
} |
Nickel plated steel strip for li-ion Battery pack - Purpose of Nickel Plating? | Question: I am trying to build a battery pack from 18650 batteries, each interconnection is made from steel strip, most '18650 strip' has a steel core with nickel plating.
I have a question regarding the purpose of the nickel coating found on 'nickel coated steel strip' which is used for welding 18650 batteries together to build li-ion battery packs.
Is the nickel plating's purpose merely to stop the steel strip from rusting/corroding? Or does the nickel aid in the welding process as well?
I am using a spot welder similar to the one in the image below, high current is sent between the two electrodes creating high heat on the contact patches between the steel strip and battery top/bottom. This heat melts the metals together at the contact points.
Nickel is a better conductor than steel, so if I am correct the nickel coating does not help in the spot-welding process: since the electrical resistance of nickel is lower than that of steel, more current needs to pass through the nickel to produce the same amount of heat. Is there another explanation why (if true) nickel aids the welding process, or is it merely to prevent rust/oxidation of the steel?
Does the same apply when using copper strip to connect the batteries together?
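To make the resistance argument concrete, here is a rough Joule-heating sketch, Q = I^2 R t, for a short current path through a strip of each metal. The strip geometry, pulse current and duration, and the handbook resistivities are all assumed values for illustration, and real spot welding is dominated by contact resistance, which this simple bulk-resistance model ignores:

```python
# Rough Joule-heat comparison for a spot-weld pulse, Q = I^2 * R * t.
# Resistivity values are assumed typical handbook numbers (ohm-metre).
RESISTIVITY = {"nickel": 7.0e-8, "1010 steel": 1.43e-7, "copper": 1.7e-8}

def strip_resistance(rho, length=5e-3, width=7e-3, thickness=0.15e-3):
    # R = rho * L / A for the short current path through the strip
    return rho * length / (width * thickness)

def pulse_heat(rho, current=1000.0, pulse=5e-3):
    # heat deposited in the strip during one welding pulse, in joules
    return current ** 2 * strip_resistance(rho) * pulse

for metal, rho in RESISTIVITY.items():
    print(metal, round(pulse_heat(rho), 3))
```

With these assumed numbers the steel path generates roughly twice the heat of the nickel path and several times that of copper, which is the direction of the argument in the question.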
EDIT:
I was thinking maybe molten Ni has a better viscosity (more fluid) when molten compared to steel, which could be positive in the welding process as the gaps between the strip and battery get filled better. I have found the following table with some info regarding viscosity of molten metals:
Source: https://www.pnas.org/content/pnas/73/4/988.full.pdf
It appears Fe (iron) has a higher value for ϕ (fluidity in reciprocal centipoise), so my theory is probably wrong... however, nickel has a lower melting point than steel, so perhaps in the welding process the average fluidity of the nickel is still higher than the average fluidity of the steel at the surface of the strip (at the contact plane between the strip and the battery housing). Could someone please tell me if this is of any influence and whether my theory is somewhat accurate (and how much difference this difference in fluidity, if it exists, would make)?
I have not yet figured out how the temperature influences the viscosity of the molten metal and how to calculate/obtain the factor which determines the ratio between fluidity/temperature... perhaps someone more knowledgeable will be able to give some insight into this
Thank you very much!
Answer: I can see 5 purposes for using a Ni plated steel strip:
1) Protection against corrosion.
2) Improved electrical conductivity. Ni has about half the resistivity of 1010 steel.
3) Mechanical resistance (due to steel).
4) Low price (also due to steel)
5) Weldability
If the steel could be Cu plated, item 2 would be maximized, while keeping item 1 property. But there are problems with liquid Cu diffusion into iron grain boundaries that poses a great challenge to make Cu plated product. See for example: http://files.aws.org/wj/supplement/WJ_1978_01_s9.pdf
Increased electrical conductivity impairs weldability as you said, and it would be much more difficult with Cu.
Another candidate is zinc-plated steel. It is very common as a way of protection against corrosion, and the electrical resistance of zinc is slightly lower than Ni. However the low boiling point of zinc ($871\,^\circ$C, well below the melting point of iron) is a problem for the welding. | {
"domain": "physics.stackexchange",
"id": 64351,
"tags": "thermodynamics, electricity, electric-current, electrical-resistance, metals"
} |
Use TSFRESH-library to forecast values | Question: I have some issues understanding how to use the TSFRESH library (version 0.4.0) to forecast the next N values of a particular series. Below is my code:
# load data train/test datasets
train, Y, test, YY = prepare_train_test()
# add series ID
train['TS_ID'] = pd.Categorical(train['QTR_HR_START']).codes
test['TS_ID'] = pd.Categorical(test['QTR_HR_START']).codes
# add ordered id for concrete event of series
for id in sorted(train['TS_ID'].unique()):
    train.ix[train.TS_ID == id, 'TIME_ORDER_ID'] = pd.Categorical(train[train.TS_ID == id]['DATETIME']).codes
for id in sorted(test['TS_ID'].unique()):
    test.ix[test.TS_ID == id, 'TIME_ORDER_ID'] = pd.Categorical(test[test.TS_ID == id]['DATETIME']).codes
# perform feature extraction for my signal
extraction_settings = FeatureExtractionSettings()
extraction_settings.IMPUTE = impute # Fill in Infs and NaNs
X = extract_features(train, column_id='TS_ID', feature_extraction_settings=extraction_settings).values
XT = extract_features(test, column_id='TS_ID', feature_extraction_settings=extraction_settings).values
# there should be as example
# model = xgb.DMatrix(X, label=Y, missing=np.nan)
# model.fit()
# model.predict(XT)
However, after line X = extract_features(...) I see at debugger following results
This means that the initial X-dataset/features (shape=(722, 10)) were transformed to shape (80, 1899).
Where does '80' come from? I guess it comes from train.TS_ID. But my XT-dataset still contains 722 rows (9 days * 80 different series per day).
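For intuition about the 80: extract_features produces one row of features per distinct series id (your column_id='TS_ID'), so 722 raw rows spread over 80 series collapse to 80 feature rows. A minimal pure-Python sketch of that collapse (the min/max/mean "features" are invented stand-ins, not tsfresh's actual feature set):

```python
# Hypothetical stand-in for feature extraction: 722 raw (series_id, value) rows
# collapse into one row of "features" per series id, much like
# extract_features(..., column_id='TS_ID') does.
rows = [(i % 80, float(i)) for i in range(722)]  # 80 distinct series ids

by_series = {}
for sid, value in rows:
    by_series.setdefault(sid, []).append(value)

# one feature tuple per series id
table = {sid: (min(v), max(v), sum(v) / len(v)) for sid, v in by_series.items()}
print(len(rows), "->", len(table))  # 722 raw rows -> 80 feature rows
```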
So, how can I predict for 9 days in advance? or is there only forecast for next period?
Answer: TSFRESH already supports time-series forecasting.
See details and example here and here | {
"domain": "datascience.stackexchange",
"id": 1997,
"tags": "python, predictive-modeling, time-series, feature-selection, feature-extraction"
} |
Can a Lagrangian with only non-negative mass dimensional couplings be non-renormalizable? | Question: In the introduction part of this renowned paper by S. Coleman and E. Weinberg, the authors write
The quartic self-coupling is required for renormalizability, to cancel the logarithmic divergence that arises in the amplitude for scalar Coulomb scattering.
The quartic term in scalar QED must be included in the Lagrangian (density) if one wishes to write the most general power-counting renormalizable Lagrangian (i.e., all non-negative mass-dimensional couplings) compatible with Lorentz invariance and gauge invariance.
Although the authors' statement doesn't seem to contradict this, it adds that if one neglects the quartic term, retaining only the quadratic mass term, the theory can become nonrenormalizable even with only non-negative mass-dimensional couplings.
Do they mean massive scalar QED with vanishing quartic coupling is nonrenormalizable?
Answer:
Do they mean massive scalar QED with vanishing quartic coupling is nonrenormalizable?
Yes, they mean precisely that. In Ref.1 you can find the explicit calculation of the quartic counter-term in scalar QED:
$$
Z_\lambda=1+\left(\frac{3e^2}{2\pi^2\lambda}+\frac{5\lambda}{16\pi^2}\right)\frac{1}{\epsilon}+\cdots
$$
With this, the quartic term in the Lagrangian reads
$$
\lambda Z_\lambda|\phi|^4\overset{\lambda\to0}\longrightarrow\frac{3e^2}{2\pi^2}\frac{1}{\epsilon}|\phi|^4
$$
Therefore, if you were to purposely choose to omit this term, you would miss the $\frac{3e^2}{2\pi^2}\frac{1}{\epsilon}$ piece necessary to make the four-point function finite. You would lose renormalisability.
Note: if it turned out that the limit above vanishes, then you would be allowed to omit the quartic term (at least to one-loop order; it may be necessary at higher orders). This is precisely what happens in a scalar theory (in $d=4$): the most general renormalisable Lagrangian includes the terms $\phi^3$ and $\phi^4$. Unlike before, here you may choose to omit either of them without interfering with renormalisability. The reason is clear: if you omit the $\phi^3$ term you get the $\phi\to-\phi$ symmetry (which guarantees that no cubic divergence may arise), while if you omit the $\phi^4$ term the theory becomes super-renormalisable (which guarantees that no quartic divergence may arise).
References.
Srednicki, chapter 65. | {
"domain": "physics.stackexchange",
"id": 46080,
"tags": "quantum-field-theory, quantum-electrodynamics, renormalization"
} |
What are the applications of Scott-Topology in theoretical computer science? | Question: While working, I came across the Scott topology, and I see that Scott-continuous functions show up in the study of models for lambda calculi. What I cannot understand is how this enriches the lambda calculus as we know it.
I'm searching for papers that give some applications of Scott topology in the computability field, as I have not found anything related.
Hoping for help from this great community
Answer: Scott-continuity emerged when Dana Scott built the first model of the untyped λ-calculus while trying to prove that no such model can exist (since any such model $D$ needs to be, simplifying a bit, isomorphic to the function space $D \rightarrow D$, which is not possible set-theoretically, but turns out to be possible when you restrict your attention to computable functions).
Scott-continuity can be understood as a mathematically well-behaved approximation to computability.
[1] is a gentle introduction to the general area of order theory that Scott continuity emerged out of, and [2] is a reference article. [3] has a bit on domain-theory and Scott-continuity and might be the easiest introduction for computer scientists.
B. A. Davey, H. A. Priestley, Introduction to Lattices and Order.
S. Abramsky, A. Jung, Domain theory, https://www.cs.bham.ac.uk/~axj/pub/papers/handy1.pdf
G. Winskel, The Formal Semantics of Programming Languages: An Introduction. | {
"domain": "cstheory.stackexchange",
"id": 5492,
"tags": "computability, lambda-calculus, topology"
} |
Is there a way to avoid overwriting node name when launching a node? | Question:
If I understand correctly, to run a node, you have to use a launch file, and each launch file includes something of the kind
<node pkg="pkgname" type="mainprogram.py" name="nodename" output="screen">
where nodename will overwrite the node name specified in rospy.init_node("...") of mainprogram.py
Then, why does it make sense to name a node with rospy.init_node("..."), given that that name will be overwritten anyway?
Originally posted by Zhoulai Fu on ROS Answers with karma: 186 on 2018-03-20
Post score: 1
Answer:
If I understand correctly, to run a node, you have to use a launch file [..]
No, that is not correct.
Nodes are 'just' binaries that happen to use ROS infrastructure for the input and output. So they can be run directly as you would any other binary (ie: ./name_of_binary, or $CATKIN_WS/devel/lib/$PKG/$BINARY). In that case the node would use the name that is hard-coded in the source.
rosrun $PKG $BINARY also does not override the name.
Originally posted by gvdhoorn with karma: 86574 on 2018-03-20
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by Zhoulai Fu on 2018-03-20:
Thanks for the clarification!
Comment by Zhoulai Fu on 2018-03-20:
By the way, is it correct that all nodes you can invoke from a package PKG are located at $CATKIN_WS/devel/lib/$PKG/? (Apparently not: I just checked the mavros package. It has >5 launch files, yet only two executables in dev/lib/mavros/mavros. But where are the other nodes, if any?)
Comment by gvdhoorn on 2018-03-20:
All ROS nodes are either binaries or scripts. If they are scripts, they might not be placed in the devel/lib location, but be kept in the source space of your workspace.
But in principle: yes, you should be able to start all nodes as normal programs. | {
"domain": "robotics.stackexchange",
"id": 30387,
"tags": "ros, roslaunch, ros-kinetic"
} |
Resource cost and noise effects in quantum teleportation of multiple (entangled) qubits | Question: Suppose you have n qubits that are in an unknown state (which may be entangled, etc.).
Can you teleport this state by teleporting each qubit individually (using a Bell state and a classical channel)?
If not, how many classical bits and what kind of Bell states are needed? Do we suffer an exponential blowup? Does adding small amount of noise (i.e. imperfect hardware) have a large impact on these costs?
Answer: Yes, a many-qubit state (even if it entangled) can be teleported by teleporting each qubit separately using one (perfect) Bell pair and two bits of classical communication. (This is the idea of quantum teleportation: The qubit, including all of its quantum correlations, is teleported.)
This can be seen by observing that teleportation of a qubit implements the identity channel on the qubit, i.e., it maps any state $|\alpha\rangle$ to itself (while preserving the phase!). More precisely, teleportation (like any physical map) is a completely positive trace preserving (CPTP) map which is of the form $\mathcal E(\rho)=\rho$, i.e., it also preserves coherences. Several independent teleportations thus implement the identity on $N$ qubits, i.e., they preserve any states (including entangled ones).
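As a concrete illustration, here is a pure-Python statevector sketch of single-qubit teleportation through a perfect Bell pair. To keep it deterministic it post-selects the measurement outcome (0,0), for which Bob needs no correction; the amplitudes a, b are arbitrary choices:

```python
import math

# Statevector of 3 qubits as 8 complex amplitudes; basis index = q0*4 + q1*2 + q2.
# Qubit 0 is Alice's unknown state, qubits 1 and 2 share a perfect Bell pair.

def cnot(state, c, t):
    out = state[:]
    for i in range(8):
        if (i >> (2 - c)) & 1:
            out[i ^ (1 << (2 - t))] = state[i]  # flip the target bit
    return out

def hadamard(state, q):
    out = [0j] * 8
    s = 1 / math.sqrt(2)
    for i, amp in enumerate(state):
        j = i ^ (1 << (2 - q))
        if (i >> (2 - q)) & 1:      # |1> -> (|0> - |1>)/sqrt(2)
            out[j] += s * amp
            out[i] -= s * amp
        else:                       # |0> -> (|0> + |1>)/sqrt(2)
            out[i] += s * amp
            out[j] += s * amp
    return out

a, b = 0.6, 0.8                     # arbitrary unknown amplitudes, |a|^2 + |b|^2 = 1
s = 1 / math.sqrt(2)
state = [0j] * 8
state[0b000], state[0b011] = a * s, a * s   # a|0> (x) (|00>+|11>)/sqrt(2)
state[0b100], state[0b111] = b * s, b * s   # b|1> (x) (|00>+|11>)/sqrt(2)

state = cnot(state, 0, 1)           # Alice's Bell measurement, step 1
state = hadamard(state, 0)          # Alice's Bell measurement, step 2

# Post-select outcome (0,0) on Alice's qubits (probability 1/4, no correction needed)
bob = [state[0b000], state[0b001]]
norm = math.sqrt(sum(abs(x) ** 2 for x in bob))
bob = [x / norm for x in bob]
print(bob)  # Bob's qubit now carries the amplitudes (a, b)
```

The two measured bits are exactly the two classical bits of communication per qubit; running this once per qubit of an n-qubit register uses n Bell pairs and 2n classical bits.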
On the other hand, if there is noise (i.e., the entangled states are not perfect) you will need to use an encoding/decoding scheme. For general noise (however small it is), this will only work asymptotically, i.e., you will need to teleport a large number $N$ of qubits using $cN$ imperfect Bell pairs (with $c>1$ some constant), and in the limit $N\rightarrow\infty$, the teleportation will work perfectly given $c$ is not larger than the quantum capacity of the channel. | {
"domain": "physics.stackexchange",
"id": 18663,
"tags": "quantum-mechanics, quantum-information"
} |
How long to breathe (the equivalent of) all of the atmosphere? | Question: I have done some rough calculations of how long it might take humanity: approx 80,000 years (that's taking Earth's population as 7.5 billion, 11,000 litres a day of breathing, the weight of 1 litre of air at sea level at 1.225 grams, and the total weight of the atmosphere at 5140 trillion tonnes)
Please let me know if my calculations are massively out!
My question for you is admittedly much more complicated to answer - how long would it take all animal species combined to breathe the equivalent sum total weight of the atmosphere? and how long would it take all life combined (also including the weight of air of respiration in plants, fungi, algae, etc)?
Answer: Volume is probably not the best way to think about respiration for most of life on Earth, as it only really applies to animals with lungs (a small proportion of Earth's biomass). Instead, it would make more sense to think about rates of respiration in terms of oxygen demand. With that in mind, an average human needs around 0.85 kg of oxygen per day (according to NASA). So, the annual oxygen needs of the human population would be found by multiplying 0.85 kg/person/day by 365.25 days/year and 7.5 billion people to get ~2.3e+12 kg of oxygen per year.
For a crude estimate of how long it would take to use up our atmosphere's oxygen at that rate, we could simply divide the total capacity of oxygen in the atmosphere (~1.4e+18 kg) by the annual oxygen demand of the current human population (2.3e+12 kg), to get a rough estimate of 600,000 years for Earth's current population to use up all of Earth's current atmospheric oxygen (note that both calculations require some pretty dubious assumptions, but the problem already seems to start that way if I'm reading it right).
For an even cruder estimate of all life on earth, we can again take the Atmospheric Oxygen capacity (~1.4e+18 kg) and divide by the annual oxygen flux from atmosphere to biosphere (~3.0e+14 kg) to get around ~4700 years. (note that this assumes no flux from the biosphere back into the atmosphere, and also assumes that lithospheric flux is negligible.)
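Both estimates above are easy to reproduce; a back-of-the-envelope script using exactly the figures quoted:

```python
# Reproduce the two order-of-magnitude estimates above.
O2_ATMOSPHERE_KG = 1.4e18          # total atmospheric oxygen (Walker 1980)
PER_PERSON_KG_DAY = 0.85           # NASA figure used above
POPULATION = 7.5e9

human_demand = PER_PERSON_KG_DAY * 365.25 * POPULATION   # kg O2 per year
years_humans = O2_ATMOSPHERE_KG / human_demand           # ~600,000 years

BIOSPHERE_FLUX_KG_YEAR = 3.0e14    # annual atmosphere -> biosphere O2 flux
years_biosphere = O2_ATMOSPHERE_KG / BIOSPHERE_FLUX_KG_YEAR  # ~4700 years

print(round(years_humans), round(years_biosphere))
```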
If you want to go further and calculate how long it would take for just animals to use up an atmosphere worth of oxygen, you could start by looking at how Earth's biomass is distributed, and make some more assumptions about the oxygen demand of other animals compared to humans following this same idea that I've used above.
Capacity and Flux estimates came (via wikipedia) from (Walker JC (1980) "The Oxygen Cycle". The Natural Environment and the Biogeochemical Cycles), but there are probably more up-to-date estimates available somewhere. The general idea would be the same. | {
"domain": "biology.stackexchange",
"id": 11120,
"tags": "respiration, breathing"
} |
Metric of Gravitational Field near Earth's surface | Question: I am trying to do a calculation in which I am trying to work out how a scalar field behaves in the earth's gravitational field near the surface.
I know that the Schwarzschild metric would describe the field, however I am only considering a local scalar fiel near the earth's surface.
What metric should one use?
Answer: To derive an appropriate metric for use near the surface of the earth, start with the Schwarzschild metric
$$
d\tau^2=A(r)dt^2
- \frac{1}{A(r)}dr^2 - (d{\mathbf{x}}^2-dr^2)
$$
with
$$
A(r) = 1-\frac{\kappa}{r}
\hskip2cm
\kappa = \frac{2GM}{c^2}
$$
and
$$
\mathbf{x}=(x,y,z)
\hskip1cm
r = \sqrt{x^2+y^2+z^2}
\hskip1cm
d{\mathbf{x}}^2 = dx^2+dy^2+dz^2.
$$
(I wrote the metric here using $x,y,z$ coordinates instead of the more traditional spherical coordinates. The way I wrote it, the last parenthesized term corresponds to the "angular part.")
Suppose that we want to approximate this metric in the vicinity of the point $\mathbf{x}_0$. Write $\mathbf{x}=\mathbf{x}_0+\mathbf{X}$ and expand to lowest order in $\mathbf{X}$. Clearly $d\mathbf{x}=d\mathbf{X}$. To handle $r$, use
$$
r \approx r_0 + \mathbf{u}_0\cdot\mathbf{X}
\hskip2cm
\mathbf{u}_0=\frac{\mathbf{x}_0}{r_0}
\hskip2cm
r_0^2=\mathbf{x}_0\cdot\mathbf{x}_0.
$$
Then
$$
dr\approx\mathbf{u}_0\cdot d\mathbf{X}
\hskip2cm
A(r)\approx 1-\frac{\kappa}{r_0 + \mathbf{u}_0\cdot\mathbf{X}}.
$$
Finish by using this last expression to expand $A(r)$ and $1/A(r)$ to first order in $\mathbf{X}$. Substituting these approximations back into the Schwarzschild metric gives
$$
d\tau^2\approx (\alpha+\beta\mathbf{u}_0\cdot\mathbf{X})dt^2
- (\mu+\nu\mathbf{u}_0\cdot\mathbf{X})(\mathbf{u}_0\cdot d\mathbf{X})^2
- d\mathbf{X}^2
$$
with $\mathbf{X}$-independent coefficients $\alpha,\beta,\mu,\nu$ determined explicitly by the preceding steps. We can simplify this by choosing, say, $\mathbf{x}_0=(R,0,0)$, so that $r_0=R$ and $\mathbf{u}_0=(1,0,0)$.
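A quick numeric sanity check of the linearization $r \approx r_0 + \mathbf{u}_0\cdot\mathbf{X}$ used above (the Earth-like radius and the displacement are assumed values):

```python
import math

R_EARTH = 6.371e6                      # assumed Earth radius in metres
x0 = (R_EARTH, 0.0, 0.0)               # the choice x0 = (R, 0, 0) made above
X = (12.0, 7.0, -4.0)                  # a small displacement, metres

r0 = math.sqrt(sum(c * c for c in x0))
u0 = tuple(c / r0 for c in x0)
x = tuple(a + b for a, b in zip(x0, X))

r_exact = math.sqrt(sum(c * c for c in x))
r_linear = r0 + sum(u * c for u, c in zip(u0, X))

print(r_exact - r_linear)  # residual is second order in |X|/r0, i.e. tiny
```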
You can also look up the Rindler metric. That might be easier, because there are probably lots of references that have already worked out the behavior of a (quantum) scalar field in Rindler spacetime. You can probably find some leads by looking up the Unruh effect, which is a flat space-time analogue of the (eternal black hole version of the) Hawking effect, using the Rindler metric in place of the Schwarzschild metric. | {
"domain": "physics.stackexchange",
"id": 53016,
"tags": "general-relativity, metric-tensor"
} |
How do I add resistance to a ball joint? | Question: I'm trying to make a simple design that consists of two ball joints connected to each other in such a way that I have rotation and translation of the second joint, but with an adjustable resistance. I'm no engineer, so I'm unsure about the whole thing, and I might end up using different types of mechanical joints altogether. How can I add resistance to a ball joint? Is there any well-known two joint structure that resembles what I want?
Answer: The ball joint can be seated in a split-spherical-shell socket that is tightened by a screw. The one below uses a hydraulic mechanism to tighten all the clamps present with a single knob, but other models use a wire. Of course, a direct bolt could also be used per joint.
Most obviously seen in the photo on the "wrist joint" and the "shoulder joint". It's dark, but if you look closely you can see the split in the spherical shell that allows adjusting how tightly it clamps the ball.
Thinking about it, it might actually draw the ball into a wedge-shaped groove to exert clamping pressure, rather than closing the spherical half-shell around the ball.
Mitutoyo 7033B | {
"domain": "engineering.stackexchange",
"id": 4522,
"tags": "mechanical-engineering"
} |
What do you call the difference between the on and off temperatures in a simple thermostat? | Question: A simple thermostat will turn on at one temperature and off at a higher temperature. This keeps the thermostat from cycling on and off too quickly. The difference between these values is sometimes called dead band and sometimes called hysteresis. Both words apply to more complex processes like hydraulic valves and PID controllers, but what I am curious about is which one is better used to describe a simple thermostat.
What term(s) do you use in this context? Please cite sources if you can.
For some background, this question came up as a result of this Engineering SE question
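Whatever one calls it, the behavior itself is easy to pin down in code; below is a hedged sketch of an on-off heater controller with such a band, the switch points being arbitrary choices:

```python
class Thermostat:
    """On-off (bang-bang) heater control with a dead band / hysteresis band."""

    def __init__(self, on_below, off_above):
        assert on_below < off_above
        self.on_below = on_below    # heater turns on at or below this temperature
        self.off_above = off_above  # heater turns off at or above this temperature
        self.heating = False

    def update(self, temp):
        if temp <= self.on_below:
            self.heating = True
        elif temp >= self.off_above:
            self.heating = False
        # inside the band the previous state is kept: this is the hysteresis
        return self.heating

t = Thermostat(on_below=18.0, off_above=21.0)
states = [t.update(x) for x in [17.0, 19.0, 21.5, 20.0, 17.5]]
print(states)  # [True, True, False, False, True]
```

The 19.0 and 20.0 readings fall inside the band and leave the output unchanged, which is exactly what keeps the device from cycling rapidly.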
Answer: Set point and trigger point are the two terms I am most familiar with for temperature controls. My experience is limited to Thermotron and ESPEC environmental temperature chambers. I believe searching the online resources sections of both of these organizations might lead to better-focused definitions. Below is a short list of terms defined in a Watlow controller.
In addition Watlow PID Controller user guide defines the following.
Positive Dead band
Zero Dead band
Negative Dead band
Below are excerpts from the user guide.
On-Off Controller
There is more interesting information with regard to the on/off temperatures in the User Guide.
References:
User's Guide EZ-ZONE® PM PID Controller Models
On/Off Control
Introduction To Temperature Controllers | {
"domain": "engineering.stackexchange",
"id": 1972,
"tags": "control-engineering, terminology"
} |
Problem with ROS_ASSERT | Question:
I am running into the same problem as:
https://answers.ros.org/question/216315/help-using-epos_hardware/
std::string serial_number_str;
if(!config_nh_.getParam("serial_number", serial_number_str))
{
  ROS_ERROR("You must specify a serial number");
  valid_ = false;
}
else
{
  ROS_ASSERT( SerialNumberFromHex(serial_number_str, &serial_number_) );
  ROS_INFO_STREAM("serial number="<<serial_number_str<<" ,hex-> "<<serial_number_);
}
The exact same code above works on one machine running Ubuntu 14.04 but failed on a PC running 14.02 (both x64)
However, I found that if I remove ROS_ASSERT, the SerialNumberFromHex() function works. If I use the original code, serial_number_ never obtains the correct reading from serial_number_str.
Anyone can explain why this is happening? I know I can get it to work if I modify the code, but I really want to understand why this is happening.
Also, when I put in ROS_INFO() calls to print out the values, I noticed ROS_ASSERT would not let me print anything:
bool SerialNumberFromHex(const std::string& str, uint64_t* serial_number) {
  std::stringstream ss;
  ss << std::hex << str;
  ROS_INFO_STREAM("ss="<<ss.str());
  ss.flush();
  ROS_INFO_STREAM("ss="<<ss.str());
  ss >> *serial_number;
  ROS_INFO_STREAM("ss="<<ss.fail());
  return true;
}
This kinda tells me that maybe the SerialNumberFromHex() function is never called? So I tried ROS_ASSERT(false) directly, and surprisingly it didn't throw a fatal error as expected. So something is wrong with ROS_ASSERT on my machine, yet there is no compilation error. Does anyone have a clue what's causing the problem (like a compiler problem, etc.)?
Originally posted by johnyang on ROS Answers with karma: 66 on 2017-09-13
Post score: 0
Answer:
You should check what build flags you're running. If you have turned on optimizations and are building for release, asserts will be compiled out for faster operation. If you need the code inside to run in regular operation, it should not be placed inside an assert.
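The effect is easy to demonstrate in Python, whose -O flag strips assert statements much as an NDEBUG (release) build can strip ROS_ASSERT in C++. A sketch, with the parse_serial name invented:

```python
import subprocess, sys, textwrap

# A side effect hidden inside an assert disappears when asserts are compiled
# out, here via CPython's -O flag, analogous to a release (NDEBUG) C++ build.
prog = textwrap.dedent("""
    calls = []
    def parse_serial():
        calls.append(1)      # the 'real work', wrongly placed inside an assert
        return True
    assert parse_serial()
    print(len(calls))
""")

normal = subprocess.run([sys.executable, "-c", prog],
                        capture_output=True, text=True)
release = subprocess.run([sys.executable, "-O", "-c", prog],
                         capture_output=True, text=True)
print(normal.stdout.strip(), release.stdout.strip())  # 1 0
```

In the "release" run the assert, and the function call inside it, never executes, which matches the symptom described in the question.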
Originally posted by tfoote with karma: 58457 on 2019-05-20
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 28839,
"tags": "roslaunch"
} |
Third project Euler solution | Question: The things I'm most interested in from the review are:
The performance of the code
Overall review of the code structure, styling rules and naming conventions.
What is the largest prime factor of the number 600851475143 ?
import math

#----------------------------------------------------------------------------------
def prime_factorization(num_to_factor):
    """Calculates the prime factors of the input number based on
    Pollard's rho algorithm for integer factorization.
    Arguments:
        type:<int> - A number to factor.
    Return:
        type:<arr> - An array holding the prime factors of the input number.
    """
    #------------------------------------------------------------------------------
    def calc_factor(num_to_factor):
        x, x_fixed = 2, 2
        cycles = 2
        factor = 1
        while factor == 1:
            for counter in range(cycles):
                if factor > 1: break
                x = (x * x - 1) % num_to_factor
                factor = math.gcd(x - x_fixed, num_to_factor)
            x_fixed = x
            cycles *= 2
        return factor

    factors = []
    while num_to_factor > 1:
        # Gets the smallest factor of (num_to_factor) and appends it to the array.
        # Divides (num_to_factor) by it, and starts searching for the next factor.
        factor = calc_factor(num_to_factor)
        factors.append(factor)
        num_to_factor //= factor
    return factors

#----------------------------------------------------------------------------------
def main():
    n = 600851475143
    print(' Prime factorization of {} is: {}'.format(n, prime_factorization(n)))

#----------------------------------------------------------------------------------
if __name__ == "__main__":
    main()
Answer: Disclaimer: I have never used or implemented Pollard's rho algorithm.
calc_factor
I think this function does not need to be a nested function. It could be a normal function on its own (and thus be documented accordingly).
Maybe, its name and the name for its parameter could be updated for something that sounds more usual, maybe get_factor(n)...
Instead of checking if factor > 1 at the beginning of the loop, you could check if just after updating the value of factor.
Also, instead of trying to break out of 2 loops, you could simply return the value directly if factor > 1 and you realise you don't need to check the value of factor in other places.
You'd get something like:
def get_factor(n):
    x, x_fixed = 2, 2
    cycles = 2
    while True:
        for counter in range(cycles):
            x = (x * x - 1) % n
            factor = math.gcd(x - x_fixed, n)
            if factor > 1:
                return factor
        x_fixed = x
        cycles *= 2
prime_factorization
Here again, you could probably rename the parameter for something shorter.
I am not sure what the conventional way to do is but once you get a factor with the other functions, it may be worth checking how many times it divides.
Potential issues
Finally, I have the feeling that the algorithm does not provide a real prime factorisation, because the factors returned by calc_factor are not always prime factors; they are just "random" factors.
If it may help you, here is the function I've used many times for Project Euler factorisations:
def prime_factors(n):
    """Yields prime factors of a positive number."""
    assert n > 0
    d = 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            yield d
        d += 1
    if n > 1:  # to avoid 1 as a factor
        assert d <= n
        yield n
yield n | {
"domain": "codereview.stackexchange",
"id": 28550,
"tags": "python, python-3.x, primes"
} |
Flux of electric field through a closed surface with no charge inside? | Question: I'm reading the Feynman lectures on electromagnetism, and in Vol. II, Chapter 1-4 he talks about the flux of the electric field and says that the flux of $E$ through a closed surface is equal to the net charge inside divided by $\epsilon_0$.
If there are no charges inside the surface, even though there are charges nearby outside the surface, the average normal component of $E$ is zero, so there is no net flux through the surface
I cannot see why the net flux is zero here. Say we have a closed unit sphere at the origin with no charge inside it and at the point $(2, 0, 0)$ we have some charge $q$.
Well doesn't this charge then define the electric field $E$ for the system and it will flow into the unit sphere on the right hand side, and out of the unit sphere on the left hand side?
Furthermore, as the strength of the electric field decreases with distance from $q$ won't we have more flux going into the right hand side which is closer to the charge $q$, and less flux leaving through the left hand side as it is further away - and hence we should have a non-zero flux?
Can someone please explain what I am misinterpreting here?
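This setup is straightforward to check numerically. The sketch below integrates $\mathbf{E}\cdot\hat{\mathbf{n}}$ over the unit sphere by midpoint quadrature, in units where $q/4\pi\epsilon_0=1$ (so an enclosed charge gives flux $4\pi$); the grid size is an arbitrary choice:

```python
import math

def flux_through_unit_sphere(charge, n=200):
    """Midpoint quadrature of E.n over the unit sphere; units with q/(4*pi*eps0) = 1."""
    total = 0.0
    dth, dph = math.pi / n, 2 * math.pi / n
    for i in range(n):
        th = (i + 0.5) * dth
        sin_th, cos_th = math.sin(th), math.cos(th)
        for j in range(n):
            ph = (j + 0.5) * dph
            # point on the sphere; it is also the outward unit normal
            p = (sin_th * math.cos(ph), sin_th * math.sin(ph), cos_th)
            d = tuple(pc - cc for pc, cc in zip(p, charge))
            r3 = sum(c * c for c in d) ** 1.5
            total += sum(dc * pc for dc, pc in zip(d, p)) / r3 * sin_th * dth * dph
    return total

outside = flux_through_unit_sphere((2.0, 0.0, 0.0))  # the setup described above
inside = flux_through_unit_sphere((0.3, 0.0, 0.0))   # same charge moved inside
print(outside, inside)  # ~0 for the outside charge, ~4*pi for the inside one
```

The inward flux on the near side really is cancelled by the outward flux elsewhere, as the accepted answer explains geometrically.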
Answer: You have more flux per unit area going into the right side, but the area on the right side is smaller. These two balance out so that the total flux is the same going in as going out.
The part of the sphere which has electric flux going in, traced in red, is less than half the area of the sphere.
Incidentally, flux per unit area is just the electric field. | {
"domain": "physics.stackexchange",
"id": 24925,
"tags": "electrostatics, electric-fields, gauss-law"
} |
Mixed symmetrization and antisymmetrization / Combinatorics | Question: I have the following sum of 10 terms:
$$
\delta^{ab}f^{cde} + \delta^{ac}f^{bde} + \delta^{ad}f^{bce} + \delta^{ae}f^{bcd} +
\delta^{bc}f^{ade} + \delta^{bd}f^{ace} + \delta^{be}f^{acd} + \delta^{cd}f^{abe} +
\delta^{ce}f^{abd} + \delta^{de}f^{abc}
$$
In other words I consider all permutations of 5 indices, but only use those for which the first two indices and the last three are ordered (at the same time).
On top of that, $\delta$ is symmetric and $f$ is fully antisymmetric.
What I am looking for is some short-hand notation which would evaluate exactly to this sum. Consider the following sum as an easy example:
$$
\delta^{ab}\delta^{cd} + \delta^{ac}\delta^{bd} + \delta^{ad}\delta^{bc} =
3\,\delta^{(ab}\delta^{cd)}
$$
Normally $\delta^{(ab}\delta^{cd)}$ would evaluate to 24 terms, but because of the symmetry property of $\delta$, these simplify to three. I am looking for a similar notation for the first sum.
Because $\delta$ is symmetric and $f$ antisymmetric, one has $\delta^{(ab}f^{cde)}=0$ and $\delta^{[ab}f^{cde]}=0$, so these don't fit. And $\delta^{(ab)}f^{[cde]}$ is incorrect as it doesn't mix the two sets of indices. I came up with some kind of "mixed symmetrization":
$$\delta^{(ab\,|}f^{cde]} $$
where I defined:
$$
\begin{align}
T^{(a_1 \cdots a_m \, |\, a_{m+1} \cdots a_n]} &= \text{sum of all } n! \text{ permutations, where each permutation gets a sign depending} \\ &\text{ on the number of transpositions needed to put } \mathcal{P}\left(a_{m+1} \cdots a_n \right) \text{ in canonical} \\ &\text{ order.}
\end{align}
$$
This indeed evaluates (up to a factor of 10) to the first sum, but it feels a bit awkward to introduce a notation that is not generally usable (and for which properties have to be re-derived). As these kinds of "ordered" sums are surely not uncommon, I expect them to be treated in some corner of combinatorics.
Does anybody know whether such 'mixed symmetrisation" already exists in literature?
Or even better, does anybody know of a simple way to rewrite the first sum, maybe in some combinatorics notation?
Many thanks in advance!
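The cancellation $\delta^{(ab}f^{cde)}=0$ invoked above can be checked mechanically; a small pure-Python enumeration over all $5!$ permutations, applying the symmetry of $\delta$ and the antisymmetry of $f$ by canonicalizing each term:

```python
from itertools import permutations

def canonical(pair, triple):
    """Sort delta's indices freely (symmetric) and f's with a sign (antisymmetric)."""
    pair = tuple(sorted(pair))
    triple = list(triple)
    sign = 1
    for i in range(3):               # bubble sort, one sign flip per swap
        for j in range(2 - i):
            if triple[j] > triple[j + 1]:
                triple[j], triple[j + 1] = triple[j + 1], triple[j]
                sign = -sign
    return (pair, tuple(triple)), sign

total = {}
for p in permutations("abcde"):      # all 120 terms of delta^{(ab} f^{cde)}
    key, sign = canonical(p[:2], p[2:])
    total[key] = total.get(key, 0) + sign

print(all(v == 0 for v in total.values()), len(total))
```

Every one of the 10 ways of splitting five indices into a pair and a triple cancels term by term, because the odd and even orderings of the antisymmetric triple contribute with opposite signs; the 10 surviving keys are exactly the 10 terms of the ordered sum in the question.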
Answer: What you are doing amounts to computing a tensor product and decomposing it into irreducible components. There is a standard way of doing this with Littlewood-Richardson rule, Young symmetrizers etc. and there are nice pictures, Young diagrams (http://en.wikipedia.org/wiki/Young_tableau), that help to visualize different types of symmetries. In many cases it is sufficient to use Young diagrams, so there is no need to write indices directly. For example, $\delta$ is a rank two symmetric, it is depicted by a diagram made of two boxes in a row, we can say it is $(2,0,0,...)$ with the meaning that all other rows are of zero length. Your $f$ is a rank three antisymmetric, it is depicted by a diagrams with three boxes in a column, $(1,1,1,0,...)$. Then $\delta \otimes f$ contains two irreducible components, $(3,1,1,0,...)$ and $(2,1,1,1)$. Each box corresponds to one index and different arrangements of the same number of boxes are known to correspond to different types of tensors one can have with the same number of indices. You can look in Hamermesh or Fulton&Harris.
There are several notations that can be of use to you and are actually used by people. First of all, it is convenient to denote all indices belonging to a group in which a tensor is symmetric or antisymmetric by the same letter: for example, $f^{uuu}$ or $f^{u[3]}$ instead of your $f^{abc}$, and $\delta^{aa}$ or $\delta^{a(2)}$ for your $\delta^{ab}$, where the number in round (square) brackets counts the symmetric (antisymmetric) indices. But this works only for rather simple types of symmetries, like the one you need.
In the case of the $T^{a(n)|u[m]}$ that you gave ($n$ symmetric indices and $m$ antisymmetric) there are still only two irreducible components. One is given by $T^{a(n-1)u|u[m]}$, where antisymmetrization over all $u$ indices is implied; since the tensor is already antisymmetric in the last $u[m]$, this requires only $m+1$ terms. The second irreducible component is given by $T^{a(n)|au[m-1]}$, and it requires $n+1$ symmetric permutations. This is just a shorthand notation to save time.
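As a sanity check on these symmetry statements, the vanishing of $\delta^{(ab}f^{cde)}$ and $\delta^{[ab}f^{cde]}$ noted in the question can be verified numerically. The following Python sketch (dimension $d=5$ and all names chosen purely for illustration) builds a random rank-three antisymmetric $f$, forms $T^{abcde}=\delta^{ab}f^{cde}$, and fully symmetrizes and antisymmetrizes it over all five indices:

```python
import itertools
import math
import numpy as np

def parity(perm):
    """Sign of a permutation: +1 if even, -1 if odd (count inversions)."""
    inv = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
              if perm[i] > perm[j])
    return -1 if inv % 2 else 1

d = 5  # need d >= 5 so the rank-5 antisymmetrization is not trivially zero
rng = np.random.default_rng(0)

delta = np.eye(d)  # symmetric rank-2 tensor
raw = rng.standard_normal((d, d, d))
f = sum(parity(p) * np.transpose(raw, p)
        for p in itertools.permutations(range(3)))  # antisymmetric rank-3

T = np.einsum('ab,cde->abcde', delta, f)  # T^{abcde} = delta^{ab} f^{cde}

perms5 = list(itertools.permutations(range(5)))
sym = sum(np.transpose(T, p) for p in perms5) / math.factorial(5)
anti = sum(parity(p) * np.transpose(T, p) for p in perms5) / math.factorial(5)

print(np.abs(sym).max(), np.abs(anti).max())  # both zero up to rounding
```

Both projections vanish to floating-point precision, which is the pairwise-cancellation mechanism at work: symmetrizing over two indices in which $f$ is antisymmetric (or antisymmetrizing over the two indices of $\delta$) kills every term.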
The symmetrization operator you defined is strange, and I cannot see that you followed your own recipe in the first formula. For example, the 8th term $\delta^{cd}f^{abe}$ must be accompanied by $-\delta^{dc}f^{abe}$: this is another of the $5!$ permutations, and it carries a sign according to your procedure. But these two terms just cancel each other. In fact, this always happens when you try to carry antisymmetrized indices onto a tensor that is symmetric in them, and vice versa. Hope this is helpful. | {
"domain": "physics.stackexchange",
"id": 8789,
"tags": "tensor-calculus"
} |
Uncertainty principle with two photons | Question: Imagine an experimental setup in which you have to measure the momentum and location of a particle. To measure it, we know we will have to affect it, and the uncertainty principle comes into the picture; but I have a different setup. The classical setup is that you fire a photon to measure the location of the particle, but the particle will change its momentum due to the collision with the photon.
I decided to take two photons. I will shoot one photon from either side of the particle, so the effects of the two photons cancel each other, giving an accurate measurement. To understand this, see the picture below.
The classic experiment
My thought experiment
In the second experiment, we shoot a photon of the same energy as the first from the opposite side to counteract its effect, so the electron would continue on its original path. Please tell me where I am wrong.
EDIT
We will have to use multiple photons, but equal numbers from both sides and in opposite directions.
Answer: First of all, the uncertainty principle is more than just disturbance of observation.
From the Wikipedia article "Uncertainty principle":
Historically, the uncertainty principle has been confused with a somewhat similar effect in physics, called the observer effect, which notes that measurements of certain systems cannot be made without affecting the systems. Heisenberg offered such an observer effect at the quantum level (see below) as a physical "explanation" of quantum uncertainty.

It has since become clear, however, that the uncertainty principle is inherent in the properties of all wave-like systems, and that it arises in quantum mechanics simply due to the matter wave nature of all quantum objects. Thus, the uncertainty principle actually states a fundamental property of quantum systems, and is not a statement about the observational success of current technology. It must be emphasized that measurement does not mean only a process in which a physicist-observer takes part, but rather any interaction between classical and quantum objects regardless of any observer.
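The quoted point, that the uncertainty relation is a property of wave-like systems rather than of measurement technology, can be illustrated numerically. The sketch below is a minimal example (units with $\hbar = 1$, grid parameters chosen for illustration): it builds Gaussian wavepackets of several widths, computes $\Delta x$ directly on the grid and $\Delta p$ from the FFT, and shows that the product stays pinned at $\hbar/2$, so squeezing the position spread necessarily inflates the momentum spread.

```python
import numpy as np

hbar = 1.0
N, L = 4096, 80.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

products = []
for sigma in (0.5, 1.0, 2.0):
    # Gaussian wavepacket with position spread sigma
    psi = (2 * np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (4 * sigma**2))

    prob_x = np.abs(psi) ** 2
    prob_x /= prob_x.sum()
    dx_spread = np.sqrt((x**2 * prob_x).sum() - (x * prob_x).sum() ** 2)

    # momentum-space amplitudes via FFT; p = hbar * (angular frequency)
    phi = np.fft.fft(psi)
    p = 2 * np.pi * hbar * np.fft.fftfreq(N, d=dx)
    prob_p = np.abs(phi) ** 2
    prob_p /= prob_p.sum()
    dp_spread = np.sqrt((p**2 * prob_p).sum() - (p * prob_p).sum() ** 2)

    products.append(dx_spread * dp_spread)

print(products)  # each product is hbar/2 = 0.5: a Gaussian saturates the bound
```

No measurement apparatus appears anywhere in this computation; the trade-off is purely a Fourier-transform property of the wavefunction itself.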
Now, you've drawn 'the' path of the electron as if the electron has a definite trajectory and that two photons of equal and opposite momentum interact with the electron at a definite location.
However, the state of definite position has maximum 'uncertainty' in momentum! Not only can there not be a definite trajectory but, if the electron is localized by an interaction, one cannot escape the inherent uncertainty of that localized state. | {
"domain": "physics.stackexchange",
"id": 14890,
"tags": "photons, heisenberg-uncertainty-principle, measurements, thought-experiment"
} |
Las Vegas vs Monte Carlo randomized decision tree complexity | Question: Background:
Decision tree complexity or query complexity is a simple model of computation defined as follows. Let $f:\{0,1\}^n\to \{0,1\}$ be a Boolean function. The deterministic query complexity of $f$, denoted $D(f)$, is the minimum number of bits of the input $x\in\{0,1\}^n$ that need to be read (in the worst case) by a deterministic algorithm that computes $f(x)$. Note that the measure of complexity is the number of bits of the input that are read; all other computation is free.
Similarly, we define the Las Vegas randomized query complexity of $f$, denoted $R_0(f)$, as the minimum number of input bits that need to be read in expectation by a zero-error randomized algorithm that computes $f(x)$. A zero-error algorithm always outputs the correct answer, but the number of input bits read by it depends on the internal randomness of the algorithm. (This is why we measure the expected number of input bits read.)
We define the Monte Carlo randomized query complexity of $f$, denoted $R_2(f)$, to be the minimum number of input bits that need to be read by a bounded-error randomized algorithm that computes $f(x)$. A bounded-error algorithm always outputs an answer at the end, but it only needs to be correct with probability greater than $2/3$ (say).
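To make the zero-error model concrete, here is a toy sketch (not from the question; all names are illustrative): compute $OR_n$ by probing the input bits in a uniformly random order and stopping at the first 1. The output is always correct; only the query count is random. On an input with a single 1, the expected number of probes is $(n+1)/2$, whereas a deterministic tree can be forced to probe all $n$ bits before finding it.

```python
import random

def or_las_vegas(x, rng):
    """Zero-error algorithm for OR_n: probe bits in random order, stop at
    the first 1.  Always correct; only the number of queries is random."""
    order = list(range(len(x)))
    rng.shuffle(order)
    queries = 0
    for i in order:
        queries += 1
        if x[i] == 1:
            return 1, queries
    return 0, queries  # read everything: x must be all zeros

rng = random.Random(0)
n, trials = 101, 20000
x = [0] * n
x[37] = 1  # a single 1 at an arbitrary position

total = 0
for _ in range(trials):
    ans, q = or_las_vegas(x, rng)
    assert ans == 1  # zero error: the answer is always right
    total += q

print(total / trials)  # close to (n + 1) / 2 = 51 expected queries
```

This only illustrates the model, not a separation: on the all-zeros input any algorithm, randomized or not, must read all $n$ bits to be sure the answer is 0.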
Question
What is known about the question of whether
$R_0(f) = \Theta(R_2(f))$?
It is known that
$R_0(f) = \Omega(R_2(f))$
because Monte Carlo algorithms are at least as powerful as Las Vegas algorithms.
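The reduction behind this direction is Markov's inequality: run the Las Vegas algorithm with a query budget of three times its expected cost, and if the budget runs out, stop and output an arbitrary answer; the probability of aborting is at most $1/3$. A toy Python sketch (the searching subroutine and the input are made up for illustration):

```python
import random

def find_one_las_vegas(x, rng):
    """Toy zero-error subroutine: sample positions with replacement until
    a 1 is seen.  Never wrong, but its query count is unbounded."""
    queries = 0
    while True:
        queries += 1
        if x[rng.randrange(len(x))] == 1:
            return 1, queries

def monte_carlo_by_truncation(x, rng, expected_queries):
    """Markov truncation: allow 3x the expected budget, then give up and
    output a (possibly wrong) default answer."""
    for _ in range(3 * expected_queries):
        if x[rng.randrange(len(x))] == 1:
            return 1
    return 0  # wrong on this input, but with probability at most 1/3

rng = random.Random(1)
n = 100
x = [1] * (n // 4) + [0] * (3 * n // 4)  # a quarter of the bits are 1
# Each probe succeeds with probability 1/4, so the zero-error subroutine
# makes 4 queries in expectation on this input.
trials = 10000
errors = sum(monte_carlo_by_truncation(x, rng, 4) != 1 for _ in range(trials))
print(errors / trials)  # abort rate: (3/4)^12, about 0.03, well below 1/3
```

Markov's inequality only guarantees an abort probability of $1/3$; on this particular input the truncated algorithm does much better, since each of its 12 allowed probes independently finds a 1 with probability $1/4$.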
I recently learned that there is no known separation between the two complexities. The latest reference I can find for this claim is from 1998 [1]:
[1] Nikolai K. Vereshchagin, Randomized Boolean decision trees: Several remarks, Theoretical Computer Science, Volume 207, Issue 2, 6 November 1998, Pages 329-342, ISSN 0304-3975, http://dx.doi.org/10.1016/S0304-3975(98)00071-1.
The best known upper bound of one in terms of the other is
$R_0(f) = O(R_2(f)^2 \log{R_2(f)})$
due to [2]:
[2] Kulkarni, R., & Tal, A. (2013, November). On Fractional Block Sensitivity. In Electronic Colloquium on Computational Complexity (ECCC) (Vol. 20, p. 168).
I have two specific questions.
[Reference request]: Is there a more recent paper (after 1998) that discusses this problem?
More importantly, is there a candidate function that is conjectured to separate these two complexities?
Added in v2: Added ref [2], emphasized second question about existence of candidate function.
Answer: This question has been resolved!
A few days ago Andris Ambainis, Kaspars Balodis, Aleksandrs Belovs, Troy Lee, Miklos Santha, and Juris Smotrovs uploaded a preprint showing the existence of a total function $f$ which satisfies
$R_0(f) = \tilde{\Omega}(R_2(f)^{2})$
and even
$R_0(f) = \tilde{\Omega}(R_1(f)^{2})$,
where $R_1(f)$ denotes 1-sided bounded-error randomized query complexity.
Both separations are optimal up to log factors! | {
"domain": "cstheory.stackexchange",
"id": 3302,
"tags": "cc.complexity-theory, reference-request, query-complexity, decision-trees"
} |