anchor | positive | source |
|---|---|---|
Search a number | Question: Given n numbers, design an algorithm to find the smallest $n^{\frac{2}{3}}$numbers, in sorted order. (Assume $n^{\frac{2}{3}}$ is an integer.)
I don't understand this question. Can I simply set $x = n^{\frac{2}{3}}$ and fetch $A[x]$?
Answer: That would only give you the $x$-th number. What the question is asking is to return a sorted list containing the smallest $n^{\frac{2}{3}}$ numbers of the input.
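As a quick sanity check in Python (heapq.nsmallest is just for verifying what the expected output looks like; it is not the algorithm the exercise is asking you to design):

```python
import heapq

def smallest_k_sorted(a, k):
    # Return the k smallest elements of a, in sorted order.
    # heapq.nsmallest runs in O(n log k) time.
    return heapq.nsmallest(k, a)

a = [4, 3, 6, 1, 2, 5, 8, 7]
k = round(len(a) ** (2 / 3))      # n^(2/3) = 4 for n = 8
print(smallest_k_sorted(a, k))    # [1, 2, 3, 4]
```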
For example if $n=8$ and the input consists of the numbers $\langle 4, 3, 6, 1, 2, 5, 8, 7\rangle$ then you need to return the $x = n^\frac{2}{3} = 4$ smallest numbers in sorted order, i.e., $\langle 1, 2, 3 ,4 \rangle$. | {
"domain": "cs.stackexchange",
"id": 16246,
"tags": "algorithms, search-algorithms"
} |
How we calculate Precision-Recall Curve? | Question: As far as I know, precision and recall are two single values. How we can plot a curve from these two single values? I think I should calculate a set of values for each of them but how?
Afterwards, the curve can be depicted using the fact that they are inversely related.
$$
\text{Precision}=\frac{tp}{tp+fp}
$$
$$
\text{Recall}=\frac{tp}{tp+fn}
$$
Where $tp = \text{True Positives}$, $fp = \text{False Positives}$ and $fn = \text{False Negatives}$.
Can anybody explain how we can plot a curve from these two single values?
One answer can be found here, but I do not believe I have grasped the point well.
Answer: You're right. When you just have a single precision and a single recall value, you get a precision-recall point, not a curve.
However, machine learning models typically do not output discrete categories, even when such models fall into the "classification" paradigm (which is why, for instance, Frank Harrell dislikes the term "classification").
For instance, a logistic regression or a neural network, using a method like model.predict_proba in Python (not just model.predict), will return values on a continuum. The model.predict method will predict the category with the highest predicted value in predict_proba, but that is a stage on top of the logistic regression or neural network itself, a stage that combines the machine learning model output with a decision rule about what to do with that output.
You get the precision-recall curve by choosing multiple decision rules, calculating the precision and recall for each decision rule, and plotting those precision-recall pairs. The multiple decision rules have to do with thresholds. As logistic regression (especially) and neural network outputs are often on the interval $[0,1]$, they can be interpreted as probabilities. The default decision rule (such as in model.predict) is to assign to category $0$ if the probability of category $1$ (which is what is predicted) is less than $0.5$, otherwise category $1$. Another decision rule might use a threshold of $0.2$ or $0.9$ instead of $0.5$.
Loop over many thresholds to produce an entire PR curve. In the code below, I let sklearn calculate the thresholds, but if you set the thresholds in for value in thresholds to be something like np.arange(0, 1, 0.001), you will get similar results.
from sklearn.metrics import precision_recall_curve, precision_score, recall_score
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(2024)
# Define some simulated model predictions and binary outcomes
#
preds = np.random.beta(1/3, 1/3, 1000)
truth = np.random.binomial(1, preds, len(preds))
# Calculate the PR curve in sklearn
#
precision, recall, thresholds = precision_recall_curve(truth, preds)
# Loop over various thresholds to calculate the PR curve ourselves
#
precision_self = []
recall_self = []
for value in thresholds:
    # Assign categorical predictions according to the side of the threshold on which
    # the prediction falls:
    #   Category 0 if < threshold
    #   Category 1 otherwise
    #
    categories = [0 if item < value else 1 for item in preds]
    # Calculate the precision and recall scores
    #
    precision_self.append(precision_score(truth, categories))
    recall_self.append(recall_score(truth, categories))
# Plot it out to see that we get the same curve either way
#
plt.plot(recall, precision, label = "sklearn", linewidth = 7)
plt.plot(recall_self, precision_self, label = "self", linewidth = 3)
plt.xlabel("Recall")
plt.ylabel("Precision")
plt.legend()
plt.show()
plt.close()
A critical piece to keep in mind is that many of these machine learning models called "classifiers" do not do classification on their own. They predict on a continuum, and then some decision rule can, but does not have to and arguably should not, convert those predictions on a continuum to categories, and there are many possible decision rules, each of which has its own recall and precision values. | {
"domain": "dsp.stackexchange",
"id": 12448,
"tags": "image-segmentation, classification, machine-learning"
} |
Why do we seek to maximise $F_{p}(\gamma, \beta)=\langle\gamma, \beta|H_{C}| \gamma, \beta\rangle$? | Question: The question is simple: why do we seek to maximise $F_{p}(\gamma, \boldsymbol{\beta})=\langle\gamma, \boldsymbol{\beta}|H_{C}| \gamma, \boldsymbol{\beta}\rangle$? How does maximising this value correspond to finding the groundstate value of $H_{C}$ ? Why does this optimum value of $H_{C}$ correspond to a set of $ \gamma, \boldsymbol{\beta} $ that maximise our chances of getting the optimum solution?
We try to minimise the value of $\langle \gamma,\beta|H_{C}| \gamma, \beta\rangle$ as this gives us the ground state, as pointed out in this answer.
The Fahri paper says we seek to maximise this value. Other sources try to minimise it: are these just equivalent attempts with a minus sign stuck in front?
I know that when we encode the problem Hamiltonian, we do this in a way that the ground state corresponds to the optimum solution. Why then has it been found that the set of $\gamma,\beta$ that gives the best probability of success does not always correspond to the maximum value of $H_{C}$?
The image is taken from https://arxiv.org/abs/1907.02359.
Answer: Whether it is minimize or maximize, it is indeed the same up to a minus sign.
The reason we optimize this expression is that the original theory tells you that, if you could have infinite depth ($p \rightarrow \infty$), this expression converges to the ground state. Think of it as a probability distribution over all possible bitstrings that you drive with optimization, whose expectation value gets closer to the optimal value while the variance decreases.
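A toy numerical illustration of that expectation value (all numbers here are made up): for a diagonal cost Hamiltonian, the objective is just the probability-weighted average of the bitstring costs.

```python
import numpy as np

# Hypothetical costs of the four bitstrings 00, 01, 10, 11 for a diagonal H_C,
# and hypothetical amplitudes of a state |gamma, beta>; both are made-up numbers.
costs = np.array([0.0, 1.0, 1.0, 2.0])
amps = np.array([0.1, 0.2, 0.2, 0.95])
amps = amps / np.linalg.norm(amps)   # normalize the state

probs = np.abs(amps) ** 2            # measurement probabilities per bitstring
F = float(probs @ costs)             # <gamma,beta| H_C |gamma,beta>
print(F)                             # close to 2.0, since the state concentrates on 11
```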
But in practice, nothing prevents you from optimizing a different objective (for instance maximizing the expectation over a quantile of measurements giving you the best values). Check this paper for instance. Then you will have different angles $\gamma, \beta$ for this objective.
In fact, optimizing $F_{p}(\gamma, \boldsymbol{\beta})$ means optimizing the average value, so you are asking for good performance on average, and in the process you may privilege bitstrings with good values rather than the optimal one(s), because that gives a better average output. | {
"domain": "quantumcomputing.stackexchange",
"id": 2588,
"tags": "quantum-algorithms, qaoa"
} |
Find all integer points that lie in a 3-ball with a given radius | Question: How can I efficiently find all lattice points in the cubic lattice $\mathbb{Z}^3$ (that is to say, all integer points in 3-space) that lie in a closed ball of radius $R$ centred at the origin?
Essentially,
Let $dist(p)$ be a function denoting the Euclidean distance between a point in $n$-space and the origin of that space, so $dist(p)=\sqrt{p_1^2+p_2^2+p_3^2+\ldots+p_n^2}$.
How might I efficiently iterate over $\{p \in \mathbb{Z}^3 \mid dist(p) \le R\}$?
I'm aware that this is trivial to do in $O(R^3)$ time by iterating over all lattice points that lie inside the minimum bounding box of the ball and filtering out every point $p$ where $dist(p) > R$, and I'm also aware that this can be optimised by squaring both sides of the distance inequality, but this algorithm is still too slow for my needs.
Answer: One straightforward approach is to iterate over $x,y$ in the bounding box, then find $z_\max$ so that $(x,y,z)$ is in the ball iff $|z| \le z_\max$. You can find $z_\max$ via the formula
$$z_\max = \left\lfloor \sqrt{R^2 - x^2 - y^2} \right\rfloor.$$
If you can compute squares and square roots in constant time, then the running time is proportional to the number of lattice points in the ball, i.e., you iterate over about $\frac43 \pi R^3$ points.
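A direct Python sketch of this first approach; math.isqrt gives the exact integer floor of the square root, which avoids floating-point edge cases near the boundary:

```python
import math

def lattice_points_in_ball(R):
    # All integer points (x, y, z) with x^2 + y^2 + z^2 <= R^2.
    points = []
    for x in range(-R, R + 1):
        for y in range(-R, R + 1):
            rem = R * R - x * x - y * y
            if rem < 0:
                continue                 # (x, y) lies outside the disk
            z_max = math.isqrt(rem)      # floor(sqrt(R^2 - x^2 - y^2))
            for z in range(-z_max, z_max + 1):
                points.append((x, y, z))
    return points

print(len(lattice_points_in_ball(2)))    # 33 lattice points for R = 2
```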
A similar but slightly better algorithm is to iterate over $x,y$ in the bounding box, then iterate over increasing values of $z$ (i.e., $z=0,1,2,\dots$) until $z^2 > R^2 - x^2 - y^2$. This is a little more efficient, because it doesn't require computing square roots, and because you can use the identity
$$(z+1)^2 = z^2 + 2z + 1$$
to avoid the need to compute squares in the inner loop either, instead updating the value of $z^2$ in each iteration. (Make sure to add in the negative values of $z$ too.) | {
"domain": "cs.stackexchange",
"id": 19505,
"tags": "algorithms, computational-geometry"
} |
Display messages from rosbag | Question:
Hey,
I want to display the data in some messages from different rosbags. I don't know what kind of messages each rosbag has, but I want to be able to check that, and then choose which message to display in a graph. My plan is to choose the message from a GUI which I will design in Qt. But I have a problem with reading the data from rosbag messages without knowing exactly what every rosbag contains. Does anyone know how to approach this? (I am new to ROS, as you might have guessed, so please keep it simple.)
Originally posted by roger on ROS Answers with karma: 1 on 2017-08-25
Post score: 0
Original comments
Comment by gvdhoorn on 2017-08-25:
What you describe sounds very much like rqt_bag, rqt_multiplot and plotjuggler. Perhaps you could look at those and see how they do it - if you cannot use those.
Answer:
rqt_bag and rqt_plot function almost exactly as your proposed solution. Here is a simple tutorial to view bag files and rqt_plot has a similar tutorial.
Originally posted by KwanFace with karma: 26 on 2017-08-25
This answer was ACCEPTED on the original site
Post score: 0 | {
"domain": "robotics.stackexchange",
"id": 28700,
"tags": "c++, rosbag, qt"
} |
How does positive charge spread out in conductors? | Question: I know that when there are excess positive charges in a conductor, for example, a metal sphere, the positive charges will spread out over its surface. However, I am confused about how this excess charge spreads out over the surface, if protons cannot move and only electrons can move.
Can someone please inform me on how the excess positive charge spreads out over the surfaces of conductors?
Answer: Physically what is happening is this:
When you touch the positively charged source to the conductor (the metal sphere), electrons leave the conductor through the point of contact.
This leaves the point of contact on the conductor with a large deficit of electrons, and thus the point has a positive charge density.
The positive charge density produces an electric field in the conductor, which immediately pulls on remaining electrons in the conductor.
The electrons remaining spread out until they have eliminated all of the electric fields in the conductor (if there were remaining fields, the electrons would continue to rearrange).
The electrons will now be 'more spread out' than the protons; the difference between the new electron surface density and the original tells you the distribution of 'excess positive charge' on the surface.
I hope this helps; let me know if you have an application in mind for this. I oftentimes find it helpful, when thinking about problems, to temporarily ignore the fact that in practice there is only one charge carrier (the electron) and just think of the excess positive charge as positively charged particles spreading out.
"domain": "physics.stackexchange",
"id": 6785,
"tags": "electrostatics, charge, metals, conductors"
} |
What is an intuitive explanation for the log loss cost function? | Question: I would really appreciate if someone could explain the log loss cost function And the use of it in measuring a classification model performance.
I have read a few articles but most of them concentrate on mathematics and not on intuitive explanation and also a basic implementation using python with a small dataset would really helpful, So I can understand it better.
It would really help many looking for the same here. Thanks.
Answer: Ok, so here is how it works. Say you want to classify animals, and you have cats, dogs and birds. This means that your model will output 3 units in the form of a vector (call it a list if you prefer).
Each element of the list represents an animal, so for example:
Position 0 represents how likely the input is to be a cat
Position 1 represents how likely the input is to be a dog
Position 2 represents how likely the input is to be a bird.
Now imagine you get an input that is a bird, in a happy world, your algorithm should output a vector like
[0, 0, 1]
That is, the input has 0% chance of being a cat, a 0% chance of being a dog and a 100% chance of being a bird.
In reality, this is not so simple, and most likely your output would be something like this:
[0.15, 0.1, 0.75]
This means a 15% chance of being a cat, a 10% chance of being a dog and a 75% chance of being a bird. Now, notice that this means your algorithm will still consider the input a bird, so in terms of classification the output would be correct... but it would not be as correct as if it had predicted a 100% chance of a bird.
So, the intuition is that the logloss measures how far away you are from perfection, where perfection would be identifying the correct label with a 100% chance and the incorrect labels with a 0% chance.
Final word of advice: Do NOT be afraid of math, you will really need to get the grasp of it at some point, do not let the sum terms intimidate you, after all they just represent loops in programming.
UPDATE
Let's dive into the math, specially to demystify it.
The logloss formula is given by
$$ LogLoss = - \frac{1}{n} \sum\limits_{i=1}^n [y_i \cdot log(\hat{y_i}) + (1-y_i) \cdot log(1-\hat{y_i}) ] $$
Where
$n$ represents the number of examples; in our case I will use 2 examples.
$y_i$ represents the correct answer for example $i$
$\hat{y}_i$ represents our prediction for example $i$
So, for our example we have two examples (remember that the correct answers are denoted by $y_i$), which are
[0, 1, 0] # Example 1: This means the correct answer is dog
[1, 0, 0] # Example 2: This means the correct answer is cat
Now, let's go for our predictions. Remember that the predictions are denoted by $\hat{y}$; let's say they are
[0.1, 0.6, 0.3]
[0.85, 0.05, 0.1]
And let's apply the scary formula here. First notice that $\sum_{i=1}^n$ just means summing all the terms from $i=1$ to $n$; in our case, $n$ is the number of examples, and we have two of those.
So, for $i=1$ we have
$[y_i \cdot log_e(\hat{y_i}) + (1-y_i) \cdot log_e(1-\hat{y_i}) ]$
For $i=1$
term1 = [0, 1, 0] * log([0.1, 0.6, 0.3]) + (1-[0, 1, 0]) * log(1 - [0.1, 0.6, 0.3])
For $i=2$
term2 = [1, 0, 0] * log([0.85, 0.05, 0.1]) + (1-[1, 0, 0]) * log (1-[0.85, 0.05, 0.1])
And finally we have
log_loss = (-1/2) * (term1 + term2)
Using sklearn's log_loss, the answer is approximately 0.3367.
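As a quick numpy check of that figure: for one-hot labels, the multiclass log loss reduces to the average negative log of the probability assigned to the true class, which is what sklearn's log_loss computes for this input.

```python
import numpy as np

y_true = np.array([[0, 1, 0],       # example 1: the correct answer is dog
                   [1, 0, 0]])      # example 2: the correct answer is cat
y_pred = np.array([[0.10, 0.60, 0.30],
                   [0.85, 0.05, 0.10]])

# Average of -log(predicted probability of the true class) over the examples
log_loss = float(-np.mean(np.log(y_pred[y_true == 1])))
print(log_loss)                     # about 0.3367
```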
Now, do not get too lost in the math here; just notice that it is not THAT hard, and also understand that the loss function essentially tells you how wrong you are, or, if you prefer, it measures the "distance" from perfection. I strongly recommend that you code the log loss yourself (numpy is normally a good option for doing so :) | {
"domain": "datascience.stackexchange",
"id": 4385,
"tags": "machine-learning, python, cost-function"
} |
Magabot ROS Integration | Question:
Hey all,
I've the following Arduino based educational robot:
magabot.cc
which comprises the following specs:
differential drive system w/ encoders on both wheels
2x front bumpers
5x sonar sensors on the front
And I'd like to integrate it with ROS environment. I've already put a laptop on top running ROS and communicating with the robot through rosserial Arduino package and now I'd like to setup it with the 2D Navigation Stack by adding a laser scanner.
While reading this tutorial - Setup and Configuration of the Navigation Stack on a Robot - some questions arise:
Which TF broadcasters should I implement? (I thought of a laser scanner TF + a Sonar TF (group or individual for each sensor?), maybe a bumper TF too?!)
How about the odometry TF? What kind of transformation is correct, since both wheels are parallel to the robot? Just translation? Or maybe rotation too?
Should all TF broadcasting msgs be published by a single node or different nodes for each frame?!
Regarding sensors and particularly sonar, should I publish their information like a sensor_msgs/PointCloud or sensor_msgs/LaserScan?
Sorry for so many questions and thank you very much in advance. ;)
Originally posted by rflmota on ROS Answers with karma: 11 on 2013-11-25
Post score: 0
Answer:
Provide tf information for base_link to any sensor that you have connected. Depending on how far you'd like to go, writing an URDF might be easier (in that case robot_state_publisher will do tf for you).
Provide /odom -> /base_link which is the full pose (translation and rotation).
Does not matter.
Depends on what fits the sensor better. LaserScans will be interpreted as a range sensor with beams that are equally spaced (by angles) and provide measurements for all angles.
Originally posted by dornhege with karma: 31395 on 2013-11-25
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 16262,
"tags": "navigation, transform"
} |
Fixed point conversions | Question: Have following questions about Fixed point conversion format.
Suppose I want to change Fixed Point format as
X1(m1,n1) → X2(m2,n2), with m1+n1 = m2+n2 [word length the same] and n2 > n1 [higher fraction length]
Suppose X1 = 00001100, and suppose I compute X1 << 3 (i.e., multiply by $2^3 = 8$).
Q1) If I interpret X1 as a pure signed integer, then X1 = 12 and the above operation means multiplying by 2^(n2-n1) = 8, so that X2 = 01100000 = 96. (The decimal point is irrelevant here?) Is this interpretation correct?
Or, assuming fixed-point notation, the above means
X1(m1,n1) → X2(m1-k, n1+k), with m1+n1 = m2+n2 [word length the same], n2 > n1 [higher fraction length] and k = n2-n1.
For the given example, this means X1(3,5) = 000.01100 (0.375) → X2(0,8) = .01100000 (0.375),
so the value remains the same.
Q2) Can we always interpret this as the decimal point staying fixed and the bits moving left? Is this a logical shift or an arithmetic shift?
Q3) Ideally, we would like to preserve the resolution and the range covered. Under what condition would this be guaranteed for the above shift? For example, consider the following conversion:
X1(3,5) = 101.01100 (5.375) → X2(0,8): 101.01100 << 3 = .01100000 = 0.375, so we have an error.
Q4) Is this rounding or truncation, and why?
Consider X1(m1,n1) = 000...0.abc...z [the m1 integer bits all zero]. Then X1 << k, giving X1(m1-k, n1+k) with k <= m1,
would not change the value, as the bits to the left and right of the decimal point remain the same.
Q5) So, if fixed-point fractional notation is assumed (a positive fraction with magnitude < 1), do left shifts smaller than the integer word length preserve the fractional value?
Suppose I have a fixed-point number X1(m1,n1) with word length m1+n1, and I want to express it in X2(m2,n2) format so that the word length is reduced, the fraction length is increased and the integer word length is reduced, but the value remains the same. For example:
X1(3,5) = 000.01100(0.375) -> X2(1,6)
Steps for the above could be:
Left shift by n2-n1 bits, so that 000 011 00 << 1 = 000 110 00.
Then either keep the least significant m2+n2 bits, so that X2 = 0.011000 (0.375), or keep the most significant m2+n2 bits, so that X2 = 0.001100 (0.1875).
Q6) If I were writing a routine to do this conversion automatically (say this was a sum result that needed to be converted to an output (m2,n2) format), what decides whether I keep the LS or MS bits, and whether I do a left or right shift?
Suppose I have a fixed-point number X1(m1,n1) with word length m1+n1, and I want to express it in X2(m2,n2) format so that the word length is increased and the fraction length is increased, but the value remains the same. For example:
X1(3,5) = 100.01100(-3.625) -> X2(3,13)
and X1(3,5) = 100.01100(-3.625) -> X2(1,11)
Q7) How can I go about that? What would the steps be if the result required an increase in both the word length and the integer word length?
Suppose I need to convert a fixed-point number as per
X1(m+1,n) → X2(m,n) [word length reduced, fraction length the same, integer word length reduced]
One of the references I found says do following :
Compute (X1) & (0xff..e) and keep the MS m2+n2 bits.
With X1(1,4) = 0.1011 (0.6875) this becomes 0.101 (0.625).
Another says: do a left shift by 1 and keep the MS m2+n2 bits.
With X1(1,4) = 1.011(0.875) this becomes 1.011(0.625)
Obviously, I don't preserve the value.
Q8) Is there a correct way to do it and the logic behind it?
I know these are a lot of questions, but it will help me clear up a lot of confusing stuff.
Thanks,
sedy
Answer: As mentioned in the comments, your question is pretty confusing and confused, so I might not be able to answer all your smaller questions, but I'll try to shed some light on the topic. This should help you understand what's going on, and hopefully this understanding will help you answer all further questions by yourself.
In the following I assume two's complement representation. Note that if $M$ is the word length, and $n$ is the number of fractional bits, a number $x$ is represented as
$$x=K\cdot 2^{-n}\tag{1}$$
where $K$ is an integer in the range $[-2^{M-1},2^{M-1}-1]$. Now if you want to change the number of fractional bits to $m$, you have
$$x=L\cdot 2^{-m}\tag{2}$$
Combining (1) and (2) gives
$$L=K\cdot 2^{m-n}\tag{3}$$
If $m>n$, $K$ needs to be left-shifted by $m-n$ bits to obtain $L$. This means that you lose range (i.e. the maximum number that can be represented becomes smaller), so there is a possibility of overflow. If $m<n$, $L$ is obtained from $K$ by right-shifting $n-m$ bits, but for two's complement you need to replicate the sign bit, i.e. if the original sign bit was set, the new right-shifted number must also have the sign bit set. For $m<n$ you can lose precision because the resolution is reduced.
An example makes things easier to see. Let $M=6$, i.e. the integer $K$ is in the range $[-32,31]$. Let the number of fractional bits be $n=3$. E.g.
$$x=\texttt{101.011}=(-2^{5}+2^{3}+2^{1}+2^{0})\cdot 2^{-3}=-21\cdot 2^{-3}=-2.625$$
If we want to change the number of fractional bits to $m=2$, you need to do an arithmetic right-shift, resulting in
$$y=\texttt{1101.01}=-11\cdot 2^{-2}=-2.75\neq x$$
i.e. precision is lost. If instead we want $m=4$ fractional bits we need to do a left-shift resulting in overflow:
$$y=\texttt{01.0110}=22\cdot 2^{-4}=1.375\neq x$$
This is clear because by increasing resolution we have decreased the range of possible numbers to $[-32,31]\cdot 2^{-4}=[-2,1.9375]$, which makes it impossible to represent the original value $x=-2.625$.
There are of course examples where you would neither lose precision nor would you get overflow. E.g. (again with $M=6$ and $n=3$)
$$x=\texttt{001.010}=10\cdot 2^{-3}=1.25$$
For $m=2$ we get
$$y=\texttt{0001.01}=5\cdot 2^{-2}=1.25 = x$$
i.e. no loss in precision, and for $m=4$ we get
$$y=\texttt{01.0100}=20\cdot 2^{-4}=1.25=x$$
i.e. no overflow.
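These examples are easy to reproduce in a short Python sketch (two's complement, word length held fixed while the number of fractional bits changes):

```python
def to_signed(bits, M):
    # Interpret an M-bit pattern as a two's-complement integer.
    return bits - (1 << M) if bits & (1 << (M - 1)) else bits

def change_frac_bits(K, M, n, m):
    # Re-express the fixed-point value K * 2^-n (word length M bits)
    # with m fractional bits instead of n, keeping M fixed.
    if m >= n:
        # Left shift: bits falling off the top are lost (possible overflow).
        L = to_signed((K << (m - n)) & ((1 << M) - 1), M)
    else:
        # Arithmetic right shift: low bits are lost (possible precision loss).
        L = K >> (n - m)
    return L * 2.0 ** -m

K = to_signed(0b101011, 6)            # x = 101.011 = -2.625 (M = 6, n = 3)
print(change_frac_bits(K, 6, 3, 2))   # -2.75: precision lost
print(change_frac_bits(K, 6, 3, 4))   # 1.375: overflow
```

Note that Python's >> on negative integers is already an arithmetic (sign-extending) shift, which is exactly the behavior needed for two's complement.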
I hope that this addressed at least some of your questions and that things have become a bit clearer. If you haven't done it yet, you should check out the basics here, here, and here. | {
"domain": "dsp.stackexchange",
"id": 2467,
"tags": "fixed-point"
} |
How do we translate a two particle system in bra-ket notation into a wavefunction as a function of the two particle positions? | Question: Consider the two particle system given by the following bra-ket notation
$$| \psi _1 , \psi _2 \rangle $$
where $\psi_1, \psi_2$ each describe a particle. I then want to apply the projector $\langle x \rvert$ - or some other projector, to find $\psi (x_1, x_2 )$.
Is the following true:
$$\langle x |\psi_1 , \psi _2 \rangle = \psi (x_1, x_2 ) \, ,$$
or do I need two projectors $\langle x_1 \rvert$ and $\langle x_2 \rvert$, or am I horribly off base with any of this?
Answer: The vector you have presented is a direct product of vectors from two separate Hilbert spaces. It is of the form: $$| {\psi_1,\psi_2}\rangle= |{\psi_1}\rangle\otimes |{\psi_2}\rangle$$
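A finite-dimensional numpy toy model of this product structure (the two-component "wavefunctions" are made-up amplitudes): projecting the Kronecker product onto a product basis vector picks out $\psi_1(x_1)\psi_2(x_2)$.

```python
import numpy as np

# Toy two-level example with made-up amplitudes: the two-particle state is the
# Kronecker (tensor) product of the single-particle state vectors.
psi1 = np.array([1.0, 2.0])
psi2 = np.array([3.0, 5.0])
psi12 = np.kron(psi1, psi2)            # |psi1> ⊗ |psi2>

# The component of the product state at (x1, x2) is psi1[x1] * psi2[x2],
# i.e. projecting onto |x1> ⊗ |x2> factorizes the wavefunction.
x1, x2 = 1, 0
print(psi12.reshape(2, 2)[x1, x2])     # 6.0 == psi1[1] * psi2[0]
```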
Thus, naturally, any basis you'd want to express them in must also be a product of bases of the two Hilbert spaces. As such: $$|{x_1,x_2}\rangle= |{x_1}\rangle\otimes |{x_2}\rangle$$ | {
"domain": "physics.stackexchange",
"id": 66528,
"tags": "quantum-mechanics, wavefunction"
} |
MVC app to associate users with roles | Question: I'm a beginner to web programming and just started a MVC project from scratch. Because this will become a large project eventually, I would like to make sure that I'm doing things kind of right from the beginning.
The architecture is the following: ASP.NET 4.6, MVC 5, EF 6, Identity 2.0. I'm using EF Database First approach and Bootstrap 3.3.5. The solution is divided into 3 projects: Data (where I keep my .edmx and model classes), Resources (where I keep strings for localization purposes -and eventually images-), and Web (with my controllers, views, etc).
I'm going to point out a couple of examples in my code where I'm not sure about my approach. I have a navigation bar with an "Administration" link and two submenu links, "Users" and "Roles".
Users
When a user clicks on "Users", I'd like to show a table with four columns:
Username
Roles (string with the names of all roles assigned)
Assign Role (button that will take you to another form)
Remove Role (button that will take you to another form)
This is what the UserIndex.cshtml view looks like:
@model List<MySolution.Data.DAL.ApplicationUser>
@{
ViewBag.Title = Resources.Users;
}
<h2>@Resources.Users</h2>
<hr />
<table class="table table-striped table-hover ">
<thead>
<tr>
<th>@Resources.User</th>
<th>@Resources.Roles</th>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
@foreach (var user in Model)
{
<tr>
<td>
@user.UserName
</td>
<td>
@user.DisplayRoles()
</td>
<td>
@using (Html.BeginForm("UserAssignRole", "Admin", new { ReturnUrl = ViewBag.ReturnUrl }, FormMethod.Get, new { @class = "form-horizontal", role = "form" }))
{
@Html.AntiForgeryToken()
@Html.HiddenFor(m => m.Where(u => u.Id.Equals(user.Id)).FirstOrDefault().UserName)
<input type="submit" value="@Resources.AssignRole" class="btn btn-default btn-sm" />
}
</td>
<td>
@using (Html.BeginForm("UserRemoveRole", "Admin", new { ReturnUrl = ViewBag.ReturnUrl }, FormMethod.Get, new { @class = "form-horizontal", role = "form" }))
{
@Html.AntiForgeryToken()
@Html.HiddenFor(m => m.Where(u => u.Id.Equals(user.Id)).FirstOrDefault().UserName)
<input type="submit" value="@Resources.RemoveRole" class="btn btn-default btn-sm" />
}
</td>
</tr>
}
</tbody>
</table>
I added a DisplayRoles() method to my ApplicationUser class that returns a string with the list of assigned roles separated by commas, so that I can plug it directly into the user table in my view. I'm not sure at all about this approach; it does work, but putting logic like that in my model just seems kind of weird. I just haven't figured a better way to do this.
Then, on my controller, I have the following:
//
// GET: /Admin/UserIndex
[Authorize(Roles = "Admin")]
public ActionResult UserIndex()
{
var users = context.Users.ToList();
return View(users);
}
//
// GET: /Admin/UserAssignRole
[HttpGet]
//[ValidateAntiForgeryToken]
public ActionResult UserAssignRole(UserAssignRoleViewModel vm)
{
ViewBag.Username = vm.Username;
ViewBag.Roles = context.Roles.OrderBy(r => r.Name).ToList().Select(rr => new SelectListItem { Value = rr.Name.ToString(), Text = rr.Name }).ToList();
return View("UserAssignRole");
}
//
// POST: /Admin/UserAssignRole
[HttpPost]
//[ValidateAntiForgeryToken]
[ActionName("UserAssignRole")]
public ActionResult UserAssignRolePost(UserAssignRoleViewModel vm)
{
ApplicationUser user = context.Users.Where(u => u.UserName.Equals(vm.Username, StringComparison.CurrentCultureIgnoreCase)).FirstOrDefault();
this.UserManager.AddToRole(user.Id, vm.Role);
return RedirectToAction("UserIndex");
}
With my UserAssignRoleViewModel looking like this:
/// <summary>
/// Views\Admin\UserAssignRole.cshtml
/// </summary>
public class UserAssignRoleViewModel
{
[Display(Name = "Username", ResourceType = typeof(Resources))]
public string Username { get; set; }
[Display(Name = "Role", ResourceType = typeof(Resources))]
public string Role { get; set; }
}
And the UserAssignRole view being this:
@model UserAssignRoleViewModel
@{
ViewBag.Title = Resources.AssignRole;
}
<h2>@Resources.AssignRole</h2>
<hr />
<div class="row">
<div class="col-md-8">
<section id="assignRoleForm">
@using (Html.BeginForm("UserAssignRole", "Admin", new { ReturnUrl = ViewBag.ReturnUrl }, FormMethod.Post, new { @class = "form-horizontal", role = "form" }))
{
@Html.AntiForgeryToken()
@Html.ValidationSummary(true, "", new { @class = "text-danger" })
<div class="form-group">
@Html.LabelFor(m => m.Username, new { @class = "col-md-2 control-label" })
<div class="col-md-10">
@Html.TextBoxFor(m => m.Username, new { @class = "form-control" , @readonly = "readonly" })
@Html.ValidationMessageFor(m => m.Username, "", new { @class = "text-danger" })
</div>
</div>
<div class="form-group">
@Html.LabelFor(m => m.Role, new { @class = "col-md-2 control-label" })
<div class="col-md-10">
@Html.DropDownListFor(m => m.Role, (IEnumerable<SelectListItem>)ViewBag.Roles, Resources.DropdownSelect, new { @class = "form-control" })
</div>
</div>
<div class="form-group">
<div class="col-md-offset-2 col-md-10">
<input type="submit" value="@Resources.Assign" class="btn btn-default" />
</div>
</div>
}
</section>
</div>
</div>
I am especially unsure about the way I use my controller actions and how I call them from my forms. And does it make sense to have a GET and a POST method for the same action, or should I be doing something else?
Roles
The "Roles" section is very similar, with a table with three columns:
Name
Edit (button that will take you to another form to rename the role)
Delete button (button that will show a modal asking for verification)
On top of the table, there's a separate button allowing the user to add a new role.
Here's my RoleIndex.cshtml view.
@model IEnumerable<Microsoft.AspNet.Identity.EntityFramework.IdentityRole>
@{
ViewBag.Title = Resources.Roles;
}
<h2>@Resources.Roles</h2>
<hr />
@using (Html.BeginForm("RoleCreate", "Admin", new { ReturnUrl = ViewBag.ReturnUrl }, FormMethod.Get, new { @class = "form-horizontal", role = "form" }))
{
<input type="submit" value="@Resources.CreateRole" class="btn btn-default btn-sm" />
}
<hr />
<table class="table table-striped table-hover ">
<thead>
<tr>
<th>@Resources.Role</th>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
@foreach (var role in Model)
{
<tr>
<td>
@role.Name
</td>
<td>
@using (Html.BeginForm("RoleEdit", "Admin", new { ReturnUrl = ViewBag.ReturnUrl }, FormMethod.Get, new { @class = "form-horizontal", role = "form" }))
{
@Html.AntiForgeryToken()
@Html.HiddenFor(m => m.Where(r => r.Id.Equals(role.Id)).FirstOrDefault().Name)
<input type="submit" value="@Resources.Edit" class="btn btn-default btn-sm" />
}
</td>
<td>
<input type="submit" value="@Resources.Delete" class="btn btn-default btn-sm" data-toggle="modal" data-target="#confirm-delete"/>
<div class="modal fade" id="confirm-delete" tabindex="-1" role="dialog" aria-labelledby="myModalLabel" aria-hidden="true">
<div class="modal-dialog">
<div class="modal-content">
<div class="modal-header">
@Resources.DeleteRole
</div>
<div class="modal-body">
@Resources.AreYouSureYouWantToDelete
</div>
<div class="modal-footer">
@using (Html.BeginForm("RoleDelete", "Admin", new { ReturnUrl = ViewBag.ReturnUrl }, FormMethod.Post, new { @class = "form-horizontal", role = "form" }))
{
@Html.AntiForgeryToken()
@Html.HiddenFor(m => m.Where(r => r.Id.Equals(role.Id)).FirstOrDefault().Name)
<button type="button" class="btn btn-default" data-dismiss="modal">@Resources.Cancel</button>
<input type="submit" value="@Resources.Delete" class="btn btn-danger btn-ok" />
}
</div>
</div>
</div>
</div>
</td>
</tr>
}
</tbody>
</table>
Here's my RoleCreateViewModel
/// <summary>
/// Views\Admin\RoleCreate.cshtml
/// </summary>
public class RoleCreateViewModel
{
[Required]
[Display(Name = "Name", ResourceType = typeof(Resources))]
public string Name { get; set; }
}
and RoleCreate actions
//
// GET: /Admin/RoleCreate
[HttpGet]
[Authorize(Roles = "Admin")]
public ActionResult RoleCreate()
{
return View();
}
//
// POST: /Admin/RoleCreate
[HttpPost]
[Authorize(Roles = "Admin")]
public ActionResult RoleCreate(RoleCreateViewModel vm)
{
context.Roles.Add(new IdentityRole()
{
Name = vm.Name
});
context.SaveChanges();
ViewBag.ResultMessage = Resources.RoleCreatedSuccessfully;
return RedirectToAction("RoleIndex");
}
and RoleCreate.cshtml view
@model RoleCreateViewModel
@{
ViewBag.Title = Resources.CreateRole;
}
<h2>@Resources.CreateRole</h2>
<hr />
<div class="row">
<div class="col-md-8">
<section id="createRoleForm">
@using (Html.BeginForm("RoleCreate", "Admin", new { ReturnUrl = ViewBag.ReturnUrl }, FormMethod.Post, new { @class = "form-horizontal", role = "form" }))
{
@Html.AntiForgeryToken()
@Html.ValidationSummary(true)
<div class="form-group">
@Html.LabelFor(m => m.Name, new { @class = "col-md-2 control-label" })
<div class="col-md-10">
@Html.TextBoxFor(m => m.Name, new { @class = "form-control" })
@Html.ValidationMessageFor(m => m.Name, "", new { @class = "text-danger" })
</div>
</div>
<div class="form-group">
<div class="col-md-offset-2 col-md-10">
<input type="submit" value="@Resources.Save" class="btn btn-default" />
</div>
</div>
}
</section>
</div>
</div>
Please do critique.
Answer: Well, there's a lot there. I'll give some feedback on the Users bit.
I wouldn't call DisplayRoles in cshtml either. I would use a view model for that page. It would have an int, UserId, and 2 strings, UserName and UserRoles, and the page would use a list or ienumerable of that view model. Then in your Get for the Index, create the view model from each user and build up the collection. Pretty straightforward using LINQ.
For the 2 buttons in your table, can't you just use ActionLink? Yes it makes sense to have Get and Post for same action. But your Get for assigning roles would just take the user Id. Your Post would accept your view model, then you don't have to change the name in order to make it unique. | {
"domain": "codereview.stackexchange",
"id": 16074,
"tags": "c#, html, mvc, asp.net, authorization"
} |
Several simple calculators on one page | Question: For my students (I am an Austrian math teacher) I would like to provide some simple online calculators on my blog.
Let's say there are two pages on my website ("Circle" and "Cube") and on every page I want to have about 10 different calculators.
Here is an example of one page with two calculators:
My code
JSFiddle
HTML
<h1>Circle</h1>
<h2>Calculate circumference of circle</h2>
<p>
Radius of circle: <input type="text" id="calc1-input"> <button onclick="calc1()">Calculate</button>
</p>
<p>
Circumference of circle: <span id="calc1-output"></span>
</p>
<h2>Calculate area of circle</h2>
<p>
Radius of circle: <input type="text" id="calc2-input"> <button onclick="calc2()">Calculate</button>
</p>
<p>
Area of circle: <span id="calc2-output"></span>
</p>
Javascript
function commaToDot(number) {
return parseFloat(number.replace(',','.').replace(' ',''));
}
function dotToComma(number){
return number.toString().replace('.',',');
}
function calc1() {
var input = document.getElementById("calc1-input").value;
input = commaToDot(input);
if (isNaN(input)) {
document.getElementById('calc1-output').innerHTML = "<b>Please enter a number!</b>";
} else {
var output = 2 * Math.PI * input;
document.getElementById('calc1-output').innerHTML = dotToComma(output);
}
}
function calc2() {
var input = document.getElementById("calc2-input").value;
input = commaToDot(input);
if (isNaN(input)) {
document.getElementById('calc2-output').innerHTML = "<b>Please enter a number!</b>";
} else {
var output = Math.PI * Math.pow(input,2);
document.getElementById('calc2-output').innerHTML = dotToComma(output);
}
}
Explanation
Input: In my country we use commas as decimal separators.
So the students write 55,5 instead of 55.5 (see commaToDot()).
Output: The students expect the output to be with comma decimal separator, too (see dotToComma()).
Error: If the input is not a number (if (isNaN(input))) an error message will be displayed instead of the calculation result.
Questions
I see a lot of redundancy in my code but I don't know how to improve it.
The only difference between calc1() and calc2() is the formula.
Maybe it is possible to improve the naming of the variables/functions?
Answer: calc1 and calc2 are really bad names. Names should always clearly describe what they are for or do. calculateCircleCircumference and calculateCircleArea would be better
choices.
I'm not a big fan of the names commaToDot and dotToComma either. I'd prefer more conceptual names such as parseNumber and formatNumber, maybe even parseGermanNumber and formatGermanNumber.
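To make the rename concrete, here is a sketch of the two helpers under those names (same behavior as the original functions, only renamed):

```javascript
// Parse a German-formatted number string ("55,5") into a JS number.
function parseGermanNumber(text) {
    return parseFloat(text.replace(',', '.').replace(' ', ''));
}

// Format a JS number back into German notation ("55,5").
function formatGermanNumber(number) {
    return number.toString().replace('.', ',');
}

console.log(parseGermanNumber('55,5'));  // 55.5
console.log(formatGermanNumber(3.14));   // "3,14"
```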
A slightly better HTML structure would be in order: Each calculator could be surrounded by a <form> element, and there is an <output> element specifically for displaying calculation results.
In current JavaScript one should prefer const and let over var.
The following suggestions may introduce concepts, that the students haven't learnt yet, but they are considered good conventions.
For a better separation of layout and logic it is usually suggested not to use on... event handler attributes, but instead assign them using addEventListener.
Also one should move all element lookups to the initialization. This could look like this (notice the changed IDs):
const circleCircumferenceRadiusInput = document.getElementById("circle-circumference-radius-input");
const circleCircumferenceOutput = document.getElementById('circle-circumference-output');
function calculateCircleCircumference() {
// Using separate variables so "const" can be used.
const inputString = circleCircumferenceRadiusInput.value;
const input = commaToDot(inputString);
if (isNaN(input)) {
circleCircumferenceOutput.innerHTML = "<b>Please enter a number!</b>";
} else {
const output = 2 * Math.PI * input;
circleCircumferenceOutput.innerHTML = dotToComma(output);
}
}
document.getElementById('circle-circumference-button').addEventListener("click", calculateCircleCircumference);
This can be simplified a bit by wrapping HTML of the calculator in an <form> element and referring to the needed elements by name. Also I'm wrapping the code in an initialization function to avoid lots of global variables. Both of these prepare for generalizing and reusing the code.
<form id="circle-circumference">
<h2>Calculate circumference of circle</h2>
<p>
Radius of circle: <input type="text" name="radius"> <button name="execute">Calculate</button>
</p>
<p>
Circumference of circle: <output name="output"></output>
</p>
</form>
function initCircleCircumferenceCalculator(form) {
const radiusInput = form.elements["radius"];
const outputElement = form.elements["output"];
function calculate() {
const inputString = radiusInput.value;
const input = commaToDot(inputString);
if (isNaN(input)) {
outputElement.innerHTML = "<b>Please enter a number!</b>";
} else {
const output = 2 * Math.PI * input;
outputElement.innerHTML = dotToComma(output);
}
}
form.elements["execute"].addEventListener("click", calculate);
}
initCircleCircumferenceCalculator(document.getElementById("circle-circumference"));
Now that we have the initialization function it's easier to generalize this so that it can be used for multiple calculators:
function initCalculator(form, inputName, calculationFunction) {
const inputElement = form.elements[inputName];
const outputElement = form.elements["output"];
function calculate() {
const inputString = inputElement.value;
const input = commaToDot(inputString);
if (isNaN(input)) {
outputElement.innerHTML = "<b>Please enter a number!</b>";
} else {
const output = calculationFunction(input);
outputElement.innerHTML = dotToComma(output);
}
}
form.elements["execute"].addEventListener("click", calculate);
}
// Circle circumference
initCalculator(document.getElementById("circle-circumference"), "radius", r => 2 * Math.PI * r);
// Circle area
initCalculator(document.getElementById("circle-area"), "radius", r => Math.PI * Math.pow(r, 2));
The next step could be modify initCalculator so that it supports multiple input fields. | {
"domain": "codereview.stackexchange",
"id": 38748,
"tags": "javascript, calculator"
} |
Internet Relay Chat bot core with plugins system | Question: I started learning Python by making an IRC bot, as it took some pains in another language. I've improved it now over time. As it involves networking, I'd also like some comments on that side.
# -*- coding: utf-8 -*-
import socket
import os
import importlib
plugins = []
class Bot_core(object):
def __init__(self,
server_url = 'chat.freenode.net',
port = 6667,
name = 'appinvBot',
owners = ['appinv'],
password = '',
friends = ['haruno'],
autojoin_channels = ['##bottestingmu']
):
self.server_url = server_url
self.port = port
self.name = name
self.owners = owners
self.password = password
self.autojoin_channels = autojoin_channels
self.friends = friends
'''
NORMAL ATTRIBUTES
'''
self.irc = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
self.isListenOn = 1
dom = self.server_url.split('.')
self.domain = '.'.join(dom[-2:])
self.sp_command = 'hbot'
self.plugins = []
'''
STRINGS
'''
def set_nick_command(self):
return 'NICK ' + self.name + '\r\n'
def present_command(self):
return 'USER '+self.name+' '+self.name+' '+self.name+' : '+self.name+' IRC\r\n'
def identify_command(self):
return 'msg NickServ identify ' + self.password + ' \r\n'
def join_channel_command(self, channel):
return 'JOIN ' + channel + ' \r\n'
def specific_send_command(self, target, msg):
return "PRIVMSG "+ target +" :"+ msg +"\r\n"
def pong_return(self):
return 'PONG \r\n'
def info(self, s):
def return_it(x):
if x == None:
return ''
else:
return x
try:
prefix = ''
trailing = []
address = ''
if not s:
print("Empty line.")
if s[0] == ':':
prefix, s = s[1:].split(' ', 1)
if s.find(' :') != -1:
s, trailing = s.split(' :', 1)
args = s.split()
args.append(trailing)
else:
args = s.split()
command = args.pop(0)
if '#' in args[0]:
address = args[0]
else:
address = prefix.split('!~')[0]
# return prefix, command, args, address
return {
'prefix':return_it(prefix),
'command':return_it(command),
'args':['' if e is None else e for e in args],
'address':return_it(address)
}
except Exception as e:
print('woops',e)
'''
MESSAGE UTIL
'''
def send(self, msg):
self.irc.send(bytes( msg, "UTF-8"))
def send_target(self, target, msg):
self.send(self.specific_send_command(target, msg))
def join(self, channel):
self.send(self.join_channel_command(channel))
'''
BOT UTIL
'''
def load_plugins(self, list_to_add):
try:
to_load = []
with open('PLUGINS.conf', 'r') as f:
to_load = f.read().split('\n')
to_load = list(filter(lambda x: x != '', to_load))
for file in to_load:
module = importlib.import_module('plugins.'+file)
Plugin = getattr(module, 'Plugin')
obj = Plugin()
list_to_add.append(obj)
except ModuleNotFoundError as e:
print('module not found', e)
def methods(self):
return {
'send_raw':self.send,
'send':self.send_target,
'join':self.join
}
def run_plugins(self, listfrom, incoming):
for plugin in listfrom:
plugin.run(incoming, self.methods(), self.info(incoming))
'''
MESSAGE PARSING
'''
def core_commands_parse(self, incoming):
'''
PLUGINS
'''
self.run_plugins(self.plugins, incoming)
'''
BOT IRC FUNCTIONS
'''
def connect(self):
self.irc.connect((self.server_url, self.port))
def identify(self):
self.send(self.identify_command())
def greet(self):
self.send(self.set_nick_command())
self.send(self.present_command())
for channel in self.autojoin_channels:
self.send(self.join_channel_command(channel))
def pull(self):
while self.isListenOn:
try :
data = self.irc.recv(2048)
raw_msg = data.decode("UTF-8")
msg = raw_msg.strip('\n\r')
self.stay_alive(msg)
self.core_commands_parse(msg)
print(
"""***
{}
""".format(msg))
if len(data) == 0:
try:
self.irc.close()
self.registered_run()
except Exception as e:
print(e)
except Exception as e:
print(e)
# all in one for registered bot
def registered_run(self):
self.connect()
self.identify()
self.greet()
self.load_plugins(self.plugins)
self.pull()
def unregistered_run(self):
self.connect()
self.greet()
self.load_plugins(plugins)
self.pull()
'''
ONGOING REQUIREMENT/S
'''
def stay_alive(self, incoming):
if 'ping' in incoming.lower():
part = incoming.split(':')
if self.domain in part[1]:
self.send(self.pong_return())
print('''
***** message *****
ping detected from
{}
*******************
'''.format(part[1]))
self.irc.recv(2048).decode("UTF-8")
x = Bot_core(); x.registered_run()
Answer: I have the following points to make on the code:
- No logging
- No external config file for the settings (Open/Close Principal violation)
- Creating resources in the init instead of in a separate section (no re-use of common functions should the user attempt to connect to multiple IRC servers)
- I don't see any validation of config/data items. Users could have the expected strings or they could introduce some buffer overflow execution code. Always validate user input.
- String concatenation and formatting - don't use "+" to join strings and data together, use f-strings or .format()
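For instance, the bot's command-building helpers read much better with f-strings. A sketch (shown as free functions rather than methods, for brevity):

```python
def set_nick_command(name):
    return f'NICK {name}\r\n'

def present_command(name):
    return f'USER {name} {name} {name} : {name} IRC\r\n'

def specific_send_command(target, msg):
    return f'PRIVMSG {target} :{msg}\r\n'

print(repr(specific_send_command('##bottestingmu', 'hello')))
# -> 'PRIVMSG ##bottestingmu :hello\r\n'
```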
You are using long if/else statements instead of ternary expressions:
if x == None:
return ''
else:
return x
should be:
return "" if not x else x
(also, no need to do == None as that is assumed).
You have many huge try/except blocks, reduce this to only the specific lines which could throw an exception and catch that specific exception.
There are many huge if: if: if: statements instead of reducing each if statement into a separate function (Single Responsibility Principle)
Your exceptions print to the screen instead of being logged; you should not interrupt the UX - handle the error gracefully and continue on, or warn the user you need to exit the program due to an error.
Plugins: There is no validation when loading the plugin. It could be malicious or faulty. You need to wrap loading the plugin in a try/except, and you need to run some checks on the plugin to determine validity. Like not trusting user input, you need to validate plugins so they don't crash the bot.
As hinted at before, you have plugin methods talking directly to send function, however there is no validation on what they're sending through the bot. Ensure you validate the input into .send(cmd) before transmitting it else you could unwittingly make a DDOS bot.
The connection function has no validation routine either, you should validate if the input is a valid domain name or ip address on loading (as an example, use regex). Also, you should validate if there is a valid listening port 'out there' before assuming the remainder of the data for that particular IRC Server is valid. Handle faults and errors gracefully in the UX.
Finally... Comments in capitals (please, no) or comments stating the obvious (unnecessary) or comments that don't correlate to the code. Your code needs to explain what it's doing, you should only need comments when the code is difficult (like some math code), and it should state "this is the sieve of eratosthenes" or some other comment which is the "Why" or "What", and not the "how" - the code is the "how" (I hope that makes sense?).
Also, when your code gets bigger, it might be an idea to export all the plugin code to a separate .py file named 'plugins.py' and import that into your main. Speaking of main, you're missing the entry point. Have a look at a few other examples on Code Review.
Good Luck! | {
"domain": "codereview.stackexchange",
"id": 31556,
"tags": "python, python-3.x, plugin, chat"
} |
Error husky simulation in ros noetic | Question:
Hello everyone, I am using Husky navigation in ROS Noetic according to this tutorial: husky navigation tutorial. However, when I run the command roslaunch husky_gazebo husky_playpen.launch, it always shows an error like this
[spawn_husky_model-10] process has died [pid 2730237, exit code 1, cmd /opt/ros/noetic/lib/gazebo_ros/spawn_model -x 0.0 -y 0.0 -z 0.0 -Y 0.0 -unpause -urdf -param robot_description -model / __name:=spawn_husky_model __log:=/home/ngochuy/.ros/log/04cddd9a-4078-11ec-95fc-204ef60a0173/spawn_husky_model-10.log].
log file: /home/ngochuy/.ros/log/04cddd9a-4078-11ec-95fc-204ef60a0173/spawn_husky_model-10*.log
Can anyone help me to fix this issue, please!
Thanks in advance, and have a nice day!
Originally posted by N.N.Huy on ROS Answers with karma: 32 on 2021-11-07
Post score: 0
Original comments
Comment by gvdhoorn on 2021-11-08:
I'm sorry to have to do this for something seemingly minor, but please don't post screenshots of terminal text or source code in question on ROS Answers. It's all text, so there is no need. Just copy-paste the text from the terminal or the source into your question text. Do make sure to format it properly by selecting the text and pressing ctrl+k (or clicking the Preformatted Text button (the one with 101010 on it)).
You don't need to post a new question, just edit your curent one. You can use the edit button/link for this.
After you replace the screenshot with the error message itself, we can re-open your question.
Comment by N.N.Huy on 2021-11-08:
Ok thank you!
Comment by osilva on 2021-11-08:
Hi @N.N.Huy that's old tutorial. Try installing navigation package: sudo apt-get install ros-noetic-husky-navigation
Comment by osilva on 2021-11-08:
For Husky more recent tutorials, check this: https://www.clearpathrobotics.com/assets/guides/kinetic/husky/HuskyMove.html
Answer:
Use a more recent version of Husky tutorial
https://www.clearpathrobotics.com/assets/guides/kinetic/husky/HuskyMove.html
install the simulator from the most recent Noetic version.
sudo apt-get install ros-noetic-husky-simulator
sudo apt-get install ros-noetic-husky-desktop
For HUSKY MOVE BASE DEMO you also need to install:
sudo apt-get install ros-noetic-husky-navigation
Originally posted by osilva with karma: 1650 on 2021-11-08
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by N.N.Huy on 2021-12-30:
Thank you so much ! | {
"domain": "robotics.stackexchange",
"id": 37099,
"tags": "ros, gazebo, husky"
} |
Hydrolysing large molecules to reveal amino acids | Question: Consider Oxytocin:
I am asked to hydrolyze Oxytocin and reveal 5 amino acids which are the result of this hydrolysis.
I must admit I have no idea where to begin. I know that generally amino acids are on the form of $\ce{NH_2-CH(R)-COOH}$, so perhaps looking at these double bonds to oxygen may be a starting point, as I assume they will become the acid group? But I really don't know how to move forward from there!
Answer: You are on the right track. $\ce{NH_2-C(H)R-COOH}$ is the formula for an alpha amino acid, one where the amino group of the molecule is attached to a carbon atom one carbon atom away from the carboxylate carbon. This is the most common type of amino acid found in biochemistry. (A beta amino acid would be $\ce{NH_2-C(H)R^2-C(H)R^1-COOH}$).
When amino acids condense into a polymer or oligomer they do so by forming peptide bonds, also called amide bonds.
$\ce{R^1COOH + NH2R^2 -> R^1CONHR^2 + H2O}$
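Hydrolysis is just this condensation run in reverse — breaking each peptide bond consumes one water molecule:

$$\ce{R^1CONHR^2 + H2O -> R^1COOH + NH2R^2}$$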
So to find the places in oxytocin that would be hydrolyzed into amino acids, look for individual carbon atoms that are attached to both (i) an oxygen atom via a double bond and (ii) a nitrogen atom while (iii) not being attached to any other heteroatoms. (Item (iii) is generally necessary to exclude other functional groups such as carbamides or carbamates, but these groups are not found in oxytocin.) For oxytocin, your impulse to look just at carbon double-bonded to oxygen (item (i) only) is OK, but be warned that this procedure is not precise enough in general, because molecules with e.g. ketone moieties will have C=O bonds but will not be hydrolyzed.
The red-circled amino group is a free amino group: the carbon attached to it is not bound to a carbonyl oxygen, so it is not a place where hydrolysis will happen. (However, look one carbon atom to the left of C atom attached to the circled N....)
@DavePhD is right; there are more than five amide bonds in oxytocin. I think there are a total of ten. Eight are the result of amide bond formation between alpha amino acids and two are the result of amide bond formation between an acid side chain and unsubstituted ammonia. | {
"domain": "chemistry.stackexchange",
"id": 2763,
"tags": "organic-chemistry, hydrolysis, proteins, amino-acids"
} |
Isn't it "trivial" to represent/reduce any classical physics problem into a Spin-Glass which is NP-Complete? | Question: In the late 80's there were several highly cited efforts to use Spin-Glass models to formulate other computational problems such as: Protein Folding and Neural Networks.
Isn't it straightforward to reduce any classical physics problem into a Spin-Glass, which is an NP-complete problem?
What I am saying is: since Spin-Glass is at least as powerful a model (computationally) as any classical physics model, it is always possible to restate a classical system using Spin-Glass.
I believe I am missing a huge point here, because these 2 papers are highly cited ones.
EDIT: If you downvote, please, kindly consider to add a constructive comment.
Answer: It was only recently (2016) that it was proved mathematically that all of classical spin physics can be reproduced by the 2D Ising Model with linear terms (what physicists call "fields") with at most polynomial overhead: Simple universal models capture all classical spin physics.
So there's two things to say:
It is not "trivial" to do this reduction, since it took about 100 years to prove.
Not just any spin-glass problem encapsulates all of classical spin physics. The paper proves that a 2D spin-glass problem with linear terms is sufficient. But a 1D spin-glass, or a 2D spin-glass without linear terms, would not be enough. | {
"domain": "cstheory.stackexchange",
"id": 4458,
"tags": "complexity-classes, reductions, np-complete, physics, statistical-physics"
} |
Does environment server make global 3D representation with respect to static frame /odom or /map? | Question:
I am looking for a collision efficient 3D global representation. I think environment server does this using collision map, but I am not sure and hence want to clear my doubts.
Does the environment server take the collision maps and put them into a global representation, which should be with respect to some static frame like /odom or /map? So if it next receives a new collision map from a different location of the mobile base (as the collision map is centered around the mobile base frame), it should update the global 3D representation. Could someone show me the right direction?
If the above is true, can I save the global 3D representation (collision map) and reload it again in the future?
Thanks,
Originally posted by VN on ROS Answers with karma: 373 on 2011-11-15
Post score: 0
Answer:
Usually, that kind of algorithms rely on TF to transform incoming sensor data into a global frame.
As you probably know, all your sensor data are supposed to carry a frame_id and a time stamp, which are used by TF to perform adequate transformations.
Checking the documentation of http://www.ros.org/wiki/planning_environment, you can find the following:
~global_frame (string, default: "")
If the global frame is specified to be the empty string, the root frame in which all monitoring will take place is the root link of the robot ("base_link" for the robot). Otherwise, this parameter can be set to a fixed frame in the world (e.g. "odom_combined", "map")
so in the end, it is up to you to choose what frame is to be used as reference.
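For example, setting it in a launch file could look like this (a sketch — the node and package names here are illustrative, so adapt them to your own launch setup):

```xml
<node pkg="planning_environment" type="environment_server" name="environment_server" output="screen">
  <!-- Empty string = use the robot's root link; otherwise pin monitoring to a fixed world frame -->
  <param name="global_frame" type="string" value="odom_combined" />
</node>
```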
Raph
Originally posted by raphael favier with karma: 1382 on 2011-11-18
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by raphael favier on 2011-11-18:
sorry, I never used this package. maybe check mapping_msgs, but it seems its API will undergo changes at it will move to using PCL. else, use the source luke ;)
Comment by VN on 2011-11-18:
I am assuming that same parameter (global_frame) holds in electric as well. I need to set this parameter "global_frame" under "environment_server" node in launch file?? Can you please specify how collision map represents unknown space?
Comment by VN on 2011-11-18:
Raphael: That makes sense. Actually I am using electric and in the starting of "planning_environment" documentation it says that most of the information on diamondback no longer holds so I didn't scroll down to read this information. | {
"domain": "robotics.stackexchange",
"id": 7308,
"tags": "ros, collision-map, environment-server"
} |
Are all the motions of fundamental particles strictly discrete? | Question: I was thinking about how we use the movements of particles in quartz crystals to measure time, using the convenient fact that it undergoes periodic and specific motions. Essentially, its motion has to be quantized for time measurement to be consistent.
Now, since energy is quantized (available only in "packets"), would that necessarily have to imply that all motions along all directions of any fundamental particles can occur in only those amounts that can be represented as a corresponding value of energy spent/obtained?
In the case that motion need actually be quantized, would angular motion need to be quantized too? If a particle is at $x_i, y_i, z_i$ and moving along the direction of the vector ${\bar v}$ at some time $t_i$, then would that render certain angles impossible for it to attain ($\pm\;\alpha$), in any plane $p_j$ such that $\bar v \in p_j$?
If there are any flaws in my understanding of physics, assumptions/premises, or reasoning, please help me understand those too.
Answer:
Now, since energy is quantized (available only in "packets"),
Energy is a continuous variable in the field of real numbers. It becomes quantized in potential wells, as with the hydrogen atom that has specific energy levels.
A free particle can have any energy, and a photon of any energy can come out in Compton scattering or bremsstrahlung.
So there is no quantization of motion; even in bound states, "motion" is figurative, as the only things we know are the orbitals.
"domain": "physics.stackexchange",
"id": 37854,
"tags": "kinematics, spacetime, discrete"
} |
Reading the contents at three URLs using Promises | Question: I'm learning promises and now I'm trying to figure out if something in this code can be improved. This code expects 3 URLs, and then async parallel calls should be done. When all requests are finished, it just shows each request's data.
const hyperquest = require('hyperquest');
const BufferList = require('bl');
const Promise = require('bluebird');
var tasks = [];
for (var i = 0; i < 3; i++) {
new function(index) {
tasks[index] = new Promise(function(resolve) {
makeRequest(resolve, index);
});
}(i);
}
Promise.all(tasks).then(function(result) {
result.forEach(function(item) {
console.log(item.data.toString());
})
});
function makeRequest(resolve, step) {
var bl = new BufferList();
var req = hyperquest(process.argv[step + 2]);
req.on('end', function() {
resolve({data: bl});
});
req.pipe(bl);
}
The main question is whether this is the right usage of Promise.
Should I use this
tasks[index] = new Promise(function(resolve) { makeRequest(resolve, index); });
or can it be simplified?
Also, req.on('end', function() { resolve({data: bl}); }); is a direct call of the resolver callback, but maybe it's possible without it?
Answer: On the whole it does not look very elegant, especially new function, building functions in a loop and building the Promise array.
I would suggest you invest time in the bluebird Promise.map function and write something like this:
var indexes = [0,1,2];
Promise.map(indexes, function(index) {
return new Promise(function(resolve) {
makeRequest(resolve, index);
});
}).then(function(result) {
result.forEach(function(item) {
console.log(item.data.toString());
})
});
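On that last point, makeRequest as written can only ever resolve, so a failed request would stall Promise.all forever. A sketch of the reject/catch wiring, with the network stubbed out so only the promise shape is shown (in the real code you would hook reject to req.on('error')):

```javascript
function makeRequest(url) {
    return new Promise(function(resolve, reject) {
        // Stub: a real implementation would resolve on 'end' and reject on 'error'.
        if (url) {
            resolve({ data: 'payload from ' + url });
        } else {
            reject(new Error('bad url'));
        }
    });
}

Promise.all([makeRequest('a'), makeRequest('b')])
    .then(function(result) {
        result.forEach(function(item) { console.log(item.data); });
    })
    .catch(function(err) {
        // A single rejection fails the whole batch and lands here.
        console.error('request failed:', err.message);
    });
```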
Other than that, you seem very light on the failure handling, make sure to use catch and reject. | {
"domain": "codereview.stackexchange",
"id": 10557,
"tags": "javascript, node.js, promise"
} |
Find the asymptotic bound for the recurrence relation: $T(n) = T(\sqrt{n}) + 5n$ | Question: I've tried to expand the recursion:
$$T(n) = T(n^{\frac{1}{2}}) + 5n = T(n^{\frac{1}{4}}) + 5(n^{\frac{1}{2}} + n) = T(n^{\frac{1}{8}}) + 5(n^{\frac{1}{4}} + n^{\frac{1}{2}} + n)$$
We have a total of $\lfloor\log_2(n)\rfloor + 1$ elements in the recursion. So we can easily find an upper bound:
$$T(n) = O(n\cdot \log_2(n))$$
The thing is, I'm not sure it's a tight upper bound. More over, I haven't been able to reach a tight lower bound from the recursion. I've tried to, but I didn't get to the same assymptotic bound I've found above.
Any help would be appreciated.
Answer: Taking the square root of $n$ halves the number of bits, so you indeed halve that number $\lg n$ times before the iterated square root falls below some predefined constant (say $4$).
Now we can write
$$n+\sqrt n+\sqrt[4]n+\sqrt[8]n+\cdots<n+\sqrt n+\sqrt n+\sqrt n+\cdots=n+\sqrt n\lg n,$$ which is $O(n)$.
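A quick numerical sanity check of that linear bound (a sketch — the base case $T(n)=1$ for $n<2$ is an arbitrary choice and only shifts the constant):

```python
def T(n):
    # T(n) = T(sqrt(n)) + 5n, with an arbitrary constant base case.
    if n < 2:
        return 1
    return T(n ** 0.5) + 5 * n

for n in [10, 10**2, 10**4, 10**8]:
    print(n, T(n) / n)  # the ratio falls toward 5 as n grows
```

The ratio $T(n)/n$ settles just above $5$, consistent with the dominant $5n$ term and $T(n)=\Theta(n)$.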
Justification:
With $n=m^2$, $$\sqrt n\lg n\le n$$
becomes
$$2m\lg m\le m^2$$
or
$$2\lg m\le m,$$
which is true as of $m=4$. | {
"domain": "cs.stackexchange",
"id": 21111,
"tags": "asymptotics, recurrence-relation"
} |
Are comparison sort algos appropriate for SUBJECTIVE sorting? | Question: I've been tasked with creating an online feature that ranks 50 fantasy characters from a variety of domains based on combat acumen and polls users one which one is the most powerful based on their votes on a series of face-to-face matchups. (Spiderman vs. Jon Snow, etc. etc.). If this is designed well, we hope to attract a large audience and high participation.
My first instinct was to use some variety of comparison sort--probably Bubble or Heap, given the manageable size. But as I brush up on the mechanics, all the literature I can find seems to be oriented toward deterministic sorts in which any pair of items can be definitely compared.
Another curveball here is that, as more people participate, the ordered list can (but needn't) be informed by all the people who have already weighed in. This will probably be necessary since we can't expect every user to weigh in on hundreds of matchups. I'm not clear on how to tell the algorithm which items are already somewhere near their appropriate place in the list thanks to a lot of preexisting votes.
I'm sure I'm not the first to confront this issue since comparison algos are commonly used to extract crowd wisdom on the best of, say, a few dozen photographs. I'm just not quite sure where to start. Am I looking for a variation on comparison sort, with some additional search phrase for Google, or a different approach entirely?
Thanks! [Insert favorite fantasy sign-off.]
Answer: I suggest reading about the theory on rating systems and ranking systems. There are many standard algorithms and methods for this. I would recommend reading the following resources, to get you started and to give you an entry-point and overview and let you figure out which might be best-suited for your particular situation:
The Bradley-Terry model
Pairwise comparison
The Elo rating system, and other rating systems (TrueSkill, Glicko, etc.)
Resources on Stats.SE: https://stats.stackexchange.com/q/30976/2921, https://stats.stackexchange.com/q/71297/2921, https://stats.stackexchange.com/q/15776/2921, https://stats.stackexchange.com/q/6379/2921
Finally, there is work on experiment design: how to sequentially select which pair of characters to compare, based on the results of comparisons so far. (Think of this like matchmaking in chess, where you want to figure out which pairs of players to have play each other, based on their results so far.) I can't remember pointers right now, but I remember that I've seen some material on Stats.SE about this as well, so do some searching there and you might find additional references and resources.
I suspect it is likely that one of these will be a reasonable choice for you.
There is a major caveat. It is important to understand is that there may not be any linear ordering of characters that has the property you want.
Think of the game rock-paper-scissors; there is no linear ordering of these three choices that ranks them in order of "power", and polling people about pairs of them won't let you find a linear order. In particular, none of those three choices is any more powerful than the others; rock beats scissors, paper beats rock, and scissors beats paper. This can show up in fantasy settings, too.
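As a concrete starting point, the Elo-style update mentioned above fits in a few lines. This is only a sketch: the K-factor of 32 and the 400-point scale are conventional choices, and here "games" are pairwise votes rather than chess matches.

```python
def expected_score(r_a, r_b):
    """Probability that A beats B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def elo_update(r_a, r_b, a_won, k=32):
    """Return the two updated ratings after one pairwise vote."""
    e_a = expected_score(r_a, r_b)
    score = 1.0 if a_won else 0.0
    # Winner gains what the loser loses; the total rating pool is conserved.
    return r_a + k * (score - e_a), r_b - k * (score - e_a)
```

After many votes, sorting characters by rating gives the crowd's power ordering; intransitive preferences (the rock-paper-scissors case above) show up as ratings that never settle.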
"domain": "cs.stackexchange",
"id": 13435,
"tags": "sorting, comparison, heap-sort"
} |
Elixir db model | Question: I have designed this model which is very flexible. For example, you can create asset types, assets and combinations of them infinitely. It is the front end to a Python Pyramid website, so all the validation and business logic is handled by the web app.
However, not being a db guy, I have this sneaking suspicion that the schema totally sucks. There may be performance issues etc that I haven't foreseen etc.
class Asset(Entity):
has_field('TimeStamp', Unicode, nullable=False)
has_field('Modified', Unicode)
belongs_to('AssetType', of_kind='AssetType', inverse='Assets')
has_many('Values', of_kind='Value', inverse='Asset')
Assets = ManyToMany('Asset')
@property
def Label(self):
if self.AssetType:
for f in self.AssetType.Fields:
if f.Label:
if self.Values:
for v in self.Values:
if v.Field.Name == f.Name:
return v.Value
def __repr__(self):
return '<Asset | %s>' % self.id
class AssetType(Entity):
has_field('Name', Unicode, primary_key=True)
has_field('Plural', Unicode)
has_many('Assets', of_kind='Asset', inverse='AssetType')
has_many('Fields', of_kind='Field', inverse='AssetType')
class Value(Entity):
has_field('Value', Unicode)
belongs_to('Asset', of_kind='Asset', inverse='Values')
belongs_to('Field', of_kind='Field', inverse='Values')
class Field(Entity):
has_field('Name', Unicode)
has_field('Unique', Unicode, default=False)
has_field('Label', Boolean, default=False)
has_field('Searchable', Boolean, default=False)
has_field('Required', Boolean, default=False)
has_many('Values', of_kind='Value', inverse='Field')
belongs_to('FieldType', of_kind='FieldType', inverse='Fields')
belongs_to('AssetType', of_kind='AssetType', inverse='Fields')
class FieldType(Entity):
has_field('Name', Unicode, primary_key=True)
has_field('Label', Unicode, unique=True)
has_many('Fields', of_kind='Field', inverse='FieldType')
Answer: You've reinvented a database inside a database. Basically, the Asset/AssetType model simulates tables and columns at the application level, which will be slow as a result. Also, you are going to spend a lot of effort reimplementing database features.
Using a NoSQL database, which is designed to handle less structured data, might be a good idea. Alternatively, you could create a table for each asset type, which will perform better.
@property
def Label(self):
if self.AssetType:
for f in self.AssetType.Fields:
if f.Label:
if self.Values:
for v in self.Values:
if v.Field.Name == f.Name:
return v.Value
That's really nested which is bad sign. I suggest something like:
@property
def Label(self):
if self.AssetType:
label = self.AssetType.Label
field = self.find_field(label)
if field:
return field.Value
Or if you use the Null Object pattern:
@property
def Label(self):
return self.find_field(self.AssetType.label).Value | {
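The refactored Label property calls a find_field helper that is never defined above. A minimal sketch of what it might look like, assuming Values is the relation declared in the question and each Value has a Field with a Name (the method name is from the refactoring; the body is my assumption):

```python
class Asset:
    # ... Elixir fields and relations as declared in the question ...

    def find_field(self, name):
        """Return the first Value whose Field's Name matches, or None."""
        for v in self.Values or []:
            if v.Field.Name == name:
                return v
        return None
```

Note that the Null Object variant only works if find_field returns a stand-in object (whose Value is None) instead of None itself.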
"domain": "codereview.stackexchange",
"id": 550,
"tags": "python, pyramid"
} |
What is the percentage increase in momentum if kinetic energy is increased by 4 percent? | Question: What is the percentage increase in momentum if kinetic energy is increased by 4 percent? I have tried to solve it using trial and error method and have got 4 percent as my answer. Can you please give a better method?
Edit: I posted this question when I was new to SE, and it was closed for not following the rules... but now I know how to solve it and want to open it again. Can I do that?
Answer: The kinetic energy $E_{\rm K}= \dfrac 1 2 mv^2 \Rightarrow \dfrac{dE_{\rm K}}{dv} = mv$ and the momentum $p = mv \Rightarrow \dfrac{dp}{dv} = m$
For small changes $\dfrac{\Delta E_{\rm k}}{E_{\rm k}} \approx \dfrac {2\Delta v}{v}$ and $\dfrac{\Delta p}{p} \approx \dfrac{\Delta v}{v}$
If the percentage change in kinetic energy is $4\%$ then $\dfrac{2\Delta v}{v}$ equates approximately to $4\%$ and $\dfrac{\Delta v}{v}$ (the change in the momentum) equates to approximately $2\%$.
Working through the calculations exactly as was done by @SuchDoge gives $1.98\%$ | {
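The exact figure follows from $p \propto \sqrt{E_{\rm K}}$ at fixed mass:

$$
p = \sqrt{2mE_{\rm K}} \quad\Rightarrow\quad \frac{p'}{p} = \sqrt{\frac{E'_{\rm K}}{E_{\rm K}}} = \sqrt{1.04} \approx 1.0198,
$$

i.e. an exact increase of about $1.98\%$, consistent with the $2\%$ obtained from the linearized small-change estimate above.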
"domain": "physics.stackexchange",
"id": 34650,
"tags": "homework-and-exercises, newtonian-mechanics, forces, kinematics, momentum"
} |
How can I do mathematical operations to two columns of a CSV file and save the result in a new CSV file? | Question: I have a CSV file (such as test1.csv). There are tabular values like the following.
S1 S2 S3
4.6 3.2 2.1
3.2 4.3 5.4
1.4 3.4 6.1
I want to do mathematical operations, such as R1=(S1+S2)/1.5 and R2=(S2+S3)/2.5. Then I want to save the results R1 and R2 in a new CSV file (such as test2.csv). I tried with the following code. That does not work.
import pandas as pd
df = pd.read_csv('test1.csv')
df2['R1'] = (df['S1'] + df['S2'])/1.5
df2['R2'] = (df['S2'] + df['S3'])/2.5
df2.to_csv('test2.csv')
Answer: You did not define df2 before attempting to use it.
Try this:
import pandas as pd
df = pd.read_csv('test1.csv')
df2 = pd.DataFrame({})
df2['R1'] = (df['S1'] + df['S2'])/1.5
df2['R2'] = (df['S2'] + df['S3'])/2.5
df2.to_csv('test2.csv') | {
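Equivalently, df2 can be built in a single expression; passing index=False keeps pandas' row index out of the output file. In this sketch the sample data from the question is first written to test1.csv so the snippet runs standalone:

```python
import pandas as pd

# Recreate the sample input from the question (normally test1.csv already exists).
pd.DataFrame({'S1': [4.6, 3.2, 1.4],
              'S2': [3.2, 4.3, 3.4],
              'S3': [2.1, 5.4, 6.1]}).to_csv('test1.csv', index=False)

df = pd.read_csv('test1.csv')
df2 = pd.DataFrame({
    'R1': (df['S1'] + df['S2']) / 1.5,
    'R2': (df['S2'] + df['S3']) / 2.5,
})
df2.to_csv('test2.csv', index=False)  # omit the row index column
```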
"domain": "datascience.stackexchange",
"id": 11174,
"tags": "python, pandas"
} |
Soccer winning probabilies from ELO strength indicator | Question: What can I do to improve this function, which calculates the win expectancy of a soccer team, given their strength:
def winning_prob(first, second, first_at_home=False, second_at_home=False):
""" Return winning probability, given ELO ratings of two soccer teams
Args:
first: ELO rating of first team
second: ELO rating of second team
first_at_home: is first team playing at home?
second_at_home: is second team playing at home?
Returns:
Winning probability of first team against second team
"""
if first_at_home:
first = first + 100
if second_at_home:
second = second + 100
difference = (first - second) / 400.0
return 1.0 / (10**(-difference) + 1.0)
Especially the two boolean switches look clunky to me, but the explanation is simpler that way. What improvements do you see?
Answer: With the possibility of first_at_home and second_at_home both being True, there really is not too much to improve on here.
The only things I see are:
Instead of doing first = first + 100, you can do first += 100.
Instead of doing -((first - second)/ 400.0) you can do (second - first)/ 400.0.
There is also no need to specify your operands as floats (400.0 vs 400) because Python does float division with the basic / operator.
Here are my improvements:
def winning_prob(first, second, first_at_home=False, second_at_home=False):
""" Return winning probability, given ELO ratings of two soccer teams
Args:
first: ELO rating of first team
second: ELO rating of second team
first_at_home: is first team playing at home?
second_at_home: is second team playing at home?
Returns:
Winning probability of first team against second team
"""
if first_at_home:
first += 100
if second_at_home:
second += 100
return 1 / (10**((second - first)/400) + 1) | {
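A few sanity checks on the formula; the improved function is restated so the snippet runs standalone:

```python
def winning_prob(first, second, first_at_home=False, second_at_home=False):
    """Winning probability of the first team (improved version from above)."""
    if first_at_home:
        first += 100
    if second_at_home:
        second += 100
    return 1 / (10 ** ((second - first) / 400) + 1)

# Equal ratings on neutral ground: a coin flip.
assert winning_prob(1500, 1500) == 0.5

# The two teams' winning probabilities always sum to 1.
assert abs(winning_prob(1600, 1450) + winning_prob(1450, 1600) - 1) < 1e-9

# Home advantage is equivalent to 100 extra rating points.
assert winning_prob(1500, 1500, first_at_home=True) == winning_prob(1600, 1500)
```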
"domain": "codereview.stackexchange",
"id": 7961,
"tags": "python"
} |
Multiclass Regression for density prediction | Question: This is my first question in the DS-community, so I'm happily willing to accept any kind of (meta-) advice :).
I have time-series data for a set of users (~100), where every 15 minutes it is logged to which antenna (~80) each user was connected (similar to cell phone connections).
Based on this data, I created a density vector which, for some time t (e.g. 01.01.2016 at 06:00), counts how many users are connected to each antenna.
Such a density object (for same day and time as given above) might look like this:
100: 5;
101: 2;
102: 3;
103: 0;
whereas the first number refers to some ID of the antenna, and the 2nd number refers to the number of users connected to the antenna.
I'm planning to feed this time-series data to a recurrent neural network.
The results should be the predicted number of users connected to the antenna at the next time step (so every 15 minutes). So it might predict for 01.01.2016 at 06:15:
100: 7;
101: 0;
102: 1;
103: 2;
Now I'm wondering what should the output layer be like? Regarding number of neurons and activation function especially. I've been reading quite a lot about multinomial logistic regression but some confirmation would be nice.
If it should output the predicted number of users per antenna, it should probably have the same number of neurons, i.e. 80, just as in a multi-class classification scenario with a softmax activation function.
So what I need is a different activation function, but even after reading quite a bit, I couldn't wrap my head around it yet.
E.g., Get multiple output from Keras proposes to use a linear activation function, but in their case they tried to predict the next 3 values by using regression, whereas I am trying to predict the next 1 value for a set of antennas.
PS: For constructing the Neural Network, I'm using (Tensorflow-) Keras.
PPS: For feeding the neural network, I would generate the density vector for all time steps, and then feed batch-wise with batch_size = 80 (number of antennas). Out of curiosity: I happen to have only 1 feature, so the input_shape probably be (batch_size, 1, 1*80); If I was to have 2 features, would it then be (batch_size, 1, 2*80)?
PPPS: Not even quite sure how to name this problem. I think it would probably be called a (time-series) multiclass regression problem but I couldn't find any example with the same name (left aside the multinomial logistic regression).
Answer: If you want to predict the raw number of users, then this is a classical regression problem. Set an output layer with a node for each antenna, and no activation function.
If instead you need to predict a probability distribution / frequency on all antennas, use a softmax activation, so that each output vector would sum up to 1. | {
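The difference between the two output heads can be illustrated without any framework. With no activation, the output scores are read directly as predicted user counts; with softmax they become a distribution over antennas that sums to 1 (numpy only, with 4 stand-in antennas instead of 80):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax: shift by the max, exponentiate, normalize."""
    e = np.exp(z - z.max())
    return e / e.sum()

scores = np.array([2.0, -1.0, 0.5, 3.0])  # raw network outputs, one per antenna

counts = scores          # regression head (no activation): predicted user counts
dist = softmax(scores)   # softmax head: predicted fraction of users per antenna
```

Both heads preserve the ranking of the antennas; only the interpretation of the numbers changes.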
"domain": "datascience.stackexchange",
"id": 5488,
"tags": "machine-learning, python, keras, time-series, regression"
} |
Can we fill potato chips bags with a gas other than nitrogen? | Question: I understand that we fill potato chips bag with nitrogen to prevent oxidation. But why do we use nitrogen, instead of neon or hydrogen or something else?
My first guess is that nitrogen is lighter than neon/argon but what about hydrogen or helium?
Answer: As Nilay Ghosh said, nitrogen is cheap. Very cheap. Neon is expensive. Argon is cheaper than neon, but considerably more expensive than nitrogen. Helium is also expensive and needs to be used wisely, for important things, e.g., cryogenics. And hydrogen! I can just see the ads: “Buy our chips: they are lighter than air! But avoid open flames and sparks unless you want to be Hindenburged to a crisp (no pun intended)”
Another fill gas to avoid is sulfur hexafluoride. A tennis ball manufacturer once decided to fill tennis balls with sulfur hexafluoride, assuming this would prevent the balls from going flat as a consequence of the high molar mass of sulfur hexafluoride. But the tennis balls exploded on the shelves because air diffused in.
Thankfully, no one has ever tried using nitrous oxide as the fill gas in potato chip bags! | {
"domain": "chemistry.stackexchange",
"id": 14052,
"tags": "everyday-chemistry, food-chemistry"
} |
How can an artificial river turn desert into arable land? | Question: Egypt is planning to create a new delta on the left of the old one by building an artificial river that redirects agricultural waste water into the desert. The water should gradually turn the desert into land that's suitable for agriculture. Here's a video that explains the plan further.
This ambitious plan will cost around 9 billion euro's to complete. What I don't understand about it though is how simply redirecting water to a desert can turn it into arable land. The Egyptian desert consists of sandy soil and this soil type usually lacks the nutrients necessary for crops to grow. So how will this work?
Answer: Such projects take time to fully achieve their aims. In addition to providing water, the sand will need to undergo a process of soil conditioning, including the progressive addition of biomass, humus and compost. It will also need a means of water retention, which may include the application of clay or silt, either mechanically or via the outflow of the Nile River.
"domain": "earthscience.stackexchange",
"id": 2720,
"tags": "agriculture, desert"
} |
Uri Routing PHP Code | Question: I do programing with core php & i don't want to like use any cms or mvc framework. if any handy ideas then welcome it.
index.php file
<?php
/** INCLUDE BASIC CONFIGURATION FILE */
strip_php_extension();
/** GET REQUEST NAME */
$request_uri = trim(filter_input(INPUT_GET, 'uri_req'),'/');
/** CALL REQUESTED PAGE */
_PageCall($request_uri);
?>
function.php file
<?php
/**
* strip_php_extension()
*
* check whether the requested URL ends with a .php extension
*
* @param null
* @return redirects to the same URL with the .php extension removed
*/
function strip_php_extension()
{
$uri = filter_input(INPUT_SERVER,'REQUEST_URI');
$ext = substr(strrchr($uri, '.'), 1);
if ($ext == 'php')
{
$url = substr($uri, 0, strrpos($uri, '.'));
redirect($url);
}
}
/**
* _PageCall()
*
* serve the current web request
*
* @param null|string $file_name
* @return includes the requested file, or the 404 page
*/
function _PageCall($file_name=null){
$pagedir = ROOT.DIRECTORY_SEPARATOR.PAGEDIR;
/* Every File Request Must Be in HTML */
if(".html"===substr($file_name, -5)):
$file_name = trim($file_name,'.html');
if(file_exists($pagedir.$file_name.'.php')):
require_once $pagedir.$file_name.'.php';
else:
redirect('404.html');
endif;
else:
if(NULL==$file_name||$file_name=='index.html'):
require_once $pagedir.'index.php';
elseif($file_name=='a'):
require_once $pagedir.'ajax.php';
else:
redirect(SITEURL.'404.html');
endif;
endif;
} ?>
Answer: Guard against ../ in the uri_req parameter
I'm seeing a possible security hole. A big one. Consider the following URL:
http://yoursite.com/index.php?uri_req=../../hacking_scripts/dump_database.php
Now let's pretend for a moment that I'm criminal who has previously breached the web server that hosts your web application. Your app is stored at:
/www/himanshu/index.php
Now when I breached the web server, I created the following file:
/hacking_scripts/dump_database.php
Which contains PHP code to connect to your database as the same username and password as your web site, because after all, I can view the raw PHP code and found a database connection string with a username and password.
The dump_database.php script basically queries for all the tables your web site's user has available to it, does a SELECT * on the table and dumps out the raw data in the resulting HTML file. Upon hitting the URL above, your router gleefully loads that file and executes the PHP inside because I placed ../../ in the uri_req parameter.
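The standard fix is to canonicalize the requested path and verify it still lies inside the page directory before requiring it; in PHP that is realpath() plus a prefix check. The same containment logic, sketched here in Python for brevity (PAGE_DIR is a hypothetical location):

```python
import os

PAGE_DIR = os.path.realpath('/www/himanshu/pages')  # hypothetical page directory

def is_safe(requested):
    """Reject any uri_req that escapes the page directory after resolution."""
    full = os.path.realpath(os.path.join(PAGE_DIR, requested))
    # Safe only if, once ../ segments and symlinks are resolved, the path
    # still lives under PAGE_DIR.
    return full == PAGE_DIR or full.startswith(PAGE_DIR + os.sep)
```

Simply rejecting any uri_req containing ../ before routing achieves the same goal with less machinery.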
Some other improvements:
Don't redirect to a 404 Not Found page, because the HTTP status will first be a 301 redirect to a GET request that results in a 200 OK response. Your intended response is 404 Not Found, so set the HTTP status to that value and simply include the 404.html file.
Web crawlers and search engines, when encountering a 404 Not Found, will not attempt (or should not attempt) to hit that URL again. In your setup, they will keep coming back to the same "Not Found" URL over and over again.
The _PageCall function name doesn't really tell you what it does. It routes the request. Why not call it something like routeRequest?
Add a try-catch around the calls to other PHP files, then return a 500 Server Error response. This way you don't get ugly stack traces showing up in production, and you can create your own custom, prettier looking error page for unhandled catastrophic errors.
function _PageCall($file_name=null) {
try {
// route the request
} catch (Exception $ex) {
http_response_code(500);
require_once($pagedir . "500.html");
// Log the error message and stack trace
}
}
Remove duplicated 404 Not Found logic by separating the detecting of a file to load versus the actual executing of that file
function _PageCall($file_name=null) {
try {
$pagedir = ROOT.DIRECTORY_SEPARATOR.PAGEDIR;
$page = getFullFilePath($pagedir, $file_name);
if (!isset($page)) {
$page = $pagedir . '404.html';
http_response_code(404);
}
require_once($page);
} catch (Exception $ex) {
http_response_code(500);
require_once($pagedir . "500.html");
// Log the error message and stack trace
}
}
function getFullFilePath($pagedir, $file_name=null) {
/* Every File Request Must Be in HTML */
if(".html"===substr($file_name, -5)):
$file_name = trim($file_name,'.html');
if(file_exists($pagedir.$file_name.'.php')):
return $pagedir.$file_name.'.php';
else:
return null;
endif;
else:
if(NULL==$file_name||$file_name=='index.html'):
return $pagedir.'index.php';
elseif($file_name=='a'):
return $pagedir.'ajax.php';
else:
return null;
endif;
endif;
}
Whitespace is your friend. For readability put spaces around all operators and the conditions for if statements:
if(".html"===substr($file_name, -5)):
Becomes:
if (".html" === substr($file_name, -5)):
And put some empty lines between IF statements:
$file_name = trim($file_name,'.html');
if (file_exists($pagedir . $file_name . '.php')):
Add additional functions for detecting certain kinds of files so that:
if (".html" === substr($file_name, -5)):
Becomes:
if (isHtmlFile($file_name)):
Use a consistent variable naming convention. I see $pagedir and $file_name. In the very least avoid using all lower case letters. Separate words by an underscore or use camelCase.
This isn't a bad script to do simple routing. Really the most important change is to fix the security flaw allowing you to specify any file path in the URL parameter. | {
"domain": "codereview.stackexchange",
"id": 19753,
"tags": "php, security, url-routing"
} |
Isopropanol precipitation of DNA - duration and magnitude of cold storage | Question: DNA prep protocols often include a final precipitation step with alcohol, often isopropanol, where the DNA must be kept in the alcohol, at a low temperature such as -20C or -70C, often overnight.
What is the point of keeping it at low temperature? Is lower better? At what point do you see diminishing returns?
What about the time? Is overnight really necessary, or would, say, an hour suffice? Do lower temperatures result in quicker precipitation?
Answer: For the precipitation you use different reagents: first you add salt under slightly acidic conditions to make sure that the DNA precipitates in a less polar environment. This is achieved by adding alcohols like ethanol or isopropanol.
It is commonly thought that incubation at low temperatures enhances the results, but interestingly this is not true. There is a paper from the Bethesda Research Laboratories called "Ethanol Precipitation of DNA" which analyzes these relationships (temperature, time and alcohol concentration) in great detail; it's definitely worth reading.
Figures 1-3 from this paper show the influence of these parameters for ethanol (but I am pretty sure this is not much different for isopropanol):
I would prefer ethanol precipitation to isopropanol whenever possible. Isopropanol co-precipitates salts, so you have to wash a few times with 70% ethanol, and it is less volatile than ethanol, so the drying step takes longer. It's also less effective at washing away organic compounds like phenol, and sometimes the pellets are harder to see.
On the other hand, it can be useful when you have to deal with large volumes, since you only need 0.7 volumes of isopropanol compared with 2.5 volumes of ethanol.
Precipitations with isopropanol are usually not incubated but centrifuged directly after mixing. Here is a protocol from my lab (which originally came from Qiagen, if I remember correctly):
Adjust the salt concentration, for example, with sodium acetate (0.3 M, pH 5.2, final concentration) or ammonium acetate (2.0–2.5 M, final concentration).
Add 0.6–0.7 volumes of room-temperature isopropanol to the DNA solution and mix well.
Centrifuge the sample immediately at 10,000–15,000 x g for 15–30 min at 4°C.
Carefully decant the supernatant without disturbing the pellet.
Wash the DNA pellet by adding 1–10 ml (depending on the size of the preparation) of room-temperature 70% ethanol. This removes co-precipitated salt and replaces the isopropanol with the more volatile ethanol, making the DNA easier to redissolve.
Centrifuge at 10,000–15,000 x g for 5–15 min at 4°C.
Repeat the washing and centrifugation steps once more.
Carefully decant the supernatant without disturbing the pellet.
Air-dry the pellet for 5–20 min (depending on the size of the pellet).
Redissolve the DNA in a suitable buffer.
"domain": "biology.stackexchange",
"id": 1840,
"tags": "biochemistry, dna-isolation"
} |
semaphore implementation with test and set | Question: I'm trying to understand the algorithm for implementing semaphores on SMP system using test and set instruction described here:
https://people.mpi-sws.org/~druschel/courses/os/lectures/proc4.pdf
(and a bunch of similar implementations on the web).
Locking semaphore looks like this:
void P(Semaphore s)
{
Disable interrupts;
while (ldl(s->lock) != 0 || !stc(s->lock, 1));
if (s->count > 0) {
s->count -= 1;
s->lock = 0;
Enable interrupts;
return;
}
Add(s->q, current thread);
s->lock = 0;
sleep(); /* re-dispatch */
Enable interrupts;
}
I have a number of questions about this. One is -- what is enabling and disabling of interrupts for? It seems that the whole reason we are using test and set is that interrupt disabling is not feasible on SMP systems.
Then, what does the sleep() followed by enable interrupts mean? Does it mean that we do whole sleep with interrupts disabled? That can't be right as it defeats the purpose of sleeping.
And if we first enable interrupts and then sleep, doesn't it mean that we have a possibility of missing a wakeup after lock is set to 0 but before we sleep?
Answer: The purpose of the spinlock is to prevent other cores from entering the critical section. The purpose of disabling interrupts is to prevent this core from re-entering the critical section.
The only way this core could try to re-acquire the semaphore is if an interrupt occurs and the interrupt handler needs the semaphore for some reason. Disabling interrupts prevents this from occurring.
As for the reason why sleep() is called after releasing the spinlock but before interrupts are enabled... well, that depends too much on the specifics of the operating system design to give a general answer.
The whole of the "waiting on the semaphore" operation should be atomic. That includes waiting on its queue and transitioning the thread from its "running" state to its "waiting" state. Presumably, that's what the call to sleep() is supposed to do.
You wonder if there's a possible race condition where this CPU releases the spinlock on the semaphore, then some other CPU signals the semaphore, then this cpu calls sleep(). That isn't necessarily a problem. If sleep() is written such that if it finds that the current thread is runnable it handles that case correctly, it won't cause a problem.
So why are interrupts disabled when you call sleep()? Some kernels are just designed that way, in that interrupts need to be disabled when you call into the scheduler. The call to sleep() may not actually "sleep"; it might just be whatever the scheduler has to do to make a thread sleep.
Alternatively, if there is a thread context switch hidden deep inside that call to sleep(), that may be where interrupts are re-enabled.
Ultimately, this code is designed to get the general idea across. How you would implement it in a real kernel would be like this, but this isn't necessarily drop-in code.
(Consider that load-linked/store-conditional instructions are spelled out in the example but there's no memory barrier operation on the spinlock release. That's a very specific ISA!) | {
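The spin-acquire loop in P() can be mimicked at user level. An atomic test-and-set is any operation that tries to grab the lock and reports success or failure in one indivisible step; Python's Lock.acquire(blocking=False) has exactly that shape, which makes for a convenient sketch (real kernels, of course, spin on a CPU instruction, not a library lock):

```python
import threading

lock = threading.Lock()   # stands in for s->lock
count = 0                 # shared state the lock protects

def enter_critical_section():
    global count
    # Spin: "test and set" until we win the lock, like the ldl/stc loop above.
    while not lock.acquire(blocking=False):
        pass
    count += 1            # critical section
    lock.release()        # s->lock = 0

threads = [threading.Thread(target=enter_critical_section) for _ in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

This illustrates only the mutual-exclusion half; the sleep/wakeup race the question asks about is avoided by keeping the "enqueue and mark the thread sleeping" step atomic, as the answer describes.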
"domain": "cs.stackexchange",
"id": 10100,
"tags": "concurrency, synchronization"
} |
Weinberg QFT 1 Normalization one 1 particle states p. 66 | Question: I encountered a question regarding the derivation of the normalization of one-particle states in Weinberg's book (Formula 2.5.14).
Similar questions were asked in A question on page 65 of Weinberg's QFT volume 1 and Inner product of standard-momentum one-particle states in Weinberg but I didn't see my questioned answered.
To derive the normalization of a general scalar product of one-particle states with momenta $p, p'$ and polarizations $\sigma,\sigma'$, namely that $(\psi_{p',\sigma'},\psi_{p,\sigma})$ is proportional to $\delta^3(p-p')$, he expands it as $N(p)(U^{-1}(L(p))\psi_{p',\sigma'},\psi_{p,\sigma})$ using $\psi_{p,\sigma} = N(p) U(L(p)) \psi_{k,\sigma}$, where $U(L(p))$ is the quantum operator of the standard Lorentz transformation taking states of standard momentum $k$ to states of arbitrary momentum $p$ satisfying $p^2 = k^2$. He then derives the delta-function normalization from the normalization of the standard states. For this he uses $L^{-1}(p)p' = k'$, and this particular statement makes no sense to me.
If both states describe the same particle, then $k' = k$, but then $L^{-1}(p)p' = k'$ cannot hold except when $p = p'$. If, on the other hand, the states describe different particles, it is generally not true that $L^{-1}(p)p' = k'$ with $k'$ a standard momentum, since we only have 6 classes of standard momenta $k$, and the relation would have to hold in general for his argument to work. So if anyone can enlighten me, I would be glad.
Answer: I had basically the same questions as you! In my case at least, it stemmed from not properly understanding the previous arguments, so I'll try to explain everything in an organized way. It took many hours of being confused, but I think I finally get it.
But first, let me be clear with this: $k'$ is not a standard momentum. The notation Weinberg chose is somewhat confusing here in my opinion, but the book is nearly perfect so we can forgive him :). I'll switch the notation a tiny bit. Heads up: the actual answer to your question comes after this first section, but I think this is very useful as well.
Introduction: how and why we use standard momentum states
The $\sigma$ indices in $\Psi_{p,\sigma}$ indicate degrees of freedom of a particle that are not included in its momentum, and we want to understand how these change under Lorentz transformations. To start, I'll use a $\Phi$ basis instead of the $\Psi$ basis to indicate eigenstates of the remaining non-momentum observables needed to cover the entire Hilbert space. In other words, we have some set of commuting observables $\mathcal{O}$ for which $\mathcal{O}\Phi_{p,\alpha}=\alpha \Phi_{p,\alpha}$.
The most general transformation that the state will undergo is
$$
U(\Lambda)\Phi_{p,\alpha} = \sum_{\alpha'} \sum_{p'} A(p',p,\alpha',\alpha,\Lambda) \Phi_{p',\alpha'},
$$
where the sum over $p'$ is continuous over all possible momenta. But from previous arguments in the book, it is clear that $U(\Lambda)\Phi_{p,\alpha}$ has momentum $\Lambda p$, so we can write
$$
U(\Lambda) \Phi_{p,\alpha} = \sum_{\alpha'} C_{\alpha'\alpha}(\Lambda,p) \Phi_{\Lambda p, \alpha'}.
$$
This is telling us that under a Lorentz transformation, it is possible that $\alpha$ indices will get mixed up. Note that what happens for a momentum index is very simple: $p\to \Lambda p$. Meanwhile for $\alpha$, we might get a complicated superposition of various states.
Let's simplify this: we choose a standard momentum $p\equiv k$ and standard transformation $\Lambda\equiv L(k,p)$ in the above equation, such that $Lk=p$. Having fixed $k$ and $L$, $C_{\alpha'\alpha}$ only depends on $p$ implicitly through $L$ (it no longer depends on a general transformation $\Lambda$). This is important. (But it will only become clear later.) Plugging in above, we get
$$
U(L(k,p)) \Phi_{k,\alpha} = \sum_{\alpha'} C_{\alpha'\alpha}(p) \Phi_{p,\alpha'}.
$$
But now let's choose another discrete basis to label our states with: $\Psi_{p,\sigma}$ instead of $\Phi_{p,\alpha}$ (which would correspond to other observables, of course). We have
$$
\Psi_{p, \sigma} \equiv \sum_{\alpha} \tilde{B}_{\sigma \alpha}(p) \Phi_{p, \alpha}, \quad \Phi_{p, \alpha} \equiv \sum_{\sigma} B_{\alpha \sigma}(p) \Psi_{p, \sigma}.
$$
Plugging in above, we get (assuming linear $U$ for simplicity)
$$
\sum_\sigma \left(B_{\alpha \sigma}(k) U(L) \Psi_{k,\sigma} - \sum_{\alpha'} C_{\alpha'\alpha}(p) B_{\alpha'\sigma}(p) \Psi_{p,\sigma}\right) = 0
$$
Here's the trick: choose the $\Psi_{p,\sigma}$ basis such that $B_{\alpha \sigma}(k) = \delta_{\alpha \sigma}$, and for each other $p$,
$$
\sum_{\alpha'}C_{\alpha' \alpha}(p) B_{\alpha'\sigma}(p) = \frac{\delta_{\alpha \sigma}}{N(p)}.
$$
Finally then with this new basis we have
$$
N(p)U(L(k,p)) \Psi_{k,\sigma} = \Psi_{p,\sigma}.
$$
The new $\Psi$ basis is fantastic because under this Lorentz transformation, we have the simple relation above - no mixing of $\sigma$ indices. This is what Weinberg means when he says that we define $\Psi_{p,\sigma}$ in this way. But this isn't valid for any Lorentz transformation! Note that it was important to fix a momentum $k$ and transformation $L$ in the above arguments, so that the $C_{\alpha'\alpha}$ coefficients only depended on $p$. If they also depended on $\Lambda$, we would not be able to consistently choose a $\Psi$ basis that satisfies the above formula. In particular, we have that for some general $\Lambda$, $U(\Lambda)\Psi_{k,\sigma}$ will end up in a superposition of states with different $\sigma$ values.
We note that choosing a standard momentum $k$ and transformation $L$ doesn't allow us to reach all possible momenta $p$. We can only reach those momenta that have the same mass as $k$ and value of sgn($k^0$). Thus we require 6 classes of standard momenta and transformations. Within each class we might also have different particle species (e.g., particles of different positive mass), which require different standard momenta $k$ and transformations $L$.
The little group
The little group consists of transformations $W$ satisfying $Wk=k$. In other words, $U(W)$ acting on $\Psi_{k,\sigma}$ only mixes up $\sigma$ indices. A general Lorentz transformation has 6 independent parameters, so there are 6 generators. But the constraint $Wk=k$ imposes 3 independent conditions, resulting in $W$ having 3 parameters. We then expect the little group to have 3 generators. Indeed this is the case for all $p\neq 0$, where we have the groups SO(3) and ISO(2). The $p=0$ case imposes no restrictions on $W$ so we still have SO(3,1).
The transformation rule will be of the form
$$
U(W)\Psi_{k,\sigma} = \sum_{\sigma'} D_{\sigma'\sigma}(W) \Psi_{k,\sigma'}
$$
As a check to see if you're following: what should the value of $D_{\sigma'\sigma}(W)$ be when $W$ is a standard transformation?
Once we have the little group matrices, we're all set, since we can find how our general states transform!
\begin{align}
U(\Lambda) \Psi_{p,\sigma} &= N(p) U(\Lambda) U(L(k,p)) \Psi_{k,\sigma}\\
&=N(p) U(L(k,\Lambda p)) U(L^{-1}(k,\Lambda p)\Lambda L(k,p)) \Psi_{k,\sigma}\\
&=N(p) U(L(k,\Lambda p)) U(W(\Lambda,p)) \Psi_{k,\sigma} \\
&=N(p) \sum_{\sigma'} D_{\sigma'\sigma}(W(\Lambda,p)) U(L(k,\Lambda p)) \Psi_{k,\sigma'}\\
&= \frac{N(p)}{N(\Lambda p)} \sum_{\sigma'} D_{\sigma'\sigma}(W(\Lambda,p)) \Psi_{\Lambda p, \sigma'},
\end{align}
where we identified the little group element $W(\Lambda,p)=L^{-1}(k,\Lambda p) \Lambda L(k,p)$.
Normalizing standard momentum states
I leave it to you to show the following: if we want the $D$ matrices to furnish a unitary representation of the little group, we require the states to be normalized as
$$
(\Psi_{k,\sigma},\Psi_{p,\sigma'}) = \delta^3(\vec{p}-\vec{k}) \delta_{\sigma\sigma'}.
$$
Here $p$ is not a standard momentum. Why can this normalization be done? We know that both states are eigenstates of $\vec{P}$, so they should be orthogonal if they correspond to different eigenvalues. The delta function doesn't have a pre-factor dependent on $k$ because we can just absorb that into the definition of $\Psi_{k,\sigma}$ (and there's no $p$ dependent factor because for $p\neq k$, this is zero anyways). The deep part of this normalization, the part which really determines that the $D$ matrices furnish a unitary representation, is the $\delta_{\sigma\sigma'}$ factor.
Normalizing general 1-particle states
Now all we're missing is the case of $(\Psi_{p,\sigma},\Psi_{p',\sigma'})$. The product doesn't involve a standard momentum $k$, so the potential pre-factor dependent on $p$ that I mentioned above may pop up here, and it is not obvious a priori that the $\sigma$ labels will give a delta factor. Spoiler: a momentum-dependent pre-factor does show up, but again we get rid of it by re-scaling $\Psi_{p,\sigma}$. This re-scaling is allowed because of the $N(p)$ factor we included in the definition of $\Psi_{p,\sigma}$ in the first section. But the delta factor for the $\sigma$ labels stays the same. Let's re-derive this here:
\begin{align}
(\Psi_{p',\sigma'},\Psi_{p,\sigma}) &= N(p)(\Psi_{p',\sigma'},U(L(k,p))\Psi_{k,\sigma}) \\
&=N(p) (U(L^{-1}(k,p))\Psi_{p',\sigma'},\Psi_{k,\sigma}) \\
&=\frac{N(p)N^*(p')}{N^*(L^{-1}(k,p)p')}\sum_\alpha D_{\alpha\sigma'}^*(W)(\Psi_{L^{-1}p',\alpha},\Psi_{k,\sigma})\\
&=\frac{N(p)N^*(p')}{N^*(q)}\sum_\alpha D_{\alpha\sigma'}^*(W)\delta^3(\vec{q}-\vec{k}) \delta_{\alpha\sigma}\\
&=\frac{N(p)N^*(p')}{N^*(q)} D_{\sigma\sigma'}^*(W) \delta^3(\vec{q}-\vec{k}).
\end{align}
Here we defined $q=L^{-1}(k,p)p'$.
For the specific $W(L^{-1}(k,p),p')$ here, you can check in a few steps that $D_{\sigma\sigma'}(W)=\delta_{\sigma\sigma'}$. Since $(q-k)=L^{-1}(k,p)(p'-p)$, the above quantity is nonzero only for $p'=p$, so we can write it as
$$
|N(p)|^2 \delta_{\sigma\sigma'} \delta^3(\vec{q}-\vec{k}).
$$
(because when $p=p'$, $q=L^{-1}(k,p)p'=L^{-1}(k,p)p=k$ and $N(k)=1$).
The final step in the normalization is relating the delta function above to $\delta^3(\vec{p}-\vec{p}')$.
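For reference, here is one way to finish that step (my addition, following the standard textbook treatment, e.g. Weinberg §2.5): the combination $p^0\,\delta^3(\vec{p}-\vec{p}')$ is Lorentz invariant, and since $q=L^{-1}(k,p)p'$ and $k=L^{-1}(k,p)p$, we have
$$
k^0\,\delta^3(\vec{q}-\vec{k}) = p^0\,\delta^3(\vec{p}-\vec{p}').
$$
Substituting this above gives
$$
(\Psi_{p',\sigma'},\Psi_{p,\sigma}) = |N(p)|^2\,\frac{p^0}{k^0}\,\delta_{\sigma\sigma'}\,\delta^3(\vec{p}-\vec{p}'),
$$
so the choice $N(p)=\sqrt{k^0/p^0}$ yields the conventional normalization $(\Psi_{p',\sigma'},\Psi_{p,\sigma}) = \delta_{\sigma\sigma'}\,\delta^3(\vec{p}-\vec{p}')$.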
I hope this helped! Let me know if anything above is unclear. | {
"domain": "physics.stackexchange",
"id": 68847,
"tags": "quantum-field-theory, special-relativity, hilbert-space, momentum, normalization"
} |
Is there a spacelike curve connecting two events in Minkowski space? | Question: Assume two events in Minkowski Space (signature $(-,+,+,+)$) $p_1$ and $p_2$. One claims that there is a spacelike curve which connects these two points in Minkowski space.
I have some trouble to prove that in a mathematical framework.
By our definition a spacelike curve is a $C^1$ curve $\xi$ fulfilling the condition $$\eta(\dot{\xi}(s),\dot{\xi}(s))>0,\quad\forall s\in I.$$
I want to argue in the following way:
Consider a $C^1$ curve $\xi(t)=(ct,x(t),y(t),z(t))$ in the domain $t\in[t_1,t_2]$ such that $\xi(t_1)=p_1$ and $\xi(t_2)=p_2$.
One checks that
$$\eta(\dot{\xi}(t),\dot{\xi}(t))=-c^2+\dot{x}^2(t)+\dot{y}^2(t)+\dot{z}^2(t)=-c^2+|v|^2$$
Since $|v|<c$ we conclude $\eta(\dot{\xi}(t),\dot{\xi}(t))<0$, which means that $\xi$ is not spacelike.
Where is the mistake? I'm open to suggestions.
EDIT:
Firstly I considered a straight line in the time component. Now I think one should use an arbitrary function such that the conditions are fulfilled. I call this function $s$.
Hence,
$$\eta(\dot{\xi}(t),\dot{\xi}(t))=-c^2\dot{s}^2(t)+\dot{x}^2(t)+\dot{y}^2(t)+\dot{z}^2(t)=-c^2\dot{s}^2(t)+|v|^2$$
The expression should larger than zero,
$$\dot{s}^2(t)<\frac{|v|^2}{c^2}\Rightarrow \dot{s}(t)<\frac{|v|}{c}$$.
Integration over $t$ yields the possible functions $s$.
Answer: The mistake is that in the equation under "One checks that", the term $-c^2$ shouldn't be there. Instead, one should insert minus the squared value of the $t$-derivative of $c$, and the derivative of a constant is zero. This is assuming that $t$ is really a different name for $s$, an arbitrary parameter along the curve, and the letter $c$ means that the curve is located at $x^0={\rm const}$, an equal time slice.
If the letter $t$ is supposed to mean $x^0$ itself, then the expression for $\xi(t)$ is inconsistent because this curve doesn't lie in any $x^{0}={\rm const}$. Its initial component shouldn't be $c$ but rather $x^0=t$. If that's so, the curve will indeed be timelike (like a trajectory of a massive body in a spacetime), and not spacelike, if $v\lt c$ everywhere along the curve.
After this answer was written down, the question was edited and the first component $c$ was replaced by $ct$. Then indeed, this path in the spacetime is timelike, and not spacelike, as the previous paragraph says. | {
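As a concrete illustration (my addition, not part of the original answer): take $p_1$ at the origin and $p_2=(cT,X,0,0)$ with $X>c|T|$, so the two events are at spacelike separation. The straight line
$$
\xi(\lambda)=\lambda\,p_2,\qquad \lambda\in[0,1],
$$
has constant tangent $\dot{\xi}=(cT,X,0,0)$ and
$$
\eta(\dot{\xi},\dot{\xi})=-c^2T^2+X^2>0,
$$
so it is spacelike everywhere. If $p_2$ is not at spacelike separation from $p_1$, the two events can still be joined by a $C^1$ curve that runs far out in space and back, keeping $|\dot{\vec{\xi}}|>|\dot{\xi}^0|$ at every point.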
"domain": "physics.stackexchange",
"id": 21563,
"tags": "homework-and-exercises, special-relativity"
} |
Create a function with an undetermined number of successive calls | Question: As part of a programming challenge, we are tasked with creating a function with an undetermined number of successive calls. As an example, let's say the function returns simply the sum of the provided arguments, it should work as follows :
sum(4)() // 4
sum(4)(5)() // 9
sum(4)(5)(9)() // 18
sum(4)(5)(9)(1)() // 19
// etc...
The problem is simplified by the allowed empty function call at the end as an indication of end of calls.
I have worked on a solution that does the job but using global variables inside the function itself :
var sum = function (a) {
if (!sum.init) {
sum.total = 0;
sum.init = true;
}
if (!arguments.length) {
sum.init = false;
return sum.total;
}
sum.total += a;
return sum;
};
This solution works but uses state, global variables and function object trickery which is not ideal. My question here is whether there is a way to solve the problem in a purely recursive way.
As a side note, I do not believe the problem can be solved if that last empty call () is not provided, but if I'm wrong please let me know.
Answer:
This solution works but uses state, global variables and function object trickery which is not ideal.
Good hunch. There's certainly some weird stuff going on with your current implementation.
My question here is whether there is a way to solve the problem in a purely recursive way.
Yep! You certainly can. It's a little tricky but we can make light work of it using a couple helper functions identity and sumk.
sumk uses a continuation to keep a stack of the pending add computations and unwinds the stack with 0 whenever the first () is called.
const identity = x => x
const sumk = (x,k) =>
x === undefined ? k(0) : y => sumk(y, next => k(x + next))
const sum = x => sumk(x, identity)
console.log(sum()) // 0
console.log(sum(1)()) // 1
console.log(sum(1)(2)()) // 3
console.log(sum(1)(2)(3)()) // 6
console.log(sum(1)(2)(3)(4)()) // 10
console.log(sum(1)(2)(3)(4)(5)()) // 15
To make sense of this, remember sumk takes a continuation as an argument. When a Number is given, we recurse sumk with a newly created continuation that is the sum of the given Number and whatever number comes next. When the Number input is finally undefined, we end the chain of additions with an empty Number (0). Finally the computation is complete and sent to the original continuation provided by sum, the identity function. Since identity just reflects its input, the computed sum will be the final return value.
I think a line-by-line evaluation really helps in understanding the process of a function. I'll walk you through the evaluation of the sum of 3 numbers. When I use the substitution model, notice I'm alpha-renaming the parameter generated in each lambda.
// instead of:
next => k(x + next)
// you'll see
A => k(x + A)
B => k(x + B)
C => k(x + C)
This renaming of the bound variable just helps you read the code better when the lambdas become nested.
OK, so here we go !
sum(1)(2)(3)()
= sumk(1, identity)(2)(3)()
= (y => sumk(y, A => identity(1 + A)))(2)(3)()
= sumk(2, A => identity(1 + A))(3)()
= (y => sumk(y, B => (A => identity(1 + A))(2 + B)))(3)()
= sumk(3, B => (A => identity(1 + A))(2 + B))()
= (y => sumk(y, C => (B => (A => identity(1 + A))(2 + B))(3 + C)))()
= sumk(undefined, C => (B => (A => identity(1 + A))(2 + B))(3 + C))
= (C => (B => (A => identity(1 + A))(2 + B))(3 + C))(0)
= (B => (A => identity(1 + A))(2 + B))(3 + 0)
= (B => (A => identity(1 + A))(2 + B))(3)
= (A => identity(1 + A))(2 + 3)
= (A => identity(1 + A))(5)
= identity(1 + 5)
= identity(6)
= 6
And finally, if you're not too keen on having sumk in the global scope, you can nest it as an auxiliary function inside sum itself
const identity = x => x
const sum = x => {
const aux = (x,k) =>
x === undefined ? k(0) : y => aux(y, next => k(x + next))
return aux(x, identity)
}
sum(1)(2)(3)() // 6
This was a really fun question and I hope you learn a lot from the answer. If you need any other help, just ask ^_^
EDIT: I see another answer uses currying to achieve the same goal. I didn't originally think to solve the problem this way, so it's cool to see multiple approaches being used. To iterate on that implementation, I might do it something like this
// credit to alebianco for the currying idea
const sum = x => y =>
y === undefined ? x : sum (x + y)
console.log(sum()) // OOPS!
console.log(sum(1)()) // 1
console.log(sum(1)(2)()) // 3
console.log(sum(1)(2)(3)()) // 6
console.log(sum(1)(2)(3)(4)()) // 10
console.log(sum(1)(2)(3)(4)(5)()) // 15
That ends up being quite elegant. But this currying solution actually has a problem with the following corner case
// should return 0, but always returns a function
console.log(sum())
// y => y === undefined ? x : sum (x + y)
Not really a big issue, but the sumk solution I provided above does not suffer from this. | {
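For completeness, that corner case can be closed by letting the outer call test its argument too. This is my own small variation on the curried version above, not part of the original answer:

```javascript
// Curried sum where the very first call may also be empty.
const sum = x =>
  x === undefined
    ? 0                                        // bare sum() now returns 0
    : y => y === undefined ? x : sum(x + y)    // otherwise keep accumulating

console.log(sum())           // 0
console.log(sum(1)(2)(3)())  // 6
```

The trade-off is that sum now special-cases its first argument, which keeps sum() returning 0 instead of a function.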
"domain": "codereview.stackexchange",
"id": 24150,
"tags": "javascript, functional-programming"
} |
What do you see if you look at your own retina with the same eye? (optical feedback) | Question: When you record a video while pointing the viewfinder towards the screen showing a live preview, you create optical feedback: video example. An analogous effect occurs when you turn your microphone towards the speaker.
If you look at your own retina with the same eye, does the brain create a similar effect to the one recorded in the video? Or could it be something different altogether?
Obviously the best way to find out is to 'just go ahead' and see for myself, but it seems as if I don't have the proper conditions to test this at home: The pupil gets too small in the mirror and thus the retina is plain dark. Such an experiment would require eye drops to force widen the pupil and allow for more light onto the retina.
Answer: There was an answer posted here and deleted again - which I think has found the error in my question. Please undelete it :)
What the eye observes once it glances at the retina is a physically constant image of the retina.
In order for the optical feedback loop to work, however, the retina image 'in' the mirror would need to be affected by what is projected onto the retina of the observing eye. Since the retina image in the mirror is constant (there is no link of any kind between the mirrored retina and the observing retina), the image projected onto the retina of the observing eye is constant as well.
The loop is broken and there is no optical feedback. | {
"domain": "biology.stackexchange",
"id": 1798,
"tags": "eyes, human-eye"
} |
Is there a name, and symbol, for how quickly an object changes spacetime coordinates, in each direction? | Question: If an object is changing spacetime coordinates, that are measured from some inertial reference frame A, at some rate in its own reference frame, is there a name and symbol for this spacetime vector?
Answer: Yes, the proper velocity for the spatial coordinates, and the four-velocity if you include time as well. | {
"domain": "physics.stackexchange",
"id": 46065,
"tags": "special-relativity, spacetime, reference-frames"
} |
teleop turtlebot | Question:
hey everybody:
I want to build a node that uses a keyboard or PS3 joystick to control my TurtleBot (using C++ or Python), i.e., a node that sends Twist messages to cmd_vel. I think someone must have done this already, so I'd like to ask: does anyone have code they could pass to me?
PS: I'm using ROS Indigo and I want to build a node that uses the keyboard to control my TurtleBot. I also want to learn the code; does anyone have turtle_teleop_key.cpp in an Indigo version?
Originally posted by forinkzan on ROS Answers with karma: 141 on 2014-12-03
Post score: 0
Original comments
Comment by forinkzan on 2014-12-10:
hello everyone, I found that the Hydro teleop_key.cpp can be used in Indigo.
Comment by Ahyan on 2015-01-07:
"--i want to build a node to using keyboard to control my turtlebot , also i want to learn the code , does anyone have turtle_teleop_key.cpp in INDIGO vision?"
Me too. From where I can collect hydro version of turtlebot_key.cpp? Can anybody help?
Comment by forinkzan on 2015-01-07:
@Ahyan i just use the turtlesim_teleop node in ROS WIKI tutorials ,and modify the "turtle vel/cmd" to "vel cmd" in its code , then it work well - -|||
Answer:
There is already a teleop package for the turtlebot called turtlebot_teleop. You can use the tutorials to use the keyboard to teleoperate your turtlebot. Also there are tutorials for using joysticks, ps3 joystics and xbox360 game controller to teleop your robot.
To use the keyboard, open a new terminal and type
roslaunch turtlebot_teleop keyboard_teleop.launch
The launch file is easy to understand.
Originally posted by Vegeta with karma: 340 on 2014-12-03
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by forinkzan on 2014-12-03:
@ankitravankar thx | {
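For anyone who, like the asker, wants to see what such a node looks like inside, here is a minimal sketch of a keyboard teleop node. This is my own illustration, not the official turtlebot_teleop code: it assumes a sourced ROS Indigo environment providing rospy and geometry_msgs, publishes Twist messages on cmd_vel (remap the topic if your robot expects a different one), and the speed values in KEY_BINDINGS are arbitrary placeholders. The key-to-velocity mapping is kept as plain Python so it can be read on its own:

```python
# Minimal keyboard teleop sketch. The mapping logic is pure Python;
# the ROS wiring in run_teleop() only runs inside a ROS environment.
import os
import sys

KEY_BINDINGS = {
    'w': (0.2, 0.0),    # (linear m/s, angular rad/s): forward
    's': (-0.2, 0.0),   # backward
    'a': (0.0, 0.5),    # turn left
    'd': (0.0, -0.5),   # turn right
    ' ': (0.0, 0.0),    # stop
}

def key_to_velocity(key):
    """Return (linear, angular) for a bound key, or None otherwise."""
    return KEY_BINDINGS.get(key)

def run_teleop():
    # Deferred imports: only needed when actually driving the robot.
    import termios, tty
    import rospy
    from geometry_msgs.msg import Twist

    rospy.init_node('simple_teleop')
    pub = rospy.Publisher('cmd_vel', Twist, queue_size=1)
    settings = termios.tcgetattr(sys.stdin)
    try:
        while not rospy.is_shutdown():
            tty.setraw(sys.stdin.fileno())       # read one keypress, no Enter
            key = sys.stdin.read(1)
            termios.tcsetattr(sys.stdin, termios.TCSADRAIN, settings)
            if key == '\x03':                    # Ctrl-C quits
                break
            vel = key_to_velocity(key)
            if vel is not None:
                msg = Twist()
                msg.linear.x, msg.angular.z = vel
                pub.publish(msg)
    finally:
        termios.tcsetattr(sys.stdin, termios.TCSADRAIN, settings)

if __name__ == '__main__' and 'ROS_DISTRO' in os.environ:
    run_teleop()
```

Run it from a terminal whose stdin is a TTY inside a sourced ROS workspace.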
"domain": "robotics.stackexchange",
"id": 20226,
"tags": "turtlebot, teleop"
} |
E2 reaction - strong base | Question: In E2 reaction, there is more than one product. According to Zaitsev’s rule – “The alkene formed in greatest amount is the one that corresponds to removal of the hydrogen from the β-carbon having the fewest hydrogen substituents.” (Wikipedia)
Now, my question is, how can the base affect the percentage of the minor and major products? Does a stronger base give us more major product and vice versa?
The particular question I’m trying to solve is:
In the E2 reaction of 2-bromo-2,3-dimethyl butane, which base would give more of the minor product and which of the major product?
Answer: It will be more difficult for sterically hindered bases to remove the proton of the β-carbon having the fewest hydrogen substituents, since this carbon atom has more bulky groups around it compared to the others. So base 3, being the less hindered base, will provide a higher percentage of the major Zaitsev’s product and base 2 will provide a higher percentage of the minor product compared to the other bases.
"domain": "chemistry.stackexchange",
"id": 3526,
"tags": "organic-chemistry, acid-base"
} |
RGBDSLAM fails rosmake and rosdep in fuerte | Question:
I am trying to use rgbdslam in fuerte on 12.04 LTS. I was able to overcome some problems, like the dependency on eigen by following these directions.
Now I am having a different error.
bowser@bowser-ros:~/fuerte_workspace$ rosdep install rgbdslam
ERROR: the following packages/stacks could not have their rosdep keys resolved
to system dependencies:
rgbdslam: Cannot locate rosdep definition for [gl2ps]
I tried to install the packages as described in the link, but they were already up to date. Rosmake also fails with following error:
/usr/bin/ld: cannot find -lGLEW
/usr/bin/ld: cannot find -lIL
collect2: ld returned 1 exit status
make[1]: *** [siftgpu] Error 1
make[1]: Leaving directory `/home/bowser/fuerte_workspace/rgbdslam/external/siftgpu/linux'
------------------------------------------------------------------
CMake Error at CMakeLists.txt:103 (MESSAGE):
SiftGPU cannot be compiled. Returned: 2
-- Configuring incomplete, errors occurred!
the full build_output.log is here.
I am trying to learn more about SiftGPU to figure out what problem could be, but I'm not really sure where to look right now, or if siftgpu itself is the problem, or if I'm missing some other dependency.
Originally posted by jerdman on ROS Answers with karma: 35 on 2012-07-25
Post score: 1
Answer:
Seems like you lack a dependency of SiftGPU. Either search on http://packages.ubuntu.com/ for a package containing libGLEW and libIL or disable it at the top of CMakeLists.txt.
Originally posted by Felix Endres with karma: 6468 on 2012-07-25
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by joq on 2012-07-26:
Is this a bug in rgbdslam? The bugs link on the stack page is broken: http://ros.org/wiki/rgbdslam_freiburg
Comment by Felix Endres on 2012-07-30:
rgbdslam doesn't support fuerte yet (and will not until at least september), so I don't consider this a bug I can fix yet. I don't know what is wrong with that page, do you? It definitely is indexed, since ros.org/wiki/rgbdslam comes from a subdirectory. Maybe a problem with the stack.xml? | {
"domain": "robotics.stackexchange",
"id": 10358,
"tags": "slam, navigation, ubuntu, ros-fuerte, ubuntu-precise"
} |
Linear programming vs integer linear programming | Question: Given $A,b$, let $Ax \le b$ be an instance of linear programming on the variables $x=(x_1,\dots,x_n)$. Assume that the constraints $0 \le x_i$ and $x_i \le 1$ are included in $A,b$.
Suppose that there is a feasible solution $x \in \mathbb{R}^n$ to the linear programming problem where $x_1 = 1$. Also, suppose there exists a solution to the corresponding integer linear program, i.e., there exists $x' \in \{0,1\}^n$ such that $Ax' \le b$. Are we guaranteed that there exists a solution with $x'_1=1$, i.e., there exists $x' \in \{0,1\}^n$ such that $Ax' \le b$ and $x'_1=1$?
To put it another way: if we solve the linear program associated with a zero-or-one ILP instance, and find that one of the variables gets assigned to 1 in some solution to the linear program, does it follow that there exists a solution to the ILP instance where we set that variable to 1?
I am skeptical but could not find either a proof or a counterexample.
Answer: The program $x_1 + 2x_2 = 2$ has the integer solution $(0,1)$ and the fractional solution $(1,1/2)$ but not an integer solution of the form $(1,\cdot)$. | {
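A quick brute-force check of this counterexample (my addition; the equality constraint is written directly rather than as two rows of $Ax \le b$):

```python
# Constraint set: x1 + 2*x2 = 2 with 0 <= x1, x2 <= 1.
def feasible(x1, x2):
    return x1 + 2 * x2 == 2 and 0 <= x1 <= 1 and 0 <= x2 <= 1

assert feasible(0, 1)        # the integer solution (0, 1)
assert feasible(1, 0.5)      # the fractional solution (1, 1/2) with x1 = 1
# ...but no 0/1 solution exists with x1 = 1:
assert not any(feasible(1, x2) for x2 in (0, 1))
```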
"domain": "cs.stackexchange",
"id": 17083,
"tags": "linear-programming, integer-programming"
} |
Create AsyncCursor wrapper | Question: Is this good OOP style? Do I need to split this class into two classes? If so, how do I do this? What methods do I need to add to the interfaces?
public sealed class DeferredResultCollection<TResult> : IEnumerable<TResult>, IDisposable
{
private readonly IAsyncCursor<TResult> _asyncCursor;
private readonly IList<TResult> _list;
public DeferredResultCollection(IAsyncCursor<TResult> asyncCursor)
{
_asyncCursor = asyncCursor;
}
public DeferredResultCollection(IList<TResult> list)
{
_list = list;
}
public IEnumerator<TResult> GetEnumerator()
{
if (_list != null)
{
foreach (var item in _list)
{
yield return item;
}
}
if (_asyncCursor != null)
{
for (; _asyncCursor.MoveNextAsync().Result;)
{
foreach (var result in _asyncCursor.Current)
{
yield return result;
}
}
}
}
IEnumerator IEnumerable.GetEnumerator()
{
return GetEnumerator();
}
public void Dispose()
{
if (_asyncCursor != null)
{
_asyncCursor.Dispose();
}
}
}
Answer:
Is it good OOP style?
No. The class contains a IAsyncCursor<TResult> field and a IList<TResult> field,
but depending on how it's constructed,
only one of these will be actually used.
This is a pretty clear indication that we have two classes masquerading as one.
Maybe I need to split this class into two classes?
You hit the nail on the head!
How do I do this? What methods do I need to add to the interfaces?
The interface should contain GetEnumerator and that's it.
That's the common functionality.
Let's call the interface DeferredResultCollection<TResult>.
Next, add two classes that implement this interface,
one using a IAsyncCursor<TResult> and the other a IList<TResult>,
let's call these CursorBasedDeferredResultCollection<TResult> and ListBasedDeferredResultCollection<TResult>.
Optionally, you could add a factory,
let's call it DeferredResultCollections,
with two methods, such that both return objects of the interface type DeferredResultCollection<TResult>,
but one of them takes a IAsyncCursor<TResult> and the other a IList<TResult> as parameter,
and create and return objects with the appropriate implementations.
This has the benefit that users don't need to learn the names of the implementation classes.
In fact they don't even need to know them at all (they can be private). | {
"domain": "codereview.stackexchange",
"id": 16861,
"tags": "c#, object-oriented"
} |
Does this image-processing library break the Single Responsibility Principle? | Question: I'm writing a small library to do some image-processing on GPU for WinRT, however, I'm not sure if such design breaks SRP.
One class loads and saves image, renders and maintains filters - isn't it too much?
Code
public class FilteredImageProvider : IDisposable
{
public event Action ImageSourceChangedEvent;
GPUImage currentImage;
SharpDXRenderer renderer;
FilterContainer filterContainer;
ImageGPULoaderFactory loaderFactory;
public FilteredImageProvider()
{
this.renderer = new SharpDXRenderer();
this.filterContainer = new FilterContainer();
this.loaderFactory = new ImageGPULoaderFactory();
}
public ImageSource Image
{
get { return renderer.GetRendererSource(); }
}
public async Task LoadFromFile(IStorageFile sourceFile)
{
var imageLoader = loaderFactory.GetImageLoader(sourceFile);
if (currentImage != null)
currentImage.Dispose();
currentImage = await imageLoader.LoadImage();
renderer.Load(currentImage);
ImageSourceChangedEvent();
}
public async Task SaveToFile(IStorageFile destinyFile)
{
var imageSaver = loaderFactory.GetImageLoader(destinyFile);
var deviceManager = renderer.GetDeviceManager();
await imageSaver.SaveImage(currentImage, deviceManager);
}
public void Render()
{
var filterCompilator = new FilterCompilator();
var renderedImage = currentImage.GetImageAsEffect();
var compiledEffect = filterCompilator.CompileFiltersToEffect(renderedImage, filterContainer);
renderer.Render(compiledEffect);
}
public void AddFilter(IFilter filter)
{
filterContainer.AddFilter(filter);
}
public void RemoveFilter(IFilter filter)
{
filterContainer.RemoveFilter(filter);
}
public void Undo()
{
filterContainer.Undo();
}
public void Redo()
{
filterContainer.Redo();
}
public void Dispose()
{
if (currentImage != null)
currentImage.Dispose();
renderer.Dispose();
}
}
Answer: Private Fields
I prefer to prefix my private fields with an underscore, and to make them as readonly as possible. If you don't like the underscore, fine - but then be consistent with this, and don't use it only in your constructor - use it whenever you're referring to a private field. It greatly helps in telling private fields apart from parameters.
Personally I don't like sprinkling this everywhere in my code, so I prefer the underscore; I also like my access modifiers explicit:
public class FilteredImageProvider : IDisposable
{
private readonly SharpDXRenderer _renderer;
private readonly FilterContainer _filterContainer;
private readonly ImageGPULoaderFactory _loaderFactory;
private GPUImage _currentImage;
//...
Constructor
The readonly fields are still assigned in the constructor:
public FilteredImageProvider()
{
_renderer = new SharpDXRenderer();
_filterContainer = new FilterContainer();
_loaderFactory = new ImageGPULoaderFactory();
}
These three should be injected as constructor arguments - creating these instances within the constructor breaks SRP, at least if you're applying DI principles. SRP is only the S of SOLID. By taking these instances in as constructor arguments, you greatly facilitate the rest of SOLID.
You're mostly exposing cherry-picked encapsulated methods, that works for me. However I find that you shouldn't be newing up the FilterCompilator like you're doing. It should be constructor-injected, and perhaps made a dependency of FilterContainer, or of another type that would deal with everything that can be done with a Filter.
Note: If _renderer is injected in the constructor, then you shouldn't call _renderer.Dispose() since you've put that burden onto the caller (ideally your favorite IoC container). | {
"domain": "codereview.stackexchange",
"id": 5568,
"tags": "c#, image, library"
} |
Viterbi-like algorithm suggesting top-N probable state sequences implementation | Question: Traditional Viterbi algorithm (say, for hidden Markov models) provides the most probable hidden state sequence given a sequence of observations.
There probably is an algorithm for decoding top-N probable hidden states sequences (k-shortest paths or the like).
But is there a good implementation anywhere?
Thanks.
UPD (copy-pasted from comments): Viterbi algorithm decodes the MOST probable sequence given observations + model. That is, argmax_x p(state_1,...,state_n|obs_1,...,obs_n). What I am asking for is an implementation of how to get the 'next argmax-es' with possibly slightly smaller probabilities GIVEN the observed sequence.
Answer: Apparently, I misunderstood your question.
There are several methods for finding the k-best paths with extending versions of the Viterbi algorithm.
My first advice would be to look at this question on SO that is similar to yours and has a good illustrated answer.
Then, I would refer you to two articles/thesis that are publicly available and from where one can extend his/her research. (Disclaimer: these references may not be the "best" one but I've chosen them because they are publicly available and provide a good number of reference to deepen research on the topic)
The k-best paths in Hidden Markov Models. Algorithms and Applications to Transmembrane Protein Topology Recognition. Thesis by Golod, available here. (a thesis like this gives countless references on the topic)
Decoding HMMs using the k-best paths: algorithms and applications by Brown and Dolog, available here (a partial short version of what you will find in the aforementioned thesis)
My previous answer:
For anyone coming across this question looking for a way of computing some of the typical state sequences of an HMM (as I first thought this question was about), just know that such a concept of most probable sequence without specifying data is not really something used in any theory about HMMs, as far as I know. However, one can follow these steps:
As a first try, I would implement something like this:
Get the state at time $t=0$
Draw an initial state $s_0$ from the initial state probability mass function (pmf)
Get the state at time $t+1$
Draw the new state $s_1$ from the pmf defined by the $s_0$-th row of the transition matrix
Repeat this step as many times as needed to draw states up to $s_N$
Then you can repeat the entire procedure $X$ times in order to get as many sample paths as you wish.
This is very fast and easy to implement and it will give you what you want. Many scientific libraries in many languages have a built-in function for drawing a random sample from a pmf. | {
"domain": "datascience.stackexchange",
"id": 2085,
"tags": "algorithms, graphs, markov-process, sequence, graphical-model"
} |
Do effects of caffeine on human body change with habitual use? | Question: I've been reading about homeostatic nature of a lot of neurobiological processes - the brain is trying to maintain a balance by desensitizing receptors, re-uptaking and breaking down neurotransmitters.
With this in mind, I'm interested in what happens to the receptors in the brain with chronic use of caffeinated beverages. Let's say an occasional caffeine drinker likes the cognitive boost of caffeine and starts to consume it habitually/chronically - every day. Will the caffeine drinker experience the same effects day after day, or will the effect change over time?
If there is a change in how the body responds to caffeine, are there any time frames that can be used to estimate when the change takes place - is it X days/weeks of habitual use?
Thank you for your input!
Answer: I don't have any references off hand (though if you like, I will find them). The body acclimates to caffeine intake, becoming desensitized to chronic caffeine use, analogous to chronic narcotic use. Habitual coffee drinkers, soda drinkers, and so forth must achieve the same level of daily caffeine intake to maintain a normal state of being. Taking in less caffeine leads to withdrawal symptoms, and taking in more caffeine leads to the familiar stimulatory effects. As for the time frame of addiction, this likely varies, but I hazard a guess it's on the order of a few days to a week. Fortunately, caffeine is not nearly as strong and addictive as narcotics; however, it is the most widespread.
Edit: Caffeine acts on adenosine receptors with high affinity and low specificity. Here is a review article I found that may help you out.
Dunwiddie TV, Masino SA. The role and regulation of adenosine in the central nervous system. Annu Rev Neurosci. 2001;24:31-55. | {
"domain": "biology.stackexchange",
"id": 686,
"tags": "human-biology, neuroscience, metabolism"
} |
Are these claims of "revolutionising" understanding of human vision and hearing valid? | Question: I've started a hobby machine vision project (and posted some questions to this end on other SE sites) and on a side track, also been looking at relevant research in human vision (and partly, hearing).
I came across the work of one "James T Fulton" through an excerpt of a book which claims to have "revolutionised" our understanding of human vision and hearing.
Here are the respective links:
http://neuronresearch.net/vision/
http://www.neuronresearch.net/hearing/
I tried cross-verifying these claims via Google, but nothing turned up on searching either for the author or the book (apart from some Amazon links and very obscure references).
Considering the controversial nature of the claims made here, I would have imagined some level of debate, but the silence I found on the Internet is puzzling. Puzzling, because the sheer level of DETAIL in these books prevents me from dismissing the claims altogether as well.
Can someone validate some of the content from these books so I can get an idea of the overall legitimacy of the work (and decide if I should continue reading, or not). Below might be a good link to start from:
http://neuronresearch.net/vision/pdf/11Biophenom.pdf
Adding a few examples:
The simplest one - is that the Principle of Univariance is not entirely correct. This is examined on pages 15 through 17 at this link (where I read it):
http://neuronresearch.net/vision/pdf/11Biophenom.pdf
There is this claim quoted verbatim from the site:
"The theory shows that the ARCHITECTURE OF ALL VISION IS TETRACHROMATIC. Although traditionally called trichromats, it is shown that HUMANS ARE BLOCKED TETRACHROMATS"
Caps in original. I just pasted it here to give an idea of the tone of some of the text.
This specific claim is "proven" here (in short):
http://neuronresearch.net/vision/files/tetracomparison.htm
And a longer form of the text is linked in the same page
There is also a theory proposed on why cochlea are coiled:
http://neuronresearch.net/hearing/pdf/coiledcochlea.pdf
Thanks!
Answer: OK, I'll field this one. I'll ignore any of the tell-tale signs of hokum such as writing in ALL CAPS.
Nevertheless, it's a lot of hokum. It's true that he goes into a lot of detail and I'm sure his math looks nice but the fact is that it's not grounded in reality. I would consider myself to be something of an expert (in training) in the field of phototransduction, so I'll focus on claims related to it. However, if he's as sloppy with the rest of the visual process as he is with phototransduction, then his claims are entirely bogus. I'll just be working from the Synopsis section.
Where to start...How about the first sentence of the "Background" section:
This work originated in the 1960's with the realization that rhodopsin, as then defined, did not meet the requirements for being a chromophore. It was particularly deficient in the structural characteristics required of a good chromophore.
FALSE. Rhodopsin is not a chromophore and, to my knowledge no one has ever claimed it to be a chromophore. Rhodopsin is a protein. It is coupled with a chromophore, retinal, a form of vitamin A. OK, so if this is the foundation of his research, he is off to a bad start.
The basic assumption had been that the residues of a destructive process could be easily returned to their original state and that state was a simple chemical bond involving only two components in a single molecule...It was assumed that one of the residues was the alcohol or aldehyde of Vitamin A. The other residue was assumed to be a protein and was given the name opsin. Valiant, but unsuccessful, efforts were made to define the nature of the molecule and achieve the formation of rhodopsin in the laboratory.
It absolutely is possible to reconstitute rhodopsin with the chromophore in the lab. This has been going on since at least 1983. Also, the crystal structure of rhodopsin, including the chromophore, was resolved in 2000.
A new class of retinoids was defined by the author at that time, the Rhodonines. This class met the requirements of physical chemistry and photochemistry for a high performance chromophore. However, it was difficult to obtain acceptance of the Rhodonines as a replacement for Rhodopsin within the vision research community.
I have never heard of Rhodonines and web searches only result in his page. Due to his confusion of terminology, I don't know if he is proposing them to be proteins ("replacement for Rhodopsin") or simple chemical molecules ("a high performance chromophore"). If it were the former, one would wonder why Rhodonines were not identified in a comprehensive proteomics assay of the rod outer segment. One would also wonder why rhodopsin is so highly expressed in the outer segment (on the order of 1e8 molecules in mammals and 1e9 molecules in amphibians, more than any other protein in that compartment). If it were the latter, one would wonder why there is an entire biochemical cycle dedicated to recycling retinal that takes place just outside the outer segment.
I'll ignore whatever physical state these Rhodonines are supposedly in since they don't exist. I'll also ignore this "Activa" thing. He owns a patent on it, which doesn't bode well for its existence in nature.
The visual system is a very sophisticated system. It uses many of the most sophisticated methodologies known to man at the start of the 21st Century. Failure to recognize these mechanisms and methodologies leads to an inadequate understanding of the overall process.
I'll agree there, with the exception of the use of the word "methodologies"! This isn't engineering, this is biology.
The visual system employs a number of time related processes that have not previously been addressed in the literature. To understand these processes, it is necessary to employ "complex algebra" in the differential equations arena. Employing these techniques provides the complete solution to the overall photoexcitation/de-excitation process within the Outer Segment of the photoreceptor
Phototransduction has a rich history of mathematical modeling. And yes, they involve "complex algebra in the differential equations arena." An excellent review of the first few decades of it can be found in the bible: Phototransduction in vertebrate rods and cones: molecular mechanisms of amplification, recovery and light adaptation. Since that publication, there have been two very nice lineages of models: one comprehensive one that focuses on the proteins (1, 2, 3) and one that focuses on spatial accuracy and stochastic interactions and gives more attention to second messengers (1, 2, 3).
It has also been compounded by the historically poor preparation of the researchers in the field of mathematics.
I challenge him to read one of DiBenedetto's modeling papers and not glow in admiration of his mathematical prowess.
The goal has been to present an overall view of the visual system in a defendable mathematical context and a global scientific framework. This goal has required the introduction of techniques and mechanisms not normally found in the literature of vision. This has been particularly true in two areas, the definition and detailing of the initial photodetection process and a similar detailing of the mechanisms of neural signal transmission. In both cases, the dominance of chemically based concepts is shown to have impeded progress. The description of the visual system, including the neural system, as an entirely electronic, more precisely electrolytic, based system leads to much greater insight into the operation of the visual system than any chemically based theory can offer.
I had to quote in full here. This is entirely bogus. The "initial photodetection process" (phototransduction) is entirely chemical in nature. All of the main players in the process are known and their interactions are largely well understood. Viewing the system as entirely "electronic" ignores the mountains of evidence of all of the proteins participating in it.
Probably the most venerable is that of a dichotomy between types of photoreceptors, the rods and cones. The theory demonstrates in excruciating detail that there is only one functional type of photoreceptor cell and that it is associated with one of four types of chromophore. These chromophores are sensitive in the ultraviolet, the short, the medium and the long wavelength portions of the visual spectrum of light.
The problem with this is that if you look at a retina, you can see rods and cones. You can generate knock-out animals that have only one or the other. Those with only rods cannot handle bright visual stimuli; those with only cones cannot see in the dark. More damning is that you have two distinct phototransduction cascades, separated by evolution dating back to the origins of vertebrates. They share only a few proteins in common and otherwise have unique paralogs performing similar duties. You can isolate individual rod cells and, if you're clever and you have the right species, individual cone cells (they're much smaller and harder to harvest in animals like mice or cows). You can measure their electrophysiological characteristics and find that they are extremely different: cone responses are fast while rods can respond to a single photon of light. Their morphologies are completely different: the rod outer segment is filled with lipid bilayer disks, while that of the cone has a series of in-folds.
As for the whole tetrachromat thing, well, again I have to assume that he's referring to opsins when he talks about chromophores. Thanks to genomics, we can be confident that there are only three cone opsin varieties in old world apes (and one rod opsin). The rest of mammals only have two. If you get into other vertebrates, you'll find more...if you get into invertebrates, you'll find ridiculous numbers. There's not much to say here. A fourth cone opsin protein simply does not exist in the human genome.
So, that's just a quick overview. Do not worry about this guy's research. It is unsubstantiated and exists in a vacuum outside of the rest of the vision research world. I do wish that there were a way for him to work with others. I do wish there were a way for him to integrate current knowledge into his work. But the problem is that his work as it stands simply seems to ignore the wealth of data generated on the visual system and instead treats it as some theoretical circuit. | {
"domain": "biology.stackexchange",
"id": 1161,
"tags": "neuroscience, vision, eyes, hearing, human-ear"
} |
About local planner? | Question:
Hi all,
I'd like to implement a new local planner in move_base.
My new planner is based on a Fuzzy-PID controller and is implemented in c++.
My question is: can I integrate it into the ROS navigation stack, and how difficult or easy is it to do?
waiting for your answers and your help.
Best regards.
Originally posted by assil on ROS Answers with karma: 41 on 2014-09-02
Post score: 0
Answer:
To integrate a custom local planner with the navigation stack is fairly easy: You need to write a plugin for move_base that adheres to the nav_core::BaseLocalPlanner interface.
You can find documentation here and as an example you can take a look at the code of the dwa_local_planner (pay special attention to dwa_planner_ros.h and dwa_planner_ros.cpp)
Originally posted by Martin Peris with karma: 5625 on 2014-09-02
This answer was ACCEPTED on the original site
Post score: 3 | {
"domain": "robotics.stackexchange",
"id": 19273,
"tags": "navigation, base-local-planner"
} |
What is the centripetal acceleration for non-uniform speed? | Question: In NCERT physics, page 122, example 6.7, there is an argument next to equation 6.12, i.e., $T_A - mg = \frac{mv_0^2}{L}$, which says that at the lowest point the centripetal force equals $\frac{mv_0^2}{L}$, i.e., that the centripetal acceleration is $\frac{v^2}{r}$. I think this isn't justified, since the speed of the ball is constantly changing, so we can't use the formula $\frac{v^2}{r}$ for the centripetal acceleration and hence for the force. I know the derivation of the centripetal acceleration formula for uniform circular motion from Halliday, Resnick and Walker, and that formula is derived on the assumption that speed is constant. So I want confirmation: is my objection to the logic being used correct or not?
Answer: In polar coordinates, the acceleration vector for planar motion is given by
$$\mathbf a=(\ddot r-r\dot\theta^2)\hat r+(r\ddot\theta+2\dot r\dot\theta)\hat\theta$$
If our motion is along a circle, we have $\dot r=\ddot r=0$, so our acceleration reduces to
$$\mathbf a=-r\dot\theta^2\hat r+r\ddot\theta\hat\theta$$
The centripetal acceleration is the radial component of the acceleration
$$a_c=r\dot\theta^2$$
Using $\dot\theta=v/r$ we end up with the familiar result
$$a_c=\frac{v^2}{r}$$
Notice how we didn't assume anything about the speed $v$. This expression is valid for when $v$ is not constant. We will just have a changing centripetal acceleration, and we will also have a non-zero tangential acceleration as $\ddot\theta=\dot v/r\neq 0$. | {
"domain": "physics.stackexchange",
"id": 62802,
"tags": "newtonian-mechanics, work, centripetal-force"
} |
Change NDT matching TF transform frame in Autoware | Question:
I am using Autoware 1.14 in ROS Melodic, installed from source and built with CUDA.
I want to use the lidar localizer algorithms with my ROS system that contains another autonomous car package.
Specifically, my system's TF tree is as follows: map -> odom -> base_footprint -> base_link ->the rest of the TF frames.
When I am using the NDT Matching algorithm, it provides a transform map -> base_link, which completely changes my TF tree and affects several other functionalities.
So, I thought to change the NDT algorithms's (matching and mapping) source code, so that they provide a transform from map to odom instead of base_link.
Does this sound correct? Or is there any other way to make this work?
Thanks in advance.
Originally posted by kosmastsk on ROS Answers with karma: 210 on 2020-08-17
Post score: 0
Answer:
In order to solve this problem, I made the lidar_localizer nodes totally independent from the rest of the Autoware packages.
It is important to have the odom --> base_link transform, which is provided by Gazebo, another simulator, or the robot itself. So, besides changing the source code to publish map --> odom, another change needs to be made.
Before publishing the transform, you should subtract odom-->base_link from the map-->base_link transform calculated by the code, and then publish the result as map-->odom.
Originally posted by kosmastsk with karma: 210 on 2020-09-16
This answer was ACCEPTED on the original site
Post score: 1 | {
"domain": "robotics.stackexchange",
"id": 35428,
"tags": "ros-melodic"
} |
Difference between two forms of benzothiadiazole | Question: Can someone please advise / point me to information regarding the difference between these two compounds:
https://www.combi-blocks.com/cgi-bin/find.cgi?HC-3087
https://www.combi-blocks.com/cgi-bin/find.cgi?JH-8440
The reason for this question is that rdkit assigns them the same 'inchikey', as if they were identical, or spontaneously interconverting substances (like tautomers).
But they aren't tautomers, as there is no atom being transferred, only electrons; and they are distinct commercial substances, sold in fairly large quantities, so I would guess they are individually sufficiently stable.
Many variously decorated commercial compounds containing either of these moieties are also found, reinforcing the above conclusion.
So it would seem that the machinery that calculates the inchikey is wrong in treating these two molecules as identical.
Any ideas?
EDIT summary of insights from the comments
According to both the people who commented so far, these two species are resonance structures of the same molecule.
The fact that different cas numbers, names, catalogue numbers from a single vendor are found for the two, would therefore seem to be incorrect, misleading and unjustified.
On the contrary, the fact that rdkit returns the same inchikey for the two seems to be correct.
Answer: In practice, there is no difference between the two structures - they represent the same compound if you were to purchase them. That said, in terms of "correctness", the non-hypervalent structure is preferred, even though its Lewis structure still does not accurately represent the full electronic structure of the molecule. You could add some non-hypervalent charge-separated resonance forms which improve the description, as shown below.
The overarching truth is that occasionally you can find compounds with unusual Lewis structures available for purchase because they weren't sanity-checked. This is especially likely with combinatorial chemistry suppliers as they will often automatically populate their catalogues with thousands of structures. For example, sometimes you can find a cyclohexa-2,4-diene-1-one on offer, though on further inspection a chemist would realize it is actually a phenol tautomer, which typically only exists in part-per-trillion amounts.
But with benzothiadiazole specifically, the problem goes a bit deeper. While a trivial and satisfying Lewis structure can be assigned to it (or more precisely, benzo[c][1,2,5]thiadiazole), what about the related structure benzobis(thiadiazole) (more precisely, benzo[1,2-c:4,5-c']bis([1,2,5]thiadiazole)), which contains another fused benzothiadiazole ring opposite to the first? This is a well-known compound, of which hundreds of variants have been explored in organic electronics.
It turns out in this case that there is no "trivial" Lewis structure which can be assigned to the compound, at least not without a hypervalent sulfur atom. And curiously, that's what the literature has settled on - a quick Google image search of benzobisthiadiazole will turn up dozens of articles using the hypervalent structure shown below. It has become the standard in the field, and nobody particularly cares. It is possible to draw a non-hypervalent charge-separated resonance form, but it is ignored. The true electronic structure is even more complex, apparently including non-negligible biradicaloid character, which can be shown on a non-hypervalent Lewis structure.
As a last point, x-ray crystallographic data of benzobisthiadiazole compounds show that the two thiadiazole rings are typically completely equivalent (there is no single hypervalent ring, as the widely-drawn structures suggest), indicating that it really all boils down to Lewis structures just being limited tools at representing reality. You always have a trade-off between simplicity of representation and accuracy, and sometimes this trade-off is particularly unsatisfying. | {
"domain": "chemistry.stackexchange",
"id": 17591,
"tags": "organic-chemistry, cheminformatics"
} |
Area of the triangle generated by the hands of a clock | Question: Problem explanation:
Let’s suppose we have a clock that has an hour hand 3 units long, a minute hand 4 units long, and a second hand 5 units long. The hour hand moves once every hour, the minute hand moves once every minute, and the second hand moves once every second.
Therefore, exactly every second, the triangle defined by the ends of the hands changes its area.
Which is the maximum area between two given times?
Input: A sequence of hours in sets of 2 in the format hh:mm:ss.
Output: The maximum area defined by the ends of the hands in square units.
For a more comprehensive explanation: https://jutge.org/problems/P17681_en/statement
My code:
I have written a function that takes as arguments the position of the hands in radians and returns its area (trigonometrically intensive).
// Note: returns the SQUARED area (Heron's formula without the final sqrt);
// the caller applies sqrt() just before printing the result.
double area(double secRad, double minRad, double houRad){
double alfaSM = atan( (5-4*cos(abs(secRad-minRad)))/(4*sin(abs(secRad-minRad))) );
double alfaSH = atan( (5-3*cos(abs(secRad-houRad)))/(3*sin(abs(secRad-houRad))) );
double alfaMH = atan( (4-3*cos(abs(minRad-houRad)))/(3*sin(abs(minRad-houRad))) );
double distSM = abs(5*sin(alfaSM)+4*sin(abs(secRad-minRad)-alfaSM));
double distSH = abs(5*sin(alfaSH)+3*sin(abs(secRad-houRad)-alfaSH));
double distMH = abs(4*sin(alfaMH)+3*sin(abs(minRad-houRad)-alfaMH));
double s = (distSM+distSH+distMH)/2;
return s*(s-distSM)*(s-distSH)*(s-distMH);
}
My main function reads the input from the user and calculates the area for each second in the range of time the user has entered, stores the maximum area and finally prints it.
int main(){
char z;
double hI,mI,sI, hF,mF,sF;
while(cin >> hI >> z >> mI >> z >> sI >> hF >> z >> mF >> z >> sF){
double maxArea = 0;
for(int i = 3600*hI+60*mI+sI; i <= 3600*hF+60*mF+sF; i++){
double cacheArea = area( 2*pi*((double)(i%60)/60) , 2*pi*(double)((i%3600)/60)/60, 2*pi*(double)((i%43200)/3600)/12);
if(cacheArea > maxArea) maxArea = cacheArea;
}
cout << fixed << setprecision(3) << sqrt(maxArea) << endl;
}
}
Write this at the beginning so you can test the program:
#include <iostream>
#include <cmath>
#include <iomanip>
#define pi 3.14159265358979323846
using namespace std;
Where am I now and what is left?
The algorithm of the code above works as intended, but it isn't efficient enough. The online judge throws a time limit exceeded error.
I have managed to reduce the number of area calculations needed by observing that if the range of time is more than an hour, we just need to check all the combinations for the first hour, since the maximum area will occur there and will appear again every hour (the relative positions repeat).
I have also implemented a variable that checks if the separation between hands of the clock in the time its checking is smaller than the separation of the hands in the maximum area found until that moment. If it is, the program skips the area calculation.
These changes seem to improve performance considerably (from 43,200 area calculations to ~1,500 in the complete spin of 12 hours), but the judge still doesn't admit it.
I seem to need a cleaner solution, a mathematically prettier one, but I'm struggling to find one.
Answer: The first thing to do is to get rid of trig functions which are slow (15-30 cycles) and replace them with precomputed tables. Since the hands have only 60 possible positions each this is a 60-slot table (plus a bit extra because who bothers with bounds checking, honestly?.... cough)
If you feel like cheating, you can also use a precomputed table in the source, but that would be a bit wimpy IMO.
Rotational Symmetry
Let's create a function A(h,m,s) which computes the area depending on hour, minute and second.
If we look at the clock face and rotate the entire thing (three clock hands included) by any angle, then the area will not change. Thus we can put the hour hand on "0" (the other hands also follow).
Therefore,
$A(h,m,s) = A(0, (m-5h) \bmod 60, (s-5h) \bmod 60)$
Thus we can make a neat little precomputed table area[minute][second], filled with 3600 values, and use that! Populating this takes negligible time relative to the program's libc initialization time, so I did not bother to optimize it further. It is just a bunch of MACs which modern cpus crunch at insane speed anyway.
Additionally, a per-minute max area is computed and stored in array marea. This corresponds to the max over h=0, m, s=[0..59] and it can be extended to any h:m by rotating as explained above.
Counting
The rest is very simple, We start, say at 00:01:02 and end at 00:35:15.
First we increment the seconds from 02 to 59 until we get to 00:02:00
Then we increment the minutes, until we reach 00:35:00
Then we increment the seconds again until we reach the final time.
At each step we pick the area from the precomputed table (or the per-minute precomputed max) and compare it to our running maximum.
It's pretty fast since it does basically nothing except a few int ops, a float cmp, and there should be a few CMOVs if the compiler does its job. No trig at all...
This ugly piece of code should thus do the deed...
#include <iostream>
#include <stdio.h>
#include <cmath>
#include <iomanip>
#define pi 3.14159265358979323846
using namespace std;
float sin_lut[76];
inline float sint( int x ) { return sin_lut[x]; }
inline float cost( int x ) { return sin_lut[15+x]; }
float sarea[60][60]; // area for 00:m:s
float marea[60]; // max area for 00:m and all seconds
float globalmax = 0.0;
// this is to avoid typing tons of if()
struct Maxi
{
float _max;
Maxi( float x ) { _max=x; }
// adds the area of h:m:s to the running max
void hms( int h, int m, int s )
{
m -= h*5; // align hour hands on zero by rotating the dial
s -= h*5;
m = (m<0) ? (m+60) : m; // bounds check
s = (s<0) ? (s+60) : s;
float x = sarea[m][s];
if(x>_max) _max=x;
};
// adds the area of h:m to the running max (with all possible seconds)
void hm( int h, int m )
{
m -= h*5; // align hour hands on zero by rotating the dial
m = (m<0) ? (m+60) : m; // bounds check
float x = marea[m];
if(x>_max) _max=x;
};
};
float get_max_area( int hI, int mI, int sI, int hF, int mF, int sF )
{
Maxi maxa( sarea[mI][sI] ); // this is our running max area
//~ printf( "\n\nSearch %02d:%02d:%02d - %02d:%02d:%02d\n\n", hI,mI,sI, hF,mF,sF );
// if it makes one complete turn it will explore all positions...
// thus return the global maximum.
int tI = hI*3600 + mI*60 + sI;
int tF = hF*3600 + mF*60 + sF;
if( tF-tI >= 3600 )
return globalmax;
// increase time in clever steps...
if( hI < hF || mI < mF )
{
// go to the end of the current minute.
while( sI < 60 )
maxa.hms( hI, mI, sI++ );
sI = 0;
mI++;
if( mI>=60 )
{
mI=0;
hI++;
}
}
// Increase minutes...
while( hI < hF || mI < mF )
{
maxa.hm( hI, mI++ );
if( mI>=60 )
{
mI=0;
hI++;
}
}
// finish the last minute...
while( sI <= sF )
{
maxa.hms( hI, mI, sI++ );
}
return maxa._max;
}
int main()
{
/* Precalculate sine table because sin is slow ;) */
for( int i=0; i<=15; i++ )
{
float s = sin( i* (pi/30.) );
sin_lut[i] = sin_lut[30-i] = sin_lut[i+60] =s;
sin_lut[i+30] = sin_lut[60-i] = -s;
}
// for( int i=0; i<60; i++ ) printf( "%d %1.08f %1.08f\n", i, sint(i), cost(i) );
/* Precompute area for all minutes and second positions
for zero hour. We can use this to determine the area for
any h:m:s by rotating the whole clock face to put the hour
hand on zero, so this covers all cases.
Note: symmetry could be used here, also the constants could be optimized
but this whole thing takes less than 1ms, much less time than libc startup/init
so... who cares?
*/
for( int m=0; m<60; m++ )
{
float x2 = 4.0*sint(m);
float y2 = 4.0*cost(m);
float maxarea = 0.0;
for( int s=0; s<60; s++ )
{
float x3 = 5.0*sint(s);
float y3 = 5.0*cost(s);
// cross product area would be: 0.5*(-x2*y1 + x3*y1 + x1*y2 - x3*y2 - x1*y3 + x2*y3)
// however, conveniently, hour is zero, so x1=0 and y1=3...
float a = abs( 0.5* (-x2*3.0 + x3*3.0 - x3*y2 + x2*y3));
sarea[m][s] = a;
if( a>maxarea )
maxarea = a;
//~ printf( "00:%02d:%02d %2.08f\n", m,s,area[m][s] );
}
marea[m] = maxarea;
//~ printf( "00:%02d:xx %2.08f\n", m,maxarea );
}
// and the max during one hour, which is also the global max...
for( int i=0; i<60; i++ )
if( marea[i]>globalmax )
globalmax = marea[i];
// process input
char z;
int hI,mI,sI, hF,mF,sF;
while(cin >> hI >> z >> mI >> z >> sI >> hF >> z >> mF >> z >> sF)
{
float m = get_max_area( hI, mI, sI, hF, mF, sF );
//~ printf( "%02d:%02d:%02d - %02d:%02d:%02d - %.03f\n", hI,mI,sI, hF,mF,sF, m );
printf( "%.03f\n", m );
}
}
Oops I forgot the review.
The point here was to kinda reverse-engineer the judging process. For just a few input lines, even your implementation which uses lots of slow trig should be fast enough on a modern cpu to take negligible time compared to program init.
Therefore, if you get a time limit error, this means the input file must contain tons of lines.
Therefore, precalculated look-up tables are a good fit.
And using a non-constant time step (first second, then minutes) reduces the number of iterations substantially, but it does require a precalculated maximum area per each minute table. You could include it as text into the source code, but that would be lame... Thus it is best to precalculate it all and then rip through the input file.
I would not be surprised if cin>> was the slowest part of the program... | {
"domain": "codereview.stackexchange",
"id": 28551,
"tags": "c++, programming-challenge, datetime, time-limit-exceeded, computational-geometry"
} |
Can anyone give an example of a physically real process of entanglement? What instrument do you use? | Question: Can anyone give an example of a physically real and common process of entanglement?
Please, not "only" a mathematical expression. I need a physically real example.
Answer: Certain crystals can split a laser beam of visible light into two beams of infrared. For example, hypothetically, each photon of the 0.5um wavelength (green) can be split into two entangled photons of 1.0um (near infrared). BBO or beta-barium borate is one example of such a crystal.
"domain": "physics.stackexchange",
"id": 43298,
"tags": "quantum-entanglement"
} |
rosrun can't find camera1394_node or camera1394_nodelet | Question:
I am running ROS Indigo on a fresh install of Ubuntu 14.04 (Linux turtlebot 3.13.0-35-generic #62-Ubuntu SMP Fri Aug 15 01:58:42 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux)
The only ROS package is the 1394-master branch as downloaded from github: https://github.com/ros-drivers/camera1394
When I try to run the camera1394_node, ros can't find the executable:
turtlebot@turtlebot:~/ros$ rosrun camera1394 camera1394_node
[rosrun] Couldn't find executable named camera1394_node below /home/turtlebot/ros/src/camera1394-master
Its the same with camera1394_nodelet:
turtlebot@turtlebot:~/ros$ rosrun camera1394 camera1394_nodelet
[rosrun] Couldn't find executable named camera1394_nodelet below /home/turtlebot/ros/src/camera1394-master
I don't see any errors when I catkin_make:
$ cd ~/ros/ && catkin_make
Base path: /home/turtlebot/ros
Source space: /home/turtlebot/ros/src
Build space: /home/turtlebot/ros/build
Devel space: /home/turtlebot/ros/devel
Install space: /home/turtlebot/ros/install
####
#### Running command: "cmake /home/turtlebot/ros/src -DCATKIN_DEVEL_PREFIX=/home/turtlebot/ros/devel -DCMAKE_INSTALL_PREFIX=/home/turtlebot/ros/install" in "/home/turtlebot/ros/build"
####
-- The C compiler identification is GNU 4.8.2
-- The CXX compiler identification is GNU 4.8.2
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Using CATKIN_DEVEL_PREFIX: /home/turtlebot/ros/devel
-- Using CMAKE_PREFIX_PATH: /opt/ros/indigo
-- This workspace overlays: /opt/ros/indigo
-- Found PythonInterp: /usr/bin/python (found version "2.7.6")
-- Using PYTHON_EXECUTABLE: /usr/bin/python
-- Using Debian Python package layout
-- Using empy: /usr/bin/empy
-- Using CATKIN_ENABLE_TESTING: ON
-- Call enable_testing()
-- Using CATKIN_TEST_RESULTS_DIR: /home/turtlebot/ros/build/test_results
-- Looking for include file pthread.h
-- Looking for include file pthread.h - found
-- Looking for pthread_create
-- Looking for pthread_create - not found
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE
-- Found gtest sources under '/usr/src/gtest': gtests will be built
-- Using Python nosetests: /usr/bin/nosetests-2.7
-- catkin 0.6.9
-- BUILD_SHARED_LIBS is on
-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-- ~~ traversing 1 packages in topological order:
-- ~~ - camera1394
-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-- +++ processing catkin package: 'camera1394'
-- ==> add_subdirectory(camera1394-master)
-- Using these message generators: gencpp;genlisp;genpy
-- Boost version: 1.54.0
-- Found the following Boost libraries:
-- thread
-- Found PkgConfig: /usr/bin/pkg-config (found version "0.26")
-- checking for module 'libdc1394-2'
-- found libdc1394-2, version 2.2.1
-- camera1394: 0 messages, 2 services
-- Configuring done
-- Generating done
-- Build files have been written to: /home/turtlebot/ros/build
####
#### Running command: "make -j4" in "/home/turtlebot/ros/build"
####
Scanning dependencies of target _camera1394_generate_messages_check_deps_SetCameraRegisters
Scanning dependencies of target std_msgs_generate_messages_cpp
Scanning dependencies of target _camera1394_generate_messages_check_deps_GetCameraRegisters
Scanning dependencies of target camera1394_gencfg
[ 0%] Built target std_msgs_generate_messages_cpp
[ 4%] Generating dynamic reconfigure files from cfg/Camera1394.cfg: /home/turtlebot/ros/devel/include/camera1394/Camera1394Config.h /home/turtlebot/ros/devel/lib/python2.7/dist-packages/camera1394/cfg/Camera1394Config.py
Scanning dependencies of target std_msgs_generate_messages_lisp
[ 4%] Built target std_msgs_generate_messages_lisp
[ 4%] [ 4%] Built target _camera1394_generate_messages_check_deps_GetCameraRegisters
Built target _camera1394_generate_messages_check_deps_SetCameraRegisters
Scanning dependencies of target std_msgs_generate_messages_py
Scanning dependencies of target camera1394_generate_messages_cpp
Scanning dependencies of target camera1394_generate_messages_lisp
[ 4%] Built target std_msgs_generate_messages_py
[ 12%] [ 12%] Generating Lisp code from camera1394/SetCameraRegisters.srv
Generating C++ code from camera1394/SetCameraRegisters.srv
Scanning dependencies of target camera1394_generate_messages_py
Generating reconfiguration files for Camera1394 in camera1394
[ 16%] Wrote header file in /home/turtlebot/ros/devel/include/camera1394/Camera1394Config.h
Generating Python code from SRV camera1394/SetCameraRegisters
[ 20%] [ 20%] Built target camera1394_gencfg
Generating Lisp code from camera1394/GetCameraRegisters.srv
[ 25%] Generating Python code from SRV camera1394/GetCameraRegisters
[ 29%] Generating C++ code from camera1394/GetCameraRegisters.srv
[ 29%] Built target camera1394_generate_messages_lisp
[ 33%] Generating Python srv __init__.py for camera1394
[ 33%] Built target camera1394_generate_messages_py
[ 33%] Built target camera1394_generate_messages_cpp
Scanning dependencies of target camera1394_generate_messages
Scanning dependencies of target camera1394_nodelet
Scanning dependencies of target camera1394_node
[ 33%] Built target camera1394_generate_messages
[ 37%] [ 41%] [ 45%] Building CXX object camera1394-master/src/nodes/CMakeFiles/camera1394_node.dir/dev_camera1394.cpp.o
Building CXX object camera1394-master/src/nodes/CMakeFiles/camera1394_node.dir/camera1394_node.cpp.o
[ 50%] Building CXX object camera1394-master/src/nodes/CMakeFiles/camera1394_node.dir/driver1394.cpp.o
Building CXX object camera1394-master/src/nodes/CMakeFiles/camera1394_nodelet.dir/nodelet.cpp.o
[ 54%] Building CXX object camera1394-master/src/nodes/CMakeFiles/camera1394_node.dir/features.cpp.o
[ 58%] Building CXX object camera1394-master/src/nodes/CMakeFiles/camera1394_node.dir/format7.cpp.o
[ 62%] Building CXX object camera1394-master/src/nodes/CMakeFiles/camera1394_nodelet.dir/driver1394.cpp.o
[ 66%] Building CXX object camera1394-master/src/nodes/CMakeFiles/camera1394_node.dir/modes.cpp.o
[ 70%] Building CXX object camera1394-master/src/nodes/CMakeFiles/camera1394_node.dir/registers.cpp.o
[ 75%] Building CXX object camera1394-master/src/nodes/CMakeFiles/camera1394_node.dir/trigger.cpp.o
[ 79%] Building CXX object camera1394-master/src/nodes/CMakeFiles/camera1394_nodelet.dir/dev_camera1394.cpp.o
[ 83%] Building CXX object camera1394-master/src/nodes/CMakeFiles/camera1394_nodelet.dir/features.cpp.o
Linking CXX executable /home/turtlebot/ros/devel/lib/camera1394/camera1394_node
[ 83%] Built target camera1394_node
[ 87%] Building CXX object camera1394-master/src/nodes/CMakeFiles/camera1394_nodelet.dir/format7.cpp.o
[ 91%] Building CXX object camera1394-master/src/nodes/CMakeFiles/camera1394_nodelet.dir/modes.cpp.o
[ 95%] Building CXX object camera1394-master/src/nodes/CMakeFiles/camera1394_nodelet.dir/registers.cpp.o
[100%] Building CXX object camera1394-master/src/nodes/CMakeFiles/camera1394_nodelet.dir/trigger.cpp.o
Linking CXX shared library /home/turtlebot/ros/devel/lib/libcamera1394_nodelet.so
[100%] Built target camera1394_nodelet
Originally posted by benabruzzo on ROS Answers with karma: 79 on 2014-09-09
Post score: 0
Original comments
Comment by benabruzzo on 2014-09-10:
Tutorial fail:
source ./devel/setup.bash
//\ running this allows me to now run camera1394_node, the nodelet is still missing
Answer:
I am on vacation until next week.
This driver was recently released as an Indigo package. You can install it this way:
sudo apt-get install ros-indigo-camera1394
Originally posted by joq with karma: 25443 on 2014-09-12
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by benabruzzo on 2014-09-12:
Worked like a charm. | {
"domain": "robotics.stackexchange",
"id": 19352,
"tags": "ros, catkin, camera1394, ubuntu, ubuntu-trusty"
} |
shared pointer vs make shared ros2 | Question:
What is the difference between make_shared and a shared pointer? Relevant links/information about the two as a basis for the explanation would help. When/why would one be chosen over the other?
Originally posted by MustafaTatli on ROS Answers with karma: 3 on 2021-07-19
Post score: 0
Answer:
A shared pointer is the pointer to the object (e.g. shared_ptr<Object>) that your program will use. You have options between shared, unique, weak, and auto -- each has different semantics and uses in programs.
make_shared is simply the factory to create this shared pointer to your object. On construction, you could use new to allocate the memory of your new pointer object, but there are various reasons why you might not want to populate your shared pointer on construction. For instance, if you have a shared pointer as a class member, you may want to populate it only after you've configured your program, since the object needs some of those parameters.
So make_shared is a convenient factory to populate your shared pointer by passing through the constructor arguments to Object and returning a shared_ptr which you're using to actually accomplish your intended task. make_shared returns a shared_ptr.
Ex.
shared_ptr<Object> obj_ptr;
... configurations ...
obj_ptr = make_shared<Object>(object_argument_1, object_argument_2, ...);
obj_ptr->getThing();
Originally posted by stevemacenski with karma: 8272 on 2021-07-19
This answer was ACCEPTED on the original site
Post score: 2 | {
"domain": "robotics.stackexchange",
"id": 36730,
"tags": "ros, shared-ptr, make"
} |
Inefficient Solution - Advent of Code 2021, Day 3, Part 1 | Question: My code is working but it is extremely long. So, I guess there is a way to make it shorter/more efficient.
The problem solved here is from Advent of Code 2021, Day 3, Part 1: https://adventofcode.com/2021/day/3
binary_list = open("data.txt").read().split("\n")
counter_1st_bit_0 = 0
counter_1st_bit_1 = 0
counter_2nd_bit_0 = 0
counter_2nd_bit_1 = 0
counter_3rd_bit_0 = 0
counter_3rd_bit_1 = 0
counter_4th_bit_0 = 0
counter_4th_bit_1 = 0
counter_5th_bit_0 = 0
counter_5th_bit_1 = 0
counter_6th_bit_0 = 0
counter_6th_bit_1 = 0
counter_7th_bit_0 = 0
counter_7th_bit_1 = 0
counter_8th_bit_0 = 0
counter_8th_bit_1 = 0
counter_9th_bit_0 = 0
counter_9th_bit_1 = 0
counter_10th_bit_0 = 0
counter_10th_bit_1 = 0
counter_11th_bit_0 = 0
counter_11th_bit_1 = 0
counter_12th_bit_0 = 0
counter_12th_bit_1 = 0
gamma_rate_queue = []
epsilon_rate_queue = []
for i in range(0, len(binary_list), 1):
for j in range(0, len(binary_list[i]), 1):
if binary_list[i][0] == "0":
counter_1st_bit_0 += 1
elif binary_list[i][0] == "1":
counter_1st_bit_1 += 1
if binary_list[i][1] == "0":
counter_2nd_bit_0 += 1
elif binary_list[i][1] == "1":
counter_2nd_bit_1 += 1
if binary_list[i][2] == "0":
counter_3rd_bit_0 += 1
elif binary_list[i][2] == "1":
counter_3rd_bit_1 += 1
if binary_list[i][3] == "0":
counter_4th_bit_0 += 1
elif binary_list[i][3] == "1":
counter_4th_bit_1 += 1
if binary_list[i][4] == "0":
counter_5th_bit_0 += 1
elif binary_list[i][4] == "1":
counter_5th_bit_1 += 1
if binary_list[i][5] == "0":
counter_6th_bit_0 += 1
elif binary_list[i][5] == "1":
counter_6th_bit_1 += 1
if binary_list[i][6] == "0":
counter_7th_bit_0 += 1
elif binary_list[i][6] == "1":
counter_7th_bit_1 += 1
if binary_list[i][7] == "0":
counter_8th_bit_0 += 1
elif binary_list[i][7] == "1":
counter_8th_bit_1 += 1
if binary_list[i][8] == "0":
counter_9th_bit_0 += 1
elif binary_list[i][8] == "1":
counter_9th_bit_1 += 1
if binary_list[i][9] == "0":
counter_10th_bit_0 += 1
elif binary_list[i][9] == "1":
counter_10th_bit_1 += 1
if binary_list[i][10] == "0":
counter_11th_bit_0 += 1
elif binary_list[i][10] == "1":
counter_11th_bit_1 += 1
if binary_list[i][11] == "0":
counter_12th_bit_0 += 1
elif binary_list[i][11] == "1":
counter_12th_bit_1 += 1
def gamma_rate_finder():
if counter_1st_bit_0 > counter_1st_bit_1:
gamma_rate_queue.append('0')
elif counter_1st_bit_0 < counter_1st_bit_1:
gamma_rate_queue.append('1')
if counter_2nd_bit_0 > counter_2nd_bit_1:
gamma_rate_queue.append('0')
elif counter_2nd_bit_0 < counter_2nd_bit_1:
gamma_rate_queue.append('1')
if counter_3rd_bit_0 > counter_3rd_bit_1:
gamma_rate_queue.append('0')
elif counter_3rd_bit_0 < counter_3rd_bit_1:
gamma_rate_queue.append('1')
if counter_4th_bit_0 > counter_4th_bit_1:
gamma_rate_queue.append('0')
elif counter_4th_bit_0 < counter_4th_bit_1:
gamma_rate_queue.append('1')
if counter_5th_bit_0 > counter_5th_bit_1:
gamma_rate_queue.append('0')
elif counter_5th_bit_0 < counter_5th_bit_1:
gamma_rate_queue.append('1')
if counter_6th_bit_0 > counter_6th_bit_1:
gamma_rate_queue.append('0')
elif counter_6th_bit_0 < counter_6th_bit_1:
gamma_rate_queue.append('1')
if counter_7th_bit_0 > counter_7th_bit_1:
gamma_rate_queue.append('0')
elif counter_7th_bit_0 < counter_7th_bit_1:
gamma_rate_queue.append('1')
if counter_8th_bit_0 > counter_8th_bit_1:
gamma_rate_queue.append('0')
elif counter_8th_bit_0 < counter_8th_bit_1:
gamma_rate_queue.append('1')
if counter_9th_bit_0 > counter_9th_bit_1:
gamma_rate_queue.append('0')
elif counter_9th_bit_0 < counter_9th_bit_1:
gamma_rate_queue.append('1')
if counter_10th_bit_0 > counter_10th_bit_1:
gamma_rate_queue.append('0')
elif counter_10th_bit_0 < counter_10th_bit_1:
gamma_rate_queue.append('1')
if counter_11th_bit_0 > counter_11th_bit_1:
gamma_rate_queue.append('0')
elif counter_11th_bit_0 < counter_11th_bit_1:
gamma_rate_queue.append('1')
if counter_12th_bit_0 > counter_12th_bit_1:
gamma_rate_queue.append('0')
elif counter_12th_bit_0 < counter_12th_bit_1:
gamma_rate_queue.append('1')
gamma_rate = ''.join(gamma_rate_queue)
return gamma_rate
def epsilon_rate_finder():
if counter_1st_bit_0 < counter_1st_bit_1:
epsilon_rate_queue.append('0')
elif counter_1st_bit_0 > counter_1st_bit_1:
epsilon_rate_queue.append('1')
if counter_2nd_bit_0 < counter_2nd_bit_1:
epsilon_rate_queue.append('0')
elif counter_2nd_bit_0 > counter_2nd_bit_1:
epsilon_rate_queue.append('1')
if counter_3rd_bit_0 < counter_3rd_bit_1:
epsilon_rate_queue.append('0')
elif counter_3rd_bit_0 > counter_3rd_bit_1:
epsilon_rate_queue.append('1')
if counter_4th_bit_0 < counter_4th_bit_1:
epsilon_rate_queue.append('0')
elif counter_4th_bit_0 > counter_4th_bit_1:
epsilon_rate_queue.append('1')
if counter_5th_bit_0 < counter_5th_bit_1:
epsilon_rate_queue.append('0')
elif counter_5th_bit_0 > counter_5th_bit_1:
epsilon_rate_queue.append('1')
if counter_6th_bit_0 < counter_6th_bit_1:
epsilon_rate_queue.append('0')
elif counter_6th_bit_0 > counter_6th_bit_1:
epsilon_rate_queue.append('1')
if counter_7th_bit_0 < counter_7th_bit_1:
epsilon_rate_queue.append('0')
elif counter_7th_bit_0 > counter_7th_bit_1:
epsilon_rate_queue.append('1')
if counter_8th_bit_0 < counter_8th_bit_1:
epsilon_rate_queue.append('0')
elif counter_8th_bit_0 > counter_8th_bit_1:
epsilon_rate_queue.append('1')
if counter_9th_bit_0 < counter_9th_bit_1:
epsilon_rate_queue.append('0')
elif counter_9th_bit_0 > counter_9th_bit_1:
epsilon_rate_queue.append('1')
if counter_10th_bit_0 < counter_10th_bit_1:
epsilon_rate_queue.append('0')
elif counter_10th_bit_0 > counter_10th_bit_1:
epsilon_rate_queue.append('1')
if counter_11th_bit_0 < counter_11th_bit_1:
epsilon_rate_queue.append('0')
elif counter_11th_bit_0 > counter_11th_bit_1:
epsilon_rate_queue.append('1')
if counter_12th_bit_0 < counter_12th_bit_1:
epsilon_rate_queue.append('0')
elif counter_12th_bit_0 > counter_12th_bit_1:
epsilon_rate_queue.append('1')
epsilon_rate = ''.join(epsilon_rate_queue)
return epsilon_rate
binary_gamma_rate = gamma_rate_finder()
binary_epsilon_rate = epsilon_rate_finder()
def Binary_to_DecimalValue(n):
b_num = list(n)
value = 0
for i in range(len(b_num)):
digit = b_num.pop()
if digit == '1':
value = value + pow(2, i)
return value
gamma_rate = Binary_to_DecimalValue(binary_gamma_rate)
epsilon_rate = Binary_to_DecimalValue(binary_epsilon_rate)
PowerConsumption = gamma_rate * epsilon_rate
print("Gamma rate is: {} | {}".format(gamma_rate, binary_gamma_rate) + "\n" + "Epsilon rate is: {} | {}".format(epsilon_rate, binary_epsilon_rate) + "\n" + "The Power Consumption is: {}".format(PowerConsumption))
Any advice about my code would be helpful, I guess there is a way to replace lists here and to make the loops and functions more efficient.
Thanks.
Answer: I'm just going to make one suggestion, which you should be able to apply across a lot of your code.
Use list structures, which are analogous to arrays in other languages.
That's it. Lists.
Use lists for counting values
Your first use should be to replace the 0/1 variables in counting code:
counter_1st_bit_0 = 0
counter_1st_bit_1 = 0
Let's make those two variables into a single list with two elements:
counter_1st_bit = [0, 0]
(Note: the [0, 0] syntax is a list literal. These things are so common that Python supports typing them directly into the code.)
Now you can access counter_1st_bit[0] and counter_1st_bit[1] as two separate values (which they are!) and update them based on your input:
if binary_list[i][0] == "0":
counter_1st_bit[0] += 1
elif binary_list[i][0] == "1":
counter_1st_bit[1] += 1
Once you make that change, you can go one step farther and convert the string value to an integer value:
bit = int(binary_list[i][0])
if bit == 0:
counter_1st_bit[0] += 1
elif bit == 1:
counter_1st_bit[1] += 1
Then you realize that the bit value can only be either 0 or 1, so it's not if / elif but rather if / else:
bit = int(binary_list[i][0])
if bit == 0:
counter_1st_bit[0] += 1
else:
counter_1st_bit[1] += 1
And then you realize that the 0 and 1 are the values of bit:
bit = int(binary_list[i][0])
if bit == 0:
counter_1st_bit[bit] += 1
else:
counter_1st_bit[bit] += 1
And then you realize that the statements are the same, so you don't need the if and else:
bit = int(binary_list[i][0])
counter_1st_bit[bit] += 1
Now you're cooking with gas! Because you're only using the bit value one time, in one place, you can get rid of that variable:
counter_1st_bit[int(binary_list[i][0])] += 1
And you can make the same change across all the counter variables:
counter_1st_bit[int(binary_list[i][0])] += 1
counter_2nd_bit[int(binary_list[i][1])] += 1
counter_3rd_bit[int(binary_list[i][2])] += 1
counter_4th_bit[int(binary_list[i][3])] += 1
:
counter_12th_bit[int(binary_list[i][11])] += 1
And then you realize that your counter_xxx_bit variables could be replaced by a list of 12 lists!
# at start:
counter = [[0,0] for _ in range(12)]
# in loop:
for j in range(0, len(binary_list[i]), 1):
counter[j][int(binary_list[i][j])] += 1
(Note: The [... for ... in ...] syntax is called a list comprehension. It's a shorthand that allows specifying the contents of a list as a single expression. This is super-valuable in places where statements are not permitted, like argument defaults and lambda functions. And initializing list variables...)
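If it helps to see everything in one place, here is a compact sketch that puts the list-based counting together with Python's built-in int(s, 2) for the binary-to-decimal step (my own consolidation — names like power_consumption and counts are made up for illustration):

```python
def power_consumption(binary_list):
    width = len(binary_list[0])
    # counts[j][bit] counts how often `bit` appears in column j
    counts = [[0, 0] for _ in range(width)]
    for line in binary_list:
        for j, ch in enumerate(line):
            counts[j][int(ch)] += 1
    gamma_bits = ''.join('1' if ones > zeros else '0' for zeros, ones in counts)
    epsilon_bits = ''.join('0' if ones > zeros else '1' for zeros, ones in counts)
    # int(s, 2) replaces the hand-written Binary_to_DecimalValue function
    return int(gamma_bits, 2) * int(epsilon_bits, 2)

# The day-3 sample input from the puzzle page: gamma = 22, epsilon = 9
sample = ["00100", "11110", "10110", "10111", "10101", "01111",
          "00111", "11100", "10000", "11001", "00010", "01010"]
print(power_consumption(sample))  # 198
```

For the real input you would read the lines with open("data.txt").read().split("\n") as in your original code (taking care to drop any trailing empty line).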
You can probably see how to make similar changes to your gamma and epsilon computation functions. | {
"domain": "codereview.stackexchange",
"id": 42547,
"tags": "python, python-3.x"
} |
Why do dogs come in all shapes, but cats are generally all the same? | Question: Minus hair and fur, cats (the domestic kind) seem to be just scaled versions of the same thing. Dogs on the other hand can have long necks, short necks, long bodies, short legs, long ears, short ears.....
Answer: Until recently, cats were not extensively selectively bred, but were allowed to roam freely and therefore interbred randomly. Darwin pointed this out, contrasting cats to species that were routinely "enclosed":
On the other hand, cats, from their nocturnal rambling habits, can not be easily matched, and, although so much valued by women and children, we rarely see a distinct breed long kept up; such breeds as we do sometimes see are almost always imported from some other country.
In contrast, dogs and some other species were selectively bred for certain characteristics "each good for man in different ways":
The key is man's power of accumulative selection: nature gives successive variations; man adds them up in certain directions useful to him. In this sense he may be said to have made for himself useful breeds.
Why were cats not selectively bred, while dogs were? This gets to be somewhat speculative, but it's clear from looking at dog breeds that many of the different breed functions need to have a larger animal than a cat, as well as one with different personality: Herding sheep, hunting foxes or badgers, fighting bulls, etc. Cats' function for humans has traditionally been to hunt mice and rats, a role for which they're already beautifully adapted.
(There are a number of small dog breeds, of course, but these parallel cat breeds in that many are ornamental, or else originated as rat-hunters.) | {
"domain": "biology.stackexchange",
"id": 8266,
"tags": "species"
} |
Solve $T (n) = T (\frac n2) + n(2 - \cos n)$ | Question: For the following recurrence relation:
$$T (n) = T (n/2) + n(2 - \cos n)$$
I see that the $\cos$ function only takes values in a bounded range, but this does not seem to help with solving the recurrence. I found a solution which says that the master method does not apply here. Based on the three cases of the master method, why does it not apply to this recurrence?
Solution Found: Does not apply. We are in Case 3, but the regularity condition is violated. (Consider $n = 2\pi k$, where $k$ is odd and arbitrarily large. For any such choice of $n$, you can show that $c \ge 3/2$, thereby violating the regularity condition.)
Answer: Since $|\cos n| \leq 1$, we have $1 \leq 2-\cos n \leq 3$, and so $$ T(n) = T(n/2) + \Theta(n). $$ This is something that the master theorem can handle. | {
"domain": "cs.stackexchange",
"id": 19024,
"tags": "recurrence-relation, discrete-mathematics"
} |
looking for a robot to buy | Question:
My laboratory is going to buy a robot for research within the price range of 7200 US dollars. Would you please suggest one with the following features as much as possible?
Ros supported
Simulation feature
Kinect sensor
Having an arm
Good mobility
Thank you
Originally posted by Ahyan on ROS Answers with karma: 1 on 2015-02-02
Post score: 0
Original comments
Comment by Ahyan on 2015-02-03:
Any suggestion pls?
Comment by ahendrix on 2015-02-03:
I'm not aware of any robots with those features in that price range; sorry.
Comment by Martin Günther on 2015-02-11:
Especially the arm requirement will be tricky. Decent robot arms start around 25.000 €, I believe.
Answer:
In that price range getting a powerful robot is difficult, but you can have a look at the Corobot. Also check here. It comes in a price range of around 4500 USD. You will still have to get an arm for your robot or build one from scratch like the turtlebot_arm.
Originally posted by AlexR with karma: 654 on 2015-02-05
This answer was ACCEPTED on the original site
Post score: 2
Original comments
Comment by Ahyan on 2015-02-06:
Thank you for your suggestion | {
"domain": "robotics.stackexchange",
"id": 20755,
"tags": "ros"
} |
Cannot launch node of type [joint_state_publisher/joint_state_publisher] | Question:
I have the latest beta version of hydro installed on Ubuntu 13.04 and encountered this error when going through the urdf_tutorials:
ERROR: cannot launch node of type [joint_state_publisher/joint_state_publisher]: can't locate node [joint_state_publisher] in package [joint_state_publisher]
I have installed joint_state_publisher, and checked again:
$ rosdep install joint_state_publisher
#All required rosdeps installed successfully
$ rosrun joint_state_publisher joint_state_publisher
[rosrun] Couldn't find executable named joint_state_publisher below /opt/ros/hydro/share/joint_state_publisher
[rosrun] Found the following, but they're either not files,
[rosrun] or not executable:
[rosrun] /opt/ros/hydro/share/joint_state_publisher
There is a folder in share for joint_state_publisher, but no executable code:
$ ls /opt/ros/hydro/share/joint_state_publisher/
cmake package.xml
There is nothing in lib or include
$ ls /opt/ros/hydro/lib | grep joint_state_publisher
$ ls /opt/ros/hydro/include/ | grep joint_state_publisher
I'm not sure if this is where I should be publishing problems with pre-releases, so please direct me to the correct forum.
Brent
Originally posted by Brent Bailey on ROS Answers with karma: 51 on 2013-07-06
Post score: 1
Answer:
This is a good place to report Hydro problems.
It's possible that joint_state_publisher was converted to catkin, but is not installing its files correctly.
Originally posted by joq with karma: 25443 on 2013-07-07
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by jdom1824 on 2019-02-13:
I have the same problem but in Kinetic, could you correct it?
Comment by ixtiyoruz on 2020-02-16:
I also have the problem, is there any solution? | {
"domain": "robotics.stackexchange",
"id": 14825,
"tags": "joint-state-publisher, ros-hydro"
} |
Wave function in tensor product of Hilbert spaces | Question: If I had the wave function
$$\Psi\equiv\psi(r,\theta,\phi)\otimes\chi \in \mathscr{L}^2(\mathbb{R}^3)\otimes\mathbb{C}^{2S+1},$$
where $S$ is the spin of the state, is it correct to normalize the spin part of $\Psi$, namely $\chi$, regarding the spatial parameters $(r,\theta,\phi)$ as if they were fixed?
I mean: if $\psi\propto\sum Y_l^m (\theta,\phi)$, is it correct to say that the $Y_l^m(\theta,\phi)$'s are just numbers in $\mathbb{C}^{2S+1}$?
Answer: Actually, it seems you are asking a question about the definition of Hilbert space. A Hilbert space, as a vector space, is an inner product space -- that's the defining property of it. Therefore to define a Hilbert space, you have to define the inner product $\langle\psi_1|\psi_2\rangle$ of two vectors inside.
The inner product, by definition, should be a scalar. Below we restrict our discussion on wave-functions in quantum physics.
Firstly, for a simple Hilbert space structure $\mathcal{H}$, say, either $\psi$-space or $\chi$-space, when talking about wave-functions, it is enough to define the scalar as a number. Therefore, when normalizing vectors, i.e. wave-functions in $\mathcal{H}$, it is equivalent to require the result to be the number "$1$".
Now you are dealing with a tensor product space structure: $\mathcal{H}_0=\mathcal{H}_1\otimes\mathcal{H}_2$. The "vector" in $\mathcal{H}_0$ actually is defined as a tensor product of two simple vectors $\vec{v}_0 =\vec{v}_1\otimes\vec{v}_2$. Then the problem is what is the "scalar" in this $\mathcal{H}_0$? A more reasonable way to define a scalar is actually $a\otimes b$, where both $a, b\in\mathbb{C}$ are "daily-life" numbers. Therefore, more precisely, the normalization requirement now is modified as $\langle\vec{v}_0|\vec{v}_0\rangle = 1\otimes1\equiv \mathbb{1}$. That's why when you normalize it, you should do $\psi$ part and $\chi$ part separately.
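As a concrete check (my own addition, using the more common convention in which the inner product on $\mathcal{H}_1\otimes\mathcal{H}_2$ is the ordinary number given by the product of the factor inner products): either way, the norm factorizes,
$$\langle \psi\otimes\chi\,|\,\psi\otimes\chi\rangle = \langle\psi|\psi\rangle\,\langle\chi|\chi\rangle,$$
so requiring $\langle\psi|\psi\rangle = 1$ and $\langle\chi|\chi\rangle = 1$ separately is sufficient to normalize the product state.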
Now you may ask: why don't I still use the old definition for scalar, as in a simple $\mathcal{H}$, i.e. just a "number"?
Formally (mathematically), definition of tensor product space should not include any operation mixing them, therefore it's not a good idea to mix two "$1$" in the $1\otimes1$ structure -- for sure you could define another operation if you want, but it has nothing to do with inner product and normalization procedure.
Practically (physically), you could consider the following condition: I fix the spin part as $\chi = |1\rangle$ by turning on some external coupling acting only on the spin, and then vary the potential energy affecting spatial part. In this case, apparently the normalization of spatial part should not affect spin part at all -- which has already been fixed by hand. | {
"domain": "physics.stackexchange",
"id": 45377,
"tags": "quantum-mechanics, hilbert-space, quantum-spin, spinors"
} |
Why does t = $\frac{1}{4}$ for destructive interference by parallel-sided thin films? | Question: I knew for destructive interference for reflected light in thin films: 2t = mλ
Where t is the thickness of the thin film, m is an integer (0,1,2...) and λ is the wavelength of light in the thin film.
However, some websites and books give t = $\frac{λ}{4}$
My questions are:
Why does t = $\frac{λ}{4}$?
Does it have something to do with a phase change on reflection of π?
Does a phase change on reflection happen when light reflects off from less dense → more dense or vice versa?
Answer: There are two things to consider:
As you guessed there is a phase inversion for the reflected light when it is reflected off of a medium that is more optically dense (slower speed of light).
You need to be clear on whether you are considering the reflected light or the transmitted light. Whatever film thickness produced destructive interference for transmitted light will simultaneously produce constructive interference for reflected light and vice versa.
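To make the $t = \frac{\lambda}{4}$ result explicit for the anti-reflective coating case (my own summary, assuming normal incidence and a film index between those of the surrounding media): with a phase inversion at both interfaces, the two reflection phase shifts cancel, so destructive interference of the reflected light requires the round-trip path difference alone to supply half a wavelength:
$$2t = \left(m + \tfrac{1}{2}\right)\lambda, \qquad m = 0, 1, 2, \dots$$
The thinnest such film is the $m = 0$ case, $t = \frac{\lambda}{4}$.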
Destructive interference for reflection would occur for $t=\frac{\lambda}{4}$ when the film has an index of refraction between that of the two other media. For example an anti reflective coating on a lens would have a value less than that of the glass (and more than that of air). In that way there is a phase inversion at both interfaces for light starting in the air, or neither for light originating in the glass, either case resulting in an effective path difference for reflection of half a wavelength. | {
"domain": "physics.stackexchange",
"id": 90114,
"tags": "reflection, refraction, interference"
} |
Parity Operator eigenstates in arbitrary basis | Question: On page 298 of Shankar's 'Principles of Quantum Mechanics' the author makes the statement :
""In an arbitrary $\Omega$ basis, $\psi(\omega)$ need not be even or odd, even if $| \psi \rangle $ is a parity eigenstate. ""
Can anyone show me how this is the case when in the X basis the parity eigenfunctions $\psi(x)$ can only have even or odd parity?
Answer: A good answer is already given by @Pleba; I'm just going to give a familiar example.
First, in general,
$$\Pi |\psi\rangle =\Pi \int |\omega\rangle \langle \omega |\psi\rangle d\omega$$
As work out by R. Shankar
$$\langle \omega|\Pi|\psi\rangle =\psi(-\omega)$$
Let $\psi(-\omega)=\pm\psi(\omega)$
Then for some other basis
$$\langle \alpha|\Pi|\psi\rangle =\phi(-\alpha)=\int\langle \alpha|\omega\rangle\langle\omega|\Pi|\psi\rangle d\omega=\int\langle \alpha|\omega\rangle\psi(-\omega) d\omega$$
$$\langle \alpha|\Pi|\psi\rangle=\pm\int \langle \alpha|\omega\rangle\psi(\omega) d\omega \stackrel{?}{=} \pm \phi(\alpha)$$
It depends on the overlap function $ \langle \alpha|\omega\rangle$ what's going to happen.
For example, consider $|\omega\rangle=|x\rangle$ and $|\alpha\rangle=|p\rangle$ with $\langle p|x\rangle \sim e^{-ipx/\hbar}$, which is neither an odd nor an even function. So if $\psi(-x)=\pm \psi(x)$, you cannot conclude that $\phi(-p)=\pm \phi(p).$ | {
"domain": "physics.stackexchange",
"id": 74365,
"tags": "quantum-mechanics, parity"
} |
Can magnetic loops with no source current knot or link? | Question: The answer to this question is obviously no. I would like to pose a variation of that question. Suppose a simply connected domain of a 3-d vacuum space has no source current. Does there exist a case where two closed loops of magnetic field residing in that domain knot into a link? (Either static or dynamic answer would be great.)
Answer: A magnetic field line is a line that is tangent to the magnetic field vector at every point along the line. The question asks if two magnetic field lines can be linked, if the magnetic field is consistent with Maxwell's equations in a vacuum. The answer is yes, which I'll prove by constructing an example.
The example
In terms of the complex-valued field $\mathbf{F}\equiv\mathbf{E}+i\mathbf{B}$, Maxwell's equations in a vacuum can be written as a pair of equations
$$
\newcommand{\bfA}{\mathbf{A}}
\newcommand{\bfB}{\mathbf{B}}
\newcommand{\bfE}{\mathbf{E}}
\newcommand{\bfF}{\mathbf{F}}
\nabla\cdot\bfF=0
\hspace{2cm}
i\dot\bfF=\nabla\times\bfF.
$$
Any field that satisfies the first equation at time $t=0$ can be evolved in time using the second equation, and the first equation is equivalent to the pair of uncoupled equations $\nabla\cdot\bfB=0=\nabla\cdot\bfE$, so the question reduces to this purely spatial question about the magnetic field at time $t=0$: If the vector field $\bfB$ satisfies $\nabla\cdot\bfB=0$, then can two of its lines be linked?
If $\bfB=\nabla\times\bfA$ for some smooth nonsingular vector field $\bfA$, then we automatically have $\nabla\cdot\bfB=0$. We can use this fact to construct an example. Use a Cartesian coordinate system $x,y,z$, and use the abbreviations
$$
f(x,y)\equiv x^2+y^2
\hspace{2cm}
g(x,z)\equiv (x-1)^2+z^2
$$
and consider these two circles:
Circle $C'$ is defined by $f(x,y)=1$ and $z=0$.
Circle $C''$ is defined by $g(x,z)=1$ and $y=0$.
These circles are linked. The goal is to construct a field $\bfA$ such that both of these circles are everywhere tangent to the field $\bfB\equiv \nabla\times\bfA$, so that $C'$ and $C''$ are both lines of $\bfB$. To do this, write $\bfA=\bfA'+\bfA''$ with
$$
\bfA'=
\begin{cases}
(0,0,(f(x,y))^{-1/2}) & \text{for }|f(x,y)-1|<0.001\text{ and }|z|<0.001\\
(0,0,0) & \text{for } |f(x,y)-1| > 0.002\text{ or }|z|>0.002
\end{cases}
$$
and
$$
\bfA''=
\begin{cases}
(0,(g(x,z))^{-1/2},0) & \text{for }|g(x,z)-1|<0.001\text{ and }|y|<0.001\\
(0,0,0) & \text{for } |g(x,z)-1| > 0.002\text{ or }|y|>0.002,
\end{cases}
$$
and interpolate smoothly between $0.001$ and $0.002$ in each case. Then $\bfA'$ is nonzero only in a small neighborhood of $C'$, and $\bfA''$ is nonzero only in a small neighborhood of $C''$, and both $\bfA'$ and $\bfA''$ are smooth and nonsingular everywhere. This implies that $\bfB$ is smooth and nonsingular everywhere. Within the neighborhoods defined by the $0.001$ cutoff, the fields satisfy
\begin{align}
\nabla\times\bfA' &\propto (f(x,y))^{-3/2}(y,-x,0) \\
\nabla\times\bfA'' &\propto (g(x,z))^{-3/2}(z,0,-(x-1)),
\end{align}
and these are everywhere tangent to the circles $C'$ and $C''$, respectively. The supports of $\bfA'$ and $\bfA''$ do not overlap, so $\bfB\equiv\nabla\times(\bfA'+\bfA'')$ is also everywhere tangent to the circles $C'$ and $C''$, and these circles are linked. This completes the construction.
The intuition that led to the example
Here's the intuition that led to the example. Consider two straight current-carrying wires $W'$ and $W''$, one on the line $x=y=0$ and one on the line $x-1=z=0$. The magnetic field produced by either one of these wires by itself has circular field lines centered on that wire. Let $\bfA'$ and $\bfA''$ be the corresponding vector potentials. Choose the circles $C'$ and $C''$ as above, and modify $\bfA'$ and $\bfA''$ to drop smoothly to zero as we move away from those respective circles, so that their supports don't overlap, and take the currents to be zero for consistency. Now take $\bfB$ to be the magnetic field with vector potential $\bfA'+\bfA''$, and we have two linked magnetic field lines in a magnetic field that is everywhere smooth and nonsingular and satisfies Maxwell's equations in a vacuum at $t=0$. What happens at $t\neq 0$ can be inferred from Maxwell's equations using this initial condition together with an additional initial condition on the electric field, but the question allows for a dynamic answer, so what happens at $t\neq 0$ is not important. The magnetic field lines are linked at $t=0$, at least.
Can a magnetic field line form an arbitrary knot?
As Hans suggested in a comment, the idea described above can be simplified and generalized by using (temporary) solenoids instead of (temporary) individual wires. Take any knot drawn in 3d space, which may have multiple linked components. Replace the drawn line with a long-and-thin solenoid so that the original line coincides with a magnetic field line enclosed by the solenoid. Let $\bfA$ be the vector potential of the whole set-up, and modify $\bfA$ so that it falls rapidly but smoothly to zero away from the original drawn line. For consistency with Maxwell's equations, also delete the current (the solenoid), because now $\bfA$ is zero in the space where the current was. Since $\bfA$ is still the same as before in a neighborhood of the drawn line, taking the curl of $\bfA$ will give a magnetic field that is well-defined everywhere, satisfies $\nabla\cdot\bfB=0$ everywhere, and has a line that coincides with the original drawn line. | {
"domain": "physics.stackexchange",
"id": 74622,
"tags": "electromagnetism, magnetic-fields, topology"
} |
Avoid re-initialize values in javascript | Question: In the below code, variable xAxis is re-initialized multiple times before every push into the array. How can i optimize this code?
var xAxis=1;
vm.pnLreportingPlotData.push(plotModel.map(function(o){
return {x:xAxis++,y:o.PnLreporting};
})
);
var xAxis=1;
vm.var95PlotData.push(plotModel.map(function(o){
return { x:xAxis++, y:o.VaR95Total};
})
);
var xAxis=1;
vm.var99PlotData.push(plotModel.map(function(o){
var xAxis = 1;
return { x:xAxis++, y:o.VaR99Total};
})
);
Answer: You could remove the need for your xAxis variable through using the index when mapping:
vm.pnLreportingPlotData.push(plotModel.map(function (o, i) {
return { x: i + 1, y: o.PnLreporting }
}))
vm.var95Plotdata.push(plotModel.map(function (o, i) {
return { x: i + 1, y: o.VaR95Total }
}))
vm.var99PlotData.push(plotModel.map(function (o, i) {
return { x: i + 1, y: o.VaR99Total }
}))
Array.prototype.map provides three arguments, currentValue (o), index, and array (which isn't needed in our example)
N.B. you did not need to keep declaring xAxis after you had done so the first time (no need for the multiple var declarations, they could simply become xAxis = 1)
While the above answers your question of how to avoid redeclaring xAxis, I've added a few additional comments below:
Naming
You could possibly benefit from using more descriptive variable names. It is not immediately obvious what variables such as o represent. Likewise, i could be changed to index if you found it helped readability.
Consistency:
var95Plotdata uses lowercase 'd' for 'data', however var99PlotData capitalizes it (is this a typo?)
VaR95Total capitalizes both 'V' and 'R' - can this be changed to be consistent with your view model or is this coming from an external api?
Extracting logic
Since the logic is almost identical besides property names, you could extract the logic if you wanted:
function getPlotData(property) {
// Ideally renaming `o` to whatever it represents
return plotModel.map(function (o, index) {
return { x: index + 1, y: o[property] }
    })
}
vm.pnLreportingPlotData.push(getPlotData('PnLreporting'))
vm.var95Plotdata.push(getPlotData('VaR95Total'))
vm.var99PlotData.push(getPlotData('VaR99Total')) | {
"domain": "codereview.stackexchange",
"id": 26691,
"tags": "javascript, jquery"
} |
Qiskit: PauliTrotterEvolution for Hamiltonian simulation | Question: Context:
I have H (in qiskit.opflow notation).
I want a circuit which does exp(-itH)
Solution attempt:
I think qiskit.opflow.evolutions.PauliTrotterEvolution should do the trick.
Also, I found this nice tutorial https://nahumsa.github.io/n-blog/2021-05-11-quantum_simulation/
Minimalistic example:
from qiskit.opflow import X, Y, Z, I, PauliTrotterEvolution
from qiskit.circuit import Parameter
hamiltonian = 3*(X^X^Z) - 1*(Z^X^Z)
# evolution operator
evo_time = Parameter('t')
evolution_op = (evo_time*hamiltonian).exp_i()
print(evolution_op)
# into circuit
num_time_slices = 1
trotterized_op = PauliTrotterEvolution(
trotter_mode='trotter',
reps=num_time_slices).convert(evolution_op)
trotterized_op.to_circuit().draw('mpl')
My output:
Problem:
(a) the Hamiltonian doesn't seem to get trotterized (split up)
(b) the circuit doesn't show the individual gates
Compare that to the nice output of the tutorial quoted above:
My questions:
Is PauliTrotterEvolution the right tool in the first place?
Why does the circuit look different for me than in the linked tutorial?
Solution
using decompose() shows the gates for single-term-hamiltonians
hamiltonian = (X^X^Z)
evo_time = Parameter('t')
evolution_op = (evo_time*hamiltonian).exp_i()
num_time_slices = 1
trotterized_op = PauliTrotterEvolution(
trotter_mode='trotter',
reps=num_time_slices).convert(evolution_op)
trot_op_circ = trotterized_op.to_circuit()
trot_op_circ_decomp = trot_op_circ.decompose()
trot_op_circ_decomp.draw('mpl')
and multiple-term-hamiltonians are split up, but not decomposed into gates:
hamiltonian = (X^X^Z) + (Z^X^Z)
evo_time = Parameter('t')
evolution_op = (evo_time*hamiltonian).exp_i()
num_time_slices = 1
trotterized_op = PauliTrotterEvolution(
trotter_mode='trotter',
reps=num_time_slices).convert(evolution_op)
trot_op_circ = trotterized_op.to_circuit()
trot_op_circ_decomp = trot_op_circ.decompose()
trot_op_circ_decomp.draw('mpl')
To decompose multiple-term-hamiltonians into gates, decompose can be called repeatedly:
trot_op_circ_decomp = trot_op_circ.decompose()
trot_op_circ_decomp = trot_op_circ_decomp.decompose()
trot_op_circ_decomp.draw('mpl')
Answer: You have to decompose the circuit produced by the PauliTrotterEvolution to see what gates are applied inside
# your code from above
circuit = trotterized_op.to_circuit()
decomposed = circuit.decompose()
decomposed.draw('mpl')
In Qiskit we often wrap subcircuits into a block so that you can see what's going on on a higher level of abstraction. But you can still have a look inside those blocks by decomposing :) | {
"domain": "quantumcomputing.stackexchange",
"id": 3512,
"tags": "qiskit, programming, hamiltonian-simulation"
} |
Finding Big O of 1/n + 1/n+2 ... 1/n' | Question: The number of computations of an algorithm is $n'm\sum_{x=0}^{n'-m} 1/(m+x)$. What is the complexity of the algorithm ?
Thanx for help.
Answer: The series $\sum_{m=1}^\infty 1/m$ is known as the harmonic series. Its partial sums are the harmonic numbers $H_n := \sum_{m=1}^n 1/m$. It is known that $H_n = \ln n + O(1)$. In fact, there are much more accurate asymptotic estimates, such as $H_n = \ln n + \gamma + O(1/n)$, where $\gamma$ is Euler's constant.
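A quick numeric check of these asymptotics (a Python sketch; the cutoffs and tolerances below are my own choices):

```python
import math

def harmonic(n):
    """H_n = 1 + 1/2 + ... + 1/n."""
    return sum(1.0 / m for m in range(1, n + 1))

euler_gamma = 0.5772156649015329  # Euler's constant

n = 10_000
# H_n = ln n + gamma + O(1/n); the leading error term is about 1/(2n).
assert abs(harmonic(n) - (math.log(n) + euler_gamma)) < 1.0 / n

# A tail sum 1/a + ... + 1/b behaves like ln(b/a) + O(1/a).
a, b = 1_000, 50_000
tail = harmonic(b) - harmonic(a - 1)
assert abs(tail - math.log(b / a)) < 1.0 / a
```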
In your case, you have a more general type of sum of the form
$$
\sum_{m=a}^b \frac{1}{m} = H_b - H_{a-1} = \ln \frac{b}{a} + O\left(\frac{1}{a}\right).
$$ | {
"domain": "cs.stackexchange",
"id": 11130,
"tags": "asymptotics"
} |
Binary tree encoding | Question: I implemented a solution to this coding challenge on the Code Golf. I have decent experience with C/C++, but it's been a while since I've used them extensively.
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
// Prototypes
struct BTnode;
struct BTnode * bt_add_left(struct BTnode * node, int data);
struct BTnode * bt_add_right(struct BTnode * node, int data);
int bt_depth(struct BTnode * tree);
int bt_encode_preorder(int * list, struct BTnode * tree, int index);
struct BTnode * bt_node_create(int data);
int bt_node_delete(struct BTnode * node);
void bt_print_preorder(struct BTnode * tree);
int * encode(struct BTnode * tree);
struct BTnode * decode(int * list);
// Binary tree node
struct BTnode
{
int data;
struct BTnode *left, *right;
};
// Add node to this node's left
struct BTnode * bt_add_left(struct BTnode * node, int data)
{
struct BTnode * newnode = bt_node_create(data);
node->left = newnode;
return newnode;
}
// Add node to this node's right
struct BTnode * bt_add_right(struct BTnode * node, int data)
{
struct BTnode * newnode = bt_node_create(data);
node->right = newnode;
return newnode;
}
// Determine depth of the tree
int bt_depth(struct BTnode * tree)
{
int depth;
int leftdepth = 0;
int rightdepth = 0;
if( tree == NULL ) return 0;
if( tree->left != NULL )
leftdepth = bt_depth(tree->left);
if( tree->right != NULL )
rightdepth = bt_depth(tree->right);
depth = leftdepth;
if(rightdepth > leftdepth)
depth = rightdepth;
return depth + 1;
}
// Recursively add node values to integer list, using 0 as an unfolding sentinel
int bt_encode_preorder(int * list, struct BTnode * tree, int index)
{
list[ index++ ] = tree->data;
// This assumes the tree is complete (i.e., if the current node does not have
// a left child, then it does not have a right child either)
if( tree->left != NULL )
{
index = bt_encode_preorder(list, tree->left, index);
index = bt_encode_preorder(list, tree->right, index);
}
// Add sentinel
list[ index++ ] = 0;
return index;
}
// Allocate memory for a node
struct BTnode * bt_node_create(int data)
{
struct BTnode * newnode = (struct BTnode *) malloc(sizeof(struct BTnode));
newnode->left = NULL;
newnode->right = NULL;
newnode->data = data;
return newnode;
}
// Free node memory
int bt_node_delete(struct BTnode * node)
{
int data;
if(node == NULL)
return 0;
data = node->data;
if(node->left != NULL)
bt_node_delete(node->left);
if(node->right != NULL)
bt_node_delete(node->right);
free(node);
return data;
}
// Print all values from the tree in pre-order
void bt_print_preorder(struct BTnode * tree)
{
printf("%d ", tree->data);
if(tree->left != NULL)
bt_print_preorder(tree->left);
if(tree->right != NULL)
bt_print_preorder(tree->right);
}
// Decode binary tree structure from a list of integers
struct BTnode * decode(int * list)
{
struct BTnode * tree;
struct BTnode * nodestack[ list[0] ];
int i,j;
// Handle trivial case
if( list == NULL ) return NULL;
tree = bt_node_create( list[1] );
nodestack[ 1 ] = tree;
j = 1;
for(i = 2; i < list[0]; i++)
{
if( list[i] == 0 )
{
//printf("popping\n");
j--;
}
else
{
if( nodestack[j]->left == NULL )
{
//printf("Adding %d to left of %d\n", list[i], nodestack[j]->data);
nodestack[ j+1 ] = bt_add_left(nodestack[j], list[i]);
j++;
}
else
{
//printf("Adding %d to right of %d\n", list[i], nodestack[j]->data);
nodestack[ j+1 ] = bt_add_right(nodestack[j], list[i]);
j++;
}
}
}
return tree;
}
// Encode binary tree structure as a list of integers
int * encode(struct BTnode * tree)
{
int maxnodes, depth, length;
int * list;
int j;
// Handle trivial case
if(tree == NULL) return NULL;
// Calculate maximum number of nodes in the tree from the tree depth
maxnodes = 1;
depth = bt_depth(tree);
for(j = 0; j < depth; j++)
{
maxnodes += pow(2, j);
}
// Allocate memory for the list; we need two ints for each value plus the
// first value in the list to indicate length
list = (int *) malloc( ((maxnodes * 2)+1) * sizeof(int));
length = bt_encode_preorder(list, tree, 1);
list[ 0 ] = length;
return list;
}
int main()
{
struct BTnode * tree;
struct BTnode * newtree;
int * list;
int i;
/* Provided example
5
/ \
3 2
/ \
2 1
/ \
9 9
*/
tree = bt_node_create(5);
bt_add_left(tree, 3);
struct BTnode * temp = bt_add_right(tree, 2);
bt_add_right(temp, 1);
temp = bt_add_left(temp, 2);
bt_add_left(temp, 9);
bt_add_right(temp, 9);
printf("T (traversed in pre-order): ");
bt_print_preorder(tree);
printf("\n");
list = encode(tree);
printf("T (encoded as integer list): ");
for(i = 1; i < list[0]; i++)
printf("%d ", list[i]);
printf("\n");
newtree = decode(list);
printf("T' (decoded from int list): ");
bt_print_preorder(newtree);
printf("\n\n");
// Free memory
bt_node_delete(tree);
bt_node_delete(newtree);
free(list);
return 0;
}
How could my program be improved? I'm thinking mostly in terms of clarity/readability, maintainability, and reusability, but I also welcome any comments about my implementation of the data structures and any possible improvements in terms of performance or correctness.
Answer: In int bt_depth(struct BTnode * tree)
Too many different checks for NULL.
You only need to check once. The calls to bt_depth() on the left and right nodes will perform their own explicit checks; don't try to pre-optimize.
int bt_depth(struct BTnode * tree)
{
if( tree == NULL ) return 0;
int leftdepth = bt_depth(tree->left);
int rightdepth = bt_depth(tree->right);
    return (leftdepth > rightdepth ? leftdepth : rightdepth) + 1;  /* C has no built-in max for ints */
}
In int bt_encode_preorder(int * list, struct BTnode * tree, int index)
You are using tree without checking it for NULL:
list[ index++ ] = tree->data;
You are also doing a recursive call without checking.
At some point you may end up hitting a NULL and trying to de-reference it.
if( tree->left != NULL )
{
index = bt_encode_preorder(list, tree->left, index);
index = bt_encode_preorder(list, tree->right, index); // tree->right may be NULL!!!!
}
In int bt_node_delete(struct BTnode * node)
This is not a node delete; this is a full tree delete.
It should be named appropriately.
In void bt_print_preorder(struct BTnode * tree)
It is easier just to check if the current node is NULL.
Then always print the left and right nodes.
void bt_print_preorder(struct BTnode * tree)
{
if (tree == NULL) return;
printf("%d ", tree->data);
bt_print_preorder(tree->left);
bt_print_preorder(tree->right);
}
In encode(struct BTnode * tree)
// Calculate maximum number of nodes in the tree from the tree depth
maxnodes = 1;
depth = bt_depth(tree);
for(j = 0; j < depth; j++)
{
maxnodes += pow(2, j);
}
This is the only use of bt_depth(). Rather than do this, why not just have a function called bt_count_nodes(BTnode* tree) that actually counts the nodes? bt_depth traverses all the nodes anyway, so why not count them instead of the depth?
In struct BTnode * decode(int * list)
I can't quite work out if this is correct without running it. This is a bad sign that the code could do with simplification. | {
"domain": "codereview.stackexchange",
"id": 47,
"tags": "c, tree"
} |
Is "Experimental Complexity Theory" being used to solve open problems? | Question: Scott Aaronson proposed an interesting challange: can we use supercomputers today to help solve CS problems in the same way that physicists use large particle colliders?
More concretely, my proposal is to devote some of the world's computing power to an all-out attempt to answer questions like the following: does computing the permanent of a 4-by-4 matrix require more arithmetic operations than computing its determinant?
He concludes that this would require ~$10^{123}$ floating point operations, which is beyond our current means. The slides are available and are also worth reading.
Is there any precedence for solving open TCS problems through brute force experimentation?
Answer: In "Finding Efficient Circuits Using SAT-solvers", Kojevnikov, Kulikov, and Yaroslavtsev have used SAT solvers to find better circuits for computing $MOD_k$ function.
I have used computers to find proofs of time-space lower bounds, as described here. But that was only feasible because I was working with an extremely restrictive proof system.
Maverick Woo and I have been working for some time to find the "right" domain for proving circuit upper/lower bounds using computers. We had hoped that we may resolve $CC^0$ vs $ACC^0$ (or a very weak version of it) using SAT solvers, but this is looking more and more unlikely. (I hope Maverick doesn't mind me saying this...)
The first generic problem with using brute-force search to prove nontrivial lower bounds is that it just takes too damn long, even on a very fast computer. The alternative is to try to use SAT solvers, QBF solvers, or other sophisticated optimization tools, but they do not seem to be enough to offset the enormity of the search space. Circuit synthesis problems are among the hardest practical instances one can come by.
The second generic problem is that the "proof" of the resulting lower bound (obtained by running brute-force search and finding nothing) would be insanely long and apparently yield no insight (other than the fact that the lower bound holds). So a big challenge to "experimental complexity theory" is to find interesting lower bound questions for which the eventual "proof" of the lower bound is short enough to be verifiable, and interesting enough to lead to further insights. | {
"domain": "cstheory.stackexchange",
"id": 44,
"tags": "cc.complexity-theory"
} |
NPI-candidate hereditary graph property? | Question: A graph property is called hereditary if it is closed with respect to deleting vertices. There are many interesting hereditary graph properties. Moreover, a number of nontrivial general facts are also known about hereditary classes of graphs, see "Global properties of hereditary classes?"
Considering complexity, hereditary graph properties include both polynomial-time decidable and NP-complete ones. We know, however, that there are a number of natural problems in NP that are candidates for NP-intermediate status, see a nice collection in Problems Between P and NPC. Among the numerous answers there, however, none of them looks like a hereditary graph property (unless I overlooked something).
Question: Do you know a hereditary graph property that is a candidate for NP-intermediate status? Or else, is there a dichotomy theorem for hereditary graph properties?
Answer: Is there a particular style of problem you are looking for, or anything related to a hereditary graph property? Two common types of problems would be
(1) recognition: does a given $G$ have the hereditary property? or
(2) find the largest (induced or not) subgraph $H$ in $G$ having the hereditary property.
As I'm sure you are familiar, (2) is NP-complete (Mihalis Yannakakis: Node- and Edge-Deletion NP-Complete Problems. STOC 1978: 253-264) But I'm not sure if you are specifically only asking about problems of type (1) (recognition problems.)
There are a few recognition problems of hereditary graph classes which are still open. I think the one-in-one-out graphs are open to recognize and clearly in NP.
Graphclasses.org also reports that the related class of opposition graphs is still open to recognize (and these are also clearly in NP.) Apparently, so is the class of Domination graphs.
A large list of open (and unknown) recognition status can be found on that site, and pretty much all of those properties appear to be hereditary.
http://www.graphclasses.org/classes/problem_Recognition.html
There is one recognition problem they list under GI-complete, which is not a hereditary property ... so it is interesting to think that perhaps deciding a hereditary problem may indeed have a dichotomy theorem. | {
"domain": "cstheory.stackexchange",
"id": 2563,
"tags": "cc.complexity-theory, graph-theory, graph-algorithms, p-vs-np"
} |
My BlackJack Game in C# Console | Question: what do you think of my BlackJack game in regards to Object-Oriented Programming?
My code is at https://github.com/ngaisteve1/BlackJack
A follow-up question is available: Follow-Up
using System;
using System.Threading;
public class BlackJackGame
{
private static deckCard deck;
public void Play()
{
bool continuePlay = true;
Console.Title = "Steve BlackJack Game (Version 2)";
Console.Write("Steve BlackJack Game ");
Utility.MakeColor2(" ♠ ",ConsoleColor.White);
Utility.MakeColor2(" ♥ ",ConsoleColor.Red);
Utility.MakeColor2(" ♣ ",ConsoleColor.White);
Utility.MakeColor2(" ♦ ",ConsoleColor.Red);
deck = new deckCard();
Console.Write("\n\nEnter player's name: ");
// Create player
var player = new Player(Console.ReadLine());
// Create dealer
var dealerComputer = new Player();
while (continuePlay)
{
// Initialize screen and player's certain property - Start
Console.Clear();
player.IsNaturalBlackJack = false;
player.IsBusted = false;
dealerComputer.IsNaturalBlackJack = false;
dealerComputer.IsBusted = false;
// Initialize screen and player's certain property - End
if (deck.GetRemainingDeckCount() < 20)
{
// Get a new shuffled deck.
deck.Initialize();
Console.WriteLine("Low number of cards remaining. New cold deck created.");
}
deck.ShowRemainingDeckCount();
// Show player bank roll
Console.WriteLine($"{player.Name} Chips Balance: {player.ChipsOnHand}");
// Get bet amount from player
Console.Write("Enter chip bet amount: ");
player.ChipsOnBet = Convert.ToInt16(Console.ReadLine());
// Deal first two cards to player
deck.DealHand(player);
// Show player's hand
player.ShowUpCard();
Thread.Sleep(1500);
// Deal first two cards to dealer
deck.DealHand(dealerComputer);
// Show dealer's hand
dealerComputer.ShowUpCard(true);
Thread.Sleep(1500);
// Check natural black jack
if (!checkNaturalBlack(player, dealerComputer))
{
// If both also don't have natural black jack,
// then player's turn to continue.
PlayerAction(player);
Console.WriteLine("\n--------------------------------------------------");
PlayerAction(dealerComputer);
Console.WriteLine("\n--------------------------------------------------");
//Announce the winner.
AnnounceWinner(player, dealerComputer);
}
Console.WriteLine("This round is over.");
Console.Write("\nPlay again? Y or N? ");
continuePlay = Console.ReadLine() == "Y" ? true : false;
// for brevity, no input validation
}
Console.WriteLine($"{player.Name} won {player.TotalWins} times.");
Console.WriteLine($"{dealerComputer.Name} won {dealerComputer.TotalWins} times.");
Console.WriteLine("Game over. Thank you for playing.");
}
private static void PlayerAction(Player currentPlayer)
{
// set to player's turn
bool playerTurnContinue = true;
string opt = "";
while (playerTurnContinue)
{
Console.Write($"\n{currentPlayer.Name}'s turn. ");
if (currentPlayer.Name.Equals("Dealer"))
{
Thread.Sleep(2000); // faking thinking time.
// Mini A.I for dealer.
opt = currentPlayer.GetHandValue() < 16 ? "H" : "S";
}
else
{
// Prompt player to enter Hit or Stand.
Console.Write("Hit (H) or Stand (S): ");
opt = Console.ReadLine();
}
switch (opt.ToUpper())
{
case "H":
Console.Write($"{currentPlayer.Name} hits. ");
Thread.Sleep(1500);
// Take a card from the deck and put into player's Hand.
currentPlayer.Hand.Add(deck.DrawCard());
Thread.Sleep(1500);
// Check if there is any Ace in the Hand. If yes, change all the Ace's value to 1.
if (currentPlayer.GetHandValue() > 21 && currentPlayer.CheckAceInHand())
currentPlayer.Hand = currentPlayer.ChangeAceValueInHand();
currentPlayer.ShowHandValue();
break;
case "S":
if (currentPlayer.GetHandValue() < 16)
Console.WriteLine($"{currentPlayer.Name} is not allowed to stands when hand value is less than 16.");
else
{
Console.WriteLine($"{currentPlayer.Name} stands.");
Thread.Sleep(1500);
// Show player's hand
currentPlayer.ShowUpCard();
Thread.Sleep(1500);
Console.WriteLine($"{currentPlayer.Name}'s turn is over.");
Thread.Sleep(1500);
playerTurnContinue = false;
}
break;
default:
Console.WriteLine("Invalid command.");
break;
}
// If current player is busted, turn is over.
if (currentPlayer.GetHandValue() > 21)
{
Utility.MakeColor("Busted!", ConsoleColor.Red);
Thread.Sleep(1500);
Console.WriteLine($"{currentPlayer.Name}'s turn is over.");
Thread.Sleep(1500);
currentPlayer.IsBusted = true;
playerTurnContinue = false;
}
// If current player total card in hand is 5, turn is over.
else if (currentPlayer.Hand.Count == 5)
{
Console.WriteLine($"{currentPlayer.Name} got 5 cards in hand already.");
Thread.Sleep(1500);
Console.WriteLine($"{currentPlayer.Name}'s turn is over.");
Thread.Sleep(1500);
playerTurnContinue = false;
}
}
}
private static bool checkNaturalBlack(Player _player, Player _dealer)
{
Console.WriteLine();
if (_dealer.IsNaturalBlackJack && _player.IsNaturalBlackJack)
{
Console.WriteLine("Player and Dealer got natural BlackJack. Tie Game!");
_dealer.ShowUpCard();
return true;
}
else if (_dealer.IsNaturalBlackJack && !_player.IsNaturalBlackJack)
{
Console.WriteLine($"{_dealer.Name} got natural BlackJack. {_dealer.Name} won!");
_dealer.ShowUpCard();
_dealer.AddWinCount();
_player.ChipsOnHand = _player.ChipsOnHand - (int)Math.Floor(_player.ChipsOnBet * 1.5);
return true;
}
else if (!_dealer.IsNaturalBlackJack && _player.IsNaturalBlackJack)
{
Console.WriteLine($"{_player.Name} got natural BlackJack. {_player.Name} won!");
_player.AddWinCount();
_player.ChipsOnHand = _player.ChipsOnHand + (int)Math.Floor(_player.ChipsOnBet * 1.5);
return true;
}
// guard block
return false;
}
private static void AnnounceWinner(Player _player, Player _dealer)
{
Console.WriteLine();
if (!_dealer.IsBusted && _player.IsBusted)
{
Console.WriteLine($"{_dealer.Name} won.");
_dealer.AddWinCount();
}
else if (_dealer.IsBusted && !_player.IsBusted)
{
Console.WriteLine($"{_player.Name} won.");
_player.AddWinCount();
_player.ChipsOnHand = _player.ChipsOnHand + _player.ChipsOnBet;
}
else if (_dealer.IsBusted && _player.IsBusted)
Console.WriteLine("Tie game.");
else if (!_dealer.IsBusted && !_player.IsBusted)
if (_player.GetHandValue() > _dealer.GetHandValue())
{
Console.WriteLine($"{_player.Name} won.");
_player.AddWinCount();
_player.ChipsOnHand = _player.ChipsOnHand + _player.ChipsOnBet;
}
else if (_player.GetHandValue() < _dealer.GetHandValue())
{
Console.WriteLine($"{_dealer.Name} won.");
_dealer.AddWinCount();
_player.ChipsOnHand = _player.ChipsOnHand - _player.ChipsOnBet;
}
else if (_player.GetHandValue() == _dealer.GetHandValue())
Console.WriteLine("Tie game.");
}
}
Answer: I have played a lot of Blackjack in my life and was looking for a little challenge when I came across your question.
So, first let me thank you for inspiring me to code a version of Windows Console Blackjack. I incorporated some of your code and ideas - like the console symbols and color changes, and the shuffle algorithm.
My version is here: https://github.com/lucidobjects/Blackjack
That's my basic take on how to model Blackjack in OOP. I invite you to play it and review the code. As you will see, I adhere to object-oriented principles, including preventing any object from directly setting the internals of any other object.
Regarding your code, here are some thoughts:
You are definitely on the right track to think about building a Hand class.
Because they have enormously different behaviors, combining Dealer and Player into a single class is a challenging road to take. I would strongly recommend splitting Dealer out to its own class (as I did).
Your BlackJackGame class encompasses the functionality of a Casino, a Table, and a Dealer - all of which should be separate classes.
You might want to look into encapsulating the console writing on the appropriate objects. All the classes that get written to the screen could have a public Draw() method.
The Deck's DealHand method is another indication that there should be a Dealer class.
For things like Deck.GetRemainingDeckCount() you might want to consider a Remaining property rather than a method. Though picking between a method and a property can be tricky, as I found out when coding mine. I came to the conclusion that if something has the attribute throughout its existence then it's a property. If it only has the attribute sometimes - like after the cards have been dealt, then more likely a method.
If you are committed to C#, I recommend learning LINQ. It took me a while to even start with LINQ and a while more to learn the basics. But, it has definitely been worth the investment. Now I'm a huge fan of LINQ.
Another C# feature you might want to investigate is expression-bodied members, which I use extensively.
It's a matter of personal preference but I'm a devotee of var.
I also avoid Utility classes as much as possible. In recent memory I have successfully avoided them entirely.
I also typically avoid static but in the case of the Table class in my version, I figured that's the stuff that's written directly on the felt and/or a sign on the table, so I made an exception. If I were implementing a multi-table casino, that stuff would stop being static and probably move into a TableRules class. | {
"domain": "codereview.stackexchange",
"id": 33903,
"tags": "c#, console"
} |
What other shielding material than lead is effective against gamma rays? | Question: As the question in the title states I am wondering what material can be effectively used to shield gamma rays apart from lead? I believe concrete is often used, but it is nowhere near as effective as lead (6 cm to match 1 cm of lead as I understand it). I also hear significant bodies of water helps, as does tightly packed dirt, but surely there must be other materials that shield nearly as effectively as lead?
Answer: There is nothing magical about lead for this purpose. The driving factor is the number of electrons per unit volume, which reduces (to a first approximation) to the mass density.
You get very good (better than lead) shielding performance from gold, tungsten, mercury, etc; and quite reasonable performance from iron or copper.
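To put rough numbers on the density argument (a Python sketch; the densities are standard handbook values, and this deliberately ignores the atomic-number and photon-energy dependence of real attenuation coefficients):

```python
# First-approximation model from the answer: required shield thickness
# scales inversely with mass density (electrons per unit volume).
# Densities in g/cm^3 (standard handbook values).
densities = {"lead": 11.34, "concrete": 2.4, "iron": 7.87, "tungsten": 19.3}

def equivalent_thickness_cm(material, lead_cm=1.0):
    """Thickness of `material` matching `lead_cm` of lead, density-only model."""
    return lead_cm * densities["lead"] / densities[material]

# ~4.7 cm of concrete per cm of lead on density alone; the question's quoted
# "6 cm" reflects lead's extra high-Z (photoelectric) advantage at low energy.
assert 4.0 < equivalent_thickness_cm("concrete") < 5.5
assert equivalent_thickness_cm("tungsten") < 1.0  # denser than lead
```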
Question for the student: why is lead such a common choice for this application? | {
"domain": "physics.stackexchange",
"id": 36209,
"tags": "electromagnetic-radiation, radiation, radioactivity"
} |
How is the equation of Mach number derived? | Question: Wikipedia states that for a pitot-static tachometer, the mach number for subsonic flow equates to
$$M = \sqrt{5\left[\left(\frac{p_t}{p_s}\right)^\frac{2}{7}-1\right]}.$$
How did they get to that result? Is there a derivation, or is it just from a polynomial fit of a tabulated set of data?
Update
I accepted J.G's answer after glancing at the referenced flight test document (a treasure in itself) and realising that $\frac {7}{5}$ is the same as 1.4, but there remains an issue.
Sadly I don't have my uni books anymore with Bernoulli's equation for compressible flow. The issue is with dynamic pressure: for incompressible flows we can take $p_d = \frac {1}{2} \cdot \rho \cdot V^2$, for compressible flow this is $p_d = \frac {1}{2} \cdot \gamma \cdot p_{static} \cdot M^2$.
Right? If I substitute this I don't get to the equation above. So the answer is unfortunately not accepted anymore.
Answer: The factors are not obvious, I agree.
For instance, for a polytrope index, $\gamma$, of 7/5 the exponent of 2/7 corresponds to a term of the form $\left( \tfrac{\gamma - 1}{\gamma} \right)$, which is our first hint. The second hint is that the pitot tube system can be applied to a Bernoulli system. The third thing to note is that for subsonic speeds, which is where a pitot tube actually functions, one can get away with assuming incompressible flow (I know it seems odd since things obviously do compress a little, but the effects can be considered secondary for most intents and purposes).
For a polytropic ideal gas, we know that $P \propto \rho^{\gamma}$. Thus, we can say that:
$$
P = \kappa \ P_{s} \ \rho^{\gamma} \tag{1}
$$
where $P_{s}$ is the static pressure (also can be considered the pressure at infinity). We can rewrite this equation in terms of density to find:
$$
\rho = \kappa^{-\frac{1}{\gamma}} \ \left( \frac{P}{P_{s}} \right)^{\frac{1}{\gamma}} \tag{2}
$$
The differential form of Bernoulli's equation can be given as:
$$
u \ du + \frac{ 1 }{ \rho } \ \frac{ d P }{ d \rho } \ d \rho = 0 \tag{3}
$$
and we know that the speed of sound is given by:
$$
\begin{align}
C_{s}^{2} & = \frac{ \partial P }{ \partial \rho } \tag{4a} \\
& = \gamma \ \kappa \ P_{s} \ \rho^{\gamma - 1} \tag{4b} \\
& = \frac{ \gamma \ P }{ \rho } \tag{4c}
\end{align}
$$
If we replace the $\rho$ in Equation 4b with the form shown in Equation 2, one can show that the 2nd term in Equation 3 can be rewritten as:
$$
\begin{align}
\frac{ 1 }{ \rho } \ \frac{ d P }{ d \rho } \ d \rho & = \frac{ \gamma \ \kappa \ P_{s} }{ \rho } \ \rho^{\gamma - 1} \ d \rho \tag{5a} \\
& = \frac{ \gamma \ \kappa \ P_{s} }{ \rho } \ \kappa^{-\frac{ \gamma - 1 }{ \gamma }} \ \left( \frac{ P }{ P_{s} } \right)^{\frac{ \gamma - 1 }{ \gamma }} \ d \rho \tag{5b} \\
& = \frac{ \gamma \ \ P_{s} }{ \rho } \ \kappa^{\frac{ 1 }{ \gamma }} \ \left( \frac{ P }{ P_{s} } \right)^{\frac{ \gamma - 1 }{ \gamma }} \ d \rho \tag{5c}
\end{align}
$$
If we differentiate Equation 2, we find:
$$
d \rho = \left( \frac{ \rho }{ \gamma \ P_{s} } \right) \ \left( \frac{ P }{ P_{s} } \right)^{-1} \ dP \tag{6}
$$
so that Equation 5c can be rewritten as:
$$
\frac{ \gamma \ \ P_{s} }{ \rho } \ \kappa^{\frac{ 1 }{ \gamma }} \ \left( \frac{ P }{ P_{s} } \right)^{\frac{ \gamma - 1 }{ \gamma }} \ d \rho = \kappa^{\frac{ 1 }{ \gamma }} \ \left( \frac{ P }{ P_{s} } \right)^{-\frac{ 1 }{ \gamma }} \ dP \tag{7}
$$
We define $u \ du \rightarrow C_{s}^{2} \ M \ dM$, thus we rewrite Equation 3 as:
$$
C_{s}^{2} \ M \ dM + \kappa^{\frac{ 1 }{ \gamma }} \ \left( \frac{ P }{ P_{s} } \right)^{-\frac{ 1 }{ \gamma }} \ dP = 0 \tag{8}
$$
We also define $\alpha = \tfrac{ P }{ P_{s} }$ so that $dP \rightarrow P_{s} \ d\alpha$. If we integrate Equation 8 with the limits ranging from $P_{s}$ to $P$, the change of variables makes the 2nd term go to:
$$
\kappa^{\frac{ 1 }{ \gamma }} \ P_{s} \int_{\alpha}^{1} \ d\alpha \ \alpha^{-\frac{ 1 }{ \gamma }} = \left[ \frac{ \gamma \ \kappa^{\frac{ 1 }{ \gamma }} \ P_{s} }{ \gamma - 1 } \ \alpha^{\frac{ \gamma - 1 }{ \gamma }} \right]_{\alpha}^{1} \tag{9}
$$
Thus, Equation 8 can be rewritten as:
$$
0 = \frac{ 1 }{ 2 } C_{s}^{2} \ M^{2} - \frac{ \gamma \ \kappa^{\frac{ 1 }{ \gamma }} \ P_{s} }{ \gamma - 1 } \left[ \alpha^{\frac{ \gamma - 1 }{ \gamma }} - 1\right] \tag{10}
$$
which after some algebra reduces to:
$$
M^{2} = \frac{ 2 }{ \gamma - 1 } \left[ \alpha^{\frac{ \gamma - 1 }{ \gamma }} - 1\right] \tag{11}
$$
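A quick numerical check that Equation 11 reproduces the Wikipedia formula when $\gamma = 7/5$ (a Python sketch; the function name and sample ratio are my own):

```python
import math

def mach_from_pressure_ratio(pt_over_ps, gamma=7.0 / 5.0):
    """Subsonic Mach number from Equation 11, with alpha = p_t / p_s."""
    return math.sqrt(2.0 / (gamma - 1.0)
                     * (pt_over_ps ** ((gamma - 1.0) / gamma) - 1.0))

# For gamma = 7/5: 2/(gamma - 1) = 5 and (gamma - 1)/gamma = 2/7, so this
# reproduces the form M = sqrt(5 * ((p_t/p_s)**(2/7) - 1)) from the question.
ratio = 1.3
question_form = math.sqrt(5.0 * (ratio ** (2.0 / 7.0) - 1.0))
assert abs(mach_from_pressure_ratio(ratio) - question_form) < 1e-12
```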
As stated above, for $\gamma$ = 7/5, this results in the form about which you are concerned. | {
"domain": "physics.stackexchange",
"id": 41884,
"tags": "homework-and-exercises, fluid-dynamics, acoustics, speed, shock-waves"
} |
amcl publish frequency map-> odom | Question:
Hello! in my tf tree I have that between /map and /odom there's amcl that publish at 20Hz. How to increase this speed rate?
Originally posted by alex920a on ROS Answers with karma: 35 on 2014-09-05
Post score: 0
Answer:
It depends on rate of incoming laser messages.
Originally posted by bvbdort with karma: 3034 on 2014-09-05
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by alex920a on 2014-09-05:
ok so where do I have to look? In the robot's urdf where is specified the laserscan? Or somewhere else?
Comment by bvbdort on 2014-09-05:
its from your laser driver. or you can check your laser message frequency by using rostopic hz /lasertopicname
Comment by alex920a on 2014-09-05:
ok perfect! I'll check ! thanks!
Comment by alex920a on 2014-09-07:
where do I have to modify in order to increase frequency? In the urdf file I have :
<sick_laser_v0 name="laser" parent="base" ros_topic="scan"
update_rate="30" min_angle="-0.97" max_angle="0.97" l_x="0.5" l_y="0.0" l_z="0.25" l_r="0.0" l_p="0.0" l_yaw="0.0"/>
Comment by bvbdort on 2014-09-07:
I guess you are using a simulator; I have no idea about Gazebo. Try changing update_rate. Are you getting 30 Hz for rostopic hz /lasertopicname?
Comment by alex920a on 2014-09-07:
Yes I'm using Gazebo. Still 20 hz after modifying update_rate to 30
Comment by alex920a on 2014-09-09:
Other comments?
Comment by bvbdort on 2014-09-09:
may be you are not modifying right file. try modifying update rate in gazebo_ros_laser_controller plugin | {
"domain": "robotics.stackexchange",
"id": 19310,
"tags": "ros, navigation, frequency, rate, amcl"
} |
A question on the electromagnetic field tensor | Question: Consider
\begin{equation}
\delta \left(F^{\mu\nu}F_{\mu\nu}\right)=2F^{\mu\nu}\delta F_{\mu\nu}
\end{equation}
I am trying to convince myself that the above holds for any arbitrary explicit form of $F_{\mu\nu}$. I was able to see that the equation holds in cartesian coordinates by explicitly writing it out. However, I don't think it is sufficient to say that since the equation holds in cartesian coordinates, the equation is then true for any coordinates. Moreover, the metric tensor need not be diagonal so the components of the transformed rank 2 tensor might not contain terms present in the respective components of the original tensor.
Is there a proper and easy to understand explanation to why the equation is always true, which can also be applied to other objects such as $\partial_{\mu}\phi\partial^{\mu}\phi$ ?
Answer: If $\delta$ commutes with metric tensors in addition to obeying the Leibniz rule,$$\begin{align}\delta(F^{\mu\nu}F_{\mu\nu})&=g^{\mu\rho}g^{\nu\sigma}\delta(F_{\rho\sigma}F_{\mu\nu})\\&=g^{\mu\rho}g^{\nu\sigma}\delta F_{\rho\sigma}F_{\mu\nu}+g^{\mu\rho}g^{\nu\sigma}F_{\rho\sigma}\delta F_{\mu\nu}\\&= \delta F_{\rho\sigma}F^{\rho\sigma}+F^{\mu\nu}\delta F_{\mu\nu}\\&= \delta F_{\mu\nu}F^{\mu\nu}+F^{\mu\nu}\delta F_{\mu\nu},\end{align}$$where in the last line we relabel indices in one term. Note to get this as $2F^{\mu\nu}\delta F_{\mu\nu}$ we technically also need commutativity (for example, you won't get the result you requested if $F$ is Grassmann-valued.) You're welcome to repeat this in the much simpler case of $\delta(\partial_\mu\phi\partial^\mu\phi)$ as an exercise. | {
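The index-relabelling step can also be sanity-checked numerically with an arbitrary, non-diagonal symmetric metric. A sketch with numpy, where `g_inv` plays the role of $g^{\mu\nu}$ (the specific random matrices are just illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

A = rng.normal(size=(4, 4))
g_inv = A + A.T + 8 * np.eye(4)   # arbitrary symmetric "metric" g^{mu nu}

F = rng.normal(size=(4, 4))
F = F - F.T                       # antisymmetric F_{mu nu}
dF = rng.normal(size=(4, 4))
dF = dF - dF.T                    # antisymmetric variation delta F_{mu nu}

def raise_both(T):
    # T^{mu nu} = g^{mu rho} g^{nu sigma} T_{rho sigma}
    return np.einsum('mr,ns,rs->mn', g_inv, g_inv, T)

# The two Leibniz-rule terms agree after relabelling dummy indices:
term1 = np.einsum('mn,mn->', raise_both(dF), F)   # (delta F)^{mu nu} F_{mu nu}
term2 = np.einsum('mn,mn->', raise_both(F), dF)   # F^{mu nu} (delta F)_{mu nu}
```

The equality of `term1` and `term2` relies only on the symmetry of `g_inv`, exactly as in the derivation above.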
"domain": "physics.stackexchange",
"id": 81396,
"tags": "electromagnetism, metric-tensor, tensor-calculus, variational-calculus"
} |
ROS2: Using parameters in a class (Python) - No output before Ctrl+C | Question:
Hello,
I'm new to ROS2 but have previously worked with ROS Kinetic. I have built ROS2 Foxy on 64-bit Windows 10 and am currently going through the tutorial Tutorial: Using parameters in a class (Python), which involves declaring a parameter and then changing it dynamically via terminal and launch file. The node runs fine with ros2 run python_parameters param_talker and I can change the parameter in a terminal, but when setting the parameter through a launch file, it seems to completely stall and won't output anything via self.get_logger().info(). That is, until I hit Ctrl+C, at which point the node spins once with the new parameter before termination.
Output:
C:\dev_ws>ros2 launch python_parameters python_parameters_launch.py
[INFO] [launch]: Default logging verbosity is set to INFO
[INFO] [param_talker.EXE-1]: process started with pid [3264]
[WARNING] [launch]: user interrupted with ctrl-c (SIGINT)
[ERROR] [param_talker.EXE-1]: process has died [pid 3264, exit code 3221225786, cmd 'C:\dev_ws\install\lib\python_parameters\param_talker.EXE --ros-args -r __node:=custom_parameter_node --params-file C:\Users\h25902\AppData\Local\Temp\launch_params_c0x3kned'].
[param_talker.EXE-1] [INFO] [1600066038.332151500] [custom_parameter_node]: Hello earth!
Node (\dev_ws\src\python_parameters\python_parameters\python_parameters_node.py):
import rclpy
import rclpy.node
from rclpy.exceptions import ParameterNotDeclaredException
from rcl_interfaces.msg import ParameterType
class MinimalParam(rclpy.node.Node):
def __init__(self):
super().__init__('minimal_param_node')
timer_period = 2 # seconds
self.timer = self.create_timer(timer_period, self.timer_callback)
self.declare_parameter("my_parameter")
def timer_callback(self):
# First get the value parameter "my_parameter" and get its string value
my_param = self.get_parameter("my_parameter").get_parameter_value().string_value
# Send back a hello with the name
self.get_logger().info('Hello %s!' % my_param)
# Then set the parameter "my_parameter" back to string value "world"
my_new_param = rclpy.parameter.Parameter(
"my_parameter",
rclpy.Parameter.Type.STRING,
"world"
)
all_new_parameters = [my_new_param]
self.set_parameters(all_new_parameters)
def main():
rclpy.init()
node = MinimalParam()
rclpy.spin(node)
if __name__ == '__main__':
main()
Setup.py (\dev_ws\src\python_parameters\setup.py):
from setuptools import setup
import os
from glob import glob
package_name = 'python_parameters'
setup(
name=package_name,
version='0.0.0',
packages=[package_name],
data_files=[
('share/ament_index/resource_index/packages',
['resource/' + package_name]),
('share/' + package_name, ['package.xml']),
(os.path.join('share', package_name), glob('launch/*_launch.py'))
],
install_requires=['setuptools'],
zip_safe=True,
maintainer='Todo Todo',
maintainer_email='todo.todo@todo.fi',
description='Python parameter tutorials',
license='TODO: LICENSE DECLARATION',
tests_require=['pytest'],
entry_points={
'console_scripts': [
'param_talker = python_parameters.python_parameters_node:main',
],
},
)
Launch file (\dev_ws\src\python_parameters\launch\python_parameters_launch.py):
from launch import LaunchDescription
from launch_ros.actions import Node
def generate_launch_description():
return LaunchDescription([
Node(
package="python_parameters",
executable="param_talker",
name="custom_parameter_node",
output="screen",
emulate_tty=True,
parameters=[
{"my_parameter": "earth"}
]
)
])
Does anybody have previous experiences/any reason what's causing this? I tried the C++ version of the tutorial (which works fine), but I'd prefer using Python with ROS2 later on.
Thank you in advance!
EDIT: This appears to be the case for all launch files created in a Python package. Any thoughts?
Originally posted by SSar on ROS Answers with karma: 53 on 2020-09-14
Post score: 0
Answer:
on windows you have to:
set "RCUTILS_LOGGING_BUFFERED_STREAM=1"
It is described here:
https://index.ros.org/doc/ros2/Tutorials/Logging-and-logger-configuration/
Originally posted by aalexanderr with karma: 46 on 2020-09-14
This answer was ACCEPTED on the original site
Post score: 1
Original comments
Comment by SSar on 2020-09-15:
That did it. Thanks! | {
"domain": "robotics.stackexchange",
"id": 35531,
"tags": "ros, ros2, tutorials"
} |
forcing decision tree use specific features first | Question: My goal is to force a specific feature to be used first when splitting the tree. Below, the fitted tree splits on feature_3 first. Is there a way to force it to use feature_2 first instead of feature_3?
from sklearn import datasets
from sklearn.tree import DecisionTreeClassifier
from sklearn import tree
iris = datasets.load_iris()
X = iris.data
y = iris.target
fit = DecisionTreeClassifier(max_leaf_nodes=3, random_state=0).fit(X,y)
text_representation = tree.export_text(fit)
print('Graph')
print(text_representation)
Answer: If you want to force your own split (your own segmentation of the data), split the data yourself and build separate trees. This will allow each tree to split, optimize, build to the proper depth, regularization, etc. for each segment.
Then your scoring routine looks at your segmentation then uses the appropriate tree.
I use this technique when I believe (research by SMEs and data) that the segments are different enough - often even have different data available to each - that makes this extra effort worthwhile. I do not segment and build the models, then compare to segmented models to check which gives me the performance I need. | {
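As a sketch of that segment-then-fit idea on the iris data from the question (the threshold on feature_2 is just an illustrative choice, not a recommendation):

```python
import numpy as np
from sklearn import datasets
from sklearn.tree import DecisionTreeClassifier

iris = datasets.load_iris()
X, y = iris.data, iris.target

# Impose the "first split" yourself: segment on feature_2 at a chosen
# threshold, then let each segment grow its own tree.
threshold = 2.45
left = X[:, 2] <= threshold
tree_left = DecisionTreeClassifier(max_leaf_nodes=3, random_state=0).fit(X[left], y[left])
tree_right = DecisionTreeClassifier(max_leaf_nodes=3, random_state=0).fit(X[~left], y[~left])

def predict(X_new):
    # Route each row to the tree of its segment.
    out = np.empty(len(X_new), dtype=y.dtype)
    mask = X_new[:, 2] <= threshold
    if mask.any():
        out[mask] = tree_left.predict(X_new[mask])
    if (~mask).any():
        out[~mask] = tree_right.predict(X_new[~mask])
    return out
```

Each subtree can then be tuned (depth, regularization) independently for its segment, which is the main payoff of doing the split manually.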
"domain": "datascience.stackexchange",
"id": 10814,
"tags": "decision-trees"
} |
How can I determine transmission/reflection coefficients for light? | Question: When light rays reflect off a boundary between two materials with different indices of refraction, a lot of the sources I've seen (recently) don't discuss the relation between the amplitude (or equivalently, intensity) of the transmitted/reflected rays and the original ray. Mostly they just discuss the phase difference induced by the reflection, for instance to calculate thin film interference effects.
Is it possible to calculate the transmission coefficient $T$ and reflection coefficient $R$ based on other optical properties of the materials, such as the index of refraction? Or do they need to be looked up from a reference table?
Answer: In addition to Fresnel equations, and in response to your question regarding the "... relation between the amplitude of the transmitted/reflected rays and the original ray":
$$T_{\parallel}=\frac{2n_{1}\cos\theta_{i}}{n_{2}\cos\theta_{i}+n_{1}\cos\theta_{t}}A_{\parallel}$$
$$T_{\perp}=\frac{2n_{1}\cos\theta_{i}}{n_{1}\cos\theta_{i}+n_{2}\cos\theta_{t}}A_{\perp}$$
$$R_{\parallel}=\frac{n_{2}\cos\theta_{i}-n_{1}\cos\theta_{t}}{n_{2}\cos\theta_{i}+n_{1}\cos\theta_{t}}A_{\parallel}$$
$$R_{\perp}=\frac{n_{1}\cos\theta_{i}-n_{2}\cos\theta_{t}}{n_{1}\cos\theta_{i}+n_{2}\cos\theta_{t}}A_{\perp}$$
where $A_{\parallel}$ and $A_{\perp}$ are the parallel and perpendicular components of the amplitude of the electric field for the incident wave, respectively, and accordingly for $T$ (transmitted wave) and $R$ (reflected wave). I think the notation is straightforward to understand. This set of equations is also called the Fresnel equations (there are three or four representations).
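For a quick numerical feel, here is a sketch that evaluates these amplitude ratios (per unit incident amplitude); at normal incidence from air (n1 = 1) into glass (n2 = 1.5) the reflected intensity fraction comes out to the familiar value of about 4%:

```python
import math

def fresnel_amplitudes(n1, n2, theta_i):
    """Amplitude ratios R/A and T/A for parallel and perpendicular polarization."""
    theta_t = math.asin(n1 * math.sin(theta_i) / n2)  # Snell's law
    ci, ct = math.cos(theta_i), math.cos(theta_t)
    t_par  = 2 * n1 * ci / (n2 * ci + n1 * ct)
    t_perp = 2 * n1 * ci / (n1 * ci + n2 * ct)
    r_par  = (n2 * ci - n1 * ct) / (n2 * ci + n1 * ct)
    r_perp = (n1 * ci - n2 * ct) / (n1 * ci + n2 * ct)
    return r_par, r_perp, t_par, t_perp

# Normal incidence, air -> glass:
r_par, r_perp, t_par, t_perp = fresnel_amplitudes(1.0, 1.5, 0.0)
reflectance = r_par ** 2  # intensity fraction reflected, ~0.04
```

Note that for intensities the transmitted term carries a factor $n_2 \cos\theta_t / (n_1 \cos\theta_i)$, so that reflected and transmitted intensity fractions sum to one.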
"domain": "physics.stackexchange",
"id": 19,
"tags": "optics, electromagnetic-radiation, reflection, refraction"
} |
Complexity of Computing Lexicographically Minimal Element of Orbit | Question: Given strong generators for a group $(G \leq S_n, *)$ acting on bitstrings of length $n$ and an element $s \in \{0, 1\}^n$, how hard is it to compute the lexicographically minimal element of $G.s$, the orbit of $s$ in $G$?
Answer: This problem is $FP^{NP}$-complete, as shown here.
This means that the lexicographically minimal element of the orbit can be computed in deterministic polynomial time with access to an $NP$ oracle.
"domain": "cstheory.stackexchange",
"id": 3963,
"tags": "permutations, gr.group-theory, complexity"
} |
Theoretically, Which d orbital participates in sp3d and sp3d2? | Question: This question is regarding the old way (inaccurate) $\ce{PCl5}$ and $\ce{SF6}$ are taught in elementary chemistry.
In the crystal field splitting diagram for trigonal bipyramidal geometry, $\mathrm{d}_{z^2}$ has the highest energy. Now, since $\mathrm{s}$, $\mathrm{p}$, $\mathrm{d}$ of the same shell hybridise, the $\mathrm{d}$ orbital closest in energy to $\mathrm{s}$ and $\mathrm{p}$ should take part, which should be $\mathrm{d}_{xz}$ or $\mathrm{d}_{yz}$. Instead, the hybridisation is said to be $\mathrm{sp^3d}_{z^2}$.
A similar problem arises for molecules with an octahedral shape. It has $\mathrm{d}_{x^2-y^2}$ and $\mathrm{d}_{z^2}$ which are higher in energy than the rest and have higher energy difference with $\mathrm{s}$ and $\mathrm{p}$ orbitals of the same shell. There is no such problem for $\mathrm{d^2sp^3}$, $\mathrm{d^3s}$, $\mathrm{dsp^2}$.
EDIT: Let me change the question to ask the same thing. In outer shell octahedral complexes ($\mathrm{s,p,d}$ of same principal quantum numbers take part) why are the $\mathrm{e_g}$ orbitals taking part in sigma bond formation when they are furthest away from $n\mathrm{s}$ and $n\mathrm{p}$ orbitals?
There is no such problem for inner shell complexes.
Answer: The answer to this lies in the symmetry of the orbitals involved.
Naturally, an octahedral complex will have the point group $O_\mathrm{h}$ — the octahedral point group. In molecular orbital theory, we will first need to work out how the ligand group orbitals transform; then we can find out which metal orbitals transform identically, i.e. belong to the same irreducible representation; only these will form bonds with each other. To arrive at the group orbitals’ irreducible representations, we need to see how they transform under the given symmetry operations of the group; thankfully, a copy of the character table can be found on meta (Attention: MathJax heavy link! Give the page about 30 seconds time to load!) Since your diagram does not include π interactions, I will ignore them, too.
\begin{array}{lcccccccccc}\hline
\text{Irrep} & E & 8 C_3 & 6 C_2 & 6 C_4 & 3 C_4^2 & i & 6 S_4 & 8 S_6 & 3 \sigma_\mathrm{h} & 6 \sigma_\mathrm{d} \\ \hline
6 \sigma_{\ce{L\bond{->}M}} & 6 & 0 & 0 & 2 & 2 & 0 & 0 & 0 & 4 & 2 \\ \hline\end{array}
We now need to use the reduction formula to determine how many times each irreducible representation is part of this reducible representation.
$$n = \frac 1h \sum_{\chi} N \chi_\mathrm{r} \chi_\mathrm{i}\tag{1}$$
Wherein $n$ is the number of times an irreducible representation is included, $N$ is the coefficient of a symmetry operation group, $h$ is the sum of all coefficients ($48$ in $O_\mathrm{h}$) and $\chi$ is the character of the irreducible or reducible representation, respectively. Thankfully, a large number of reducible group characters end up as $0$, easing our calculations. This should be the result:
$$\begin{align}n(\mathrm{a_{1g}}) &= \frac 1{48} (6 + 2 \times 6 \times 1 + 2 \times 3 \times 1 + 4 \times 3 \times 1 + 2 \times 6 \times 1) &&= 1\\
n(\mathrm{a_{2g}}) &= \frac 1{48} (6 + 2 \times 6 \times (-1) + 2 \times 3 \times 1 + 4 \times 3 \times 1 + 2 \times 6 \times (-1)) &&= 0\\
n(\mathrm{e_g}) &= \frac 1{48} (12 + 2 \times 6 \times 0 + 2 \times 3 \times 2 + 4 \times 3 \times 2 + 2 \times 6 \times 0) &&= 1\\
n(\mathrm{t_{1g}}) &= \frac 1{48} (18 + 2 \times 6 \times 1 + 2 \times 3 \times (-1) + 4 \times 3 \times (-1) + 2 \times 6 \times (-1)) &&= 0\\
n(\mathrm{t_{2g}}) &= \frac 1{48} (18 + 2 \times 6 \times (-1) + 2 \times 3 \times (-1) + 4 \times 3 \times (-1) + 2 \times 6 \times 1) &&= 0\\
n(\mathrm{a_{1u}}) &= \frac 1{48} (6 + 2 \times 6 \times 1 + 2 \times 3 \times 1 + 4 \times 3 \times (-1) + 2 \times 6 \times (-1)) &&= 0\\
n(\mathrm{a_{2u}}) &= \frac 1{48} (6 + 2 \times 6 \times (-1) + 2 \times 3 \times 1 + 4 \times 3 \times (-1) + 2 \times 6 \times 1) &&= 0\\
n(\mathrm{e_u}) &= \frac 1{48} (12 + 2 \times 6 \times 0 + 2 \times 3 \times 2 + 4 \times 3 \times (-2) + 2 \times 6 \times 0) &&= 0\\
n(\mathrm{t_{1u}}) &= \frac 1{48} (18 + 2 \times 6 \times 1 + 2 \times 3 \times (-1) + 4 \times 3 \times 1 + 2 \times 6 \times 1) &&= 1\\
n(\mathrm{t_{2u}}) &= \frac 1{48} (18 + 2 \times 6 \times (-1) + 2 \times 3 \times (-1) + 4 \times 3 \times 1 + 2 \times 6 \times (-1)) &&= 0\end{align}$$
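These reductions are easy to check mechanically. A sketch in Python (the character-table rows are assumed to be transcribed correctly from the $O_\mathrm{h}$ table):

```python
from fractions import Fraction

# Class sizes in O_h, in the order used above:
# E, 8C3, 6C2, 6C4, 3C4^2, i, 6S4, 8S6, 3sigma_h, 6sigma_d
sizes = [1, 8, 6, 6, 3, 1, 6, 8, 3, 6]
sigma = [6, 0, 0, 2, 2, 0, 0, 0, 4, 2]  # reducible rep of the 6 sigma orbitals

# Character-table rows for a few irreducible representations:
irreps = {
    'a1g': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
    'eg':  [2, -1, 0, 0, 2, 2, 0, -1, 2, 0],
    't1u': [3, 0, -1, 1, -1, -3, -1, 0, 1, 1],
    't2g': [3, 0, 1, -1, -1, 3, -1, 0, -1, 1],
}

def reduce_count(chi_irr):
    # Equation (1): n = (1/h) * sum over classes of N * chi_red * chi_irr
    h = sum(sizes)  # 48 for O_h
    return Fraction(sum(N * r, 0) if False else sum(N * r * i for N, r, i in zip(sizes, sigma, chi_irr)), h)
```

Evaluating `reduce_count` reproduces the tallies above: 1 each for $\mathrm{a_{1g}}$, $\mathrm{e_g}$ and $\mathrm{t_{1u}}$, and 0 for $\mathrm{t_{2g}}$.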
We see that the six σ-symmetric ligand orbitals transform as $\mathrm{a_{1g} + e_g + t_{1u}}$ — this is also shown in your picture. Now that we have this settled, we need to ask ourselves which metal orbitals these irreducibles correspond to. $\mathrm{a_{1g}}$, the totally symmetric irreducible, is easy; it must be the $\mathrm{s}$ orbital. Furthermore, the character table tells us that $(x,y,z)$ transforms as $\mathrm{t_{1u}}$, which means that the $\mathrm{p}$ orbitals must transform as that. (You can verify this by performing the same exercise on the three central $\mathrm{p}$ orbitals.)
This leaves the $\mathrm{d}$ orbitals which — for symmetry reasons as there is no pentadegenerate irreducible — have to split up into at least a group of three and two ($\mathrm{t}_x$ and $\mathrm{e}_x$). Again, glancing at the character table tells us that $\mathrm{d}_{xy}, \mathrm{d}_{xz}$ and $\mathrm{d}_{yz}$ transform as $\mathrm{t_{2g}}$ and $\mathrm{d}_{x^2 - y^2}$ and $\mathrm{d}_{z^2}$ as $\mathrm{e_g}$. The first can easily be verified by the same exercise as for the $\mathrm{p}$ orbitals; the second not so easily due to the ‘unnatural’ shape of the $\mathrm{d}_{z^2}$ orbital. However, do trust the calculations that they are equivalent.
Therefore, it is only $\mathrm{d}_{z^2}$ and $\mathrm{d}_{x^2-y^2}$ that can interact with the ligand group orbitals and which therefore form the σ bonds as your scheme shows. And this also makes sense from a purely qualitative point of view: The $\mathrm{d}_{z^2}$ orbital is pointing towards the $z$-axis ligands (and the ring is somewhat interacting with the $x,y$-axis ligands) while each of the four lobes of $\mathrm{d}_{x^2-y^2}$ is pointing towards a ligand on the $x$ or $y$ axis. The $\mathrm{t_{2g}}$ orbitals are pointing in between the coordinate axis directions and therefore between the ligands and are thus nonbonding with respect to σ bonds. | {
"domain": "chemistry.stackexchange",
"id": 7290,
"tags": "orbitals, hybridization, crystal-field-theory"
} |
A tiny library for textual serialization of lists in Java | Question: Note: see the next iteration.
I have this tiny library for (de)serializing lists of objects. The requirement is that each object's state may be represented textually as a single line of text. Later on, the deserialization routine attempts to map each text line to the object it encodes.
Factory.java:
package net.coderodde.lists.serial;
import java.util.ArrayList;
import java.util.List;
import java.util.Scanner;
/**
* This class holds the methods for serializing/deserializng the lists. The
* convention used here is that every object can be serialized to a single line
* describing the state of that object. Upon deserialization, each row should
* gracefully produce the object with its recorded state. For instance, the
* format of comma-separated values (CSV) resembles this very well.
*
* @author Rodion "rodde" Efremov
* @version 1.6
*/
public class Factory {
/**
* Serializes all the elements. Each element should serialize to a
* single line, as the deserialization routine assumes that each line
* represents the state of exactly one element.
*
* @param <E> the actual element type.
* @param list the list of elements to serialize.
* @param serializer the element serializer.
* @return the string representation of the entire list.
*/
public static <E> String serialize(List<E> list,
LineSerializer<E> serializer) {
StringBuilder sb = new StringBuilder();
list.stream().forEach((element) -> {
sb.append(serializer.serialize(element)).append("\n");
});
return sb.toString();
}
/**
* Deserializes the entire text <code>text</code> to the list of elements
* being encoded. This routine assumes that each line describes a single
* element.
*
* @param <E> the actual element type.
* @param text the text to deserialize.
* @param deserializer the deserialization object.
* @return the list of elements.
*/
public static <E> List<E> deserialize(String text,
LineDeserializer<E> deserializer) {
return deserialize(text, deserializer, new ArrayList<>());
}
/**
* Deserializes the entire text <code>text</code> to the list of elements
* being encoded. This routine assumes that each line describes a single
* element.
*
* @param <E> the actual element type.
* @param text the text to deserialize.
* @param deserializer the deserialization object.
* @param list the list for holding the elements.
* @return the list of elements.
*/
public static <E> List<E> deserialize(String text,
LineDeserializer<E> deserializer,
List<E> list) {
if (list == null) {
return deserialize(text, deserializer);
}
list.clear();
Scanner scanner = new Scanner(text);
while (scanner.hasNextLine()) {
list.add(deserializer.deserialize(scanner.nextLine()));
}
return list;
}
}
LineSerializer.java:
package net.coderodde.lists.serial;
/**
* This interface defines the API for serializing an element to a string.
*
* @author Rodion "rodde" Efremov
* @version 1.6
* @param <E> the element type.
*/
@FunctionalInterface
public interface LineSerializer<E> {
/**
* Returns the textual representation of the input element.
*
* @param element the element to serialize.
* @return the textual representation of the input element.
*/
public String serialize(E element);
}
LineDeserializer.java:
package net.coderodde.lists.serial;
/**
* This interface defines the API for deserializing the elements from their
* textual representation.
*
* @author Rodion "rodde" Efremov
* @version 1.6
* @param <E> the element type.
*/
@FunctionalInterface
public interface LineDeserializer<E> {
/**
* Deserializes the element from its textual representation.
*
* @param text the string representing the state of the element.
* @return the actual, deserialized element.
*/
public E deserialize(String text);
}
Demo.java:
package net.coderodde.lists.serial;
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;
import java.util.Objects;
import java.util.Random;
public class Demo {
public static void main(final String... args) {
// Create.
List<Integer> input = getRandomInts(100, new Random());
// Serialize.
String text = Factory.serialize(input, (e) -> e.toString());
// Deserialize.
List<Integer> output =
Factory.deserialize(text,
(e) -> Integer.parseInt(e),
new LinkedList());
System.out.println("Input list size: " + input.size());
System.out.println("Input list type: " + input.getClass().getName());
System.out.println("Output list size: " + output.size());
System.out.println("Output list type: " + output.getClass().getName());
for (int i = 0; i < input.size(); ++i) {
if (!Objects.equals(input.get(i), output.get(i))) {
throw new IllegalStateException("Lists do not agree! :-[");
}
}
System.out.println("Lists agree! :-]");
}
private static List<Integer> getRandomInts(int size, Random random) {
List<Integer> ret = new ArrayList<>();
for (int i = 0; i < size; ++i) {
ret.add(random.nextInt());
}
return ret;
}
}
So, what do you think?
Answer: Factory is a terrible name for a class. What does it make? As best as I can tell, it doesn't make anything, it just de/serializes. Honestly, I don't think you even need it; you could easily put (de)serialize in the respective classes.
LineSerializer and LineDeserializer operate on Strings, which may very well contain newlines or carriage returns or both. I'd recommend renaming them to StringSerializer and StringDeserializer, respectively.
In serialize:
list.stream().forEach((element) -> {
sb.append(serializer.serialize(element)).append("\n");
});
Why are you making a stream just to iterate over a List? Use an advanced for (for (a : b)):
for (E element : list) {
sb.append(serializer.serialize(element)).append("\n");
}
Streams are designed to be used when you have to do lots of fancy stuff, like filter, map, and all that good stuff. Building one just to iterate over things is kinda a waste. Sure, it works, and it's fancy, but you may as well use the old methods.
Your method documentation is mostly good, and very complete. I like it. Keep it up. You should continue documenting even in the methods, however -- whenever you make an unintuitive choice, leave a note explaining why.
public static void main(final String... args) {
// ...
}
You know, I've never seen it written like this. It's always been String[] args, rather than final String... args. Functionally, they're the same, but just thought I'd point out that the convention seems to be an array, not varargs.
One big issue though: What if people want to have newlines in their object definitions? It'd be more extensible to take a List<String> and iterate over that, then maybe provide an overloaded method that takes a String and splits it along platform-specific newlines and calls the normal method on the resulting List. In pseudocode that looks awfully like Ruby:
def serialize(list_of_strings, serializer)
return list_of_strings.map { |string| serializer.serialize(string) }
end
def serialize(string, serializer)
return serialize(string.split(/\r?\n/), serializer)
end
You know, rereading the Ruby, I just realized that if you chose to use List<String>/List<T>s, it's a lot simpler: Just use Collection#stream, Stream#map, and Stream#toArray(IntFunction<A[]>). If you'd like an example of what I mean, I'd be happy to type it up. It's a little confusing to use.
"domain": "codereview.stackexchange",
"id": 14218,
"tags": "java, serialization"
} |
How to find the average half life of radioactive nuclide which undergoes two different decays? | Question:
Find the average life of a radio nuclide which decays by parallel paths,
\begin{align}
A &\rightarrow B\\
2A &\rightarrow B,
\end{align}
where the decay constants are $\lambda_1 = \pu{0.018 s-1}$ and $\lambda_2 = \pu{0.001 s-1}$, respectively.
I used the formula
$$t = \frac{t_1 t_2}{t_1 + t_2},$$
where $t$ represents the mean life. The equation can be written as:
$$\frac{1}{t} = \lambda_1 + \lambda_2$$
But I am getting the correct answer only if I take $\lambda_2$ as 2 times the given decay constant for the second reaction. Is this because of $\underline{2}A$ on the reactant side instead of $A$?
Answer:
But I am getting the correct answer only if I take $\lambda_2$ as $2$ times the given decay constant for the second reaction. Is this because of $\underline{2}A$ on the reactant side instead of $A$?
Yes.
The way I tend to approach half-life problems is to recast them as the relevant kinetic differential equations. Assuming the first-order kinetics as stated in the problem, the contribution of reaction 1 is
$$
\left(\frac{\mathrm{d}[A]}{\mathrm{d}t}\right)_1 = -\lambda_1[A],
$$
and that of reaction 2 is†
$$
\left(\frac{\mathrm{d}[A]}{\mathrm{d}t}\right)_2 = -2\lambda_2[A].
$$
Implicit in the above is the assumption that the first-order decay constants apply to the stoichiometry of the reactions as-written. As you correctly note, that extra factor of $2$ in the differential equation for reaction 2 is because two $A$ nuclei are involved per "unit" of reaction progression.
The total rate of loss of $A$ is the sum of the above:
$$
\frac{\mathrm{d}[A]}{\mathrm{d}t}
= -\lambda_1[A] -2\lambda_2[A]
= -\left(\lambda_1 + 2\lambda_2\right)[A].
$$
This is a straightforward first-order ODE, readily solved for the overall half-life by standard methods.
† While it seems quite unusual to me to have a second-order reaction ($2A\longrightarrow B$) with a first-order rate constant (units of $\mathrm{s}^{-1}$), for the sake of the problem I'm content to roll with it. I'm sure they cast the problem this way so that the calculus was straightforward.
As well, simultaneously having $A \longrightarrow B$ and $2A \longrightarrow B$ processes would seem to violate all kinds of conservation laws--the problem makes no physical sense as a result. Plus, I know of no second-order nuclear process that occurs outside of a particle accelerator. $2A \longrightarrow C$ would have been much less preposterous, even if implausible in the context of low-energy radioactive decay. | {
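Plugging in the given constants, a quick numerical sketch of the "standard methods" step:

```python
import math

lam1, lam2 = 0.018, 0.001   # s^-1, as given in the problem
k = lam1 + 2 * lam2         # effective first-order constant from the ODE above
mean_life = 1 / k           # tau = 50 s
half_life = math.log(2) / k # ~34.66 s
```

The factor of 2 on `lam2` is exactly the point the question stumbled on: two $A$ nuclei are consumed per unit of the second reaction.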
"domain": "chemistry.stackexchange",
"id": 17089,
"tags": "kinetics, radioactivity"
} |
How do I average 60 values published values from a node | Question:
Hello,
Is there a way I can take 60 values from a topic and calculate the average after every 60 values. I'm working with GPS and the accuracy is about 3 meters. So I'll get readings that are ~3 meters of where I'm actually at. Taking the average helps pin point my approximate location but I don't know a way to do it without copying and pasting my data into an excel sheet.
Thank you in advance.
Originally posted by SupermanPrime01 on ROS Answers with karma: 27 on 2017-05-15
Post score: 2
Answer:
The simplest way is to calculate a moving average. Here is some rather ugly code that calculates the average of the 60 most recent numbers received on a topic:
import rospy
from std_msgs.msg import Int32
history = []
def callback(msg):
global history
history.append(msg.data)
if len(history) > 60:
history = history[-60:]
average = sum(history) / float(len(history))
rospy.loginfo('Average of most recent {} samples: {}'.format(len(history), average))
n = rospy.init_node('moving_average')
s = rospy.Subscriber('/numbers', Int32, callback)
rospy.spin()
It's averaging integers but modifying it to average GPS values would be simple. Also note that it will immediately start producing an answer even before it gets 60 values. If you don't want that, then you should change it to not calculate an average unless the history length is 60.
This code uses pure Python so it's not that fast. You can do it faster using numpy, for which Stack Overflow is full of answers.
If you explicitly want to take only the last 60 samples, average them, and then wait for 60 more samples before producing another average, then you should only calculate an average if the history length is 60, and clear the history after calculating the average.
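As an aside, the windowing bookkeeping above can be written a bit more compactly with collections.deque, which discards old samples automatically (a ROS-independent sketch of the same logic):

```python
from collections import deque

class MovingAverage:
    def __init__(self, window=60):
        self.history = deque(maxlen=window)  # old samples fall off the left end

    def add(self, value):
        self.history.append(value)
        return sum(self.history) / len(self.history)
```

You would call `add` from the subscriber callback and log or publish the returned average.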
A better way to estimate robot pose than a simple moving average is to use a filter such as an Extended Kalman Filter. These are much more involved to write and use, but fortunately you're using ROS and ROS has packages to do this sort of thing for you! The robot_pose_ekf package is one of the most commonly used. It's intended to work with data from sensors such as odometry and IMUs, but it can be convinced to use GPS data. If you have other input data such as odometry then so much the better, because you will get a better position estimate.
Originally posted by Geoff with karma: 4203 on 2017-05-15
This answer was ACCEPTED on the original site
Post score: 3
Original comments
Comment by mgruhler on 2017-05-16:
Extending this: there is also the robot_localization package which is intended to use GPS data...
Comment by VictorLamoine on 2021-01-15:
Beware that this solution assumes that the rate of receiving the data is constant which is not always the case. It can lead to errors. Usually it makes more sense to keep values for a certain amount of time rather than a number of values. | {
"domain": "robotics.stackexchange",
"id": 27910,
"tags": "ros, gps, data"
} |
Cosine similarity between query and document confusion | Question: I am going through the Manning book for Information retrieval. Currently I am at the part about cosine similarity. One thing is not clear for me.
Let's say that I have the tf idf vectors for the query and a document. I want to compute the cosine similarity between both vectors.
When I compute the magnitude for the document vector, do I sum the squares of all the terms in the vector or just the terms in the query?
Here is an example: we have the user query "cat food beef".
Let's say its vector is (0,1,0,1,1). (Assume there are only 5 dimensions in the vector, one for each unique word in the query and the document.)
We have a document "Beef is delicious"
Its vector is (1,1,1,0,0). We want to find the cosine similarity between the query and the document vectors.
Answer: You want to use all of the terms in the vector.
In your example, where your query vector $\mathbf{q} = [0,1,0,1,1]$ and your document vector $\mathbf{d} = [1,1,1,0,0]$, the cosine similarity is computed as
similarity $= \frac{\mathbf{q} \cdot \mathbf{d}}{||\mathbf{q}||_2 ||\mathbf{d}||_2} = \frac{0\times1+1\times1+0\times1+1\times0+1\times0}{\sqrt{1^2+1^2+1^2} \times \sqrt{1^2+1^2+1^2}} = \frac{0+1+0+0+0}{\sqrt{3}\sqrt{3}} = \frac{1}{3}$ | {
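The same computation in a few lines of numpy (a sketch):

```python
import numpy as np

q = np.array([0, 1, 0, 1, 1])  # query:    "cat food beef"
d = np.array([1, 1, 1, 0, 0])  # document: "Beef is delicious"

# Dot product over ALL vector components, divided by both full norms:
similarity = q @ d / (np.linalg.norm(q) * np.linalg.norm(d))  # -> 1/3
```

Note that the norms run over every component of each vector, not just the query terms, which is exactly the point of the answer above.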
"domain": "datascience.stackexchange",
"id": 2167,
"tags": "similarity, information-retrieval, ranking, cosine-distance, similar-documents"
} |
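The worked cosine-similarity example in the entry above can be checked with a short script (illustrative, not part of the original post):

```python
# Quick check of the worked example: cosine similarity between the
# query vector and document vector, using ALL terms of each vector.
import math

def cosine_similarity(q, d):
    dot = sum(qi * di for qi, di in zip(q, d))
    norm_q = math.sqrt(sum(qi * qi for qi in q))
    norm_d = math.sqrt(sum(di * di for di in d))
    return dot / (norm_q * norm_d)

query = [0, 1, 0, 1, 1]   # "cat food beef"
doc   = [1, 1, 1, 0, 0]   # "Beef is delicious"
sim = cosine_similarity(query, doc)   # 1 / (sqrt(3) * sqrt(3)) = 1/3
```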
States of $^{18}$O using nuclear shell model | Question: Using the nuclear shell model it can be proved that in the external shell $1d_{5/2}$ of $^{18}$O there are two neutrons. Thus, by composition of angular momentum, the possible states of the nucleus are:
$I^P$ = $0^+$ , $1^+$, $2^+$, $3^+$, $4^+$, $5^+$.
During the lecture my professor said that experimentally we can find only the states $0^+$ , $2^+$, $4^+$ and this lack can be explained through a reasoning based on isospin.
Why the states $1^+$, $3^+$, $5^+$ are prohibited?
Answer: You (your professor) have selected a case where $^{16}O$ is a magic nucleus and works as a core for the games of the orbitals above, so it is much easier to make strong statements like "it can be proved that in the external shell $1d_{5/2}$ there are two nucleons in $^{18}O$". We could start a discussion here...
Nevertheless: you cannot find $5^+$; it would be against the Pauli principle (same projections $m=+5/2$). Wouldn't it?
More generally, for identical nucleons, the isospin projection is $T_z = t_z(1) + t_z(2) = 1$ and hence the total isospin is $T=1$. This is a symmetric wavefunction (look at the deuteron and the pp and nn cases). This implies (from the Pauli principle) that the space-spin part must be antisymmetric ....
$\Psi(j^2JM) = N \sum_{m,m'}{ (jmjm'|jjJM) [\phi_1(m)\phi_2(m')- \phi_1(m')\phi_2(m)]}$
can be rewritten as
$\Psi(j^2JM) = N \sum_{m,m'}{ (jmjm'|jjJM) - (jm'jm|jjJM) )\ \phi_1(m)\phi_2(m')}$
and for these two Clebsch-Gordan coefficients there is a symmetry rule
$\Psi(j^2JM) = N [1-(-1)^{2j-J}] \sum_{m,m'}{ (jmjm'|jjJM) \ \phi_1(m)\phi_2(m')}$
In your case $2j=5$ is odd, so $J$ must be even, else the right side vanishes. And here you have the possible total angular momenta $J=0,2,\dots,(2j-1)$, with pure isospin $T=1$ as a gift. The $p$-$n$ interaction can have both isospins, which ends up with some mixing when $j_p \ne j_n$.
Check Casten's book (I forgot the title, but it is quite famous). | {
"domain": "physics.stackexchange",
"id": 39878,
"tags": "nuclear-physics, isospin-symmetry"
} |
Can the Schrodinger equation describe planetary motion? | Question: I was asked on an exam whether the Schrodinger equation can be used to describe planetary motion and my answer was "No, because the solutions are wavefunctions which give probabilities but everything can be exactly measured for large objects."
Then I read this article which suggested it should be possible. What would the Hamiltonian be, and how do we get definite results instead of probabilities?
Answer: Yes, you can construct classical orbits from the Schrodinger equation, as long as you take the right limit. For example, consider the hydrogen atom. While the lower energy levels, like the $1s$ or $2p$ look nothing like classical orbits, you may construct wavefunctions that do by superposing solutions with high $n$ appropriately. Such solutions obey the Schrodinger equation with Hamiltonian $H = p^2/2m - e^2/r$, but have a sharp peak, which orbits the nucleus in a circular or elliptical trajectory. For very high $n$ the peak can get extremely sharp, so the position is definite for all practical purposes.
Heuristically this works because, given a state that is a superposition of states with different $\ell$'s and $m$'s but the same $n$, the coefficients of the $|n, \ell, m \rangle$ states inside are essentially a discrete Fourier transform of the position space wavefunction, with $O(n^2)$ entries. For higher $n$, you can use this to build sharper and sharper peaks. This reasoning is not even necessary; we know it has to work because everything is quantum, so there must be a way to reproduce classical results within quantum mechanics.
By treating a planet as a single particle, the same reasoning holds. However, as pointed out by other answers, this isn't the full story, because a planet is much more complicated than an electron. In fact this complication is essential, because if a planet and star were single particles in perfectly empty space, there's no particular reason the planet would end up in one of these classical-looking states with a sharply peaked position. For a real planet this is a consequence of decoherence: superpositions that don't have sharply peaked positions are not stable against interaction with the environment. That's how the classical world emerges.
I should also note that the link you gave is not an example of this. That article is about an astrophysical quantity that happens to be described by an equation with the same form as the Schrodinger equation, but absolutely nothing quantum is going on. It's just a coincidence, which arises because there aren't that many different low-order PDEs. If there is any reason at all, it's simply that we look for simple equations on both microscopic and macroscopic scales. | {
"domain": "physics.stackexchange",
"id": 49937,
"tags": "quantum-mechanics, orbital-motion, schroedinger-equation, solar-system"
} |
A question about electric fields and one of their formulas | Question: I have a question based on a task given by my physics textbook:
Two identical elongated objects with identical charge are put in front of each other as in the attached picture. The objects are 0.03 metre long and 0.02 metre wide each. The electric field between them is 8000 N/C. The question then is: how much charge should be given to these objects for that particular electric field to come about.
The answer is based on the formulas E=σ/εo (sorry, the o should be in subscript); and σ=q/A. I understand the logic fully and actually got the answer right, except for one detail: in the book they use the A of only one of the two charged objects (0.03 m x 0.02 m = 0.0006 m²) for the formula; whereas I doubled it to 0.0012 m², seeing that there are two objects. Given that the formula E=σ/εo was explained earlier in the book as an amalgamation of E=σ/2εo + σ/2εo = σ/εo, where each σ/2εo refers to one of two identical objects placed opposite each other with opposite charge (as in the exercise), this seemed like the logical thing to do.
Can anybody explain what I am getting wrong?
Answer: Each charged plate on its own creates a field with intensity $E=\frac{\sigma}{2 \varepsilon_0}=\frac{q}{2A\varepsilon_0}$. Combining both, with $q$ being the charge of each plate and $A$ the area of each plate, you get... | {
"domain": "physics.stackexchange",
"id": 51305,
"tags": "electric-fields"
} |
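For the electric-field exercise above, a quick numeric sketch of the substitution the answer sets up, assuming (as in the book) $E = \sigma/\varepsilon_0$ with $\sigma = q/A$ and $A$ the area of a single plate; the code is illustrative, not from the original post:

```python
# Hedged sketch: substituting into E = sigma/eps0 and sigma = q/A,
# with A the area of ONE plate, gives the charge on each plate.
eps0 = 8.854e-12   # vacuum permittivity, F/m
E = 8000.0         # field between the plates, N/C
A = 0.03 * 0.02    # area of a single plate, m^2 (= 6e-4 m^2)
sigma = E * eps0   # surface charge density, C/m^2
q = sigma * A      # charge on each plate, C (about 4.25e-11 C)
```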
Merge sort on an Integer class | Question: This is a specific case in merge sort. I'm trying to do a merge sort on an array that's created using the Java Integer class. My implementation is slow and therefore needs some modifications for better performance. I believe the process where I copy the original items to two new arrays over and over again is slowing it down.
How do I merge sort without copying? The sorting must be stable and both methods should return Integer[].
private static Integer[] mergeSort(Integer[] a, int p, int q)
{
if (a.length <= 1) return a;
int mid = (int)Math.floor((q-p)/2);
Integer[] left = new Integer[(mid - p) + 1];
Integer[] right = new Integer[q - mid];
int index = 0;
for (int i = 0; i < left.length; i++)
{
left[i] = a[index++];
}
for (int i = 0; i < right.length; i++)
{
right[i] = a[index++];
}
left = mergeSort(left, 0, left.length-1);
right = mergeSort(right, 0, right.length-1);
return merge(left, right);
}
private static Integer[] merge(Integer[] a, Integer[] b)
{
int i = 0; int j = 0; int k = 0;
Integer[] result = new Integer[a.length+b.length];
while (i < a.length || j < b.length)
{
if (i != a.length && j != b.length)
{
if (a[i].compareTo(b[j]) <= 0)
{
result[k++] = a[i++];
}
else
{
result[k++] = b[j++];
}
}
else if (i < a.length)
{
result[k++] = a[i++];
}
else if (j < b.length)
{
result[k++] = b[j++];
}
}
return result;
}
Answer: It was quite hard, but I figured it out. You can even pre-create the buffer with some value you know the mergesort will never exceed, and save some object creation overhead.
public class Mergesort
{
public static void main(String[] args){
int[] array = new int[]{1, 3, 5, 7, 9, 2, 4, 6, 8, 10};
array = mergeSort(array, 0, array.length-1);
for(int i = 0; i < array.length; i++){
System.out.print(array[i] + " ");
}
System.out.println();
}
private static int[] mergeSort(int[] a, int p, int q)
{
if (q-p < 1) return a;
int mid = (q+p)/2;
mergeSort(a, p, mid);
mergeSort(a, mid+1, q);
return merge(a, p, q, mid);
}
private static int[] merge(int[] a, int left, int right, int mid)
{
//buffer - in the worst case, we need to buffer the whole left partition
int[] buffer = new int[a.length / 2 + 1];
int bufferSize = 0;
int bufferIndex = 0;
int leftHead = left;
int rightHead = mid+1;
//we keep comparing unless we hit the boundary on either partition
while(leftHead <= mid && rightHead <= right){
//no data in the buffer - normal compare
if((bufferSize - bufferIndex) == 0){
//right is less than left - we overwrite left with right and store left in the buffer
if(a[leftHead] > a[rightHead]){
buffer[bufferSize] = a[leftHead];
a[leftHead] = a[rightHead];
bufferSize++;
rightHead++;
}
}
//some data in the buffer - we use buffer (instead of left) for comparison with right
else{
//right is less than buffer
if(buffer[bufferIndex] > a[rightHead]){
//we overwrite next left value, but must save it in the buffer first
buffer[bufferSize] = a[leftHead];
a[leftHead] = a[rightHead];
rightHead++;
bufferSize++;
}
//buffer is less than right = we use the buffer value (but save the overwriten value in the buffer)
else{
buffer[bufferSize] = a[leftHead];
a[leftHead] = buffer[bufferIndex];
bufferSize++;
bufferIndex++;
}
}
leftHead++;
}
//now we hit the end of either partition - now we have only two of them (buffer and either left or right)
//so we do traditional merge using these two
while(leftHead <= right && (bufferSize - bufferIndex) > 0){
if(rightHead <= right && a[rightHead] < buffer[bufferIndex]){
a[leftHead] = a[rightHead];
rightHead++;
}
else{
if(leftHead <= mid){
buffer[bufferSize] = a[leftHead];
bufferSize++;
}
a[leftHead] = buffer[bufferIndex];
bufferIndex++;
}
leftHead++;
}
return a;
}
}
I did not extensively test it, nor did I measure it. You can try that and post the results. | {
"domain": "codereview.stackexchange",
"id": 1514,
"tags": "java, performance, sorting, mergesort"
} |
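A footnote to the merge-sort entry above: the usual way to avoid the repeated copying the question asks about is to allocate one auxiliary buffer up front and merge through it. A sketch in Python (illustrative, not from the original Java post) of that textbook approach, which stays stable and allocates no arrays inside the recursion:

```python
# Sketch: merge sort with a single auxiliary buffer allocated once.
# Stability is preserved by taking from the left half on ties.

def merge_sort(a):
    aux = a[:]                       # single auxiliary buffer
    _sort(a, aux, 0, len(a) - 1)
    return a

def _sort(a, aux, lo, hi):
    if lo >= hi:
        return
    mid = (lo + hi) // 2
    _sort(a, aux, lo, mid)
    _sort(a, aux, mid + 1, hi)
    aux[lo:hi + 1] = a[lo:hi + 1]    # copy the range once, merge back
    i, j = lo, mid + 1
    for k in range(lo, hi + 1):
        if i > mid:
            a[k] = aux[j]; j += 1
        elif j > hi:
            a[k] = aux[i]; i += 1
        elif aux[j] < aux[i]:        # strict < keeps the sort stable
            a[k] = aux[j]; j += 1
        else:
            a[k] = aux[i]; i += 1

data = [4, 3, 6, 1, 2, 5, 8, 7]
merge_sort(data)
```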
Any comparison between transformer and RNN+Attention on the same dataset? | Question: I am wondering what is believed to be the reason for the superiority of the transformer.
I see that some people believe it is because of the attention mechanism used: it's able to capture much longer dependencies. However, as far as I know, you can also use attention with RNN architectures, as in the famous paper where attention was introduced (here).
I am wondering whether the only reason for the superiority of transformers is that they can be highly parallelized and trained on much more data?
Is there any experiment comparing transformers and RNN+attention trained on the exact same amount of data comparing the two?
Answer: If you go through the main introductory paper of the transformer ("Attention is all you need"), you can find a comparison of the model with other state-of-the-art machine translation methods:
For example, Deep-Att + PosUnk is a method that has utilized RNN and attention for the translation task. As you can see, the training cost for the transformer with self-attention is $2.3 \cdot 10^{19}$ (FLOPs) and $1.0 \cdot 10^{20}$ (FLOPs) for the "Deep-Att + PosUnk" method (the transformer is 4 times faster) on "WMT14 English-to-French" dataset.
Please note that the BLEU is a crucial factor here (not merely training cost). Hence, you can see the BLEU value of the transformer superior to the ByteNet (Neural Machine Translation in Linear Time). Although the ByteNet has not adopted the RNN, you can find the comparison of the ByteNet with other "RNN + Attention" methods in its original paper:
Hence, by the transitivity property of the BLEU score, you can find that the transformer has already outperformed other "RNN + Attention" methods in terms of the BLEU score (please check their performance on the "WMT14" dataset). | {
"domain": "ai.stackexchange",
"id": 2348,
"tags": "natural-language-processing, recurrent-neural-networks, long-short-term-memory, transformer, attention"
} |
Local versus non-local functionals | Question: I'm new to field theory and I don't understand the difference between a "local" functional and a "non-local" functional. Explanations that I find resort to ambiguous definitions of locality and then resort to a list of examples. A common explanation is that local functionals depend on the value of the integrand "at a single point."
For instance, this functional is given as local, $$ F_1[f(x)] = \int_{a}^{b} dx f(x) $$ but this functional is not $$ F_2[f(x)] = \int_{a}^{b} \int_{a}^{b} dx dx' f(x) K(x, x') f(x') $$
To further compound my confusion, some references (see Fredrickson, Equilibrium Theory of Inhomogeneous Polymers) state that gradients make a functional non-local (or I have even heard the term semi-local), whereas others (see Why are higher order Lagrangians called 'non-local'?) state that gradients do not make a functional non-local.
Is there a more rigorous definition of locality?
Answer: Yes, there are rigorous ways of defining locality in such contexts, but the precise terminology used unfortunately depends on both the context, and who is making the definition.
Let me give an example context and definition.
Example context/definition.
For conceptual simplicity, let $\mathcal F$ denote a set of smooth, rapidly decaying functions $f:\mathbb R\to \mathbb R$. A functional $\Phi$ on $\mathcal F$ is a function $\Phi:\mathcal F\to \mathbb R$.
A function (not yet a functional on $\mathcal F$) $\phi:\mathcal F\to\mathcal F$ is called local provided there exists a positive integer $n$, and a function $\bar\phi:\mathbb R^{n+1}\to\mathbb R$ for which
\begin{align}
\phi[f](x) = \bar\phi\big(x, f(x), f'(x), f''(x), \dots, f^{(n)}(x)\big) \tag{1}
\end{align}
for all $f\in \mathcal F$ and for all $x\in\mathbb R$. In other words, such a function is local provided it depends only on $x$, the value of the function $f$ at $x$, and the value of any finite number of derivatives of $f$ at $x$.
A functional $\Phi$ is called an integral functional provided there exists a function $\phi:\mathcal F\to\mathcal F$ such that
\begin{align}
\Phi[f] = \int_{\mathbb R} dx \, \phi[f](x). \tag{2}
\end{align}
An integral functional $\Phi$ is called local provided there exists some local function $\phi:\mathcal F\to\mathcal F$ for which $(2)$ holds.
What could we have defined differently?
Some authors might not allow for derivatives in the definition $(1)$, or might call something with derivatives semi-local. This makes intuitive sense because if you think of Taylor expanding a function, say, in single-variable calculus, you get
\begin{align}
f(x+a) = f(x) + f'(x)a + f''(x)\frac{a^2}{2} + \cdots,
\end{align}
and if you want $a$ to be large, namely if you want information about what the function is doing far from $x$ (non-local behavior), then you need more and more derivative terms to sense that. The more derivatives you consider, the more you sense the "non-local" behavior of the function.
One can also generalize to situations in which the functions involved are on manifolds, or are not smooth but perhaps only differentiable a finite number of times etc., but these are just details and I don't think illuminate the concept.
Example 1 - a local functional.
Suppose that we define a function $\phi_0:\mathcal F\to \mathcal F$ as follows:
\begin{align}
\phi_0[f](x) = f(x),
\end{align}
then $\phi_0$ is a local function $\mathcal F\to\mathcal F$, and it yields a local integral functional $\Phi_0$ given by
\begin{align}
\Phi_0[f] = \int_{\mathbb R} dx\, \phi_0[f](x) = \int_{\mathbb R} dx\, f(x),
\end{align}
which simply integrates the function over the real line.
Example 2 - another local functional.
Consider the function $\phi_a:\mathcal F\to\mathcal F$ defined as follows:
\begin{align}
\phi_a[f](x) = f(x+a).
\end{align}
Is this $\phi_a$ local? Well, for $a=0$ it certainly is since it agrees with $\phi_0$. What about for $a\neq 0$? Well for such a case $\phi_a$ certainly is not because $f(x+a)$ depends both on $f(x)$ and on an infinite number of derivatives of $f$ at $x$. What about the functional $\Phi_a$ obtained by integrating $\phi_a$? Notice that
\begin{align}
\Phi_a[f]
&= \int_{\mathbb R} dx\,\phi_a[f](x) \\
&= \int_{\mathbb R} dx\, f(x+a) \\
&= \int_{\mathbb R} dx\, f(x) \\
&= \int_{\mathbb R} dx\, \phi_0[f](x)\\
&= \Phi_0[f].
\end{align}
So $\Phi_a[f]$ is local even though $\phi_a$ is not for $a\neq 0$.
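A quick numerical check of this identity: a midpoint-rule approximation of both integrals for a rapidly decaying $f$ (the Gaussian below is just an illustrative choice):

```python
# Numeric illustration of Example 2: integrating f(x+a) over the real
# line gives the same value as integrating f(x), so Phi_a = Phi_0 even
# though the integrand phi_a is non-local for a != 0. The integral is a
# simple midpoint Riemann sum over a wide interval.
import math

def f(x):
    return math.exp(-x * x)   # smooth, rapidly decaying

def integrate(g, lo=-50.0, hi=50.0, n=100_000):
    h = (hi - lo) / n
    return sum(g(lo + (k + 0.5) * h) for k in range(n)) * h

phi_0 = integrate(f)                       # Phi_0[f]
phi_a = integrate(lambda x: f(x + 3.0))    # Phi_a[f] with a = 3
# Both equal sqrt(pi) up to discretization error.
```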
The lesson of this example is this: you may encounter an integral functional $\Phi:\mathcal F\to\mathbb R$ that is defined by integrating over a non-local function $\phi:\mathcal F\to\mathcal F$. However, there might still be a way of writing the functional $\Phi$ as the integral over a different function, say $\phi'$, that is local, in which case we can assert that $\Phi$ is local as well because to verify that a functional is local, you just need to find one way of writing it as the integral of a local function. | {
"domain": "physics.stackexchange",
"id": 14619,
"tags": "lagrangian-formalism, field-theory, locality, non-locality"
} |
Is this legitimate for building a JSON file? | Question: I have written a unit / component in Delphi which helps make it easy to write JSON data. I know there are already existing ways to do this, but the ways I've used are too awkward. I have used an XML writer in Delphi before (not sure exactly what one) but I liked the way it worked, and re-created a JSON version. It's very lightweight, and I'd like to know if I'm missing anything important.
PS - I know there is an unimplemented property LineBreak but that's to come later...
unit JsonBuilder;
interface
uses
Windows, Classes;
type
TJsonBuilder = class(TComponent)
private
FLines: TStringList;
FIndent: Integer;
FLineBreak: Boolean;
FHierarchy: Integer;
FNeedComma: Boolean;
function GetLines: TStrings;
procedure SetIndent(const Value: Integer);
procedure SetLineBreak(const Value: Boolean);
procedure SetLines(const Value: TStrings);
function GetAsJSON: String;
procedure SetAsJSON(const Value: String);
procedure WriteLine(const S: String);
procedure WriteStartProperty(const Name: String);
procedure UpdateStat(const H: Integer; const C: Boolean);
function GetIndent: String;
public
constructor Create(AOwner: TComponent); override;
destructor Destroy; override;
procedure Clear;
procedure WriteStartDoc;
procedure WriteEndDoc;
procedure WriteStartObject; overload;
procedure WriteStartObject(const Name: String); overload;
procedure WriteEndObject;
procedure WriteStartArray; overload;
procedure WriteStartArray(const Name: String); overload;
procedure WriteEndArray;
procedure WriteString(const Name, Value: String);
procedure WriteInteger(const Name: String; const Value: Integer);
procedure WriteBoolean(const Name: String; const Value: Boolean);
procedure WriteDouble(const Name: String; const Value: Double);
procedure WriteCurrency(const Name: String; const Value: Currency);
procedure WriteStream(const Name: String; AStream: TStream);
procedure WriteBreak;
property AsJSON: String read GetAsJSON write SetAsJSON;
property Lines: TStrings read GetLines write SetLines;
published
property Indent: Integer read FIndent write SetIndent;
property LineBreak: Boolean read FLineBreak write SetLineBreak;
end;
function JsonEncode(const S: String): String;
function Base64Encode(AStream: TStream; const LineBreaks: Boolean = False): string;
implementation
uses
StrUtils, SysUtils;
const
Base64Codes: array[0..63] of char =
'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/';
function Base64Encode(AStream: TStream; const LineBreaks: Boolean = False): string;
const
dSize=57*100;//must be multiple of 3
var
d:array[0..dSize-1] of byte;
i,l:integer;
begin
Result:='';
l:=dSize;
while l=dSize do
begin
l:=AStream.Read(d[0],dSize);
i:=0;
while i<l do
begin
if i+1=l then
Result:=Result+
Base64Codes[ d[i ] shr 2]+
Base64Codes[((d[i ] and $3) shl 4)]+
'=='
else if i+2=l then
Result:=Result+
Base64Codes[ d[i ] shr 2]+
Base64Codes[((d[i ] and $3) shl 4) or (d[i+1] shr 4)]+
Base64Codes[((d[i+1] and $F) shl 2)]+
'='
else
Result:=Result+
Base64Codes[ d[i ] shr 2]+
Base64Codes[((d[i ] and $3) shl 4) or (d[i+1] shr 4)]+
Base64Codes[((d[i+1] and $F) shl 2) or (d[i+2] shr 6)]+
Base64Codes[ d[i+2] and $3F];
inc(i,3);
if LineBreaks then
if ((i mod 57)=0) then Result:=Result+#13#10;
end;
end;
end;
function JsonEncode(const S: String): String;
var
X: Integer;
C: Char;
procedure Repl(const Val: String);
begin
Delete(Result, X, 1);
Insert(Val, Result, X);
end;
begin
Result:= StringReplace(S, '\', '\\', [rfReplaceAll]);
Result:= StringReplace(Result, '"', '\"', [rfReplaceAll]);
Result:= StringReplace(Result, '/', '\/', [rfReplaceAll]);
for X := Length(Result) downto 1 do begin
C:= Result[X];
if C = #10 then Repl('\'+#10);
if C = #13 then Repl('\'+#13);
end;
end;
{ TJsonBuilder }
constructor TJsonBuilder.Create(AOwner: TComponent);
begin
inherited;
FLines:= TStringList.Create;
FHierarchy:= 0;
FIndent:= 4;
FLineBreak:= True;
FNeedComma:= False;
end;
destructor TJsonBuilder.Destroy;
begin
FLines.Free;
inherited;
end;
procedure TJsonBuilder.Clear;
begin
FLines.Clear;
end;
procedure TJsonBuilder.SetIndent(const Value: Integer);
begin
FIndent := Value;
if FIndent < 0 then FIndent:= 0;
end;
procedure TJsonBuilder.SetLineBreak(const Value: Boolean);
begin
FLineBreak := Value;
end;
function TJsonBuilder.GetLines: TStrings;
begin
Result:= TStrings(FLines);
end;
procedure TJsonBuilder.SetLines(const Value: TStrings);
begin
FLines.Assign(Value);
end;
procedure TJsonBuilder.UpdateStat(const H: Integer; const C: Boolean);
begin
FHierarchy:= FHierarchy + H;
FNeedComma:= C;
end;
function TJsonBuilder.GetIndent: String;
var
X: Integer;
Y: Integer;
begin
Result:= '';
for X := 1 to FHierarchy do
for Y := 1 to FIndent do
Result:= Result + ' ';
end;
procedure TJsonBuilder.WriteLine(const S: String);
var
T: String;
begin
if FNeedComma then begin
//Add comma to prior line
T:= FLines[FLines.Count-1];
T:= T + ',';
FLines[FLines.Count-1]:= T;
end;
FLines.Add(GetIndent+S);
end;
procedure TJsonBuilder.WriteStartDoc;
begin
WriteStartObject;
end;
procedure TJsonBuilder.WriteEndDoc;
begin
WriteEndObject;
end;
procedure TJsonBuilder.WriteStartObject;
begin
WriteLine('{');
UpdateStat(1, False);
end;
procedure TJsonBuilder.WriteStartObject(const Name: String);
begin
WriteStartProperty(Name);
WriteStartObject;
end;
procedure TJsonBuilder.WriteEndObject;
begin
UpdateStat(-1, False);
WriteLine('}');
UpdateStat(0, True);
end;
procedure TJsonBuilder.WriteStartArray;
begin
WriteLine('[');
UpdateStat(1, False);
end;
procedure TJsonBuilder.WriteStartArray(const Name: String);
begin
WriteStartProperty(Name);
WriteStartArray;
end;
procedure TJsonBuilder.WriteEndArray;
begin
UpdateStat(-1, False);
WriteLine(']');
UpdateStat(0, True);
end;
procedure TJsonBuilder.WriteStartProperty(const Name: String);
begin
WriteLine('"'+Name+'" :');
UpdateStat(0, False);
end;
procedure TJsonBuilder.WriteInteger(const Name: String; const Value: Integer);
begin
WriteLine('"'+Name+'" : '+IntToStr(Value));
UpdateStat(0, True);
end;
procedure TJsonBuilder.WriteStream(const Name: String; AStream: TStream);
var
S: String;
begin
S:= Base64Encode(AStream);
WriteLine('"'+Name+'" : "'+S+'"');
UpdateStat(0, True);
end;
procedure TJsonBuilder.WriteString(const Name, Value: String);
begin
WriteLine('"'+Name+'" : "'+JsonEncode(Value)+'"');
UpdateStat(0, True);
end;
procedure TJsonBuilder.WriteBoolean(const Name: String; const Value: Boolean);
begin
WriteLine('"'+Name+'" : '+IfThen(Value, 'true', 'false'));
UpdateStat(0, True);
end;
procedure TJsonBuilder.WriteBreak;
begin
WriteLine('');
UpdateStat(0, False);
end;
procedure TJsonBuilder.WriteCurrency(const Name: String; const Value: Currency);
begin
WriteLine('"'+Name+'" : '+FormatFloat('0.#', Value));
UpdateStat(0, True);
end;
procedure TJsonBuilder.WriteDouble(const Name: String; const Value: Double);
begin
WriteLine('"'+Name+'" : '+FormatFloat('0.#', Value));
UpdateStat(0, True);
end;
function TJsonBuilder.GetAsJSON: String;
begin
Result:= FLines.Text;
end;
procedure TJsonBuilder.SetAsJSON(const Value: String);
begin
FLines.Text:= Value;
end;
end.
Implementation of this object is like so...
function GetSomething: String;
var
JB: TJsonBuilder;
X: Integer;
begin
JB:= TJsonBuilder.Create(nil);
try
JB.WriteStartDoc;
JB.WriteStartObject('some_object_name');
JB.WriteString('prop_name', 'Property Value');
JB.WriteInteger('another_prop', 123);
JB.WriteStartArray('items');
for X := 1 to 5 do begin
JB.WriteStartObject;
JB.WriteString('some_item_name', 'Item '+IntToStr(X));
JB.WriteCurrency('item_price', 19.99 * X);
JB.WriteEndObject;
end;
JB.WriteEndArray;
JB.WriteEndObject;
JB.WriteEndDoc;
Result:= JB.AsJSON;
finally
JB.Free;
end;
end;
This outputs the following JSON text:
{
"some_object_name" :
{
"prop_name" : "Property Value",
"another_prop" : 123,
"items" :
[
{
"some_item_name" : "Item 1",
"item_price" : 20
},
{
"some_item_name" : "Item 2",
"item_price" : 40
},
{
"some_item_name" : "Item 3",
"item_price" : 60
},
{
"some_item_name" : "Item 4",
"item_price" : 80
},
{
"some_item_name" : "Item 5",
"item_price" : 100
}
]
}
}
Answer: It is rather verbose and complicated, and it doesn't seem, per your conventions, to really allow for valid structures like [{"name":"bob"},{"name":"ted"}] (although it may only appear that way to me because I just skimmed through it), and property names do need to be proper JSON strings... but if it works for you, you'll hear no arguments from me!
Actually I gave it more than just a skim.
You're doing something weird with your control characters. You can't just put a slash in front of them to escape. The character represented by "\r" -- I think in your code represented by #10, but I always forget which of \r and \n is char(10) and char(13) -- literally needs to have \r in the JSON encoding, most certainly not the actual control character! That breaks strings!!
(Please see the http://json.org side panel).
char
any-Unicode-character-
except-"-or-\-or-
control-character
\"
\\
\/
\b
\f
\n
\r
\t
\u four-hex-digits
To encode for JSON, you really don't have to do much, but what you do you ought to make the best effort to do correctly!!
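The same point, illustrated with Python's json module (any correct encoder behaves this way): control characters come out as two-character escapes, never as raw control characters.

```python
# A correct JSON encoder turns a raw newline into the two characters
# backslash + n; the encoded string contains no real control characters,
# and decoding restores the original text exactly.
import json

original = "line1\nline2\rdone"
encoded = json.dumps(original)   # file on disk reads: "line1\nline2\rdone"
decoded = json.loads(encoded)
```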
Overall, I find your code needlessly verbose to be used regularly. It might work fine for automatically generated code, though, once you fix the string encoding and any other possible encoding issues. | {
"domain": "codereview.stackexchange",
"id": 4094,
"tags": "json, delphi"
} |
Generalizing a Gaussian distribution | Question: Perhaps this is a nonsensical question, but hear me out.
I have a random variable $x$ whose moments I can calculate. The first moment $<x>$ is zero and the second $<x^2> = X^2$ is something nonzero. I decided to use a Gaussian distribution function to characterize this i.e
\begin{equation}
P(x) = e^{\frac{-x^2}{2X^2}}
\end{equation}
I was questioned on this choice and now, I want to know how to do this generally. I can calculate a few more moments (they get harder though and I can't calculate them all). I need to know if a) Choosing a Gaussian is justifiable and b) if yes, how do I justify and if not, what is the correct distribution? Are they always, for instance, corrections to the Gaussian or something like that?
Answer: There is a sense in which specifying an unknown probability distribution with
known mean and variance by a gaussian with the corresponding mean
and variance is the correct choice, and that is when those are the only things you are willing to say about the random variable. The gaussian distribution is the maximum entropy distribution with a given mean and variance.
It is the best probability distribution to use when you literally want to assume nothing else about your variable.
Gaussian Case
Let's say the mean and the variance are literally the only things you know about your variable. Then, we might be interested in which probability distribution best describes the variable. Surely we would want to make as general a choice as possible. This can be formulated. If we want to maximize our ignorance of the probability distribution subject to some constraints, we want to maximize the differential entropy of our distribution subject to those constraints.
In your case, this means we want to maximize:
$$ S = - \int dx \, p(x) \log p(x) $$
subject to the constraints
$$ \int dx \, p(x) = 1 \quad \int dx \, x \, p(x) = \mu \quad \int dx\, (x-\mu)^2 p(x) = \sigma^2 $$
The first constraint ensures that we have a proper probability density, and the second and third constraints are your observations about the mean and the variance. We can solve this with Lagrange multipliers:
$$ -\int dx \, p(x) \log p(x) + \lambda_0 \left[ \int dx\, p(x) - 1 \right] + \lambda_1 \left[ \int dx \, x \, p(x) - \mu \right] + \lambda_2 \left[ \int dx \, (x- \mu)^2 p(x) - \sigma^2 \right] $$
Where, taking the variation with respect to $p(x)$, we obtain
$$ \int dx\, \left[ -\log p(x) - 1 + \lambda_0 + \lambda_1 x + \lambda_2 (x- \mu)^2 \right] \delta p(x) = 0 $$
which, since $\delta p(x)$ is arbitrary, is only satisfied if the bracketed factor vanishes everywhere, suggesting:
$$ p(x) = \exp \left[ -1 + \lambda_0 + \lambda_1 x + \lambda_2 (x - \mu)^2 \right] $$
The only remaining problem is to determine the $\lambda$s from the constraints, from which we find
$$ p(x) = \frac{1}{\sqrt{ 2 \pi \sigma^2 } } \exp \left( - \frac{(x-\mu)^2 }{ 2\sigma^2 } \right) $$
The standard gaussian.
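As a sanity check on this result, compare the closed-form differential entropies of zero-mean distributions sharing the same variance; the gaussian's is the largest (the Laplace and uniform entropy formulas below are standard results, not derived in this answer):

```python
# For a fixed variance sigma^2, the gaussian has higher differential
# entropy than the Laplace and uniform distributions with that variance.
import math

sigma = 2.0
h_gauss = 0.5 * math.log(2 * math.pi * math.e * sigma**2)
# Laplace with variance sigma^2 has scale b = sigma/sqrt(2),
# entropy 1 + ln(2b):
h_laplace = 1 + math.log(2 * sigma / math.sqrt(2))
# Uniform with variance sigma^2 has width w = sigma*sqrt(12),
# entropy ln(w):
h_uniform = math.log(sigma * math.sqrt(12))
```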
Generalization
What is really neat is that this generalizes. If you happen to have a set of functions $f_i$ for which you know the expectations under your probability distribution:
$$ \langle f_i (x) \rangle = \alpha_i $$
The probability distribution with the largest entropy consistent with those observations is:
$$ p(x) \propto \exp \left[\sum_i \lambda_i f_i(x) \right] $$
Where the proportionality constant and all of the $\lambda$s must be fixed to make all of the observed observations true. If you wanted a better distribution for your variable, you could go on to measure other quantities and reform your estimate for the probability distribution in this way.
Boltzmann Distribution
What is really interesting is that the probability distribution with maximum entropy on $x \ge 0$ that has as its only constraint a known expectation $\mu$ is the exponential distribution:
$$ p(x) \propto \exp \left( -\frac{x}{\mu} \right) $$
which you might recognize as the Boltzmann distribution.
This is not an accident. ET Jaynes would use this fact to build his principle of maximum entropy, and more generally formulate statistical mechanics in terms of information theory. For an nice introduction, consider his paper Information Theory and Statistical Mechanics [doi] [pdf] | {
"domain": "physics.stackexchange",
"id": 20558,
"tags": "probability, dirac-delta-distributions, randomness"
} |
How much activation energy is required to combust propane? | Question: According to WolframAlpha, the change in enthalpy in the combustion of propane is -2220 kJ/mol. How do I calculate the activation energy required to start the reaction in the first place? Do you use the same principles as when calculating delta H, or am I missing something mind-bogglingly obvious?
Answer: http://www.engineeringtoolbox.com/fuels-ignition-temperatures-d_171.html
The autoignition temperature is an external variable. It depends upon total pressure and oxygen partial pressure. For paper it is famously 232.778 C (significant figures aside). | {
"domain": "chemistry.stackexchange",
"id": 1084,
"tags": "organic-chemistry, energy, enthalpy, combustion"
} |
How to determine the concentration of a base through indirect titration? | Question:
$5.267~\mathrm{g}$ of $\ce{Na2CO3}$ was dissolved in $250.00\ \mathrm{mL}$ of water. $10.00\ \mathrm{mL}$ of this solution was titrated with $\ce{HCl}$. The end-point occurred at $21.30\ \mathrm{mL}$. This acid was titrated with $25.00\ \mathrm{mL}$ of $\ce{Ba(OH)2}$. The end-point occurred at $27.10\ \mathrm{mL}$. Calculate the concentration of the $\ce{Ba(OH)2}$.
[Answer: $0.1013~\mathrm{M}$]
Using the mass they gave me ($5.267~\mathrm{g}$) I worked out the moles of $\ce{Na2CO3}$ which was $0.0497\ \mathrm{mol}$.
I then worked out the concentration of $\ce{Na2CO3}$ using the moles I just worked out and the volume they gave me ($0.25~\mathrm{L}$) and I got $0.1988~\mathrm{M}$.
The question says that only $0.01~\mathrm{L}$ was used, so I worked out the number of moles this would be which was $0.001988\ \mathrm{mol}$.
Using stoichiometry, I multiplied this value by two to get the moles of $\ce{HCl}$ that would have reacted with this, which was $0.003976\ \mathrm{mol}$.
It says that the end-point occurred at $21.30~\mathrm{mL}$, which if you subtract the volume of $\ce{HCl}$ which was already there, I worked out that $11.3~\mathrm{mL}$ or $0.0113~\mathrm{L}$ of $\ce{HCl}$ must have been used.
From here, I worked out the concentration of $\ce{HCl}$ (using $0.0113~\mathrm{L}$ and $0.003976\ \mathrm{mol}$) and got $0.35186~\mathrm{M}$.
It says that for the next reaction, $25~\mathrm{mL}$ of $\ce{Ba(OH)2}$ was used and the end-point occurred at $27.10~\mathrm{mL}$. From this information, I worked out that $0.0021~\mathrm{L}$ of $\ce{HCl}$ must have been used in this reaction.
By knowing the volume of $\ce{HCl}$ used, I worked out the amount (moles) of $\ce{HCl}$ used in the reaction, using the previously found concentration. I got $0.0007389\ \mathrm{mol}$.
Using stoichiometry, I worked out the amount of $\ce{Ba(OH)2}$ which must have reacted with this many moles of $\ce{HCl}$. I divided the $\ce{HCl}$ moles by $2$ and got $0.00036945\ \mathrm{mol}$.
By using the volume of $\ce{Ba(OH)2}$ used which they give me ($0.025~\mathrm{L}$) and the moles I just found ($0.00036945\ \mathrm{mol}$) I finally worked out the concentration of $\ce{Ba(OH)2}$, which was $0.0148~\mathrm{M}$.
My answer, $0.0148~\mathrm{M}$ is different to the answer the question gives ($0.1013~\mathrm{M}$), and I'm not sure why.
Answer:
It says that the end-point occurred at $21.30\ \mathrm{mL}$, which if you subtract the volume of $\ce{HCl}$ which was already there, I worked out that $11.3\ \mathrm{mL}$ or $0.0113\ \mathrm{L}$ of $\ce{HCl}$ must have been used.
I think, this is where you went wrong. If a question says something along the lines of
The end-point occurs at $\dots~\mathrm{ml}$
then you should generally assume that that volume of whatever you are titrating with was used. In this case, that would mean a full $21.3~\mathrm{ml}\ \ce{HCl}$. The same thing of course happened at your point 7, where you write:
It says that for the next reaction, $25\ \mathrm{mL}$ of $\ce{Ba(OH)2}$ was used and the end-point occurred at $27.10\ \mathrm{mL}$. From this information, I worked out that $0.0021\ \mathrm{L}$ of $\ce{HCl}$ must have been used in this reaction.
Working through this bit by bit, and starting at your point 5:
$$c(\ce{HCl}) = \frac{n(\ce{HCl})}{V(\ce{HCl})} = \frac{0.003976~\mathrm{mol}}{0.0213~\mathrm{l}} = 0.1867~\mathrm{M}$$
For the titration of $\ce{Ba(OH)2}$:
$$n(\ce{HCl}) = c(\ce{HCl}) \times V(\ce{HCl}) = 0.1867~\mathrm{M} \times 27.10~\mathrm{ml} = 5.059~\mathrm{mmol} = 0.005059~\mathrm{mol}\\
n(\ce{Ba(OH)2}) = \frac{n(\ce{HCl})}{2} = \frac{5.059~\mathrm{mmol}}{2} = 2.529~\mathrm{mmol} = 0.002529~\mathrm{mol}\\
c(\ce{Ba(OH)2}) = \frac{n(\ce{Ba(OH)2})}{V(\ce{Ba(OH)2})} = \frac{2.529~\mathrm{mmol}}{25~\mathrm{ml}} = \frac{0.002529~\mathrm{mol}}{0.025~\mathrm{l}} = 0.1012~\mathrm{M}$$
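For what it's worth, the whole chain of steps above can be checked numerically; a small sketch (the molar mass of $\ce{Na2CO3}$, $105.99~\mathrm{g\,mol^{-1}}$, is my own assumption, not stated in the question):

```python
# Back-titration check for c(Ba(OH)2)
m_Na2CO3, M_Na2CO3 = 5.267, 105.99   # g, g/mol (molar mass assumed)
n_stock = m_Na2CO3 / M_Na2CO3        # mol Na2CO3 dissolved
c_stock = n_stock / 0.250            # M, in 250.00 mL
n_aliquot = c_stock * 0.010          # mol in the 10.00 mL aliquot
n_HCl = 2 * n_aliquot                # Na2CO3 + 2 HCl -> 2 NaCl + H2O + CO2
c_HCl = n_HCl / 0.02130              # M, full 21.30 mL at the end-point
n_BaOH2 = c_HCl * 0.02710 / 2        # Ba(OH)2 + 2 HCl -> BaCl2 + 2 H2O
c_BaOH2 = n_BaOH2 / 0.02500          # M, in 25.00 mL of Ba(OH)2
print(round(c_BaOH2, 4))             # -> 0.1012
```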
The difference still there can be attributed to a rounding error somewhere along the way. | {
"domain": "chemistry.stackexchange",
"id": 5765,
"tags": "acid-base, titration"
} |
Naming of substituted amines and amides | Question: While naming substituted amines and amides, should we consider the "N" while deciding the order of substituents or simply place the substituents attached to nitrogen in the beginning?
E.g. Should the compound be named:
3-chloro-N-ethyl-4-methyl-N-propylhexanamide
or
N-ethyl-N-propyl-3-chloro-4-methylhexanamide.
Answer: Substituents are listed alphabetically by the name of the substituent. The N in front of the substituent is a locant, and thus is not used when alphabetizing:
3-chloro-N-ethyl-4-methyl-N-propylhexanamide is correct because the substituents are in correct alphabetic order:
chloro, ethyl, methyl, propyl | {
"domain": "chemistry.stackexchange",
"id": 3754,
"tags": "organic-chemistry, nomenclature, amines"
} |
Complex damped exponential signal with repetitive poles and the significance of falling factorial | Question: I was modelling a complex damped exponential signal (discrete) with unique poles as below:
\begin{equation}
x = \sum_{k=1}^{K} (a_k e^{(j\phi_k)})(e^{\{(j2\pi f_k - \alpha_k)\Delta t\}t}), \quad t = 0,1,...,N-1
\end{equation}
where $K \in \mathbb{N}$, $a_k$ amplitudes, $\phi_k$ phases, $\alpha_k$ damping factors, $f_k$ - frequencies in Hz and $\Delta t$ - time interval. The above equation is simplified as:
\begin{equation}
x = \sum_{k=1}^{K} (c_k)(z_k^t), \quad t = 0,1,...,N-1
\end{equation}
This signal satisfies below homogeneous linear recursion form:
\begin{equation}
x(t) + p_1x(t-1) +...+ p_Kx(t-K) = 0
\end{equation}
where $p_1,...,p_K$ are coefficients of polynomial
\begin{equation}
P(z) = \prod_{k=1}^K (z-z_k) = \sum_{k=0}^K p_{K-k}z^k
\end{equation}
where $p_0=1$ and $p_K \neq 0$
Now, I want to convert the model for repetitive poles case (i.e. my prediction polynomial $P(z)$ can have multiple roots).
\begin{equation}
P(z) = \prod_{k=1}^K (z-z_k)^{M_k} = \sum_{k=0}^K p_{K-k}z^k
\end{equation}
where $M_k \in \mathbb{N}$ and can be greater than 1.
I came across this paper High resolution spectral analysis of mixtures of complex exponentials modulated by polynomials, that talks about falling factorial ($F_m$) in the modified formulation of signal for multiple poles case where the original signal $x(t)$ is modified as below:
\begin{equation}
x = \sum_{k=1}^{K} \sum_{m=0}^{M_k-1} (c_k)F_m(t)(z_k^{t-m}), \quad t = 0,1,...,N-1
\end{equation}
where the falling factorial $F_m$ is defined as:
\begin{equation}
F_m(t) =
\begin{cases}
0, \quad & m < 0,\\
1, \quad & m = 0,\\
\frac{1}{m!} \prod_{j=0}^{m-1} (t-j), \quad & m > 0
\end{cases}
\end{equation}
Could someone please explain the significance of $F_m$, and how does a signal with multiple poles differ from the same signal with unique poles? Only in terms of amplitude?
Answer: A complex-valued analytic signal:
$$\begin{align}
x_\mathrm{a}(t) &= \sum\limits_{k=1}^{K} g_k \ e^{-\alpha_k t} \ e^{j(2\pi f_k t + \phi_k)} \ u(t) \\
&= \sum\limits_{k=1}^{K} \underbrace{g_k e^{j\phi_k}}_{c_k} \ e^{(-\alpha_k+j2\pi f_k)t} \ u(t) \\
&= \sum\limits_{k=1}^{K} c_k \ e^{(-\alpha_k+j2\pi f_k)t} \ u(t) \\
&= x(t) + j\hat{x}(t) \\
\end{align}$$
where $\quad K\in\mathbb{Z}>0, \quad g_k,\alpha_k,f_k\in\mathbb{R}>0, \quad \phi_k\in\mathbb{R} \quad$ and
$$ \hat{x}(t) = \mathscr{H}\Big\{ x(t) \Big\} $$
is the Hilbert Transform.
$u(t)$ is the unit step function:
$$ u(t) \triangleq \begin{cases}
0 \quad & t < 0, \\
1 \quad & t \ge 0, \\
\end{cases}$$
It is not too hard to see that
$$ x(t) = \sum\limits_{k=1}^{K} g_k \ e^{-\alpha_k t} \ \cos(2\pi f_k t + \phi_k) \ u(t) $$
and
$$ \hat{x}(t) = \sum\limits_{k=1}^{K} g_k \ e^{-\alpha_k t} \ \sin(2\pi f_k t + \phi_k) \ u(t) $$
If this complex analytic signal is uniformly sampled at sample rate $f_\mathrm{s}=\frac{1}{T} \quad T>0$, then
$$\begin{align}
x_\mathrm{a}[n] &\triangleq x_\mathrm{a}(t) \Bigg|_{t=nT} \\ \\
&\triangleq x_\mathrm{a}(nT) \\
&= \sum\limits_{k=1}^{K} g_k \ e^{-\alpha_k nT} \ e^{j(2\pi f_k nT + \phi_k)} \ u(nT) \\
&= \sum\limits_{k=1}^{K} \underbrace{g_k e^{j\phi_k}}_{c_k} \ e^{(-\alpha_k+j2\pi f_k)nT} \ u(nT) \\
&= \sum\limits_{k=1}^{K} c_k \ (\underbrace{e^{(-\alpha_k+j2\pi f_k)T}}_{p_k})^n \ u(nT) \\
&= \sum\limits_{k=1}^{K} c_k \ (p_k)^n \ u[n] \\
\end{align}$$
where $n\in\mathbb{Z}$ is the sample index and
$$\begin{align}
c_k &\triangleq g_k e^{j\phi_k} \\
\\
p_k &\triangleq e^{-\alpha_k T}\ e^{j2\pi f_k T} \\
\end{align}$$
So applying the Z Transform
$$\begin{align}
X_\mathrm{a}(z) &= \mathcal{Z}\Big\{ x_\mathrm{a}[n] \Big\} \\
&= \mathcal{Z}\left\{ \sum\limits_{k=1}^{K} c_k \ p_k^n \ u[n] \right\} \\
&= \sum\limits_{k=1}^{K} c_k \ \mathcal{Z}\Big\{ p_k^n \ u[n] \Big\} \\
&= \sum\limits_{k=1}^{K} \frac{c_k}{1-p_k z^{-1}} \\
&= \sum\limits_{k=1}^{K} \frac{c_k \, z}{z-p_k} \\
\end{align}$$
So this signal looks like the impulse response of a complex-coefficient LTI system having $p_k$ as poles of the transfer function.
Now to confirm that this signal satisfies the below homogeneous linear recursion form:
$$ x_\mathrm{a}[n] + a_1 x_\mathrm{a}[n-1] +...+ a_K x_\mathrm{a}[n-K] = 0 $$
where $a_1,...,a_K$ are coefficients of polynomial
$$ P(z) = \prod\limits_{k=1}^K (z-p_k) = \sum\limits_{m=0}^K a_{K-m}\ z^m $$
where $\ a_0=1 \ $ and $\ a_K \neq 0 \ $.
The Z Transform is:
$$ X_\mathrm{a}(z) + a_1 X_\mathrm{a}(z) z^{-1} +...+ a_K X_\mathrm{a}(z) z^{-K} = 0 $$
which, after factoring out $X_\mathrm{a}(z)$ is
$$ \sum\limits_{m=0}^K a_m z^{-m} = 0$$
But this doesn't say anything about $X_\mathrm{a}(z)$. So let's try this:
$$\begin{align}
0 &\stackrel{?}{=} \sum\limits_{m=0}^K a_m x_\mathrm{a}[n-m] \\
&= \sum\limits_{m=0}^K a_m \sum\limits_{k=1}^{K} c_k \ (p_k)^{n-m} \ u[n-m] \\
\end{align}$$
Now, to make our lives simpler, let's say for the moment that we're looking at times where the sample index $n \ge K$ so that the unit step function $u[n-m]=1$ goes away. So far, the right-hand limit $N-1$ suggested in the OP hasn't appeared to us yet.
$$\begin{align}
0 &\stackrel{?}{=} \sum\limits_{m=0}^K a_m x_\mathrm{a}[n-m] \\
&= \sum\limits_{m=0}^K a_m \sum\limits_{k=1}^{K} c_k \ (p_k)^{n-m} \\
&= \sum\limits_{m=0}^K \sum\limits_{k=1}^{K} a_m c_k \ (p_k)^{n-m} \\
&= \sum\limits_{k=1}^K \sum\limits_{m=0}^{K} a_m c_k \ (p_k)^{n-m} \\
&= \sum\limits_{k=1}^K \sum\limits_{m=0}^{K} a_m c_k \ p_k^{n} \ p_k^{-m} \\
&= \sum\limits_{k=1}^K c_k \ p_k^{n} \sum\limits_{m=0}^{K} a_m \ p_k^{-m} \\
\end{align}$$
So, if it can be made true that
$$ \sum\limits_{m=0}^{K} a_m \ p_k^{-m} = 0 $$
for every $\ p_k, \quad 1 \le k \le K \quad $ then the "homogeneous linear recursion form" above will be satisfied. That means that $p_k$ are roots to the polynomial
$$ P(z) = \prod\limits_{k=1}^K (z-p_k) = \sum\limits_{m=0}^K a_{K-m}z^m $$
Okay, so this means that for the All-pole model (which is a small misnomer because there can be zeros in the $z$-plane, but all zeros are located at the origin $z=0$), then
$$\sum\limits_{m=0}^K a_m x_\mathrm{a}[n-m] = 0$$
which means that this recursion formula works:
$$\begin{align}
x_\mathrm{a}[n] &= -\sum\limits_{m=1}^K a_m x_\mathrm{a}[n-m] \\
&= - \big( a_1 x_\mathrm{a}[n-1] + a_2 x_\mathrm{a}[n-2] + ... + a_K x_\mathrm{a}[n-K] \big) \\
\end{align} $$
So the current complex sample of $x_\mathrm{a}[n]$ can be defined recursively, solely in terms of the $K$ most recent samples in the past, $x_\mathrm{a}[n-m]$ for $1 \le m \le K$, with fixed feedback coefficients: $-a_m$.
Now to get this complex function to appear exactly as the originally defined
$$x_\mathrm{a}[n] = \sum\limits_{k=1}^{K} c_k \ p_k^n \ u[n]$$
that will require setting the initial states of $x_\mathrm{a}[n]$ for $-K \le n \le -1$ to specific values. I'm pretty sure that those initial states must fit the given exponential form for $-K \le n \le -1$ even though the unit step function explicitly would set those samples to zero. So the initial states must be:
$$ x_\mathrm{a}[n] = \sum\limits_{k=1}^{K} c_k \ p_k^n \qquad \text{for} \ -K \le n \le -1 $$
So far, whether there are multiple poles or not, does not seem to be an issue. And the right-hand limit of $n \le N-1$ also does not seem to be an issue as long as it's larger than $K$.
Now consider a single set of coincident multiple poles. The Binomial Theorem provides:
$$\begin{align}
(z-p)^K &= \sum\limits_{k=0}^{K} \frac{K!}{(K-k)! \, k!} z^k (-p)^{K-k} \\
\\
\prod\limits_{m=1}^K (z-p) &= \sum\limits_{k=0}^K a_{K-k} z^k \\
\end{align}$$
That says that the polynomial coefficients are:
$$ a_{K-k} = \frac{K!}{(K-k)! \, k!} (-p)^{K-k}$$
or
$$ a_k = \frac{K!}{(K-k)! \, k!} (-p)^k $$
and the recursion is
$$\begin{align}
x_\mathrm{a}[n] &= -\sum\limits_{k=1}^K a_k \ x_\mathrm{a}[n-k] \\
&= -\sum\limits_{k=1}^K \frac{K!}{(K-k)! \, k!} (-p)^k \ x_\mathrm{a}[n-k] \\
\end{align} $$
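As a sanity check on those coefficients, here is a small numerical sketch (my own illustration, not part of the answer): for a double pole ($K=2$) the general solution is $x[n] = (c_0 + c_1 n)\,p^n$, and it should satisfy the recursion with $a_k = \binom{K}{k}(-p)^k$:

```python
import math

# Double pole (K = 2) at p; x[n] = (c0 + c1*n) * p**n is the general solution
p = 0.9 * complex(math.cos(0.3), math.sin(0.3))   # damped complex pole, |p| = 0.9
c0, c1 = 1.0 + 0.5j, -0.3 + 0.2j                  # arbitrary complex amplitudes
K = 2
a = [math.comb(K, k) * (-p) ** k for k in range(K + 1)]   # a_0 = 1, a_1 = -2p, a_2 = p^2

x = [(c0 + c1 * n) * p ** n for n in range(20)]

# homogeneous recursion: sum_k a_k x[n-k] should vanish for n >= K
residual = max(abs(sum(a[k] * x[n - k] for k in range(K + 1)))
               for n in range(K, 20))
print(residual < 1e-12)   # -> True
```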
Now I am not sure what that tells us, but your feedback coefficients $a_k$ have the form as shown above.
Okay, I don't want to do different multiple poles here, just a single set of identical coincident poles. With multiple sets of poles you can split them into parallel sections, but that's partial fraction expansion and that's hard.
With a set of identical multiple poles, the recursion equation comes out as above with the factorials in the denominator. | {
"domain": "dsp.stackexchange",
"id": 10341,
"tags": "discrete-signals, poles-zeros, complex, damping"
} |
How to add two hydroxy groups to 1,5-diphenylpentan-3-ol with as high as possible yield? | Question: From 1,5-diphenylpentan-3-ol (above), how to add the two hydroxy groups in red (below)?
Is there any proven successful synthesis route to 1,5-diphenylpentane-1,3,5-triol with higher yield (40 % or above)?
Answer: A few possibilities for direct hydroxylation:
$\ce{SeO2}$ immediately comes to mind as one potential option; it is well known to selectively oxidize allylic positions to yield alcohols and carbonyl compounds, and there appear to be references suggesting it reacts at the benzylic position as well.
In Advanced Organic Chemistry, March notes that cerium(IV) triflate "converts benzylic arenes to benzylic alcohols, although the major product is the ketone when > 15% of water is present." Ref: Laali, K.K.; Herbert, M.; Cushnyr, B.; Bhatt, A.; Terrano, D. J. Chem. Soc., Perkin Trans. 1 2001, 578.
Intriguingly, March also reports a variety of oxidations accomplished enzymatically. I have no experience and very limited knowledge of these reactions, so I'll just give the relevant primary literature references:
Adam, W.; Lukacs, Z.; Saha-Moller, C.R.; Weckerle, B.; Schreier, P. Eur. J. Org. Chem. 2000, 2923.
Hamada, H.; Tanaka, T.; Furuya, T.; Takahata, H.; Nemoto, H. Tetrahedron Lett. 2001, 42, 909.
Adam, W.; Lukacs, Z.; Harmsen, D.; Saha-Moller, C.R.; Schreier, P. J. Org. Chem. 2000, 65, 878.
If those approaches aren't appealing, there are various less direct routes. Radical bromination (with, e.g., NBS under photochemical conditions), followed either by direct substitution with hydroxide, or alternatively elimination followed by subsequent acid-catalyzed hydration, would likely work well. If you need stereoselectivity, there are are numerous epoxidation reactions (that is, after bromination and subsequent elimination) that would likely be appropriate (Sharpless, Jacobsen, Shi, and certainly numerous others). Conveniently, your molecule would already have the allylic alcohol necessary for Sharpless asymmetric epoxidation, should you want to go that route. In any case, it should be possible to open the epoxides at the less hindered carbon with a suitable hydride donor (e.g., LAH) to yield the alcohols after work-up. | {
"domain": "chemistry.stackexchange",
"id": 1618,
"tags": "organic-chemistry, synthesis"
} |
Is every edge of a graph included in some spanning tree? | Question: Let's say we have a graph $G$. We pick one edge from it (any edge). Will there always be such a spanning tree that contains that very edge?
I think the answer is yes, because no matter what we do we can always create such a spanning tree so that the very edge we picked is included. Of course, a more formal proof would be needed?
Answer: Note that the initial graph $G$ needs to be connected or it has no spanning trees at all. (Though the same argument applied to each component would show that any graph has a spanning forest containing any chosen edge.)
Let $G=(V,E)$ be a connected graph, let $T$ be any spanning tree and let $e$ be any edge in $E$. We claim that there is a spanning tree that includes $e$. If $e\in T$, we are done. Otherwise, $T+e$ contains a cycle. That cycle necessarily contains at least one edge $e'\neq e$ (actually, it contains at least two). $T+e-e'$ is a spanning tree that contains $e$.
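The argument also gives a direct construction: seed a Kruskal-style forest with the required edge, then grow it greedily. A sketch (my own illustration, not from the answer):

```python
def spanning_tree_with_edge(n, edges, e):
    """Return a spanning tree of an n-vertex connected graph that contains edge e."""
    parent = list(range(n))

    def find(x):                      # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(u, v):                  # True iff u and v were in different components
        ru, rv = find(u), find(v)
        if ru == rv:
            return False
        parent[ru] = rv
        return True

    tree = [e]                        # force the chosen edge into the tree first
    union(*e)
    for u, v in edges:
        if (u, v) != e and union(u, v):
            tree.append((u, v))
    return tree                       # has n-1 edges iff the graph was connected
```

For example, `spanning_tree_with_edge(4, [(0, 1), (1, 2), (2, 3), (0, 2)], (0, 2))` returns a 3-edge tree containing `(0, 2)`.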
You should prove to yourself that $T+e-e'$ really is a spanning tree. | {
"domain": "cs.stackexchange",
"id": 14038,
"tags": "graphs, spanning-trees"
} |
Problem with ar_pose, error: Rectified topic '/camera/image_rect' requested but camera publishing '/camera/camera_info' is uncalibrated** | Question:
Hi,
I successfully calibrated my camera (rosrun uvc_cam_node works well) but when I launch ar_pose I get error that my camera isn't calibrated:
[ERROR] [1416872150.188997696]: Rectified topic '/camera/image_rect' requested but camera publishing '/camera/camera_info' is uncalibrated
Please help me and dispel my doubts:
Compare two rosnode lists:
Are they the same, i.e. is the /uvc_camera node the same as /camera/uvc_camera_node?
Is uvc_camera_node the node which transmits video from the camera to ROS,
while /camera/uvc_camera_node is the node which ar_pose uses to get video to recognize the marker/tag?
Compare topic lists:
/camera_info, /image_raw and /camera/camera_info, /camera/image_raw
should I remap something in launch?
Why does the launch file set parameters and then use a 'find' command which loads parameters from a file? Is there any sense in that?
DETAILS:
when I ran:
uvc_camera I get:
przemek@przem:~/tum_simulator_ws/src/ar_tools/ar_pose/launch$ rosrun uvc_camera uvc_camera_node
[ INFO] [1416870421.617654535]: using default calibration URL
[ INFO] [1416870421.617885710]: camera calibration URL: file:///home/przemek/.ros/camera_info/camera.yaml
opening /dev/video0
pixfmt 0 = 'YUYV' desc = 'YUV 4:2:2 (YUYV)'
discrete: 640x480: 1/30 1/15
discrete: 352x288: 1/30 1/15
discrete: 320x240: 1/30 1/15
discrete: 176x144: 1/30 1/15
discrete: 160x120: 1/30 1/15
discrete: 1280x1024: 2/15 1/5
int (Brightness, 0, id = 980900): -10 to 10 (1)
int (Contrast, 0, id = 980901): 0 to 20 (1)
int (Saturation, 0, id = 980902): 0 to 10 (1)
int (Gamma, 0, id = 980910): 100 to 200 (1)
int (Gain, 0, id = 980913): 32 to 48 (1)
menu (Power Line Frequency, 0, id = 980918): 0 to 2 (1)
0: Disabled
1: 50 Hz
2: 60 Hz
int (Sharpness, 0, id = 98091b): 0 to 10 (1)
select timeout in grab
select timeout in grab
and everything seems to be ok...
Rosnode and rostopic list:
przemek@przem:/opt/ros/indigo/share/uvc_camera$ rosnode list
/rosout
/uvc_camera
przemek@przem:/opt/ros/indigo/share/uvc_camera$ rostopic list
/camera_info
/image_raw
....
When I run ar_single_pose I get error:
process[camera/uvc_camera_node-4]: started with pid [22299]
process[ar_pose-5]: started with pid [22314]
[ INFO] [1416872056.747424013]: camera calibration URL: file:///opt/ros/indigo/share/uvc_camera/camera_calibration.yaml
[ INFO] [1416872057.006555638]: Starting ArSinglePublisher
[ INFO] [1416872057.008427734]: Publish transforms: 1
[ INFO] [1416872057.013649211]: Publish visual markers: 1
[ INFO] [1416872057.015248159]: Threshold: 100
[ INFO] [1416872057.016890548]: Marker Width: 152.4
[ INFO] [1416872057.019849023]: Reverse Transform: 0
[ INFO] [1416872057.023319574]: Marker frame: ar_marker
[ INFO] [1416872057.027037854]: Use history: 1
[ INFO] [1416872057.046105711]: Marker Center: (0.0,0.0)
[ INFO] [1416872057.046718919]: Subscribing to info topic
opening /dev/video0
pixfmt 0 = 'YUYV' desc = 'YUV 4:2:2 (YUYV)'
discrete: 640x480: 1/30 1/15
discrete: 352x288: 1/30 1/15
discrete: 320x240: 1/30 1/15
discrete: 176x144: 1/30 1/15
discrete: 160x120: 1/30 1/15
discrete: 1280x1024: 2/15 1/5
int (Brightness, 0, id = 980900): -10 to 10 (1)
int (Contrast, 0, id = 980901): 0 to 20 (1)
int (Saturation, 0, id = 980902): 0 to 10 (1)
int (Gamma, 0, id = 980910): 100 to 200 (1)
int (Gain, 0, id = 980913): 32 to 48 (1)
menu (Power Line Frequency, 0, id = 980918): 0 to 2 (1)
0: Disabled
1: 50 Hz
2: 60 Hz
int (Sharpness, 0, id = 98091b): 0 to 10 (1)
select timeout in grab
[ INFO] [1416872059.116906053]: *** Camera Parameter ***
--------------------------------------
SIZE = 320, 240
Distortion factor = 0.000000 0.000000 0.000000 1.000000
0.00000 0.00000 0.00000 0.00000
0.00000 0.00000 0.00000 0.00000
0.00000 0.00000 0.00000 0.00000
--------------------------------------
[ INFO] [1416872059.117126682]: Loading pattern
[ INFO] [1416872059.118051380]: Subscribing to image topic
[ERROR] [1416872060.024344493]: Rectified topic '/camera/image_rect' requested but camera publishing '/camera/camera_info' is uncalibrated
[ERROR] [1416872090.108035723]: Rectified topic '/camera/image_rect' requested but camera publishing '/camera/camera_info' is uncalibrated
[ERROR] [1416872120.186654733]: Rectified topic '/camera/image_rect' requested but camera publishing '/camera/camera_info' is uncalibrated
[ERROR] [1416872150.188997696]: Rectified topic '/camera/image_rect' requested but camera publishing '/camera/camera_info' is uncalibrated
and now: rosnode and rostopic list:
przemek@przem:/opt/ros/indigo/share/uvc_camera$ rosnode list
/ar_pose
/camera/image_proc
/camera/uvc_camera_node
przemek@przem:/opt/ros/indigo/share/uvc_camera$ rostopic list
/camera/camera_info
/camera/image_raw
...
Originally posted by green96 on ROS Answers with karma: 115 on 2014-11-24
Post score: 0
Answer:
Have you really calibrated your camera using the camera calibrator?
http://wiki.ros.org/camera_calibration
Some camera drivers may publish an empty uncalibrated camera_info along with the image, which seems to be the case as your calib matrix contains all zeros and the distortion factor is the default 0.000000 0.000000 0.000000 1.000000
Originally posted by Wolf with karma: 7555 on 2014-11-25
This answer was ACCEPTED on the original site
Post score: 0
Original comments
Comment by green96 on 2014-11-25:
EDIT / I found solution!
Yes, but I was doing it the wrong way. Because the commit button didn't work, I manually copied the file from the calibration tar to the appropriate location.
How to repair commit button in calibration: http://tinyurl.com/nksyrdm
I calibrated it again and commited setting to driver. | {
"domain": "robotics.stackexchange",
"id": 20149,
"tags": "ros, ar-pose, uvc-cam"
} |
Can the momentum eigenstates be non-orthogonal? | Question: Consider the Hilbert space of a particle, whose position domain is confined to $q\in[0,1]$ (e.g. a particle in a box with unit width). Using
$$
1=\int_0 ^1 dq |q\rangle\langle q|
$$
and the position representation of the discrete momentum eigenstates
$$
\langle q | p_n\rangle=e^{i\pi n q},
$$
inserting the above identity operator and integrating leads to the scalar product
$$
\langle p_n|p_m\rangle=\frac{(-1)^{m-n}-1}{i\pi(m-n)}\neq\delta_{nm}.
$$
This would mean that the eigenbasis of a physical observable is not orthogonal.
Is there an error in my derivation, and if not, how can this be understood physically?
Answer:
This would mean that the eigenbasis of a physical observable is not orthogonal. Is there an error in my derivation, and if not, how can this be understood physically?
The set of eigenfunctions of $\hat p$ in the sense
$$
\hat{p}\phi = p\phi
$$
is sure to be orthogonal if they belong to a subset of $L^2((0,1))$ on which the operator $\hat{p}$ is symmetric, meaning
$$
\int_0^1 \phi_1^* \hat{p} \phi_2dq = \int_0^1 (\hat{p}\phi_1)^* \phi_2 dq
$$
for any two functions of the subset.
The momentum operator $\hat{p} = -i\hbar \partial/\partial q$ on $(0,1)$ is symmetric only on a subset of the eigenfunctions $e^{ipq/\hbar}$ that obey a favorable boundary condition (with the right values of $p$; see Ruslan's answer). This subset of eigenfunctions is orthogonal and forms a basis of the corresponding subspace.
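The non-orthogonality computed in the question for the family $e^{i\pi n q}$ is easy to confirm numerically; a small sketch (my own check, not part of the original answer):

```python
import numpy as np

def inner(n, m, num=200001):
    # <p_n|p_m> = integral over [0,1] of e^{-i pi n q} e^{i pi m q} dq (trapezoid rule)
    q = np.linspace(0.0, 1.0, num)
    f = np.exp(1j * np.pi * (m - n) * q)
    dq = q[1] - q[0]
    return np.sum((f[:-1] + f[1:]) / 2) * dq

def closed_form(n, m):
    if n == m:
        return 1.0 + 0.0j
    return ((-1) ** (m - n) - 1) / (1j * np.pi * (m - n))

# |<p_1|p_2>| = 2/pi, clearly nonzero: these states are not orthogonal
print(abs(inner(1, 2)))
```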
For most of the eigenfunctions $e^{iqp/\hbar}$, however, the operator $\hat{p}$ is not symmetric and there is no orthogonality. | {
"domain": "physics.stackexchange",
"id": 19931,
"tags": "quantum-mechanics, hamiltonian-formalism, potential-energy, fourier-transform, discrete"
} |
MMSE - How to minimize a complex error with respect to a set of real parameters | Question: Suppose there's a complex signal $X(k)$ (where $k \in \{0, 1, 2,...,N - 1\}$) corrupted by additive complex noise. Its estimate $\hat{X}(k)$ is a linear combination of a set of real parameters $A_r$ ($r \in \{1, 2, 3,..., R$})
$$\hat{X}(k) = \sum_{r=1}^R A_rZ_r(k)$$
where $Z_r(k)$ is complex (and known).
I wish to obtain the real values of $A_r$ for which the error is minimized. If I simply differentiate the mean squared error (MSE) $\frac{1}{N}\sum_{k = 0}^{N-1}|X(k) - \hat{X}(k)|^2$ with respect to each $A_r$ and set the resulting derivatives equal to zero, the values of $A_r$ I'll obtain will be complex, so that's not a solution.
My question is: how do I obtain the optimal values of $A_r$ such that the MSE is minimized, under the constraint that each $A_r$ should be real?
Answer: A very common approach is to consider $\mathbf{X}$ and $\mathbf{\hat{X}}$ as elements of the vector space $\mathbb{C}^N$, and to measure the distance between the two vectors with a norm; norms are real, non-negative, and satisfy the triangle inequality.
So using vector-matrix notation with the vectors as column vectors:
$$ \text{error}^2 =(\mathbf{X}-\mathbf{\hat{X}})^H
(\mathbf{X}-\mathbf{\hat{X}})
$$
where $H$ is conjugate transpose.
Constraint is satisfied when imaginary part of $A_r$ is zero.
$$
\frac{1}{2}(A_r - A_r^*)=0
$$
which is appended to the objective as a lagrange multiplier(s)
$$
\text{error}^2+\sum \lambda_r \frac{1}{2}(A_r - A_r^*)
$$
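Carrying the vector formulation through (setting the gradient with respect to the real $A_r$ to zero) gives the normal equations $\operatorname{Re}(Z^H Z)\,\mathbf{a} = \operatorname{Re}(Z^H \mathbf{X})$ for the real-constrained least-squares solution. A small numerical sketch of that result (my own illustration; the sizes and values are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
N, R = 64, 3
Z = rng.standard_normal((N, R)) + 1j * rng.standard_normal((N, R))
A_true = np.array([1.5, -0.7, 2.0])                 # real parameters to recover
noise = 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
X = Z @ A_true + noise

# Real-constrained least squares: Re(Z^H Z) a = Re(Z^H X)
G = (Z.conj().T @ Z).real
b = (Z.conj().T @ X).real
A_hat = np.linalg.solve(G, b)
print(A_hat)    # close to [1.5, -0.7, 2.0], and exactly real by construction
```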
These sorts of problems are easier when you use Brandwood derivatives,
D. H. Brandwood, "A complex gradient operator and its application in adaptive array theory," in Communications, Radar and Signal Processing, IEE Proceedings F, vol. 130, no. 1, pp. 11-16, February 1983.
doi: 10.1049/ip-f-1.1983.0003
Abstract: The problem of minimising a real scalar quantity (for example array output power, or mean square error) as a function of a complex vector (the set of weights) frequently arises in adaptive array theory. A complex gradient operator is defined in the paper for this purpose and its use justified. Three examples of its application to array theory problems are given.
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=4645581&isnumber=4645575
This is the convention used in
Van Trees, Harry L. Optimum array processing: Part IV of detection, estimation and modulation theory. Vol. 1. New York, NY, USA: John Wiley & Sons, 2002. | {
"domain": "dsp.stackexchange",
"id": 5716,
"tags": "dft, estimation, complex, parameter-estimation"
} |
"rosmake teleop_base" couldn't find dependency [joy] of [teleop_base] | Question:
Hi,
I am using diamondback-full-desktop ROS version on Ubuntu 10.10. I was trying to run this command
$rosmake teleop_base
and it says that [rospack] couldn't find dependency [joy] of [teleop_base].
Is not [joy] a part of diamondback-full-desktop or do I need to install it separately? [I have control_toolbox -which is one dependency of teleop_base]
Thanks
VN
Originally posted by VN on ROS Answers with karma: 373 on 2011-07-15
Post score: 2
Answer:
joy is part of the joystick_drivers stack. I don't think it's part of the ros-diamondback-desktop-full metapackage.
You can install it from binaries using
sudo apt-get install ros-diamondback-joystick-drivers
Originally posted by Ivan Dryanovski with karma: 4954 on 2011-07-15
This answer was ACCEPTED on the original site
Post score: 6 | {
"domain": "robotics.stackexchange",
"id": 6154,
"tags": "ros, teleop-base"
} |